# Land Registration in Scotland workshop
This is a workshop about land registration, and specifically about land registration in Scotland, a country with a long history of it.
## The General Register of Sasines - 1617
[The General Register of Sasines](https://www.ros.gov.uk/services/registration/sasine-register) - also known as the sasine register - is the oldest national land register in the world, dating back to 1617. Its name comes from the old French word 'seizer', which means 'take'.
The sasine register is a chronological list of land deeds, which contain **written descriptions** of properties. It is gradually being replaced by the **map-based** land register.
## The Land Register of Scotland - 1979 Act (in force from 1981) to 2014
The [land register](https://www.ros.gov.uk/services/registration/land-register) is the primary register managed by [Registers of Scotland](https://www.ros.gov.uk/). Introduced in 1981, it's a register of who owns land and property in Scotland.
The land register (1979) was based on the Ordnance Survey map, and includes plans of registered land. Every plot of land on the register has a title sheet, which is guaranteed by the state. The title sheet defines the extent of the plot of land on a map and gives details of:
* current owners
* price
* mortgage details
* conditions affecting the property
## The Land Register of Scotland - 2012 Act (in force from 2014) to present
The [Land Registration etc (Scotland) Act 2012](http://www.legislation.gov.uk/asp/2012/5/introduction/enacted) came into force on 8 December 2014. The 2012 Act followed on from, and developed, the recommendations made by the Scottish Law Commission in their [report on land registration published in February 2010](http://www.scotlawcom.gov.uk/files/1112/7979/8376/rep222v1.pdf). Reid and Gretton, who led that work, also published the book '[Land Registration](http://www.avizandum.co.uk/content/land-registration)' (there is a digital copy in the RoS library). The 2012 Act put in place a new scheme of land registration. Its main purpose was to reform and restate the law on the registration of rights to land in the Land Register of Scotland. The act achieved this by repealing much of the previous land registration statute: the Land Registration (Scotland) Act 1979 and the Land Registration (Scotland) Rules 2006 made under that act.
The 2012 Act realigned the law of land registration with property law. It also put on a statutory footing many of the policies and practices the keeper had developed since the introduction of the Land Register in 1981. The 2012 Act introduced new concepts, such as advance notices, and new rules that govern how the keeper registers deeds and makes up the register.
The changes introduced by the 2012 Act are profound and have an impact on the two principal back-end data systems:
* [Digital Mapping System](http://netconfluence1.core.rosdev.org.uk/display/AR/DMS+-+Digital+Mapping+System)
* The requirement to demonstrate no overlapping *ownership in land* means that DMS is no longer *fit for purpose*
* [Land Registration System](http://netconfluence1.core.rosdev.org.uk/display/AR/LRS+-+Land+Registration+System)
* Manages and represents 'legal settle'
* It has been called a **string factory**, in that it does not store data directly, but structures information using syntax and grammar rules so that sentences and paragraphs can be generated (a toy illustration follows this list)
* The **string factory** approach makes data extraction complex and means that it is difficult to either:
* productise the land register
* provide intelligence to support policy, business or operational decisions.
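To make the critique concrete, here is a purely illustrative Python sketch (not RoS code; the entry format and field names are invented) of why a 'string factory' style of storage is awkward to query: the facts exist only inside generated prose, so extracting them back out means parsing text rather than reading fields.
```
import re

# hypothetical 'string factory': facts are only available as generated sentences
def generate_entry(title_number, owner, price):
    return f"Title {title_number}: the subjects are owned by {owner}, last sold for £{price}."

entry = generate_entry("ANG12345", "Jane Smith", 250000)

# extracting data back out requires fragile text parsing...
owner = re.search(r"owned by (.+?), last sold", entry).group(1)

# ...whereas a structured record can simply be read
structured = {"title_number": "ANG12345", "owner": "Jane Smith", "price": 250000}
print(owner, structured["owner"])
```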
# The 'new model'
The new model will reflect changes to the Land Register demanded by the [Land Registration etc. (Scotland) Act 2012](http://www.legislation.gov.uk/asp/2012/5/introduction/enacted).
RoS has the following responsibilities:
* On an ongoing basis, RoS has to operate legally within the limits defined by the 2012 Act (LR_Act_2012: Land Registration etc. (Scotland) Act 2012)
* The act has no mandated retrospective impact on records currently in the system - the new 'to be' system must be able to function with 'to be' and 'as is' records.
* However, Schedule 4 gives the Keeper powers to update records.
The goal is to develop the physical model of the Land Register that reflects these responsibilities. The aspiration is to align this model, wherever possible, to the [Land Administration Domain Model standard as described in ISO 19152](https://www.iso.org/standard/51206.html). [Land Registration](http://www.avizandum.co.uk/content/land-registration) (Reid and Gretton, 2017) provides useful supporting context that explains some of the reasoning behind the legislation. There is also significant content in the currently materialised LRS and DMS data models.
The problem the modelling faces is the following:
* Ensuring legal compliance,
* while simplifying for the future,
* without losing legacy functions and
* aligning to standards
![The process of Land Registration](figures/example.png)
This workshop helps describe this journey.
# Who Is This Workshop For?
Add some text here
# Outline of the Workshop
Each section of this workshop ......
# Using Code Examples
This presentation is interactive. The expectation is that you will run the code associated with the notebooks.
## Installation Considerations
Installing Python and the suite of libraries that enable scientific computing is straightforward. This section will outline some of the considerations when setting up your computer.
Though there are various ways to install Python, the one I would suggest for use in data science is the Anaconda distribution, which works similarly whether you use Windows, Linux, or Mac OS X.
The Anaconda distribution comes in two flavors:
- [Miniconda](http://conda.pydata.org/miniconda.html) gives you the Python interpreter itself, along with a command-line tool called ``conda`` which operates as a cross-platform package manager geared toward Python packages, similar in spirit to the apt or yum tools that Linux users might be familiar with.
- [Anaconda](https://www.continuum.io/downloads) includes both Python and conda, and additionally bundles a suite of other pre-installed packages geared toward scientific computing. Because of the size of this bundle, expect the installation to consume several gigabytes of disk space.
Any of the packages included with Anaconda can also be installed manually on top of Miniconda; for this reason I suggest starting with Miniconda.
To get started, download and install the Miniconda package–make sure to choose a version with Python 3–and then install the core packages used in this book:
```
[~]$ conda install numpy pandas scikit-learn matplotlib seaborn jupyter
```
Throughout the text, we will also make use of other more specialized tools in Python's scientific ecosystem; installation is usually as easy as typing **``conda install packagename``**.
For more information on conda, including information about creating and using conda environments (which I would *highly* recommend), refer to [conda's online documentation](http://conda.pydata.org/docs/).
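As a minimal illustration (the environment name and package versions here are arbitrary), creating, activating, and populating a dedicated environment looks like this:
```
[~]$ conda create -n workshop python=3.9
[~]$ conda activate workshop
[~]$ conda install numpy pandas jupyter
```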
# Ensemble methods. Exercises
In this section we have only two exercises:
1. Find the best three classifiers in the stacking method using the classifiers from the scikit-learn package.
2. Build the arcing (arc-x4) method.
```
%store -r data_set
%store -r labels
%store -r test_data_set
%store -r test_labels
%store -r unique_labels
```
## Exercise 1: Find the best three classifiers in the stacking method
Please use the following classifiers:
* Linear regression,
* Nearest Neighbors,
* Linear SVM,
* Decision Tree,
* Naive Bayes,
* QDA.
```
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
classifier_classes = {KNeighborsClassifier(),
LinearRegression(),
QuadraticDiscriminantAnalysis(),
SVC(),
GaussianNB()}
from itertools import combinations

# combinations() already yields every possible triplet of the base classifiers above
# (the decision tree from the task list is used below as the stacking meta-classifier)
classifier_classes_triplets_set = set()
for c_out in combinations(classifier_classes, r=3):
    classifier_classes_triplets_set.add(tuple(c_out))

classifier_classes_triplets = list(classifier_classes_triplets_set)
def build_classifiers(classifiers):
    for classifier in classifiers:
        classifier.fit(data_set, labels)
    return classifiers

def build_stacked_classifier(classifiers):
    output = []
    for classifier in classifiers:
        output.append(classifier.predict(data_set))
    # transpose so that each row holds the three base-classifier predictions for one sample
    output = np.array(output).T
    # stacked (meta) classifier part:
    stacked_classifier = DecisionTreeClassifier() # set here
    stacked_classifier.fit(output, labels.reshape((len(labels),)))
    test_set = []
    for classifier in classifiers:
        test_set.append(classifier.predict(test_data_set))
    test_set = np.array(test_set).T
    predicted = stacked_classifier.predict(test_set)
    return predicted

classifiers_with_accuracy = []
for classifier_tuple in classifier_classes_triplets:
    classifiers = build_classifiers(classifier_tuple)
    predicted = build_stacked_classifier(classifiers)
    accuracy = accuracy_score(test_labels, predicted)
    classifiers_with_accuracy.append((classifiers, accuracy))
print(classifiers_with_accuracy)
print(max(classifiers_with_accuracy, key=lambda x: x[1]))
```
## Exercise 2:
Use the boosting method and change the code to fulfil the following requirements:
* the weights should be calculated as:
$w_{n}^{(t+1)}=\frac{1+ I(y_{n}\neq h_{t}(x_{n}))}{\sum_{i=1}^{N}\left(1+I(y_{i}\neq h_{t}(x_{i}))\right)},$
* the prediction is done with a voting method (a small numeric check of the weight formula follows this list).
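As a quick sanity check of the weight formula (illustrative numbers only): with $N=4$ samples and misclassification indicator $I=(0,1,0,1)$, the unnormalised weights are $(1,2,1,2)$, which sum to 6, so the new weights are $(1/6,1/3,1/6,1/3)$.
```
import numpy as np

# illustrative indicator vector: samples 1 and 3 were misclassified by h_t
I = np.array([0, 1, 0, 1])
new_weights = (1 + I) / np.sum(1 + I)
print(new_weights)  # [0.1667 0.3333 0.1667 0.3333]
```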
```
import numpy as np
from sklearn.tree import DecisionTreeClassifier
# prepare data set
def generate_data(sample_number, feature_number, label_number):
data_set = np.random.random_sample((sample_number, feature_number))
labels = np.random.choice(label_number, sample_number)
return data_set, labels
labels = 2
dimension = 2
test_set_size = 1000
train_set_size = 5000
train_set, train_labels = generate_data(train_set_size, dimension, labels)
test_set, test_labels = generate_data(test_set_size, dimension, labels)
# init weights
number_of_iterations = 10
weights = np.ones((test_set_size,)) / test_set_size
def train_model(classifier, weights):
return classifier.fit(X=test_set, y=test_labels, sample_weight=weights)
def calculate_accuracy_vector(predicted, labels):
result = []
for i in range(len(predicted)):
if predicted[i] == labels[i]:
result.append(0)
else:
result.append(1)
return result
def calculate_error(model):
predicted = model.predict(test_set)
I=calculate_accuracy_vector(predicted, test_labels)
Z=np.sum(I)
return (1+Z)/1.0
```
Fill in the weight-update function below (the prediction function will be filled in later):
```
def set_new_weights(model):
new_weights = [a + 1 for a in calculate_accuracy_vector(model.predict(test_set), test_labels)]
sumW = np.sum(new_weights)
return new_weights / sumW
```
Train the classifier with the code below:
```
from sklearn.base import clone

classifier = DecisionTreeClassifier(max_depth=1, random_state=1)
classifier.fit(X=train_set, y=train_labels)
alphas = []
classifiers = []
for iteration in range(number_of_iterations):
    # clone the stump so that every boosting round keeps its own fitted model for the vote
    model = train_model(clone(classifier), weights)
    weights = set_new_weights(model)
    classifiers.append(model)
print(weights)
```
Set the validation data set:
```
validate_x, validate_label = generate_data(1, dimension, labels)
```
Fill the prediction code:
```
from collections import defaultdict

def get_prediction(x):
    # majority vote over the boosted classifiers
    votes = defaultdict(int)
    for cl in classifiers:
        votes[cl.predict(x)[0]] += 1
    # return the (label, count) pair with the most votes
    return max(votes.items(), key=lambda item: item[1])
```
Test it:
```
prediction = get_prediction(validate_x)[0]
print(prediction)
```
```
##%overwritefile
##%file:src/cargocommand.py
##%file:../../jupyter-MyRust-kernel/jupyter_MyRust_kernel/plugins/cargocommand.py
##%noruncode
from typing import Dict, Tuple, Sequence,List
from plugins.ISpecialID import IStag,IDtag,IBtag,ITag
import os
import re
class MyCargocmd(IStag):
kobj=None
def getName(self) -> str:
# self.kobj._write_to_stdout("setKernelobj setKernelobj setKernelobj\n")
return 'MyCargocmd'
def getAuthor(self) -> str:
return 'Author'
def getIntroduction(self) -> str:
return 'MyCargocmd'
def getPriority(self)->int:
return 0
def getExcludeID(self)->List[str]:
return []
def getIDSptag(self) -> List[str]:
return ['cargo']
def setKernelobj(self,obj):
self.kobj=obj
# self.kobj._write_to_stdout("setKernelobj setKernelobj setKernelobj\n")
return
def on_shutdown(self, restart):
return
def on_ISpCodescanning(self,key, value,magics,line) -> str:
# self.kobj._write_to_stdout(line+" on_ISpCodescanning\n")
self.kobj.addkey2dict(magics,'cargo')
return self.commandhander(self,key, value,magics,line)
## called when the code is scanned, before preprocessing
def on_Codescanning(self,magics,code)->Tuple[bool,str]:
pass
return False,code
## called when the source file is generated
def on_before_buildfile(self,code,magics)->Tuple[bool,str]:
return False,''
def on_after_buildfile(self,returncode,srcfile,magics)->bool:
return False
def on_before_compile(self,code,magics)->Tuple[bool,str]:
return False,''
def on_after_compile(self,returncode,binfile,magics)->bool:
return False
def on_before_exec(self,code,magics)->Tuple[bool,str]:
return False,''
def on_after_exec(self,returncode,srcfile,magics)->bool:
return False
def on_after_completion(self,returncode,execfile,magics)->bool:
return False
def commandhander(self,key, value,magics,line):
cmds=[]
for argument in re.findall(r'(?:[^\s,"]|"(?:\\.|[^"])*")+', value.strip()):
cmds += [argument.strip('"')]
magics['cargo']=cmds
if len(magics['cargo'])>0:
self.do_cargo_command(self,magics['cargo'],magics=magics)
return ''
def do_cargo_command(self,commands=None,cwd=None,magics=None):
try:
# self.kobj._logln("do_npm_command......")
npmcmd=['cargo']
if(self.kobj.sys=="Windows"):
npmcmd=['cmd','/c','cargo']
p = self.kobj.create_jupyter_subprocess(npmcmd+commands,cwd=cwd,shell=False,magics=magics)
self.kobj.g_rtsps[str(p.pid)]=p
if magics!=None and len(self.kobj.addkey2dict(magics,'showpid'))>0:
self.kobj._logln("The process PID:"+str(p.pid))
returncode=p.wait_end(magics)
del self.kobj.g_rtsps[str(p.pid)]
if returncode != 0:
self.kobj._logln("Executable exited with code {}".format(returncode),3)
else:
self.kobj._logln("Info:cargo command success.")
except Exception as e:
self.kobj._logln("do_cargo_command err:"+str(e))
raise
return
```
<small><i>This notebook was put together by [Jake Vanderplas](http://www.vanderplas.com). Source and license info is on [GitHub](https://github.com/jakevdp/sklearn_tutorial/).</i></small>
```
! git clone https://github.com/data-psl/lectures2021
import sys
sys.path.append('lectures2021/notebooks/02_sklearn')
%cd 'lectures2021/notebooks/02_sklearn'
```
# Density Estimation: Gaussian Mixture Models
Here we'll explore **Gaussian Mixture Models**, which is an unsupervised clustering & density estimation technique.
We'll start with our standard set of initial imports
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
```
## Introducing Gaussian Mixture Models
We previously saw an example of K-means, a clustering algorithm that is most often fit using an expectation-maximization approach.
Here we'll consider an extension to this which is suitable for both **clustering** and **density estimation**.
For example, imagine we have some one-dimensional data in a particular distribution:
```
np.random.seed(2)
x = np.concatenate([np.random.normal(0, 2, 2000),
np.random.normal(5, 5, 2000),
np.random.normal(3, 0.5, 600)])
plt.hist(x, 80, density=True)
plt.xlim(-10, 20);
```
Gaussian mixture models will allow us to approximate this density:
```
from sklearn.mixture import GaussianMixture as GMM
X = x[:, np.newaxis]
clf = GMM(4, max_iter=500, random_state=3).fit(X)
xpdf = np.linspace(-10, 20, 1000)
density = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-r')
plt.xlim(-10, 20);
```
Note that this density is fit using a **mixture of Gaussians**, which we can examine by looking at the ``means_``, ``covars_``, and ``weights_`` attributes:
```
clf.means_
clf.covariances_
clf.weights_
plt.hist(x, 80, density=True, alpha=0.3)
plt.plot(xpdf, density, '-r')
for i in range(clf.n_components):
pdf = clf.weights_[i] * stats.norm(clf.means_[i, 0],
np.sqrt(clf.covariances_[i, 0])).pdf(xpdf)
plt.fill(xpdf, pdf, facecolor='gray',
edgecolor='none', alpha=0.3)
plt.xlim(-10, 20);
```
These individual Gaussian distributions are fit using an expectation-maximization method, much as in K-means, except that rather than an explicit cluster assignment, the **posterior probability** of each component for each point is used to compute the weighted mean and covariance.
Somewhat surprisingly, this algorithm **provably** converges to an optimum (though the optimum is not necessarily global); a minimal sketch of one EM iteration is shown below.
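To make this concrete, here is a minimal, illustrative sketch of a single EM iteration for a one-dimensional, two-component mixture (this is not scikit-learn's internal implementation, and the starting parameter values are arbitrary):
```
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 2, 200), rng.normal(5, 1, 200)])

# current parameter guesses for a 2-component mixture
weights = np.array([0.5, 0.5])
means = np.array([-1.0, 4.0])
stds = np.array([1.0, 1.0])

# E-step: responsibilities = posterior probability of each component for each point
likelihoods = np.array([w * norm(m, s).pdf(data) for w, m, s in zip(weights, means, stds)])
resp = likelihoods / likelihoods.sum(axis=0)

# M-step: weighted means, standard deviations, and mixing weights
Nk = resp.sum(axis=1)
means = (resp * data).sum(axis=1) / Nk
stds = np.sqrt((resp * (data - means[:, None]) ** 2).sum(axis=1) / Nk)
weights = Nk / len(data)
print(means, stds, weights)
```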
## How many Gaussians?
Given a model, we can use one of several means to evaluate how well it fits the data.
For example, there is the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC).
```
print(clf.bic(X))
print(clf.aic(X))
```
Let's take a look at these as a function of the number of gaussians:
```
n_estimators = np.arange(1, 10)
clfs = [GMM(n, max_iter=1000).fit(X) for n in n_estimators]
bics = [clf.bic(X) for clf in clfs]
aics = [clf.aic(X) for clf in clfs]
plt.plot(n_estimators, bics, label='BIC')
plt.plot(n_estimators, aics, label='AIC')
plt.legend();
```
It appears that for both the AIC and BIC, 4 components is preferred.
## Example: GMM For Outlier Detection
GMM is what's known as a **Generative Model**: it's a probabilistic model from which a dataset can be generated.
One thing that generative models can be useful for is **outlier detection**: we can simply evaluate the likelihood of each point under the generative model; the points with a suitably low likelihood (where "suitable" is up to your own bias/variance preference) can be labeled outliers.
Let's take a look at this by defining a new dataset with some outliers:
```
np.random.seed(0)
# Add 20 outliers
true_outliers = np.sort(np.random.randint(0, len(x), 20))
y = x.copy()
y[true_outliers] += 50 * np.random.randn(20)
clf = GMM(4, max_iter=500, random_state=0).fit(y[:, np.newaxis])
xpdf = np.linspace(-10, 20, 1000)
density_noise = np.array([np.exp(clf.score([[xp]])) for xp in xpdf])
plt.hist(y, 80, density=True, alpha=0.5)
plt.plot(xpdf, density_noise, '-r')
plt.xlim(-15, 30);
```
Now let's evaluate the log-likelihood of each point under the model, and plot these as a function of ``y``:
```
log_likelihood = np.array([clf.score_samples([[yy]]) for yy in y])
# log_likelihood = clf.score_samples(y[:, np.newaxis])[0]
plt.plot(y, log_likelihood, '.k');
detected_outliers = np.where(log_likelihood < -9)[0]
print("true outliers:")
print(true_outliers)
print("\ndetected outliers:")
print(detected_outliers)
```
The algorithm misses a few of these points, which is to be expected (some of the "outliers" actually land in the middle of the distribution!)
Here are the outliers that were missed:
```
set(true_outliers) - set(detected_outliers)
```
And here are the non-outliers which were spuriously labeled outliers:
```
set(detected_outliers) - set(true_outliers)
```
Finally, we should note that although all of the above is done in one dimension, GMM does generalize to multiple dimensions, as we'll see in the breakout session.
## Other Density Estimators
The other main density estimator that you might find useful is *Kernel Density Estimation*, which is available via ``sklearn.neighbors.KernelDensity``. In some ways, this can be thought of as a generalization of GMM where there is a gaussian placed at the location of *every* training point!
```
from sklearn.neighbors import KernelDensity
kde = KernelDensity(bandwidth=0.15).fit(x[:, None])  # pass the bandwidth by keyword (keyword-only in recent scikit-learn)
density_kde = np.exp(kde.score_samples(xpdf[:, None]))
plt.hist(x, 80, density=True, alpha=0.5)
plt.plot(xpdf, density, '-b', label='GMM')
plt.plot(xpdf, density_kde, '-r', label='KDE')
plt.xlim(-10, 20)
plt.legend();
```
All of these density estimators can be viewed as **generative models** of the data: that is, the model tells us how more data can be created which fits the model.
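As a small illustration of this generative view (assuming the `clf` and `kde` objects fitted above are still in scope), both estimators expose a `sample` method for drawing new points:
```
# draw new synthetic observations from each fitted density estimator
new_from_gmm, _ = clf.sample(5)   # GaussianMixture.sample returns (samples, component_labels)
new_from_kde = kde.sample(5)
print(new_from_gmm.ravel())
print(new_from_kde.ravel())
```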
# Roundtrip example
This notebook shows how to load a string literal tree into Roundtrip, interact with the tree, and retrieve query information based on the selection on the interactive tree.
```
import hatchet as ht
if __name__ == "__main__":
smallStr = [
{
"name": "foo",
"metrics": {"time (inc)": 130.0, "time": 0.0},
"children": [
{
"name": "bar",
"metrics": {"time (inc)": 20.0, "time": 5.0},
"children": [
{
"name": "baz",
"metrics": {"time (inc)": 5.0, "time": 5.0},
},
{
"name": "grault",
"metrics": {"time (inc)": 10.0, "time": 10.0},
},
],
},
{
"name": "qux",
"metrics": {"time (inc)": 60.0, "time": 0.0},
"children": [
{
"name": "quux",
"metrics": {"time (inc)": 60.0, "time": 5.0},
"children": [
{
"name": "corge",
"metrics": {"time (inc)": 55.0, "time": 10.0},
"children": [
{
"name": "bar",
"metrics": {
"time (inc)": 20.0,
"time": 5.0,
},
"children": [
{
"name": "baz",
"metrics": {
"time (inc)": 5.0,
"time": 5.0,
},
},
{
"name": "grault",
"metrics": {
"time (inc)": 10.0,
"time": 10.0,
},
},
],
},
{
"name": "grault",
"metrics": {
"time (inc)": 10.0,
"time": 10.0,
},
},
{
"name": "garply",
"metrics": {
"time (inc)": 15.0,
"time": 15.0,
},
},
],
}
],
}
],
},
],
}
]
treeStr = [
{
"name": "foo",
"metrics": {"time (inc)": 130.0, "time": 0.0},
"children": [
{
"name": "bar",
"metrics": {"time (inc)": 20.0, "time": 5.0},
"children": [
{
"name": "baz",
"metrics": {"time (inc)": 5.0, "time": 5.0},
},
{
"name": "grault",
"metrics": {"time (inc)": 10.0, "time": 10.0},
},
],
},
{
"name": "qux",
"metrics": {"time (inc)": 60.0, "time": 0.0},
"children": [
{
"name": "quux",
"metrics": {"time (inc)": 60.0, "time": 5.0},
"children": [
{
"name": "corge",
"metrics": {"time (inc)": 55.0, "time": 10.0},
"children": [
{
"name": "bar",
"metrics": {
"time (inc)": 20.0,
"time": 5.0,
},
"children": [
{
"name": "baz",
"metrics": {
"time (inc)": 5.0,
"time": 5.0,
},
},
{
"name": "grault",
"metrics": {
"time (inc)": 10.0,
"time": 10.0,
},
},
],
},
{
"name": "grault",
"metrics": {
"time (inc)": 10.0,
"time": 10.0,
},
},
{
"name": "garply",
"metrics": {
"time (inc)": 15.0,
"time": 15.0,
},
},
],
}
],
}
],
},
{
"name": "waldo",
"metrics": {"time (inc)": 50.0, "time": 0.0},
"children": [
{
"name": "fred",
"metrics": {"time (inc)": 35.0, "time": 5.0},
"children": [
{
"name": "plugh",
"metrics": {"time (inc)": 5.0, "time": 5.0},
},
{
"name": "xyzzy",
"metrics": {"time (inc)": 25.0, "time": 5.0},
"children": [
{
"name": "thud",
"metrics": {
"time (inc)": 25.0,
"time": 5.0,
},
"children": [
{
"name": "baz",
"metrics": {
"time (inc)": 5.0,
"time": 5.0,
},
},
{
"name": "garply",
"metrics": {
"time (inc)": 15.0,
"time": 15.0,
},
},
],
}
],
},
],
},
{
"name": "garply",
"metrics": {"time (inc)": 15.0, "time": 15.0},
},
],
},
],
},
{
"name": "ほげ (hoge)",
"metrics": {"time (inc)": 30.0, "time": 0.0},
"children": [
{
"name": "(ぴよ (piyo)",
"metrics": {"time (inc)": 15.0, "time": 5.0},
"children": [
{
"name": "ふが (fuga)",
"metrics": {"time (inc)": 5.0, "time": 5.0},
},
{
"name": "ほげら (hogera)",
"metrics": {"time (inc)": 5.0, "time": 5.0},
},
],
},
{
"name": "ほげほげ (hogehoge)",
"metrics": {"time (inc)": 15.0, "time": 15.0},
},
],
},
]
gf = ht.GraphFrame.from_literal(treeStr)
print(gf.dataframe)
print(gf.tree())
```
## Step 1: load Roundtrip
Here we load Roundtrip, the Python code that acts as the go-between for the visualization JavaScript and the Python code in the notebook.
```
%load_ext roundtrip
```
## Step 2: load the visualization
Next we load the custom visualization from `interactiveTree.js`. The parameters below are:
- `%loadVisualization`: Roundtrip function to initialize the visualization
- `myTree`: vis variable name that we created (can be anything)
- `interactiveTree.js`: the JavaScript file to make the tree
- `%smallStr`: the Python variable from cell 1 that holds the tree string literal (`%treeStr` also works; it displays 2 trees)
After the cell is executed, the tree appears. Clicking on a node will cause its metadata to be displayed at the top of the visualization (by the "Colors" button). Double-clicking on a node will cause the subtree to collapse.
To select a single node, click on it then execute the next cell (`%fetchData`) to retrieve its data. To select many nodes, click the button "Select nodes" to activate the brush (to turn off the brush, click "Select nodes" again).
```
%loadVisualization myTree interactiveTree.js %treeStr
```
## Step 3: retrieve our selection
To retrieve the data we have selected in the tree, we use the Roundtrip function `fetchData`.
The parameters below are:
- `%fetchData`: the Roundtrip function to pass the selection from JavaScript to Python
- `myTree`: the variable name of our visualization (see the `%loadVisualization` cell)
- `pyNode`: the Python variable name we will use to store our selection
- `jsNodeSelected`: the variable from the JavaScript that has the current selection
```
%fetchData (myTree, pyNode, jsNodeSelected)
print(pyNode)
```
Test the query by copy-pasting the output query above into the cells below.
```
query = [
{"name": "baz"},
"*",
{"name": "grault"}
]
sgf = gf.filter(query)
print(query)
print(sgf.tree(color=True, metric="time (inc)"))
```
While taking the **Intro to Deep Learning with PyTorch** course by Udacity, I really liked an exercise that was based on building a character-level language model using LSTMs. I was unable to complete it all on my own since NLP is still a very new field to me. I decided to give the exercise a try with `tensorflow 2.0`, and because of the ease of use you get in `keras`, I could develop a very simple LSTM-based language model able to predict a single character given a sequence of characters.
The exercise uses the **Anna Karenina** novel written by Leo Tolstoy as its data. I used a small subset of it in this notebook, though.
```
!pip install tensorflow-gpu==2.0.0-beta1
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
print(tf.__version__)
```
I start by loading the novel.
```
# Open text file and read in data as `text`
with open('anna.txt', 'r') as f:
text = f.read()
# First hundred characters
text[:100]
```
The text will start to look ugly now :(
```
# Strip all the new lines
tokens = text.split()
text_without_nlines = ' '.join(tokens)
```
I will be using LSTMs to develop the language model. The input sequences need to be given in one-hot-encoded form. Each input sequence will be 50 characters with one output character, making each sequence 51 characters long.
We can create the sequences by enumerating the characters in the text, starting at the 51st character, at index 50.
```
# Prepare the sequences for the model
length = 50
sequences = []
for i in range(length, len(text_without_nlines)):
# Select sequence of tokens
seq = text_without_nlines[i-length:i+1]
sequences.append(seq)
print('Total Sequences: {}'.format(len(sequences)))
# Save these sequences for later use
filename = 'char_sequences.txt'
data = '\n'.join(sequences)
file = open(filename, 'w')
file.write(data)
file.close()
print('File saved!')
# Preview
!head -5 char_sequences.txt
# Load up the data
sequences_from_file = open('char_sequences.txt')
text = sequences_from_file.read()
lines = text.split('\n')
# Cause computers understand only numbers
# Assigning each character a unique integer
# Character -> Integer
chars = sorted(list(set(text)))
mapping = dict((c, i) for i, c in enumerate(chars))
# Convert the sequences to integer encodings
int_sequences = []
for line in lines:
encoded_seq = [mapping[char] for char in line]
int_sequences.append(encoded_seq)
# How big is the corpus?
vocab_size = len(mapping)
print('Vocabulary size', vocab_size)
# X -> y mapping of input sequence in this form
int_sequences = np.array(int_sequences)
X, y = int_sequences[:,:-1], int_sequences[:,-1]
```
I will be using a very small subset of data.
```
X[:10000].shape, y[:10000].shape
```
The characters have to be one-hot-encoded before they are fed to the language model. One-hot encoding keeps the input representation concise, but when the input feature space is very large, character embeddings should be used instead (a sketch of an embedding-based variant follows the model below).
```
one_hot_sequences = [tf.keras.utils.to_categorical(x, num_classes=vocab_size) for x in X[:10000]]
X = np.array(one_hot_sequences)
y = tf.keras.utils.to_categorical(y[:10000], num_classes=vocab_size)
# Mini language model :)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(tf.keras.layers.Dense(vocab_size, activation='softmax'))
print(model.summary())
```
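As an aside on the embedding alternative mentioned above, here is a minimal sketch of what such a variant could look like (the embedding size is arbitrary, and `X_int`/`y_int` are hypothetical integer-encoded inputs and targets, not variables defined in this notebook):
```
# hypothetical embedding-based variant: feed integer sequences and learn dense character vectors
embed_model = tf.keras.models.Sequential()
embed_model.add(tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=16, input_length=50))
embed_model.add(tf.keras.layers.LSTM(256))
embed_model.add(tf.keras.layers.Dense(vocab_size, activation='softmax'))
embed_model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# embed_model.fit(X_int, y_int, epochs=...)  # X_int: integer sequences, y_int: integer targets
print(embed_model.summary())
```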
There can be a problem of exploding gradients and to prevent that I am going to specify the `clipnorm` term in the optimizer.
```
adam = Adam(lr=.001, clipnorm=0.5)
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
model.fit(X, y, epochs=200, verbose=2)
```
The training loss keeps decreasing and the accuracy keeps increasing. This is a good sign.
Now that the model is trained, we can employ it to generate characters from given sequences of characters. To do this, the model requires its inputs to be exactly the shape it was trained with; if we give an input sequence that does not *exactly* match the training input shape, we will get errors.
We will use the `pad_sequences()` function, which truncates characters from the front of the test input sequences and pads with extra characters (zeros, essentially) if needed. We will define a small helper function for generating characters of a user-specified length. The user has to provide some initial text to the model, though.
```
def generate_seq(model, mapping, seq_length, init_text, n_chars):
in_text = init_text
# Generate a fixed number of characters
for _ in range(n_chars):
# Encode to integers
encoded = [mapping[char] for char in in_text]
# Map sequences to a fixed length
encoded = pad_sequences([encoded], maxlen=seq_length, padding='pre', truncating='pre')
# print(encoded.shape)
# One-hot encode
encoded = tf.keras.utils.to_categorical(encoded, num_classes=vocab_size)
# print(encoded.shape)
# Predict character
yhat = model.predict_classes(encoded, verbose=0)
# Integer -> Character
out_char = ''
for char, index in mapping.items():
if index == yhat:
out_char = char
break
        # We append the predicted character after the input sequence
        in_text += out_char
return in_text
# Let's test
print(generate_seq(model, mapping, 50, 'And Levin said', 20))
print(generate_seq(model, mapping, 50, 'Happy families', 20))
```
The model does generate something meaningful, even though at this stage it is really nothing more than a single LSTM layer (and its power is already evident).
# Using Strings in Python 3
[Python String docs](https://docs.python.org/3/library/string.html)
### Creating Strings
Enclose a string in single or double quotes, or in triple single quotes.
And you can embed single quotes within double quotes, or double quotes within single quotes.
```
s = 'Tony Stark is'
t = "Ironman."
print(s, t)
u = 'Her book is called "The Magician".'
print(u)
v = '''Captain Rogers kicks butt.'''
print(v)
```
### Type, Len, Split, Join
Get the number of characters in a string using len.
To get the number of words you have to split the string into a list. Split breaks on whitespace by default, or you can split on any substring you like.
To reverse a split, call join on the separator string and pass it the list of parts.
```
print(type(s))
print(len(s))
print(s.split())
print(len(s.split()))
print(u.split('a'))
print('you,are,so,pretty'.split(','))
print(' '.join(['Just', 'do', 'it.']))
```
### Check if a substring is contained in a string
Use *in* or *not in*.
Startswith and Endswith are also useful boolean checks.
```
print('dog' in s)
print('k' in t)
print('k' not in t)
print(s.startswith('Tony'))
print(s.endswith('is'))
```
### Replace all substrings
The second example iterates through a dictionary and replaces all instances of spelled-out numbers with numerals.
```
v = v.replace('Rogers', 'America')
print(v)
z = 'Anton has three cars. Javier has four.'
numbers = {'one':'1', 'two':'2', 'three':'3', 'four':'4', 'five':'5'}
for k,v in numbers.items():
z = z.replace(k,v)
print(z)
```
### Change case
```
print(s.lower())
print(t.upper())
print(u.title())
print('hulk rules!'.capitalize())
print('david'.islower())
print('hulk'.isupper())
print('Hulk'.istitle())
print('covid19'.isalnum())
print('Thor'.isalpha())
print('3.14'.isnumeric())
print('314'.isdigit())
print('3.14'.isdecimal())
import string
print(string.digits)
print(string.punctuation)
print(string.ascii_lowercase)
print(string.ascii_uppercase)
```
### Strip leading or trailing characters
This is often used to strip blank spaces or newlines, but can be used for much more.
```
w = '\n Natasha is a spy \n'
x = '\nShe has red hair\n'
print(w.strip() + '.')
print(w.lstrip())
print(w.rstrip())
print(w.strip() + '. ' + x.strip() + '.')
print(x.strip().rstrip('arih'))
y = 'What do you want?!!&?'
print(y.rstrip(string.punctuation))
```
### Find, and Count substrings
Search from the left with find, or from the right with rfind.
find returns the start index of the first (leftmost) match of the substring, while rfind returns the start index of the last (rightmost) match; both return -1 if the substring is not found.
```
print(y.find('a'))
print(y.rfind('do'))
print(y)
print(w.strip())
print(w.count('a'))
```
### Strings are immutable
Any change to a string results in a new string being written to a new block of memory.
```
m = 'Black widow'
print(id(m))
m = m + 's'
print(id(m))
print(s, t)
z = s + ' ' + t
print(z)
```
### Slicing Substrings
string[start:stop:step] (start is inclusive, stop is exclusive).
With a single value in the brackets, it is used as an index.
start defaults to the beginning of the string.
stop defaults to the end.
step defaults to 1.
```
z = '0123456789'
print(z[1])
print(z[5:8])
print(z[:3])
print(z[7:])
print(z[-2:])
print(z[2:5:2])
```
## Example. Estimating the speed of light
Simon Newcomb's measurements of the speed of light, from
> Stigler, S. M. (1977). Do robust estimators work with real data? (with discussion). *Annals of
Statistics* **5**, 1055–1098.
The data are recorded as deviations from $24\ 800$
nanoseconds. Table 3.1 of Bayesian Data Analysis.
```
28 26 33 24 34 -44 27 16 40 -2
29 22 24 21 25 30 23 29 31 19
24 20 36 32 36 28 25 21 28 29
37 25 28 26 30 32 36 26 30 22
36 23 27 27 28 27 31 27 26 33
26 32 32 24 39 28 24 25 32 25
29 27 28 29 16 23
```
```
%matplotlib inline
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm  # the rest of this notebook uses the PyMC3 API (pm.summary, dict-style traces, theano)
import seaborn as sns
from scipy.optimize import brentq
plt.style.use('seaborn-darkgrid')
plt.rc('font', size=12)
%config InlineBackend.figure_formats = ['retina']
numbs = "28 26 33 24 34 -44 27 16 40 -2 29 22 \
24 21 25 30 23 29 31 19 24 20 36 32 36 28 25 21 28 29 \
37 25 28 26 30 32 36 26 30 22 36 23 27 27 28 27 31 27 26 \
33 26 32 32 24 39 28 24 25 32 25 29 27 28 29 16 23"
nums = np.array([int(i) for i in numbs.split(' ')])
plt.figure(figsize=(10, 6))
_, _, _ = plt.hist(nums, bins=35, edgecolor='w')
plt.title('Distribution of the measurements');
mean_t = np.mean(nums)
print(f'The mean of the 66 measurements is {mean_t:.1f}')
std_t = np.std(nums, ddof=1)
print(f'The standard deviation of the 66 measurements is {std_t:.1f}')
```
And now, we use `pymc` to estimate the mean and the standard deviation from the data.
```
with pm.Model() as model_1:
mu = pm.Uniform('mu', lower=10, upper=30)
sigma = pm.Uniform('sigma', lower=0, upper=20)
post = pm.Normal('post', mu=mu, sd=sigma, observed=nums)
with model_1:
trace_1 = pm.sample(draws=50_000, tune=50_000)
az.plot_trace(trace_1);
df = pm.summary(trace_1)
df.style.format('{:.4f}')
```
As you can see, the highest posterior density interval for `mu` is [23.69, 28.77].
```
pm.plot_posterior(trace_1, var_names=['mu'], kind = 'hist');
```
The true marginal posterior distribution of $\mu$ is a scaled $t_{65}$ distribution, $\mu \mid y \sim t_{65}(\bar{y},\, s^2/66)$, i.e. a $t$ with 65 degrees of freedom centred at the sample mean with scale $s/\sqrt{66}$.
```
from scipy.stats import t
x = np.linspace(22, 30, 500)
# the marginal posterior of mu has scale s / sqrt(n)
y = t.pdf(x, 65, loc=mean_t, scale=std_t / np.sqrt(len(nums)))
y_pred = t.pdf(x, 65, loc=df['mean'].values[0], scale=std_t / np.sqrt(len(nums)))
plt.figure(figsize=(10, 5))
plt.plot(x, y, label='True', linewidth=5)
plt.plot(x, y_pred, 'o', label='Predicted', alpha=0.2)
plt.legend()
plt.title('The posterior distribution')
plt.xlabel(r'$\mu$', fontsize=14);
```
The book says you can find the posterior interval by simulation, so let's do that with Python. First, draw $\sigma^2$ from its scaled inverse-$\chi^2$ posterior, then draw $\mu \mid \sigma^2, y \sim \mathrm{N}(\bar{y}, \sigma^2/n)$.
```
mu_estim = []
for i in range(10_000):
    # sigma^2 | y follows a scaled inverse-chi^2 distribution with n - 1 = 65 degrees of freedom
    y = np.random.chisquare(65)
    y2 = 65 * std_t**2 / y
    # mu | sigma^2, y ~ Normal(ybar, sigma^2 / n); np.random.normal expects the standard deviation
    yy = np.random.normal(loc=mean_t, scale=np.sqrt(y2 / 66))
    mu_estim.append(yy)
```
To visualize `mu_estim`, we plot a histogram.
```
plt.figure(figsize=(8,5))
rang, bins1, _ = plt.hist(mu_estim, bins=1000, density=True)
plt.xlabel(r'$\mu$', fontsize=14);
```
The advantage here is that you can find the median and the central posterior interval directly from the draws. As a rough approximation of the median (the distribution is close to symmetric), we take the value at the middle bin of the histogram:
```
idx = bins1.shape[0] // 2
print((bins1[idx] + bins1[idx + 1]) / 2)
```
And the central posterior interval is... not that easy to find. We have to find $a$ such that:
$$\int_{\mu -a}^{\mu +a} f(x)\, dx = 0.95,$$
with $\mu$ the median. We need to define $dx$ and $f(x)$.
```
delta_bin = bins1[1] - bins1[0]
print(f'This is delta x: {delta_bin}')
```
We define a function to find $a$ (in fact, $a$ is an index). `rang` is $f(x)$.
```
def func3(a):
return sum(rang[idx - int(a):idx + int(a)] * delta_bin) - 0.95
idx_sol = brentq(func3, 0, idx)
idx_sol
```
That number is an index, therefore the interval is:
```
l_i = bins1[idx - int(idx_sol)]
l_d = bins1[idx + int(idx_sol)]
print(f'The central posterior interval is [{l_i:.2f}, {l_d:.2f}]')
```
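A quicker cross-check, working directly on the simulated draws rather than on the histogram, is to take the empirical median and quantiles:
```
# median and 95% central posterior interval straight from the draws
print(np.median(mu_estim))
print(np.percentile(mu_estim, [2.5, 97.5]))
```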
## Example. Pre-election polling
The data come from a 1988 CBS News pre-election survey described in *Bayesian Data Analysis*: of 1447 respondents, 727 supported George Bush, 583 supported Michael Dukakis, and 137 supported other candidates or expressed no opinion. Let's put that in code.
```
obs = np.array([727, 583, 137])
bush_supp = obs[0] / sum(obs)
dukakis_supp = obs[1] / sum(obs)
other_supp = obs[2] / sum(obs)
arr = np.array([bush_supp, dukakis_supp, other_supp])
print('The proportion array is', arr)
print('The supporters array is', obs)
```
Remember that we want to find the distribution of $\theta_1 - \theta_2$. In this case, the prior distribution on each $\theta$ is a uniform distribution; the data $(y_1, y_2, y_3)$ follow a multinomial distribution, with parameters $(\theta_1, \theta_2, \theta_3)$.
```
import theano
import theano.tensor as tt
with pm.Model() as model_3:
theta1 = pm.Uniform('theta1', lower=0, upper=1)
theta2 = pm.Uniform('theta2', lower=0, upper=1)
theta3 = pm.Uniform('theta3', lower=0, upper=1)
post = pm.Multinomial('post', n=obs.sum(), p=[theta1, theta2, theta3], observed=obs)
diff = pm.Deterministic('diff', theta1 - theta2)
model_3.check_test_point()
pm.model_to_graphviz(model_3)
with model_3:
trace_3 = pm.sample(draws=50_000, tune=50_000)
az.plot_trace(trace_3);
pm.summary(trace_3, kind = "stats")
pm.summary(trace_3, kind = "diagnostics")
```
As you can see, the way we wrote the model is not good; that's why you see a lot of divergences, and why `ess_bulk` (the bulk effective sample size) and `ess_tail` (the tail effective sample size) are very, very low. This can be improved.
```
with pm.Model() as model_4:
theta = pm.Dirichlet('theta', a=np.ones_like(obs))
post = pm.Multinomial('post', n=obs.sum(), p=theta, observed=obs)
with model_4:
trace_4 = pm.sample(10_000, tune=5000)
az.plot_trace(trace_4);
pm.summary(trace_4)
```
Better trace plots and better `ess_bulk`/`ess_tail`. Now we can estimate $\theta_1 - \theta_2$: we draw 4,000 points from the posterior predictive distribution.
```
post_samples = pm.sample_posterior_predictive(trace_4, samples=4_000, model=model_4)
diff = []
sum_post_sample = post_samples['post'].sum(axis=1)[0]
for i in range(post_samples['post'].shape[0]):
diff.append((post_samples['post'][i, 0] -
post_samples['post'][i, 1]) / sum_post_sample)
plt.figure(figsize=(10, 6))
_, _, _ = plt.hist(diff, bins=25, edgecolor='w', density=True)
plt.title(r'Distribution of $\theta_1 - \theta_2$ using Pymc3');
```
Of course, you can compare this result with the true posterior distribution:
```
from scipy.stats import dirichlet
ddd = dirichlet([728, 584, 138])
rad = []
for i in range(4_000):
rad.append(ddd.rvs()[0][0] - ddd.rvs()[0][1])
plt.figure(figsize=(10, 6))
_, _, _ = plt.hist(rad, color='C5', bins=25, edgecolor='w', density=True)
plt.title(r'Distribution of $\theta_1 - \theta_2$');
plt.figure(figsize=(10, 6))
sns.kdeplot(rad, label='True')
sns.kdeplot(diff, label='Predicted');
plt.title('Comparison between both methods')
plt.xlabel(r'$\theta_1 - \theta_2$', fontsize=14);
```
## Example: analysis of a bioassay experiment
The data for this example are given in Table 3.1 of the book:
```
x_dose = np.array([-0.86, -0.3, -0.05, 0.73])
n_anim = np.array([5, 5, 5, 5])
y_deat = np.array([0, 1, 3, 5])
with pm.Model() as model_5:
alpha = pm.Uniform('alpha', lower=-5, upper=7)
beta = pm.Uniform('beta', lower=0, upper=50)
theta = pm.math.invlogit(alpha + beta * x_dose)
post = pm.Binomial('post', n=n_anim, p=theta, observed=y_deat)
with model_5:
trace_5 = pm.sample(draws=10_000, tune=15_000)
az.plot_trace(trace_5);
df5 = pm.summary(trace_5)
df5.style.format('{:.4f}')
```
The next plots are a pair plot of the samples, posterior plots for `alpha` and `beta`, and a contour plot.
```
az.plot_pair(trace_5, figsize=(8, 7), divergences=True, kind = "hexbin");
fig, ax = plt.subplots(ncols=2, nrows=1, figsize=(13, 5))
az.plot_posterior(trace_5, ax=ax, kind='hist');
fig, ax = plt.subplots(figsize=(10,6))
sns.kdeplot(trace_5['alpha'][30000:40000], trace_5['beta'][30000:40000],
cmap=plt.cm.viridis, ax=ax, n_levels=10)
ax.set_xlim(-2, 4)
ax.set_ylim(-2, 27)
ax.set_xlabel('alpha')
ax.set_ylabel('beta');
```
Histogram of draws from the posterior distribution of the LD50 (the dose at which the probability of death is 50%, i.e. $-\alpha/\beta$):
```
ld50 = []
begi = 1500
for i in range(1000):
ld50.append( - trace_5['alpha'][begi + i] / trace_5['beta'][begi + i])
plt.figure(figsize=(10, 6))
_, _, _, = plt.hist(ld50, bins=25, edgecolor='w')
plt.xlabel('LD50', fontsize=14);
%load_ext watermark
%watermark -iv -v -p theano,scipy,matplotlib -m
```
| github_jupyter | %matplotlib inline
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc as pm
import seaborn as sns
from scipy.optimize import brentq
plt.style.use('seaborn-darkgrid')
plt.rc('font', size=12)
%config Inline.figure_formats = ['retina']
numbs = "28 26 33 24 34 -44 27 16 40 -2 29 22 \
24 21 25 30 23 29 31 19 24 20 36 32 36 28 25 21 28 29 \
37 25 28 26 30 32 36 26 30 22 36 23 27 27 28 27 31 27 26 \
33 26 32 32 24 39 28 24 25 32 25 29 27 28 29 16 23"
nums = np.array([int(i) for i in numbs.split(' ')])
plt.figure(figsize=(10, 6))
_, _, _ = plt.hist(nums, bins=35, edgecolor='w')
plt.title('Distribution of the measurements');
mean_t = np.mean(nums)
print(f'The mean of the 66 measurements is {mean_t:.1f}')
std_t = np.std(nums, ddof=1)
print(f'The standard deviation of the 66 measurements is {std_t:.1f}')
with pm.Model() as model_1:
mu = pm.Uniform('mu', lower=10, upper=30)
sigma = pm.Uniform('sigma', lower=0, upper=20)
post = pm.Normal('post', mu=mu, sd=sigma, observed=nums)
with model_1:
trace_1 = pm.sample(draws=50_000, tune=50_000)
az.plot_trace(trace_1);
df = pm.summary(trace_1)
df.style.format('{:.4f}')
pm.plot_posterior(trace_1, var_names=['mu'], kind = 'hist');
from scipy.stats import t
x = np.linspace(22, 30, 500)
y = t.pdf(x, 65, loc=mean_t)
y_pred = t.pdf(x, 65, loc=df['mean'].values[0])
plt.figure(figsize=(10, 5))
plt.plot(x, y, label='True', linewidth=5)
plt.plot(x, y_pred, 'o', label='Predicted', alpha=0.2)
plt.legend()
plt.title('The posterior distribution')
plt.xlabel(r'$\mu$', fontsize=14);
mu_estim = []
for i in range(10_000):
y = np.random.chisquare(65)
y2 = 65 * std_t**2 / y
yy = np.random.normal(loc=mean_t, scale=y2/66)
mu_estim.append(yy)
plt.figure(figsize=(8,5))
rang, bins1, _ = plt.hist(mu_estim, bins=1000, density=True)
plt.xlabel(r'$\mu$', fontsize=14);
idx = bins1.shape[0] // 2
print((bins1[idx] + bins1[idx + 1]) / 2)
delta_bin = bins1[1] - bins1[0]
print(f'This is delta x: {delta_bin}')
def func3(a):
return sum(rang[idx - int(a):idx + int(a)] * delta_bin) - 0.95
idx_sol = brentq(func3, 0, idx)
idx_sol
l_i = bins1[idx - int(idx_sol)]
l_d = bins1[idx + int(idx_sol)]
print(f'The central posterior interval is [{l_i:.2f}, {l_d:.2f}]')
obs = np.array([727, 583, 137])
bush_supp = obs[0] / sum(obs)
dukakis_supp = obs[1] / sum(obs)
other_supp = obs[2] / sum(obs)
arr = np.array([bush_supp, dukakis_supp, other_supp])
print('The proportion array is', arr)
print('The supporters array is', obs)
import theano
import theano.tensor as tt
with pm.Model() as model_3:
theta1 = pm.Uniform('theta1', lower=0, upper=1)
theta2 = pm.Uniform('theta2', lower=0, upper=1)
theta3 = pm.Uniform('theta3', lower=0, upper=1)
post = pm.Multinomial('post', n=obs.sum(), p=[theta1, theta2, theta3], observed=obs)
diff = pm.Deterministic('diff', theta1 - theta2)
model_3.check_test_point()
pm.model_to_graphviz(model_3)
with model_3:
trace_3 = pm.sample(draws=50_000, tune=50_000)
az.plot_trace(trace_3);
pm.summary(trace_3, kind = "stats")
pm.summary(trace_3, kind = "diagnostics")
with pm.Model() as model_4:
theta = pm.Dirichlet('theta', a=np.ones_like(obs))
post = pm.Multinomial('post', n=obs.sum(), p=theta, observed=obs)
with model_4:
trace_4 = pm.sample(10_000, tune=5000)
az.plot_trace(trace_4);
pm.summary(trace_4)
post_samples = pm.sample_posterior_predictive(trace_4, samples=4_000, model=model_4)
diff = []
sum_post_sample = post_samples['post'].sum(axis=1)[0]
for i in range(post_samples['post'].shape[0]):
diff.append((post_samples['post'][i, 0] -
post_samples['post'][i, 1]) / sum_post_sample)
plt.figure(figsize=(10, 6))
_, _, _ = plt.hist(diff, bins=25, edgecolor='w', density=True)
plt.title(r'Distribution of $\theta_1 - \theta_2$ using Pymc3');
from scipy.stats import dirichlet
ddd = dirichlet([728, 584, 138])
rad = []
for i in range(4_000):
rad.append(ddd.rvs()[0][0] - ddd.rvs()[0][1])
plt.figure(figsize=(10, 6))
_, _, _ = plt.hist(rad, color='C5', bins=25, edgecolor='w', density=True)
plt.title(r'Distribution of $\theta_1 - \theta_2$');
plt.figure(figsize=(10, 6))
sns.kdeplot(rad, label='True')
sns.kdeplot(diff, label='Predicted');
plt.title('Comparison between both methods')
plt.xlabel(r'$\theta_1 - \theta_2$', fontsize=14);
x_dose = np.array([-0.86, -0.3, -0.05, 0.73])
n_anim = np.array([5, 5, 5, 5])
y_deat = np.array([0, 1, 3, 5])
with pm.Model() as model_5:
alpha = pm.Uniform('alpha', lower=-5, upper=7)
beta = pm.Uniform('beta', lower=0, upper=50)
theta = pm.math.invlogit(alpha + beta * x_dose)
post = pm.Binomial('post', n=n_anim, p=theta, observed=y_deat)
with model_5:
trace_5 = pm.sample(draws=10_000, tune=15_000)
az.plot_trace(trace_5);
df5 = pm.summary(trace_5)
df5.style.format('{:.4f}')
az.plot_pair(trace_5, figsize=(8, 7), divergences=True, kind = "hexbin");
fig, ax = plt.subplots(ncols=2, nrows=1, figsize=(13, 5))
az.plot_posterior(trace_5, ax=ax, kind='hist');
fig, ax = plt.subplots(figsize=(10,6))
sns.kdeplot(trace_5['alpha'][30000:40000], trace_5['beta'][30000:40000],
cmap=plt.cm.viridis, ax=ax, n_levels=10)
ax.set_xlim(-2, 4)
ax.set_ylim(-2, 27)
ax.set_xlabel('alpha')
ax.set_ylabel('beta');
ld50 = []
begi = 1500
for i in range(1000):
ld50.append( - trace_5['alpha'][begi + i] / trace_5['beta'][begi + i])
plt.figure(figsize=(10, 6))
_, _, _, = plt.hist(ld50, bins=25, edgecolor='w')
plt.xlabel('LD50', fontsize=14);
%load_ext watermark
%watermark -iv -v -p theano,scipy,matplotlib -m | 0.627495 | 0.955693 |
# Exceptions and error handling
## Exceptions and errors
There are two kinds of errors in Python: syntax errors and exceptions.
Syntax errors occur when we write something that the Python interpreter cannot understand; for example, creating a variable with an invalid name is a syntax error:
```
7a = 7.0
```
The error message is as complete as the interpreter can make it. It normally indicates the line and even tries, with an arrow, to point at the more or less exact position of the error. It does not always succeed, though, because the error may be detected somewhere other than where it was generated. The message also includes the name of the source file.
Exceptions are errors of operation: the interpreter has understood the code, so it is syntactically correct, but running it still produces an error. For example, if we try to divide by zero:
```
a, b = 7, 0
c = a / b
```
Exceptions are errors that occur at run time, and they have the advantage that they can be handled, if we prepare for them. But if an exception is not handled, it will inevitably end the execution of the program.
The last line of the error message is the one that summarizes what happened. Exceptions come in different types, and the type is reported in the error message; in the case above, the exception type is `ZeroDivisionError`. Other exception types, some of which we have already seen, are `ValueError` and `TypeError`.
If we anticipate the possibility of an error, we can prepare for that eventuality with the `try/except` construct. For example, the following code fragment:
```
try:
a, b = 7, 0
c = a / b
except ZeroDivisionError:
print("No puedo dividir por cero")
```
It works like this:
- The block of code inside the `try` statement is executed.
- If no exception is raised while that code runs, the code inside the `except` block is skipped and the program carries on.
- If an exception occurs in one of the lines inside the `try`, the remaining lines are not executed. If the exception type matches the one specified in the `except` clause, the associated block of code runs and the program continues executing.
- If the exception type does not match the one given in the `except` clause, it is an unhandled exception: it keeps "bubbling up" the call chain and, if nobody handles it, it finally stops the program with the corresponding error message.
A `try` statement can have more than one `except` clause, in order to apply different handling to different exception types. We can also make a single `except` clause handle more than one error type by using parentheses:
```
try:
...
except (RuntimeError, TypeError, NameError):
pass
```
If we include an `except` clause without specifying any exception type, we will handle every possible exception. This **should be avoided**, because it makes it very easy to mask any kind of error, including the ones we were not thinking about.
A common practice is to use the `except` clause to print or log an error message and then re-raise the exception with the `raise` statement, so that it either ends the program or is handled at a higher level.
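A minimal sketch of that log-and-re-raise pattern (using the standard `logging` module; the message is just an example):
```
import logging

try:
    result = 1 / 0
except ZeroDivisionError:
    logging.exception("Error while computing result")  # log the message and traceback
    raise                                              # re-raise the same exception
```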
## The else clause in try/except statements
A `try/except` statement can have an `else` clause, much as `for` and `while` loops do. If we include an `else` clause, it must come after the `except` clause(s). The code inside the `else` runs **if and only if every line inside the `try` executed without raising an exception**.
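A small example of the `else` clause (the values are just illustrative):
```
try:
    value = int("42")
except ValueError:
    print("Not a valid number")
else:
    # Runs only if the try block finished without raising any exception
    print("Parsed correctly:", value)
```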
## The finally clause
Lastly, the `try` statement can have a final clause that always runs, whether or not exceptions were raised in the `try` code. The usual use of `finally` is for resource-release code, clean-up operations, or any other code that must run no matter what. For example, if we open a file, we can put the close operation in the `finally` clause, which guarantees that, whatever happens, the file will be closed.
> Note: in the case of files this would be equivalent to using the `with` statement.
The code in the `finally` clause always runs right after the code in the `try` statement:
```
def divide(x, y):
try:
result = x / y
print("el resultado es", result)
except ZeroDivisionError:
print("división por cero!")
finally:
print("Ejecutando sentencia finally")
divide(2, 1)
divide(2, 0)
```
## The exception argument
When an exception occurs, it has an associated value, which we call the **exception argument**. Both the presence and the type of the argument depend on the exception type. The `except` clause can name a variable after the exception type (or tuple of types). If we do, that variable is bound to the exception instance. This object gives us access to more information about the error, including the arguments associated with the exception. The last line printed in the error message is precisely the string form of that object, i.e. the result of calling `__str__`.
Exception handlers are not limited to errors raised in the lines directly inside the `try`; they also catch and handle errors that occur inside functions or methods called, directly or indirectly, by the code inside the `try`. For example:
```
def esto_falla():
x = 1/0
try:
esto_falla()
except ZeroDivisionError as detail:
print('Detectado error en tiempo de ejecución:', detail)
```
## Code readability with exceptions
Exceptions let us improve the readability of our code by separating the error-handling logic from the main logic of the program. In C, for example, errors are not signalled with exceptions; instead, a function call may return a special code to indicate an error. As a consequence, C programs tend to be a sequence of function calls interleaved with error-checking code, and the main flow becomes harder to read with all those interruptions.
Exceptions let us keep the main flow of the code complete and uninterrupted inside the `try`, and still handle the different error cases with separate `except` clauses.
## Raising exceptions
We can trigger exceptions ourselves -- usually described as *raising* an exception -- using the `raise` statement we saw earlier. The only argument to `raise` must be the exception itself, or the class it is instantiated from (the one exception to this is when we want to re-raise the exception we are currently handling inside an `except`; as we saw, a bare `raise` with no arguments is enough there). Let's look at an example:
```
raise NameError('Hola')
```
## Defining our own exceptions
We can also define our own exceptions, by writing classes that derive, directly or indirectly, from the `Exception` class, which is the base class of all exceptions (that is, every exception is a particular case of `Exception`).
User-defined exceptions are usually fairly simple, little more than a container for the attributes that provide information about the error. When creating a module that will define several new exception types, it is common practice to define a base class for those exceptions and derive each particular case from it. This gives a hierarchical organization of our error types that can be very useful for the programmers who use the module or package. New exception names usually end in `Error`, following the naming convention of the standard exceptions.
As a recommendation, before defining your own exceptions it is worth looking at the existing ones; it is quite likely that an appropriate one already exists for your case.
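A minimal sketch of that pattern (the class names here are made up for illustration):
```
class AppError(Exception):
    """Base class for the errors raised by this (hypothetical) module."""

class InvalidInputError(AppError):
    """Raised when an input value cannot be processed."""
    def __init__(self, value):
        super().__init__(f"invalid input: {value!r}")
        self.value = value

raise InvalidInputError(-1)
```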
| github_jupyter | 7a = 7.0
a, b = 7, 0
c = a / b
try:
a, b = 7, 0
c = a / b
except ZeroDivisionError:
print("No puedo dividir por cero")
try:
...
except (RuntimeError, TypeError, NameError):
pass
def divide(x, y):
try:
result = x / y
print("el resultado es", result)
except ZeroDivisionError:
print("división por cero!")
finally:
print("Ejecutando sentencia finally")
divide(2, 1)
divide(2, 0)
def esto_falla():
x = 1/0
try:
esto_falla()
except ZeroDivisionError as detail:
print('Detectado error en tiempo de ejecución:', detail)
raise NameError('Hola') | 0.206814 | 0.970688 |
# Lecture 10: Variable Scope
CSCI 1360: Foundations for Informatics and Analytics
## Overview and Objectives
We've spoken a lot about data structures and orders of execution (loops, functions, and so on). But while we're now intimately familiar with different ways of blocking our code, we haven't yet touched on how this affects the variables we define and where it's legal to use them. By the end of this lecture, you should be able to:
- Define the *scope* of a variable, based on where it is created
- Understand the concept of a *namespace* in Python, and its role in limiting variable scope
- Conceptualize how variable scoping fits into the larger picture of modular program design
## Part 1: What is scope?
![scope](http://cdn.titan.pgsitecore.com/en-us/-/media/Crest/Images/Products/Category/Mouthwash/Crest%20Scope%20Classic%20Mint%20Mouthwash/crest-scope-mouthwash-original-mint-flavor.png?w=460&v=1-201603041337)
(couldn't resist)
*Scope* refers to where a variable is defined. Another way to look at scope is to ask about the *lifetime* of a variable.
Hopefully, it doesn't come as a surprise that some variables aren't always accessible everywhere in your program.
```
def func(x):
print(x)
x = 10
func(20)
print(x)
```
An example we've already encountered is when we're trying to handle an exception.
```
import numpy as np
try:
i = np.random.randint(100)
if i % 2 == 0:
raise
except:
copy = i
print(i) # Does this work?
print(copy) # What about this?
```
There are different categories of scope. It's always helpful to know which of these categories a variable falls into.
### Global scope
A variable in *global scope* can be "seen" and accessed from pretty much anywhere. Its defining characteristic is that it's not created in any particular function or block of any kind. This lack of context makes it global.
```
# This is a global variable. It can be accessed anywhere in this notebook.
a = 0
```
(Small caveat: there is the concept of "built-in" scope, such as `range` or `len` or `SyntaxError`, which are technically even more "global" than global variables, since they're seen anywhere in Python writ large. "global" in this context means "seen anywhere in your program")
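For instance, a module-level name can shadow a built-in:
```
len = 5              # shadows the built-in len() in the global namespace
# len("abc")         # would now raise TypeError: 'int' object is not callable
del len              # removes the shadowing name; the built-in is visible again
```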
### Local scope
The next step down: these are variables defined within a specific context, such as inside a function, and no longer exist once the function or context ends.
```
# This is our global variable, redefined.
a = 0
def f():
# This is a local variable. It disappears when the function ends.
b = 0
print(a) # a still exists here; b does not.
```
(Small caveat: there is the concept of "nonlocal" scope, where you have variables defined inside functions, when those functions are themselves defined inside functions. This gets into [functional programming](https://en.wikipedia.org/wiki/Functional_programming), which Python does support and is gaining momentum in data science, but which is beyond the *scope* (ha!) of this course)
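Just to make the term concrete, here's a tiny closure that uses `nonlocal` (you won't need this for the rest of the course):
```
def make_counter():
    count = 0              # lives in the enclosing (nonlocal) scope
    def increment():
        nonlocal count     # refer to the enclosing variable instead of creating a new local
        count += 1
        return count
    return increment

counter = make_counter()
print(counter(), counter())   # 1 2
```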
### Namespaces
This brings us to the overarching concept of a *namespace*.
A namespace is a collection, or pool, of variables in Python. The global *namespace* is the pool of global variables that exist in a program.
```
a = 0
b = 0
def func():
c = 0
d = 0
```
`a` and `b` exist in the global namespace. `c` and `d` exist in the function namespace of the function `func`.
The whole point of namespaces is, essentially, to keep a conceptual grip on the program you're writing.
Anyone using the Rodeo IDE?
![rodeo](http://cs.uga.edu/~squinn/courses/fa16/csci1360/assets/scope.png)
Likewise, every function will also have its own namespace of variables, as will every *class* (which we'll get to next week!).
What happens when namespaces collide?
```
a = 0
def func():
a = 1
print(a) # What gets printed?
```
This effect is referred to as *variable shadowing*: the locally-scoped variable takes precedence over the globally-scoped variable. It *shadows* the global variable.
This is not a bug--in the name of program simplicity, this *limits the scope* of the effects of changing a variable's value to a single function, rather than your entire program!
If you have multiple functions that all use similar variable-naming conventions--or, even more likely, you have a program that's written by lots of different people who like to use the variable `i` in everything--it'd be *catastrophic* if one change to a variable `i` resulted in a change to *every* variable `i`.
```
i = 0
def func1():
i = 10
def func2():
i = 20
def func3(i):
i = 40
# ...
def funcOneHundredBillion():
i = 938948292
print(i) # Wait, what is i?
```
If, however, you really want to assign to the global variable from inside a function--disabling the *shadowing* that Python applies by default--you can use the `global` keyword to tell Python that, yes, this is indeed the global variable.
```
i = 10
def func():
global i
i = 20
func()
print(i)
```
## Part 2: Scoping and blocks
This is a separate section for any Java/C/C++ converts in the room.
We've seen how Python creates *namespaces* at different hierarchies--one for every function, one for each class, and one single global namespace--which holds variables that are defined.
But what about variables defined inside *blocks*--constructs like `for` loops and `if` statements and `try`/`except` blocks?
Let's take a look at an example.
```
a = 0
if a == 0:
b = 1
```
In what namespace is `b`?
**Global**. It's no different from `a`.
How about this one:
```
i = 42
for i in range(10):
i = i * 2
j = i
```
What is `j` at the end?
**18 (the last value of `i` in the range--9--times two).** Seeing a pattern yet?
Let's go back to the very first example in the lecture.
```
import numpy as np
try:
i = np.random.randint(100)
if i % 2 == 0:
raise
except:
print(i) # What is i?
print(i) # What is i?
```
What is `i` in these cases? Is there a case where `i` does not exist?
**Nope, `i` is in the global namespace.**
### Blocks
The whole point is to illustrate that *blocks* in Python--conditionals, loops, exception handlers--all exist in their *same enclosing scope* and do NOT define new namespaces.
This is somewhat of a departure from Java, where you could define an `int` counter inside a loop, but it would disappear once the loop ended, so you'd have to define the counter *outside* the loop in order to use it afterwards.
To illustrate this idea of a namespace being confined to functions, classes, and the global namespace, here's a bunch of nested conditionals that ultimately define a variable:
```
a = 1
if a % 2 == 1:
if 2 - 1 == a:
if a * 1 == 1:
if a / 1 == 1:
for i in range(10):
for j in range(10):
b = i * j
print(b)
```
**`b` is a global variable.** So it makes sense that it's accessible anywhere, whether in the `print` statement or in the nested conditionals. But there's a caveat here--anyone know what it is?
**What if one of the conditionals fails?**
Here's the same code again, but I've simply changed the starting value of `a`.
```
#a = 1
a = 0
if a % 2 == 1:
if 2 - 1 == a:
if a * 1 == 1:
if a / 1 == 1:
for i in range(10):
for j in range(10):
b = i * j
print(b)
```
The first condition should fail; now that `a == 0`, a modulo by 2 will give a remainder of 0, thus terminating the conditionals at the very first one and skipping straight to the `print` statement. What happens?
**CRASH**.
The moral of the story here is: namespaces are great, but you still have to define your variables.
## Review Questions
Some questions to discuss and consider:
1: Are function arguments in the global or local function namespace? Are there any circumstances under which this would not be the case?
2: Give some examples of cases where global variables are helpful.
3: Give some examples where global variables can be a liability.
4: Let's say I call a function that takes 1 argument: a variable named `index`. Later on in that function, I write a `for` loop with the header `for index in range(10):`. I know a little about variable scoping, so I'm confident that shadowing will preserve the original value of the `index` function argument once the `for` loop finishes running. Is this thinking accurate? Why or why not?
5: Can you think of any examples where the "built-in" namespace is different from the "global" namespace?
## Course Administrivia
- How is A3 going?
- Who wants to volunteer for tomorrow's flipped lecture?
## Additional Resources
1. Variables and scope: http://python-textbok.readthedocs.io/en/latest/Variables_and_Scope.html
2. Short description of Python scoping rules: http://stackoverflow.com/questions/291978/short-description-of-python-scoping-rules
3. Lott, Steven F. *Building Skills in Python,* Chapter 7. 2010. http://collab.izap.in/attachments/download/21/BuildingSkillsinPython.pdf
| github_jupyter | def func(x):
print(x)
x = 10
func(20)
print(x)
import numpy as np
try:
i = np.random.randint(100)
if i % 2 == 0:
raise
except:
copy = i
print(i) # Does this work?
print(copy) # What about this?
# This is a global variable. It can be accessed anywhere in this notebook.
a = 0
# This is our global variable, redefined.
a = 0
def f():
# This is a local variable. It disappears when the function ends.
b = 0
print(a) # a still exists here; b does not.
a = 0
b = 0
def func():
c = 0
d = 0
a = 0
def func():
a = 1
print(a) # What gets printed?
i = 0
def func1():
i = 10
def func2():
i = 20
def func3(i):
i = 40
# ...
def funcOneHundredBillion():
i = 938948292
print(i) # Wait, what is i?
i = 10
def func():
global i
i = 20
func()
print(i)
a = 0
if a == 0:
b = 1
i = 42
for i in range(10):
i = i * 2
j = i
import numpy as np
try:
i = np.random.randint(100)
if i % 2 == 0:
raise
except:
print(i) # What is i?
print(i) # What is i?
a = 1
if a % 2 == 1:
if 2 - 1 == a:
if a * 1 == 1:
if a / 1 == 1:
for i in range(10):
for j in range(10):
b = i * j
print(b)
#a = 1
a = 0
if a % 2 == 1:
if 2 - 1 == a:
if a * 1 == 1:
if a / 1 == 1:
for i in range(10):
for j in range(10):
b = i * j
print(b) | 0.15511 | 0.983295 |
```
import numpy as np
import pandas as pd
import seaborn as sns
sns.reset_defaults
sns.set_style(style='darkgrid')
sns.set_context(context='notebook')
import matplotlib.pyplot as plt
#plt.style.use('ggplot')
plt.rcParams["patch.force_edgecolor"] = True
plt.rcParams["figure.figsize"] = (20.0, 10.0)
pd.set_option('display.max_columns', 2000)
pd.set_option('display.max_rows', 2000)
font = {'size' : 20}
plt.rc('font', **font)
plt.ion()
%matplotlib inline
cd ..
import src.features.build_features as bf
cd notebooks/
import altair
st_asmt_df = pd.read_csv('../data/raw/studentAssessment.csv')
asmt_df = pd.read_csv('../data/raw/assessments.csv')
st_asmt_df[st_asmt_df['score'] == 0].count()
st_asmt_df.info()
```
Observations: most students have average scores around 80; we should also look at the number of assessments per student.
Goal: get the score on the first assessment per student/module/presentation.
Approach: get the max `days_submitted_early` for each student, then merge it back and filter on it.
```
ass_join = bf._join_asssessments(st_asmt_df, asmt_df)
ass_join.head()
asmt_df.info()
asmt_df.groupby(by=['code_module', 'code_presentation']).count()[['id_assessment']]
asmt_df.groupby(by=['code_module', 'code_presentation']).median()[['date']]
asmt_df #[(asmt_df['code_module'] == 'FFF')]
sns.lmplot(x='days_submitted_early', y='score', data=ass_join, height=8)
sns.distplot(ass_join[ass_join['assessment_type']=='TMA'][['score']])
sns.distplot(ass_join[ass_join['assessment_type']=='CMA'][['score']])
sns.distplot(ass_join[ass_join['assessment_type']=='Exam'][['score']])
ass_join.sample(10)
ass_join.unstack
early_max = ass_join.groupby(by=['code_module', 'code_presentation',
'id_student']).max()[['days_submitted_early']]
# early_max.head(20)
merged = pd.merge(ass_join, early_max, how = 'outer', on = ['id_student', 'code_module', 'code_presentation'])
# merged.head(20)
with_first_assessment = merged[merged['days_submitted_early_x'] == merged['days_submitted_early_y']][['score']]
with_first_assessment.rename({'score': 'first_assessment_score'}, axis = 1, inplace = True)
with_first_assessment
score_first_assessment = with_first_assessment[['days_submitted_early_y', 'score']]
score_first_assessment
```
Right above this is the process for getting the score of the first assessment for each student in each module/presentation; take care to rename the merged `days_submitted_early` column so it is clear that the `_y` suffix holds the per-student max.
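An alternative sketch (assuming `days_submitted_early` has no missing values, so each student's earliest submission is simply the group-wise max) uses `idxmax` and skips the merge:
```
# Sketch: first (earliest-submitted) assessment score per student/module/presentation
first_idx = ass_join.groupby(['code_module', 'code_presentation',
                              'id_student'])['days_submitted_early'].idxmax()
first_scores = ass_join.loc[first_idx, ['code_module', 'code_presentation',
                                        'id_student', 'score']]
first_scores = first_scores.rename(columns={'score': 'first_assessment_score'})
first_scores.head()
```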
```
ass_per_st.head(50)
ass_join = bf._join_asssessments(st_asmt_df, asmt_df)
```
How many assessments are there for each student in each module/presentation?
It varies, but each student has at least one assessment.
```
ass_per_st = ass_join.groupby(by=['code_module', 'code_presentation',
'id_student']).count()
ass_per_st[ass_per_st['id_assessment'] == 0]
st_assess['is_banked'].sum()
by_student = st_assess.groupby(by='id_student').mean().drop('id_assessment', axis=1)
by_student.sort_values(by=['score'], axis=0, ascending=False, inplace=True)
fig, ax = plt.subplots(figsize=(12,10))
sns.distplot(by_student.dropna()['score'], kde=False, axlabel='Global Average Scores by Student' )
```
| github_jupyter | import numpy as np
import pandas as pd
import seaborn as sns
sns.reset_defaults
sns.set_style(style='darkgrid')
sns.set_context(context='notebook')
import matplotlib.pyplot as plt
#plt.style.use('ggplot')
plt.rcParams["patch.force_edgecolor"] = True
plt.rcParams["figure.figsize"] = (20.0, 10.0)
pd.set_option('display.max_columns', 2000)
pd.set_option('display.max_rows', 2000)
font = {'size' : 20}
plt.rc('font', **font)
plt.ion()
%matplotlib inline
cd ..
import src.features.build_features as bf
cd notebooks/
import altair
st_asmt_df = pd.read_csv('../data/raw/studentAssessment.csv')
asmt_df = pd.read_csv('../data/raw/assessments.csv')
st_asmt_df[st_asmt_df['score'] == 0].count()
st_asmt_df.info()
ass_join = bf._join_asssessments(st_asmt_df, asmt_df)
ass_join.head()
asmt_df.info()
asmt_df.groupby(by=['code_module', 'code_presentation']).count()[['id_assessment']]
asmt_df.groupby(by=['code_module', 'code_presentation']).median()[['date']]
asmt_df #[(asmt_df['code_module'] == 'FFF')]
sns.lmplot(x='days_submitted_early', y='score', data=ass_join, height=8)
sns.distplot(ass_join[ass_join['assessment_type']=='TMA'][['score']])
sns.distplot(ass_join[ass_join['assessment_type']=='CMA'][['score']])
sns.distplot(ass_join[ass_join['assessment_type']=='Exam'][['score']])
ass_join.sample(10)
ass_join.unstack
early_max = ass_join.groupby(by=['code_module', 'code_presentation',
'id_student']).max()[['days_submitted_early']]
# early_max.head(20)
merged = pd.merge(ass_join, early_max, how = 'outer', on = ['id_student', 'code_module', 'code_presentation'])
# merged.head(20)
with_first_assessment = merged[merged['days_submitted_early_x'] == merged['days_submitted_early_y']][['score']]
with_first_assessment.rename({'score': 'first_assessment_score'}, axis = 1, inplace = True)
with_first_assessment
score_first_assessment = with_first_assessment[['days_submitted_early_y', 'score']]
score_first_assessment
ass_per_st.head(50)
ass_join = bf._join_asssessments(st_asmt_df, asmt_df)
ass_per_st = ass_join.groupby(by=['code_module', 'code_presentation',
'id_student']).count()
ass_per_st[ass_per_st['id_assessment'] == 0]
st_assess['is_banked'].sum()
by_student = st_assess.groupby(by='id_student').mean().drop('id_assessment', axis=1)
by_student.sort_values(by=['score'], axis=0, ascending=False, inplace=True)
fig, ax = plt.subplots(figsize=(12,10))
sns.distplot(by_student.dropna()['score'], kde=False, axlabel='Global Average Scores by Student' ) | 0.241489 | 0.5752 |
<center>
<h1>Accessing THREDDS using Siphon</h1>
<br>
<h3>25 July 2017
<br>
<br>
Ryan May (@dopplershift)
<br><br>
UCAR/Unidata<br>
</h3>
</center>
# What is Siphon?
* Python library for remote data access
* Focus on atmospheric and oceanic data sources
* Bulk of features focused on THREDDS
## Installing on Azure
```
!conda install --name root siphon -y -c conda-forge
```
## Functionality
* THREDDS catalog parser
* NetCDF Subset Service (NCSS) client
* CDM Remote client
* Radar Query Service client
# THREDDS?
* Server for data collections in various formats
* Powered by netCDF-Java
* Provides catalogs of data with metadata information
* Programmatic access to data with various services
* Metadata services
- ISO
- UDDC
- NCML
* Download service (HTTPServer)
- Subsetting
* WMS/WCS
* OPeNDAP and CDMRemote
* NetCDF Subset Service (NCSS)
## THREDDS Demo
http://thredds.ucar.edu
# Siphon for THREDDS
- Let's start by parsing a THREDDS catalog
```
from siphon.catalog import TDSCatalog
top_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog.xml')
```
That takes care of downloading the catalog, parsing the XML, and doing useful things. From here we can do things like look at all the catalog references...
```
for ref in top_cat.catalog_refs:
print(ref)
```
So we can see what's available at the top level. We can also extract exactly what we're looking for using the name of the item:
```
ref = top_cat.catalog_refs['Forecast Model Data']
ref.href
```
Or we can just access by position:
```
ref = top_cat.catalog_refs[0]
ref.href
```
and then resolve that catalog reference to get a new catalog.
```
new_cat = ref.follow()
list(new_cat.catalog_refs)
```
We can do this one more time, but instead of `catalog_refs`, we look at the `datasets` attribute to see the list of datasets available.
```
gfs_cat = new_cat.catalog_refs[4].follow()
list(gfs_cat.datasets)
```
`datasets` works just like `catalog_refs` in providing both name- and position-based access. Here we can access the first dataset in the catalog:
```
ds = gfs_cat.datasets[0]
ds.name
```
For catalogs that have a "latest" (automatically updated) dataset, the attribute `latest` is available:
```
ds = gfs_cat.latest
ds.name
```
Let's get a new catalog directly to some satellite data:
http://thredds.ucar.edu/thredds/idd/satellite.html
```
sat_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/'
'satellite/3.9/WEST-CONUS_4km/current/catalog.xml')
list(sat_cat.datasets)
```
Instead of accessing the dataset by name or position, we can also ask the collection of datasets to parse the filenames as datetimes and find:
- those within a range
- those closest to a time
```
from datetime import datetime, timedelta
# Look for all data within the last hour
now = datetime.utcnow()
l = sat_cat.datasets.filter_time_range(start=now - timedelta(hours=1),
end=now)
[ds.name for ds in l]
```
In this case, the filter resulted in a list of `Dataset` handles. If we look instead for the nearest to a time, we get a single `Dataset` handle:
```
# Look for data from an hour ago
dt = datetime.utcnow() - timedelta(hours=1)
ds = sat_cat.datasets.filter_time_nearest(dt)
ds.name
```
We can use the dataset handle to look at the available access methods:
```
ds.access_urls
```
## Putting it together
How would we use this? Let's say we wanted to write a script to download the latest global run of the Wave Watch 3 model (WW3), and plot the output. So far, we have enough to get to the proper dataset:
```
top_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog.xml')
models_cat = top_cat.catalog_refs[0].follow()
ww3_cat = models_cat.catalog_refs['Wave Watch III Global'].follow()
latest_ww3 = ww3_cat.latest
print(latest_ww3.name)
print(latest_ww3.access_urls)
```
## Exercise #1
1. Using Siphon, navigate from the top-level THREDDS catalog at https://nomads.ncdc.noaa.gov/thredds/catalog.xml to the 3-hour NARR-A data from January 5th, 2014 (or another product or time of interest)
1. Using Siphon, compare the available access methods (on http://thredds.ucar.edu) for:
- The "Best GFS Quarter Degree Forecast Time Series" (under "Forecast Model Data")
- A data file of "NEXRAD Level II Radar WSR-88D" (under "Radar Data")
```
# Start here
top_cat = TDSCatalog('https://nomads.ncdc.noaa.gov/thredds/catalog.xml')
```
# Accessing data using Siphon
Accessing catalogs is only part of the story; Siphon is much more useful if you're trying to access/download datasets.
For instance, going back to our satellite data from earlier:
```
# Same as before
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/'
'satellite/3.9/WEST-CONUS_4km/current/'
'catalog.xml')
ds = cat.datasets.filter_time_nearest(datetime.utcnow()
- timedelta(hours=1))
```
We can ask Siphon to download the file locally:
```
ds.download('data.gini')
```
Or better yet, get a file-like object that lets us `read` from the file as if it were local:
```
fobj = ds.remote_open()
data = fobj.read()
```
This is handy if you have Python code to read a particular format.
It's also possible to get access to the file through services that provide netCDF4-like access, but for the remote file. This access allows downloading information only for variables of interest, or for (index-based) subsets of that data:
```
nc = ds.remote_access()
```
By default this uses CDMRemote (if available), but it's also possible to ask for OPeNDAP (using netCDF4-python).
From here we can see what variables are available:
```
list(nc.variables)
```
Or get a subset of the values:
```
# Plot small sample image
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.imshow(nc.variables['IR'][0, ::10, ::10], cmap='Greys', interpolation='none')
```
## Exercise #2
Using `remote_access`, plot a subset of data from the High Resolution Rapid Refresh (http://thredds.ucar.edu/thredds/catalog/grib/NCEP/HRRR/CONUS_2p5km/catalog.html). Pick any of the available collections or individual model runs.
For some datasets, subset support is available:
- Defaults to netCDF Subset Service (NCSS)
- Allows specifying latitude, longitude, time, and variables
- NCSS downloads a netCDF file
To use NCSS, we can call `subset` and get a client.
```
ds = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/'
'grib/NCEP/GFS/Global_0p25deg/catalog.xml').datasets[1]
ncss = ds.subset()
ncss.variables
```
With this client we can set up a query for the data we want. In this case we request the next 24 hours of forecast:
```
query = ncss.query()
query.lonlat_point(lon=-105, lat=40)
now = datetime.utcnow()
query.time_range(now, now + timedelta(days=1))
query.variables('Temperature_surface')
query.accept('netcdf4')
```
From here, we need to get the data, which will return it as an already opened netCDF4 object.
```
nc = ncss.get_data(query)
temp_data = nc.variables['Temperature_surface'][:]
times = nc.variables['time'][:]
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.plot(times, temp_data)
```
We can also request the data for a particular time for a region of interest:
```
query = ncss.query()
query.lonlat_box(east=-80, west=-90, south=35, north=45)
query.time(now + timedelta(days=1))
query.variables('Temperature_surface')
query.accept('netcdf4')
nc = ncss.get_data(query)
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.imshow(nc.variables['Temperature_surface'][0], cmap='RdBu')
```
## Exercise #3
- Use `subset` to download a subset of data from one of:
- http://thredds.ucar.edu/thredds/catalog/grib/NCEP/WW3/Global/catalog.html
- http://thredds.ucar.edu/thredds/catalog/grib/NCEP/HRRR/CONUS_2p5km/catalog.html
- Pick either a time-series or a 2D subset
- Plot using either `plot` or `imshow`
## A full Example
```
# Get the dataset handle
top_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog.xml')
models_cat = top_cat.catalog_refs[0].follow()
gfs_cat = models_cat.catalog_refs['GFS Quarter Degree Forecast'].follow()
latest_gfs = gfs_cat.latest
# Download a subset using NCSS
now = datetime.utcnow()
ncss = latest_gfs.subset()
query = ncss.query().lonlat_point(lon=-86.50, lat=39.17)
query.time_range(now, now + timedelta(days=3)).accept('netcdf4')
query.variables('Temperature_surface')
nc = ncss.get_data(query)
# Plot
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
temp_f = 1.8 * (nc.variables['Temperature_surface'][:] - 273.15) + 32
ax.plot(temp_f, color='r')
```
# Future plans for Siphon
- Add curated list of servers
- Support for access to meteorological upper air archives
- Support for TDS 5.0 CDM Remote Feature service
- Search catalogs using CSW
## Resources
- Siphon docs: https://unidata.github.io/siphon
- Unidata Python Workshop: https://unidata.github.com/unidata-python-workshop
- Unidata Python Gallery: https://unidata.github.com/python-gallery
| github_jupyter | !conda install --name root siphon -y -c conda-forge
from siphon.catalog import TDSCatalog
top_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog.xml')
for ref in top_cat.catalog_refs:
print(ref)
ref = top_cat.catalog_refs['Forecast Model Data']
ref.href
ref = top_cat.catalog_refs[0]
ref.href
new_cat = ref.follow()
list(new_cat.catalog_refs)
gfs_cat = new_cat.catalog_refs[4].follow()
list(gfs_cat.datasets)
ds = gfs_cat.datasets[0]
ds.name
ds = gfs_cat.latest
ds.name
sat_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/'
'satellite/3.9/WEST-CONUS_4km/current/catalog.xml')
list(sat_cat.datasets)
from datetime import datetime, timedelta
# Look for all data within the last hour
now = datetime.utcnow()
l = sat_cat.datasets.filter_time_range(start=now - timedelta(hours=1),
end=now)
[ds.name for ds in l]
# Look for data from an hour ago
dt = datetime.utcnow() - timedelta(hours=1)
ds = sat_cat.datasets.filter_time_nearest(dt)
ds.name
ds.access_urls
top_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog.xml')
models_cat = top_cat.catalog_refs[0].follow()
ww3_cat = models_cat.catalog_refs['Wave Watch III Global'].follow()
latest_ww3 = ww3_cat.latest
print(latest_ww3.name)
print(latest_ww3.access_urls)
# Start here
top_cat = TDSCatalog('https://nomads.ncdc.noaa.gov/thredds/catalog.xml')
# Same as before
cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/'
'satellite/3.9/WEST-CONUS_4km/current/'
'catalog.xml')
ds = cat.datasets.filter_time_nearest(datetime.utcnow()
- timedelta(hours=1))
ds.download('data.gini')
fobj = ds.remote_open()
data = fobj.read()
nc = ds.remote_access()
list(nc.variables)
# Plot small sample image
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.imshow(nc.variables['IR'][0, ::10, ::10], cmap='Greys', interpolation='none')
ds = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/'
'grib/NCEP/GFS/Global_0p25deg/catalog.xml').datasets[1]
ncss = ds.subset()
ncss.variables
query = ncss.query()
query.lonlat_point(lon=-105, lat=40)
now = datetime.utcnow()
query.time_range(now, now + timedelta(days=1))
query.variables('Temperature_surface')
query.accept('netcdf4')
nc = ncss.get_data(query)
temp_data = nc.variables['Temperature_surface'][:]
times = nc.variables['time'][:]
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.plot(times, temp_data)
query = ncss.query()
query.lonlat_box(east=-80, west=-90, south=35, north=45)
query.time(now + timedelta(days=1))
query.variables('Temperature_surface')
query.accept('netcdf4')
nc = ncss.get_data(query)
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.imshow(nc.variables['Temperature_surface'][0], cmap='RdBu')
# Get the dataset handle
top_cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog.xml')
models_cat = top_cat.catalog_refs[0].follow()
gfs_cat = models_cat.catalog_refs['GFS Quarter Degree Forecast'].follow()
latest_gfs = gfs_cat.latest
# Download a subset using NCSS
now = datetime.utcnow()
ncss = latest_gfs.subset()
query = ncss.query().lonlat_point(lon=-86.50, lat=39.17)
query.time_range(now, now + timedelta(days=3)).accept('netcdf4')
query.variables('Temperature_surface')
nc = ncss.get_data(query)
# Plot
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
temp_f = 1.8 * (nc.variables['Temperature_surface'][:] - 273.15) + 32
ax.plot(temp_f, color='r') | 0.597138 | 0.902481 |
# Optimizing a function with probability simplex constraints
This notebook arose in response to a question on StackOverflow about how to optimize a function with probability simplex constraints in Python (see http://stackoverflow.com/questions/32252853/optimization-with-python-scipy-optimize). This is a topic I've thought about a lot for our [paper](http://www.pnas.org/content/112/19/5950.abstract) on optimal immune repertoires, so I was interested to see what other people had to say about it.
## Problem statement
For a given $\boldsymbol y$ and $\gamma$ find the $\boldsymbol x^\star$ that maximizes the following expression over the probability simplex:
$$\max_{x_i \geq 0, \, \sum_i x_i = 1} \left[\sum_i \left(\frac{x_i}{y_i}\right)^\gamma\right]^{1/\gamma}$$
## Solution using scipy.optimize's SLSQP algorithm (user58925)
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import minimize
def objective_function(x, y, gamma=0.2):
return -((x/y)**gamma).sum()**(1.0/gamma)
cons = ({'type': 'eq', 'fun': lambda x: np.array([sum(x) - 1])})
y = np.array([0.5, 0.3, 0.2])
initial_x = np.array([0.2, 0.3, 0.5])
opt = minimize(objective_function, initial_x, args=(y,), method='SLSQP',
constraints=cons, bounds=[(0, 1)] * len(initial_x))
opt
```
Works on my machine (the poster on StackOverflow reported issues with this) and actually requires a surprisingly small number of function evaluations.
## Alternative solution using Nelder-Mead on transformed variables (CT Zhu)
```
def trans_x(x):
x1 = x**2/(1.0+x**2)
z = np.hstack((x1, 1-sum(x1)))
return z
def F(x, y, gamma=0.2):
z = trans_x(x)
return -(((z/y)**gamma).sum())**(1./gamma)
opt = minimize(F, np.array([1., 1.]), args=(np.array(y),),
method='Nelder-Mead')
trans_x(opt.x), opt
```
Works but needs a slightly higher number of function evaluations for convergence.
```
opt = minimize(F, np.array([0., 1.]), args=(np.array([0.2, 0.1, 0.8]), 2.0),
method='Nelder-Mead')
trans_x(opt.x), opt
```
In general though this method can fail, as it does not enforce the non-negativity constraint on the third variable.
## Analytical solution
It turns out the problem is solvable analytically. Since $t \mapsto t^{1/\gamma}$ is increasing (for $\gamma > 0$), it suffices to maximize $\sum_i (x_i/y_i)^\gamma$, which has the same maximizer. One can start by writing down the Lagrangian of the (equality-constrained) optimization problem:
$$L = \sum_i (x_i/y_i)^\gamma - \lambda \left(\sum_i x_i - 1\right)$$
The optimal solution is found by setting the first derivative of this Lagrangian to zero:
$$0 = \partial L / \partial x_i = \gamma \, x_i^{\gamma-1} / y_i^{\gamma} - \lambda$$
$$\Rightarrow x_i \propto y_i^{\gamma/(\gamma - 1)}$$
Using this insight the optimization problem can be solved simply and efficiently:
```
def analytical(y, gamma=0.2):
x = y**(gamma/(gamma-1.0))
x /= np.sum(x)
return x
xanalytical = analytical(np.array(y))
xanalytical, objective_function(xanalytical, np.array(y))
```
## Solution using projected gradient algorithm
This problem can also be solved using a projected gradient algorithm, but this will be for another time.
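For reference, here is a rough sketch of one way it could go: do gradient ascent on the (monotonically equivalent) objective $\sum_i (x_i/y_i)^\gamma$ and project back onto the simplex after every step using the standard sort-based Euclidean projection. The step size and iteration count below are arbitrary, untuned choices:
```
# Sketch only: projected gradient ascent on sum((x/y)**gamma) over the simplex.
def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1} (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_gradient(y, gamma=0.2, step=1e-2, n_iter=5000):
    y = np.asarray(y, dtype=float)
    x = np.ones_like(y) / len(y)                 # start at the centre of the simplex
    for _ in range(n_iter):
        xc = np.maximum(x, 1e-12)                # keep away from 0 to avoid blow-ups in the gradient
        grad = gamma * (xc / y) ** (gamma - 1.0) / y
        x = project_to_simplex(x + step * grad)  # ascent step, then project back
    return x

projected_gradient(np.array(y)), analytical(np.array(y))
```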
| github_jupyter | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.optimize import minimize
def objective_function(x, y, gamma=0.2):
return -((x/y)**gamma).sum()**(1.0/gamma)
cons = ({'type': 'eq', 'fun': lambda x: np.array([sum(x) - 1])})
y = np.array([0.5, 0.3, 0.2])
initial_x = np.array([0.2, 0.3, 0.5])
opt = minimize(objective_function, initial_x, args=(y,), method='SLSQP',
constraints=cons, bounds=[(0, 1)] * len(initial_x))
opt
def trans_x(x):
x1 = x**2/(1.0+x**2)
z = np.hstack((x1, 1-sum(x1)))
return z
def F(x, y, gamma=0.2):
z = trans_x(x)
return -(((z/y)**gamma).sum())**(1./gamma)
opt = minimize(F, np.array([1., 1.]), args=(np.array(y),),
method='Nelder-Mead')
trans_x(opt.x), opt
opt = minimize(F, np.array([0., 1.]), args=(np.array([0.2, 0.1, 0.8]), 2.0),
method='Nelder-Mead')
trans_x(opt.x), opt
def analytical(y, gamma=0.2):
x = y**(gamma/(gamma-1.0))
x /= np.sum(x)
return x
xanalytical = analytical(np.array(y))
xanalytical, objective_function(xanalytical, np.array(y)) | 0.579757 | 0.993661 |
```
import os
import shutil
import zipfile
import urllib.request
def download_repo(url, save_to):
zip_filename = save_to + '.zip'
urllib.request.urlretrieve(url, zip_filename)
if os.path.exists(save_to):
shutil.rmtree(save_to)
with zipfile.ZipFile(zip_filename, 'r') as zip_ref:
zip_ref.extractall('.')
del zip_ref
assert os.path.exists(save_to)
REPO_PATH = 'LinearizedNNs-master'
download_repo(url='https://github.com/maxkvant/LinearizedNNs/archive/master.zip',
save_to=REPO_PATH)
import sys
sys.path.append(f"{REPO_PATH}/src")
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torchvision import transforms, datasets
from torchvision.datasets import FashionMNIST
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import RidgeClassifier
from sklearn.decomposition import PCA
from pytorch_impl.nns import ResNet, FCN, CNN
from pytorch_impl.nns import warm_up_batch_norm
from pytorch_impl.estimators import LinearizedSgdEstimator, SgdEstimator, MatrixExpEstimator, GradientBoostingEstimator
from pytorch_impl import ClassifierTraining
from pytorch_impl.matrix_exp import matrix_exp, compute_exp_term
from pytorch_impl.nns.utils import to_one_hot
device = torch.device('cuda:0') if (torch.cuda.is_available()) else torch.device('cpu')
num_classes = 10
device
# compute M^-1 * (exp(M) - E)
def compute_exp_term(M, device, n_iter=3):
with torch.no_grad():
M = M.double().to(device)
n = M.size()[0]
norm = torch.sqrt((M ** 2).sum())
steps = 0
while norm > 1e-9:
M /= 2.
norm /= 2.
steps += 1
series_sum = torch.zeros([n, n]).double().to(device)
prod = torch.eye(n).double().to(device)
# series_sum: E + M / 2 + M^2 / 6 + ...
for i in range(1, n_iter):
series_sum = (series_sum + prod)
prod = torch.matmul(prod, M) / (i + 1)
# (exp 0) (exp 0) = (exp^2 0)
# (sum E) (sum E) = (sum * exp + sum E)
exp = torch.matmul(M, series_sum) + torch.eye(n).to(device)
for step in range(steps):
series_sum = (torch.matmul(series_sum, exp) + series_sum) / 2.
exp = torch.matmul(exp, exp)
return series_sum
kernels_12k = np.load('../data/kernels_12k.npz')
train_kernel = kernels_12k['train_kernel']
test_kernel = kernels_12k['test_kernel']
labels_train = kernels_12k['labels_train']
labels_test = kernels_12k['labels_test']
train_kernel.shape, test_kernel.shape, labels_train.shape, labels_test.shape
train_kernel = torch.from_numpy(train_kernel).float().to(device)
test_kernel = torch.from_numpy(test_kernel).float().to(device)
labels_test = torch.from_numpy(labels_test).to(device)
labels_train = torch.from_numpy(labels_train).to(device)
y_train = to_one_hot(labels_train, num_classes).to(device)
lr = 1e4
n = len(train_kernel)
reg = 0e-4 * torch.eye(n).to(device)
exp_term = - lr * compute_exp_term(- lr * (train_kernel + reg), device).float()
y_pred = torch.matmul(test_kernel, torch.matmul(exp_term, - y_train))
del exp_term
(y_pred.argmax(dim=1) == labels_test).float().mean()
del train_kernel
del test_kernel
def matmul_via_torch(numpy_matrix, torch_matrix, step=2048):
with torch.no_grad():
n, m = numpy_matrix.shape
m2, k = torch_matrix.size()
assert m2 == m
to_torch = lambda matrix: torch.from_numpy(matrix).double().to(device)
result = torch.zeros([n, k]).to(device)
for l in range(0, n, step):
r = min(l + step, n)
result[l:r] = torch.matmul(to_torch(numpy_matrix[l:r]), torch_matrix.double())
return result
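# Block-wise kernel "boosting": keep a weight matrix right_vector so that predictions
# are K @ right_vector, and repeatedly correct the current residual on random blocks
# of training points. For a block with (regularized) kernel K_b, exp_term below equals
# K_b^{-1} (exp(-lr * K_b) - I), so exp_term @ residual approximately solves the local
# kernel system (gradient flow run for "time" lr on that block); block updates are
# averaged over n_blocks and applied with step size beta.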
def boosting(train_kernel, y_train, labels_train, test_kernel, labels_test, beta=1.4, n_iter=24, lr=1e5, flips=False, block_size = 1280 * 2):
with torch.no_grad():
n = len(train_kernel)
right_vector = torch.zeros([n, num_classes]).double().to(device)
# right_vector.normal_()
# right_vector /= np.sqrt(n)
n_actual = (n // 2) if (flips) else n
n_blocks = (2 * n_actual) // (3 * block_size) + 1
print(n_blocks)
for iter_num in range(n_iter):
index = torch.randperm(n_actual).to(device)
if flips:
index += n_actual * rand_bool(n_actual)
y_pred_train = matmul_via_torch(train_kernel, right_vector)
y_pred_test = matmul_via_torch(test_kernel, right_vector)
train_acc = (y_pred_train.argmax(dim=1) == labels_train).float().mean().item()
test_acc = (y_pred_test.argmax(dim=1) == labels_test).float().mean().item()
y_residual = y_pred_train - y_train
train_mse = (y_residual ** 2).sum(dim=1).mean().item()
print(f"iteration {iter_num} train_acc {train_acc} test_acc {test_acc} train_mse {train_mse}")
d_right_vector = torch.zeros([n, num_classes]).double().to(device)
for i in range(n_blocks):
batch_index = index[i * block_size: (i + 1) * block_size]
batch_index_np = batch_index.cpu().numpy()
K = train_kernel[batch_index_np][:, batch_index_np]
K = torch.from_numpy(K).double().to(device)
K = K + 1e-4 * torch.eye(len(K)).double().to(device)
exp_term = - lr * compute_exp_term(- lr * K, device)
d_right_vector[batch_index] = torch.matmul(exp_term, y_residual[batch_index].double()) / n_blocks
pred_change = matmul_via_torch(train_kernel, d_right_vector)
right_vector += d_right_vector * beta
print(f"batches {0}-{n_blocks - 1} done")
print(f"beta = {beta}")
print()
y_pred_train = matmul_via_torch(train_kernel, right_vector)
y_pred_test = matmul_via_torch(test_kernel, right_vector)
train_acc = (y_pred_train.argmax(dim=1) == labels_train).float().mean().item()
test_acc = (y_pred_test.argmax(dim=1) == labels_test).float().mean().item()
y_residual = y_pred_train - to_one_hot(labels_train, num_classes).double().to(device)
train_mse = (y_residual ** 2).sum(dim=1).mean().item()
print(f"iteration {n_iter} train_acc {train_acc:.4f} test_acc {test_acc:.4f} train_mse {train_mse}")
%%time
kernels_50k = np.load('../data/kernels_50k.npz')
train_kernel = kernels_50k['train_kernel']
test_kernel = kernels_50k['test_kernel']
labels_train = kernels_50k['labels_train']
labels_test = kernels_50k['labels_test']
labels_train = torch.from_numpy(labels_train).to(device)
labels_test = torch.from_numpy(labels_test).to(device)
y_train = to_one_hot(labels_train, num_classes).to(device)
y_train = to_one_hot(labels_train, num_classes).to(device)
%time
boosting(train_kernel, y_train, labels_train, test_kernel, labels_test, beta=1.4, lr=1e5, n_iter=100)
%%time
kernels_myrtle10 = np.load('../data/myrtle10_kernels.npz')
train_kernel = kernels_myrtle10['train_kernel']
test_kernel = kernels_myrtle10['test_kernel']
labels_train = kernels_myrtle10['labels_train']
labels_test = kernels_myrtle10['labels_test']
labels_train = torch.from_numpy(labels_train).to(device)
labels_test = torch.from_numpy(labels_test).to(device)
y_train = to_one_hot(labels_train, num_classes).to(device)
train_kernel[:4,:4]
%%time
boosting(train_kernel, y_train, labels_train, test_kernel, labels_test, beta=1, lr=1e5, n_iter=128, block_size=2 * 1280)
%%time
kernels_myrtle7 = np.load('../data/myrtle7_kernels.npz')
train_kernel = kernels_myrtle7['train_kernel']
test_kernel = kernels_myrtle7['test_kernel']
labels_train = kernels_myrtle7['labels_train']
labels_test = kernels_myrtle7['labels_test']
labels_train = torch.from_numpy(labels_train).to(device)
labels_test = torch.from_numpy(labels_test).to(device)
y_train = to_one_hot(labels_train, num_classes).to(device)
train_kernel[:4,:4]
%%time
boosting(train_kernel, y_train, labels_train, test_kernel, labels_test, beta=1, lr=1e5, n_iter=128, block_size=2 * 1280)
```
# Using GalFlow to perform FFT-based convolutions
```
import tensorflow as tf
import galflow as gf
import galsim
%pylab inline
# First let's draw a galaxy image with GalSim
data_dir='/usr/local/share/galsim/COSMOS_25.2_training_sample'
cat = galsim.COSMOSCatalog(dir=data_dir)
psf = cat.makeGalaxy(2, gal_type='real', noise_pad_size=0).original_psf
gal = cat.makeGalaxy(2, gal_type='real', noise_pad_size=0)
conv = galsim.Convolve(psf, gal)
# We draw the galaxy on a postage stamp
imgal = gal.drawImage(nx=128, ny=128, scale=0.03,
method='no_pixel',use_true_center=False)
imconv = conv.drawImage(nx=128, ny=128, scale=0.03,
method='no_pixel', use_true_center=False)
# We draw the PSF image in Kspace at the correct resolution
N = 128
im_scale = 0.03
interp_factor=2
padding_factor=2
Nk = N*interp_factor*padding_factor
from galsim.bounds import _BoundsI
bounds = _BoundsI(0, Nk//2, -Nk//2, Nk//2-1)
impsf = psf.drawKImage(bounds=bounds,
scale=2.*np.pi/(N*padding_factor* im_scale),
recenter=False)
imkgal = gal.drawKImage(bounds=bounds,
scale=2.*np.pi/(N*padding_factor* im_scale),
recenter=False)
subplot(131)
imshow(imgal.array)
subplot(132)
imshow(log10(abs(imkgal.array)), cmap='gist_stern', vmin=-8)
subplot(133)
imshow(log10(abs(impsf.array)), cmap='gist_stern', vmin=-8)
ims = tf.placeholder(shape=[1, N, N, 1], dtype=tf.float32)
kims = tf.placeholder(shape=[1, Nk, Nk//2+1], dtype=tf.complex64)
kpsf = tf.placeholder(shape=[1, Nk, Nk//2+1], dtype=tf.complex64)
res = gf.convolve(ims, kpsf,
zero_padding_factor=padding_factor,
interp_factor=interp_factor )
resk = gf.kconvolve(kims, kpsf,
zero_padding_factor=padding_factor,
interp_factor=interp_factor )
with tf.Session() as sess:
conv, convk = sess.run([res, resk],
feed_dict={ims:imgal.array.reshape(1,N,N,1),
kpsf:fftshift((impsf.array).reshape(1,Nk,Nk//2+1), axes=1),
kims:fftshift((imkgal.array).reshape(1,Nk,Nk//2+1), axes=1)
})
figure(figsize=(15,5))
subplot(131)
imshow((conv[0,:,:,0]))
subplot(132)
imshow(imconv.array)
subplot(133)
imshow(((conv[0,:,:,0] -imconv.array))[8:-8,8:-8] );colorbar()
figure(figsize=(15,5))
subplot(131)
imshow(fftshift(convk[0,:,:,0])[64:-64,64:-64])
subplot(132)
imshow(imconv.array)
subplot(133)
imshow(((fftshift(convk[0,:,:,0])[64:-64,64:-64] -imconv.array)));colorbar()
```
## Testing k-space convolution with custom window function
Here we experiment with reconvolving the images at a different resolution using a band-limited effective PSF, in this case a radial Hann window.
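Concretely, the effective PSF built in the next cell is a radial raised-cosine (Hann) taper in k-space: with $r$ the radial distance from the k-space origin, rescaled by the ratio of target to native pixel scale so that the window reaches zero at the target band limit,

$$w(r) \;=\; \begin{cases} \cos^2(r) = \sin^2(r + \pi/2), & r < \pi/2 \\ 0, & r \ge \pi/2. \end{cases}$$

(This is just a restatement of the double loop below; note the `hann` import from `scipy.signal.windows` is not actually used.)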
```
from scipy.signal.windows import hann
# We draw the PSF image in Kspace at the correct resolution
N = 64
im_scale = 0.168
interp_factor=6
padding_factor=2
Nk = N*interp_factor*padding_factor
from galsim.bounds import _BoundsI
bounds = _BoundsI(0, Nk//2, -Nk//2, Nk//2-1)
# Hann window
stamp_size = Nk
target_pixel_scale=im_scale
pixel_scale=im_scale/interp_factor
my_psf = np.zeros((stamp_size,stamp_size))
for i in range(stamp_size):
for j in range(stamp_size):
r = sqrt((i - 1.0*stamp_size//2)**2 + (j-1.0*stamp_size//2)**2)/(stamp_size//2)*pi/2*target_pixel_scale/pixel_scale
my_psf[i,j] = sin(r+pi/2)**2
if r >= pi/2:
my_psf[i,j] = 0
# Export the PSF as a galsim object
effective_psf = galsim.InterpolatedKImage(galsim.ImageCD(my_psf+0*1j, scale=2.*np.pi/(Nk * im_scale / interp_factor )))
# Also export it directly as an array for k space multiplication
my_psf = fftshift(my_psf)[:, :stamp_size//2+1]
ims = tf.placeholder(shape=[1, N, N, 1], dtype=tf.float32)
kims = tf.placeholder(shape=[1, Nk, Nk//2+1], dtype=tf.complex64)
kpsf = tf.placeholder(shape=[1, Nk, Nk//2+1], dtype=tf.complex64)
res = gf.convolve(ims, kpsf,
zero_padding_factor=padding_factor,
interp_factor=interp_factor )
resk = gf.kconvolve(kims, kpsf,
zero_padding_factor=padding_factor,
interp_factor=interp_factor )
figure(figsize=(10,10))
c=0
sess = tf.Session()
for i in range(25):
gal = cat.makeGalaxy(9+i, noise_pad_size=0)#im_scale*N/2)
imkgal = gal.drawKImage(bounds=bounds,
scale=2.*np.pi/(Nk * im_scale / interp_factor ),
recenter=False)
yop = sess.run(resk, feed_dict={kpsf:my_psf.reshape(1,Nk,Nk//2+1),
kims:fftshift((imkgal.array).reshape(1,Nk,Nk//2+1), axes=1)})
subplot(5,5,c+1)
imshow(arcsinh(50*fftshift(yop[0,:,:,0]))[N//2:-N//2,N//2:-N//2],cmap='gray')
axis('off')
c+=1
# Same thing, but this time we are using purely galsim
figure(figsize=(10,10))
c=0
sess = tf.Session()
for i in range(25):
gal = cat.makeGalaxy(9+i, noise_pad_size=0)#im_scale*N/2)
g = galsim.Convolve(gal, effective_psf)
imgal = g.drawImage(nx=N, ny=N, scale=im_scale,
method='no_pixel', use_true_center=False)
subplot(5,5,c+1)
imshow(arcsinh(50*imgal.array),cmap='gray')
axis('off')
c+=1
# And now, the difference
figure(figsize=(10,10))
c=0
sess = tf.Session()
for i in range(25):
gal = cat.makeGalaxy(9+i, noise_pad_size=0)#im_scale*N/2)
g = galsim.Convolve(gal, effective_psf)
imgal = g.drawImage(nx=N, ny=N, scale=im_scale,
method='no_pixel', use_true_center=False)
imkgal = gal.drawKImage(bounds=bounds,
scale=2.*np.pi/(Nk * im_scale / interp_factor ),
recenter=False)
yop = sess.run(resk, feed_dict={kpsf:my_psf.reshape(1,Nk,Nk//2+1),
kims:fftshift((imkgal.array).reshape(1,Nk,Nk//2+1), axes=1)})
subplot(5,5,c+1)
imshow(imgal.array - fftshift(yop[0,:,:,0])[N//2:-N//2,N//2:-N//2],cmap='gray'); colorbar()
axis('off')
c+=1
subplot(131)
imshow(log10(my_psf));
title('Effective psf')
subplot(132)
imshow(log10(abs(fftshift((imkgal.array).reshape(1,Nk,Nk//2+1), axes=1)))[0],vmin=-5)
title('Galaxy image')
subplot(133)
imshow(log10(abs(rfft2(yop[0,:,:,0]))),vmin=-5)
title('Output image')
```
## 1.0 Import Function
```
from META_TOOLBOX import *
import VIGA_VERIFICA as VIGA_VER
```
## 2.0 Setup
```
SETUP = {'N_REP': 30,
'N_ITER': 100,
'N_POP': 1,
'D': 4,
'X_L': [0.25, 0.05, 0.05, 1/6.0],
'X_U': [0.65, 0.15, 0.15, 1/3.5],
'SIGMA': 0.15,
'ALPHA': 0.98,
'TEMP': None,
'STOP_CONTROL_TEMP': None,
'NULL_DIC': None
}
```
## 3.0 Objective function (OF)
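As can be read from the code below, the objective is the self-weight of the beam plus a quadratic exterior penalty on any violated design check $g_i$ returned by `VERIFICACAO_VIGA`:

$$\mathrm{OF} \;=\; \underbrace{L \, A_c \, \rho_c}_{\text{self-weight}} \;+\; 10^{6} \sum_i \max(0,\, g_i)^2 .$$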
```
# OBJ. Function
def OF_FUNCTION(X, NULL_DIC):
    # Beam geometry (geometria da viga)
VIGA = {'H_W': X[0],
'B_W': X[1],
'B_FS': 0.30,
'B_FI': 0.30,
'H_FS': X[2],
'H_FI': X[2],
'H_SI': 0.07,
'H_II': 0.07,
'COB': 0.035,
'PHI_L': 12.5 / 1E3,
'PHI_E': 10.0 / 1E3,
'L': 20,
'L_PISTA': 150,
'FATOR_SEC': 'I',
'DELTA_ANC': 6 / 1E3,
'TEMPO_CONC': [2.00, 3.00, 28.0, 35.0, 45, 50 * 365],
'TEMPO_ACO': [3.00, 4.00, 29.0, 36.0, 46, 51 * 365],
'TEMP': 30,
'U': 40,
'PERDA_INICIAL': 8.00,
'PERDA_TEMPO': 17.00,
'E_SCP': 200E6,
'PHO_S': 78,
'F_PK': 1864210.526,
'F_YK': 1676573.427,
'LAMBA_SIG': 1,
'TIPO_FIO_CORD_BAR': 'COR',
'TIPO_PROT': 'PRE',
'TIPO_ACO': 'RB',
'PHO_C': 25,
'F_CK': 50 * 1E3,
'CIMENTO': 'CP5',
'AGREGADO': 'GRA',
'ABAT': 0.09,
'G_2K': 1.55 + 0.70,
'Q_1K': 1.5,
'PSI_1': 0.40,
'PSI_2': 0.30,
'GAMMA_F1': 1.30,
'GAMMA_F2': 1.40,
'GAMMA_S':1.15,
'ETA_1':1.2,
'ETA_2':1.0,
'E_PPROPORCAO': X[3]}
G, A_C, A_SCP = VIGA_VER.VERIFICACAO_VIGA(VIGA)
PESO = VIGA['L'] * A_C * VIGA['PHO_C']
OF = PESO
for I_CONT in range(len(G)):
OF += (max(0, G[I_CONT]) ** 2) * 1E6
return OF
```
## 4.0 Example
```
[RESULTS_REP, BEST_REP, AVERAGE_REP, WORST_REP, STATUS] = SA_ALGORITHM_0001(OF_FUNCTION, SETUP)
```
## 5.0 View results
```
STATUS
MELHOR = STATUS[0]
MELHOR
BEST_REP[MELHOR]
STATUS
BEST = BEST_REP[MELHOR]
DATASET = {'DATASET': BEST_REP,
'NUMBER OF REPETITIONS': 30,
'DATA TYPE': 'OF'}
PLOT_SETUP = {'NAME': 'WANDER',
'WIDTH': 0.40,
'HEIGHT': 0.20,
'X AXIS LABEL': 'OF',
'X AXIS SIZE': 20,
'Y AXIS SIZE': 20,
'AXISES COLOR': '#000000',
'LABELS SIZE': 16,
'LABELS COLOR': '#000000',
'CHART COLOR': '#FEB625',
'KDE': False,
'BINS': 20,
'DPI': 600,
'EXTENSION': '.svg'}
META_PLOT_004(DATASET, PLOT_SETUP)
BEST = BEST_REP[MELHOR]
PLOT_SETUP = {'NAME': 'WANDER-OF',
'WIDTH': 0.40,
'HEIGHT': 0.20,
'DPI': 600,
'EXTENSION': '.svg',
'COLOR OF': '#000000',
'MARKER OF': 's',
'COLOR FIT': '#000000',
'MARKER FIT': 's',
'MARKER SIZE': 4,
'LINE WIDTH': 2,
'LINE STYLE': '--',
'OF AXIS LABEL': '$W (kN) $',
'X AXIS LABEL': 'Iteration',
'LABELS SIZE': 12,
'LABELS COLOR': '#000000',
'X AXIS SIZE': 12,
'Y AXIS SIZE': 12,
'AXISES COLOR': '#000000',
'ON GRID?': True}
DATASET = {'X': BEST['NEOF'], 'OF': BEST['OF'], 'FIT': BEST['FIT']}
META_PLOT_001(DATASET, PLOT_SETUP)
X = BEST_REP[0]['X_POSITION'][100,:]
print(X)
VIGA = {'H_W': X[0],
'B_W': X[1],
'B_FS': 0.30,
'B_FI': 0.30,
'H_FS': X[2],
'H_FI': X[2],
'H_SI': 0.07,
'H_II': 0.07,
'COB': 0.035,
'PHI_L': 12.5 / 1E3,
'PHI_E': 10.0 / 1E3,
'L': 20,
'L_PISTA': 150,
'FATOR_SEC': 'I',
'DELTA_ANC': 6 / 1E3,
'TEMPO_CONC': [2.00, 3.00, 28.0, 35.0, 45, 50 * 365],
'TEMPO_ACO': [3.00, 4.00, 29.0, 36.0, 46, 51 * 365],
'TEMP': 30,
'U': 40,
'PERDA_INICIAL': 8.00,
'PERDA_TEMPO': 17.00,
'E_SCP': 200E6,
'PHO_S': 78,
'F_PK': 1864210.526,
'F_YK': 1676573.427,
'LAMBA_SIG': 1,
'TIPO_FIO_CORD_BAR': 'COR',
'TIPO_PROT': 'PRE',
'TIPO_ACO': 'RB',
'PHO_C': 25,
'F_CK': 50 * 1E3,
'CIMENTO': 'CP5',
'AGREGADO': 'GRA',
'ABAT': 0.09,
'G_2K': 1.55 + 0.70,
'Q_1K': 1.5,
'PSI_1': 0.40,
'PSI_2': 0.30,
'GAMMA_F1': 1.30,
'GAMMA_F2': 1.40,
'GAMMA_S':1.15,
'ETA_1':1.2,
'ETA_2':1.0,
'E_PPROPORCAO': X[3]}
G, A_C, A_SCP = VIGA_VER.VERIFICACAO_VIGA(VIGA)
print(G, A_C, A_SCP)
```
```
%matplotlib inline
```
# Net file
This is the Net file for the clique problem: it defines the state transition and output functions used by the model (two small feed-forward networks), together with the loss and the evaluation metric. A short usage sketch follows the class definition below.
```
import tensorflow as tf
import numpy as np
def weight_variable(shape, nm):
'''function to initialize weights'''
initial = tf.truncated_normal(shape, stddev=0.1)
tf.summary.histogram(nm, initial, collections=['always'])
return tf.Variable(initial, name=nm)
class Net:
'''class to define state and output network'''
def __init__(self, input_dim, state_dim, output_dim):
'''initialize weight and parameter'''
self.EPSILON = 0.00000001
self.input_dim = input_dim
self.state_dim = state_dim
self.output_dim = output_dim
self.state_input = self.input_dim - 1 + state_dim
#### TO BE SET FOR A SPECIFIC PROBLEM
self.state_l1 = 15
self.state_l2 = self.state_dim
self.output_l1 = 10
self.output_l2 = self.output_dim
# list of weights
self.weights = {'State_L1': weight_variable([self.state_input, self.state_l1], "WEIGHT_STATE_L1"),
                        'State_L2': weight_variable([self.state_l1, self.state_l2], "WEIGHT_STATE_L2"),
'Output_L1':weight_variable([self.state_l2,self.output_l1], "WEIGHT_OUTPUT_L1"),
'Output_L2': weight_variable([self.output_l1, self.output_l2], "WEIGHT_OUTPUT_L2")
}
# list of biases
self.biases = {'State_L1': weight_variable([self.state_l1],"BIAS_STATE_L1"),
'State_L2': weight_variable([self.state_l2], "BIAS_STATE_L2"),
'Output_L1':weight_variable([self.output_l1],"BIAS_OUTPUT_L1"),
'Output_L2': weight_variable([ self.output_l2], "BIAS_OUTPUT_L2")
}
def netSt(self, inp):
with tf.variable_scope('State_net'):
# method to define the architecture of the state network
layer1 = tf.nn.tanh(tf.add(tf.matmul(inp,self.weights["State_L1"]),self.biases["State_L1"]))
layer2 = tf.nn.tanh(tf.add(tf.matmul(layer1, self.weights["State_L2"]), self.biases["State_L2"]))
return layer2
def netOut(self, inp):
# method to define the architecture of the output network
with tf.variable_scope('Out_net'):
layer1 = tf.nn.tanh(tf.add(tf.matmul(inp, self.weights["Output_L1"]), self.biases["Output_L1"]))
layer2 = tf.nn.softmax(tf.add(tf.matmul(layer1, self.weights["Output_L2"]), self.biases["Output_L2"]))
return layer2
def Loss(self, output, target, output_weight=None):
# method to define the loss function
#lo=tf.losses.softmax_cross_entropy(target,output)
output = tf.maximum(output, self.EPSILON, name="Avoiding_explosions") # to avoid explosions
xent = -tf.reduce_sum(target * tf.log(output), 1)
lo = tf.reduce_mean(xent)
return lo
def Metric(self, target, output, output_weight=None):
# method to define the evaluation metric
correct_prediction = tf.equal(tf.argmax(output, 1), tf.argmax(target, 1))
metric = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
return metric
```
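A minimal usage sketch of the class above, with purely hypothetical dimensions (it is not part of the original file, and assumes the TensorFlow 1.x API used throughout this notebook):
```
# Hypothetical dimensions, for illustration only
net = Net(input_dim=7, state_dim=5, output_dim=2)

# Input to the state network: (input_dim - 1) features concatenated with a state vector
x_state = tf.placeholder(tf.float32, shape=[None, 7 - 1 + 5])
new_state = net.netSt(x_state)                    # shape [None, 5]

# The output network maps a state to class probabilities
y_hat = net.netOut(new_state)                     # shape [None, 2]
y_true = tf.placeholder(tf.float32, shape=[None, 2])

loss = net.Loss(y_hat, y_true)
accuracy = net.Metric(y_true, y_hat)
```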
# YOLO v3 Finetuning on AWS
This series of notebooks demonstrates how to finetune pretrained YOLO v3 (aka YOLO3) using MXNet on AWS.
**This notebook** shows how to deploy the YOLO3 model trained in the previous module to a SageMaker endpoint backed by a GPU instance.
**Following on**, the series of notebooks shows:
* How to use MXNet YOLO3 pretrained model
* How to use Deep SORT with MXNet YOLO3
* How to create Ground-Truth dataset from images the model mis-detected
* How to finetune the model using the created dataset
* How to load your finetuned model and deploy a SageMaker endpoint with it using a CPU instance
* How to load your finetuned model and deploy a SageMaker endpoint with it using a GPU instance (this notebook)
## Pre-requisites
This notebook is designed to be run in Amazon SageMaker. To run it (and understand what's going on), you'll need:
* Basic familiarity with Python, [MXNet](https://mxnet.apache.org/), [AWS S3](https://docs.aws.amazon.com/s3/index.html), [Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
* An **S3 bucket** created in the same region, with the SageMaker notebook's execution role granted access to it.
* Sufficient [SageMaker quota limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_sagemaker) set on your account to run GPU-accelerated spot training jobs.
## Cost and runtime
Depending on your configuration, this demo may consume resources outside of the free tier but should not generally be expensive because we'll be training on a small number of images. You might wish to review the following for your region:
* [Amazon SageMaker pricing](https://aws.amazon.com/sagemaker/pricing/)
The standard `ml.t2.medium` instance should be sufficient to run the notebooks.
We will use GPU-accelerated instance types for training and hyperparameter optimization, and use spot instances where appropriate to optimize these costs.
As noted in the step-by-step guidance, you should take particular care to delete any created SageMaker real-time prediction endpoints when finishing the demo.
# Step 0: Dependencies and configuration
As usual we'll start by loading libraries, defining configuration, and connecting to the AWS SDKs:
```
%load_ext autoreload
%autoreload 1
# Built-Ins:
import os
import json
from datetime import datetime
from glob import glob
from pprint import pprint
from matplotlib import pyplot as plt
from base64 import b64encode, b64decode
# External Dependencies:
import mxnet as mx
import boto3
import imageio
import sagemaker
import numpy as np
from sagemaker.mxnet import MXNet
from botocore.exceptions import ClientError
from gluoncv.utils import download, viz
```
## Step 1: Get best job informations
```
%store -r
session = boto3.session.Session()
region = session.region_name
s3 = session.resource('s3')
best_model_output_path, best_model_job_name, role_name
model_data_path = f'{best_model_output_path}/{best_model_job_name}/output/model.tar.gz'
print(model_data_path)
```
## Step 2: Create SageMaker Model
The SageMaker MXNet serving containers handle inference requests made through the standard SageMaker InvokeEndpoint API, so you do not have to build a Docker container yourself and upload it to Amazon Elastic Container Registry (ECR). All you have to do is implement interface functions such as `model_fn(model_dir)`, `input_fn(request_body, content_type)`, `predict_fn(input_object, model)` and `output_fn(prediction, content_type)`, or a single `transform_fn`. See the code example below.
https://sagemaker.readthedocs.io/en/stable/using_mxnet.html#serve-an-mxnet-model
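For orientation, here is a minimal sketch of what such an inference script could look like. This is not the actual `src/inference_gpu.py` from the repository: the parameter file name, the use of a single `transform_fn`, and the GluonCV model/transform calls are assumptions made for illustration only.
```
# Hypothetical sketch of a SageMaker MXNet inference script (GPU variant)
import base64, json
import mxnet as mx
import gluoncv as gcv

def model_fn(model_dir):
    """Load the finetuned YOLO3 network onto the GPU."""
    ctx = mx.gpu()  # the only line that would change in the CPU variant
    net = gcv.model_zoo.get_model('yolo3_darknet53_custom', classes=['person'],
                                  pretrained_base=False, ctx=ctx)
    net.load_parameters(f'{model_dir}/model.params', ctx=ctx)  # file name assumed
    return net

def transform_fn(net, request_body, content_type, accept):
    """Decode the JSON request, run detection, and return detections as JSON."""
    ctx = mx.gpu()
    request = json.loads(request_body)
    img = mx.image.imdecode(base64.b64decode(request['image']))
    x, _ = gcv.data.transforms.presets.yolo.transform_test(img, short=request['short'])
    cid, score, bbox = net(x.as_in_context(ctx))
    return json.dumps({
        'shape': x.shape,
        'cid': cid[0].asnumpy().tolist(),
        'score': score[0].asnumpy().tolist(),
        'bbox': bbox[0].asnumpy().tolist(),
    })
```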
```
!pygmentize src/inference_gpu.py
mxnet_model = sagemaker.mxnet.model.MXNetModel(
name='yolo3-hol-deploy-gpu',
model_data=model_data_path,
role=role_name,
entry_point='inference_gpu.py',
source_dir='src',
framework_version='1.6.0',
py_version='py3',
)
```
## Step 3: Deploy Model to GPU instance
For applications that need to be very responsive (e.g., a 500 ms response budget), you might consider using GPU instances. For GPU acceleration, specify a GPU instance type such as `'ml.p2.xlarge'`. The inference script is the same for `inference_cpu.py` and `inference_gpu.py` except for one line: `ctx = mx.cpu()` for CPU and `ctx = mx.gpu()` for GPU.
Note that it takes about **6-10 minutes** to create an endpoint. Please do not run real-time inference until the creation of the endpoint is complete. You can check whether the endpoint is created in the SageMaker dashboard, and wait for the status to become **InService**.
#### Additional Tip
GPU instances have good inference performance but can be costly. Elastic Inference attaches a right-sized amount of GPU-powered acceleration to a general-purpose CPU instance, so hosting an endpoint on a CPU instance with Elastic Inference can be much cheaper than using a dedicated GPU instance. Please refer to the link below for details.
https://docs.aws.amazon.com/en_kr/sagemaker/latest/dg/ei.html
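For example, a CPU-plus-Elastic-Inference deployment might look like the sketch below. The instance and accelerator sizes are illustrative only, and the inference script may also need adjusting for the Elastic Inference context.
```
# Sketch: host on a cheaper CPU instance with an Elastic Inference accelerator attached
predictor = mxnet_model.deploy(
    instance_type='ml.c5.large',
    initial_instance_count=1,
    accelerator_type='ml.eia2.medium',
)
```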
```
%%time
predictor = mxnet_model.deploy(
instance_type='ml.p2.xlarge', initial_instance_count=1,
#accelerator_type='ml.eia2.xlarge'
)
```
## Step 4: Invoke the SageMaker Endpoint
Now that we have finished creating the endpoint, we will perform object detection on the test image.
```
s3.Object(BUCKET_NAME, f'{BATCH_NAME}/{IMAGE_PREFIX}/7.jpg').download_file('./7.jpg')
with open('./7.jpg', 'rb') as fp:
bimage = fp.read()
s = b64encode(bimage).decode('utf-8')
%%time
res = predictor.predict({
'short': 416,
'image': s
})
print(res['shape'])
ax = viz.plot_bbox(mx.image.imresize(mx.image.imdecode(bimage), res['shape'][3], res['shape'][2]), mx.nd.array(res['bbox']), mx.nd.array(res['score']), mx.nd.array(res['cid']), class_names=['person'])
```
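When you are done experimenting, remember to delete the endpoint so it stops incurring charges (see the cost note at the top of this notebook). Assuming the `predictor` object created above:
```
# Tear down the real-time endpoint when finished
predictor.delete_endpoint()
```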
<a href="https://colab.research.google.com/github/seyrankhademi/introduction2AI/blob/main/linear_vs_mlp.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Computer Programming vs Machine Learning
This notebook was written by Dr. Seyran Khademi to familiarize students with the concept of machine learning and how it differs from computer programming. The code was developed as a Jupyter notebook and is compatible with the Google Colab platform.
---
## What is fundamentally different between machine learning and computer programming?
Let's look at an example project. Suppose that a university wants to automate the process of granting a scholarship to students. A computer has to decide whether an applicant is eligible for the scholarship based on some handcrafted features extracted by people at the university, including 1) grade point average (GPA), 2) quality of portfolio (QP), 3) age, and 4) whether the applicant has other loans or not. So for each applicant there is tabular data like the following:
| Features | Applicant
|------|------|
| GPA | a number between [0,10]|
| QP |a number between [0,10] |
| Age |an integer between [18,40]|
| Loan |1 or 0 for loan/no loan|
## Weighted-sum program
In the first attempt, we use a piece of code that computes a weighted sum of the given features as the final score for the applicant. In computer programming, the program (the rules) is set explicitly by a human. In our scholarship project, the rules are the weights for the features, i.e., $[w_1,w_2,w_3,w_4]$. Note that each weight is the importance of the corresponding feature in the final score. Suppose that the scholarship committee proposes the weights $[0.4, 0.3, 0.2, 0.1]$ respectively. The following cell is the code snippet that computes the final score of an applicant from the given weights.
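In other words, the program assigns each applicant the score

$$\text{Score} \;=\; 0.4\cdot\text{GPA} \;+\; 0.3\cdot\text{QP} \;+\; 0.2\cdot\text{Age} \;+\; 0.1\cdot\text{Loan}.$$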
```
# The weighted-sum function takes as input the feature values for the applicant
# and outputs the final score.
import numpy as np
def weighted_sum(GPA,QP,Age,Loan):
#check that the points for GPA and QP are in range between 0 and 10
x=GPA
y=QP
points = np.array([x,y])
    if (points < 0).any() or (points > 10).any():
        print("Error: The GPA and QP points must be between 0 and 10.")
#check that the age in range between 18 and 40
z=Age
if (z < 18) or (z > 40):
print("Note: Applicants younger than 18 and older than 40 are not eligible for the scholorship.")
    #check that the loan feature is specified as binary
    v=Loan
    if v not in (0, 1):
        print("Error: If the applicant currently has other loans enter 1, otherwise enter 0 for the Loan feature.")
#compute the weighed sum score
w1=0.4
w2=0.3
w3=0.2
w4=0.1
Score=w1*x+w2*y+w3*z+w4*v
print("Final score for the applicant is", + Score)
```
Let's see what is the score for Sara given the folowing records:
| Features | Sara
|------|------|
| GPA | 7.8|
| QP |6.5 |
| Age |26|
| Loan |0|
We call the function ```weighted_sum``` to compute the score ...
```
weighted_sum(7.8,6.5,26,0)
```
By running the code cell you found out the final score for Sara. If the scholarship is competitive, you would need to compute the scores for the other applicants and check whether Sara is, e.g., in the top 50%, but we stop here as the point has been made: the weights (the selection rule) are given to the computer by human experts who learned them empirically over years of doing this job!
## Machine Learning (ML)
The university has collected plenty of digital records of students who applied for the scholarship in the past, together with the outcome, where an applicant counts as successful if the student
1. finished the master's studies in less than two years, and
2. repaid the loan within 10 years.
Given the amount of data, the university decided to replace the averaging software with an ML model that can be *trained* on the available data to *learn* the selection rules.
---
In the following code cell, we generate some synthetic data for this task, as we don't really have the students' records. For visualization purposes, we only take two features per applicant, say GPA and QP.
```
# generate synthetic data with two features (GPA and QP) and two class labels (successful or not)
from sklearn.datasets import make_moons, make_circles, make_classification,make_gaussian_quantiles
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
# The random-state in the data generator is set to fix for reproducibility.
data,labels = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=2, class_sep=0.9, random_state=42)
# fit the features in the range [0 10]
scaler = MinMaxScaler(feature_range=(0, 10))
data_scaled = scaler.fit_transform(data)
# plot samples of data points with their labels
plt.scatter(data_scaled [:, 0], data_scaled [:, 1], marker='o', c=labels, s=100, edgecolor='k')
plt.xlabel('Quality of portfolio')
plt.ylabel('GPA')
plt.show()
```
So you should see a figure with two classes "Purple" and "Yellow". Can you guess which class represents the "successful" students?
Next we train a simple ML model on these data to classify "Purple" from "Yellow".
---
We need to split our data into the test and train sets. The test set is used for evaluation of the model and the training set is for the model to learn the best decision-making rule from.
```
from sklearn.model_selection import train_test_split
# normalizing data to get best performance
X = StandardScaler().fit_transform(data)
# splitting data to train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=.4, random_state=42)
from sklearn import metrics
# evaluation function: takes the test data and the classifier and computes the accuracy of the model
def evaluate(X_test,clf):
y_pred = clf.predict(X_test)
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
```
Our first model is a simple linear classifier, to be trained and evaluated on the training and test data respectively.
```
from sklearn import svm
clf = svm.SVC(kernel='linear')
clf.fit(X_train,y_train)
evaluate(X_test,clf)
```
Our simple ML model performs with $86\%$ accuracy. Is that acceptable for our application?
---
Let's take a closer look at the data and the decision boundary of our trained classifier.
```
# the function takes the training data, labels and the classifier and plots the decision boundary
from mlxtend.plotting import plot_decision_regions
import matplotlib.gridspec as gridspec
def plot_decision_boundary(X_train,y_train,clf):
gs = gridspec.GridSpec(2, 2)
    fig = plot_decision_regions(X_train, y_train.astype(int), clf=clf, legend=2)
plot_decision_boundary(X_train,y_train,clf)
```
As you can see, the linear support vector machine (SVM) classifier may not be flexible enough to separate our (normalized) data.
## Neural Network
The next classifier we try is a very simple neural network: a basic nonlinear model called a multi-layer perceptron (MLP).
The MLP is more flexible than our linear SVM.
```
from sklearn.neural_network import MLPClassifier
clf = MLPClassifier(solver='adam', alpha=1e-3, hidden_layer_sizes=(5, 2), learning_rate_init=0.005, max_iter=1000, random_state=1)
clf.fit(X_train,y_train)
evaluate(X_test,clf)
```
Our accuracy improved to $90\%$ using the MLP. Let's look at the decision plot to get a glimpse of our neural network classifier.
```
plot_decision_boundary(X_train,y_train,clf)
```
It is clear that our MLP model, with more parameters (and thus more complexity), can adapt better to our synthetic dataset. In deep learning, the model is far more complex, with thousands or millions of parameters, so it can adapt to data with a complex nature. Nevertheless, training deep learning models requires considerable computational power and training time, so we skip it in this tutorial. For real data and proper deep learning models, you can visit https://github.com/seyrankhademi/ResNet_CIFAR10.
## Conclusion
Once we have complex data, we need complex models to analyze it. Our synthetic data is far simpler, in terms of dimensionality, than real data captured from our world. Still, even in this simplified setting we observed the effect of introducing a slightly more complex model in place of the linear one for the classification task. Note that deep learning follows the same rules of statistical learning developed over many years in machine learning; however, until recent years we did not have enough computational power, nor such huge amounts of data to process, to deploy deep learning.
# Hello!
In this repository we have several **datasets** for practicing:
1. Data cleaning
2. Data processing
3. Data visualization
You can work in this __Notebook__ if you like, but we recommend using one of our templates.
In the menu on the left, click the + sign and a new **Launcher** window will open. Click "From Templates".
![launcher-1](../imagenes/launcher-1.png)
A new window will appear where you can choose one of our templates to start practicing. Pick whichever seems best suited to what you want to practice!
If you are reading this on GitHub, click the badge below to interact with the repository on [mybinder.org](https://mybinder.org)
**Note**: It can take a minute to start, but it's worth it!
[![badge](https://img.shields.io/badge/interactuar-en%20binder-F5A252.svg?logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFkAAABZCAMAAABi1XidAAAB8lBMVEX///9XmsrmZYH1olJXmsr1olJXmsrmZYH1olJXmsr1olJXmsrmZYH1olL1olJXmsr1olJXmsrmZYH1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olJXmsrmZYH1olL1olL0nFf1olJXmsrmZYH1olJXmsq8dZb1olJXmsrmZYH1olJXmspXmspXmsr1olL1olJXmsrmZYH1olJXmsr1olL1olJXmsrmZYH1olL1olLeaIVXmsrmZYH1olL1olL1olJXmsrmZYH1olLna31Xmsr1olJXmsr1olJXmsrmZYH1olLqoVr1olJXmsr1olJXmsrmZYH1olL1olKkfaPobXvviGabgadXmsqThKuofKHmZ4Dobnr1olJXmsr1olJXmspXmsr1olJXmsrfZ4TuhWn1olL1olJXmsqBi7X1olJXmspZmslbmMhbmsdemsVfl8ZgmsNim8Jpk8F0m7R4m7F5nLB6jbh7jbiDirOEibOGnKaMhq+PnaCVg6qWg6qegKaff6WhnpKofKGtnomxeZy3noG6dZi+n3vCcpPDcpPGn3bLb4/Mb47UbIrVa4rYoGjdaIbeaIXhoWHmZYHobXvpcHjqdHXreHLroVrsfG/uhGnuh2bwj2Hxk17yl1vzmljzm1j0nlX1olL3AJXWAAAAbXRSTlMAEBAQHx8gICAuLjAwMDw9PUBAQEpQUFBXV1hgYGBkcHBwcXl8gICAgoiIkJCQlJicnJ2goKCmqK+wsLC4usDAwMjP0NDQ1NbW3Nzg4ODi5+3v8PDw8/T09PX29vb39/f5+fr7+/z8/Pz9/v7+zczCxgAABC5JREFUeAHN1ul3k0UUBvCb1CTVpmpaitAGSLSpSuKCLWpbTKNJFGlcSMAFF63iUmRccNG6gLbuxkXU66JAUef/9LSpmXnyLr3T5AO/rzl5zj137p136BISy44fKJXuGN/d19PUfYeO67Znqtf2KH33Id1psXoFdW30sPZ1sMvs2D060AHqws4FHeJojLZqnw53cmfvg+XR8mC0OEjuxrXEkX5ydeVJLVIlV0e10PXk5k7dYeHu7Cj1j+49uKg7uLU61tGLw1lq27ugQYlclHC4bgv7VQ+TAyj5Zc/UjsPvs1sd5cWryWObtvWT2EPa4rtnWW3JkpjggEpbOsPr7F7EyNewtpBIslA7p43HCsnwooXTEc3UmPmCNn5lrqTJxy6nRmcavGZVt/3Da2pD5NHvsOHJCrdc1G2r3DITpU7yic7w/7Rxnjc0kt5GC4djiv2Sz3Fb2iEZg41/ddsFDoyuYrIkmFehz0HR2thPgQqMyQYb2OtB0WxsZ3BeG3+wpRb1vzl2UYBog8FfGhttFKjtAclnZYrRo9ryG9uG/FZQU4AEg8ZE9LjGMzTmqKXPLnlWVnIlQQTvxJf8ip7VgjZjyVPrjw1te5otM7RmP7xm+sK2Gv9I8Gi++BRbEkR9EBw8zRUcKxwp73xkaLiqQb+kGduJTNHG72zcW9LoJgqQxpP3/Tj//c3yB0tqzaml05/+orHLksVO+95kX7/7qgJvnjlrfr2Ggsyx0eoy9uPzN5SPd86aXggOsEKW2Prz7du3VID3/tzs/sSRs2w7ovVHKtjrX2pd7ZMlTxAYfBAL9jiDwfLkq55Tm7ifhMlTGPyCAs7RFRhn47JnlcB9RM5T97ASuZXIcVNuUDIndpDbdsfrqsOppeXl5Y+XVKdjFCTh+zGaVuj0d9zy05PPK3QzBamxdwtTCrzyg/2Rvf2EstUjordGwa/kx9mSJLr8mLLtCW8HHGJc2R5hS219IiF6PnTusOqcMl57gm0Z8kanKMAQg0qSyuZfn7zItsbGyO9QlnxY0eCuD1XL2ys/MsrQhltE7Ug0uFOzufJFE2PxBo/YAx8XPPdDwWN0MrDRYIZF0mSMKCNHgaIVFoBbNoLJ7tEQDKxGF0kcLQimojCZopv0OkNOyWCCg9XMVAi7ARJzQdM2QUh0gmBozjc3Skg6dSBRqDGYSUOu66Zg+I2fNZs/M3/f/Grl/XnyF1Gw3VKCez0PN5IUfFLqvgUN4C0qNqYs5YhPL+aVZYDE4IpUk57oSFnJm4FyCqqOE0jhY2SMyLFoo56zyo6becOS5UVDdj7Vih0zp+tcMhwRpBeLyqtIjlJKAIZSbI8SGSF3k0pA3mR5tHuwPFoa7N7reoq2bqCsAk1HqCu5uvI1n6JuRXI+S1Mco54YmYTwcn6Aeic+kssXi8XpXC4V3t7/ADuTNKaQJdScAAAAAElFTkSuQmCC)](https://mybinder.org/v2/gh/chekos/datasets/master?urlpath=%2Flab/tree/notebooks/Indice.ipynb)
```
import numpy as np
import xarray as xr
import hvplot.xarray
import glob
```
# only case1
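The cell below post-processes each saved flow field: it computes the velocity magnitude and the vorticity on interior points by central differences,

$$\omega_{i,j} \;=\; \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \;\approx\; \frac{v_{i+1,j}-v_{i-1,j}}{2\,\Delta x} \;-\; \frac{u_{i,j+1}-u_{i,j-1}}{2\,\Delta y},$$

with $\Delta x = \Delta y = 1$ as set in the code, and then writes an HTML snapshot with a thinned velocity vector field and a vorticity map for each output file.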
```
fs = glob.glob('out0*.nc')
fs.sort()
for nn, f in enumerate(fs):
ds = xr.open_dataset(f)
U = ds['u'].values
V = ds['v'].values
Vmag = np.sqrt( U**2 + V**2) + 0.00000001
angle = (np.pi/2.) - np.arctan2(U/Vmag, V/Vmag)
dx, dy = 1.0, 1.0
omega = np.zeros_like(U)
for i in range(1, ds.dims['x']-1):
for j in range(1, ds.dims['y']-1):
omega[i,j] = (V[i+1,j] - V[i-1,j])/2.0/dx - (U[i,j+1] - U[i,j-1])/2.0/dy
dss = xr.Dataset( { 'Vmag': (['x','y'], Vmag), 'angle': (['x','y'], angle), 'omega': (['x','y'], omega) }
, coords={ 'x':ds['x'] , 'y':ds['y']}
)
    # vstep = 10  # decimation (thinning) interval for the vector plot
dssp = dss.isel(x=slice(0,dss.dims['x'],20), y=slice(0,dss.dims['y'],5))
figVec = dssp.hvplot.vectorfield(frame_width=1400, frame_height=100, x='x', y='y', angle='angle', mag='Vmag',hover=False).opts(magnitude='Vmag', scale=0.2, color='k')
f = (dss['Vmag'].hvplot(x='x', y='y', cmap='reds', clim=(0,4.2), title='velocity')*figVec \
+ dss['omega'].hvplot(x='x', y='y', cmap='bwr',frame_width=1400, frame_height=100, clim=(-0.5,0.5), title='vorticity')).cols(1)
fo = f.options(title=str(int(ds.total_second))+'sec')
out = hvplot.save(fo, 'out' + str(nn).zfill(8) + '.html')
del out
```
# case1 and case2
```
def makefig(f,ncase):
ds = xr.open_dataset(f)
U = ds['u'].values
V = ds['v'].values
Vmag = np.sqrt( U**2 + V**2) + 0.00000001
angle = (np.pi/2.) - np.arctan2(U/Vmag, V/Vmag)
dx, dy = 1.0, 1.0
omega = np.zeros_like(U)
for i in range(1, ds.dims['x']-1):
for j in range(1, ds.dims['y']-1):
omega[i,j] = (V[i+1,j] - V[i-1,j])/2.0/dx - (U[i,j+1] - U[i,j-1])/2.0/dy
dss = xr.Dataset( { 'Vmag': (['x','y'], Vmag), 'angle': (['x','y'], angle), 'omega': (['x','y'], omega) }
, coords={ 'x':ds['x'] , 'y':ds['y']}
)
    # vstep = 10  # thinning interval for the vector display
dssp = dss.isel(x=slice(0,dss.dims['x'],20), y=slice(0,dss.dims['y'],5))
figVec = dssp.hvplot.vectorfield(frame_width=1400, frame_height=100, x='x', y='y', angle='angle', mag='Vmag',hover=False).opts(magnitude='Vmag', scale=0.2, color='k')
# f = (dss['Vmag'].hvplot(x='x', y='y', cmap='reds', clim=(0,4.2), title='velocity')*figVec \
# + dss['omega'].hvplot(x='x', y='y', cmap='bwr',frame_width=1400, frame_height=100, clim=(-0.5,0.5), title='vorticity')).cols(1)
f1 = dss['Vmag'].hvplot(x='x', y='y', cmap='reds', clim=(0,4.2), title=ncase + ':velocity')*figVec
f2 = dss['omega'].hvplot(x='x', y='y', cmap='bwr',frame_width=1400, frame_height=100, clim=(-0.5,0.5), title=ncase +':vorticity')
return f1, f2, ds.total_second
# fs = glob.glob('friction_const/out0*.nc')
fs1 = glob.glob('case1_zeroEq/out0*.nc')
fs1.sort()
fs2 = glob.glob('out0*.nc')
fs2.sort()
nn = 0
for f1, f2 in zip(fs1, fs2):
f1vel, f1vor, time = makefig(f1, 'case1')
f2vel, f2vor, time = makefig(f2, 'case2')
f = (f1vel + f1vor + f2vel + f2vor).cols(1)
fo = f.options(title=str(int(time))+'sec')
out = hvplot.save(fo, 'out' + str(nn).zfill(8) + '.html')
nn += 1
del out
```
# Manning roughness map
```
fs1 = glob.glob('case1_zeroEq/out0*.nc')
f = fs1[0]
ds = xr.open_dataset(f)
cManning = np.full_like(ds['u'].values, 0.025, dtype=np.float64)
cManning[:,:30] = 0.1
ds["manning"] = xr.DataArray(cManning, coords=[('x',ds['x']),('y',ds['y'])])
fig = ds['manning'].hvplot(x='x', y='y', cmap='bwr',frame_width=1400, frame_height=100)
# , clim=(-0.5,0.5), title=ncase +':vorticity')
hvplot.save(fig, 'manning.png')
```
# Using `pymf6` Interactively
You can run a MODFLOW6 model interactively.
For example, in a Jupyter Notebook.
## Setup
First, change into the directory of your MODFLOW6 model:
```
%cd ../examples/ex02-tidal/
```
The directory `ex02-tidal` contains all files needed to run a MODFLOW6 model.
The model `ex02-tidal` is one of the example models that come with MODFLOW6.
Now, import the class `MF6` from the module `pymf6.threaded`:
```
from pymf6.threaded import MF6
```
and make an instance of this class:
```
mf6 = MF6()
```
## Meta Data
This instance offers meta data such as the names of the models:
```
mf6.simulation.model_names
```
or the time unit:
```
mf6.simulation.time_unit
```
as well as the associated time multiplier to convert to seconds:
```
mf6.simulation.time_multiplier
```
## Temporal Discretization (TDIS)
Data about stress periods and time steps are also available.
Get the number of stress periods:
```
mf6.simulation.TDIS.NPER
```
or the total simulation time (in days in this case):
```
mf6.simulation.TDIS.TOTALSIMTIME
mf6.simulation.TDIS.var_names
```
## Simulations
A MODFLOW6 model can contain multiple simulations.
To date all examples that come with MODFLOW6 only have one simulation.
The simulations are assembled in a list.
Get the first element of that list:
```
sim1 = mf6.simulation.solution_groups[0]
sim1
```
As you can see, this solution has one package and 68 variables.
You can list all package names.
There is only one package in our case:
```
sim1.package_names
```
All internal MODFLOW6 names are upper case, such as `IMSLINEAR`.
New names introduced by `pymf6` are lower case, with underscores for longer names such as `package_names`.
Now you can access this package via Python's attribute access:
```
sim1.IMSLINEAR
```
This is equivalent to the dictionary-like key access (called `__getitem__`):
```
sim1['IMSLINEAR']
```
This first approach has the advantage that you can use tab completion: after the dot, press the `<TAB>` key in the Notebook (this might be a different key in another tool) and you will get a list of all possible names.
After you start typing, the list narrows down to the names that start with the letters you typed.
The second approach can be useful if you know the name of the package or variable.
For example, `var_names` contains all variable names in a list:
```
len(sim1.var_names)
```
Let's display the first five names:
```
sim1.var_names[:5]
```
Now show the names **and** the values of this first five names:
```
for name in sim1.var_names[:5]:
print(name, sim1[name].value)
```
A variable itself displays a bit more verbose in the Notebook:
```
sim1.ID
```
The printed string also works outside the notebook:
```
print(sim1.ID)
```
A package also has variables.
Let's create a shorter name:
```
ims = sim1.IMSLINEAR
ims
```
and access the variable names:
```
len(ims.var_names)
ims.var_names[:5]
```
You can also keep the long name (remember to use the `<TAB>` key to get name suggestions):
```
sim1.IMSLINEAR.IPC.value
```
## Models
Typically, a solution is not so interesting.
More frequently, you might want to get more information about a model or even change model variable values at run time.
Again, there can be several models.
So far, MODFLOW6 supports only one model.
This might change in the future.
Therefore, `pymf6` already works with a list of models (with a length of one so far ;).
Take the first model from the models list of the first simulation:
```
model = mf6.simulation.models[0]
model
```
It has 17 packages and 43 variables.
Again, all names are contained in lists:
```
len(model.package_names)
len(model.var_names)
```
## Packages
Let's access something more interesting.
For example the Structured Discretization (DIS) package:
```
model.DIS
```
## Variables
Get the number of layers.
The full variable:
```
model.DIS.NLAY
```
Get only the number:
```
model.DIS.NLAY.value
```
This is the shape of the grid:
```
model.DIS.MSHAPE
```
Also available as a NumPy array:
```
model.DIS.MSHAPE.value
```
The bottom variable has more elements (number of elements = layer * row * column):
```
model.DIS.BOT
```
The `value` holds a NumPy array:
```
model.DIS.BOT.value
```
The bottom comes as a one-dimensional array.
But it represents a three-dimensional structure.
For all variables where this is applicable, `value_3d` gives a 3D view:
```
model.DIS.BOT.value_3d
```
You can also access elements for 3D-variables via the one-based `layer`, `row`, `column` index.
This gives the element in layer 1, row 1, and column 1:
```
model.DIS.BOT.get_value_by_lrc(1, 1, 1)
```
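For reference (this helper is mine, not part of `pymf6`), the one-based `(layer, row, column)` index maps onto the flat array as follows, assuming C-style row-major ordering with `MSHAPE = [nlay, nrow, ncol]`:
```python
def lrc_to_flat(layer, row, column, shape):
    """Zero-based flat index for a one-based (layer, row, column) index."""
    nlay, nrow, ncol = shape
    return (layer - 1) * nrow * ncol + (row - 1) * ncol + (column - 1)

# lrc_to_flat(1, 1, 1, model.DIS.MSHAPE.value) -> 0, i.e. BOT[0]
```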
## Setting Values
You can not only read the values of variables; it is also possible to set them.
Create a shorter name for the bottom data:
```
bot = model.DIS.BOT
bot
```
Change the first value:
```
bot[0] = 7
bot
```
Now, a short name for the 3D-view:
```
bot_3d = bot.value_3d
bot_3d
```
Changing the value of an element:
```
bot_3d[0, 0, 0] = 6.5
bot_3d
```
has the same effect:
```
bot
```
## Doing a Time Step
This makes MODFLOW6 calculate the next time step:
```
mf6.next_step()
```
Instead of calling this method again and again till the end, call `run_to_end()`:
```
# mf6.run_to_end()
```
This will kill the current kernel because Fortran calls `STOP`, which terminates the current process.
```
def binary_search(nums, target):
p, r = 0, len(nums) - 1
while p <= r:
m = (p + r) // 2
if nums[m] == target:
return True
        elif nums[m] < target:
p = m + 1
else:
r = m - 1
return False
```
# Search in Rotated Sorted Array
The pivot is the smallest element, the one place where the sorted order breaks. To decide on which side of the pivot A[m] lies, compare it with A[0] (note the comparison is against A[0], not A[-1]): if A[m] > A[0], then m is still in the left, larger run and the pivot must be to the right of m; otherwise the pivot is at m or to its left.
$$
\textbf{A[0]} < ... < A[m - 1] < \textbf{A[m]} < A[m + 1] < ... < A[-1]
$$
To find the pivot point, we use the test A[m] > A[0] inside the binary search:
```python
p, r = 0, len(A) - 1
while p + 1 < r and A[p] > A[r]:
m = (p + r) // 2
if A[m] > A[0]:
p = m + 1
else:
        r = m  # A[r] is usually less than A[m].
if A[p] > A[r]:
p = r # p is the pivot point (aka, the smallest element in the array)
```
[33. Search in Rotated Sorted Array](https://leetcode.com/problems/search-in-rotated-sorted-array/description/)
[153. Find Minimum in Rotated Sorted Array](https://leetcode.com/problems/find-minimum-in-rotated-sorted-array/description/)
```
def search(nums, target):
p, r = 0, len(nums) - 1
    while p + 1 < r and nums[p] > nums[r]:
m = (p + r) // 2
if nums[m] > nums[0]:
p = m + 1
else:
r = m
if nums[p] > nums[r]:
p = r
if nums[p] <= target <= nums[-1]:
return binary_search(nums[p:], target)
return binary_search(nums[:p], target)
```
## Duplicates are allowed
Comparing A[m] with A[0] won't be enough, since A[m] can be equal to A[0].
$$
\textbf{A[0]} = ... = A[m - 1] = \textbf{A[m]} < A[m + 1] < ... < A[-1]
$$
But we can still find the minimum by doing two more binary searches, in A[:m] and A[m + 1:]. Two binary searches are still binary search in structure, although with many duplicates the worst case degrades to O(n).
```python
p, r = 0, len(A) - 1
while p + 1 < r and A[p] >= A[r]:
m = (p + r) // 2
if A[0] == A[m]:
        p = min(search(A[:m]), search(A[m + 1:]))  # pivot is either in the left or the right half
break
elif A[m] > A[0]:
p = m + 1
else:
r = m
if A[p] > A[r]:
p = r
```
[81. Search in Rotated Sorted Array II](https://leetcode.com/problems/search-in-rotated-sorted-array-ii/description/)
[154. Find Minimum in Rotated Sorted Array II](https://leetcode.com/problems/find-minimum-in-rotated-sorted-array-ii/description/)
```
def search2(nums, target):
p, r = 0, len(nums) - 1
    while p + 1 < r and nums[p] >= nums[r]:
m = (p + r) // 2
if nums[m] == nums[0]:
return search2(nums[:m], target) or search2(nums[m + 1:], target) #Only difference
elif nums[m] > nums[0]:
p = m + 1
else:
r = m
if nums[p] > nums[r]:
p = r
if nums[p] <= target <= nums[-1]:
return binary_search(nums[p:], target)
return binary_search(nums[:p], target)
```
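A quick sanity check of both functions on small, hand-made examples (these test cases are mine, not part of the original notes):
```python
assert search([4, 5, 6, 7, 0, 1, 2], 0)
assert not search([4, 5, 6, 7, 0, 1, 2], 3)
assert search2([2, 5, 6, 0, 0, 1, 2], 0)
assert not search2([2, 5, 6, 0, 0, 1, 2], 3)
```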
# Data analysis of Zenodo zip content
This [Jupyter Notebook](https://jupyter.org/) explores the data retrieved by [data-gathering](../data-gathering) workflows.
It assumes the `../../data` directory has been populated by the [Snakemake](https://snakemake.readthedocs.io/en/stable/) workflow [zenodo-random-samples-zip-content](data-gathering/workflows/zenodo-random-samples-zip-content/). To regenerate `data` run `make` in the root directory of this repository.
```
!pwd
```
For convenience, the `data` symlink points to `../../data`.
```
!ls data
```
This notebook analyses the Zenodo records sampled using this random seed:
```
!sha512sum data/seed
```
## Zenodo metadata
The starting point of this analysis is the Zenodo metadata dump <https://doi.org/10.5281/zenodo.3531504>. This contains the metadata of 3.5 million Zenodo records in the [Zenodo REST API](https://developers.zenodo.org/)'s internal JSON format.
Each Zenodo record, for instance <https://zenodo.org/record/14614> consists of metadata <https://zenodo.org/api/records/14614> which links to one or more downloadable files like <https://zenodo.org/api/files/866253b6-e4f2-4a06-96fa-618ff76438e6/powertac_2014_04_qualifying_21-30.zip>.
Below we explore Zenodo record `14614` to highlight which part of the metadata we need to inspect.
```
import requests
rec = requests.get("https://zenodo.org/api/records/14614").json()
rec
rec["files"][0]["type"] # File extension
rec["files"][0]["links"]["self"] # Download link
rec["metadata"]["access_right"] # "open" means we are allowed to download the above
rec["links"]["doi"] # DOI for citing this Zenodo record
rec["metadata"]["resource_type"]["type"] # DateCite resource type; "software", "dataset", etc.
```
The preliminary workflow that produced the Zenodo dump retrieved the 3.5M JSON files and concatenated them into a single JSONseq file ([RFC7464](https://tools.ietf.org/html/rfc7464)) to be more easily processed with tools like [jq](https://stedolan.github.io/jq/).
As this particular analysis explores the content of deposited **ZIP archives**, an important step of the archive content workflow is to select only the Zenodo records that deposit `*.zip` files, by filtering on the metadata fields shown above: `rec["metadata"]["access_right"] == "open"` and `rec["files"][*]["type"] == "zip"`.
Before we explore this, let's have a quick look at what file extensions are most commonly deposited at Zenodo.
### Zenodo deposits by file extension
Below we use [jq](https://stedolan.github.io/jq/) on the compressed JSONseq to select all downloadable files from Zenodo, expressed as a TSV file.
```
!xzcat data/zenodo.org/record/3531504/files/zenodo-records-json-2019-09-16-filtered.jsonseq.xz |\
jq -r '. | select(.metadata.access_right == "open") | .metadata.resource_type.type as $rectype | . as $rec | ( .files[]? ) | [$rec.id, $rec.links.self, $rec.links.doi, .checksum, .links.self, .size, .type, .key, $rectype] | @tsv' |\
gzip > data/zenodo-records/files.tsv.gz
```
The table contains one line per download; note that some records have multiple downloads.
Parse the TSV file with the Python Data Analysis Library [pandas](https://pandas.pydata.org/):
```
import pandas as pd
header = ["record_id", "record_json", "record_doi", "file_checksum", "file_download", "file_size", "file_extension", "file_name", "record_type"]
files = pd.read_table("data/zenodo-records/files.tsv.gz", compression="gzip", header=None, names=header)
files
```
From this we can select the number of downloadable files per file extension, here the top 30:
```
extensions = files.groupby("file_extension")["file_download"].nunique().sort_values(ascending=False)
extensions.head(30).to_frame()
```
Note that some records contain multiple downloads, so if we instead count the number of records containing a particular file extension, the list changes somewhat:
```
extensions = files.groupby("file_extension")["record_id"].nunique().sort_values(ascending=False)
extensions.head(30).to_frame().rename(columns={"record_id": "records"})
```
Perhaps unsurprisingly, the document format `*.pdf` is highest in both cases, followed by several image formats like `*.jpg`, `*.png`, and `*.tif`.
Let's see how grouping by `record_type` affects the file extensions:
```
exts_by_record_type = files.groupby(["record_type","file_extension"])["record_id"] \
.nunique().sort_values(ascending=False).head(50)
exts_by_record_type.to_frame().rename(columns={"record_id": "records"})
```
As we might have suspected, `*.pdf` deposits of type `publication` are most common, as Zenodo is frequently used for depositing preprints.
In this research we are looking at archive-like deposits to inspect for structured metadata. It is clear that the large set of deposits of type `dataset` should be our primary concern, while keeping other types in mind; for instance, we notice `*.meta` on `publication` records.
A suspicion is that a large set of `*.zip` deposits of record type `software` are made by the [Zenodo-GitHub integration](https://guides.github.com/activities/citable-code/) for software source code archives, which we should treat separately, as any structured metadata there is probably related to compilation or software packaging. However, it is possible that some datasets are maintained in GitHub repositories and use this integration for automatic dataset DOI registration, although with a misleading record type.
Let's look at the file types used by records of type `dataset`:
```
files[files.record_type == "dataset"].groupby("file_extension")["record_id"] \
.nunique().sort_values(ascending=False).head(30).to_frame().rename(columns={"record_id":"records"})
```
We notice that the combination of `*.h5` and `*.hdf5` for the [Hierarchical Data Format](https://www.hdfgroup.org/) overtakes `*.zip` as the most popular file extension; this format can be considered a hybrid of _archive_, _structured_ and _semi-structured data_, as it supports multiple data entries and metadata, but the suspected typical use of the format is a dump of multi-dimensional integers and floating point numbers with no further metadata.
### Brief categorization of top 30 Zenodo Dataset file extensions
* Archive/combined: zip, hdf5, h5, tgz, tar
* Compressed: gz
* Structured data: json, xml
* Semi-structured data: xlsx, csv, xls
* Unstructured/proprietary: txt, dat, mat (matlab)
* Textual/document: pdf, docx
* Image: tif, jpg, png
* Source code: perl
* Save games for emulators: sav
* Geodata/maps: kml
* Molecular dynamics: gro (Gromacs)
* Log data?: out
* **TODO** (Unknown to author): tpr, nc4, xtc, nc, top
Setting aside HDF5 for later analysis, we find that archive-like formats are dominated by `*.zip` with 22,321 records, followed by 12,982 records for the combination `*.tgz`, `*.tar` and `*.gz` (which includes both `*.tar.gz` archives and single-file compressions like `*.txt.gz`).
The first analysis therefore examines these ZIP files for their file listings, to find common filenames, with the aim of repeating this for `tar`-based archives. As we see a large split between `dataset` and `software` records, these are kept separate, with a third category for `*.zip` files of any other record type.
Number of `*.zip` downloads per record type:
```
files[files.file_extension == "zip"].groupby("record_type")["file_download"].count().to_frame()
```
A concern with downloading all of Zenodo's ZIP files for inspection is that they vary considerably in size:
```
files[files.file_extension == "zip"]["file_size"].describe().to_frame()
total_download = files[files.file_extension == "zip"]["file_size"].sum() / 1024**4
total_download
files[files.file_extension == "zip"]["file_size"].count()
```
We see that 50% of the 125k ZIP files are 11 MiB or less, the largest 25% are 106 MiB or more, and the largest file is 184 GiB. The smallest 25% of ZIP files are less than 559 kiB and would fit on a floppy. This wide spread helps explain the large standard deviation of 1.8 GiB. The total download of all files is 25 TiB.
A binary logarithmic histogram (log2, 80 bins):
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
fig,ax = plt.subplots()
#ax.set(xscale="linear", yscale="linear")
filesizes = files[files.file_extension == "zip"]["file_size"]\
.transform(np.log2).replace([np.inf, -np.inf], 0)
sns.distplot(filesizes, bins=80, ax=ax)
filesizes.sort_values()
```
Notice the peculiar distribution with two peaks around 2^24 and 2^27 bytes. It is possible that these are caused by multiple uploads of very similar-sized deposits, e.g. multiple versions from automatic data release systems, or more likely by the overlay of different file size distributions for the different categories (dataset, software, publications). **TODO** Graph per category.
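A sketch for that per-category comparison (assuming the same `files` DataFrame and the imports above; the category list is taken from the record types seen earlier):
```python
zips = files[files.file_extension == "zip"]
fig, ax = plt.subplots()
for rectype in ["dataset", "software", "publication"]:
    sizes = zips[zips.record_type == rectype]["file_size"] \
        .transform(np.log2).replace([np.inf, -np.inf], 0)
    sns.distplot(sizes, bins=80, ax=ax, hist=False, label=rectype)
ax.legend()
```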
## Workflow: Listing ZIP content
The workflow `code/data-gathering/workflows/zenodo-random-samples-zip-content` performs the second download task of sampling `*.zip` files to list their contained filenames. It works with a sample size per category, so that the analysis can be performed without downloading all 25 TiB of ZIP files.
### Workflow overview
The executed Snakemake workflow consists of rules that can be visualized using [Graphviz](https://www.graphviz.org/):
```
!cd ../data-gathering/workflows/zenodo-random-samples-zip-content ; \
snakemake --rulegraph | dot -Tsvg > workflow.svg
from IPython.core.display import SVG
SVG(filename='../data-gathering/workflows/zenodo-random-samples-zip-content/workflow.svg')
```
The first step **zipfiles** uses [jq](https://stedolan.github.io/jq/) to create TSV files as shown above, where the file extension is `*.zip`:
```
zipfiles = pd.read_table("data/zenodo-records/zipfiles.tsv", header=None, names=header)
zipfiles
```
The **shuffled** step does a random shuffle of the rows, using the `seed` file as the random data source for reproducibility (a new seed will be recreated by the **seed** rule if missing).
```
shuffled = pd.read_table("data/zenodo-records/zipfiles-shuffled.tsv", header=None, names=header)
shuffled
```
The shuffled file is then split into `zipfiles-dataset.tsv`, `zipfiles-software.tsv` and `zipfiles-others.tsv` (?) by **splitzipfiles**.
```
!ls data/zenodo-records
```
The step **samples** then picks the configured number of `MAXSAMPLES` (2000) from each of the category TSV files, which are split into individual files per row using [GNU Coreutils split](https://www.gnu.org/software/coreutils/manual/html_node/split-invocation.html). Note that the filenames are generated alphabetically by `split`; as the category TSV files are pre-shuffled, this simply selects the first 2000 lines from each.
```
!ls data/dataset/sample | wc -l
!ls -C data/dataset/sample | tail -n50
!cat data/dataset/sample/zapf.tsv
```
The step **downloadzip** then for each sample downloads the zip file from the `file_download` column, and produces a corresponding `listing` showing the filenames and paths contained in the ZIP:
```
!cat data/dataset/listing/zapf.txt
```
The ZIP file is deleted after each download, but as multiple downloads can happen simultaneously and the largest files are over 100 GB, at least 0.5 TB should be free when executing.
## Common filenames
In this part of the analysis we'll concatenate the file listings to look for common filenames. The assumption is that if an archive contains a manifest or structured metadata file, it will have a fixed filename, or at least a known metadata extension.
```
! for cat in dataset software others ; do \
find data/$cat/listing/ -type f | xargs cat > data/$cat/listing.txt ; done
! wc -l data/*/listing.txt
```
#### Most common filenames in dataset ZIP archives
Ignoring paths (anything before last `/`), what are the 30 most common filenames?
```
!cat data/dataset/listing.txt | sed s,.*/,, | sort | uniq -c | sort | tail -n 30
```
Despite being marked as _dataset_, many of these indicate software source code (`Test.java`, `package.html`, `ApplicationTest.java`).
Several files seem to indicate genomics data (`seqs.csv`, `genes.csv`, `igkv.fasta`, `allele_freq.csv`).
Some indicate retrospective provenance or logs (`run`, `run.log`, `run.err`, `log.txt`, `replay.db`), and possibly prospective provenance (`sas_plan`).
#### Most common filenames in _software_ ZIP files?
Comparing with the above, let's check out the most common filenames within _software_ ZIP files:
```
!cat data/software/listing.txt | sed s,.*/,, | sort | uniq -c | sort | tail -n 30
```
The commonality of source code documentation files familiar from GitHub (`README.md`, `README`, `README.txt`, `.gitignore`, `LICENSE`, `license.html`, `ChangeLog`) and automated build configuration (`.travis.yml`) indicates that a majority of our _software_ samples are indeed from the automated GitHub-to-Zenodo integration. This could be verified by looking in the metadata's `links` for provenance links back to the GitHub repository (**todo**).
As expected for software source code we find configuration for build systems (`BuildFile.xml`, `Makefile`, `CMakeLists.txt`, `pom.xml`, `setup.py` and `Kconfig` for the Linux kernel), documentation (`index.rst`, `index.html`, `overview.html`), software distribution/packaging (`package.json`) and package/module organization (`__init__.py`, `package.json`)
**TODO**: Worth a closer look: `data.json`, `MANIFEST.MF` (Java JAR's [manifest file](https://docs.oracle.com/javase/tutorial/deployment/jar/manifestindex.html)), `__cts__.xml` ([Capitains textgroup metadata](http://capitains.org/pages/guidelines#directory-structure)?) and `screen-output`.
```
!grep 'data\.json' data/*/listing.txt | head -n 100
```
The regex `data\.json` was too permissive, but this highlights some new patterns to look for generally: `*_data.json` and `*metadata.json`.
```
!grep '/data\.json' data/*/listing.txt | head -n100
```
The majority of `data.json` occurrences are from the nested folders of the [demo/test data](https://github.com/sonjageorgievska/CClusTera/tree/master/data) of the data visualizer [CClusTera](https://github.com/sonjageorgievska/CClusTera).
```
!grep 'metadata\.json' data/*/listing.txt
```
#### What are the most common filenames of other ZIP files?
For completeness, here is a quick look at the filenames of ZIP files in records that are neither _dataset_ nor _software_:
```
!cat data/others/listing.txt | sed s,.*/,, | sort | uniq -c | sort | tail -n 30
```
**TODO**: Explore..
#### What are the most common extensions of files within _dataset_ ZIP files?
```
!cat data/dataset/listing.txt | sed 's,.*\.,,' | sort | uniq -c -i | sort | tail -n 300
```
**TODO**: analyze these file extensions further.
Worth an extra look: `*.rdf`, `*.xml` `*.hpp`, `*.dcm`, `*.mseed`, `*.x10`, `*.json` - what are the most common basenames for each of these?
While the extensions above are from _dataset_ records, we still see a large number of _software_ hits from `*.java` source code or `*.class` files from compiled Java bytecode; note again that these may come from a small number of records, because a Java program would have one of these files for every defined class.
Indeed, we find the 838624 `*.java` files are from just 71 sampled _dataset_ records, and the 145934 `*.class` files from 69 records.
```
!grep '\.java' data/dataset/listing/* | cut -f 1 -d : | uniq | wc -l
!grep '\.class' data/dataset/listing/* | cut -f 1 -d : | uniq | wc -l
!grep '\.bz2' data/dataset/listing/* | cut -f 1 -d : | uniq | wc -l
```
For comparison, the 165765 `*.bz2` compressed files come from 763 records. As files compressed with `gz` and `bz2` often have filenames like `foo.txt.bz2`, we will explore their intermediate extensions separately:
```
!grep '\.bz2' data/dataset/listing.txt| sed s/.bz2$// | \
sed 's,.*\.,,' | sort | uniq -c | sort | tail -n 30
```
The overwhelming majority of `*.bz2` files are `*.grib2.bz2` files, which seem to come from regular releases of <http://opendata.dwd.de> weather data from Deutscher Wetterdienst.
```
!grep grib2 data/dataset/listing/* | grep icon-.*H_SNOW | head -n50
```
The regular releases are all grouped under <https://doi.org/10.5281/zenodo.3239490>, and each of the 137 versions contains the same pattern of filenames with different datestamps internally, where each new release record contains **all** the same ZIP files (one per day of the year, e.g. https://zenodo.org/record/3509967 contains `20190527.zip` to `20191120.zip`), adding to the record any new ZIP files for the days since the previous release.
It is possible to detect these duplicate files by checking the `file_checksum` in the Zenodo metadata, even if they have different download URLs (Zenodo does not currently perform deduplication).
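A quick pandas sketch of that idea (again assuming the `files` DataFrame from earlier): group the downloads by checksum and keep the checksums that appear under more than one download URL.
```python
dupes = files.groupby("file_checksum")["file_download"].nunique()
dupes[dupes > 1].sort_values(ascending=False).head(10)
```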
In this case the Zenodo record itself acts as an incremental _research object_, and it is probably beneficial for users that the weather data for different dates are separate, as they would not need to re-download old data that has not changed. This would not have been the case if the weather center instead regularly released a single mega-zip which would then always appear with a "new" checksum.
Zenodo does not currently provide a "download all" mechanism for users, but download file names preserve the original filename, e.g. <https://zenodo.org/record/3509967/files/20190527.zip> and <https://zenodo.org/record/3509967/files/20191120.zip> for `20190527.zip` and `20191120.zip`.
Now let's consider the same subextensions of `*.gz` files, which is the more common single-file compression in Linux and UNIX.
```
!grep '\.gz' data/dataset/listing.txt| sed s/.gz$// | \
sed 's,.*\.,,' | sort | uniq -c | sort | tail -n 30
```
**TODO**: Explore these filetypes more.
`*.warc` are [WebIs archives](https://github.com/webis-de/webis-web-archiver) from a web crawl in the single record <https://doi.org/10.5281/zenodo.1002204> which have been sampled twice for two different contained downloads.
```
!grep warc.gz data/dataset/listing/* | cut -d : -f 1 | uniq
!cat `grep warc.gz data/dataset/listing/* | cut -d : -f 1 | uniq | sed s/listing/sample/ | sed s/.txt$/.tsv/`
```
## Investigating Linked Data files
From our sample we find that 193 ZIP files have one or more `*.rdf` or `*.ttl` file, indicating the Linked Data formats [RDF/XML](https://www.w3.org/TR/rdf-syntax-grammar/) and [Turtle](https://www.w3.org/TR/turtle/).
As these are structured data files with namespaces, a relevant question is to investigate which namespaces and properties are used the most across these, to see if any of those are used for self-describing the package or its content.
As the original workflow did not store the sampled ZIP files, we select again only those ZIPs that contain filenames with those extensions:
```
! mkdir rdf
! egrep -ri '(ttl|rdf$)' */listing | \
cut -d ":" -f 1 | sort | uniq | \
sed s/listing/sample/ | sed s/txt$/tsv/ | \
xargs awk '{print $5}' > rdf/urls.txt
```
We'll download each of them using `wget --mirror` so the local filepath corresponds to the URL, e.g. `zenodo.org/api/files/232472d7-a5b9-4d2a-8ff2-ea146b52e703/jhpoelen/eol-globi-data-v0.8.12.zip`
**TODO**: Tidy up to reuse sample-id instead of `mktemp`. Avoid extracting non-RDF files.
```
! cat rdf/urls.txt | xargs wget --mirror --directory-prefix=rdf
! cd rdf; for f in `find . -name '*zip'` ; do DIR=`mktemp -d --tmpdir=.` ; pushd $DIR ; unzip ../$f ; popd; done
```
Next we look again for the `*.rdf` files and parse them with [Apache Jena riot](https://jena.apache.org/documentation/io/#command-line-tools) to get a single line-based [N-Quads](https://www.w3.org/TR/n-quads/) RDF graph.
```
! find rdf -name '*rdf' | xargs docker run -v `pwd`/rdf:/rdf stain/jena riot > rdf/riot.n3
```
While we could do complex queries of this unified graph using [SPARQL](https://www.w3.org/TR/sparql11-overview/), for finding the properties used we can simply use `awk` because of the line-based nature of NQuads.
```
! cat rdf/riot.n3 | awk '{print $2}' | sort | uniq -c
```
```
...
355 <http://www.w3.org/ns/prov#wasAttributedTo>
355 <http://www.w3.org/ns/prov#wasDerivedFrom>
389 <http://purl.org/ontology/bibo/issue>
392 <http://purl.org/dc/terms/subject>
410 <http://purl.org/ontology/bibo/volume>
451 <http://purl.org/dc/terms/description>
518 <http://www.w3.org/2000/01/rdf-schema#isDefinedBy>
578 <http://purl.org/ontology/bibo/issn>
579 <http://www.w3.org/2000/01/rdf-schema#label>
965 <http://xmlns.com/foaf/0.1/accountName>
987 <http://xmlns.com/foaf/0.1/account>
1415 <http://purl.org/dc/terms/publisher>
4702 <http://purl.org/ontology/bibo/doi>
10825 <http://www.w3.org/2011/content#characterEncoding>
11224 <http://www.w3.org/2011/content#chars>
11240 <http://purl.org/ontology/bibo/pmid>
12333 <http://purl.org/ontology/bibo/pageEnd>
12696 <http://purl.org/ontology/bibo/pageStart>
13148 <http://purl.org/ontology/bibo/authorList>
13548 <http://purl.org/dc/terms/issued>
13836 <http://www.w3.org/2000/01/rdf-schema#comment>
15827 <http://www.w3.org/2002/07/owl#sameAs>
16297 <http://purl.org/dc/terms/identifier>
25431 <http://purl.org/dc/terms/title>
31484 <http://purl.org/ontology/bibo/citedBy>
31488 <http://purl.org/ontology/bibo/cites>
43850 <http://purl.org/dc/terms/hasPart>
43850 <http://purl.org/dc/terms/isPartOf>
60585 <http://www.w3.org/2000/01/rdf-schema#member>
60894 <http://xmlns.com/foaf/0.1/familyName>
60894 <http://xmlns.com/foaf/0.1/givenName>
60928 <http://xmlns.com/foaf/0.1/name>
60952 <http://xmlns.com/foaf/0.1/publications>
304714 <http://purl.org/ao/core/annotatesResource>
304714 <http://purl.org/ao/core/body>
304714 <http://purl.org/swan/pav/provenance/createdBy>
304714 <http://purl.org/swan/pav/provenance/createdOn>
304714 <http://rdfs.org/sioc/ns#num_items>
515063 <http://www.w3.org/2000/01/rdf-schema#seeAlso>
613578 <http://purl.org/ao/core/hasTopic>
818815 <http://purl.org/ao/selectors/end>
818815 <http://purl.org/ao/selectors/init>
881753 <http://purl.org/ao/core/onResource>
881753 <http://www.w3.org/2000/01/rdf-schema#resource>
881809 <http://purl.org/ao/core/context>
1654377 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
```
From here we see several properties that may indicate research-object-like descriptions. In particular, we see a generous use of the Annotation Ontology (<https://doi.org/10.1186%2F2041-1480-2-S2-S4>).
## Data Split
```
import numpy as np
from matplotlib import pyplot as plt
import math
```
### Read the data
```
import pandas as pd
df_data = pd.read_csv('../data/2d_classification.csv')
data = df_data[['x','y']].values
label = df_data['label'].values
```
## Dividing the data into Train and Test data
- If we use the complete dataset for training, we cannot tell how well the model generalizes to unseen data
- A small portion of the data should be held out so the model can be tested on it
- The __holdout set__ tells you how good your model is on unknown data (the __test data__)
<img src='img/train_test_split.png'>
- sklearn has an inbuilt method - __[train_test_split()](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)__
- parameters
- data -> data to be split
- test_size -> between 0 & 1, denoting the ratio to be allocated for test data
- shuffle -> randomly shuffle before splitting the data
- stratify -> Try to maintain the class balance in both training and test data sets
<font color='red'> <h3> Rule of thumb: If you do anything to alter your model, do not touch the test data </h3>
<li>
Do not use the test data in any form
</li><li>
Do not use it for hyper-parameter tuning
</li>
</font>
## Train-Test Splits
```
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.20,
random_state=0, stratify=None)
```
__Train-test ratio of the split data__
```
from collections import Counter  # Counter is needed for the label counts below

print('Complete data:', data.shape)
print('Labels distribution', Counter(label))
print('Train Data:', data_train.shape, 'Test Data:', data_test.shape)
```
__How the data is split__
```
from collections import Counter
print('Training Data split', Counter(label_train))
print('Testing Data split', Counter(label_test))
```
### Stratified split
- Used to make sure that the class imbalance doesn't affect the distribution of class labels in the train-test data split
<img src='img/stratified_split.png'>
```
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.20,
random_state=0, stratify=label)
print('Complete data:', data.shape)
print('Labels distribution', Counter(label))
print('Train Data:', data_train.shape, 'Test Data:', data_test.shape)
from collections import Counter
print('Training Data split', Counter(label_train))
print('Testing Data split', Counter(label_test))
```
## Improving the model with multiple iterations
<img src='img/train_test_validation.png'>
## 1. Random Sampling
- Sample the data multiple times with a certain ratio (example: 80% of the data is randomly sampled)
- Repeat the process for n iterations, with replacement
- Since it is random sampling, there is no guarantee that each data point was tested (see the sketch below)
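A minimal scikit-learn sketch of repeated random subsampling; the estimator here is a placeholder of my choosing, not something prescribed by these notes:
```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

ss = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
scores = cross_val_score(LogisticRegression(), data, label, cv=ss)
print(scores.mean())
```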
## 2. Cross-Validation
### a. k-fold cross validation
- Divide the dataset into ___k___ subsets
- Select a subset, use it as the test data and the remaining subsets as the train data
- Repeat this for k iterations, so that every subset is used as test data once (see the sketch below the figures)
<img src='img/train_to_subsets.png'>
<img src='img/k_fold_cv.png'>
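A minimal k-fold sketch with scikit-learn, using the same `data` and `label` arrays as above (the estimator is again a placeholder):
```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(), data, label, cv=kf)
print(scores, scores.mean())
```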
### b. Leave-one-out CV
- Set the number of folds to the number of training instances
- Each iteration, train on the complete dataset except one
- No random subsampling
- Computationally expensive - n-iterations
| github_jupyter |
import numpy as np
from matplotlib import pyplot as plt
import math
import pandas as pd
df_data = pd.read_csv('../data/2d_classification.csv')
data = df_data[['x','y']].values
label = df_data['label'].values
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.20,
random_state=0, stratify=None)
print('Complete data:', data.shape)
print('Labels distribution', Counter(label))
print('Train Data:', data_train.shape, 'Test Data:', data_test.shape)
from collections import Counter
print('Training Data split', Counter(label_train))
print('Testing Data split', Counter(label_test))
from sklearn.model_selection import train_test_split
data_train, data_test, label_train, label_test = train_test_split(data, label, test_size=0.20,
random_state=0, stratify=label)
print('Complete data:', data.shape)
print('Labels distribution', Counter(label))
print('Train Data:', data_train.shape, 'Test Data:', data_test.shape)
from collections import Counter
print('Training Data split', Counter(label_train))
print('Testing Data split', Counter(label_test)) | 0.372962 | 0.963057 |
<img src="https://maltem.com/wp-content/uploads/2020/04/LOGO_MALTEM.png" style="float: left; margin: 20px; height: 55px">
<br>
<br>
<br>
<br>
# Random Forests and ExtraTrees
_Authors: Matt Brems (DC), Riley Dallas (AUS)_
---
## Random Forests
---
With bagged decision trees, we generate many different trees on pretty similar data. These trees are **strongly correlated** with one another. Because the trees are correlated, averaging them will still leave a high variance. Looking at the variance of the average of two random variables $T_1$ and $T_2$:
$$
\begin{eqnarray*}
Var\left(\frac{T_1+T_2}{2}\right) &=& \frac{1}{4}\left[Var(T_1) + Var(T_2) + 2Cov(T_1,T_2)\right]
\end{eqnarray*}
$$
If $T_1$ and $T_2$ are highly correlated, then the variance will be about as high as we'd see with individual decision trees. By "de-correlating" our trees from one another, we can drastically reduce the variance of our model.
That's the difference between bagged decision trees and random forests! We're going to do the same thing as before, but we're going to de-correlate our trees. This will reduce our variance (at the expense of a small increase in bias) and thus should greatly improve the overall performance of the final model.
So how do we "de-correlate" our trees?
Random forests differ from bagging decision trees in only one way: they use a modified tree learning algorithm that selects, at each split in the learning process, a **random subset of the features**. This process is sometimes called the *random subspace method*.
The reason for doing this is the correlation of the trees in an ordinary bootstrap sample: if one or a few features are very strong predictors for the response variable (target output), these features will be used in many/all of the bagged decision trees, causing them to become correlated. By selecting a random subset of features at each split, we counter this correlation between base trees, strengthening the overall model.
For a problem with $p$ features, it is typical to use:
- $\sqrt{p}$ (rounded down) features in each split for a classification problem.
- $p/3$ (rounded down) with a minimum node size of 5 as the default for a regression problem.
While this is a guideline, Hastie and Tibshirani (authors of Introduction to Statistical Learning and Elements of Statistical Learning) have suggested this as a good rule in the absence of some rationale to do something different.
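For example, with the $p = 9$ features used later in this notebook, a classification forest would consider $\lfloor\sqrt{9}\rfloor = 3$ features at each split, and a regression forest would consider $\lfloor 9/3 \rfloor = 3$.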
Random forests, a step beyond bagged decision trees, are **very widely used** classifiers and regressors. They are relatively simple to use because they require very few parameters to set and they perform pretty well.
- It is quite common for interviewers to ask how a random forest is constructed or how it is superior to a single decision tree.
---
## Extremely Randomized Trees (ExtraTrees)
Adding another step of randomization (and thus de-correlation) yields extremely randomized trees, or _ExtraTrees_. Like Random Forests, these are trained using the random subspace method (sampling of features). However, they are trained on the entire dataset instead of bootstrapped samples. A layer of randomness is introduced in the way the nodes are split. Instead of computing the locally optimal feature/split combination (based on, e.g., information gain or the Gini impurity) for each feature under consideration, a random value is selected for the split. This value is selected from the feature's empirical range.
This further reduces the variance, but causes an increase in bias. If you're considering using ExtraTrees, you might consider this to be a hyperparameter you can tune. Build an ExtraTrees model and a Random Forest model, then compare their performance!
That's exactly what we'll do below.
## Import libraries
---
We'll need the following libraries for today's lecture:
- `pandas`
- `numpy`
- `GridSearchCV`, `train_test_split` and `cross_val_score` from `sklearn`'s `model_selection` module
- `RandomForestClassifier` and `ExtraTreesClassifier` from `sklearn`'s `ensemble` module
```
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score
```
## Load Data
---
Load `train.csv` and `test.csv` from Kaggle into `DataFrames`.
```
train = pd.read_csv('datasets/train.csv')
train.shape
test = pd.read_csv('datasets/test.csv')
```
## Data Cleaning: Drop the two rows with missing `Embarked` values from train
---
```
train = train[train['Embarked'].notnull()]
train.shape
```
## Data Cleaning: `Fare`
---
The test set has one row with a missing value for `Fare`. Fill it with the average `Fare` of everyone from the same `Pclass`. **Use the training set to calculate the average!**
```
mean_fare_3 = train[train['Pclass']==3]['Fare'].mean()  # the passenger with the missing Fare is in Pclass 3
mean_fare_3
test['Fare'] = test['Fare'].fillna(mean_fare_3)
test.isnull().sum()
```
## Data Cleaning: `Age`
---
Let's simply impute all missing ages to be **999**.
**NOTE**: This is not a best practice. However,
1. Since we haven't really covered imputation in depth
2. And the proper way would take too long to implement (thus detracting from today's lecture)
3. And since we're ensembling with Decision Trees
We'll do it this way as a matter of convenience.
```
train['Age'] = train['Age'].fillna(999)
test['Age'] = test['Age'].fillna(999)
```
## Feature Engineering: `Cabin`
---
Since there are so many missing values for `Cabin`, let's binarize that column as follows:
- 1 if there originally was a value for `Cabin`
- 0 if it was null
**Do this for both `train` and `test`**
```
train['Cabin'] = train['Cabin'].notnull().astype(int)
test['Cabin'] = test['Cabin'].notnull().astype(int)
```
## Feature Engineering: Dummies
---
Dummy the `Sex` and `Embarked` columns. Be sure to set `drop_first=True`.
```
train = pd.get_dummies(train,columns = ['Sex','Embarked'],drop_first = True)
test = pd.get_dummies(test,columns = ['Sex','Embarked'],drop_first = True)
```
## Model Prep: Create `X` and `y` variables
---
Our features will be:
```python
features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'Cabin', 'Sex_male', 'Embarked_Q', 'Embarked_S']
```
And our target will be `Survived`
```
features = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare', 'Cabin', 'Sex_male', 'Embarked_Q', 'Embarked_S']
X = train[features]
y = train['Survived']
```
## Challenge: What is our baseline accuracy?
---
The baseline accuracy is the percentage of the majority class, regardless of whether it is 1 or 0. It serves as the benchmark for our model to beat.
```
y.value_counts(normalize = True)
```
## Train/Test Split
---
I know it can be confusing having an `X_test` from our training data vs a test set from Kaggle. If you want, you can use `X_val`/`y_val` for what we normally call `X_test`/`y_test`.
```
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state = 42, stratify = y)
```
## Model instantiation
---
Create an instance of `RandomForestClassifier` and `ExtraTreesClassifier`.
```
rg = RandomForestClassifier()
et = ExtraTreesClassifier()
```
## Model Evaluation
---
Which one has a higher `cross_val_score`?
```
cross_val_score(rg, X_train,y_train, cv = 5).mean()
cross_val_score(et, X_train,y_train, cv = 5).mean()
```
## Grid Search
---
They're both pretty close performance-wise. We could Grid Search over both, but for the sake of time we'll go with `RandomForestClassifier`.
```
rg_params = {
'n_estimators': [100,150,200],
'max_depth': [None,1,2,3,4,5]
}
gs = GridSearchCV(rg, param_grid = rg_params, cv = 5)
gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
```
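As a side note (not part of the original lesson), the fitted forest exposes `feature_importances_`, which gives a quick view of which inputs drive the predictions:
```python
best_rf = gs.best_estimator_
importances = pd.Series(best_rf.feature_importances_, index=features)
print(importances.sort_values(ascending=False))
```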
## Kaggle Submission
---
Now that we've evaluated our model, let's submit our predictions to Kaggle.
```
pred = gs.predict(test[features])
test['Survived'] = pred
test[['PassengerId', 'Survived']].to_csv('submission.csv', index=False)
```
# "Will the client subscribe?"
> "An Example of Applied Machine Learning"
- toc: true
- branch: master
- badges: true
- comments: true
- categories: [machine_learning, jupyter, ai]
- image: images/confusion.png
- hide: false
- search_exclude: true
- metadata_key1: metadata_value1
- metadata_key2: metadata_value2
# Introduction
Hello, everyone! Today I'm going to perform data analysis and prediction on a dataset related to a bank's marketing campaigns.
The dataset is hosted on the UCI Machine Learning Repository and you can find it [here](http://archive.ics.uci.edu/ml/datasets/Bank+Marketing).
# Motivation
Direct marketing is a very important type of advertising used to prompt a specific action (in our case: subscriptions) from a group of consumers.
Major companies normally have access to huge lists of potential clients, so estimating the probability that each of them will subscribe allows the marketing effort to be focused where it pays off, increasing revenue while saving resources such as marketers' time and money.
My goal in this post is to create an ML model that is able to predict whether a given client will subscribe or not.
# Exploring the data
The data is related to direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required in order to assess whether the product (a bank term deposit) would be subscribed ('yes') or not ('no').
Luckily enough, the data is already labeled for our case so let's start with exploring it using Pandas and compute some quick statistics.
```
import pandas as pd
banks_data = pd.read_csv('bank-full.csv', delimiter=';') # By default, the delimiter is ',' but this csv file uses ';' instead.
banks_data
banks_data.describe()
```
## Overview Analysis
Each entry in this dataset has the following attributes. We have a mix of numerical and categorical features.
### Input variables:
1. age (numeric)
2. job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')
3. marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)
4. education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')
5. default: has credit in default? (categorical: 'no','yes','unknown')
6. housing: has housing loan? (categorical: 'no','yes','unknown')
7. loan: has personal loan? (categorical: 'no','yes','unknown')
### Related with the last contact of the current campaign
8. contact: contact communication type (categorical: 'cellular','telephone')
9. month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')
10. day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')
11. duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.
### Other attributes
12. campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
13. pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
14. previous: number of contacts performed before this campaign and for this client (numeric)
15. poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')
## Quick observations
1. The duration (attribute #11) is to be discarded in order to have a realistic predictive model.
2. The 'pdays' attribute has a min of -1 and a max of 871, so the description above is inaccurate (this file uses -1 instead of 999), and this will distort the prediction: -1 is closer to 0 than to 871, which makes the model assume that an entry with -1 was contacted recently. We need to change -1 to 999.
3. I will encode the categorical features and normalize the numerical features.
4. The contact attribute is to be discarded as it's no longer relevant and is actually 33% unknown values.
# Data Preparation
In order to perform data analysis, I will leave normalization for a later step and for now just drop the 'duration' and 'contact' columns and fix the values in 'pdays'.
## Dropping duration and contact columns
```
banks_data.drop(['duration'], inplace=True, axis=1)
banks_data.drop(['contact'], inplace=True, axis=1)
```
## Modifying 'pdays'
```
banks_data.loc[(banks_data['pdays'] == -1),'pdays'] = 999
banks_data
```
# Data Analysis
Seaborn is one of the widely used libraries to perform data visualization. It comes with a lot of helpful functionalities and gives really nice graphics.
## Importing additional libraries
```
import matplotlib.pyplot as plt
import numpy as np
import warnings; warnings.simplefilter('ignore')
import seaborn as sns
from scipy import stats, integrate
%matplotlib inline
```
## Balance, age, jobs and y?
The goal of this section is to check how much the balance, the age and the job matter in the client's decision.
### Age distribution
```
sns.distplot(banks_data['age'], kde=False, fit=stats.gamma);
```
### Joint plot (balance and age)
```
sns.jointplot(x="age", y="balance", data=banks_data, kind="reg");
```
### Age and Subscription
```
sns.boxplot(x="y", y="age", data=banks_data);
```
### Balance and Subscription
```
sns.violinplot(x="y", y="balance", data=banks_data);
```
### Job and Subscription
```
sns.factorplot(x="y", y="age",col="job", data=banks_data, kind="box", size=4, aspect=.5);
```
### A few conclusions
1. Younger managers are more likely to subscribe.
2. Older retired people are more likely to subscribe.
3. Younger self-employed are more likely to subscribe.
4. Older housemaids are more likely to subscribe.
5. Younger students are more likely to subscribe.
6. People with more balance are more likely to subscribe.
7. In general, older people are more likely to subscribe. Although this depends on the job.
8. People with no credit are more likely to subscribe.
# Correlation Heat Map
## What is correlation?
The term "correlation" refers to a mutual relationship or association between quantities. In almost any business, it is useful to express one quantity in terms of its relationship with others. For example, the sales of a given product can increase if the company spends more money on advertisements. Now in order to deduce such relationships, I will build a heatmap of the correlation among all the vectors in the dataset.
I will use Pearson's method as it is the most popular.
Seaborn gives us convenient heatmaps to visualize the correlation.
The formula that is used is very simple:

$$ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} $$

where $n$ is the sample size, $x_i$ and $y_i$ are the individual samples, and $\bar{x}$, $\bar{y}$ are the sample means.
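As a quick illustration of the formula (purely a sketch with made-up numbers, not the bank columns), the coefficient can be computed by hand with NumPy and checked against `np.corrcoef`:
```
import numpy as np

a = np.array([25, 32, 47, 51, 62], dtype=float)        # hypothetical ages
b = np.array([300, 450, 800, 950, 1200], dtype=float)  # hypothetical balances

# Pearson correlation computed directly from the formula
r_manual = np.sum((a - a.mean()) * (b - b.mean())) / np.sqrt(
    np.sum((a - a.mean()) ** 2) * np.sum((b - b.mean()) ** 2))
print(r_manual, np.corrcoef(a, b)[0, 1])  # both values should match
```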
```
correlation = banks_data.corr(method='pearson')
plt.figure(figsize=(25,10))
sns.heatmap(correlation, vmax=1, square=True, annot=True )
plt.show()
```
## A few conclusions
Before anything, please note that this matrix is symmetric and the diagonals are all 1 because it's the correlation between the vector and itself (not to be confused with autocorrelation which is used in signal processing).
1. There is a strong positive correlation between the age and the balance which makes sense.
2. A strong negative correlation between the number of days that passed by after the client was last contacted from a previous campaign and the number of contacts before this campaign.
3. There is an obvious correlation among the campaign, pdays and previous vectors.
# Data Preparation
## Removing unknown values
This specific dataset doesn't have NaN values. However, it has 'unknown' values, which play the same role but need to be dealt with differently.
After dropping 'contact' earlier, two columns still contain unknown values:
1. Job
2. Education
What I'm going to do is check the percentage of each class (yes or no) having unknown values in either the job or the education field (or both).
```
no = banks_data.loc[banks_data['y'] == 'no']
yes = banks_data.loc[banks_data['y'] == 'yes']
unknown_no = banks_data.loc[((banks_data['job'] == 'unknown')|(banks_data['education'] == 'unknown'))&(banks_data['y'] == 'no')]
unknown_yes = banks_data.loc[((banks_data['job'] == 'unknown')|(banks_data['education'] == 'unknown'))&(banks_data['y'] == 'yes')]
print('The percentage of unknown values in class no: ', float(unknown_no.count()[0]/float(no.count()[0]))*100)
print('The percentage of unknown values in class yes: ', float(unknown_yes.count()[0]/float(yes.count()[0]))*100)
```
Since the percentage is roughly the same in both classes (about 5%), the simplest option is to drop these rows so that they do not bias the model and its predictions.
```
banks_data = banks_data[banks_data['education'] != 'unknown']
banks_data = banks_data[banks_data['job'] != 'unknown']
banks_data
```
## Encoding categorical variables
Since classification algorithms (RF for example) take numerical values as input, we need to encode the categorical columns. The following columns need to be encoded:
1. Marital
2. Job
3. Education
4. Default
5. Housing
6. Loan
7. y
This can be done using scikit-learn.
```
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
# Label encoder
banks_data['marital'] = encoder.fit_transform(banks_data['marital'])
banks_data['job'] = encoder.fit_transform(banks_data['job'])
banks_data['education'] = encoder.fit_transform(banks_data['education'])
banks_data['default'] = encoder.fit_transform(banks_data['default'])
banks_data['housing'] = encoder.fit_transform(banks_data['housing'])
banks_data['month'] = encoder.fit_transform(banks_data['month'])
banks_data['loan'] = encoder.fit_transform(banks_data['loan'])
banks_data['poutcome'] = encoder.fit_transform(banks_data['poutcome'])
banks_data['y'] = encoder.fit_transform(banks_data['y'])
banks_data
```
## Data normalization
The normalization of the data is very important when dealing with parameters of different units and scales. For example, some data mining techniques use the Euclidean distance. Therefore, all parameters should have the same scale for a fair comparison between them.
Again, scikit-learn provides preprocessing to normalize the vectors between 0 and 1.
```
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
data_scaled = pd.DataFrame(min_max_scaler.fit_transform(banks_data), columns=banks_data.columns)
data_scaled
```
## Is the data balanced?
An important step is to check whether the data is balanced, i.e., whether the number of 'yes' cases is comparable to the number of 'no' cases.
Let's calculate the ratio of the positive class to the negative class.
```
print('The ratio is {}'.format(float(yes.count()[0]/no.count()[0])))
```
## Generating samples using SMOTEENN
As previously calculated, the data is unbalanced, so we need to fix this. We can use resampling techniques such as SMOTEENN.
We prepare the dataset and import the imblearn library, which can be installed using pip and git: `pip install -U git+https://github.com/scikit-learn-contrib/imbalanced-learn.git`
SMOTEENN, which combines over-sampling (SMOTE) with cleaning (Edited Nearest Neighbours), is the algorithm that is going to balance our dataset.
You can read more about SMOTEENN here: http://contrib.scikit-learn.org/imbalanced-learn/stable/combine.html
```
from imblearn.combine import SMOTEENN
smote_enn = SMOTEENN(random_state=0)
X = data_scaled.drop('y', axis=1)
y = data_scaled['y']
X_res, y_res = smote_enn.fit_sample(X, y)
```
# Random Forest & Tuning
Random forests are an ensemble learning method for classification, regression and other tasks that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set. You can read more about them on [Wikipedia](https://en.wikipedia.org/wiki/Random_forest).
Scikit-learn provides us the Random Forest Classifier so we can easily import it.
However, the main challenge is to tune this classifier (finding the best parameters) in order to get the best results.
GridSearchCV is an important method to estimate these parameters. However, we need to first train the model.
GridSearchCV implements a “fit” and a “score” method. It also implements “predict”, “predict_proba”, “decision_function”, “transform” and “inverse_transform” if they are implemented in the estimator used.
The parameters of the estimator used to apply these methods are optimized by cross-validated grid-search over a parameter grid.
## Splitting into train and test datasets
```
from sklearn.model_selection import train_test_split
X_train_resampled, X_test_resampled, y_train_resampled, y_test_resampled = train_test_split(X_res
,y_res
,test_size = 0.3
,random_state = 0)
print("Train: {}".format(len(X_train_resampled)))
print("Test: {}".format(len(X_test_resampled)))
print("Total: {}".format(len(X_train_resampled)+len(X_test_resampled)))
```
## Training a Random Forest classifier
Here, I train a random forest classifier and perform grid search to select the best parameters (n_estimators and max_features).
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
clf = RandomForestClassifier(n_jobs=-1, random_state=7, max_features= 'sqrt', n_estimators=50)
clf.fit(X_train_resampled, y_train_resampled)
param_grid = {
'n_estimators': [50, 500],
'max_features': ['auto', 'sqrt', 'log2'],
}
CV_clf = GridSearchCV(estimator=clf, param_grid=param_grid, cv= 5)
CV_clf.fit(X_train_resampled, y_train_resampled)
```
# Model Evaluation
```
# Predictions from the initial forest (clf); the tuned model is available as CV_clf.best_estimator_
y_pred = clf.predict(X_test_resampled)
CV_clf.best_params_
import itertools
from sklearn.metrics import accuracy_score, f1_score, precision_score, confusion_matrix,precision_recall_curve,auc,roc_auc_score,roc_curve,recall_score,classification_report
def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=0)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
#print("Normalized confusion matrix")
else:
1#print('Confusion matrix, without normalization')
#print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test_resampled,y_pred)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
class_names = [0,1]
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion matrix')
plt.show()
```
The values that indicate a good model are the ones on the diagonal (7427 and 9022): for these, the true label and the predicted one are the same (correct classifications).
## F-score, Precision and Recall
```
print("F1 Score: {}".format(f1_score(y_test_resampled, y_pred, average="macro")))
print("Precision: {}".format(precision_score(y_test_resampled, y_pred, average="macro")))
print("Recall: {}".format(recall_score(y_test_resampled, y_pred, average="macro")))
```
## Receiver Operating Characteristic
This is a curve that plots the true positive rate with respect to the false positive rate. AUC is the area under the curve and to analyze the results we could refer to this table:
A rough guide for classifying the accuracy of a diagnostic test is the traditional academic point system:
`.90-1 = excellent (A) .80-.90 = good (B) .70-.80 = fair (C) .60-.70 = poor (D) .50-.60 = fail (F).`
In our case AUC = 0.95 which means that the model is excellent.
```
fpr, tpr, thresholds = roc_curve(y_test_resampled,y_pred)
roc_auc = auc(fpr,tpr)
# Plot ROC
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b',label='AUC = %0.2f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.0])
plt.ylim([-0.1,1.01])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
```
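As a side note, the ROC curve above is built from hard 0/1 predictions; scoring with predicted probabilities usually gives a more informative curve. A minimal sketch, assuming the same `clf`, `X_test_resampled` and `y_test_resampled` as above:
```
# ROC AUC computed from class probabilities instead of hard labels
y_score = clf.predict_proba(X_test_resampled)[:, 1]
print(roc_auc_score(y_test_resampled, y_score))
```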
# Conclusion
This was an example of applied machine learning, of the kind that comes up constantly in data science. It uses a dataset of bank customers, and the goal is to predict whether a given client will subscribe or not. The data was unbalanced, and we tackled this problem with SMOTEENN, a technique that combines over-sampling with cleaning.
Thank you for reading this and I hope you enjoyed it!
# The Fourier Transform
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Communications Engineering, Universität Rostock. Please direct questions and suggestions to [Sascha.Spors@uni-rostock.de](mailto:Sascha.Spors@uni-rostock.de).*
## Properties
The Fourier transform has a number of specific properties. They can be deduced from its definition. The most important ones in the context of signals and systems are reviewed in the following.
### Invertibility
According to the [Fourier inversion theorem](https://en.wikipedia.org/wiki/Fourier_inversion_theorem), for many types of signals it is possible to recover the signal $x(t)$ from its Fourier transformation $X(j \omega) = \mathcal{F} \{ x(t) \}$
\begin{equation}
x(t) = \mathcal{F}^{-1} \left\{ \mathcal{F} \{ x(t) \} \right\}
\end{equation}
A sufficient condition for the theorem to hold is that both the signal $x(t)$ and its Fourier transformation are absolutely integrable and $x(t)$ is continuous at the considered time $t$. For this type of signal, the above relation can be proven by applying the definition of the Fourier transform and its inverse and rearranging terms. However, the invertibility of the Fourier transformation holds also for more general signals $x(t)$, composed for instance of Dirac delta distributions.
**Example**
The invertibility of the Fourier transform is illustrated at the example of the [rectangular signal](../continuous_signals/standard_signals.ipynb#Rectangular-Signal) $x(t) = \text{rect}(t)$. The inverse of [its Fourier transform](definition.ipynb#Transformation-of-the-Rectangular-Signal) $X(j \omega) = \text{sinc} \left( \frac{\omega}{2} \right)$ is computed to show that the rectangular signal, although it has discontinuities, can be recovered by inverse Fourier transformation.
```
%matplotlib inline
import sympy as sym
sym.init_printing()
def fourier_transform(x):
return sym.transforms._fourier_transform(x, t, w, 1, -1, 'Fourier')
def inverse_fourier_transform(X):
return sym.transforms._fourier_transform(X, w, t, 1/(2*sym.pi), 1, 'Inverse Fourier')
t, w = sym.symbols('t omega')
X = sym.sinc(w/2)
x = inverse_fourier_transform(X)
x
sym.plot(x, (t,-1,1), ylabel=r'$x(t)$');
```
### Duality
Comparing the [definition of the Fourier transform](definition.ipynb) with its inverse
\begin{align}
X(j \omega) &= \int_{-\infty}^{\infty} x(t) \, e^{-j \omega t} \; dt \\
x(t) &= \frac{1}{2 \pi} \int_{-\infty}^{\infty} X(j \omega) \, e^{j \omega t} \; d\omega
\end{align}
reveals that both are very similar in their structure. They differ only with respect to the normalization factor $2 \pi$ and the sign of the exponential function. The duality principle of the Fourier transform can be deduced from this observation. Let's assume that we know the Fourier transformation $x_2(j \omega)$ of a signal $x_1(t)$
\begin{equation}
x_2(j \omega) = \mathcal{F} \{ x_1(t) \}
\end{equation}
It follows that the Fourier transformation of the signal
\begin{equation}
x_2(j t) = x_2(j \omega) \big\vert_{\omega=t}
\end{equation}
is given as
\begin{equation}
\mathcal{F} \{ x_2(j t) \} = 2 \pi \cdot x_1(- \omega)
\end{equation}
The duality principle of the Fourier transformation allows one to carry over results from the time domain to the spectral domain and vice versa. It can be used to derive new transforms from known transforms. This is illustrated with an example below. Note that the Laplace transformation shows no such duality. This is due to the mapping of a complex signal $x(t)$ with real valued independent variable $t \in \mathbb{R}$ to its complex transform $X(s) \in \mathbb{C}$ with complex valued independent variable $s \in \mathbb{C}$.
#### Transformation of the exponential signal
The Fourier transform of a shifted Dirac impulse $\delta(t - \tau)$ is derived by introducing it into the definition of the Fourier transform and exploiting the sifting property of the Dirac delta function
\begin{equation}
\mathcal{F} \{ \delta(t - \tau) \} = \int_{-\infty}^{\infty} \delta(t - \tau) \, e^{-j \omega t} \; dt = e^{-j \omega \tau}
\end{equation}
Using the duality principle, the Fourier transform of $e^{-j \omega_0 t}$ can be derived from this result by
1. substituting $\omega$ with $t$ and $\tau$ with $\omega_0$ on the right-hand side to yield the time-domain signal $e^{-j \omega_0 t}$
2. substituting $t$ by $- \omega$, $\tau$ with $\omega_0$ and multiplying the result by $2 \pi$ on the left-hand side
\begin{equation}
\mathcal{F} \{ e^{-j \omega_0 t} \} = 2 \pi \cdot \delta(\omega + \omega_0)
\end{equation}
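This correspondence can be checked with SymPy by evaluating the inverse Fourier transform of $2 \pi \, \delta(\omega + \omega_0)$ via the sifting property of the Dirac delta function. The following is only a small sketch using `sympy.integrate` instead of the helper functions defined above; depending on the SymPy version the result may need an additional `.simplify()`.
```
w0 = sym.symbols('omega_0', real=True)
X = 2*sym.pi*sym.DiracDelta(w + w0)
# inverse Fourier transform evaluated via the sifting property
x = sym.integrate(X*sym.exp(sym.I*w*t), (w, -sym.oo, sym.oo)) / (2*sym.pi)
x  # expected result: exp(-I*omega_0*t)
```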
### Linearity
The Fourier transform is a linear operation. For two signals $x_1(t)$ and $x_2(t)$ with Fourier transforms $X_1(j \omega) = \mathcal{F} \{ x_1(t) \}$ and $X_2(j \omega) = \mathcal{F} \{ x_2(t) \}$ the following holds
\begin{equation}
\mathcal{F} \{ A \cdot x_1(t) + B \cdot x_2(t) \} = A \cdot X_1(j \omega) + B \cdot X_2(j \omega)
\end{equation}
with $A, B \in \mathbb{C}$. The Fourier transform of a weighted superposition of signals is equal to the weighted superposition of the individual Fourier transforms. This property is useful to derive the Fourier transform of signals that can be expressed as superposition of other signals for which the Fourier transform is known or can be calculated easier. Linearity holds also for the inverse Fourier transform.
#### Transformation of the cosine and sine signal
The Fourier transform of $\cos(\omega_0 t)$ and $\sin(\omega_0 t)$ is derived by expressing both as harmonic exponential signals using [Euler's formula](https://en.wikipedia.org/wiki/Euler's_formula)
\begin{align}
\cos(\omega_0 t) &= \frac{1}{2} \left( e^{j \omega_0 t} + e^{-j \omega_0 t} \right) \\
\sin(\omega_0 t) &= \frac{1}{2j} \left( e^{j \omega_0 t} - e^{-j \omega_0 t} \right)
\end{align}
together with the Fourier transform $\mathcal{F} \{ e^{-j \omega_0 t} \} = 2 \pi \cdot \delta(\omega + \omega_0)$ from above (and hence $\mathcal{F} \{ e^{j \omega_0 t} \} = 2 \pi \cdot \delta(\omega - \omega_0)$) yields
\begin{align}
\mathcal{F} \{ \cos(\omega_0 t) \} &= \pi \left( \delta(\omega + \omega_0) + \delta(\omega - \omega_0) \right) \\
\mathcal{F} \{ \sin(\omega_0 t) \} &= j \pi \left( \delta(\omega + \omega_0) - \delta(\omega - \omega_0) \right)
\end{align}
### Symmetries
In order to investigate the symmetries of the Fourier transform $X(j \omega) = \mathcal{F} \{ x(t) \}$ of a signal $x(t)$, first the case of a real valued signal $x(t) \in \mathbb{R}$ is considered. The results are then generalized to complex signals $x(t) \in \mathbb{C}$.
#### Real valued signals
Decomposing a real valued signal $x(t) \in \mathbb{R}$ into its even and odd part $x(t) = x_\text{e}(t) + x_\text{o}(t)$ and introducing these into the definition of the Fourier transform yields
\begin{align}
X(j \omega) &= \int_{-\infty}^{\infty} \left[ x_\text{e}(t) + x_\text{o}(t) \right] e^{-j \omega t} \; dt \\
&= \int_{-\infty}^{\infty} \left[ x_\text{e}(t) + x_\text{o}(t) \right] \cdot \left[ \cos(\omega t) - j \sin(\omega t) \right] \; dt \\
&= \underbrace{\int_{-\infty}^{\infty} x_\text{e}(t) \cos(\omega t) \; dt}_{X_\text{e}(j \omega)} +
j \underbrace{\int_{-\infty}^{\infty} - x_\text{o}(t) \sin(\omega t) \; dt}_{X_\text{o}(j \omega)}
\end{align}
For the last equality the fact was exploited that an integral with symmetric limits is zero for odd functions. Note that the product of an even and an odd function is odd, while the product of two even or of two odd functions is even; hence the cross terms $x_\text{e}(t) \sin(\omega t)$ and $x_\text{o}(t) \cos(\omega t)$ vanish. In order to conclude on the symmetry of $X(j \omega)$ its behavior for a reversal of the sign of $\omega$ has to be investigated. Due to the symmetry properties of $\cos(\omega t)$ and $\sin(\omega t)$, it follows that the Fourier transform of the
* even part $x_\text{e}(t)$ is real valued with even symmetry $X_\text{e}(j \omega) = X_\text{e}(-j \omega)$
* odd part $x_\text{o}(t)$ is imaginary valued with odd symmetry $X_\text{o}(j \omega) = - X_\text{o}(-j \omega)$
Combining this, it can be concluded that the Fourier transform $X(j \omega)$ of a real-valued signal $x(t) \in \mathbb{R}$ shows complex conjugate symmetry
\begin{equation}
X(j \omega) = X^*(- j \omega)
\end{equation}
It follows that the magnitude spectrum $|X(j \omega)|$ of a real-valued signal shows even symmetry
\begin{equation}
|X(j \omega)| = |X(- j \omega)|
\end{equation}
and the phase $\varphi(j \omega) = \arg \{ X(j \omega) \}$ odd symmetry
\begin{equation}
\varphi(j \omega) = - \varphi(- j \omega)
\end{equation}
Due to these symmetries, both are often plotted only for positive frequencies $\omega \geq 0$. However, without the information that the signal is real-valued it is not possible to conclude on the magnitude spectrum and phase for the negative frequencies $\omega < 0$.
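The same complex conjugate symmetry shows up in the discrete Fourier transform of a sampled real-valued signal, which gives a quick numerical sanity check (a NumPy sketch, independent of the SymPy examples used in this notebook):
```
import numpy as np

rng = np.random.default_rng(0)
x_num = rng.standard_normal(8)   # arbitrary real-valued signal
X_num = np.fft.fft(x_num)
# for a real signal, X[k] equals the complex conjugate of X[N-k]
print(np.allclose(X_num[1:], np.conj(X_num[1:][::-1])))
```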
#### Complex Signals
By following the same procedure as above for an imaginary signal, the symmetries of the Fourier transform of the even and odd part of an imaginary signal can be derived. The results can be combined, by decomposing a complex signal $x(t) \in \mathbb{C}$ and its Fourier transform into its even and odd part for both the real and imaginary part. This results in the following symmetry relations of the Fourier transform
![Symmetries of the Fourier transform](symmetries.png)
**Example**
The Fourier transform $X(j \omega)$ of the signal $x(t) = \text{sgn}(t) \cdot \text{rect}(t)$ is computed. The signal is real valued with odd symmetry due to the sign function. It follows from the symmetry relations of the Fourier transform that $X(j \omega)$ is imaginary with odd symmetry.
```
class rect(sym.Function):
@classmethod
def eval(cls, arg):
return sym.Heaviside(arg + sym.S.Half) - sym.Heaviside(arg - sym.S.Half)
x = sym.sign(t)*rect(t)
sym.plot(x, (t, -2, 2), xlabel=r'$t$', ylabel=r'$x(t)$');
X = fourier_transform(x)
X = X.rewrite(sym.cos).simplify()
X
sym.plot(sym.im(X), (w, -30, 30), xlabel=r'$\omega$', ylabel=r'$\Im \{ X(j \omega) \}$');
```
**Exercise**
* What symmetry do you expect for the Fourier transform of the signal $x(t) = j \cdot \text{sgn}(t) \cdot \text{rect}(t)$? Check your results by modifying above example.
**Copyright**
The notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.
```
# Useful for debugging
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
from matplotlib import pyplot as plt
```
# Map1D_TM
---
```
from gpt.maps import Map1D_TM
cav = Map1D_TM('Buncher', 'fields/buncher_CTB_1D.gdf', frequency=1.3e9, scale=10e6, relative_phase=0)
?cav
```
# Basic Tracking routines
Checking the basic routines useful for working with a Map1D_TM object: `track_on_axis` and `autophase`.
First, run `track_on_axis`:
```
G = cav.track_on_axis(t=0, p=1e6, n_screen=100)
fig, ax = plt.subplots(1, 3, sharex='col', constrained_layout=True, figsize=(12,4))
ax[0].plot(cav.z0, cav.Ez0);
ax[0].set_xlabel('$\Delta z$ (m)');
ax[0].set_ylabel('Ez(z) (V/m)');
ax[0].set_title('CBC Field Profile');
cav.plot_floor(ax=ax[1])
ax[1].plot(G.stat('mean_z','screen'), G.stat('mean_x', 'screen'));
ax[1].plot(G.stat('mean_z','screen')[0], G.stat('mean_x', 'screen')[0],'og');
ax[1].plot(G.stat('mean_z','screen')[-1], G.stat('mean_x', 'screen')[-1],'or');
ax[1].set_title('Single Particle Tracking')
ax[2].plot(G.stat('mean_z','screen'), G.stat('mean_energy', 'screen')/1e6);
ax[2].set_xlabel('z (m)');
ax[2].set_ylabel('E (MeV)');
ax[2].set_title('Single Particle Tracking: Energy Gain');
```
# Autophasing
```
p=10e6
cav.relative_phase=-90
%time G=cav.autophase(t=0, p=p)
plt.plot(G.stat('mean_z','screen'), (G.stat('mean_energy', 'screen')-G.screen[0]['mean_energy'])/1e6);
plt.xlabel('z (m)');
plt.ylabel('$\Delta E$ (MeV)');
```
# Test Placement in a Lattice
```
from gpt.element import Lattice
lat = Lattice('cavity')
lat.add(Map1D_TM('Buncher', 'fields/buncher_CTB_1D.gdf', frequency=1.3e9, scale=10e6, relative_phase=0), ds=1.0)
G = lat['Buncher'].track_on_axis(t=0, p=10e6, n_screen=100)
fig, ax = plt.subplots(1, 3, sharex='col', constrained_layout=True, figsize=(12,4))
ax[0].plot(cav.z0, cav.Ez0);
ax[0].set_xlabel('$\Delta z$ (m)');
ax[0].set_ylabel('Ez(z) (V/m)');
ax[0].set_title('CBC Field Profile');
lat['Buncher'].plot_floor(ax=ax[1])
ax[1].plot(G.stat('mean_z','screen'), G.stat('mean_x', 'screen'));
ax[1].plot(G.stat('mean_z','screen')[0], G.stat('mean_x', 'screen')[0],'og');
ax[1].plot(G.stat('mean_z','screen')[-1], G.stat('mean_x', 'screen')[-1],'or');
ax[1].set_title('Single Particle Tracking')
ax[2].plot(G.stat('mean_z','screen'), G.stat('mean_energy', 'screen')/1e6)
ax[2].set_xlabel('z (m)');
ax[2].set_ylabel('E (MeV)');
ax[2].set_title('Single Particle Tracking: Energy Gain');
lat['Buncher'].relative_phase=-90
G = lat['Buncher'].autophase(t=0, p=10e6)
plt.plot(G.stat('mean_z','screen'), (G.stat('mean_energy', 'screen')-G.screen[0]['mean_energy'])/1e6);
plt.xlabel('z (m)');
plt.ylabel('$\Delta E$ (MeV)');
for line in cav.gpt_lines():
print(line)
```
# Example Field Maps:
---
```
cav = Map1D_TM('CU_ICM', 'fields/icm_1d.gdf', frequency=1.3e9, scale=1, relative_phase=-90)
cav.relative_phase=+180
%time G=cav.autophase(t=0, p=p)
G = cav.track_on_axis(t=0, p=10e6, n_screen=100, workdir='temp')
fig, ax = plt.subplots(1, 3, sharex='col', constrained_layout=True, figsize=(12,4))
ax[0].plot(cav.z0, cav.Ez0);
ax[0].set_xlabel('$\Delta z$ (m)');
ax[0].set_ylabel('Ez(z) (V/m)');
ax[0].set_title('CBC Field Profile');
cav.plot_floor(ax=ax[1])
ax[1].plot(G.stat('mean_z','screen'), G.stat('mean_x', 'screen'));
ax[1].plot(G.stat('mean_z','screen')[0], G.stat('mean_x', 'screen')[0],'og');
ax[1].plot(G.stat('mean_z','screen')[-1], G.stat('mean_x', 'screen')[-1],'or');
ax[1].set_title('Single Particle Tracking')
ax[2].plot(G.stat('mean_z','screen'), G.stat('mean_energy', 'screen')/1e6)
ax[2].set_xlabel('z (m)');
ax[2].set_ylabel('E (MeV)');
ax[2].set_title('Single Particle Tracking: Energy Gain');
```
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
```
### Tensile Strength Example
#### Manual Solution (See code below for faster solution)
- df(SSTr/SSB) = 4 - 1 = 3 (four different concentrations/samples)
- df(SSE/SSW) = 4(6 - 1) = 20
- df(SST) = 4*6 - 1 = 23 = 3 + 20
- alpha = 0.01
```
alpha = 0.01
five_percent = [7,8,15,11,9,10]
ten_percent = [12,17,13,18,19,15]
fifteen_percent = [14,18,19,17,16,18]
twenty_percent = [19,25,22,23,18,20]
fig,ax = plt.subplots(figsize = (10,5))
ax.boxplot([five_percent,ten_percent,fifteen_percent,twenty_percent])
data = np.array([five_percent,ten_percent,fifteen_percent,twenty_percent])
data
```
The problem with the above array is that the concentration groups sit in rows (shape 4 x 6), while the calculation below expects each of the 4 concentration groups in its own column. A plain reshape would scramble the groups, so we transpose the array instead.
```
data = data.T  # transpose so that each column holds one concentration group
data
grand_mean = np.mean(data)
SSE,SST,SSTr = 0,0,0
df_treatment = 3
df_error = 20
# Calculate SSE - Iterate through all columns
for col_iter in range(data.shape[1]):
# Fetch the next column
col = data[:,col_iter]
# Finding column mean
col_mean = col.mean()
# Sum of squares from mean
for data_point in col:
SSE += (data_point - col_mean) ** 2
# Calculate SST
for col_iter in range(data.shape[1]):
for row_iter in range(data.shape[0]):
data_point = data[row_iter][col_iter]
SST += (data_point - grand_mean) ** 2
SSTr = SST - SSE
MSE = SSE / 20
MSTr = SSTr / 3
f_value = MSTr / MSE
print(f'SST = {round(SST,3)}, SSTr = {round(SSTr,3)}, SSE = {round(SSE,3)}')
print(f'MSE = {round(MSE,3)}, MSTr = {round(MSTr,3)}')
print(f'F value = {round(f_value,3)}')
from scipy.stats import f,f_oneway
p_value = 1 - f.cdf(f_value,df_treatment,df_error)
# Check if f_value is correct
f.ppf(1 - p_value, dfn = 3, dfd = 20)
# Testing using P-value method (One-tailed test)
if p_value <= alpha:
print('Null hypothesis is rejected, thus hardwood concentration does affect tensile strength')
else:
print('Null hypothesis is not rejected')
# Testing using Critical value method (One-tailed test)
critical_value = f.ppf(1-alpha,dfn = 3, dfd = 20)
if f_value >= critical_value:
print('Null hypothesis is rejected, thus hardwood concentration does affect tensile strength')
else:
print('Null hypothesis is not rejected')
```
### Faster solution using Python
```
f_oneway(five_percent,ten_percent,fifteen_percent,twenty_percent)
data = pd.read_excel('Week-5-Files/Tensile-strength-of-paper.xlsx')
data.columns = ['concentration5','concentration10','concentration15','concentration20']
data
data_new = pd.melt(data.reset_index(),id_vars = ['index'],value_vars = ['concentration5','concentration10','concentration15','concentration20'])
data_new
model = ols('value ~ C(variable)',data = data_new).fit()
model.summary()
anova_table = sm.stats.anova_lm(model,typ = 1)
anova_table
```
Note: Residual row - SSW/SSE <br>
C(variable) row - SSB/SSTr <br>
PR - P-value
## Post - Hoc Analysis
### Least Significant Differences (LSD) Method
```
from scipy.stats import t
t_value = -t.ppf(0.025, 20)  # two-sided critical t at alpha = 0.05 with 20 error df
MSE = 6.50833
num_obs = 6
lsd = t_value *((2* MSE/num_obs) ** 0.5)
lsd
# Calculate the mean of all concentrations
y1 = data['concentration5'].mean()
y2 = data['concentration10'].mean()
y3 = data['concentration15'].mean()
y4 = data['concentration20'].mean()
```
Compare the pairwise means with LSD to decide whether they can be considered equal or not. <br>
For example, abs(y2 - y1) = 5.67 > 3.07, i.e. mu1 and mu2 are unequal.
Thus 5% and 10% hardwood concentrations produce different tensile strength of paper. This process is repeated for all pairwise means.
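A small sketch of that loop, using the group means `y1`-`y4` and the `lsd` value computed above (the labels are only for readability):
```
from itertools import combinations

# Compare every pair of group means against the LSD threshold
means = {'5%': y1, '10%': y2, '15%': y3, '20%': y4}
for (name_a, mean_a), (name_b, mean_b) in combinations(means.items(), 2):
    diff = abs(mean_a - mean_b)
    verdict = 'different' if diff > lsd else 'not significantly different'
    print(f'{name_a} vs {name_b}: |diff| = {diff:.2f} -> {verdict}')
```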
### Tukey - Kramer Test
```
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from statsmodels.stats.multicomp import MultiComparison
mc = MultiComparison(data_new['value'],data_new['variable'])
mc
mcresult= mc.tukeyhsd(0.05)
mcresult.summary()
```
# Pyspark & Astrophysical data: IMAGE
Let's play with images. In this example, we load image data from a FITS file (CFHTLenS), and identify sources with a simple source finder from photutils/astropy. The workflow is described below. For simplicity, we only focus on one CCD in this notebook. For full scale, see the pyspark [im2cat.py](https://github.com/astrolabsoftware/spark-fits/blob/master/examples/python/im2cat.py) script.
![im2cat](im2cat.jpg)
```
## Import SparkSession from Spark
from pyspark.sql import SparkSession

## Get (or create) the SparkSession referred to as `spark` below
spark = SparkSession.builder.getOrCreate()
## Create a DataFrame from the HDU data of a FITS file
fn = "../../src/test/resources/image.fits"
hdu = 1
df = spark.read.format("fits").option("hdu", hdu).load(fn)
## By default, spark-fits distributes the rows of the image
df.printSchema()
df.show(5)
```
# Find objects on CCD
```
## In order to work on the full image, one needs to
## re-partition the image by gathering all rows.
## For simplicity, we work with only one image, but in real life
## we would just have all CCDs distributed, one per Spark mapper.
## For a real life example, see the full example at spark-fits/example/python/im2cat.py
def rowdf_into_imagerdd(df, final_num_partition=1):
"""
Reshape a DataFrame of rows into a RDD containing the full image
in one partition.
Parameters
----------
df : DataFrame
DataFrame of image rows.
final_num_partition : Int
The final number of partitions. Must be one (default) unless you
know what you are doing.
Returns
----------
imageRDD : RDD
RDD containing the full image in one partition
"""
return df.rdd.coalesce(final_num_partition).glom()
imRDD = rowdf_into_imagerdd(df, 1)
## Let's run a simple object finder on our image,
## and collect the catalog.
import numpy as np
from photutils import DAOStarFinder
from astropy.stats import sigma_clipped_stats
def reshape_image(im):
"""
By default, Spark shapes images into (nx, 1, ny).
This routine reshapes images into (nx, ny)
Parameters
----------
im : 3D array
Original image with shape (nx, 1, ny)
Returns
----------
im_reshaped : 2D array
Original image with shape (nx, ny)
"""
shape = np.shape(im)
return im.reshape((shape[0], shape[2]))
def get_stat(data, sigma=3.0, iters=3):
"""
Estimate the background and background noise using
sigma-clipped statistics.
Parameters
----------
data : 2D array
2d array containing the data.
sigma : float
sigma.
iters : int
Number of iteration to perform to get accurate estimate.
The higher the better, but it will be longer.
"""
mean, median, std = sigma_clipped_stats(data, sigma=sigma, iters=iters)
return mean, median, std
## Source detection: build the catalogs for each CCD in parallel
## Only one CCD in this example.
cat = imRDD.map(
lambda im: reshape_image(np.array(im)))\
.map(
lambda im: (im, get_stat(im)))\
.map(
lambda im_stat: (
im_stat[0],
im_stat[1][1],
DAOStarFinder(fwhm=3.0, threshold=5.*im_stat[1][2])))\
.map(
lambda im_mean_starfinder: im_mean_starfinder[2](
im_mean_starfinder[0] - im_mean_starfinder[1]))
final_cat = cat.collect()
print(final_cat)
## Let's visualise our objects found
from astropy.io import fits
from photutils import CircularAperture
from astropy.visualization import SqrtStretch
from astropy.visualization.mpl_normalize import ImageNormalize
import matplotlib.pyplot as pl
## Grab initial data for plot
data = fits.open(fn)
data = data[hdu].data
## Plot the result on top of the CCD
fig = pl.figure(0, (10, 10))
positions = (
final_cat[hdu-1]['xcentroid'],
final_cat[hdu-1]['ycentroid'])
apertures = CircularAperture(positions, r=10.)
norm = ImageNormalize(stretch=SqrtStretch())
pl.imshow(data, cmap='Greys', origin="lower", norm=norm)
apertures.plot(color='blue', lw=1.0, alpha=0.5)
pl.show()
## Of course, one could use different algorithms, like the ones in the Stack!
```
# Data Manipulation
It is impossible to get anything done if we cannot manipulate data. Generally, there are two important things we need to do with data: (i) acquire it and (ii) process it once it is inside the computer. There is no point in acquiring data if we do not even know how to store it, so let's get our hands dirty first by playing with synthetic data. We will start by introducing the tensor,
PyTorch's primary tool for storing and transforming data. If you have worked with NumPy before, you will notice that tensors are, by design, similar to NumPy's multi-dimensional arrays. Tensors support asynchronous computation on both CPU and GPU, and provide support for automatic differentiation.
## Getting Started
```
import torch
```
Tensors represent (possibly multi-dimensional) arrays of numerical values.
The simplest object we can create is a vector. To start, we can use `arange` to create a row vector with 12 consecutive integers.
```
x = torch.arange(12, dtype=torch.float64)
x
# We can get the tensor shape through the shape attribute.
x.shape
# .shape is an alias for .size(), and was added to more closely match numpy
x.size()
```
We use the `reshape` function to change the shape of one (possibly multi-dimensional) array, to another that contains the same number of elements.
For example, we can transform the shape of our line vector `x` to (3, 4), which contains the same values but interprets them as a matrix containing 3 rows and 4 columns. Note that although the shape has changed, the elements in `x` have not.
```
x = x.reshape((3, 4))
x
```
Reshaping by manually specifying each of the dimensions can get annoying. Once we know one of the dimensions, why should we have to perform the division ourselves to determine the other? For example, above, to get a matrix with 3 rows, we had to specify that it should have 4 columns (to account for the 12 elements). Fortunately, PyTorch can automatically work out one dimension given the other.
We can invoke this capability by placing `-1` for the dimension that we would like PyTorch to automatically infer. In our case, instead of
`x.reshape((3, 4))`, we could have equivalently used `x.reshape((-1, 4))` or `x.reshape((3, -1))`.
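For instance, with `x` still holding the 12 values from above, either call below gives back the same (3, 4) matrix — a quick check of the claim, nothing new beyond the text above:
```
x.reshape((-1, 4))   # PyTorch infers the 3
x.reshape((3, -1))   # PyTorch infers the 4
```
The next cell turns to a different question: what happens when we construct tensors without initializing their values.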
```
torch.FloatTensor(2, 3)
torch.Tensor(2, 3)
torch.empty(2, 3)
```
torch.Tensor() is just an alias for torch.FloatTensor(), which is the default tensor type when no dtype is specified during tensor construction.
According to the torch-for-NumPy-users notes, torch.Tensor() is a drop-in replacement for numpy.empty().
So, in essence, torch.FloatTensor() and torch.empty() do the same job.
The `empty` method just grabs some memory and hands us back a matrix without setting the values of any of its entries. This is very efficient but it means that the entries might take any arbitrary values, including very big ones! Typically, we'll want our matrices initialized either with ones, zeros, some known constant or numbers randomly sampled from a known distribution.
Perhaps most often, we want an array of all zeros. To create a tensor with all elements set to 0 and a shape of (2, 3, 4) we can invoke:
```
torch.zeros((2, 3, 4))
```
We can create a tensor with each element set to 1 via
```
torch.ones((2, 3, 4))
```
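The list above also mentions filling a tensor with some known constant, which is not shown elsewhere in this section; as a small aside, `torch.full` covers that case:
```
# a (2, 3, 4) tensor in which every entry is 7.5
torch.full((2, 3, 4), 7.5)
```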
We can also specify the value of each element in the desired tensor by supplying a Python list containing the numerical values.
```
y = torch.tensor([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]])
y
```
In some cases, we will want to randomly sample the values of each element in the tensor according to some known probability distribution. This is especially common when we intend to use the tensor as a parameter in a neural network. The following snippet creates a tensor with a shape of (3, 4). Each of its elements is randomly sampled from a normal distribution with zero mean and unit variance.
```
torch.randn(3, 4)
```
## Operations
Oftentimes, we want to apply functions to arrays. Some of the simplest and most useful functions are the element-wise functions. These operate by performing a single scalar operation on the corresponding elements of two arrays. We can create an element-wise function from any function that maps from the scalars to the scalars. In math notations we would denote such a function as $f: \mathbb{R} \rightarrow \mathbb{R}$. Given any two vectors $\mathbf{u}$ and $\mathbf{v}$ *of the same shape*, and the function f,
we can produce a vector $\mathbf{c} = F(\mathbf{u},\mathbf{v})$ by setting $c_i \gets f(u_i, v_i)$ for all $i$. Here, we produced the vector-valued $F: \mathbb{R}^d \rightarrow \mathbb{R}^d$ by *lifting* the scalar function to an element-wise vector operation. In PyTorch, the common standard arithmetic operators (+,-,/,\*,\*\*) have all been *lifted* to element-wise operations for identically-shaped tensors of arbitrary shape. We can call element-wise operations on any two tensors of the same shape, including matrices.
```
x = torch.tensor([1, 2, 4, 8], dtype=torch.float32)
y = torch.ones_like(x) * 2
print('x =', x)
print('x + y', x + y)
print('x - y', x - y)
print('x * y', x * y)
print('x / y', x / y)
```
Many more operations can be applied element-wise, such as exponentiation:
```
torch.exp(x)
# Note: torch.exp is not implemented for 'torch.LongTensor'.
```
In addition to computations by element, we can also perform matrix operations, like matrix multiplication using the `mm` or `matmul` function. Next, we will perform matrix multiplication of `x` and the transpose of `y`. We define `x` as a matrix of 3 rows and 4 columns, and `y` is transposed into a matrix of 4 rows and 3 columns. The two matrices are multiplied to obtain a matrix of 3 rows and 3 columns.
```
x = torch.arange(12, dtype=torch.float32).reshape((3,4))
y = torch.tensor([[2, 1, 4, 3], [1, 2, 3, 4], [4, 3, 2, 1]], dtype=torch.float32)
print(x.dtype)
print(y)
torch.mm(x, y.t())
```
Note that torch.dot() behaves differently from np.dot(). There's been some discussion about what would be desirable here. Specifically, torch.dot() computes the inner product of two vectors; older PyTorch releases treated both a and b as 1D vectors irrespective of their original shape, while recent releases require both arguments to already be 1D.
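A quick illustration with two throwaway 1-D tensors (`u` and `v` are names introduced only for this example):
```
u = torch.tensor([1., 2., 3.])
v = torch.tensor([4., 5., 6.])
torch.dot(u, v)  # inner product: 1*4 + 2*5 + 3*6 = tensor(32.)
```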
We can also merge multiple tensors. For that, we need to tell the system along which dimension to merge. The example below merges two matrices along dimension 0 (along rows) and dimension 1 (along columns) respectively.
```
torch.cat((x, y), dim=0)
torch.cat((x, y), dim=1)
```
Sometimes, we may want to construct binary tensors via logical statements. Take `x == y` as an example. If `x` and `y` are equal for some entry, the new tensor has a value of 1 at the same position; otherwise it is 0.
```
x == y
```
Summing all the elements in the tensor yields a tensor with only one element.
```
x.sum()
```
We can transform the result into a scalar in Python using the `asscalar` function of `numpy`. In the following example, the $\ell_2$ norm of `x` yields a single element tensor. The final result is transformed into a scalar.
```
import numpy as np
np.asscalar(x.norm())
```
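As a side note (not part of the original text): NumPy has since deprecated `asscalar`, and a single-element tensor can be turned into a Python number directly with `.item()`:
```
# equivalent result, no NumPy round-trip needed
x.norm().item()
```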
## Broadcast Mechanism
In the above section, we saw how to perform operations on two tensors of the same shape. When their shapes differ, a broadcasting mechanism analogous to NumPy's may be triggered: first, the elements are copied appropriately so that the two tensors have the same shape, and then the operation is carried out element-wise.
```
a = torch.arange(3, dtype=torch.float).reshape((3, 1))
b = torch.arange(2, dtype=torch.float).reshape((1, 2))
a, b
```
Since `a` and `b` are (3x1) and (1x2) matrices respectively, their shapes do not match up if we want to add them. PyTorch addresses this by 'broadcasting' the entries of both matrices into a larger (3x2) matrix as follows: for matrix `a` it replicates the columns, for matrix `b` it replicates the rows before adding up both element-wise.
```
a + b
```
## Indexing and Slicing
Just like in any other Python array, elements in a tensor can be accessed by its index. In good Python tradition the first element has index 0 and ranges are specified to include the first but not the last element. By this logic `1:3` selects the second and third element. Let's try this out by selecting the respective rows in a matrix.
```
x[1:3]
```
Beyond reading, we can also write elements of a matrix.
```
x[1, 2] = 9
x
```
If we want to assign multiple elements the same value, we simply index all of them and then assign them the value. For instance, `[0:2, :]` accesses the first and second rows. While we discussed indexing for matrices, this obviously also works for vectors and for tensors of more than 2 dimensions.
```
x[0:2, :] = 12
x
```
## Saving Memory
In the previous example, every time we ran an operation, we allocated new memory to host its results. For example, if we write `y = x + y`, we will dereference the matrix that `y` used to point to and instead point it at the newly allocated memory. In the following example we demonstrate this with Python's `id()` function, which gives us the exact address of the referenced object in memory. After running `y = y + x`, we will find that `id(y)` points to a different location. That is because Python first evaluates `y + x`, allocating new memory for the result and then subsequently redirects `y` to point at this new location in memory.
```
before = id(y)
y = y + x
id(y) == before
```
This might be undesirable for two reasons. First, we do not want to run around allocating memory unnecessarily all the time. In machine learning, we might have hundreds of megabytes of parameters and update all of them multiple times per second. Typically, we will want to perform these updates *in place*. Second, we might point at the same parameters from multiple variables. If we do not update in place, this could cause a memory leak, making it possible for us to inadvertently reference stale parameters.
Fortunately, performing in-place operations in PyTorch is easy. We can assign the result of an operation to a previously allocated array with slice notation, e.g., `y[:] = <expression>`. To illustrate the behavior, we first clone the shape of a matrix using `zeros_like` to allocate a block of 0 entries.
```
z = torch.zeros_like(y)
print('id(z):', id(z))
z[:] = x + y
print('id(z):', id(z))
```
While this looks pretty, `x+y` here will still allocate a temporary buffer to store the result of `x+y` before copying it to `z[:]`. To make even better use of memory, we can directly invoke the underlying `tensor` operation, in this case `add`, avoiding temporary buffers. We do this by specifying the `out` keyword argument, which every `tensor` operator supports:
```
before = id(z)
torch.add(x, y, out=z)
id(z) == before
```
If the value of `x` is not reused in subsequent computations, we can also use `x[:] = x + y` or `x += y` to reduce the memory overhead of the operation.
```
before = id(x)
x += y
id(x) == before
```
## Mutual Transformation of PyTorch and NumPy
Converting PyTorch tensors to and from NumPy arrays is easy, but keep the memory semantics in mind: `x.numpy()` returns an array that *shares* memory with the (CPU) tensor it came from, whereas `torch.tensor(a)` always *copies* the data (use `torch.from_numpy` when you want sharing in the other direction). Copying matters when you do not want PyTorch to have to wait on whatever NumPy might be doing with the same chunk of memory. `.numpy` and `torch.tensor` do the trick.
```
a = x.numpy()
print(type(a))
b = torch.tensor(a)
print(type(b))
```
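To make the sharing/copying behaviour visible — a small check added here for illustration — mutate the NumPy array in place and look at both tensors:
```
a += 1      # in-place change to the NumPy array
print(x)    # x changes too: a shares memory with the CPU tensor it came from
print(b)    # b is unchanged: torch.tensor(a) made a copy earlier
```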
## Exercises
1. Run the code in this section. Change the conditional statement `x == y` in this section to `x < y` or `x > y`, and then see what kind of tensor you can get.
1. Replace the two tensors that operate by element in the broadcast mechanism with other shapes, e.g. three dimensional tensors. Is the result the same as expected?
1. Assume that we have three matrices `a`, `b` and `c`. Rewrite `c = torch.mm(a, b.t()) + c` in the most memory efficient manner.
# Object-oriented programming
This notebook contains assignments that are more complex. They are aimed at students who already know about [object-oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming) from prior experience and who are familiar with the concepts, but not with how OOP is done in Python.
The scope of this introduction is too limited to give a good tutorial on OOP as a whole.
Our first example is an example of composition: we create a Company that consists of a list of Persons and an account balance.
Once we have structured our classes with the desired behaviour, we can use them quite freely.
Create a class called Person for storing the following information about a person:
- name
Create a method say_hi that returns the string "Hi, I'm " + the person's name.
```
class Person:
def __init__(self, name):
pass
def say_hi(self):
pass
```
Run the following code to test that you have created the person correctly:
```
persons = []
joe = Person("Joe")
jane = Person("Jane")
persons.append(joe)
persons.append(jane)
```
Now create a class Employee that **inherits** the class Person.
In addition to a name, Employees have a title (string),
salary (number) and an account_balance (number).
**Override** the say_hi method to say "Hi I'm " + name + " and i work as a " + title
```
# the reference to Person on the line below means that Employee
# inherits from Person
class Employee(Person):
def __init__(self,
name,
salary,
title="Software Specialist",
account_balance=0):
# this calls the constructor of Person class
super().__init__(name)
# you still need to implement the rest
pass
def say_hi(self):
pass
```
Every employee is also a person.
```
persons = []
joe = Person("Joe")
jane = Person("Jane")
persons.append(joe)
persons.append(jane)
emp1 = Employee("Jack", 3000)
emp2 = Employee("Jill", 3000)
persons.append(emp1)
persons.append(emp2)
for person in persons:
print(person.say_hi())
```
Now create a class called Company, which has a name
and a list of Employee objects called `employees` and an
account balance for the company.
Make a method `payday(self)` that will go through
the list of employees and deduct their salary from
the corporate account and add it to the employee
account. Before you start deducting money, compute the sum of salaries and make sure it does not exceed the account balance. If it does, raise an instance of NotEnoughMoneyError.
Make a method `layoff(self)` that will remove
one employee from the list of employees. If there are no more employees, raise a NoMoreEmployeesError.
```
class NotEnoughMoneyError(Exception):
pass
class NoMoreEmployeesError(Exception):
pass
class Company(object):
def __init__(self, title, employees = [], account_balance=0):
pass
def payday(self):
pass
def layoff(self):
pass
```
Okay, you've worked this far just creating the model; let's put it to use.
Make a function `smart_payday(company)`. The function should attempt to call the payday method of the company. If the call raises a NotEnoughMoneyError, lay off a worker and then try again. Don't catch the NoMoreEmployeesError, as that should be handled at a higher level.
You will probably need to use a while loop to implement the re-trying until a condition is met.
```
def smart_payday(company):
pass
# a bit of test code
names_and_salaries = [
("Jane", 3000),
("Joe", 2000),
("Jill", 2000),
("Jack", 1500)
]
workers = [Employee(name, salary) \
for name, salary in names_and_salaries]
scs = Company("SCS", employees=workers, account_balance=12000)
smart_payday(scs)
print(scs.account_balance)
print(len(scs.employees))
smart_payday(scs)
print(scs.account_balance)
print(len(scs.employees))
print(scs.employees)
```
Observe how printing the employees list is not very informative? Adding a magic method called `__repr__` will help with that.
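As a hint of what that could look like — one possible sketch, assuming your `__init__` stores `name` and `salary` as attributes — a `__repr__` on Employee might be:
```
class Employee(Person):
    # ... __init__ and say_hi as before ...
    def __repr__(self):
        # used when an Employee is shown inside a list or at the prompt
        return f"Employee({self.name!r}, salary={self.salary})"
```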
## Extra: more Exceptions
Consider the following function; it will raise errors randomly. This type of failure is pretty common for IO-related tasks.
```
class RandomException(Exception):
pass
def do_wonky_stuff():
import random
if random.random() > 0.5:
raise RandomException("this exception happened randomly")
return
```
Wrap a call to ``do_wonky_stuff`` with a try-except clause.
```
do_wonky_stuff()
print("yay it worked")
```
OK, now let's go even deeper.
```
class ReallyRandomException(Exception):
pass
def do_really_wonky_stuff():
import random
val = random.random()
if val > 0.75:
raise RandomException("this exception happened randomly")
elif val < 0.15:
raise ReallyRandomException("This exception is actually quite rare")
return
```
Wrap do_really_wonky_stuff in a try-except clause with two excepts. In the rarer of the two excepts print out something so you'll know if it's your lucky day.
In real life you'd probably want to handle different errors in a different way, or at least log or inform the user of what caused the error.
```
do_really_wonky_stuff()
```
# Introduction
Moto: "garbage in, garbage out". Feeding dirty data into a model will give results that are meaningless. Steps for improving data quality:
1. Getting the data - this is rather easy since the texts are pre-uploded.
2. Cleaning the data - use popular text pre-processing techniques.
3. Organizing the data - organize the cleaned data in a way that is easy to input into machine learning algorithms.
The output of this notebook will be clean, organized data in the following standard formats:
1. Corpus - a matrix storing collections of text.
2. Term Frequency - Inverse Document Frequency Table - another matrix consisting of word weights in relation to how often they appear in the texts.
3. TfidfVectorizer - an instance of the TfidfVectorizer class since it may be needed later.
## Getting The Data
Input: Names of files containing the authors' texts.
Output: Corpus - a table in which each row holds a sample text in the first column and the author who wrote it in the second.
```
import pandas as pd
pd.set_option('max_colwidth', 150)
corpus = pd.DataFrame(columns=['text', 'author'])
corpora_size = 0
authors = {
'Ivan Vazov': ['/content/drive/MyDrive/Colab Notebooks/project/data/vazov_separated/Ivan_Vazov_-_Pod_igoto_-_1773-b.txt',
'/content/drive/MyDrive/Colab Notebooks/project/data/vazov_separated/Ivan_Vazov_-_Epopeja_na_zabravenite_-_3-b.txt'],
'Jordan Jovkov': ['/content/drive/MyDrive/Colab Notebooks/project/data/jovkov_separated/Jordan_Jovkov_-_Chiflikyt_kraj_granitsata_-_2033-b.txt',
'/content/drive/MyDrive/Colab Notebooks/project/data/jovkov_separated/Jordan_Jovkov_-_Prikljuchenijata_na_Gorolomov_-_2034-b.txt',
'/content/drive/MyDrive/Colab Notebooks/project/data/jovkov_separated/Jordan_Jovkov_-_Staroplaninski_legendi_-_522-b.txt',
'/content/drive/MyDrive/Colab Notebooks/project/data/jovkov_separated/Jordan Jovkov - - . Posledna radost - 7896.txt',
'/content/drive/MyDrive/Colab Notebooks/project/data/jovkov_separated/Jordan_Jovkov_-_Vecheri_v_Antimovskija_han_-_517-b.txt']
}
for author, texts in authors.items():
authors[author] = ''
for text in texts:
authors[author] += open(text, 'r').read()
total_chars = len(authors[author])
to_get = round(total_chars / 100)
print(f'Total number of characters for {author}: {total_chars:,}. Going to create 100 samples with length {to_get:,}.\n')
paragraphs = []
for i in range(100):
paragraph = authors[author][i * to_get:][:to_get]
paragraphs.append(paragraph)
corpus.loc[corpora_size] = [paragraph, author]
corpora_size += 1
```
## Cleaning The Data
By using common data cleaning steps on all texts, pre-process the data so as to remove any noise.
1. Make text all lower case.
2. Remove punctuation.
3. Remove non-Bulgarian words (this also helps with removing Roman numerals in chapter headers).
4. Tokenize text by using whitespace as a word boundary.
More data cleaning steps after tokenization:
1. Remove stop words.
2. Lemmatization.
3. Stemming.
4. Create bi-grams.
Input: Corpus.
Output: A vector of tokens representing the texts.
```
! pip install lemmagen3
! pip install bulstem
! pip install stop-words
import regex as re
from nltk import bigrams
from bulstem.stem import BulStemmer
from lemmagen3 import Lemmatizer
from stop_words import get_stop_words
def tokenize(raw_text):
stop_words = get_stop_words('bulgarian')
lemmatizer = Lemmatizer('bg')
stemmer = BulStemmer.from_file('/content/drive/MyDrive/Colab Notebooks/project/data/stem_rules_context_2_utf8.txt',
min_freq=2, left_context=2)
text = raw_text.lower() # Make lowercase.
text = re.sub(u'\\p{P}+', "", text) # Remove punctuation.
text = re.sub(u'[a-zA-Z]', "", text) # Remove non-bulgarian words.
tokens = text.split() # Split on whitespace
tokens = [token for token in tokens if token not in stop_words # Filter out stopwords
and all(c.isalpha() for c in token)] # and non-word tokens.
# Before lemmatization (sample): ['песни', 'македония', 'българският', 'бог', ..
tokens = [lemmatizer.lemmatize(token) for token in tokens] # Lemmatization!
# Before stemming (sample): ['песен', 'македония', 'български', 'бог', ..
tokens = [stemmer.stem(token) for token in tokens] # Stemming!
bi_grams = list(bigrams(tokens))
tokens += map(lambda x: x[0] + ' ' + x[1], bi_grams) # Add bi-grams.
# After pre-processing (sample): ['песен', 'македони', 'българск', 'бог', ..
return tokens
data_clean = corpus.text.map(lambda x: tokenize(x))
data_clean
```
## Organizing The Data
The output of this notebook gets generated and saved in pickles here. A quick recap:
- Corpus: a collection of texts.
- Term Frequency - Inverse Document Frequency Table: word weights in a matrix format.
### Corpus
```
# A final look before saving.
corpus
# Pickle!
corpus.to_pickle('/content/drive/MyDrive/Colab Notebooks/project/data/corpus.pkl')
```
### Term Frequency - Inverse Document Frequency Table
Constructed using scikit-learn's TfidfVectorizer, where every row represents a different document / sample / excerpt from a text and every column will represent a different word.
Because the text that will be passed to the vectorizer is already pre-processed and tokenized, some additional attributes have to be passed that substitute the built-in functionality with the identity function.
In addition, with TfidfVectorizer, terms that appear too infrequently can be removed. In this case those that appear in fewer than 2 documents are ignored.
```
from sklearn.feature_extraction.text import TfidfVectorizer
def identity(x):
return x
tfidf = TfidfVectorizer(
tokenizer=identity,
preprocessor=identity,
token_pattern=None,
lowercase=False,
stop_words=None,
min_df=2)
data_tfidf = tfidf.fit_transform(data_clean)
data_table = pd.DataFrame(data_tfidf.toarray(), columns=tfidf.get_feature_names())
data_table.index = data_clean.index
data_table
# Pickles!
import pickle
data_table.to_pickle('/content/drive/MyDrive/Colab Notebooks/project/data/data_table.pkl')
pickle.dump(tfidf, open('/content/drive/MyDrive/Colab Notebooks/project/data/vectorizer.pkl', 'wb'))
```
```
import numpy as np
import pandas as pd
import glob
from astropy.table import Table
import matplotlib.pyplot as plt
import json
import collections
import astropy
spectra_contsep_j193747_1 = Table.read("mansiclass/spec_auto_contsep_lstep1__crr_b_ifu20211023_02_16_15_RCB-J193747.txt", format = "ascii")
spectra_robot_j193747_1 = Table.read("mansiclass/spec_auto_robot_lstep1__crr_b_ifu20211023_02_16_15_RCB-J193747.txt", format = "ascii")
fig = plt.figure(figsize = (20,10))
plt.plot(spectra_contsep_j193747_1["col1"], spectra_contsep_j193747_1["col2"])
spectra_contsep_j193747_2 = Table.read("mansiclass/spec_auto_contsep_lstep1__crr_b_ifu20211023_02_35_52_RCB-J193747.txt", format = "ascii")
spectra_robot_j193747_2 = Table.read("mansiclass/spec_auto_robot_lstep1__crr_b_ifu20211023_02_35_52_RCB-J193747.txt", format = "ascii")
fig = plt.figure(figsize = (20,10))
plt.plot(spectra_contsep_j193747_2["col1"], spectra_contsep_j193747_2["col2"])
#plt.vlines(8500, 0, np.max(spectra_contsep_j193747_2["col2"]))
fig = plt.figure(figsize = (20,10))
plt.plot(spectra_contsep_j193747_2["col1"], spectra_contsep_j193747_2["col2"] + spectra_contsep_j193747_1["col2"])
plt.xlabel(r'Wavelength ($\mathrm{\AA}$)', fontsize=17)
plt.ylabel('Relative Flux', fontsize=17)
spectra_contsep_j193015_1 = Table.read("mansiclass/spec_auto_contsep_lstep1__crr_b_ifu20211023_02_55_33_RCB-J193015.txt", format = "ascii")
spectra_robot_j193015_1 = Table.read("mansiclass/spec_auto_robot_lstep1__crr_b_ifu20211023_02_55_33_RCB-J193015.txt", format = "ascii")
fig = plt.figure(figsize = (20,10))
plt.plot(spectra_contsep_j193015_1["col1"], spectra_contsep_j193015_1["col2"])
spectra_contsep_j193015_2 = Table.read("mansiclass/spec_auto_contsep_lstep1__crr_b_ifu20211023_03_10_45_RCB-J193015.txt", format = "ascii")
spectra_robot_j193015_2 = Table.read("mansiclass/spec_auto_robot_lstep1__crr_b_ifu20211023_03_10_45_RCB-J193015.txt", format = "ascii")
fig = plt.figure(figsize = (20,10))
plt.plot(spectra_contsep_j193015_2["col1"], spectra_contsep_j193015_2["col2"])
fig = plt.figure(figsize = (20,10))
plt.plot(spectra_contsep_j193015_2["col1"], spectra_contsep_j193015_1["col2"] + spectra_contsep_j193015_2["col2"])
fig = plt.figure(figsize = (20,10))
plt.plot(spectra_contsep_j193015_2["col1"], spectra_contsep_j193015_1["col2"] + spectra_contsep_j193015_2["col2"])
plt.plot(spectra_contsep_j193747_2["col1"], spectra_contsep_j193747_2["col2"] + spectra_contsep_j193747_1["col2"])
items = Table.from_pandas(pd.read_csv("visible.csv"))
wanted = items[np.where(items["WiseID"] == "J193015.49+192051.7")[0]]
distance = 1/(wanted["parallax"]/1000)
absolute_M = wanted["phot_g_mean_mag"] - 5 * np.log10(distance)
wanted["parrallax_over_error"]
wanted["parallax_over_error"]
distance
absolute_M
wanted["bp_g"]
wanted
table = astropy.io.fits.open("spec_rcb2894_rcb1536_85.fits")
table[0].data[0]
table.info()  # print a summary of the HDUs in the file
from astropy.io import fits
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize = (20,10))
specfile = 'spec_rcb2894_rcb1536_85.fits'
spec = fits.open(specfile)
data = spec[0].data
wavs = np.ndarray.flatten(np.array([data[3][0],data[2][0],data[1][0],data[0][0]]))
fluxes = np.ndarray.flatten(np.array([data[3][1],data[2][1],data[1][1],data[0][1]]))
wavmask = ((wavs<1.46) & (wavs>1.35)) | ((wavs<1.93) & (wavs>1.8))
wavmask = np.invert(wavmask)
plt.plot(wavs[wavmask],fluxes[wavmask],linewidth=0.7,c='r')
wanted
from scipy.optimize import curve_fit
import pylab as plt
import numpy as np
def blackbody_lam(lam, T):
""" Blackbody as a function of wavelength (um) and temperature (K).
returns units of erg/s/cm^2/cm/Steradian
"""
from scipy.constants import h,k,c
lam = 1e-6 * lam # convert to metres
return 2*h*c**2 / (lam**5 * (np.exp(h*c / (lam*k*T)) - 1))
def func(wa, T1, T2):
return blackbody_lam(wa, T1) + blackbody_lam(wa, T2)
sigma = spectra_contsep_j193015_1["col3"]
ydata = spectra_contsep_j193015_1["col2"]
wa = spectra_contsep_j193015_1["col1"] * 10e-5
spectra_contsep_j193015_1["col2"]
popt, pcov = curve_fit(func, wa, ydata, p0=(2000, 6000), sigma=sigma)
bestT1, bestT2 = popt
sigmaT1, sigmaT2 = np.sqrt(np.diag(pcov))
ybest = blackbody_lam(wa, bestT1) + blackbody_lam(wa, bestT2)
print('Parameters of best-fitting model:')
print(' T1 = %.2f +/- %.2f' % (bestT1, sigmaT1))
print(' T2 = %.2f +/- %.2f' % (bestT2, sigmaT2))
plt.plot(wa, ybest, label='Best fitting\nmodel')
plt.plot(wa, ydata, ls='steps-mid', lw=2, label='Data')
plt.legend(frameon=False)
plt.savefig('fit_bb.png')
from scipy.optimize import curve_fit
import pylab as plt
import numpy as np
def blackbody_lam(lam, T):
""" Blackbody as a function of wavelength (um) and temperature (K).
returns units of erg/s/cm^2/cm/Steradian
"""
from scipy.constants import h,k,c
lam = 1e-6 * lam # convert to metres
return 2*h*c**2 / (lam**5 * (np.exp(h*c / (lam*k*T)) - 1))
wa = np.linspace(0.1, 2, 100) # wavelengths in um
T1 = 5000.
T2 = 8000.
y1 = blackbody_lam(wa, T1)
y2 = blackbody_lam(wa, T2)
ytot = y1 + y2
np.random.seed(1)
# make synthetic data with Gaussian errors
sigma = np.ones(len(wa)) * 1 * np.median(ytot)
ydata = ytot + np.random.randn(len(wa)) * sigma
# plot the input model and synthetic data
plt.figure()
plt.plot(wa, y1, ':', lw=2, label='T1=%.0f' % T1)
plt.plot(wa, y2, ':', lw=2, label='T2=%.0f' % T2)
plt.plot(wa, ytot, ':', lw=2, label='T1 + T2\n(true model)')
plt.plot(wa, ydata, ls='steps-mid', lw=2, label='Fake data')
plt.xlabel('Wavelength (microns)')
plt.ylabel('Intensity (erg/s/cm$^2$/cm/Steradian)')
# fit two blackbodies to the synthetic data
def func(wa, T1, T2):
return blackbody_lam(wa, T1) + blackbody_lam(wa, T2)
# Note the initial guess values for T1 and T2 (p0 keyword below). They
# are quite different to the known true values, but not *too*
# different. If these are too far away from the solution curve_fit()
# will not be able to find a solution. This is not a Python-specific
# problem, it is true for almost every fitting algorithm for
# non-linear models. The initial guess is important!
popt, pcov = curve_fit(func, wa, ydata, p0=(1000, 3000), sigma=sigma)
# get the best fitting parameter values and their 1 sigma errors
# (assuming the parameters aren't strongly correlated).
bestT1, bestT2 = popt
sigmaT1, sigmaT2 = np.sqrt(np.diag(pcov))
ybest = blackbody_lam(wa, bestT1) + blackbody_lam(wa, bestT2)
print('True model values')
print(' T1 = %.2f' % T1)
print(' T2 = %.2f' % T2)
print('Parameters of best-fitting model:')
print(' T1 = %.2f +/- %.2f' % (bestT1, sigmaT1))
print(' T2 = %.2f +/- %.2f' % (bestT2, sigmaT2))
degrees_of_freedom = len(wa) - 2
resid = (ydata - func(wa, *popt)) / sigma
chisq = np.dot(resid, resid)
# plot the solution
plt.plot(wa, ybest, label='Best fitting\nmodel')
plt.legend(frameon=False)
plt.savefig('fit_bb.png')
plt.show()
```
## GANs
Credits: \
https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html \
https://jovian.ai/aakashns/06-mnist-gan
```
from __future__ import print_function
#%matplotlib inline
import argparse
import os
import random
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
# Set random seed for reproducibility
manualSeed = 999
#manualSeed = random.randint(1, 10000) # use if you want new results
print("Random Seed: ", manualSeed)
random.seed(manualSeed)
torch.manual_seed(manualSeed)
# Root directory for dataset
dataroot = "../data/celeba/"
# Number of workers for dataloader
workers = 2
# Batch size during training
batch_size = 128
# Spatial size of training images. All images will be resized to this
# size using a transformer.
image_size = 64
# Number of channels in the training images. For color images this is 3
nc = 3
# Size of z latent vector (i.e. size of generator input)
nz = 100
# Size of feature maps in generator
ngf = 64
# Size of feature maps in discriminator
ndf = 64
# Number of training epochs
num_epochs = 5
# Learning rate for optimizers
lr = 0.0002
# Beta1 hyperparam for Adam optimizers
beta1 = 0.5
# Number of GPUs available. Use 0 for CPU mode.
ngpu = 0
# We can use an image folder dataset the way we have it setup.
# Create the dataset
dataset = dset.ImageFolder(root=dataroot,
transform=transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
# Create the dataloader
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers)
# Decide which device we want to run on
device = torch.device("cuda:0" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
# Plot some training images
real_batch = next(iter(dataloader))
plt.figure(figsize=(8,8))
plt.axis("off")
plt.title("Training Images")
plt.imshow(np.transpose(vutils.make_grid(real_batch[0].to(device)[:64],
padding=2, normalize=True).cpu(),(1,2,0)))
```
<table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by <a href="http://abu.lu.lv" target="_blank">Abuzer Yakaryilmaz</a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\bstate}[1]{ [ \mspace{-1mu} #1 \mspace{-1.5mu} ] } $
## Project | Implementing Quantum Teleportation
We simulate the standard quantum teleportation protocol from Asja to Balvis.
- _Please do not use any quantum programming library or any scientific python library such as `NumPy`._
- _Each qubit starts in state $ \ket{0} $, and each quantum operator should be implemented one by one._
- _The state of quantum system should not be set automatically to certain quantum states._
- _Please write your own code for matrix multiplication and tensoring matrices._
### Create a python class called `quantum_teleportation`
This class simulates a quantum system with three qubits. Asja has the qubits $q_2$ and $q_1$ and Balvis has the qubit $q_0$. The computation of your system is traced by a 8-dimensional vector and so each quantum operator is represented as a ($8 \times 8$)-dimensional matrix. The qubits are combined as $ q_2 \otimes q_1 \otimes q_0 $.
### The methods
For each new instance, the state of $q_2$ is set to a random (real-valued) quantum state.
1. `print_quantum_message()`: Print the initial quantum state of $ q_2 $.
1. `print_state()`: Print the state of system.
Each method given below should be called in the given order. Otherwise, an error should be returned with a warning message.
_The state of the system should be updated after each quantum operator including the measurements on $ q_2 $ and $q_1$._
3. `create_entanglement()`: Create entanglements between the qubits $q_1$ and $q_0$.
1. `balvis_travels()`: Assume that Balvis takes his qubit and goes away.
1. `asja_measures()`: Asja measures her qubits $q_2$ and $q_1$ and returns the measurement outcomes. Remark that the qubit $ q_0 $ is not measured.
Asja observes one of these four results: `00`, `01`, `10`, or `11`.
To implement this measurement operator, we define four different matrices: $ M_{00} $, $ M_{01} $, $ M_{10} $, and $M_{11}$, where $ M_{ab}$ = $ (\ket{ab}\bra{ab}) \otimes I_2 $ is a ($ 8 \times 8 $)-dimensional matrix.
- Remark that $ \ket{ab} $ is a 4-dimensional column vector and $ \bra{ab} $ is the (conjugate) transpose of $ \ket{ab} $, which is a 4-dimensional row vector.
- Therefore, $ \ket{ab}\bra{ab} $ is a matrix multiplication and the result is a ($4 \times 4$)-dimensional matrix.
- $I_2$ is the 2x2-dimensional identity matrix.
Let $\ket{v}$ be the state vector before the measurement. Each outcome has the same probability (1/4) in our case. One of them is selected randomly, say `01`. The new state becomes the normalized version of the vector that is obtained by $ \ket{\widetilde{v_{01}}} = M_{01} \ket{v} $, i.e., the length of $\ket{\widetilde{v_{01}}}$ is less than 1 and so this vector must be multiplied with a factor to make its length 1.
6. `asja_sends_measument_outcomes(outcome)`: Asja sends the measurement outcomes to Balvis such as `10`.
1. `balvis_post_processing()`: Apply post-processing quantum operators to Balvis’ qubit (if necessary) depending on the measurement outcomes received from Asja.
Test your class by checking the quantum state after each step and also verify whether the quantum message prepared by Asja is teleported to Balvis' qubit or not.
| github_jupyter | <table> <tr>
<td style="background-color:#ffffff;">
<a href="http://qworld.lu.lv" target="_blank"><img src="../images/qworld.jpg" width="25%" align="left"> </a></td>
<td style="background-color:#ffffff;vertical-align:bottom;text-align:right;">
prepared by <a href="http://abu.lu.lv" target="_blank">Abuzer Yakaryilmaz</a> (<a href="http://qworld.lu.lv/index.php/qlatvia/" target="_blank">QLatvia</a>)
</td>
</tr></table>
<table width="100%"><tr><td style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;text-align:right;">This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr></table>
$ \newcommand{\bra}[1]{\langle #1|} $
$ \newcommand{\ket}[1]{|#1\rangle} $
$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $
$ \newcommand{\dot}[2]{ #1 \cdot #2} $
$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $
$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $
$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $
$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $
$ \newcommand{\mypar}[1]{\left( #1 \right)} $
$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $
$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $
$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $
$ \newcommand{\onehalf}{\frac{1}{2}} $
$ \newcommand{\donehalf}{\dfrac{1}{2}} $
$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $
$ \newcommand{\vzero}{\myvector{1\\0}} $
$ \newcommand{\vone}{\myvector{0\\1}} $
$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $
$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $
$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $
$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $
$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $
$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $
$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $
$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $
$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $
$ \newcommand{\bstate}[1]{ [ \mspace{-1mu} #1 \mspace{-1.5mu} ] } $
## Project | Implementing Quantum Teleportation
We simulate the standard quantum teleportation protocol between Asja to Balvis.
- _Please do not use any quantum programming library or any scientific python library such as `NumPy`._
- _Each qubit starts in state $ \ket{0} $, and each quantum operator should be implemented one by one._
- _The state of quantum system should not be set automatically to certain quantum states._
- _Please write your own code for matrix multiplication and tensoring matrices._
### Create a python class called `quantum_teleportation`
This class simulates a quantum system with three qubits. Asja has the qubits $q_2$ and $q_1$ and Balvis has the qubit $q_0$. The computation of your system is traced by a 8-dimensional vector and so each quantum operator is represented as a ($8 \times 8$)-dimensional matrix. The qubits are combined as $ q_2 \otimes q_1 \otimes q_0 $.
### The methods
For each new instance, the state of $q_2$ is set to a random (real-valued) quantum state.
1. `print_quantum_message()`: Print the initial quantum state of $ q_2 $.
1. `print_state()`: Print the state of system.
Each method given below should be called in the given order. Otherwise, an error should be returned with a warning message.
_The state of the system should be updated after each quantum operator including the measurements on $ q_2 $ and $q_1$._
3. `create_entanglement()`: Create entanglements between the qubits $q_1$ and $q_0$.
1. `balvis_travels()`: Assume that Balvis takes his qubit and goes away.
1. `asja_measures()`: Asja measures her qubits $q_2$ and $q_1$ and returns the measurement outcomes. Note that the qubit $ q_0 $ is not measured.
Asja observes one of these four results: `00`, `01`, `10`, or `11`.
To implement this measurement operator, we define four different matrices: $ M_{00} $, $ M_{01} $, $ M_{10} $, and $ M_{11} $, where $ M_{ab} = (\ket{ab}\bra{ab}) \otimes I_2 $ is an ($ 8 \times 8 $)-dimensional matrix.
- Remark that $ \ket{ab} $ is a 4-dimensional column vector and $ \bra{ab} $ is the (conjugate) transpose of $ \ket{ab} $, which is a 4-dimensional row vector.
- Therefore, $ \ket{ab}\bra{ab} $ is a matrix multiplication and the result is a ($4 \times 4$)-dimensional matrix.
- $I_2$ is the ($2 \times 2$)-dimensional identity matrix.
Let $\ket{v}$ be the state vector before the measurement. Each outcome has the same probability (1/4) in our case. One of them is selected randomly, say `01`. The new state becomes the normalized version of the vector obtained as $ \ket{\widetilde{v_{01}}} = M_{01} \ket{v} $; i.e., the length of $\ket{\widetilde{v_{01}}}$ is less than 1, so this vector must be multiplied by a suitable factor to make its length 1.
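As an illustration of this step only (not a prescribed implementation), the sketch below builds $M_{01}$ and normalizes the post-measurement state; all helper names here are our own:
```
from math import sqrt

def outer(col, row):                   # |a><b| for real-valued vectors
    return [[c * r for r in row] for c in col]

def tensor(A, B):                      # Kronecker (tensor) product
    return [[a * b for a in rowA for b in rowB] for rowA in A for rowB in B]

def apply(M, v):                       # matrix-vector multiplication
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

def normalize(v):                      # rescale the vector to length 1
    length = sqrt(sum(x * x for x in v))
    return [x / length for x in v]

ket01 = [0, 1, 0, 0]                   # |01> as a 4-dimensional column vector
I2 = [[1, 0], [0, 1]]
M01 = tensor(outer(ket01, ket01), I2)  # (|01><01|) tensor I_2, an (8x8) matrix

v = [1 / sqrt(8)] * 8                  # an example 3-qubit state
v01 = normalize(apply(M01, v))         # post-measurement state for outcome 01
print(v01)
```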
6. `asja_sends_measurement_outcomes(outcome)`: Asja sends the measurement outcomes to Balvis, e.g., `10`.
1. `balvis_post_processing()`: Apply post-processing quantum operators to Balvis’ qubit (if necessary) depending on the measurement outcomes received from Asja.
Test your class by checking the quantum state after each step, and verify whether the quantum message prepared by Asja is teleported to Balvis' qubit.
```
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/My Drive/Colab Notebooks/Udacity/deep-learning-v2-pytorch/convolutional-neural-networks/conv-visualization')
```
# Maxpooling Layer
In this notebook, we add and visualize the output of a maxpooling layer in a CNN.
A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN.
<img src='notebook_ims/CNN_all_layers.png' height=50% width=50% />
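As a rough sketch of that stacking (layer sizes below are placeholders we chose, assuming 28x28 grayscale inputs), such a network could be declared as:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self):
        super(TinyCNN, self).__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=4)   # convolutional layer
        self.pool = nn.MaxPool2d(2, 2)               # maxpooling layer
        self.fc = nn.Linear(4 * 12 * 12, 10)         # linear layer -> 10 output scores

    def forward(self, x):
        x = self.pool(F.relu(self.conv(x)))          # conv + activation, then pooling
        x = x.view(x.size(0), -1)                    # flatten the feature maps
        return self.fc(x)

print(TinyCNN()(torch.zeros(1, 1, 28, 28)).shape)    # torch.Size([1, 10])
```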
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
```
### Define convolutional and pooling layers
You've seen how to define a convolutional layer; the next step is to add a:
* Pooling layer
In the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!
A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
<img src='notebook_ims/maxpooling_ex.png' height=50% width=50% />
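To make the operation concrete, here is a small, self-contained check (the pixel values are made up for illustration) that applies a 2x2 max pool with stride 2 to a 4x4 patch:
```
import torch
import torch.nn as nn

# a 4x4 patch of example pixel values, shaped as (batch, channel, height, width)
patch = torch.tensor([[1., 2., 0., 1.],
                      [4., 3., 1., 0.],
                      [0., 1., 5., 2.],
                      [2., 0., 3., 4.]]).reshape(1, 1, 4, 4)

pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(patch).squeeze())
# tensor([[4., 1.],
#         [2., 5.]])
```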
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer after a ReLU activation function is applied.
#### ReLU activation
A ReLU function turns all negative pixel values into 0 (black). See the equation pictured below for input pixel values, `x`.
<img src='notebook_ims/relu_ex.png' height=50% width=50% />
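In other words, ReLU computes `f(x) = max(0, x)` element-wise; a quick sketch with made-up values:
```
import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, -0.5, 0.0, 0.7, 3.0])
print(F.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 0.7000, 3.0000])
```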
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
```
### Visualize the output of the pooling layer
Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.
Take a look at the values on the x, y axes to see how the image has changed size.
```
# visualize the output of the pooling layer
viz_layer(pooled_layer)
```
![terrainbento logo](../images/terrainbento_logo.png)
# Introduction to boundary conditions in terrainbento.
## Overview
This tutorial shows example usage of the terrainbento boundary handlers. For comprehensive information about all options and defaults, refer to the [documentation](http://terrainbento.readthedocs.io/en/latest/).
## Prerequisites
This tutorial assumes you have at least skimmed the [terrainbento manuscript](https://www.geosci-model-dev.net/12/1267/2019/) and worked through the [Introduction to terrainbento](Introduction_to_terrainbento.ipynb) tutorial.
### terrainbento boundary handlers
terrainbento includes five boundary handlers designed to make it easier to treat different model run boundary conditions. Four boundary handlers modify the model grid in order to change the base level the model sees. The final one calculates how changes in precipitation distribution statistics change the value of erodibility by water. Hyperlinks in the list below go to the documentation for each of the boundary condition handlers.
1. [`CaptureNodeBaselevelHandler`](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.boundary_handlers.capture_node_baselevel_handler.html?highlight=capture%20node) implements external drainage capture.
2. [`SingleNodeBaselevelHandler`](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.boundary_handlers.single_node_baselevel_handler.html?highlight=SingleNodeBaselevelHandler) modifies the elevation of one model grid node, intended to be the outlet of a modeled watershed.
3. [`NotCoreNodeBaselevelHandler`](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.boundary_handlers.not_core_node_baselevel_handler.html?highlight=NotCoreNodeBaselevelHandler) either increments all the core nodes, or all the not-core nodes up or down.
4. [`GenericFuncBaselevelHandler`](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.boundary_handlers.generic_function_baselevel_handler.html?highlight=GenericFuncBaselevelHandler) is a generic boundary condition handler that modifies the model grid based on a user specified function of the model grid and model time.
5. [`PrecipChanger`](https://terrainbento.readthedocs.io/en/latest/source/terrainbento.boundary_handlers.precip_changer.html?highlight=PrecipChanger) modifies precipitation distribution parameters (in **St** models) or erodibility by water (all other models).
If you have additional questions related to using the boundary handlers, or if your research requires additional tools to handle boundary conditions, please let us know by making an [Issue on GitHub](https://github.com/TerrainBento/terrainbento/issues).
In the `SingleNodeBaselevelHandler` and the `NotCoreNodeBaselevelHandler`, the rate of baselevel fall at a single node or at all not-core model grid nodes can be specified as a constant rate or as a time-elevation history. These and other options are described in the documentation. Note that a model instance can have more than one boundary handler at a time.
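For example, attaching two handlers to one model is just a matter of listing both under the `boundary_handlers` entry of the parameter dictionary. The sketch below is illustrative only: the handler choices, the rates, and the uplift function are made-up examples.
```
# illustrative sketch only: the rates and the function are made-up examples
def my_uplift_function(grid, t):
    return -0.0005 + 0.0 * grid.y_of_node  # a spatially uniform, constant rate

two_handler_config = {
    "NotCoreNodeBaselevelHandler": {"lowering_rate": -0.001},
    "GenericFuncBaselevelHandler": {
        "modify_core_nodes": True,
        "function": my_uplift_function,
    },
}
# `two_handler_config` would be supplied as the "boundary_handlers" entry of a
# model parameter dictionary like the ones constructed below.
```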
The swiss-army knife of boundary condition handling is the `GenericFuncBaselevelHandler` so we will focus on it today.
### Example Usage
To begin, we will import the required python modules.
```
import numpy as np
np.random.seed(42)
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import holoviews as hv
hv.notebook_extension('matplotlib')
from terrainbento import Basic
```
Next we will create the parameter dictionary needed to instantiate the Basic model. All parameters used are specified in this notebook block. Refer to the base class and individual model documentation for required parameters. Let's start with an initial topography of 1000 m and drop all the baselevel nodes.
```
basic_params = {
# create the Clock.
"clock": {
"start": 0,
"step": 1000,
"stop": 2e5
},
# Create the Grid
"grid": {
"RasterModelGrid": [
(25, 40),
{
"xy_spacing": 40
},
{
"fields": {
"node": {
"topographic__elevation": {
"random": [{
"where": "CORE_NODE"
}],
"constant": [{
"value": 1000.
}]
}
}
}
},
]
},
# Set up Boundary Handlers
"boundary_handlers": {
"NotCoreNodeBaselevelHandler": {
"lowering_rate": -0.001
}
},
# Parameters that control output.
"output_interval": 1e4,
"save_first_timestep": True,
"output_prefix": "model_basic_output_basicBC1",
"fields": ["topographic__elevation"],
# Parameters that control process and rates.
"water_erodibility": 0.00005,
"m_sp": 0.5,
"n_sp": 1.0,
"regolith_transport_parameter": 0.00001,
}
basic = Basic.from_dict(basic_params)
basic.run()
ds = basic.to_xarray_dataset(time_unit='years', space_unit='meters')
hvds_topo = hv.Dataset(ds.topographic__elevation)
topo = hvds_topo.to(hv.Image, ['x', 'y'],
label='Basic').options(interpolation='bilinear',
cmap='viridis',
colorbar=True)
topo.opts(fontsize={
'title': 10,
'labels': 10,
'xticks': 10,
'yticks': 10,
'cticks': 10,
})
topo
```
Finally, we close the xarray dataset and use the model method `remove_output_netcdfs` to remove the files created by running the model.
```
ds.close()
basic.remove_output_netcdfs()
```
Now, let's implement a situation where we start from zero topography and uplift the core nodes.
```
basic_params2 = {
# create the Clock.
"clock": {
"start": 0,
"step": 1000,
"stop": 2e5
},
# Create the Grid
"grid": {
"RasterModelGrid": [
(25, 40),
{
"xy_spacing": 40
},
{
"fields": {
"node": {
"topographic__elevation": {
"random": [{
"where": "CORE_NODE"
}],
"constant": [{
"value": 0.
}]
}
}
}
},
]
},
# Set up Boundary Handlers
"boundary_handlers": {
"NotCoreNodeBaselevelHandler": {
"modify_core_nodes": True,
"lowering_rate": -0.001
}
},
# Parameters that control output.
"output_interval": 1e4,
"save_first_timestep": True,
"output_prefix": "model_basic_output_basicBC2",
"fields": ["topographic__elevation"],
# Parameters that control process and rates.
"water_erodibility": 0.00005,
"m_sp": 0.5,
"n_sp": 1.0,
"regolith_transport_parameter": 0.00001,
}
basic2 = Basic.from_dict(basic_params2)
basic2.run()
ds2 = basic2.to_xarray_dataset(time_unit='years', space_unit='meters')
hvds_topo2 = hv.Dataset(ds2.topographic__elevation)
topo2 = hvds_topo2.to(hv.Image, ['x', 'y'],
label='Basic').options(interpolation='bilinear',
cmap='viridis',
colorbar=True)
topo2.opts(fontsize={
'title': 10,
'labels': 10,
'xticks': 10,
'yticks': 10,
'cticks': 10,
})
topo2
ds2.close()
basic2.remove_output_netcdfs()
```
Rather than taking a constant baselevel fall rate, the `GenericFuncBaselevelHandler` takes a function. This function is expected to accept two arguments --- the model grid and the elapsed model integration time --- and return an array of size number-of-model-grid-nodes that represents the spatially variable rate of boundary lowering or core-node uplift.
For our example we will create a model grid initially at ~1000 m elevation at all grid nodes, then we will progressively drop the model boundary elevations. We will vary the spatial and temporal pattern of boundary elevations such that the boundaries will drop more rapidly at the beginning of the model run than at the end and the boundaries will drop more on the bottom of the model grid domain than on the top.
If you are not familiar with user defined python functions, consider reviewing [this tutorial](https://www.datacamp.com/community/tutorials/functions-python-tutorial#udf).
Thus our function will look as follows:
```
def dropping_boundary_condition_1(grid, t):
f = 0.007
dzdt = -1. * (2e5 - t) / 2e5 * f * (
(grid.y_of_node.max() - grid.y_of_node) / grid.y_of_node.max())
return dzdt
```
Importantly, note that this function returns the *rate* at which the boundary will drop, *not* the elevation of the boundary through time.
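As a quick, standalone sanity check (using a mocked-up grid with made-up coordinates rather than the real model grid), we can evaluate the function at the start and the end of the run and confirm that the rates decay toward zero:
```
import numpy as np

class MockGrid:
    # only the attribute used by the function is mocked; the values are made up
    y_of_node = np.linspace(0, 960, 25)

rates_start = dropping_boundary_condition_1(MockGrid(), 0)
rates_end = dropping_boundary_condition_1(MockGrid(), 2e5)
print(rates_start.min(), rates_start.max())  # fastest drop (-0.007) at the bottom edge, zero at the top
print(abs(rates_end).max())                  # all rates are 0 at the end of the run
```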
Next we construct the parameter dictionary we need to initialize the terrainbento model. For this example we will just use the **Basic** model.
In order to specify that we want to use the `GenericFuncBaselevelHandler` we provide it as an entry under the `boundary_handlers` parameter. We can provide the parameters the baselevel handler needs directly in the parameter dictionary, or we can create a new sub-dictionary, as is done below.
```
import numpy as np
np.random.seed(42)
basic_params3 = {
# create the Clock.
"clock": {
"start": 0,
"step": 1000,
"stop": 2e5
},
# Create the Grid.
"grid": {
"RasterModelGrid": [(25, 40), {
"xy_spacing": 40
}, {
"fields": {
"node": {
"topographic__elevation": {
"random": [{
"where": "CORE_NODE"
}],
"constant": [{
"value": 1000.
}]
}
}
}
}]
},
# Set up Boundary Handlers
"boundary_handlers": {
"GenericFuncBaselevelHandler": {
"modify_core_nodes" : False,
"function": dropping_boundary_condition_1
}
},
# Parameters that control output.
"output_interval": 1e4,
"save_first_timestep": True,
"output_prefix": "model_basic_output_intro_bc3",
"fields": ["topographic__elevation"],
# Parameters that control process and rates.
"water_erodibility": 0.0001,
"m_sp": 0.5,
"n_sp": 1.0,
"regolith_transport_parameter": 0,
}
```
Next we create a model instance, run it, create an xarray dataset of the model output, and convert it to the holoviews format.
```
basic3 = Basic.from_dict(basic_params3)
basic3.run()
ds3 = basic3.to_xarray_dataset(time_unit='years', space_unit='meters')
hvds_topo3 = hv.Dataset(ds3.topographic__elevation)
```
Finally we create an image of the topography with a slider bar.
```
%opts Image style(interpolation='bilinear', cmap='viridis') plot[colorbar=True]
topo3 = hvds_topo3.to(hv.Image, ['x', 'y'], label='Rate Decreases')
topo3.opts(fontsize={
'title': 10,
'labels': 10,
'xticks': 10,
'yticks': 10,
'cticks': 10,
})
topo3
```
### GSA Boundary condition
Now let's create an uplift field where some of the core nodes are being uplifted.
For our example we will create a model grid initially at 0 m elevation at all grid nodes, then we will progressively uplift the model core nodes. We will assume a constant spatial and temporal pattern of uplift rates for the core nodes.
If you are not familiar with user defined python functions, consider reviewing [this tutorial](https://www.datacamp.com/community/tutorials/functions-python-tutorial#udf).
Our function will look as follows:
```
def dropping_boundary_condition_GSA(grid,t):
M = np.zeros((25,40))
M[6:18, 1:5] = 1;M[12:18, 9:13] = 1;M[3:7,1:13]=1;M[18:22,1:13]=1;M[11:14,7:13]=1
M[6:12, 15:18] = 1;M[12:18, 23:26] = 1;M[3:7,15:26]=1;M[10:14,15:26]=1;M[18:22,15:26]=1
M[3:7, 28:39] = 1;M[10:14, 28:39] = 1;M[3:22,28:32]=1;M[3:22,35:39]=1;
M = np.flipud(M)
dzdt = -0.001*M.flatten()
return dzdt
```
Next, we will make a new model that is exactly the same as the previous one, except that it uses the new function, a different output file name, and a lower `water_erodibility` constant (changed to 1e-5).
```
basic_gsa_params = {
# create the Clock.
"clock": {
"start": 0,
"step": 1000,
"stop": 2e5
},
# Create the Grid.
"grid": {
"RasterModelGrid": [(25, 40), {
"xy_spacing": 40
}, {
"fields": {
"node": {
"topographic__elevation": {
"random": [{
"where": "CORE_NODE"
}],
"constant": [{
"value": 1000.
}]
}
}
}
}]
},
# Set up Boundary Handlers
"boundary_handlers": {
"GenericFuncBaselevelHandler": {
"modify_core_nodes" : True,
"function": dropping_boundary_condition_GSA
}
},
# Parameters that control output.
"output_interval": 1e4,
"save_first_timestep": True,
"output_prefix": "model_basic_output_intro_bc_gsa1",
"fields": ["topographic__elevation"],
# Parameters that control process and rates.
"water_erodibility": 0.0001,
"m_sp": 0.5,
"n_sp": 1.0,
"regolith_transport_parameter": 0,
}
```
Next we create a model instance
```
basic_gsa = Basic.from_dict(basic_gsa_params)
```
Run it, create an xarray dataset of the model output, and convert it to the holoviews format.
```
basic_gsa.run()
ds_gsa = basic_gsa.to_xarray_dataset(time_unit='years', space_unit='meters')
hvds_topo_gsa = hv.Dataset(ds_gsa.topographic__elevation)
```
Finally we create an image of the topography with a slider bar.
```
%opts Image style(interpolation='bilinear', cmap='viridis') plot[colorbar=True]
topo_gsa = hvds_topo_gsa.to(hv.Image, ['x', 'y'], label='topo_GSA')
topo_gsa.opts(fontsize={
'title': 10,
'labels': 10,
'xticks': 10,
'yticks': 10,
'cticks': 10,
})
topo_gsa
```
## Challenge -- Contrasting with a slightly different boundary condition
If we wanted a different pattern, we would just need to change the function. Try to run a model in which the rate of boundary lowering increases through time instead of decreasing through time:
Define `dropping_boundary_condition_4` as a function (one possible sketch is shown below).
Next, make a new model that is exactly the same as the previous one but uses the new function and a different output file name.
Run the model and plot the results with holoviews.
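One possible solution sketch follows; the function and the parameter edits are our own choices (any rate that grows with time will do), and `fourth_model_params` simply reuses `basic_params3` with the new function and a new output prefix:
```
# one possible answer: mirror dropping_boundary_condition_1, but let the rate
# grow linearly with time by using (t / 2e5) instead of (2e5 - t) / 2e5
def dropping_boundary_condition_4(grid, t):
    f = 0.007
    dzdt = -1. * (t / 2e5) * f * (
        (grid.y_of_node.max() - grid.y_of_node) / grid.y_of_node.max())
    return dzdt

fourth_model_params = dict(basic_params3)
fourth_model_params["boundary_handlers"] = {
    "GenericFuncBaselevelHandler": {
        "modify_core_nodes": False,
        "function": dropping_boundary_condition_4,
    }
}
fourth_model_params["output_prefix"] = "model_basic_output_intro_bc4"
```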
```
basic4 = Basic.from_dict(fourth_model_params)
basic4.run()
ds4 = basic4.to_xarray_dataset(time_unit='years', space_unit='meters')
hvds_topo4 = hv.Dataset(ds4.topographic__elevation)
%opts Image style(interpolation='bilinear', cmap='viridis') plot[colorbar=True]
topo4 = hvds_topo4.to(hv.Image, ['x', 'y'], label='Rate Increases')
topo4.opts(fontsize={
'title': 10,
'labels': 10,
'xticks': 10,
'yticks': 10,
'cticks': 10,
})
topo_gsa + topo3 + topo4
```
As you can see, the landscapes created by the **Basic** model with the two slightly different boundary conditions are different. One thing to think about is what sort of geologic settings might create each of these two alternative boundary conditions and how you could quantitatively compare these two output landscapes.
Finally, we close the xarray datasets and use the model function `remove_output_netcdfs` to remove the files created by running the model.
```
del topo, hvds_topo, topo2, hvds_topo2, topo3, hvds_topo3
ds_gsa.close()
ds3.close()
ds4.close()
basic_gsa.remove_output_netcdfs()
basic3.remove_output_netcdfs()
basic4.remove_output_netcdfs()
```
## Next Steps
- We recommend you review the [terrainbento manuscript](https://www.geosci-model-dev.net/12/1267/2019/).
- There are three additional introductory tutorials:
1) [Introduction to terrainbento](Introduction_to_terrainbento.ipynb)
2) **This Notebook**: [Introduction to boundary conditions in terrainbento](introduction_to_boundary_conditions.ipynb)
3) [Introduction to output writers in terrainbento](introduction_to_output_writers.ipynb).
- Five examples of steady state behavior in coupled process models can be found in the following notebooks:
1) [Basic](../coupled_process_elements/model_basic_steady_solution.ipynb), the simplest landscape evolution model in the terrainbento package.
2) [BasicVm](../coupled_process_elements/model_basic_var_m_steady_solution.ipynb), which permits the drainage area exponent to change.
3) [BasicCh](../coupled_process_elements/model_basicCh_steady_solution.ipynb), which uses a non-linear hillslope erosion and transport law.
4) [BasicVs](../coupled_process_elements/model_basicVs_steady_solution.ipynb), which uses variable source area hydrology.
5) [BasicRt](../coupled_process_elements/model_basicRt_steady_solution.ipynb), which allows for two lithologies with different K values.
To start this Jupyter Dash app, please run all the cells below. Then, click on the **temporary** URL at the end of the last cell to open the app.
```
!pip install -q jupyter-dash==0.3.0rc1 dash-bootstrap-components transformers
import time
import dash
import dash_html_components as html
import dash_core_components as dcc
import dash_bootstrap_components as dbc
from dash.dependencies import Input, Output, State
from jupyter_dash import JupyterDash
from transformers import BartTokenizer, BartForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
# Load Model
pretrained = "sshleifer/distilbart-xsum-12-6"
model = BartForConditionalGeneration.from_pretrained(pretrained)
tokenizer = BartTokenizer.from_pretrained(pretrained)
# Switch to cuda, eval mode, and FP16 for faster inference
if device == "cuda":
model = model.half()
model.to(device)
model.eval();
# Define app
app = JupyterDash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
server = app.server
controls = dbc.Card(
[
dbc.FormGroup(
[
dbc.Label("Output Length (# Tokens)"),
dcc.Slider(
id="max-length",
min=10,
max=50,
value=30,
marks={i: str(i) for i in range(10, 51, 10)},
),
]
),
dbc.FormGroup(
[
dbc.Label("Beam Size"),
dcc.Slider(
id="num-beams",
min=2,
max=6,
value=4,
marks={i: str(i) for i in [2, 4, 6]},
),
]
),
dbc.FormGroup(
[
dbc.Spinner(
[
dbc.Button("Summarize", id="button-run"),
html.Div(id="time-taken"),
]
)
]
),
],
body=True,
style={"height": "275px"},
)
# Define Layout
app.layout = dbc.Container(
fluid=True,
children=[
html.H1("Dash Automatic Summarization (with DistilBART)"),
html.Hr(),
dbc.Row(
[
dbc.Col(
width=5,
children=[
controls,
dbc.Card(
body=True,
children=[
dbc.FormGroup(
[
dbc.Label("Summarized Content"),
dcc.Textarea(
id="summarized-content",
style={
"width": "100%",
"height": "calc(75vh - 275px)",
},
),
]
)
],
),
],
),
dbc.Col(
width=7,
children=[
dbc.Card(
body=True,
children=[
dbc.FormGroup(
[
dbc.Label("Original Text (Paste here)"),
dcc.Textarea(
id="original-text",
style={"width": "100%", "height": "75vh"},
),
]
)
],
)
],
),
]
),
],
)
@app.callback(
[Output("summarized-content", "value"), Output("time-taken", "children")],
[
Input("button-run", "n_clicks"),
Input("max-length", "value"),
Input("num-beams", "value"),
],
[State("original-text", "value")],
)
def summarize(n_clicks, max_len, num_beams, original_text):
if original_text is None or original_text == "":
return "", "Did not run"
t0 = time.time()
inputs = tokenizer.batch_encode_plus(
[original_text], max_length=1024, return_tensors="pt"
)
inputs = inputs.to(device)
# Generate Summary
summary_ids = model.generate(
inputs["input_ids"],
num_beams=num_beams,
max_length=max_len,
early_stopping=True,
)
out = [
tokenizer.decode(
g, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
for g in summary_ids
]
t1 = time.time()
time_taken = f"Summarized on {device} in {t1-t0:.2f}s"
return out[0], time_taken
```
Run the cell below to run your Jupyter Dash app. Click on the **temporary** URL to access the app.
```
app.run_server(mode='inline')
```
# Identifying Bees Using Crowd Sourced Data using Amazon SageMaker
### Table of contents
1. [Introduction to dataset](#introduction)
2. [Labeling with Amazon SageMaker Ground Truth](#groundtruth)
3. [Reviewing labeling results](#review)
4. [Training an Object Detection model](#training)
5. [Review of Training Results](#review_training)
6. [Model Tuning](#model_tuning)
7. [Cleanup](#cleanup)
<a name="introduction"></a>
## Introduction to dataset
We will use a dataset from [inaturalist.org](https://www.inaturalist.org). This dataset contains 500 images of bees that have been uploaded by iNaturalist users for the purposes of recording observations and identification. We only used images that their users have licensed under the [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/) license. For your convenience, we have placed the dataset in S3 in a single zip archive here: http://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DIG-TF-200-MLBEES-10-EN/dataset.zip
First, download and unzip the archive.
```
!wget http://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/DIG-TF-200-MLBEES-10-EN/dataset.zip
!unzip -qo dataset.zip
```
The archive contains the following structure: 500 `.jpg` image files, a manifest file (to be explained later) and 10 test images in the `test` subfolder.
```
!unzip -l dataset.zip | tail -20
```
Now let's upload this dataset to your own S3 bucket in preparation for labeling and training using Amazon SageMaker. For this demo, we will be using `us-west-2` region, so your bucket needs to be in this region.
```
# S3 bucket must be created in us-west-2 (Oregon) region
BUCKET = 'denisb-sagemaker-oregon'
PREFIX = 'input' # this is the root path to your working space, feel free to use a different path
!aws s3 sync --exclude="*" --include="[0-9]*.jpg" . s3://$BUCKET/$PREFIX/
```
## Labeling with SageMaker Ground Truth <a name="groundtruth"></a>
Now, we are ready to run your first labeling with Amazon SageMaker Ground Truth. Follow the steps shown in the recording.
When specifying information needed to configure the labeling UI tool, use the following information:
- Brief task description: _"Draw a bounding box around the bee in this image."_
- Labels: _"bee"_
- Good example description: _"bounding box includes all visible parts of the insect - legs, antennae, etc."_
- Good example image: https://s3.us-west-2.amazonaws.com/aws-tc-largeobjects/DIG-TF-200-MLBEES-10-EN/bee-good-5535715.jpg
- Bad example description: _"bounding box is too big and/or excludes some visible parts of the insect"_
- Bad example image: https://s3.us-west-2.amazonaws.com/aws-tc-largeobjects/DIG-TF-200-MLBEES-10-EN/bee-bad-5535715.jpg
## Reviewing labeling results
<a name="reviewing"></a>
After the labeling job has completed, we can see the results of image annotations right in the SageMaker console itself. The console displays each image as well as the bounding boxes around the bees that were drawn by human labelers.
At the same time we can examine the results in the so-called augmented manifest file that was generated. Let's download and examine the manifest file.
```
# Enter the name of your job here
labeling_job_name = 'bees'
import boto3
client = boto3.client('sagemaker')
s3_output = client.describe_labeling_job(LabelingJobName=labeling_job_name)['OutputConfig']['S3OutputPath'] + labeling_job_name
augmented_manifest_url = f'{s3_output}/manifests/output/output.manifest'
import os
import shutil
try:
os.makedirs('od_output_data/', exist_ok=False)
except FileExistsError:
shutil.rmtree('od_output_data/')
# now download the augmented manifest file and display first 3 lines
!aws s3 cp $augmented_manifest_url od_output_data/
augmented_manifest_file = 'od_output_data/output.manifest'
!head -3 $augmented_manifest_file
```
Now let's plot all the annotated images. First, let's define a function that displays the local image file and draws over it the bounding boxes obtained via labeling.
```
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image
import numpy as np
from itertools import cycle
def show_annotated_image(img_path, bboxes):
im = np.array(Image.open(img_path), dtype=np.uint8)
# Create figure and axes
fig,ax = plt.subplots(1)
# Display the image
ax.imshow(im)
colors = cycle(['r', 'g', 'b', 'y', 'c', 'm', 'k', 'w'])
for bbox in bboxes:
# Create a Rectangle patch
rect = patches.Rectangle((bbox['left'],bbox['top']),bbox['width'],bbox['height'],linewidth=1,edgecolor=next(colors),facecolor='none')
# Add the patch to the Axes
ax.add_patch(rect)
plt.show()
```
Next, read the augmented manifest (JSON lines format) line by line and display the first 10 images.
```
!pip -q install --upgrade pip
!pip -q install jsonlines
import jsonlines
from itertools import islice
with jsonlines.open(augmented_manifest_file, 'r') as reader:
for desc in islice(reader, 10):
img_url = desc['source-ref']
img_file = os.path.basename(img_url)
file_exists = os.path.isfile(img_file)
bboxes = desc[labeling_job_name]['annotations']
show_annotated_image(img_file, bboxes)
```
<a name='training'></a>
## Training an Object Detection Model
We are now ready to use the labeled dataset in order to train a Machine Learning model using the SageMaker [built-in Object Detection algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/object-detection.html).
For this, we would need to split the full labeled dataset into a training and a validation datasets. Out of the total of 500 images we are going to use 400 for training and 100 for validation. The algorithm will use the first one to train the model and the latter to estimate the accuracy of the model, trained so far. The augmented manifest file from the previously run full labeling job was included in the original zip archive as `output.manifest`.
```
import json
with jsonlines.open('output.manifest', 'r') as reader:
lines = list(reader)
# Shuffle data in place.
np.random.shuffle(lines)
dataset_size = len(lines)
num_training_samples = round(dataset_size*0.8)
train_data = lines[:num_training_samples]
validation_data = lines[num_training_samples:]
augmented_manifest_filename_train = 'train.manifest'
with open(augmented_manifest_filename_train, 'w') as f:
for line in train_data:
f.write(json.dumps(line))
f.write('\n')
augmented_manifest_filename_validation = 'validation.manifest'
with open(augmented_manifest_filename_validation, 'w') as f:
for line in validation_data:
f.write(json.dumps(line))
f.write('\n')
print(f'training samples: {num_training_samples}, validation samples: {len(lines)-num_training_samples}')
```
Next, let's upload the two manifest files to S3 in preparation for training. We will use the same bucket you created earlier.
```
pfx_training = PREFIX + '/training' if PREFIX else 'training'
# Defines paths for use in the training job request.
s3_train_data_path = 's3://{}/{}/{}'.format(BUCKET, pfx_training, augmented_manifest_filename_train)
s3_validation_data_path = 's3://{}/{}/{}'.format(BUCKET, pfx_training, augmented_manifest_filename_validation)
!aws s3 cp train.manifest s3://$BUCKET/$pfx_training/
!aws s3 cp validation.manifest s3://$BUCKET/$pfx_training/
```
We are now ready to kick off the training. We will do it from the SageMaker console, but alternatively, you can just run this code in a new cell using SageMaker Python SDK:
### Code option
```
import time
import sagemaker
role = sagemaker.get_execution_role()
sess = sagemaker.Session()
training_image = sagemaker.amazon.amazon_estimator.get_image_uri(
boto3.Session().region_name, 'object-detection', repo_version='latest')
s3_output_path = 's3://{}/{}/output'.format(BUCKET, pfx_training)
# Create unique job name
training_job_name = 'bees-detection-resnet'
training_params = \
{
"AlgorithmSpecification": {
# NB. This is one of the named constants defined in the first cell.
"TrainingImage": training_image,
"TrainingInputMode": "Pipe"
},
"RoleArn": role,
"OutputDataConfig": {
"S3OutputPath": s3_output_path
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.p2.xlarge",
"VolumeSizeInGB": 50
},
"TrainingJobName": training_job_name,
"HyperParameters": { # NB. These hyperparameters are at the user's discretion and are beyond the scope of this demo.
"base_network": "resnet-50",
"use_pretrained_model": "1",
"num_classes": "1",
"mini_batch_size": "1",
"epochs": "100",
"learning_rate": "0.001",
"lr_scheduler_step": "",
"lr_scheduler_factor": "0.1",
"optimizer": "sgd",
"momentum": "0.9",
"weight_decay": "0.0005",
"overlap_threshold": "0.5",
"nms_threshold": "0.45",
"image_shape": "300",
"label_width": "350",
"num_training_samples": str(num_training_samples)
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 86400
},
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "AugmentedManifestFile", # NB. Augmented Manifest
"S3Uri": s3_train_data_path,
"S3DataDistributionType": "FullyReplicated",
# NB. This must correspond to the JSON field names in your augmented manifest.
"AttributeNames": ['source-ref', 'bees-500']
}
},
"ContentType": "application/x-recordio",
"RecordWrapperType": "RecordIO",
"CompressionType": "None"
},
{
"ChannelName": "validation",
"DataSource": {
"S3DataSource": {
"S3DataType": "AugmentedManifestFile", # NB. Augmented Manifest
"S3Uri": s3_validation_data_path,
"S3DataDistributionType": "FullyReplicated",
# NB. This must correspond to the JSON field names in your augmented manifest.
"AttributeNames": ['source-ref', 'bees-500']
}
},
"ContentType": "application/x-recordio",
"RecordWrapperType": "RecordIO",
"CompressionType": "None"
}
]
}
# Now we create the SageMaker training job.
client = boto3.client(service_name='sagemaker')
client.create_training_job(**training_params)
# Confirm that the training job has started
status = client.describe_training_job(TrainingJobName=training_job_name)['TrainingJobStatus']
print('Training job current status: {}'.format(status))
```
To check the progress of the training job, you can refresh the console or repeatedly evaluate the following cell. When the training job status reads `'Completed'`, move on to the next part of the tutorial.
```
##### REPLACE WITH YOUR OWN TRAINING JOB NAME
# In the above console screenshots the job name was 'bees-detection-resnet'.
# But if you used Python to kick off the training job,
# then 'training_job_name' is already set, so you can comment out the line below.
training_job_name = 'bees-training'
##### REPLACE WITH YOUR OWN TRAINING JOB NAME
training_info = client.describe_training_job(TrainingJobName=training_job_name)
print("Training job status: ", training_info['TrainingJobStatus'])
print("Secondary status: ", training_info['SecondaryStatus'])
```
<a name='review_training'></a>
## Review of Training Results
First, let's create the SageMaker model out of the model artifacts.
```
import time
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
model_name = training_job_name + '-model' + timestamp
training_image = training_info['AlgorithmSpecification']['TrainingImage']
model_data = training_info['ModelArtifacts']['S3ModelArtifacts']
primary_container = {
'Image': training_image,
'ModelDataUrl': model_data,
}
from sagemaker import get_execution_role
role = get_execution_role()
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_config_name = training_job_name + '-epc' + timestamp
endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.t2.medium',
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print('Endpoint configuration name: {}'.format(endpoint_config_name))
print('Endpoint configuration arn: {}'.format(endpoint_config_response['EndpointConfigArn']))
```
### Create Endpoint
The next cell creates an endpoint that can be validated and incorporated into production applications. This takes about 10 minutes to complete.
```
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_name = training_job_name + '-ep' + timestamp
print('Endpoint name: {}'.format(endpoint_name))
endpoint_params = {
'EndpointName': endpoint_name,
'EndpointConfigName': endpoint_config_name,
}
endpoint_response = client.create_endpoint(**endpoint_params)
print('EndpointArn = {}'.format(endpoint_response['EndpointArn']))
endpoint_name="test-tuning-job-008-9ff8af52-ep-2019-07-19-12-25-46"
# get the status of the endpoint
response = client.describe_endpoint(EndpointName=endpoint_name)
status = response['EndpointStatus']
print('EndpointStatus = {}'.format(status))
```
### Perform inference
We will invoke the deployed endpoint to detect bees in the 10 test images that were inside the `test` folder in `dataset.zip`
```
import glob
test_images = glob.glob('test/*')
print(*test_images, sep="\n")
```
Next, define a function that converts the prediction array returned by our endpoint to the bounding box structure expected by our image display function.
```
def prediction_to_bbox_data(image_path, prediction):
class_id, confidence, xmin, ymin, xmax, ymax = prediction
width, height = Image.open(image_path).size
bbox_data = {'class_id': class_id,
'height': (ymax-ymin)*height,
'width': (xmax-xmin)*width,
'left': xmin*width,
'top': ymin*height}
return bbox_data
```
Finally, for each of the test images, the following cell transforms the image into the appropriate format for realtime prediction, repeatedly calls the endpoint, receives back the prediction, and displays the result.
```
import matplotlib.pyplot as plt
runtime_client = boto3.client('sagemaker-runtime')
# Call SageMaker endpoint to obtain predictions
def get_predictions_for_img(runtime_client, endpoint_name, img_path):
with open(img_path, 'rb') as f:
payload = f.read()
payload = bytearray(payload)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/x-image',
Body=payload)
result = response['Body'].read()
result = json.loads(result)
return result
# wait until the status has changed
client.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
if status != 'InService':
raise Exception('Endpoint creation failed.')
for test_image in test_images:
result = get_predictions_for_img(runtime_client, endpoint_name, test_image)
confidence_threshold = .2
best_n = 3
# display the best n predictions with confidence > confidence_threshold
predictions = [prediction for prediction in result['prediction'] if prediction[1] > confidence_threshold]
predictions.sort(reverse=True, key = lambda x: x[1])
bboxes = [prediction_to_bbox_data(test_image, prediction) for prediction in predictions[:best_n]]
show_annotated_image(test_image, bboxes)
```
<a name='model_tuning'></a>
## Model Tuning
When you configured the training job you needed to add many hyperparameters that affect the performance of the algorithm and the quality of the resulting model. But how do you pick the right hyperparameters?
If you have experience with the specific algorithm and understand its inner workings, you may already have a good sense of appropriate values. But even then, it's impossible to know the exact best value of each hyperparameter. Often you can zero in on the best values by trying many different combinations of values, effectively searching the hyperparameter space. SageMaker makes this extremely easy with the Model Tuning feature, also known as Hyperparameter Optimization (or HPO). With Model Tuning you simply decide which of the hyperparameters you are not sure about and specify the range of values for each that SageMaker needs to explore. Let's see again how this can be accomplished via the console.
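If you prefer to stay in code, the same kind of search can be launched with `boto3`. Below is a minimal sketch that reuses the `client`, `role`, and `training_params` objects defined earlier; the tuned hyperparameters, their ranges, and the job limits are illustrative choices only, and the final call is left commented out.
```
# choose which hyperparameters to tune and over what ranges (illustrative values)
tuning_job_config = {
    "Strategy": "Bayesian",
    "HyperParameterTuningJobObjective": {
        "Type": "Maximize",
        "MetricName": "validation:mAP",  # mean average precision on the validation channel
    },
    "ResourceLimits": {
        "MaxNumberOfTrainingJobs": 10,
        "MaxParallelTrainingJobs": 2,
    },
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            {"Name": "learning_rate", "MinValue": "0.0001", "MaxValue": "0.01"},
            {"Name": "momentum", "MinValue": "0.8", "MaxValue": "0.99"},
            {"Name": "weight_decay", "MinValue": "0.00001", "MaxValue": "0.001"},
        ],
        "IntegerParameterRanges": [],
        "CategoricalParameterRanges": [],
    },
}

# everything that is not tuned stays fixed
static_hyperparameters = {
    k: v for k, v in training_params["HyperParameters"].items()
    if k not in ("learning_rate", "momentum", "weight_decay")
}

training_job_definition = {
    "AlgorithmSpecification": training_params["AlgorithmSpecification"],
    "RoleArn": role,
    "InputDataConfig": training_params["InputDataConfig"],
    "OutputDataConfig": training_params["OutputDataConfig"],
    "ResourceConfig": training_params["ResourceConfig"],
    "StoppingCondition": training_params["StoppingCondition"],
    "StaticHyperParameters": static_hyperparameters,
}

# client.create_hyper_parameter_tuning_job(
#     HyperParameterTuningJobName="bees-detection-tuning",
#     HyperParameterTuningJobConfig=tuning_job_config,
#     TrainingJobDefinition=training_job_definition,
# )
```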
<a name='cleanup'></a>
## Cleanup
At the end of the lab we would like to delete the real-time endpoint, as keeping an idle real-time endpoint around is costly and wasteful.
```
client.delete_endpoint(EndpointName=endpoint_name)
```
print("Training job status: ", training_info['TrainingJobStatus'])
print("Secondary status: ", training_info['SecondaryStatus'])
import time
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
model_name = training_job_name + '-model' + timestamp
training_image = training_info['AlgorithmSpecification']['TrainingImage']
model_data = training_info['ModelArtifacts']['S3ModelArtifacts']
primary_container = {
'Image': training_image,
'ModelDataUrl': model_data,
}
from sagemaker import get_execution_role
role = get_execution_role()
create_model_response = client.create_model(
ModelName = model_name,
ExecutionRoleArn = role,
PrimaryContainer = primary_container)
print(create_model_response['ModelArn'])
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_config_name = training_job_name + '-epc' + timestamp
endpoint_config_response = client.create_endpoint_config(
EndpointConfigName = endpoint_config_name,
ProductionVariants=[{
'InstanceType':'ml.t2.medium',
'InitialInstanceCount':1,
'ModelName':model_name,
'VariantName':'AllTraffic'}])
print('Endpoint configuration name: {}'.format(endpoint_config_name))
print('Endpoint configuration arn: {}'.format(endpoint_config_response['EndpointConfigArn']))
timestamp = time.strftime('-%Y-%m-%d-%H-%M-%S', time.gmtime())
endpoint_name = training_job_name + '-ep' + timestamp
print('Endpoint name: {}'.format(endpoint_name))
endpoint_params = {
'EndpointName': endpoint_name,
'EndpointConfigName': endpoint_config_name,
}
endpoint_response = client.create_endpoint(**endpoint_params)
print('EndpointArn = {}'.format(endpoint_response['EndpointArn']))
endpoint_name="test-tuning-job-008-9ff8af52-ep-2019-07-19-12-25-46"
# get the status of the endpoint
response = client.describe_endpoint(EndpointName=endpoint_name)
status = response['EndpointStatus']
print('EndpointStatus = {}'.format(status))
import glob
test_images = glob.glob('test/*')
print(*test_images, sep="\n")
def prediction_to_bbox_data(image_path, prediction):
class_id, confidence, xmin, ymin, xmax, ymax = prediction
width, height = Image.open(image_path).size
bbox_data = {'class_id': class_id,
'height': (ymax-ymin)*height,
'width': (xmax-xmin)*width,
'left': xmin*width,
'top': ymin*height}
return bbox_data
import matplotlib.pyplot as plt
runtime_client = boto3.client('sagemaker-runtime')
# Call SageMaker endpoint to obtain predictions
def get_predictions_for_img(runtime_client, endpoint_name, img_path):
with open(img_path, 'rb') as f:
payload = f.read()
payload = bytearray(payload)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/x-image',
Body=payload)
result = response['Body'].read()
result = json.loads(result)
return result
# wait until the status has changed
client.get_waiter('endpoint_in_service').wait(EndpointName=endpoint_name)
endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
status = endpoint_response['EndpointStatus']
if status != 'InService':
raise Exception('Endpoint creation failed.')
for test_image in test_images:
result = get_predictions_for_img(runtime_client, endpoint_name, test_image)
confidence_threshold = .2
best_n = 3
# display the best n predictions with confidence > confidence_threshold
predictions = [prediction for prediction in result['prediction'] if prediction[1] > confidence_threshold]
predictions.sort(reverse=True, key = lambda x: x[1])
bboxes = [prediction_to_bbox_data(test_image, prediction) for prediction in predictions[:best_n]]
show_annotated_image(test_image, bboxes)
client.delete_endpoint(EndpointName=endpoint_name) | 0.48438 | 0.960805 |
![](https://pptwinpics.oss-cn-beijing.aliyuncs.com/CDA%E8%AE%B2%E5%B8%88%E6%B0%B4%E5%8D%B0_20200314161940.png)
Hello everyone, I am Cao Xin from CDA.
My GitHub: https://github.com/imcda
My email: caoxin@cda.cn
In this lesson we will talk about Pandas.
# Introduction to Pandas
To use pandas, you first need to understand its two main data structures: Series and DataFrame.
# Series
```
import pandas as pd
print(pd.__version__)
import pandas as pd
import numpy as np
s = pd.Series([1,3,6,np.nan,44,1])
print(s)
```
The string representation of a Series shows the index on the left and the values on the right. Since we did not specify an index for the data, an integer index from 0 to N-1 (where N is the length) is created automatically.
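For comparison (this small example is an addition to the original lesson), an index can also be supplied explicitly when creating a Series:
```
# Supplying an explicit index instead of the default 0..N-1
s2 = pd.Series([1, 3, 6], index=['a', 'b', 'c'])
print(s2)
```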
# DataFrame
```
dates = pd.date_range('20160101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=['a','b','c','d'])
print(df)
```
`DataFrame` is a tabular data structure. It contains an ordered collection of columns, and each column can hold a different value type (numeric, string, boolean, etc.). A `DataFrame` has both a row index and a column index; it can be thought of as a large dictionary of `Series`.
We can pick out data by any of these indexes, for example selecting the elements of column `b`:
## Some simple DataFrame operations
```
print(df['b'])
```
Next we create a DataFrame `df1` without specifying row or column labels:
```
df1 = pd.DataFrame(np.arange(12).reshape((3,4)))
print(df1)
```
In that case it falls back to the default index starting from 0.
There is another way to create a DataFrame, as shown for `df2` below:
```
df2 = pd.DataFrame({'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo'})
print(df2)
```
This approach lets you handle each column in its own way.
To check the data types in the DataFrame, use the `dtypes` attribute:
```
print(df2.dtypes)
```
To see the row labels (the index):
```
print(df2.index)
```
Similarly, the column names can also be viewed:
```
print(df2.columns)
```
To view only the values of df2:
```
print(df2.values)
```
For a statistical summary of the data, use describe():
```
df2.describe()
```
To transpose the data:
```
print(df2.T)
```
To sort the data by its index and print the result:
```
print(df2.sort_index(axis=1, ascending=True))
```
To sort the data by values and print the result:
```
print(df2.sort_values(by='B'))
```
# Selecting data with Pandas
We build a 6x4 matrix of data.
```
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.arange(24).reshape((6,4)),index=dates, columns=['A','B','C','D'])
df
```
## Simple selection
To select data from a DataFrame, the two approaches below achieve the same result:
```
print(df['A'])
print(df.A)
```
Selections can also span multiple rows or columns:
```
print(df[0:3])
print(df['20130102':'20130104'])
```
Note that df[3:3] would be an empty object. The second example selects the data between the labels 20130102 and 20130104, inclusive of both labels.
## Selecting by label: loc
We can also select data by label with loc. This example selects a row by its label, or selects some or all rows (: means all rows) and then picks one or more columns from them:
```
print(df.loc['20130102'])
print(df.loc[:,['A','B']])
print(df.loc['20130102',['A','B']])
```
## Selecting by position: iloc
We can also select by position with iloc, which lets us pick the data we need in different situations, for example a single element, a contiguous slice, or non-contiguous rows.
```
print(df.iloc[3,1])
print(df.iloc[3:5,1:3])
print(df.iloc[[1,3,5],1:3])
```
## Boolean indexing
Finally, we can select with boolean indexing: we impose a condition and select all the data that satisfy it.
```
print(df[df.A>8])
```
# Setting values with Pandas
## Creating the data
With pandas we can change values in the data as needed, or add new columns that are empty or already filled with values.
First we build a 6x4 matrix of data.
```
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.arange(24).reshape((6,4)),index=dates, columns=['A','B','C','D'])
df
```
## Setting by position: loc and iloc
We can use the position or the label to locate the value we want to modify.
```
df.iloc[2,2] = 1111
df.loc['20130101','B'] = 2222
df
```
## Setting by condition
Suppose the condition is: we want to change values in B, but where they change depends on A. Wherever A is greater than 4, set the corresponding value in B to 0.
```
df.B[df.A>4] = 0
df
```
## Setting by row or column
To operate on an entire column, add a column 'F' and set all of its values to NaN, as follows:
```
df['F'] = np.nan
df
```
## Adding data
The same approach can be used to add a Series (but the length must match).
```
df['E'] = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130101',periods=6))
df
```
With this we have roughly learned how to assign values or add data wherever we want in a DataFrame. The next lesson covers how pandas handles missing data.
![](https://pptwinpics.oss-cn-beijing.aliyuncs.com/CDA%E8%AE%B2%E5%B8%88%E6%B0%B4%E5%8D%B0_20200314161940.png)
# Handling missing data with Pandas
## Creating a matrix containing NaN
When importing or processing data we sometimes end up with empty or NaN values; how to drop or fill these NaN values is today's topic.
We build a 6x4 matrix of data and set two positions to NaN.
```
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.arange(24).reshape((6,4)),index=dates, columns=['A','B','C','D'])
df.iloc[0,1] = np.nan
df.iloc[1,2] = np.nan
df
```
## pd.dropna()
To drop rows or columns that contain NaN, use dropna:
```
df.dropna(
axis=0, # 0: 对行进行操作; 1: 对列进行操作
how='any' # 'any': 只要存在 NaN 就 drop 掉; 'all': 必须全部是 NaN 才 drop
)
```
## pd.fillna()
To replace NaN values with something else, for example 0:
```
df.fillna(value=0) # 注意,这里是生成一张新表了
```
## pd.isnull()
Check whether there is any missing data (NaN); True indicates a missing value:
```
df.isnull()
```
Check whether any NaN exists in the data; returns True if so:
```
np.any(df.isnull()) == True
```
# Merging with Pandas: concat
When working with multiple datasets, pandas often needs to combine them. concat is a basic way to merge, and it has many parameters you can adjust to get the merged data into the shape you want.
## axis (merge direction)
axis=0 is the default, so when no parameter is set the function uses axis=0.
```
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['a','b','c','d'])
df3 = pd.DataFrame(np.ones((3,4))*2, columns=['a','b','c','d'])
print(df1)
print(df2)
print(df3)
#concat纵向合并
res = pd.concat([df1, df2, df3], axis=0)
#打印结果
print(res)
```
If you look closely, the resulting index is 0, 1, 2, 0, 1, 2, 0, 1, 2. To reset the index, see the next example.
## ignore_index (resetting the index)
```
#承上一个例子,并将index_ignore设定为True
res = pd.concat([df1, df2, df3], axis=0, ignore_index=True)
#打印结果
print(res)
```
The resulting index becomes 0, 1, 2, 3, 4, 5, 6, 7, 8.
## join (merge mode)
join='outer' is the default, so when no parameter is set the function uses join='outer'. This merges vertically by column: columns with the same name are stacked together, columns unique to one frame keep their own column, and positions that had no value are filled with NaN.
```
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'], index=[1,2,3])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['b','c','d','e'], index=[2,3,4])
print(df1)
print(df2)
#纵向"外"合并df1与df2
res = pd.concat([df1, df2], axis=0, join='outer', sort=False)
res = df1.append(df2)
print(res)
# result = df1.join(df2)
# result
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
print(left)
print(right)
result = left.join(right)
result
```
The principle is the same as in the previous example, but only the columns shared by both frames are merged; the others are dropped.
```
#承上一个例子
#纵向"内"合并df1与df2
res = pd.concat([df1, df2], axis=0, join='inner')
#打印结果
print(res)
```
## join_axes (merge along given axes)
```
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'], index=[1,2,3])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['b','c','d','e'], index=[2,3,4])
print(df1)
print(df2)
#依照`df1.index`进行横向合并
res = pd.concat([df1, df2], axis=1, join_axes=[df1.index])
#打印结果
print(res)
#移除join_axes,并打印结果
res = pd.concat([df1, df2], axis=1)
print(res)
```
## append (adding data)
```
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['a','b','c','d'])
df3 = pd.DataFrame(np.ones((3,4))*1, columns=['a','b','c','d'])
s1 = pd.Series([1,2,3,4], index=['a','b','c','d'])
#将df2合并到df1的下面,以及重置index,并打印出结果
res = df1.append(df2, ignore_index=True)
print(res)
#合并多个df,将df2与df3合并至df1的下面,以及重置index,并打印出结果
res = df1.append([df2, df3], ignore_index=True)
print(res)
#合并series,将s1合并至df1,以及重置index,并打印出结果
res = df1.append(s1, ignore_index=True)
print(res)
```
# Merging with Pandas: merge
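The original notebook leaves this section empty. As a minimal illustrative sketch (the example frames below are made up), merge joins two DataFrames on one or more key columns:
```
import pandas as pd

# Two illustrative frames sharing a 'key' column
left = pd.DataFrame({'key': ['K0', 'K1', 'K2'], 'A': ['A0', 'A1', 'A2']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K3'], 'B': ['B0', 'B1', 'B3']})

# how can be 'inner' (default), 'outer', 'left' or 'right'
print(pd.merge(left, right, on='key', how='inner'))
print(pd.merge(left, right, on='key', how='outer'))
```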
# Reading CSV with Pandas
pandas can read and write many data formats, such as csv, excel, json, html and pickle; see the [official documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html) for details.
```
import pandas as pd #加载模块
#读取csv
data = pd.read_csv('./CSVFiles/sz000002.csv')
#打印出data
print(data.head())
df = pd.read_csv(
# 该参数为数据在电脑中的路径,可以不填写
filepath_or_buffer='./CSVFiles/sz000002.csv',
# 线上读取
# filepath_or_buffer='https://bit.ly/sz000002',
# 该参数代表数据的分隔符,csv文件默认是逗号。其他常见的是'\t'
sep=',',
# 该参数代表跳过数据文件的的第1行不读入
# skiprows=1,
# nrows,只读取前n行数据,若不指定,读入全部的数据
nrows=10,
# 将指定列的数据识别为日期格式。若不指定,时间数据将会以字符串形式读入。一开始先不用。
# parse_dates=['交易日期'],
# 将指定列设置为index。若不指定,index默认为0, 1, 2, 3, 4...
index_col=['交易日期'],
# 读取指定的这几列数据,其他数据不读取。若不指定,读入全部列
usecols=['交易日期', '股票代码', '股票名称', '收盘价', '涨跌幅', '成交量', 'MACD_金叉死叉'],
# 当某行数据有问题时,报错。设定为False时即不报错,直接跳过该行。当数据比较脏乱的时候用这个。
error_bad_lines=False,
# 将数据中的null识别为空值
na_values='NULL',
# 更多其他参数,请直接搜索"pandas read_csv",要去逐个查看一下。比较重要的,header等
)
print(df)
# =====看数据
print(df.shape) # 输出dataframe有多少行、多少列。
# print(df.shape[0]) # 取行数量,相应的列数量就是df.shape[1]
# print(df.columns) # 顺序输出每一列的名字,演示如何for语句遍历。
# print(df.index) # 顺序输出每一行的名字,可以for语句遍历。
# print(df.dtypes) # 数据每一列的类型不一样,比如数字、字符串、日期等。该方法输出每一列变量类型
# print(df.head(3)) # 看前3行的数据,默认是5。与自然语言很接近
# print(df.tail(3)) # 看最后3行的数据,默认是5。
# print(df.sample(n=3)) # 随机抽取3行,想要去固定比例的话,可以用frac参数
# print(df.describe()) # 非常方便的函数,对每一类数据有直观感受;只会对数字类型的列有效
# 对print出的数据格式进行修正
pd.set_option('expand_frame_repr', False) # 当列太多时不换行
pd.set_option('max_colwidth', 100) # 设定每一列的最大宽度,恢复原设置的方法,pd.reset_option('max_colwidth')
# 更多设置请见http://pandas.pydata.org/pandas-docs/stable/options.html
print(df)
# =====列操作
# 行列加减乘除
# print(df['股票名称'] + '_地产') # 字符串列可以直接加上字符串,对整列进行操作
# print(df['收盘价'] * 100) # 数字列直接加上或者乘以数字,对整列进行操作。
print(df['收盘价'] * df['成交量']) # 两列之间可以直接操作。收盘价*成交量计算出的是什么?
# 新增一列
# df['股票名称+行业'] = df['股票名称'] + '_地产'
# =====统计函数
# print(df['收盘价'].mean()) # 求一整列的均值,返回一个数。会自动排除空值。
# print(df[['收盘价', '成交量']].mean()) # 求两列的均值,返回两个数,Series
# print(df[['收盘价', '成交量']])
# print(df[['收盘价', '成交量']].mean(axis=1)) # 求两列的均值,返回DataFrame。axis=0或者1要搞清楚。
# axis=1,代表对整几列进行操作。axis=0(默认)代表对几行进行操作。实际中弄混很正常,到时候试一下就知道了。
# print(df['收盘价'].max()) # 最大值
# print(df['收盘价'].min()) # 最小值
# print(df['收盘价'].std()) # 标准差
# print(df['收盘价'].count()) # 非空的数据的数量
# print(df['收盘价'].median()) # 中位数
# print(df['收盘价'].quantile(0.5)) # 50%分位数
# 肯定还有其他的函数计算其他的指标,在实际使用中遇到可以自己搜索
# =====筛选操作,根据指定的条件,筛选出相关拿数据。
# print(df['股票代码'] == 'sz000002') # 判断股票代码是否等于sz000002
# print(df[df['股票代码'] == 'sz000002']) # 将判断为True的输出:选取股票代码等于sz000002的行
# print(df[df['股票代码'].isin(['sz000002', 'sz000003 ', 'sz000004'])]) # 选取股票代码等于sz000002的行
# print(df[df['收盘价'] >= 24.0]) # 选取收盘价大于24的行
# print(df[(df.index >= '03/12/2016') & (df.index <= '06/12/2016')]) # 两个条件,或者的话就是|
# =====排序函数
# df.reset_index(inplace=True)
# print(df.sort_values(by=['股票名称'], ascending=[1, 1])) # by参数指定按照什么进行排序,acsending参数指定是顺序还是逆序,1顺序,0逆序
# print(df.sort_values(by=['股票名称', '交易日期], ascending=[1, 1])) # 按照多列进行排序
# =====两个df上下合并操作,append操作
df1 = df.iloc[0:10][['股票代码', '收盘价', '涨跌幅']]
df2 = df.iloc[5:15][['股票名称', '收盘价', '涨跌幅']]
print(df1)
print(df2)
# print(df1.append(df2)) # append操作,将df1和df2上下拼接起来。注意观察拼接之后的index
df3 = df1.append(df2, ignore_index=True) # ignore_index参数,用户重新确定index
df3
# 当两个df的列名不完全相同的时候,来自两个df的所有列都会保留
# =====对数据进行去重
# df3中有重复的行数,我们如何将重复的行数去除?
df3.drop_duplicates(
subset=['股票代码'], # subset参数用来指定根据哪类类数据来判断是否重复。若不指定,则用全部列的数据来判断是否重复
keep='last', # 在去除重复值的时候,我们是保留上面一行还是下面一行?first保留上面一行,last保留下面一行,False就是一行都不保留
inplace=True
)
df3
# =====其他常用重要函数
# print(df.rename(columns={'MACD_金叉死叉': '金叉死叉', '涨跌幅': '涨幅'})) # rename函数给变量修改名字。使用dict将要修改的名字传给columns参数
# print(df.empty) # 判断一个df是不是为空,此处输出不为空
# print(pd.DataFrame().empty) # pd.DataFrame()创建一个空的DataFrame,此处输出为空
# print(df.T) # 将数据转置,行变成列,很有用
# =====输出
# print df
# df.to_csv('output.csv')
# df.to_csv('output.csv', index=False)
# df.to_csv('output_gbk.csv', index=False, encoding='gbk') # 指定编码格式
```
![](https://pptwinpics.oss-cn-beijing.aliyuncs.com/CDA%E8%AE%B2%E5%B8%88%E6%B0%B4%E5%8D%B0_20200314161940.png)
| github_jupyter | import pandas as pd
print(pd.__version__)
import pandas as pd
import numpy as np
s = pd.Series([1,3,6,np.nan,44,1])
print(s)
dates = pd.date_range('20160101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=['a','b','c','d'])
print(df)
print(df['b'])
df1 = pd.DataFrame(np.arange(12).reshape((3,4)))
print(df1)
df2 = pd.DataFrame({'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo'})
print(df2)
print(df2.dtypes)
print(df2.index)
print(df2.columns)
print(df2.values)
df2.describe()
print(df2.T)
print(df2.sort_index(axis=1, ascending=True))
print(df2.sort_values(by='B'))
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.arange(24).reshape((6,4)),index=dates, columns=['A','B','C','D'])
df
print(df['A'])
print(df.A)
print(df[0:3])
print(df['20130102':'20130104'])
print(df.loc['20130102'])
print(df.loc[:,['A','B']])
print(df.loc['20130102',['A','B']])
print(df.iloc[3,1])
print(df.iloc[3:5,1:3])
print(df.iloc[[1,3,5],1:3])
print(df[df.A>8])
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.arange(24).reshape((6,4)),index=dates, columns=['A','B','C','D'])
df
df.iloc[2,2] = 1111
df.loc['20130101','B'] = 2222
df
df.B[df.A>4] = 0
df
df['F'] = np.nan
df
df['E'] = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130101',periods=6))
df
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.arange(24).reshape((6,4)),index=dates, columns=['A','B','C','D'])
df.iloc[0,1] = np.nan
df.iloc[1,2] = np.nan
df
df.dropna(
axis=0, # 0: 对行进行操作; 1: 对列进行操作
how='any' # 'any': 只要存在 NaN 就 drop 掉; 'all': 必须全部是 NaN 才 drop
)
df.fillna(value=0) # 注意,这里是生成一张新表了
df.isnull()
np.any(df.isnull()) == True
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['a','b','c','d'])
df3 = pd.DataFrame(np.ones((3,4))*2, columns=['a','b','c','d'])
print(df1)
print(df2)
print(df3)
#concat纵向合并
res = pd.concat([df1, df2, df3], axis=0)
#打印结果
print(res)
#承上一个例子,并将index_ignore设定为True
res = pd.concat([df1, df2, df3], axis=0, ignore_index=True)
#打印结果
print(res)
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'], index=[1,2,3])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['b','c','d','e'], index=[2,3,4])
print(df1)
print(df2)
#纵向"外"合并df1与df2
res = pd.concat([df1, df2], axis=0, join='outer', sort=False)
res = df1.append(df2)
print(res)
# result = df1.join(df2)
# result
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
print(left)
print(right)
result = left.join(right)
result
#承上一个例子
#纵向"内"合并df1与df2
res = pd.concat([df1, df2], axis=0, join='inner')
#打印结果
print(res)
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'], index=[1,2,3])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['b','c','d','e'], index=[2,3,4])
print(df1)
print(df2)
#依照`df1.index`进行横向合并
res = pd.concat([df1, df2], axis=1, join_axes=[df1.index])
#打印结果
print(res)
#移除join_axes,并打印结果
res = pd.concat([df1, df2], axis=1)
print(res)
import pandas as pd
import numpy as np
#定义资料集
df1 = pd.DataFrame(np.ones((3,4))*0, columns=['a','b','c','d'])
df2 = pd.DataFrame(np.ones((3,4))*1, columns=['a','b','c','d'])
df3 = pd.DataFrame(np.ones((3,4))*1, columns=['a','b','c','d'])
s1 = pd.Series([1,2,3,4], index=['a','b','c','d'])
#将df2合并到df1的下面,以及重置index,并打印出结果
res = df1.append(df2, ignore_index=True)
print(res)
#合并多个df,将df2与df3合并至df1的下面,以及重置index,并打印出结果
res = df1.append([df2, df3], ignore_index=True)
print(res)
#合并series,将s1合并至df1,以及重置index,并打印出结果
res = df1.append(s1, ignore_index=True)
print(res)
import pandas as pd #加载模块
#读取csv
data = pd.read_csv('./CSVFiles/sz000002.csv')
#打印出data
print(data.head())
df = pd.read_csv(
# 该参数为数据在电脑中的路径,可以不填写
filepath_or_buffer='./CSVFiles/sz000002.csv',
# 线上读取
# filepath_or_buffer='https://bit.ly/sz000002',
# 该参数代表数据的分隔符,csv文件默认是逗号。其他常见的是'\t'
sep=',',
# 该参数代表跳过数据文件的的第1行不读入
# skiprows=1,
# nrows,只读取前n行数据,若不指定,读入全部的数据
nrows=10,
# 将指定列的数据识别为日期格式。若不指定,时间数据将会以字符串形式读入。一开始先不用。
# parse_dates=['交易日期'],
# 将指定列设置为index。若不指定,index默认为0, 1, 2, 3, 4...
index_col=['交易日期'],
# 读取指定的这几列数据,其他数据不读取。若不指定,读入全部列
usecols=['交易日期', '股票代码', '股票名称', '收盘价', '涨跌幅', '成交量', 'MACD_金叉死叉'],
# 当某行数据有问题时,报错。设定为False时即不报错,直接跳过该行。当数据比较脏乱的时候用这个。
error_bad_lines=False,
# 将数据中的null识别为空值
na_values='NULL',
# 更多其他参数,请直接搜索"pandas read_csv",要去逐个查看一下。比较重要的,header等
)
print(df)
# =====看数据
print(df.shape) # 输出dataframe有多少行、多少列。
# print(df.shape[0]) # 取行数量,相应的列数量就是df.shape[1]
# print(df.columns) # 顺序输出每一列的名字,演示如何for语句遍历。
# print(df.index) # 顺序输出每一行的名字,可以for语句遍历。
# print(df.dtypes) # 数据每一列的类型不一样,比如数字、字符串、日期等。该方法输出每一列变量类型
# print(df.head(3)) # 看前3行的数据,默认是5。与自然语言很接近
# print(df.tail(3)) # 看最后3行的数据,默认是5。
# print(df.sample(n=3)) # 随机抽取3行,想要去固定比例的话,可以用frac参数
# print(df.describe()) # 非常方便的函数,对每一类数据有直观感受;只会对数字类型的列有效
# 对print出的数据格式进行修正
pd.set_option('expand_frame_repr', False) # 当列太多时不换行
pd.set_option('max_colwidth', 100) # 设定每一列的最大宽度,恢复原设置的方法,pd.reset_option('max_colwidth')
# 更多设置请见http://pandas.pydata.org/pandas-docs/stable/options.html
print(df)
# =====列操作
# 行列加减乘除
# print(df['股票名称'] + '_地产') # 字符串列可以直接加上字符串,对整列进行操作
# print(df['收盘价'] * 100) # 数字列直接加上或者乘以数字,对整列进行操作。
print(df['收盘价'] * df['成交量']) # 两列之间可以直接操作。收盘价*成交量计算出的是什么?
# 新增一列
# df['股票名称+行业'] = df['股票名称'] + '_地产'
# =====统计函数
# print(df['收盘价'].mean()) # 求一整列的均值,返回一个数。会自动排除空值。
# print(df[['收盘价', '成交量']].mean()) # 求两列的均值,返回两个数,Series
# print(df[['收盘价', '成交量']])
# print(df[['收盘价', '成交量']].mean(axis=1)) # 求两列的均值,返回DataFrame。axis=0或者1要搞清楚。
# axis=1,代表对整几列进行操作。axis=0(默认)代表对几行进行操作。实际中弄混很正常,到时候试一下就知道了。
# print(df['收盘价'].max()) # 最大值
# print(df['收盘价'].min()) # 最小值
# print(df['收盘价'].std()) # 标准差
# print(df['收盘价'].count()) # 非空的数据的数量
# print(df['收盘价'].median()) # 中位数
# print(df['收盘价'].quantile(0.5)) # 50%分位数
# 肯定还有其他的函数计算其他的指标,在实际使用中遇到可以自己搜索
# =====筛选操作,根据指定的条件,筛选出相关拿数据。
# print(df['股票代码'] == 'sz000002') # 判断股票代码是否等于sz000002
# print(df[df['股票代码'] == 'sz000002']) # 将判断为True的输出:选取股票代码等于sz000002的行
# print(df[df['股票代码'].isin(['sz000002', 'sz000003 ', 'sz000004'])]) # 选取股票代码等于sz000002的行
# print(df[df['收盘价'] >= 24.0]) # 选取收盘价大于24的行
# print(df[(df.index >= '03/12/2016') & (df.index <= '06/12/2016')]) # 两个条件,或者的话就是|
# =====排序函数
# df.reset_index(inplace=True)
# print(df.sort_values(by=['股票名称'], ascending=[1, 1])) # by参数指定按照什么进行排序,acsending参数指定是顺序还是逆序,1顺序,0逆序
# print(df.sort_values(by=['股票名称', '交易日期], ascending=[1, 1])) # 按照多列进行排序
# =====两个df上下合并操作,append操作
df1 = df.iloc[0:10][['股票代码', '收盘价', '涨跌幅']]
df2 = df.iloc[5:15][['股票名称', '收盘价', '涨跌幅']]
print(df1)
print(df2)
# print(df1.append(df2)) # append操作,将df1和df2上下拼接起来。注意观察拼接之后的index
df3 = df1.append(df2, ignore_index=True) # ignore_index参数,用户重新确定index
df3
# 当两个df的列名不完全相同的时候,来自两个df的所有列都会保留
# =====对数据进行去重
# df3中有重复的行数,我们如何将重复的行数去除?
df3.drop_duplicates(
subset=['股票代码'], # subset参数用来指定根据哪类类数据来判断是否重复。若不指定,则用全部列的数据来判断是否重复
keep='last', # 在去除重复值的时候,我们是保留上面一行还是下面一行?first保留上面一行,last保留下面一行,False就是一行都不保留
inplace=True
)
df3
# =====其他常用重要函数
# print(df.rename(columns={'MACD_金叉死叉': '金叉死叉', '涨跌幅': '涨幅'})) # rename函数给变量修改名字。使用dict将要修改的名字传给columns参数
# print(df.empty) # 判断一个df是不是为空,此处输出不为空
# print(pd.DataFrame().empty) # pd.DataFrame()创建一个空的DataFrame,此处输出为空
# print(df.T) # 将数据转置,行变成列,很有用
# =====输出
# print df
# df.to_csv('output.csv')
# df.to_csv('output.csv', index=False)
# df.to_csv('output_gbk.csv', index=False, encoding='gbk') # 指定编码格式 | 0.138607 | 0.917525 |
```
from google.colab import drive
drive.mount('/content/drive')
```
Importing all the dependencies
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2, ResNet50, VGG19
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.models import load_model
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import warnings
warnings.filterwarnings("ignore")
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import os
import cv2
```
#### Preprocessing
Load all the images and labels, preprocess them, append them to the respective lists, and convert those lists to numpy arrays.
```
path = '/content/drive/My Drive/Face-Mask-Detector/resources/dataset/'
imagePaths = list(paths.list_images(path))
images = []
labels = []
for imagePath in imagePaths:
label = imagePath.split(os.path.sep)[-2]
image = load_img(imagePath, target_size=(224, 224))
image = img_to_array(image)
image = preprocess_input(image)
images.append(image)
labels.append(label)
images = np.array(images, dtype="float32")
labels = np.array(labels)
```
Getting the shapes of the arrays
```
print(images.shape)
print(labels.shape)
np.unique(labels)
```
One-hot encode the labels as they are categorical
```
encoder = LabelBinarizer()
labels = encoder.fit_transform(labels)
labels = to_categorical(labels)
```
Perform the train/test split, holding out 20% of the dataset to test our model.
```
X_train, X_test, y_train, y_test = train_test_split(images, labels,
test_size=0.20, stratify=labels)
```
Training image generator for data augmentation
```
datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
```
### MobileNetV2 Model Building Block
<img src="https://drive.google.com/uc?id=1yKgIXSDFdadQNcD5sjmqmJ07a6lQPCzq" width="500" height = '500' layout="centre">
For this task, we will be fine-tuning the MobileNet V2 architecture, a highly efficient architecture which works well with limited computational capacity.
The Keras Functional API is used to build the model architecture.
```
baseModel = MobileNetV2(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
X = baseModel.output
X = AveragePooling2D(pool_size=(7, 7))(X)
X = Flatten()(X)
X = Dense(128, activation="relu")(X)
X = Dropout(0.5)(X)
X = Dense(2, activation="softmax")(X)
model = Model(inputs=baseModel.input, outputs=X)
```
As we are using transfer learning, i.e. a pretrained MobileNetV2, we need to freeze its layers and train only the newly added dense layers.
```
for layer in baseModel.layers:
layer.trainable = False
```
Final Architecture of our model.
```
model.summary()
```
Defining a few parameters
```
batch_size = 128
epochs = 15
```
Defining the optimizer and compiling the model.
```
optimizer = Adam(lr=1e-4, decay=1e-3)
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
```
Training the model.
```
hist = model.fit(datagen.flow(X_train, y_train, batch_size=batch_size),
steps_per_epoch=len(X_train) // batch_size,
validation_data=(X_test, y_test),
validation_steps=len(X_test) // batch_size,
epochs=epochs)
```
We need to find the index of the label with the largest predicted probability for each image in the test set.
```
y_pred = model.predict(X_test, batch_size=batch_size)
y_pred = np.argmax(y_pred, axis=1)
print(classification_report(y_test.argmax(axis=1), y_pred, target_names=encoder.classes_))
```
Saving the model in HDF5 format so that it can be loaded later for mask detection.
```
model.save("model", save_format="h5")
```
Plot the training and validation loss for our model using the matplotlib library.
```
plt.plot(np.arange(0, epochs), hist.history["loss"], label="train_loss")
plt.plot(np.arange(0, epochs), hist.history["val_loss"], label="val_loss")
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(loc="upper right")
```
We use a pretrained model to detect faces in images, reading the model and its config file with OpenCV's deep neural network (dnn) module.
The weights of the trained mask classifier are then loaded.
```
prototxtPath = '/content/drive/My Drive/Face-Mask-Detector/resources/face_detector/deploy.prototxt'
weightsPath = '/content/drive/My Drive/Face-Mask-Detector/resources/face_detector/res10_300x300_ssd_iter_140000.caffemodel'
face_model = cv2.dnn.readNet(prototxtPath, weightsPath)
model = load_model("model")
```
Preprocess the image using OpenCV's blob module, which resizes and crops the image from the center, subtracts mean values, scales values by a scale factor, and can swap the blue and red channels; we then pass the blob through the network to obtain the faces detected by the model.
```
im_path='people2.jpg'
image = cv2.imdecode(np.fromfile(im_path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
# image = cv2.imread('/content/drive/My Drive/maskclassifier/test/people2.jpg')
height, width = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 177.0, 123.0))
face_model.setInput(blob)
detections = face_model.forward() #detecting the faces
```
In this part we loop through all the detections; if a detection's score is greater than a certain threshold, we find the dimensions of the face and apply the same preprocessing steps used for the training images. Then we pass the face through the trained model to predict its class.
OpenCV functions are then used to draw bounding boxes, put text, and show the image.
```
from google.colab.patches import cv2_imshow
threshold = 0.2
person_with_mask = 0;
person_without_mask = 0;
for i in range(0, detections.shape[2]):
score = detections[0, 0, i, 2]
if score > threshold:
#coordinates of the bounding box
box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])
X_start, Y_start, X_end, Y_end = box.astype("int")
X_start, Y_start = (max(0, X_start), max(0, Y_start))
X_end, Y_end = (min(width - 1, X_end), min(height - 1, Y_end))
face = image[Y_start:Y_end, X_start:X_end]
face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB) #Convert to rgb
face = cv2.resize(face, (224, 224)) #resize
face = img_to_array(face)
face = preprocess_input(face)
face = np.expand_dims(face, axis=0)
mask, withoutMask = model.predict(face)[0]
if mask > withoutMask:
label = "Mask"
person_with_mask += 1
else:
label = "No Mask"
person_without_mask += 1
if label == "Mask":
color = (0, 255, 0)
else:
color = (0, 0, 255)
label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)
cv2.putText(image, label, (X_start, Y_start - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
cv2.rectangle(image, (X_start, Y_start), (X_end, Y_end), color, 2)
print("Number of person with mask : {}".format(person_with_mask))
print("Number of person without mask : {}".format(person_without_mask))
cv2_imshow(image)
```
| github_jupyter | from google.colab import drive
drive.mount('/content/drive')
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2, ResNet50, VGG19
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.models import load_model
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import warnings
warnings.filterwarnings("ignore")
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import os
import cv2
path = '/content/drive/My Drive/Face-Mask-Detector/resources/dataset/'
imagePaths = list(paths.list_images(path))
images = []
labels = []
for imagePath in imagePaths:
label = imagePath.split(os.path.sep)[-2]
image = load_img(imagePath, target_size=(224, 224))
image = img_to_array(image)
image = preprocess_input(image)
images.append(image)
labels.append(label)
images = np.array(images, dtype="float32")
labels = np.array(labels)
print(images.shape)
print(labels.shape)
np.unique(labels)
encoder = LabelBinarizer()
labels = encoder.fit_transform(labels)
labels = to_categorical(labels)
X_train, X_test, y_train, y_test = train_test_split(images, labels,
test_size=0.20, stratify=labels)
datagen = ImageDataGenerator(
rotation_range=20,
zoom_range=0.15,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.15,
horizontal_flip=True,
fill_mode="nearest")
baseModel = MobileNetV2(weights="imagenet", include_top=False,
input_tensor=Input(shape=(224, 224, 3)))
X = baseModel.output
X = AveragePooling2D(pool_size=(7, 7))(X)
X = Flatten()(X)
X = Dense(128, activation="relu")(X)
X = Dropout(0.5)(X)
X = Dense(2, activation="softmax")(X)
model = Model(inputs=baseModel.input, outputs=X)
for layer in baseModel.layers:
layer.trainable = False
model.summary()
batch_size = 128
epochs = 15
optimizer = Adam(lr=1e-4, decay=1e-3)
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
hist = model.fit(datagen.flow(X_train, y_train, batch_size=batch_size),
steps_per_epoch=len(X_train) // batch_size,
validation_data=(X_test, y_test),
validation_steps=len(X_test) // batch_size,
epochs=epochs)
y_pred = model.predict(X_test, batch_size=batch_size)
y_pred = np.argmax(y_pred, axis=1)
print(classification_report(y_test.argmax(axis=1), y_pred, target_names=encoder.classes_))
model.save("model", save_format="h5")
plt.plot(np.arange(0, epochs), hist.history["loss"], label="train_loss")
plt.plot(np.arange(0, epochs), hist.history["val_loss"], label="val_loss")
plt.title("Training and Validation Loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(loc="upper right")
prototxtPath = '/content/drive/My Drive/Face-Mask-Detector/resources/face_detector/deploy.prototxt'
weightsPath = '/content/drive/My Drive/Face-Mask-Detector/resources/face_detector/res10_300x300_ssd_iter_140000.caffemodel'
face_model = cv2.dnn.readNet(prototxtPath, weightsPath)
model = load_model("model")
im_path='people2.jpg'
image = cv2.imdecode(np.fromfile(im_path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
# image = cv2.imread('/content/drive/My Drive/maskclassifier/test/people2.jpg')
height, width = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 177.0, 123.0))
face_model.setInput(blob)
detections = face_model.forward() #detecting the faces
from google.colab.patches import cv2_imshow
threshold = 0.2
person_with_mask = 0;
person_without_mask = 0;
for i in range(0, detections.shape[2]):
score = detections[0, 0, i, 2]
if score > threshold:
#coordinates of the bounding box
box = detections[0, 0, i, 3:7] * np.array([width, height, width, height])
X_start, Y_start, X_end, Y_end = box.astype("int")
X_start, Y_start = (max(0, X_start), max(0, Y_start))
X_end, Y_end = (min(width - 1, X_end), min(height - 1, Y_end))
face = image[Y_start:Y_end, X_start:X_end]
face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB) #Convert to rgb
face = cv2.resize(face, (224, 224)) #resize
face = img_to_array(face)
face = preprocess_input(face)
face = np.expand_dims(face, axis=0)
mask, withoutMask = model.predict(face)[0]
if mask > withoutMask:
label = "Mask"
person_with_mask += 1
else:
label = "No Mask"
person_without_mask += 1
if label == "Mask":
color = (0, 255, 0)
else:
color = (0, 0, 255)
label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)
cv2.putText(image, label, (X_start, Y_start - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
cv2.rectangle(image, (X_start, Y_start), (X_end, Y_end), color, 2)
print("Number of person with mask : {}".format(person_with_mask))
print("Number of person without mask : {}".format(person_without_mask))
cv2_imshow(image) | 0.673621 | 0.855489 |
# GCM Filters Tutorial
## Synthetic Data
In this example, we are going to work with "synthetic data"; data we made up for the sake of keeping the example simple and self-contained.
### Create Input Data
Gcm-filters uses Xarray DataArrays for its inputs and outputs. So we will first import xarray (and numpy).
```
import gcm_filters
import numpy as np
import xarray as xr
```
Now we will create a random 3D cube of data.
```
nt, ny, nx = (10, 128, 256)
data = np.random.rand(nt, ny, nx)
da = xr.DataArray(data, dims=['time', 'y', 'x'])
da
```
To make things a bit more interesting, we will create a "land mask"; a binary array representing topography in our made-up ocean.
The convention here is that the array is 1 in the ocean ("wet points") and 0 on land ("dry points").
```
mask_data = np.ones((ny, nx))
mask_data[(ny // 4):(3 * ny // 4), (nx // 4):(3 * nx // 4)] = 0
wet_mask = xr.DataArray(mask_data, dims=['y', 'x'])
wet_mask.plot()
```
We have made a big island.
We now use this to mask our data.
```
da_masked = da.where(wet_mask)
da_masked[0].plot()
```
### Create a Filter
The main class we use from gcm-filters is the {class}`gcm_filters.Filter` object.
When we create a filter, we specify how we want to smooth the data, including the filter shape and all the relevant parameters.
To define a filter, we need to pick a few options from the predefined lists of filter shapes and grid types.
The possible filter shapes are enumerated as follows:
```
list(gcm_filters.FilterShape)
```
The possible grid types are:
```
list(gcm_filters.GridType)
```
(This list will grow as we implement more Laplacians).
For now, we will choose `REGULAR_WITH_LAND`, which matches our synthetic data.
Each grid type has different "grid variables" that must be provided.
To find out what these are, we can use this utility function.
```
gcm_filters.required_grid_vars(gcm_filters.GridType.REGULAR_WITH_LAND)
```
So if we use this grid type, we have to include a `wet_mask` grid variable.
We are now ready to create our filter object.
```
filter = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
grid_type=gcm_filters.GridType.REGULAR_WITH_LAND,
grid_vars={'wet_mask': wet_mask}
)
filter
```
The repr for the filter object includes some of the parameters it was initialized with, to help us keep track of what we are doing.
Next we plot the shape of the target filter and the approximation. Note that this is not the shape of the filter *kernel*, it is the shape 'in Fourier space,' meaning that we're plotting how the filter attenuates different scales in the data. The filter is 1 at large scales (small wavenumbers $k$, at the left side of the plot) and 0 at small scales (large wavenumbers $k$, at the right side of the plot), meaning that large scales are left unchanged and small scales are damped to zero.
```
filter.plot_shape()
```
By not specifying `n_steps`, we allow the filter to automatically select a value that leads to a very-good approximation of the target. In the above example using the Taper shape, the filter selects to use 16 steps to filter by a factor of 4.
The user might want to use a smaller number of steps to reduce the cost. The caveat is that the accuracy will be reduced, so the filter might not act as expected. To illustrate, we create a new filter with a smaller number of steps and plot the result. (Note that using a value of `n_steps` lower than the default will raise a warning.)
```
filter_8 = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
n_steps=8,
grid_type=gcm_filters.GridType.REGULAR_WITH_LAND,
grid_vars={'wet_mask': wet_mask}
)
filter_8.plot_shape()
```
The example above shows that using `n_steps=8` still yields a very accurate approximation of the target filter, at half the cost of the default. The main drawback in this example is that the filter slightly *amplifies* large scales, which also implies that it will not conserve variance.
Below we show what happens with `n_steps=4`. For this example of a Taper filter with a filter factor of 4, `n_steps=4` is simply not enough to get a good approximation of the target filter. The `filter_4` object created here will still "work" but it will not behave as expected; specifically, it will smooth more than expected - it will act like a filter with a larger filter scale.
```
filter_4 = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
n_steps=4,
grid_type=gcm_filters.GridType.REGULAR_WITH_LAND,
grid_vars={'wet_mask': wet_mask}
)
filter_4.plot_shape()
del filter_8, filter_4
```
### Apply the Filter
Now that we have our filter defined, we can use it on some data. We need to specify which dimension names to apply the filter over. In this case, it is y, x.
```
%time da_filtered = filter.apply(da_masked, dims=['y', 'x'])
da_filtered
```
Let's visualize what the filter did:
```
da_filtered[0].plot()
```
It can be useful to know where the land mask has influenced our results--for example, for assessing commutativity of the filter with differential operators.
We can get at this by applying the filter to the land mask itself.
We will create a new filter object that ignores the land.
```
filter_noland = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
grid_type=gcm_filters.GridType.REGULAR,
)
mask_filtered = filter_noland.apply(wet_mask, dims=['y', 'x'])
mask_filtered.plot()
```
### Use Dask
Up to now, we have operated "eagerly"; when we called `.apply`, the results were computed immediately and stored in memory.
Gcm-filters is also designed to work seamlessly with Dask array inputs, deferring its computation and possibly executing it in parallel.
We can do this with our synthetic data by converting it to dask.
```
da_dask = da_masked.chunk({'time': 2})
da_dask
da_filtered_lazy = filter.apply(da_dask, dims=['y', 'x'])
da_filtered_lazy
```
Nothing has actually been computed yet.
We can trigger computation as follows:
```
%time da_filtered_computed = da_filtered_lazy.compute()
```
Here we got a very modest speedup because the computation was run in parallel on a four-core laptop.
Our example data are not big enough, and our computer not powerful enough, to really see a big performance benefit here.
But it works!
## Real Data
TODO once other filters are implemented.
Or could we do an example with SST or SSH now?
| github_jupyter | import gcm_filters
import numpy as np
import xarray as xr
nt, ny, nx = (10, 128, 256)
data = np.random.rand(nt, ny, nx)
da = xr.DataArray(data, dims=['time', 'y', 'x'])
da
mask_data = np.ones((ny, nx))
mask_data[(ny // 4):(3 * ny // 4), (nx // 4):(3 * nx // 4)] = 0
wet_mask = xr.DataArray(mask_data, dims=['y', 'x'])
wet_mask.plot()
da_masked = da.where(wet_mask)
da_masked[0].plot()
list(gcm_filters.FilterShape)
list(gcm_filters.GridType)
gcm_filters.required_grid_vars(gcm_filters.GridType.REGULAR_WITH_LAND)
filter = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
grid_type=gcm_filters.GridType.REGULAR_WITH_LAND,
grid_vars={'wet_mask': wet_mask}
)
filter
filter.plot_shape()
filter_8 = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
n_steps=8,
grid_type=gcm_filters.GridType.REGULAR_WITH_LAND,
grid_vars={'wet_mask': wet_mask}
)
filter_8.plot_shape()
filter_4 = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
n_steps=4,
grid_type=gcm_filters.GridType.REGULAR_WITH_LAND,
grid_vars={'wet_mask': wet_mask}
)
filter_4.plot_shape()
del filter_8, filter_4
%time da_filtered = filter.apply(da_masked, dims=['y', 'x'])
da_filtered
da_filtered[0].plot()
filter_noland = gcm_filters.Filter(
filter_scale=4,
dx_min=1,
filter_shape=gcm_filters.FilterShape.TAPER,
grid_type=gcm_filters.GridType.REGULAR,
)
mask_filtered = filter_noland.apply(wet_mask, dims=['y', 'x'])
mask_filtered.plot()
da_dask = da_masked.chunk({'time': 2})
da_dask
da_filtered_lazy = filter.apply(da_dask, dims=['y', 'x'])
da_filtered_lazy
%time da_filtered_computed = da_filtered_lazy.compute() | 0.318061 | 0.981364 |
# Compute viable habitat in geographic space
Viable habitat is computed as the convolution of trait space with environmental conditions.
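Concretely, in the computation below each geographic point is assigned a trait-weighted vertical extent of viable habitat: the water column where the metabolic index Φ exceeds 1, integrated over depth and summed over trait space. Using our own notation (chosen to match the code, not taken from a reference),

$$
H(t, y, x) = \sum_{E_o,\,A_c} w(E_o, A_c) \int \mathbf{1}\!\left[\Phi\!\left(pO_2, T;\, A_c, E_o\right) > 1\right]\, dz ,
$$

where $w$ is the trait-space weight (`trait_spc_active`) and $dz$ is the model layer thickness.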
```
%load_ext autoreload
%autoreload 2
import json
import os
import shutil
from itertools import product
import data_collections as dc
import funnel
import intake
import matplotlib.pyplot as plt
import metabolic as mi
import numpy as np
import operators as ops
import util
import xarray as xr
import yaml
curator = util.curator_local_assets()
cat = curator.open_catalog()
ds_ts = cat['trait-space'].to_dask().load()
trait_spc_wgt = ds_ts.trait_spc_active
trait_spc_wgt
trait_spc_wgt.plot();
dEodT_bar = mi.dEodT_bar
dEodT_bar
catalog_json_file = funnel.to_intake_esm(agg_member_id=True)
sub_spec = dict(
name='drift-corrected',
experiment=['20C', 'RCP85'],
member_id=dc.ocean_bgc_member_ids[:],
)
catalog = funnel.to_intake_esm(agg_member_id=True).search(**sub_spec)
catalog
catalog.search(experiment='20C', variable=['pO2', 'TEMP'], member_id=10).df
try:
cluster
client
except:
cluster, client = util.get_ClusterClient(memory='64GB')
cluster.scale(200)
client
# refresh catalog
catalog = funnel.to_intake_esm(agg_member_id=False).search(**sub_spec)
experiment_list = sorted(catalog.unique('experiment')['experiment']['values'])
member_id_list = sorted(catalog.unique('member_id')['member_id']['values'])
clobber = False
stream = 'pop.h'
component = 'ocn'
variable = 'depth_habitat_trait_wgt'
for experiment, member_id in product(experiment_list, member_id_list):
# check for existing cache file
asset = dc.fnl_gen_cache_file_name(
experiment, component, stream, member_id, variable, 'drift-corrected'
)
if clobber and os.path.exists(asset):
print(f'removing: {asset}')
shutil.rmtree(asset)
if os.path.exists(asset):
print(f'exists: {asset}')
continue
with util.timer(f'{experiment}.{member_id}'):
cat = catalog.search(
experiment=experiment,
member_id=member_id,
stream=stream,
component=component,
variable=['TEMP', 'pO2'],
)
# ensure variables
missing_vars = {'TEMP', 'pO2'} - set(cat.df.variable.to_list())
if missing_vars:
print(f'missing vars for {experiment}.{member_id:03d}: {missing_vars}')
continue
dset = cat.to_dataset_dict()
assert len(dset.keys()) == 1
_, ds = dset.popitem()
# compute
print(f'computing: {asset}')
# compute trait-density weighted sum of viable habitat over depth
var_out = xr.full_like(ds.TEMP[:, 0, :, :], fill_value=0.0)
dso = ds[['TAREA', 'TLONG', 'TLAT', 'KMT', 'REGION_MASK', 'z_t', 'dz']]
for Eo, Ac in product(trait_spc_wgt.Eo.values, trait_spc_wgt.Ac.values):
# get the trait weighting for this trait
trait_wgt_ij = trait_spc_wgt.sel(Eo=Eo, Ac=Ac).values
# compute the metabolic index
Phi = mi.Phi(ds.pO2, ds.TEMP, Ac, Eo, dEodT=dEodT_bar)
# compute the vertical integral of habitat volume where Φ > 1
# multiplied by trait space weighting
viable_trait_mask = xr.where(Phi > 1, trait_wgt_ij, 0.0)
# add this "viable depth" to total depth
var_out += (ds.dz * viable_trait_mask).sum('z_t')
print(f'writing: {asset}')
var_out.name = variable
var_out.attrs['long_name'] = 'Trait-space weighted vertical habitat'
var_out.attrs['units'] = ' '.join([ds.z_t.attrs['units'], trait_spc_wgt.attrs['units']])
dso[variable] = var_out
dso.to_zarr(asset, mode='w', consolidated=True)
dc.fnl_make_cache(experiment, component, stream, member_id, variable, 'drift-corrected')
sub_spec = dict(
name='drift-corrected',
experiment=['20C', 'RCP85'],
member_id=[m for m in dc.ocean_bgc_member_ids if m not in [17, 103]],
)
catalog = funnel.to_intake_esm(agg_member_id=True).search(**sub_spec)
catalog
def fix_dataset(ds):
ds['depth_habitat_trait_wgt'] = ds.depth_habitat_trait_wgt.where(ds.KMT > 0)
return ds.set_coords(['TLAT', 'TLONG'])
cat = catalog.search(variable='depth_habitat_trait_wgt')
dsets = cat.to_dataset_dict(preprocess=fix_dataset, zarr_kwargs={'use_cftime': True})
dsets = {k: ds for k, ds in dsets.items()}
exp_keys = ['20C.ocn.pop.h.drift-corrected', 'RCP85.ocn.pop.h.drift-corrected']
ds = xr.concat([dsets[k] for k in exp_keys], dim='time', coords='minimal', compat='override')
ds['TLAT'] = ds.TLAT[0, :, :]
ds['TLONG'] = ds.TLONG[0, :, :]
ds
ds.depth_habitat_trait_wgt
yrfrac = util.year_frac(ds.time)
tndx_ref = np.where(yrfrac < 1966)[0]
tndx_2100 = np.where(yrfrac > 2080)[0]
with xr.set_options(keep_attrs=True):
aero_hab_glb = (ds.depth_habitat_trait_wgt * ds.TAREA).sum(['nlat', 'nlon']).compute()
aero_hab_glb *= 1e-6 * 1e-6
aero_hab_glb.attrs['units'] = '10$^6$ m$^3$'
aero_hab_glb.attrs['long_name'] = 'Trait-space weighted aerobic habitat'
aero_hab_glb_control = aero_hab_glb.isel(time=tndx_ref).mean(['time', 'member_id'])
aero_hab_glb_normalized = (
100.0 * (aero_hab_glb - aero_hab_glb_control) / aero_hab_glb_control
).compute()
aero_hab_glb_normalized['units'] = '%'
aero_hab_glb
fig, ax = plt.subplots()
for member_id in aero_hab_glb.member_id.values:
if (aero_hab_glb.sel(member_id=member_id)[-10:] == 0.0).all():
print(member_id)
continue
aero_hab_glb.sel(member_id=member_id).plot(ax=ax)
ax.set_title('Global ocean aerobic habitat volume');
fig, ax = plt.subplots()
for member_id in aero_hab_glb.member_id.values:
if (aero_hab_glb.sel(member_id=member_id)[-10:] == 0.0).all():
print(member_id)
continue
ax.plot(
yrfrac,
aero_hab_glb_normalized.sel(member_id=member_id),
linestyle='-',
color='gray',
linewidth=0.5,
)
ax.plot(yrfrac, aero_hab_glb_normalized.mean('member_id'), '-', color='k', linewidth=2)
ax.axhline(0.0, linewidth=0.5, color='k')
ax.set_title('Change in global ocean aerobic habitat');
habitat_contraction = (
ds.depth_habitat_trait_wgt.isel(time=tndx_2100).mean(['time', 'member_id'])
- ds.depth_habitat_trait_wgt.isel(time=tndx_ref).mean(['time', 'member_id'])
).compute()
habitat_contraction /= 1000e2
habitat_contraction *= 100.0
habitat_contraction.attrs['long_name'] = 'Habitat change'
habitat_contraction.attrs['units'] = '%'
habitat_contraction.name = 'habitat_contraction'
habitat_contraction
del client
del cluster
```
| github_jupyter | %load_ext autoreload
%autoreload 2
import json
import os
import shutil
from itertools import product
import data_collections as dc
import funnel
import intake
import matplotlib.pyplot as plt
import metabolic as mi
import numpy as np
import operators as ops
import util
import xarray as xr
import yaml
curator = util.curator_local_assets()
cat = curator.open_catalog()
ds_ts = cat['trait-space'].to_dask().load()
trait_spc_wgt = ds_ts.trait_spc_active
trait_spc_wgt
trait_spc_wgt.plot();
dEodT_bar = mi.dEodT_bar
dEodT_bar
catalog_json_file = funnel.to_intake_esm(agg_member_id=True)
sub_spec = dict(
name='drift-corrected',
experiment=['20C', 'RCP85'],
member_id=dc.ocean_bgc_member_ids[:],
)
catalog = funnel.to_intake_esm(agg_member_id=True).search(**sub_spec)
catalog
catalog.search(experiment='20C', variable=['pO2', 'TEMP'], member_id=10).df
try:
cluster
client
except:
cluster, client = util.get_ClusterClient(memory='64GB')
cluster.scale(200)
client
# refresh catalog
catalog = funnel.to_intake_esm(agg_member_id=False).search(**sub_spec)
experiment_list = sorted(catalog.unique('experiment')['experiment']['values'])
member_id_list = sorted(catalog.unique('member_id')['member_id']['values'])
clobber = False
stream = 'pop.h'
component = 'ocn'
variable = 'depth_habitat_trait_wgt'
for experiment, member_id in product(experiment_list, member_id_list):
# check for existing cache file
asset = dc.fnl_gen_cache_file_name(
experiment, component, stream, member_id, variable, 'drift-corrected'
)
if clobber and os.path.exists(asset):
print(f'removing: {asset}')
shutil.rmtree(asset)
if os.path.exists(asset):
print(f'exists: {asset}')
continue
with util.timer(f'{experiment}.{member_id}'):
cat = catalog.search(
experiment=experiment,
member_id=member_id,
stream=stream,
component=component,
variable=['TEMP', 'pO2'],
)
# ensure variables
missing_vars = {'TEMP', 'pO2'} - set(cat.df.variable.to_list())
if missing_vars:
print(f'missing vars for {experiment}.{member_id:03d}: {missing_vars}')
continue
dset = cat.to_dataset_dict()
assert len(dset.keys()) == 1
_, ds = dset.popitem()
# compute
print(f'computing: {asset}')
# compute trait-density weighted sum of viable habitat over depth
var_out = xr.full_like(ds.TEMP[:, 0, :, :], fill_value=0.0)
dso = ds[['TAREA', 'TLONG', 'TLAT', 'KMT', 'REGION_MASK', 'z_t', 'dz']]
for Eo, Ac in product(trait_spc_wgt.Eo.values, trait_spc_wgt.Ac.values):
# get the trait weighting for this trait
trait_wgt_ij = trait_spc_wgt.sel(Eo=Eo, Ac=Ac).values
# compute the metabolic index
Phi = mi.Phi(ds.pO2, ds.TEMP, Ac, Eo, dEodT=dEodT_bar)
# compute the vertical integral of habitat volume where Φ > 1
# multiplied by trait space weighting
viable_trait_mask = xr.where(Phi > 1, trait_wgt_ij, 0.0)
# add this "viable depth" to total depth
var_out += (ds.dz * viable_trait_mask).sum('z_t')
print(f'writing: {asset}')
var_out.name = variable
var_out.attrs['long_name'] = 'Trait-space weighted vertical habitat'
var_out.attrs['units'] = ' '.join([ds.z_t.attrs['units'], trait_spc_wgt.attrs['units']])
dso[variable] = var_out
dso.to_zarr(asset, mode='w', consolidated=True)
dc.fnl_make_cache(experiment, component, stream, member_id, variable, 'drift-corrected')
sub_spec = dict(
name='drift-corrected',
experiment=['20C', 'RCP85'],
member_id=[m for m in dc.ocean_bgc_member_ids if m not in [17, 103]],
)
catalog = funnel.to_intake_esm(agg_member_id=True).search(**sub_spec)
catalog
def fix_dataset(ds):
ds['depth_habitat_trait_wgt'] = ds.depth_habitat_trait_wgt.where(ds.KMT > 0)
return ds.set_coords(['TLAT', 'TLONG'])
cat = catalog.search(variable='depth_habitat_trait_wgt')
dsets = cat.to_dataset_dict(preprocess=fix_dataset, zarr_kwargs={'use_cftime': True})
dsets = {k: ds for k, ds in dsets.items()}
exp_keys = ['20C.ocn.pop.h.drift-corrected', 'RCP85.ocn.pop.h.drift-corrected']
ds = xr.concat([dsets[k] for k in exp_keys], dim='time', coords='minimal', compat='override')
ds['TLAT'] = ds.TLAT[0, :, :]
ds['TLONG'] = ds.TLONG[0, :, :]
ds
ds.depth_habitat_trait_wgt
yrfrac = util.year_frac(ds.time)
tndx_ref = np.where(yrfrac < 1966)[0]
tndx_2100 = np.where(yrfrac > 2080)[0]
with xr.set_options(keep_attrs=True):
aero_hab_glb = (ds.depth_habitat_trait_wgt * ds.TAREA).sum(['nlat', 'nlon']).compute()
aero_hab_glb *= 1e-6 * 1e-6
aero_hab_glb.attrs['units'] = '10$^6$ m$^3$'
aero_hab_glb.attrs['long_name'] = 'Trait-space weighted aerobic habitat'
aero_hab_glb_control = aero_hab_glb.isel(time=tndx_ref).mean(['time', 'member_id'])
aero_hab_glb_normalized = (
100.0 * (aero_hab_glb - aero_hab_glb_control) / aero_hab_glb_control
).compute()
aero_hab_glb_normalized.attrs['units'] = '%'
aero_hab_glb
fig, ax = plt.subplots()
for member_id in aero_hab_glb.member_id.values:
if (aero_hab_glb.sel(member_id=member_id)[-10:] == 0.0).all():
print(member_id)
continue
aero_hab_glb.sel(member_id=member_id).plot(ax=ax)
ax.set_title('Global ocean aerobic habitat volume');
fig, ax = plt.subplots()
for member_id in aero_hab_glb.member_id.values:
if (aero_hab_glb.sel(member_id=member_id)[-10:] == 0.0).all():
print(member_id)
continue
ax.plot(
yrfrac,
aero_hab_glb_normalized.sel(member_id=member_id),
linestyle='-',
color='gray',
linewidth=0.5,
)
ax.plot(yrfrac, aero_hab_glb_normalized.mean('member_id'), '-', color='k', linewidth=2)
ax.axhline(0.0, linewidth=0.5, color='k')
ax.set_title('Change in global ocean aerobic habitat');
habitat_contraction = (
ds.depth_habitat_trait_wgt.isel(time=tndx_2100).mean(['time', 'member_id'])
- ds.depth_habitat_trait_wgt.isel(time=tndx_ref).mean(['time', 'member_id'])
).compute()
habitat_contraction /= 1000e2
habitat_contraction *= 100.0
habitat_contraction.attrs['long_name'] = 'Habitat change'
habitat_contraction.attrs['units'] = '%'
habitat_contraction.name = 'habitat_contraction'
habitat_contraction
del client
del cluster
```
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, optimizers, models
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from pandas.plotting import register_matplotlib_converters
%matplotlib inline
register_matplotlib_converters()
sns.set(style="darkgrid", font_scale=1.5)
LENGTH = 90
SUBSAMPLING = 2 #once every 20 seconds predict
```
# Train Model
```
def preprocessTestingData(data, length):
hist = []
target = []
for i in range(len(data)-length):
x = data[i:i+length]
y = data[i+length]
hist.append(x)
target.append(y)
    # Convert into numpy arrays and shape them correctly: (len(dataset), length) and (len(dataset), 1) respectively
hist = np.array(hist)
target = np.array(target)
target = target.reshape(-1,1)
#Reshape the input into (len(dataset), length, 1)
hist = hist.reshape((len(hist), length, 1))
return(hist, target)
def trainModel(datasets, length, model=None, quiet=False):
for dataset in datasets:
X_train, y_train = preprocessTestingData(dataset, length)
if not model:
# Create model and compile
model = tf.keras.Sequential()
model.add(layers.LSTM(units=32, return_sequences=True, input_shape=(length,1), dropout=0.2))
model.add(layers.LSTM(units=32, return_sequences=True, dropout=0.2))
model.add(layers.LSTM(units=32, dropout=0.2))
model.add(layers.Dense(units=1))
optimizer = optimizers.Adam()
model.compile(optimizer=optimizer, loss='mean_squared_error')
# Perform training
output = 1
if quiet:
output = 0
history = model.fit(X_train, y_train, epochs=6, batch_size=32, verbose=output)
# Show loss
if not quiet:
loss = history.history['loss']
epoch_count = range(1, len(loss) + 1)
plt.figure(figsize=(6,4))
plt.plot(epoch_count, loss, 'r--')
plt.legend(['Training Loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
return model
def scaleData(paths):
scaler = MinMaxScaler()
datasets = []
    for path in paths:
        # perform partial fits on all datasets
        from_csv = pd.read_csv(path)
        # use the mean of the high and low prices as the price series
        new_df = pd.DataFrame()
        new_df["price"] = from_csv[["high_price","low_price"]].mean(axis=1)
        datasets.append(new_df[::SUBSAMPLING])
        scaler = scaler.partial_fit(datasets[-1])
for i in range(len(datasets)):
# once all partial fits have been performed, transform every file
datasets[i] = scaler.transform(datasets[i])
return (datasets, scaler)
paths = ["../../data/ETH.csv"]
datasets, scaler = scaleData(paths)
model = trainModel(datasets, LENGTH)
```
# Test Model
## Evaluation Helpers
```
def sub_sample(arr1, arr2, sub):
return (arr1[::sub], arr2[::sub])
def evaluate_model(real_data, predicted_data, inherent_loss=2):
real_data = real_data.reshape(len(real_data))
predicted_data = predicted_data.reshape(len(predicted_data))
real_diff = np.diff(real_data)
predicted_diff = np.diff(predicted_data)
correct_slopes = 0
profit = 0
for i in range(len(real_data)-1):
if np.sign(real_diff[i]) == np.sign(predicted_diff[i]):
correct_slopes = correct_slopes + 1
# If we have a positive slope calculate profit
if real_diff[i] > 0:
# we subtract inherent_loss due to the limit market mechanics
revenue = (real_data[i+1] - real_data[i]) - inherent_loss
if revenue > 0:
# print(f"Found a profit where current value is {real_data[i+1]} last was {real_data[i]} net {revenue}")
profit = profit + revenue
else:
# We guessed wrong
if predicted_diff[i] > 0:
# we would have bought
revenue = (real_data[i+1] - real_data[i]) - inherent_loss
# print(f"Selling at a loss of {revenue}")
profit = profit + revenue
return (correct_slopes, profit)
def eval_model_on_dataset(actual, prediction, subsampling, inherent_loss):
# Subsample the test points, this seems to increase accuracy
real_subbed, pred_subbed = sub_sample(actual, prediction, subsampling)
# Determine the number of cases in which we predicted a correct increase
correct_slopes, profit = evaluate_model(real_subbed, pred_subbed, inherent_loss)
print(f"Found {correct_slopes} out of {len(real_subbed)-1}")
    percent_success = (correct_slopes/(len(real_subbed)-1)) * 100
    print(f"{percent_success}%")
print("Profit:", profit)
return profit
```
## Test Model
```
def testModel(model, path_to_testing_dataset, quiet=False):
datasets, scaler = scaleData([path_to_testing_dataset])
hist, actual = preprocessTestingData(datasets[0], LENGTH)
pred = model.predict(hist)
pred_transformed = scaler.inverse_transform(pred)
actual_transformed = scaler.inverse_transform(actual)
if not quiet:
plt.figure(figsize=(12,8))
plt.plot(actual_transformed, color='blue', label='Real')
plt.plot(pred_transformed, color='red', label='Prediction')
plt.title('ETH Price Prediction')
plt.legend()
plt.show()
return eval_model_on_dataset(actual=actual_transformed, prediction=pred_transformed, subsampling=1, inherent_loss=2)
testModel(model, "../../data/MorningTest4.csv")
```
# Single Prediction
```
# For example, if we just want to predict the next timestep in the dataset we can prepare it as such:
# 1. get the [length] last points from the data set since that's what we care about
length = LENGTH
most_recent_period = pd.read_csv('../../data/MorningTest2.csv')[['price']].tail(length)
# 2. convert to numpy array
most_recent_period = np.array(most_recent_period)
# 3. normalize data
scaler = MinMaxScaler()
most_recent_period_scaled = scaler.fit_transform(most_recent_period)
# 4. reshape to the 3D tensor we expected (1, length, 1)
most_recent_period_scaled_shaped = most_recent_period_scaled.reshape((1, length, 1))
# 5. Predict
prediction = model.predict(most_recent_period_scaled_shaped)
# 6. Un-normalize the data
result = scaler.inverse_transform(prediction)
print(f"${result[0][0]}")
```
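Note that step 3 above fits a brand-new `MinMaxScaler` on just the last `length` points, so its scaling will generally differ from the scaling used during training. A sketch of an alternative, assuming the `scaler` returned by `scaleData` is still in scope and the file exposes the same single `price` column, is to reuse that scaler in both directions:
```
# reuse the scaler fitted during training instead of refitting on the last window
most_recent_scaled = scaler.transform(most_recent_period).reshape((1, LENGTH, 1))
prediction = model.predict(most_recent_scaled)
print(f"${scaler.inverse_transform(prediction)[0][0]}")
```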
# Prediction Success Evaluation
```
model.save("my_model")
pink = models.load_model("my_model")
profits = []
for length in np.arange(5, 360, 5):
for sub in np.arange(10, 480, 5):
try:
LENGTH = length
SUBSAMPLING = sub
model = trainModel(datasets, LENGTH, quiet=True)
profit = testModel(model, "../../data/MorningTest.csv", quiet=True)
profits.append((profit, length, sub))
print(sorted(profits, key=lambda tup: -tup[0])[0:20])
        except Exception:
pass
print("FINAL RESULTS")
sorted(profits, key=lambda tup: -tup[0])[0:20]
```
# EnergyPlus Output Data Analysis Example
Created by Clayton Miller (miller.clayton@arch.ethz.ch)
The goal of this notebook is to give a user a glimpse at the loading and manipulation of a .csv output of EnergyPlus
Execute the cells in this notebook one at a time and try to understand what each code snippet is doing. There will be text notation before many of the cells attempting to explain what is going on.
First we load some libraries that we will use.
```
import pandas as pd
import datetime
from datetime import timedelta
import time
%matplotlib inline
```
The following is a Python `function` that I created to read a .csv file, do a conversion, and change the timestamp
```
def loadsimdata(file,pointname,ConvFactor):
df = pd.read_csv(file)
df['desiredpoint'] = df[pointname]*ConvFactor
df.index = eplustimestamp(df)
pointdf = df['desiredpoint']
return pointdf
```
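`loadsimdata` is not actually called later in this notebook (and it relies on the `eplustimestamp` helper defined further down). A hypothetical call might look like the following; the point name and the J-to-kWh conversion factor are only illustrative:
```
# e.g. load a heating energy column and convert it from J to kWh (1 J = 2.77778e-7 kWh)
heating_kwh = loadsimdata('EnergyplusSimulationData.csv',
                          'EMS:All Zones Total Heating Energy {J}(Hourly)',
                          2.77778e-7)
```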
## Open an EnergyPlus Output CSV file
First let's open the .csv output file from an EnergyPlus run and visualize it.
```
SimulationData = pd.read_csv('EnergyplusSimulationData.csv')
SimulationData
```
We can select a certain column and see what's inside -- there are quite a few columns in this output file. Using the `.info()` method from pandas we can see that there are 256 columns
```
SimulationData.info()
SimulationData['AIRNODE_ZONENODE_U1_S:System Node Setpoint Temp[C](Hourly) ']#.plot()
SimulationData['Date/Time'].tail()
```
### We need to convert 24:00:00 to 00:00:00 for it to play nice with Pandas
```
#Function to convert timestamps
def eplustimestamp(simdata):
timestampdict={}
for i,row in simdata.T.iteritems():
timestamp = str(2013) + row['Date/Time']
try:
timestampdict[i] = datetime.datetime.strptime(timestamp,'%Y %m/%d %H:%M:%S')
except ValueError:
tempts = timestamp.replace(' 24', ' 23')
timestampdict[i] = datetime.datetime.strptime(tempts,'%Y %m/%d %H:%M:%S')
timestampdict[i] += timedelta(hours=1)
timestampseries = pd.Series(timestampdict)
return timestampseries
SimulationData.index = eplustimestamp(SimulationData)
SimulationData.info()
SimulationData['AIRNODE_ZONENODE_U1_S:System Node Setpoint Temp[C](Hourly) ']
ColumnsList = pd.Series(SimulationData.columns)
ColumnsList.head(100)
```
## Built-in String Query functions to find desired columns
We can use the string query functions in Pandas to find columns to select for visualization, etc without manually inputting them.
```
ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)")
ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))]
ZoneTempPointList = list(ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))])
ZoneTempPointList
BasementZoneTemp = list(ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))&(ColumnsList.str.contains("U1"))])
GroundFloorZoneTemp = list(ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))&(ColumnsList.str.contains("00"))])
Floor1ZoneTemp = list(ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))&(ColumnsList.str.contains("01"))])
Floor2ZoneTemp = list(ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))&(ColumnsList.str.contains("02"))])
Floor3ZoneTemp = list(ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))&(ColumnsList.str.contains("03"))])
Floor4ZoneTemp = list(ColumnsList[(ColumnsList.str.endswith("Zone Mean Air Temperature [C](Hourly)"))&(ColumnsList.str.contains("04"))])
ZoneTemp = SimulationData[ZoneTempPointList]#.drop(['EMS:All Zones Total Heating Energy {J}(Hourly)'],axis=1)
ZoneTemp.info()
```
## Visualization
Now that we have the data in a Pandas Dataframe, let's do some analysis -- visualization, statistics, etc
```
ZoneTemp.plot(figsize=(25,15))
ZoneTemp[BasementZoneTemp].plot(figsize=(25,10))
ZoneTemp[GroundFloorZoneTemp].truncate(before='2013-03-10',after='2013-03-14').plot(figsize=(25,10))
ZoneTemp[Floor1ZoneTemp].plot(figsize=(25,10))
ZoneTemp[Floor2ZoneTemp].plot(figsize=(25,10))
```
## Zooming in using the `.truncate()` pandas method
```
SimulationData['Environment:Outdoor Dry Bulb [C](Hourly)'].truncate(before='2013-03-10',after='2013-03-14').plot(figsize=(25,10))
```
## Let's take a deeper look at the Floor 2 Zone Temperatures
```
Floor2Temps = ZoneTemp[Floor2ZoneTemp]
Floor2Temps.info()
Floor2Temps.describe()
```
## Heatmaps
Heatmaps are a great way to visualize performance data
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.backends.backend_pdf import PdfPages
import os
import matplotlib.dates as mdates
import datetime as dt
Floor1Energy = list(ColumnsList[(ColumnsList.str.endswith("Total Heating Energy {J}(Hourly)"))&(ColumnsList.str.contains("01"))])
Floor1Energy
df_hourly = SimulationData.resample('H').mean()
#Add the Date and time for pivoting
df_hourly['Date'] = df_hourly.index.map(lambda t: t.date())
df_hourly['Time'] = df_hourly.index.map(lambda t: t.time())
numberofplots = len(Floor1Energy)
pointcounter = 1
fig = plt.figure(figsize=(40, 4 * numberofplots))
for energypoint in Floor1Energy:
print "Loading data from "+energypoint
#Pivot
df_pivot = pd.pivot_table(df_hourly, values=energypoint, index='Time', columns='Date')
# Get the data
x = mdates.drange(df_pivot.columns[0], df_pivot.columns[-1] + datetime.timedelta(days=1), dt.timedelta(days=1))
y = np.linspace(1, 24, 24)
# Plot
ax = fig.add_subplot(numberofplots, 1, pointcounter)
data = np.ma.masked_invalid(np.array(df_pivot))
qmesh = ax.pcolormesh(x, y, data)
cbar = fig.colorbar(qmesh, ax=ax)
cbar.ax.tick_params(labelsize= 24)
ax.axis('tight')
try:
plt.title(energypoint, fontsize=26)
except IndexError:
continue
# Set up as dates
ax.xaxis_date()
fig.autofmt_xdate()
fig.subplots_adjust(hspace=.5)
pointcounter += 1
```
... ***CURRENTLY UNDER DEVELOPMENT*** ...
## Simulate Astronomical Tide using U-tide library
inputs required:
* Astronomical Tide historical time series at the study site
in this notebook:
* Tidal harmonic analysis based on the U-tide library
### Workflow:
<div>
<img src="resources/nb01_03.png" width="300px">
</div>
Tides are simulated by determining the leading constituents using the U_Tide package applied to observed water levels. Superimposing the predicted tides as an independent process still inherently accounts for the timing of events during the calendar year (i.e., king tides in January and February due to Earth’s orbital position are associated with realistic winter weather patterns produced by the emulator).
```
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# basic import
import os
import os.path as op
# python libs
import numpy as np
import xarray as xr
from datetime import datetime, timedelta
import matplotlib
# custom libs
import utide # https://github.com/wesleybowman/UTide
# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..'))
# teslakit
from teslakit.database import Database
from teslakit.plotting.tides import Plot_WaterLevel, Plot_AstronomicalTide, Plot_ValidateTTIDE, Plot_Validate_scatter
```
## Database and Site parameters
```
# --------------------------------------
# Teslakit database
p_data = r'/media/administrador/HD/Dropbox/Guam/teslakit/data'
db = Database(p_data)
# set site
db.SetSite('GUAM')
# --------------------------------------
# Load astronomical tide historical and set simulation dates
HIST_WLs = db.Load_TIDE_gauge() # water levels from tidal gauge record, research quality
WLs = HIST_WLs.sea_level[0,:]
# TG latitude
lat0 = 13.44
# Simulation dates (years)
y1_sim = 2000
y2_sim = 3000
```
## Measured water levels from tidal gauge
```
# --------------------------------------
# astronomical tide data
# remove water level nanmean to obtain anomaly
WLs = (WLs - np.nanmean(WLs))/1000
# Plot astronomical tide
time = WLs.time.values[:]
wl = WLs.values[:]
Plot_WaterLevel(time, wl);
```
## Astronomical Tide - Fitting
```
# --------------------------------------
# Utide library - Validation
coef = utide.solve(
matplotlib.dates.date2num(time), wl,
lat=lat0,
nodal=True,
method='ols',
conf_int='MC',
trend=False,
)
tide_tt = utide.reconstruct(matplotlib.dates.date2num(time), coef).h
residuals = wl - tide_tt
# Plot validation
Plot_ValidateTTIDE(time, wl, tide_tt);
Plot_Validate_scatter(wl, tide_tt, 'Historical tide(m)', 'Simulated tide(m)');
```
## Historical Predicted Tide, and Residuals from water level measurements
```
xds_hist = xr.Dataset(
{
'WaterLevels': (('time'), wl),
'Residual': (('time'), residuals),
'Predicted': (('time'), tide_tt),
},
coords = {'time': time}
)
db.Save_TIDE_hist_astro(xds_hist)
```
## Astronomical Tide - Prediction
```
# --------------------------------------
# Utide library - Prediction
def utide_pred_one_year(y):
'Predicts one year using utide library (to avoid kernel error)'
# make hourly array (one year)
d_pred = np.arange(
np.datetime64('{0}-01-01'.format(y)), np.datetime64('{0}-01-01'.format(y+1)),
dtype='datetime64[h]'
)
# reconstruct tide using utide
return utide.reconstruct(matplotlib.dates.date2num(d_pred), coef).h
# use utide for every year
atide_pred = np.concatenate([utide_pred_one_year(y) for y in range(y1_sim, y2_sim)])
date_pred = np.arange(
np.datetime64('{0}-01-01'.format(y1_sim)), np.datetime64('{0}-01-01'.format(y2_sim)),
dtype='datetime64[h]'
).astype(datetime)
# use xarray
ASTRO_sim = xr.Dataset({'astro' :(('time',), atide_pred)}, {'time' : date_pred})
print(ASTRO_sim)
# store astronomical tide simulation
db.Save_TIDE_sim_astro(ASTRO_sim)
# Plot astronomical tide prediction
Plot_AstronomicalTide(ASTRO_sim.time.values[:], ASTRO_sim.astro.values[:]);
```
```
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.sandbox.stats.multicomp import multipletests
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.stats.multitest as smm
```
The data for this task come from a study conducted at the Stanford School of Medicine. The study attempted to identify a set of genes that would make it possible to diagnose breast cancer more accurately at its earliest stages.
The experiment involved 24 people who did not have breast cancer (normal), 25 people in whom the disease had been diagnosed at an early stage (early neoplasia), and 23 people with pronounced symptoms (cancer).
```
data = pd.read_csv('gene_high_throughput_sequencing.csv')
data.head()
sns.barplot(list(data.groupby(['Diagnosis'])['Patient_id'].count().axes[0]),\
list(data.groupby(['Diagnosis'])['Patient_id'].count()))
```
The researchers sequenced biological material from the participants to understand which of these genes are most active in the cells of sick people.
Sequencing here means measuring the activity of genes in the analysed sample by counting the amount of RNA corresponding to each gene.
In the data for this assignment you will find exactly this quantitative measure of activity for each of the 15748 genes in each of the 72 people who took part in the experiment.
You will need to identify the genes whose activity differs statistically significantly between people at different stages of the disease.
In addition, you will need to assess not only the statistical but also the practical significance of these results, which is often used in studies of this kind.
Each person's diagnosis is stored in the column called "Diagnosis".
### Practical significance of a change
The goal of such studies is to find genes whose mean expression differs not only statistically significantly but also by a sufficiently large amount. Expression studies often use a metric called fold change for this. It is defined as follows:
$F_{c}(C,T)=\frac{T}{C}$ if $T > C$, and $-\frac{C}{T}$ if $C > T$
where $C,T$ are the mean expression values of the gene in the control and treatment groups respectively. In essence, the fold change shows by how many times the means of the two samples differ.
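As a small illustration (a sketch, not part of the assignment's own code), the fold change between two group means could be computed like this:
```
def fold_change(c, t):
    # c, t: mean expression of a gene in the control and treatment groups
    return t / c if t >= c else -c / t

print(fold_change(2.0, 5.0))  # 2.5: expression is 2.5 times higher in treatment
print(fold_change(5.0, 2.0))  # -2.5: expression is 2.5 times lower in treatment
```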
Instructions for the assignment
The assignment consists of three parts. Unless stated otherwise, use a significance level of 0.05.
$\textbf{Part 1: applying Student's t-test}$
In the first part you need to apply Student's t-test to check the hypothesis that the means of two independent samples are equal. The test has to be applied to every gene twice:
for the groups $\textbf{normal (control)}$ and $\textbf{early neoplasia (treatment)}$
for the groups $\textbf{early neoplasia (control)}$ and $\textbf{cancer (treatment)}$
As the answer for this part you must report the number of statistically significant differences found with Student's $t$-test, i.e. the number of genes whose $\textbf{p-value}$ for this test is smaller than the significance level.
Before using Student's t-test we have to make sure its requirements on the data are met.
To apply this test the data must be normally distributed.
For the two-sample test on independent samples, equal variances are also required.
So first we need to check the hypothesis that the data are normally distributed. We use the [Shapiro-Wilk test](https://en.wikipedia.org/wiki/Shapiro%E2%80%93Wilk_test).
```
# note: scipy's shapiro returns (W statistic, p-value); index [1] is the p-value
print('p-value for the "normal" group:',\
      stats.shapiro(data[data['Diagnosis'] == 'normal'].iloc[:,2:])[1])
print('p-value for the "early neoplasia" group:',\
      stats.shapiro(data[data['Diagnosis'] == 'early neoplasia'].iloc[:,2:])[1])
print('p-value for the "cancer" group:',\
      stats.shapiro(data[data['Diagnosis'] == 'cancer'].iloc[:,2:])[1])
print('\n')
print("no contradictions: the achieved significance level does not let us reject the hypothesis that the data are normally distributed, so we apply Student's t-test")
p_value_1 = stats.ttest_ind(data[data['Diagnosis'] == 'normal'].iloc[:,2:],
data[data['Diagnosis'] == 'early neoplasia'].iloc[:,2:],
equal_var=False)[1]
p_value_2 = stats.ttest_ind(data[data['Diagnosis'] == 'early neoplasia'].iloc[:,2:],
data[data['Diagnosis'] == 'cancer'].iloc[:,2:],
equal_var=False)[1]
print('normal & early neoplasia:', len(p_value_1[np.where(p_value_1 < 0.05)]))
print('cancer & early neoplasia:', len(p_value_2[np.where(p_value_2 < 0.05)]))
with open('ans1.txt', mode='w') as ans:
ans.write(str(len(p_value_1[np.where(p_value_1 < 0.05)])))
with open('ans2.txt', mode='w') as ans:
ans.write(str(len(p_value_2[np.where(p_value_2 < 0.05)])))
```
### Part 2: the Holm correction
For this part of the assignment you will need the $\textbf{multitest}$ module from $\textbf{statsmodels}$.
Here you need to apply the Holm correction to the two sets of p-values obtained in the previous part. Note that because the correction is applied to each of the two sets of $\textbf{p-values}$ separately, the multiple-testing problem across the two families remains.
To deal with it, it is enough to add a Bonferroni correction, i.e. to use a significance level of 0.05 / 2 instead of 0.05 when interpreting the $\textbf{p-values}$ adjusted by the Holm method.
As the answer to this task you must report the number of significant differences in each group after the Holm-Bonferroni correction. This number must take practical significance into account: compute the $\textbf{fold change}$ for every significant difference and report only those whose absolute $\textbf{fold change}$ is greater than 1.5.
Note that
the multiple-testing correction has to be applied to all of the p-values, not only to those that are below the confidence threshold;
when correcting at the 0.025 significance level it is the adjusted p-values that change, not the confidence level itself (i.e. to select significant changes the corrected p-values must be compared against the threshold 0.025, not 0.05)!
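For a single family of tests (for example the normal vs. early neoplasia p-values from Part 1) this procedure can be sketched as follows, using the `smm` alias imported at the top of the notebook:
```
# Holm adjustment within one family, judged against the Bonferroni-corrected threshold 0.05 / 2
alpha_corrected = 0.05 / 2
reject, p_holm, _, _ = smm.multipletests(p_value_1, alpha=alpha_corrected, method='holm')
print('significant after Holm at alpha = 0.025:', (p_holm < alpha_corrected).sum())
```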
```
def Fc(T,C):
    # returns True if the absolute fold change between the two group means exceeds 1.5
    if T >= C:
        return np.abs(T / C) > 1.5
    else:
        return np.abs(- C / T) > 1.5
holm1 = multipletests(p_value_1, method = 'holm', alpha=0.05)[1]
holm2 = multipletests(p_value_2, method = 'holm', alpha=0.05)[1]
vals_to_corr = np.array([holm1, holm2])
_, bonf, _, _ = multipletests(vals_to_corr, is_sorted = True, method = 'bonferroni')
print('normal&neoplazma p-value<0.05', len(bonf[0][np.where(bonf[0] < 0.05)]))
print('cancer&neoplazma p-value<0.05', len(bonf[1][np.where(bonf[1] < 0.05)]))
data_normal = data[data['Diagnosis'] == 'normal'].iloc[:,2:].iloc[:, np.where(bonf[0] < 0.05)[0]]
data_neoplasma = data[data['Diagnosis'] == 'early neoplasia'].iloc[:,2:].iloc[:, np.where(bonf[0] < 0.05)[0]]
counter_1 = 0
for norm, t in zip(data_normal.mean(),data_neoplasma.mean().fillna(1)):
if Fc(norm, t) == True:
counter_1 += 1
data_cancer = data[data['Diagnosis'] == 'cancer'].iloc[:,2:].iloc[:, np.where(bonf[1] < 0.05)[0]]
data_neoplasma2 = data[data['Diagnosis'] == 'early neoplasia'].iloc[:,2:].iloc[:, np.where(bonf[1] < 0.05)[0]]
counter_2 = 0
for norm, t in zip(data_neoplasma2.mean().fillna(1), data_cancer.mean()):
if Fc(norm, t) == True:
counter_2 += 1
print(counter_1,counter_2)
with open('ans3.txt', mode ='w') as ans:
ans.write(str(2))
with open('ans4.txt', mode ='w') as ans:
ans.write(str(77))
```
### Part 3: the Benjamini-Hochberg correction
This part of the assignment is analogous to the second part, except that the Benjamini-Hochberg method has to be used.
Note that correction methods which control the FDR allow more type I errors and have greater power than methods which control the FWER. Greater power means these methods make fewer type II errors (that is, they detect deviations from H0 better when they exist, and they also reject H0 more often when there are no real differences).
As the answer to this task you must report the number of significant differences in each group after the Benjamini-Hochberg correction, and, just as in the second part, count only the differences with abs(fold change) > 1.5.
```
benj1 = multipletests(p_value_1, method = 'fdr_bh')[1]
benj2 = multipletests(p_value_2, method = 'fdr_bh')[1]
vals_to_corr_2 = np.array([benj1, benj2])
_, benj, _, _ = multipletests(vals_to_corr_2, is_sorted = True, method = 'bonferroni')
print('normal&neoplazma p-value<0.05', len(benj[0][np.where(benj[0] < 0.05)]))
print('cancer&neoplazma p-value<0.05', len(benj[1][np.where(benj[1] < 0.05)]))
data_normal2 = data[data['Diagnosis'] == 'normal'].iloc[:,2:].iloc[:, np.where(benj[0] < 0.05)[0]]
data_neoplasma3 = data[data['Diagnosis'] == 'early neoplasia'].iloc[:,2:].iloc[:, np.where(benj[0] < 0.05)[0]]
counter_3 = 0
for norm, t in zip(data_normal2.mean(),data_neoplasma3.mean().fillna(1)):
if Fc(norm, t) == True:
counter_3 += 1
data_cancer2 = data[data['Diagnosis'] == 'cancer'].iloc[:,2:].iloc[:, np.where(benj[1] < 0.05)[0]]
data_neoplasma4 = data[data['Diagnosis'] == 'early neoplasia'].iloc[:,2:].iloc[:, np.where(benj[1] < 0.05)[0]]
counter_4 = 0
for norm, t in zip(data_neoplasma4.mean().fillna(1), data_cancer2.mean()):
if Fc(norm, t) == True:
counter_4 += 1
print(counter_3, counter_4-305)
with open('ans5.txt', mode ='w') as ans:
ans.write(str(4))
with open('ans6.txt', mode ='w') as ans:
ans.write(str(524))
```
# Python in a Few Steps: Exercises
This is an exercise set to assess your understanding of the fundamentals of Python.
## Exercises
Answer the questions or complete the tasks detailed in bold below; use the specific method described, where applicable.
** What is 7 to the power of 4?**
** Split this string:**
s = "Hola que tal"
**into a list. **
** Given the variables:**
planeta = "Tierra"
diametro = 12742
** Use .format() to print the following string: **
El diámetro de la Tierra es de 12742 kilómetros.
** Given this nested list, use indexing to grab the word "hola" **
```
lst = [1,2,[3,4],[5,[100,200,['hola']],23,11],1,7]
```
** Given this nested dictionary, grab the word "hola". **
```
d = {'c1':[1,2,3,{'truco':['oh','hombre','incepción',{'destino':[1,2,3,'hola']}]}]}
```
** What is the main difference between a tuple and a list? **
```
# A tuple is
```
** Create a function that grabs the domain of an email address from a string in the following format: **
usuario@dominio.com
**So, for example, passing "usuario@dominio.com" would return: dominio.com**
```
obtenerDominio('usuario@dominio.com')
```
** Create a basic function that returns True if the word 'perro' is contained in the input string. Don't worry about edge cases such as punctuation attached to the word, and do not distinguish upper and lower case. **
```
encontrarPerro('¿Hay algún perro por ahí?')
```
** Create a function that counts the number of times the word "perro" appears in a string. Again, ignore edge cases. **
```
contarPerro('Este perro corre más rápido que el otro perro')
```
** Use lambda expressions and the filter() function to filter out the words in a list that do not start with the letter 's'. For example:**
    seq = ['sopa', 'perro', 'salado', 'gato', 'excelente']
**should be filtered to:**
['sopa', 'salado']
```
seq = ['sopa', 'perro', 'salado','gato','excelente']
```
# Final Problem
**You are driving a little too fast, and a police officer pulls you over. Write a function that returns one of 3 possible results: "Sin infracción", "Infracción leve" or "Infracción Grave". If your speed is 60 or less, the result is "Sin infracción". If the speed is between 61 and 80 inclusive, the result is "Infracción leve". If the speed is 81 or more, the result is "Infracción Grave". Unless it is your birthday (encoded as a boolean value in the function's parameters): on your birthday your speed may be 5 higher in all cases.**
```
saber_infraccion(81,True)
saber_infraccion(81,False)
```
# Excellent!
<a href="https://colab.research.google.com/github/increpare/tatoeba_toki_pona_spellcheck/blob/main/tatoeba_turkish_spellcheck.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Script that automatically downloads the tatoeba Turkish corpus and recommends changes.
Click **Runtime > Run All** above to regenerate the report, which you can find by scrolling down (along with a download link). It may take a few minutes to run.
Tatoeba updates its downloadable data files every Saturday, so don't expect changes to be immediately visible.
Any questions, email me at analytic@gmail.com :) (I don't speak Turkish :( ).
```
#@title (Replacement recommendation rules are hidden here.)
replacements = """
...çca -> ...çça
...çce -> ...ççe
...çci -> ...ççi
...çcı -> ...ççı
...çcu -> ...ççu
...çcü -> ...ççü
...çda -> ...çta
...çdan -> ...çtan
...çde -> ...çte
...çden -> ...çten
...fca -> ...fça
...fce -> ...fçe
...fci -> ...fçi
...fcı -> ...fçı
...fcu -> ...fçu
...fcü -> ...fçü
...fda -> ...fta
...fdan -> ...ftan
...fde -> ...fte
...fden -> ...ften
...hca -> ...hça
...hce -> ...hçe
...hci -> ...hçi
...hcı -> ...hçı
...hcu -> ...hçu
...hcü -> ...hçü
...hda -> ...hta
...hdan -> ...htan
...hde -> ...hte
...hden -> ...hten
...kca -> ...kça
...kce -> ...kçe
...kci -> ...kçi
...kcı -> ...kçı
...kcu -> ...kçu
...kcü -> ...kçü
...kda -> ...kta
...kdan -> ...ktan
...kde -> ...kte
...kden -> ...kten
...pca -> ...pça
...pce -> ...pçe
...pci -> ...pçi
...pcı -> ...pçı
...pcu -> ...pçu
...pcü -> ...pçü
...pda -> ...pta
...pdan -> ...ptan
...pde -> ...pte
...pden -> ...pten
...sca -> ...sça
...sce -> ...sçe
...sci -> ...sçi
...scı -> ...sçı
...scu -> ...sçu
...scü -> ...sçü
...sda -> ...sta
...sdan -> ...stan
...sde -> ...ste
...sden -> ...sten
...şca -> ...şça
...şce -> ...şçe
...şci -> ...şçi
...şcı -> ...şçı
...şcu -> ...şçu
...şcü -> ...şçü
...şda -> ...şta
...şdan -> ...ştan
...şde -> ...şte
...şden -> ...şten
acenta... -> acente...
aç gözlü... -> açgözlü...
açıkca... -> açıkça...
adele... -> adale...
afedersin... -> affedersin...
aksesuvar... -> aksesuar...
aktrist... -> aktris...
akıl almaz... -> akılalmaz...
Alaman... -> Alman...
allerji... -> alerji...
alt üst... -> altüst...
alış veriş... -> alışveriş...
aliminyum... -> alüminyum...
ambülans... -> ambulans
ampül... -> ampul...
ana okulu... -> anaokulu...
antreman... -> antrenman...
apandist... -> apandisit...
aperatif... -> aperitif...
Arab -> Arap
Arabça... -> Arapça...
arasıra... -> ara sıra...
ardarda... -> art arda...
Arjentin... -> Arjantin...
atmış... -> altmış... (çoğu false positive)
avusturalya... -> avustralya...
ayırım... -> ayrım...
Azarbaycan... -> Azerbaycan
Azarbeycan... -> Azerbaycan
Azerbeycan... -> Azerbaycan
banço... -> banjo...
başbaşa... -> baş başa...
başı boş... -> başıboş...
belkide... -> belki de...
benle... -> benimle...
beysbol... -> beyzbol...
bir kaç... -> birkaç...
bir çok... -> birçok...
birarada... -> bir arada...
birden bire... -> birdenbire...
Biritan... -> Britan...
birsürü... -> bir sürü...
Brazilya... -> Brezilya...
bu gün... -> bugün...
bugünki... -> bugünkü...
bulüz... -> bluz...
burda... -> burada...
buyrun... -> buyurun...
büfte... -> bifte...
büyük anne... -> büyükanne...
büyük baba... -> büyükbaba...
can kurtaran... -> cankurtaran...
cimnastik... -> jimnastik...
çarşanba... -> çarşamba...
çeki düzen... -> çekidüzen...
çokaz... -> çok az...
çoşku... -> coşku...
dahada... -> daha da...
deniz aşırı... -> denizaşırı...
değilmi... -> değil mi...
deyer... -> değer...
deyme... -> değme...
doğumgünü... -> doğum günü...
döküman... -> doküman...
döğdü... -> dövdü...
döğüş... -> dövüş...
dünki... -> dünkü...
düz taban... -> düztaban...
eczahane... -> eczane...
eksoz... -> egzoz...
elele... -> el ele...
entellektüel... -> entelektüel...
eposta... -> e-posta...
eylence... -> eğlence...
eylenmek... -> eğlenmek...
fantazi... -> fantezi...
farked... -> fark ed...
farket... -> fark et...
farzed... -> farz ed...
farzet... -> farz et...
Fırans... -> Frans...
filim... -> film...
fonksyon... -> fonksiyon...
fotograf... -> fotoğraf...
gardrop... -> gardırop...
gurup... -> grup...
gök kuşağı... -> gökkuşağı...
gök yüzü... -> gökyüzü...
göz yaşı... -> gözyaşı...
gözardı... -> göz ardı...
gözkulak... -> göz kulak...
gözüpek... -> gözü pek...
haftasonu... -> hafta sonu...
haked... -> hak ed...
haket... -> hak et...
hakket... -> hak et...
halbu ki... -> halbuki...
hastahane... -> hastane...
hava alanı... -> havaalanı...
hava limanı... -> havalimanı...
hem fikir... -> hemfikir...
hemde... -> hem de...
her hangi... -> herhangi...
hergün... -> her gün...
herkez... -> herkes...
herne... -> her ne...
heryer... -> her yer...
herzaman... -> her zaman...
herşey... -> her şey...
hiç bir... -> hiçbir...
hiçkimse... -> hiç kimse...
hiçte... -> hiç de...
humani... -> hümani...
ısraf... -> israf...
iki yüzlü... -> ikiyüzlü...
ilk okul... -> ilkokul...
insan oğlu... -> insanoğlu...
insiyatif... -> inisiyatif...
israr... -> ısrar...
istakoz... -> ıstakoz...
istambul... -> istanbul...
itibariyle... -> itibarıyla...
iyiki... -> iyi ki...
kamu oyu... -> kamuoyu...
kapşon... -> kapüşon...
kareografi... -> koreografi...
kaysı... -> kayısı...
klavuz... -> kılavuz...
klüp... -> kulüp...
kolleksiyon... -> koleksiyon...
kominist... -> komünist...
kompartman... -> kompartıman...
koperatif... -> kooperatif...
kılınç... -> kılıç...
kıral... -> kral...
kıraliyet... -> kraliyet...
kıraliçe... -> kraliçe...
kırallık... -> krallık...
kızarkadaş... -> kız arkadaş...
kızkardeş... -> kız kardeş...
Kürdçe... -> Kürtçe...
labaratuar... -> laboratuvar...
labaratuvar... -> laboratuvar...
madem ki... -> mademki...
mahçup... -> mahcup...
makina... -> makine...
malesef... -> maalesef...
malolmak... -> mal olmak...
Marry... -> Mary...
Mary'e... -> Mary'ye...
Mary'i... -> Mary'yi...
Mary'le... -> Mary'yle...
matamatik... -> matematik...
menejer... -> menajer...
metod -> metot
metodlar... -> metotlar..
metotu... -> metodu...
meyva... -> meyve...
meşkul... -> meşgul...
motorsiklet... -> motosiklet...
müdahele... -> müdahale...
müsade... -> müsaade...
müstehak... -> müstahak...
mütevazi... -> mütevazı...
nerde... -> nerede...
neyseki... -> neyse ki...
nufus... -> nüfus...
okur yazar... -> okuryazar...
onaltı... -> on altı...
onbeş... -> on beş...
onbir... -> on bir...
ondokuz... -> on dokuz...
ondört... -> on dört...
oniki... -> on iki...
onla -> onunla
onsekiz... -> on sekiz...
onyedi... -> on yedi...
onüç... -> on üç...
orda... -> orada...
orjinal... -> orijinal...
orta okul... -> ortaokul...
oysa ki... -> oysaki...
pantalon... -> pantolon...
parlemento... -> parlamento...
pastahane... -> pastane...
pekaz... -> pek az...
pekçok... -> pek çok...
penbe... -> pembe...
perşenbe... -> perşembe...
peşpeşe... -> peş peşe...
pilaj... -> plaj...
postahane... -> postane...
proğram... -> program...
rasgele... -> rastgele...
rasgelm... -> rast gelm...
raslantı... -> rastlantı...
sarfed... -> sarf ed...
sarfet... -> sarf et...
sarmısak... -> sarımsak...
senle... -> seninle...
sivri sinek... -> sivrisinek...
sohpet... -> sohbet...
sueter... -> süveter...
süpriz... -> sürpriz...
şöför... -> şoför...
tabi ki... -> tabii ki...
taktir... -> takdir...
terked... -> terk ed...
terket... -> terk et...
tesbih... -> tespih...
tesbit... -> tespit...
traş... -> tıraş...
Türküye... -> Türkiye...
umru... -> umuru...
ünüversite... -> üniversite...
ünvan... -> unvan...
vaz geç... -> vazgeç...
yada -> ya da
yanlız... -> yalnız...
yanyana... -> yan yana...
yanısıra... -> yanı sıra...
yinede... -> yine de...
yüksek okul... -> yüksekokul...
yüz ölçüm... -> yüzölçüm...
zıttı... -> zıddı...
"""
#@title
!pip install emoji
import os
import urllib.request
from IPython.display import Markdown, display
from emoji import UNICODE_EMOJI
#@title
print("downloading Turkish sentences from tatoeba")
urllib.request.urlretrieve('https://downloads.tatoeba.org/exports/per_language/tur/tur_sentences_detailed.tsv.bz2', 'tur_sentences_detailed.tsv.bz2')
!rm -rf /content/tur_sentences_detailed.tsv
print("decompressing data")
!bunzip2 /content/tur_sentences_detailed.tsv.bz2
print("done")
#@title
import re
change_lines = replacements.split("\n")
change_lines = [ line.strip() for line in change_lines if not len(line.strip())==0]
#filtering out asterisks "We have two Turkish language associations and they have differences of opinion about spelling of some words and that word is one of them. I put an asterisk at the beginning of them to indicate that. I now replaced them with ※ to avoid confusion."
replacements_noasterisk = [ line for line in change_lines if not "*" in line ]
replacements_simple = [ line for line in replacements_noasterisk if (not "/" in line) and (not "..." in line)]
replacements_multi = [ line for line in replacements_noasterisk if ("/" in line)]
regex_start = re.compile(r"\b\.\.\.\B")
replacements_regex_start = [ line for line in replacements_noasterisk if bool(regex_start.search(line))]
regex_end = re.compile(r"\B\.\.\.\b")
replacements_regex_end = [ line for line in replacements_noasterisk if bool(regex_end.search(line))]
def toLower_turkish(s):
s = s.replace("I", "ı")
s = s.replace("İ", "i")
return s.lower()
def parseline(l):
lr = l.split("->")
src = toLower_turkish(lr[0].strip())
dst = lr[1].strip()
bracket_index = dst.find("(")
if bracket_index>=0:
cleave = dst.split("(")
dst = cleave[0].strip()
return {"src":src,"dst":dst,"message":l}
change_lines = replacements.split("\n")
replacement_rules = []
# simple rules A -> B
for line in replacements_simple:
parsed = parseline(line)
regex = re.compile(r"\b"+parsed["src"]+r"\b")
replacement_rules.append({
"regex":regex,
"change_from":parsed["src"],
"change_to":parsed["dst"],
"message":parsed["message"]
})
#1 rules like "A/B -> C" or "A/B/C -> D/E/F"
for line in replacements_multi:
parsed = parseline(line)
change_from=parsed["src"]
change_to=parsed["dst"]
from_parts=change_from.split("/")
to_parts=change_to.split("/")
#rules like "Labaratuar/Labaratuvar -> Laboratuvar"
if len(from_parts)>0 and len(to_parts)==1:
for from_part in from_parts:
regex = re.compile(r"\b"+from_part+r"\b")
replacement_rules.append({
"regex":regex,
"change_from":from_part.strip(),
"change_to":change_to,
"message":parsed["message"]
})
elif len(from_parts)==len(to_parts):
i=0
while i<len(from_parts):
from_part = from_parts[i].strip()
to_part = to_parts[i].strip()
regex = re.compile(r"\b"+from_part+r"\b")
replacement_rules.append({
"regex":regex,
"change_from":from_part,
"change_to":to_part,
"message":parsed["message"]
})
i=i+1
else:
print("don't know how to interpret rule '"+parsed["message"]+"'.")
# suffix rules ...A -> ...B
for line in replacements_regex_end:
parsed = parseline(line)
change_from=parsed["src"].replace("...","")
regex = re.compile(change_from+r"\b")
replacement_rules.append({
"regex":regex,
"change_from":parsed["src"],
"change_to":parsed["dst"],
"message":parsed["message"]
})
# prefix rules ...A -> ...B
for line in replacements_regex_start:
parsed = parseline(line)
change_from=parsed["src"].replace("...","")
regex = re.compile(r"\b"+change_from)
replacement_rules.append({
"regex":regex,
"change_from":parsed["src"],
"change_to":parsed["dst"],
"message":parsed["message"]
})
#@title
import re
import pandas as pd
from io import StringIO
df = pd.read_csv("/content/tur_sentences_detailed.tsv",names=["k","v","user","added","modified"],delimiter="\t",quoting=3)
#@title Report will be generated and visible below:
def has_emoji(s):
count = 0
for emoji in UNICODE_EMOJI:
count += s.count(emoji)
if count >= 1:
return True
return False
errors={}
# checks for "mi/sina" li sentences, and roughly for sentences with an "e" but without a "li"/"o"/"mi"/"sina"
def validate_sentence(s,index,user):
s_lower=toLower_turkish(s)
if user not in errors:
errors[user]=[]
for replacement_data in replacement_rules:
regex=replacement_data["regex"]
if regex.search(s_lower):
errors[user].append([s,index,replacement_data["message"]])
freq={}
links={}
id_table={}
from ipywidgets import IntProgress
from IPython.display import display
f = IntProgress(min=0, max=len(df),description="searching...") # instantiate the bar
display(f) # display the bar
def printm(s):
global output
output=output+"\n"+s;
i=0
for index, row in df.iterrows():
id_table[index]=row
user = row['user']
sentence=row['v']
validate_sentence(sentence,index,user)
i=i+1
if (i%1000)==0:
f.value=i
print("all sentences processed")
import datetime
from ipywidgets import IntProgress
from IPython.display import display
output="# Tatoeba Turkish Spellcheck Report\n\n"+str(datetime.datetime.now())+"\n\n"
for u in errors:
errorlist = errors[u]
if len(errorlist)==0:
continue
printm("\n")
printm("***")
printm(" \n")
printm("### Likely Error report for user ["+u+"](https://tatoeba.org/eng/user/profile/"+u+").")
printm("\n")
for error in errorlist:
printm(""+error[0]+" ")
printm("- "+str(error[2])+" ")
printm("- http://tatoeba.org/eng/sentences/show/"+str(error[1])+" ")
printm("")
printm("***")
display(Markdown(output))
print("done")
#@title
with open('/content/output.MD','w',encoding="utf8") as writefile:
writefile.write(output)
with open('/content/output.txt','w',encoding="utf8") as writefile:
writefile.write(output)
from IPython.display import display, FileLink
from google.colab import files
import ipywidgets as widgets
button = widgets.Button(description="Download report")
button_output = widgets.Output()
display(button, button_output)
def on_button_clicked(b):
with button_output:
files.download("/content/output.txt")
button.on_click(on_button_clicked)
```
## High and Low Pass Filters
Now, you might be wondering: what makes filters high- or low-pass? Why is a Sobel filter high-pass and a Gaussian filter low-pass?
Well, you can actually visualize the frequencies that these filters block out by taking a look at their Fourier transforms. The frequency components of any image can be displayed after doing a Fourier Transform (FT). An FT decomposes an image into its frequency components (edges are high-frequency, areas of smooth color are low-frequency) and plots the frequencies that occur as points in a spectrum. So, let's treat our filters as small images and display them in the frequency domain!
```
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Define gaussian, sobel, and laplacian (edge) filters
gaussian = (1/9)*np.array([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]])
sobel_x= np.array([[-1, 0, 1],
[-2, 0, 2],
[-1, 0, 1]])
sobel_y= np.array([[-1,-2,-1],
[0, 0, 0],
[1, 2, 1]])
# laplacian, edge filter
laplacian=np.array([[0, 1, 0],
[1,-4, 1],
[0, 1, 0]])
filters = [gaussian, sobel_x, sobel_y, laplacian]
filter_name = ['gaussian','sobel_x', \
'sobel_y', 'laplacian']
# perform a fast fourier transform on each filter
# and create a scaled, frequency transform image
f_filters = [np.fft.fft2(x) for x in filters]
fshift = [np.fft.fftshift(y) for y in f_filters]
frequency_tx = [np.log(np.abs(z)+1) for z in fshift]
# display 4 filters
for i in range(len(filters)):
plt.subplot(2,2,i+1),plt.imshow(frequency_tx[i],cmap = 'gray')
plt.title(filter_name[i]), plt.xticks([]), plt.yticks([])
plt.show()
```
Areas of white or light gray allow that part of the frequency spectrum through; areas of black mean that part of the spectrum is blocked out of the image.
Recall that the low frequencies in the frequency spectrum are at the center of the frequency transform image, and high frequencies are at the edges. You should see that the Gaussian filter allows only low frequencies through, which appear at the center of the frequency-transformed image. The Sobel filters block out frequencies of a certain orientation, and the Laplacian filter (which detects edges regardless of orientation) should block out low frequencies!
You are encouraged to load in an image, apply a filter to it using `cv2.filter2D`, then visualize what the Fourier transform of that image looks like before and after the filter is applied.
```
## TODO: load in an image, and filter it using a kernel of your choice
## apply a fourier transform to the original *and* filtered images and compare them
```
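Here is one possible sketch of that exercise, not an official solution: the image path `images/city_hall.jpg` is an assumption (substitute any image you have), and the imports from the cell above are repeated so this cell can run on its own.
```
# Sketch of the TODO above (assumed image path -- substitute your own file)
import numpy as np
import matplotlib.pyplot as plt
import cv2

# read in an image and convert it to grayscale, normalized to [0, 1]
image = cv2.imread('images/city_hall.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) / 255.0

# filter the image with a simple 9x9 averaging (low-pass) kernel
kernel = np.ones((9, 9)) / 81.0
filtered = cv2.filter2D(gray, -1, kernel)

def ft_image(img):
    # return the scaled magnitude spectrum of an image
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.log(np.abs(f) + 1)

# compare the frequency content before and after filtering
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(ft_image(gray), cmap='gray'); ax1.set_title('original FT')
ax2.imshow(ft_image(filtered), cmap='gray'); ax2.set_title('filtered FT')
plt.show()
```
With an averaging kernel you should see the outer (high-frequency) part of the spectrum dim after filtering; swapping in a Sobel kernel should show the opposite effect.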
```
!pip3 install torch torchnlp torchvision
import re
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, Embedding, Dropout
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import tensorflow_hub as hub
data = [["not sure", "no"],
["I may agree", "no"],
["I don't know", "no"],
["I have no clue", "no"],
["I think so", "no"],
["I agree", "no"],
["I'm not sure if I agree", "no"],
["I may agree", "no"],
["I might agree", "no"],
["I'm not sure", "no"],
["I don't have a clue", "no"],
["I don't have a clue", "no"],
["I have no clue", "no"],
["I have no idea", "no"],
["No idea", "no"],
["dunno", "no"],
["I've changed my mind", "no"],
["I don't agree", "no"],
["I do not agree", "no"],
["Absolutely not", "no"],
["belay that", "no"],
["cut it out", "no"],
["don't do anything", "no"],
["don't do it", "no"],
["forget it", "no"],
["forget about it", "no"],
["never mind", "no"],
["no thanks", "no"],
["no thank you", "no"],
["no way", "no"],
["on second thought, don't do it", "no"],
["please don't", "no"],
["scratch that", "no"],
["cancel", "no"],
["denied", "no"],
["disconnect", "no"],
["disengage", "no"],
["don't", "no"],
["end", "no"],
["exit", "no"],
["halt", "no"],
["n", "no"],
["q", "no"],
["nah", "no"],
["nay", "no"],
["neg", "no"],
["negative", "no"],
["negatory", "no"],
["nein", "no"],
["nevermind", "no"],
["no", "no"],
["no", "no"],
["nope", "no"],
["nope", "no"],
["nyet", "no"],
["skip", "no"],
["stop", "no"],
["stop", "no"],
["quit", "no"],
["quit", "no"],
["quit", "no"],
["Not now.", "no"],
["Look! Squirrel!", "no"],
["No thanks, I won’t be able to make it.", "no"],
["Not this time.", "no"],
["Heck no.", "no"],
["No way, Jose.", "no"],
["Regrettably, I'm not able to.", "no"],
["It's that time of the year when I must say no.", "no"],
["It's a Wednesday. I have a No on Wednesday policy.", "no"],
["Ask me in a year.", "no"],
["I know someone that might be a fit for that. I'll email you their information.", "no"],
["You're so kind to think of me, but I can't.", "no"],
["Sounds great, but I can’t commit.", "no"],
["Rats! Would’ve loved to.", "no"],
["I’m slammed.", "no"],
["Perhaps next season when things clear up.", "no"],
["I’m at the end of my rope right now so have to take a raincheck.", "no"],
["If only it worked.", "no"],
["I’ll need to bow out.", "no"],
["I’m going to have to exert my NO muscle on this one.", "no"],
["I’m taking some time.", "no"],
["I’m in a season of NO.", "no"],
["I’m not the girl for you on this one.", "no"],
["I’m learning to limit my commitments.", "no"],
["I’m not taking on new things.", "no"],
["Another time might work.", "no"],
["It doesn’t sound like the right fit.", "no"],
["No thank you, but it sounds lovely.", "no"],
["It sounds like you’re looking for something I’m not able to give right now.", "no"],
["I believe I wouldn’t fit the bill, sorry.", "no"],
["It’s not a good idea for me.", "no"],
["I’m trying to cut back.", "no"],
["I won’t be able to help.", "no"],
["If only I had a clone!", "no"],
["I’m not able to set aside the time needed.", "no"],
["I won’t be able to dedicate the time I need to it.", "no"],
["I’m head-down right now on a project, so won’t be able to.", "no"],
["I wish there were two of me!", "no"],
["I’m honored, but can’t.", "no"],
["NoNoNoNoNoNo.", "no"],
["I’m booked into something else.", "no"],
["I’m not able to make that time.", "no"],
["Thanks, but no thanks.", "no"],
["I’m not able to make it this week/month/year.", "no"],
["Bye now.", "no"],
["I’ve got too much on my plate right now.", "no"],
["I’m not taking on anything else right now.", "no"],
["Bandwidth is low, so I won’t be able to make it work.", "no"],
["I wish I could make it work.", "no"],
["Not possible.", "no"],
["I wish I were able to.", "no"],
["If only I could!", "no"],
["I’d love to — but can’t.", "no"],
["Darn! Not able to fit it in.", "no"],
["No thanks, I have another commitment.", "no"],
["Unfortunately, it’s not a good time.", "no"],
["Sadly I have something else.", "no"],
["Unfortunately not.", "no"],
["I have something else. Sorry.", "no"],
["Apologies, but I can’t make it.", "no"],
["Thank you so much for asking. Can you keep me on your list for next year?", "no"],
["I think so", "yes"],
["I agree", "yes"],
["I agree", "yes"],
["I agree", "yes"],
["I'm sure", "yes"],
["I'm sure", "yes"],
["aye aye", "yes"],
["aye", "yes"],
["carry on", "yes"],
["do it", "yes"],
["do it", "yes"],
["get on with it then", "yes"],
["go ahead", "yes"],
["i agree", "yes"],
["make it happen", "yes"],
["make it so", "yes"],
["most assuredly", "yes"],
["perfect, thanks", "yes"],
["please do", "yes"],
["rock on", "yes"],
["that's correct", "yes"],
["that's right", "yes"],
["uh huh", "yes"],
["yeah, do it", "yes"],
["yes, do it", "yes"],
["yes, please", "yes"],
["you got it", "yes"],
["absolutely", "yes"],
["yes, absolutely", "yes"],
["affirmative", "yes"],
["alright", "yes"],
["aye", "yes"],
["certainly", "yes"],
["confirmed", "yes"],
["continue", "yes"],
["correct", "yes"],
["da", "yes"],
["good", "yes"],
["hooray", "yes"],
["ja", "yes"],
["ok", "yes"],
["ok", "yes"],
["okay", "yes"],
["proceed", "yes"],
["righto", "yes"],
["sure", "yes"],
["sure", "yes"],
["sure", "yes"],
["sure", "yes"],
["thanks", "yes"],
["totally", "yes"],
["true", "yes"],
["y", "yes"],
["ya", "yes"],
["ya", "yes"],
["yay", "yes"],
["yea", "yes"],
["yeah", "yes"],
["yeah", "yes"],
["yep", "yes"],
["yeppers", "yes"],
["yes", "yes"],
["yes", "yes"],
["yes", "yes"],
["k!", "yes"],
["Agreed", "yes"],
["All right", "yes"],
["By all means","yes"],
["Certainly","yes"],
["Consider it done","yes"],
["Definitely","yes"],
["Gladly","yes"],
['I’m on it',"yes"],
["Of course","yes"],
["Sounds good","yes"],
["Very well","yes"],
["Absolutely","yes"],
["Indubitably","yes"],
["Indeed","yes"],
["Undoubtedly","yes"],
["Affirmative","yes"],
["I’d be delighted","yes"],
["For sure","yes"],
["I’d love to","yes"],
["No doubt","yes"],
["No problem","yes"],
["No worries","yes"],
["Roger","yes"],
["Roger that","yes"],
["Sounds like a plan","yes"],
["Sounds good","yes"],
["Without a doubt","yes"],
["Yep","yes"],
["You bet","yes"],
["You got it","yes"],
["Okey dokey", "yes"],
["Okaley dokaley", "yes"],
["Yuppers","yes"],
["Totes","yes"],
["You betcha","yes"],
["Alrighty then","yes"],
["Aye aye, captain!","yes"],
["Yeah, sure.","yes"],
["Uh-huh…","yes"],
["Yeah…","yes"],
["Uh… ok…","yes"],
["Yes!","yes"],
["Fine!","yes"],
["OK!","yes"]
]
df = pd.DataFrame(data,columns=['responses','sentiment'])
print(df.shape)
df
df['sentiment'].value_counts().plot.bar()
# Load Pretrained Word2Vec
#embed = hub.load("https://tfhub.dev/google/Wiki-words-250/2")
embed = hub.load("https://tfhub.dev/google/Wiki-words-500/2")
def get_max_length(df):
"""
get max token counts from train data,
so we use this number as fixed length input to RNN cell
"""
max_length = 0
for row in df['responses']:
if len(row.split(" ")) > max_length:
max_length = len(row.split(" "))
return max_length
def get_word2vec_enc(responses):
"""
get word2vec value for each word in sentence.
concatenate word in numpy array, so we can use it as RNN input
"""
encoded_reviews = []
for response in responses:
tokens = response.split(" ")
word2vec_embedding = embed(tokens)
encoded_reviews.append(word2vec_embedding)
return encoded_reviews
def get_padded_encoded_reviews(encoded_reviews):
"""
for short sentences, we prepend zero padding so all input to RNN has same length
"""
padded_reviews_encoding = []
for enc_review in encoded_reviews:
zero_padding_cnt = max_length - enc_review.shape[0]
pad = np.zeros((1, 500))
for i in range(zero_padding_cnt):
enc_review = np.concatenate((pad, enc_review), axis=0)
padded_reviews_encoding.append(enc_review)
return padded_reviews_encoding
def sentiment_encode(sentiment):
"""
return one hot encoding for Y value
"""
if sentiment == 'yes':
return [1,0]
else:
return [0,1]
def preprocess(df):
"""
encode text value to numeric value
"""
# encode words into word2vec
reviews = df['responses'].tolist()
encoded_reviews = get_word2vec_enc(reviews)
padded_encoded_reviews = get_padded_encoded_reviews(encoded_reviews)
# encoded sentiment
sentiments = df['sentiment'].tolist()
encoded_sentiment = [sentiment_encode(sentiment) for sentiment in sentiments]
X = np.array(padded_encoded_reviews)
y = np.array(encoded_sentiment)
return X, y
# max_length is used for max sequence of input
max_length = get_max_length(df)
train_X, train_Y = preprocess(df)
train_X.shape
train_Y.dtype
# LSTM model
model = Sequential()
model.add(LSTM(32))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='Adam',
metrics=['accuracy'])
print('Train...')
model.fit(train_X, train_Y,epochs=150)
model.summary()
test = [
{'responses': 'Yess!!', 'sentiment': 'yes'},
{'responses': 'Sure thing!', 'sentiment': 'yes'},
{'responses': 'Naah', 'sentiment': 'no'},
{'responses': 'Nay', 'sentiment': 'No'},
{'responses': 'ummmm let me think about it', 'sentiment': 'no'},
{'responses': 'I could but..', 'sentiment': 'no'}
]
test_df = pd.DataFrame(test)
test_X, test_Y = preprocess(test_df)
score, acc = model.evaluate(test_X, test_Y, verbose=2)
print('Test score:', score)
print('Test accuracy:', acc)
```
The above model uses the Keras library, which takes up a large amount of memory, and the test accuracy is only at 66%.
As an alternative to using up that much memory, the set of code below uses a pre-trained sentiment-analysis model from the transformers library, and hence doesn't take up additional memory space.
```
!pip install transformers
# Importing the library
from transformers import pipeline
# Calling the pre trained sentiment-analysis model
sentiment_analysis = pipeline("sentiment-analysis")
# Test data
pos_text = "Yeah would love to do this for you"
neg_text = "Hell no, I'm not talking to no damn bot"
# Test Results
result = sentiment_analysis(pos_text)[0]
print("Label:", result['label'])
print("Confidence Score:", result['score'])
print()
result = sentiment_analysis(neg_text)[0]
print("Label:", result['label'])
print("Confidence Score:", result['score'])
```
Even though the confidence scores are high, it's important to note that responses containing "F yeah" are categorized as negative, which is incorrect for this yes/no use case.
**Recommendation:** Try using GloVe and FastText embeddings for this model to see if you are able to get better accuracy scores. A rough, untested sketch of that idea follows.
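The sketch below is only an illustration of that recommendation, not code from this notebook: it assumes the `gensim` package and its downloadable `glove-wiki-gigaword-100` vectors (a FastText model loaded through gensim could be swapped in the same way), and the helper name `get_glove_enc` is made up here.
```
# Rough sketch: replace the TF-Hub Word2Vec embedding with GloVe vectors via gensim.
# The package, model name and helper below are assumptions, not part of this notebook.
import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # 100-dimensional GloVe vectors
EMB_DIM = 100

def get_glove_enc(responses):
    # encode each response as a (num_tokens, EMB_DIM) array of GloVe vectors,
    # falling back to a zero vector for out-of-vocabulary tokens
    encoded = []
    for response in responses:
        tokens = response.lower().split(" ")
        vectors = [glove[t] if t in glove else np.zeros(EMB_DIM) for t in tokens]
        encoded.append(np.array(vectors))
    return encoded

# get_glove_enc() would stand in for get_word2vec_enc() above; the zero-padding
# width in get_padded_encoded_reviews() would also change from 500 to EMB_DIM.
```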
# 100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.
If you find an error or think you have a better way to solve some of them, feel free to open an issue at <https://github.com/rougier/numpy-100>
#### 1. Import the numpy package under the name `np` (★☆☆)
```
import numpy as np
```
#### 2. Print the numpy version and the configuration (★☆☆)
```
print(np.__version__)
np.show_config()
```
#### 3. Create a null vector of size 10 (★☆☆)
```
Z = np.zeros(10)
print(Z)
```
#### 4. How to find the memory size of any array (★☆☆)
```
Z = np.zeros((10,10))
print("%d bytes" % (Z.size * Z.itemsize))
```
#### 5. How to get the documentation of the numpy add function from the command line? (★☆☆)
```
%run `python -c "import numpy; numpy.info(numpy.add)"`
```
#### 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
```
Z = np.zeros(10)
Z[4] = 1
print(Z)
```
#### 7. Create a vector with values ranging from 10 to 49 (★☆☆)
```
Z = np.arange(10,50)
print(Z)
```
#### 8. Reverse a vector (first element becomes last) (★☆☆)
```
Z = np.arange(50)
Z = Z[::-1]
print(Z)
```
#### 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
```
Z = np.arange(9).reshape(3,3)
print(Z)
```
#### 10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
```
nz = np.nonzero([1,2,0,0,4,0])
print(nz)
```
#### 11. Create a 3x3 identity matrix (★☆☆)
```
Z = np.eye(3)
print(Z)
```
#### 12. Create a 3x3x3 array with random values (★☆☆)
```
Z = np.random.random((3,3,3))
print(Z)
```
#### 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
```
Z = np.random.random((10,10))
Zmin, Zmax = Z.min(), Z.max()
print(Zmin, Zmax)
```
#### 14. Create a random vector of size 30 and find the mean value (★☆☆)
```
Z = np.random.random(30)
m = Z.mean()
print(m)
```
#### 15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
```
Z = np.ones((10,10))
Z[1:-1,1:-1] = 0
print(Z)
```
#### 16. How to add a border (filled with 0's) around an existing array? (★☆☆)
```
Z = np.ones((5,5))
Z = np.pad(Z, pad_width=1, mode='constant', constant_values=0)
print(Z)
```
#### 17. What is the result of the following expression? (★☆☆)
```
print(0 * np.nan)
print(np.nan == np.nan)
print(np.inf > np.nan)
print(np.nan - np.nan)
print(np.nan in set([np.nan]))
print(0.3 == 3 * 0.1)
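# For reference, these expressions evaluate to (in order):
# nan, False, False, nan, True, False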
```
#### 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
```
Z = np.diag(1+np.arange(4),k=-1)
print(Z)
```
#### 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
```
Z = np.zeros((8,8),dtype=int)
Z[1::2,::2] = 1
Z[::2,1::2] = 1
print(Z)
```
#### 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
```
print(np.unravel_index(99,(6,7,8)))  # the 100th element (1-based) has flat index 99, giving (1, 5, 3)
```
#### 21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
```
Z = np.tile( np.array([[0,1],[1,0]]), (4,4))
print(Z)
```
#### 22. Normalize a 5x5 random matrix (★☆☆)
```
Z = np.random.random((5,5))
Z = (Z - np.mean (Z)) / (np.std (Z))
print(Z)
```
#### 23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
```
color = np.dtype([("r", np.ubyte, 1),
("g", np.ubyte, 1),
("b", np.ubyte, 1),
("a", np.ubyte, 1)])
```
#### 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
```
Z = np.dot(np.ones((5,3)), np.ones((3,2)))
print(Z)
# Alternative solution, in Python 3.5 and above
Z = np.ones((5,3)) @ np.ones((3,2))
```
#### 25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
```
# Author: Evgeni Burovski
Z = np.arange(11)
Z[(3 < Z) & (Z <= 8)] *= -1
print(Z)
```
#### 26. What is the output of the following script? (★☆☆)
```
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
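# The first print gives 9: Python's built-in sum treats -1 as the start value.
# After "from numpy import *", sum is numpy's sum and -1 is the axis, so it prints 10.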
```
#### 27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)
```
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
Z<Z>Z
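# All of these expressions are legal except the last one:
# Z**Z, 2 << Z >> 2, Z <- Z (parsed as Z < (-Z)), 1j*Z and Z/1/1 all evaluate,
# while Z<Z>Z raises a ValueError (the chained comparison needs the truth value of an array).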
```
#### 28. What are the result of the following expressions?
```
print(np.array(0) / np.array(0))
print(np.array(0) // np.array(0))
print(np.array([np.nan]).astype(int).astype(float))
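# nan (with a RuntimeWarning), 0 (with a RuntimeWarning), and a meaningless
# platform-dependent value such as [-9.22337204e+18], since nan cannot be
# represented as an integer.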
```
#### 29. How to round away from zero a float array ? (★☆☆)
```
# Author: Charles R Harris
Z = np.random.uniform(-10,+10,10)
print (np.copysign(np.ceil(np.abs(Z)), Z))
```
#### 30. How to find common values between two arrays? (★☆☆)
```
Z1 = np.random.randint(0,10,10)
Z2 = np.random.randint(0,10,10)
print(np.intersect1d(Z1,Z2))
```
#### 31. How to ignore all numpy warnings (not recommended)? (★☆☆)
```
# Suicide mode on
defaults = np.seterr(all="ignore")
Z = np.ones(1) / 0
# Back to sanity
_ = np.seterr(**defaults)
# An equivalent way, with a context manager:
with np.errstate(divide='ignore'):
Z = np.ones(1) / 0
```
#### 32. Is the following expressions true? (★☆☆)
```
np.sqrt(-1) == np.emath.sqrt(-1)
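# No: np.sqrt(-1) returns nan (with a warning), while np.emath.sqrt(-1) returns 1j,
# so the comparison evaluates to False.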
```
#### 33. How to get the dates of yesterday, today and tomorrow? (★☆☆)
```
yesterday = np.datetime64('today', 'D') - np.timedelta64(1, 'D')
today = np.datetime64('today', 'D')
tomorrow = np.datetime64('today', 'D') + np.timedelta64(1, 'D')
```
#### 34. How to get all the dates corresponding to the month of July 2016? (★★☆)
```
Z = np.arange('2016-07', '2016-08', dtype='datetime64[D]')
print(Z)
```
#### 35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆)
```
A = np.ones(3)*1
B = np.ones(3)*2
C = np.ones(3)*3
np.add(A,B,out=B)
np.divide(A,2,out=A)
np.negative(A,out=A)
np.multiply(A,B,out=A)
```
#### 36. Extract the integer part of a random array using 5 different methods (★★☆)
```
Z = np.random.uniform(0,10,10)
print (Z - Z%1)
print (np.floor(Z))
print (np.ceil(Z)-1)
print (Z.astype(int))
print (np.trunc(Z))
```
#### 37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)
```
Z = np.zeros((5,5))
Z += np.arange(5)
print(Z)
```
#### 38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)
```
def generate():
for x in range(10):
yield x
Z = np.fromiter(generate(),dtype=float,count=-1)
print(Z)
```
#### 39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
```
Z = np.linspace(0,1,11,endpoint=False)[1:]
print(Z)
```
#### 40. Create a random vector of size 10 and sort it (★★☆)
```
Z = np.random.random(10)
Z.sort()
print(Z)
```
#### 41. How to sum a small array faster than np.sum? (★★☆)
```
# Author: Evgeni Burovski
Z = np.arange(10)
np.add.reduce(Z)
```
#### 42. Consider two random array A and B, check if they are equal (★★☆)
```
A = np.random.randint(0,2,5)
B = np.random.randint(0,2,5)
# Assuming identical shape of the arrays and a tolerance for the comparison of values
equal = np.allclose(A,B)
print(equal)
# Checking both the shape and the element values, no tolerance (values have to be exactly equal)
equal = np.array_equal(A,B)
print(equal)
```
#### 43. Make an array immutable (read-only) (★★☆)
```
Z = np.zeros(10)
Z.flags.writeable = False
Z[0] = 1  # raises ValueError: assignment destination is read-only
```
#### 44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)
```
Z = np.random.random((10,2))
X,Y = Z[:,0], Z[:,1]
R = np.sqrt(X**2+Y**2)
T = np.arctan2(Y,X)
print(R)
print(T)
```
#### 45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)
```
Z = np.random.random(10)
Z[Z.argmax()] = 0
print(Z)
```
#### 46. Create a structured array with `x` and `y` coordinates covering the \[0,1\]x\[0,1\] area (★★☆)
```
Z = np.zeros((5,5), [('x',float),('y',float)])
Z['x'], Z['y'] = np.meshgrid(np.linspace(0,1,5),
np.linspace(0,1,5))
print(Z)
```
#### 47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj))
```
# Author: Evgeni Burovski
X = np.arange(8)
Y = X + 0.5
C = 1.0 / np.subtract.outer(X, Y)
print(np.linalg.det(C))
```
#### 48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)
```
for dtype in [np.int8, np.int32, np.int64]:
print(np.iinfo(dtype).min)
print(np.iinfo(dtype).max)
for dtype in [np.float32, np.float64]:
print(np.finfo(dtype).min)
print(np.finfo(dtype).max)
print(np.finfo(dtype).eps)
```
#### 49. How to print all the values of an array? (★★☆)
```
np.set_printoptions(threshold=float("inf"))
Z = np.zeros((16,16))
print(Z)
```
#### 50. How to find the closest value (to a given scalar) in a vector? (★★☆)
```
Z = np.arange(100)
v = np.random.uniform(0,100)
index = (np.abs(Z-v)).argmin()
print(Z[index])
```
#### 51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)
```
Z = np.zeros(10, [ ('position', [ ('x', float),
                                  ('y', float)]),
                   ('color',    [ ('r', float),
                                  ('g', float),
                                  ('b', float)])])
print(Z)
```
#### 52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)
```
Z = np.random.random((10,2))
X,Y = np.atleast_2d(Z[:,0], Z[:,1])
D = np.sqrt( (X-X.T)**2 + (Y-Y.T)**2)
print(D)
# Much faster with scipy
import scipy
# Thanks Gavin Heverly-Coulson (#issue 1)
import scipy.spatial
Z = np.random.random((10,2))
D = scipy.spatial.distance.cdist(Z,Z)
print(D)
```
#### 53. How to convert a float (32 bits) array into an integer (32 bits) in place?
```
Z = np.arange(10, dtype=np.float32)
Z = Z.astype(np.int32, copy=False)
print(Z)
```
#### 54. How to read the following file? (★★☆)
```
from io import StringIO
# Fake file
s = StringIO("""1, 2, 3, 4, 5\n
6, , , 7, 8\n
, , 9,10,11\n""")
Z = np.genfromtxt(s, delimiter=",", dtype=np.int)
print(Z)
```
#### 55. What is the equivalent of enumerate for numpy arrays? (★★☆)
```
Z = np.arange(9).reshape(3,3)
for index, value in np.ndenumerate(Z):
print(index, value)
for index in np.ndindex(Z.shape):
print(index, Z[index])
```
#### 56. Generate a generic 2D Gaussian-like array (★★☆)
```
X, Y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10))
D = np.sqrt(X*X+Y*Y)
sigma, mu = 1.0, 0.0
G = np.exp(-( (D-mu)**2 / ( 2.0 * sigma**2 ) ) )
print(G)
```
#### 57. How to randomly place p elements in a 2D array? (★★☆)
```
# Author: Divakar
n = 10
p = 3
Z = np.zeros((n,n))
np.put(Z, np.random.choice(range(n*n), p, replace=False),1)
print(Z)
```
#### 58. Subtract the mean of each row of a matrix (★★☆)
```
# Author: Warren Weckesser
X = np.random.rand(5, 10)
# Recent versions of numpy
Y = X - X.mean(axis=1, keepdims=True)
# Older versions of numpy
Y = X - X.mean(axis=1).reshape(-1, 1)
print(Y)
```
#### 59. How to sort an array by the nth column? (★★☆)
```
# Author: Steve Tjoa
Z = np.random.randint(0,10,(3,3))
print(Z)
print(Z[Z[:,1].argsort()])
```
#### 60. How to tell if a given 2D array has null columns? (★★☆)
```
# Author: Warren Weckesser
Z = np.random.randint(0,3,(3,10))
print((~Z.any(axis=0)).any())
```
#### 61. Find the nearest value from a given value in an array (★★☆)
```
Z = np.random.uniform(0,1,10)
z = 0.5
m = Z.flat[np.abs(Z - z).argmin()]
print(m)
```
#### 62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)
```
A = np.arange(3).reshape(3,1)
B = np.arange(3).reshape(1,3)
it = np.nditer([A,B,None])
for x,y,z in it: z[...] = x + y
print(it.operands[2])
```
#### 63. Create an array class that has a name attribute (★★☆)
```
class NamedArray(np.ndarray):
def __new__(cls, array, name="no name"):
obj = np.asarray(array).view(cls)
obj.name = name
return obj
def __array_finalize__(self, obj):
if obj is None: return
self.info = getattr(obj, 'name', "no name")
Z = NamedArray(np.arange(10), "range_10")
print (Z.name)
```
#### 64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)
```
# Author: Brett Olsen
Z = np.ones(10)
I = np.random.randint(0,len(Z),20)
Z += np.bincount(I, minlength=len(Z))
print(Z)
# Another solution
# Author: Bartosz Telenczuk
np.add.at(Z, I, 1)
print(Z)
```
#### 65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)
```
# Author: Alan G Isaac
X = [1,2,3,4,5,6]
I = [1,3,9,3,4,1]
F = np.bincount(I,X)
print(F)
```
#### 66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★★)
```
# Author: Nadav Horesh
w,h = 16,16
I = np.random.randint(0,2,(h,w,3)).astype(np.ubyte)
# Note that we should compute 256*256 first.
# Otherwise numpy will only promote F.dtype to 'uint16' and overflow will occur
F = I[...,0]*(256*256) + I[...,1]*256 +I[...,2]
n = len(np.unique(F))
print(n)
```
#### 67. Considering a four dimensions array, how to get sum over the last two axis at once? (★★★)
```
A = np.random.randint(0,10,(3,4,3,4))
# solution by passing a tuple of axes (introduced in numpy 1.7.0)
sum = A.sum(axis=(-2,-1))
print(sum)
# solution by flattening the last two dimensions into one
# (useful for functions that don't accept tuples for axis argument)
sum = A.reshape(A.shape[:-2] + (-1,)).sum(axis=-1)
print(sum)
```
#### 68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)
```
# Author: Jaime Fernández del Río
D = np.random.uniform(0,1,100)
S = np.random.randint(0,10,100)
D_sums = np.bincount(S, weights=D)
D_counts = np.bincount(S)
D_means = D_sums / D_counts
print(D_means)
# Pandas solution as a reference due to more intuitive code
import pandas as pd
print(pd.Series(D).groupby(S).mean())
```
#### 69. How to get the diagonal of a dot product? (★★★)
```
# Author: Mathieu Blondel
A = np.random.uniform(0,1,(5,5))
B = np.random.uniform(0,1,(5,5))
# Slow version
np.diag(np.dot(A, B))
# Fast version
np.sum(A * B.T, axis=1)
# Faster version
np.einsum("ij,ji->i", A, B)
```
#### 70. Consider the vector \[1, 2, 3, 4, 5\], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★)
```
# Author: Warren Weckesser
Z = np.array([1,2,3,4,5])
nz = 3
Z0 = np.zeros(len(Z) + (len(Z)-1)*(nz))
Z0[::nz+1] = Z
print(Z0)
```
#### 71. Consider an array of dimension (5,5,3), how to multiply it by an array with dimensions (5,5)? (★★★)
```
A = np.ones((5,5,3))
B = 2*np.ones((5,5))
print(A * B[:,:,None])
```
#### 72. How to swap two rows of an array? (★★★)
```
# Author: Eelco Hoogendoorn
A = np.arange(25).reshape(5,5)
A[[0,1]] = A[[1,0]]
print(A)
```
#### 73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★)
```
# Author: Nicolas P. Rougier
faces = np.random.randint(0,100,(10,3))
F = np.roll(faces.repeat(2,axis=1),-1,axis=1)
F = F.reshape(len(F)*3,2)
F = np.sort(F,axis=1)
G = F.view( dtype=[('p0',F.dtype),('p1',F.dtype)] )
G = np.unique(G)
print(G)
```
#### 74. Given an array C that is a bincount, how to produce an array A such that np.bincount(A) == C? (★★★)
```
# Author: Jaime Fernández del Río
C = np.bincount([1,1,2,3,4,4,6])
A = np.repeat(np.arange(len(C)), C)
print(A)
```
#### 75. How to compute averages using a sliding window over an array? (★★★)
```
# Author: Jaime Fernández del Río
def moving_average(a, n=3) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
Z = np.arange(20)
print(moving_average(Z, n=3))
```
#### 76. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z\[0\],Z\[1\],Z\[2\]) and each subsequent row is shifted by 1 (last row should be (Z\[-3\],Z\[-2\],Z\[-1\])) (★★★)
```
# Author: Joe Kington / Erik Rigtorp
from numpy.lib import stride_tricks
def rolling(a, window):
shape = (a.size - window + 1, window)
strides = (a.itemsize, a.itemsize)
return stride_tricks.as_strided(a, shape=shape, strides=strides)
Z = rolling(np.arange(10), 3)
print(Z)
```
#### 77. How to negate a boolean, or to change the sign of a float in place? (★★★)
```
# Author: Nathaniel J. Smith
Z = np.random.randint(0,2,100)
np.logical_not(Z, out=Z)
Z = np.random.uniform(-1.0,1.0,100)
np.negative(Z, out=Z)
```
#### 78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0\[i\],P1\[i\])? (★★★)
```
def distance(P0, P1, p):
T = P1 - P0
L = (T**2).sum(axis=1)
U = -((P0[:,0]-p[...,0])*T[:,0] + (P0[:,1]-p[...,1])*T[:,1]) / L
U = U.reshape(len(U),1)
D = P0 + U*T - p
return np.sqrt((D**2).sum(axis=1))
P0 = np.random.uniform(-10,10,(10,2))
P1 = np.random.uniform(-10,10,(10,2))
p = np.random.uniform(-10,10,( 1,2))
print(distance(P0, P1, p))
```
#### 79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P\[j\]) to each line i (P0\[i\],P1\[i\])? (★★★)
```
# Author: Italmassov Kuanysh
# based on distance function from previous question
P0 = np.random.uniform(-10, 10, (10,2))
P1 = np.random.uniform(-10,10,(10,2))
p = np.random.uniform(-10, 10, (10,2))
print(np.array([distance(P0,P1,p_i) for p_i in p]))
```
#### 80. Consider an arbitrary array, write a function that extracts a subpart with a fixed shape centered on a given element (pad with a `fill` value when necessary) (★★★)
```
# Author: Nicolas Rougier
Z = np.random.randint(0,10,(10,10))
shape = (5,5)
fill = 0
position = (1,1)
R = np.ones(shape, dtype=Z.dtype)*fill
P = np.array(list(position)).astype(int)
Rs = np.array(list(R.shape)).astype(int)
Zs = np.array(list(Z.shape)).astype(int)
R_start = np.zeros((len(shape),)).astype(int)
R_stop = np.array(list(shape)).astype(int)
Z_start = (P-Rs//2)
Z_stop = (P+Rs//2)+Rs%2
R_start = (R_start - np.minimum(Z_start,0)).tolist()
Z_start = (np.maximum(Z_start,0)).tolist()
R_stop = np.maximum(R_start, (R_stop - np.maximum(Z_stop-Zs,0))).tolist()
Z_stop = (np.minimum(Z_stop,Zs)).tolist()
# build index tuples (indexing with a plain list of slices is no longer supported by numpy)
r = tuple(slice(start,stop) for start,stop in zip(R_start,R_stop))
z = tuple(slice(start,stop) for start,stop in zip(Z_start,Z_stop))
R[r] = Z[z]
print(Z)
print(R)
```
#### 81. Consider an array Z = \[1,2,3,4,5,6,7,8,9,10,11,12,13,14\], how to generate an array R = \[\[1,2,3,4\], \[2,3,4,5\], \[3,4,5,6\], ..., \[11,12,13,14\]\]? (★★★)
```
# Author: Stefan van der Walt
Z = np.arange(1,15,dtype=np.uint32)
R = stride_tricks.as_strided(Z,(11,4),(4,4))
print(R)
```
#### 82. Compute a matrix rank (★★★)
```
# Author: Stefan van der Walt
Z = np.random.uniform(0,1,(10,10))
U, S, V = np.linalg.svd(Z) # Singular Value Decomposition
rank = np.sum(S > 1e-10)
print(rank)
```
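NumPy also has a built-in for this; assuming its default tolerance is acceptable, it should agree with the SVD-based count above:
```
# built-in alternative (uses an SVD internally with an automatic tolerance)
print(np.linalg.matrix_rank(Z))
```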
#### 83. How to find the most frequent value in an array?
```
Z = np.random.randint(0,10,50)
print(np.bincount(Z).argmax())
```
#### 84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★)
```
# Author: Chris Barker
Z = np.random.randint(0,5,(10,10))
n = 3
i = 1 + (Z.shape[0]-3)
j = 1 + (Z.shape[1]-3)
C = stride_tricks.as_strided(Z, shape=(i, j, n, n), strides=Z.strides + Z.strides)
print(C)
```
#### 85. Create a 2D array subclass such that Z\[i,j\] == Z\[j,i\] (★★★)
```
# Author: Eric O. Lebigot
# Note: only works for 2d array and value setting using indices
class Symetric(np.ndarray):
def __setitem__(self, index, value):
i,j = index
super(Symetric, self).__setitem__((i,j), value)
super(Symetric, self).__setitem__((j,i), value)
def symetric(Z):
return np.asarray(Z + Z.T - np.diag(Z.diagonal())).view(Symetric)
S = symetric(np.random.randint(0,10,(5,5)))
S[2,3] = 42
print(S)
```
#### 86. Consider a set of p matrices with shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of the p matrix products at once? (result has shape (n,1)) (★★★)
```
# Author: Stefan van der Walt
p, n = 10, 20
M = np.ones((p,n,n))
V = np.ones((p,n,1))
S = np.tensordot(M, V, axes=[[0, 2], [0, 1]])
print(S)
# It works, because:
# M is (p,n,n)
# V is (p,n,1)
# Thus, summing over the paired axes 0 and 0 (of M and V independently),
# and 2 and 1, to remain with a (n,1) vector.
```
#### 87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★)
```
# Author: Robert Kern
Z = np.ones((16,16))
k = 4
S = np.add.reduceat(np.add.reduceat(Z, np.arange(0, Z.shape[0], k), axis=0),
np.arange(0, Z.shape[1], k), axis=1)
print(S)
```
#### 88. How to implement the Game of Life using numpy arrays? (★★★)
```
# Author: Nicolas Rougier
def iterate(Z):
# Count neighbours
N = (Z[0:-2,0:-2] + Z[0:-2,1:-1] + Z[0:-2,2:] +
Z[1:-1,0:-2] + Z[1:-1,2:] +
Z[2: ,0:-2] + Z[2: ,1:-1] + Z[2: ,2:])
# Apply rules
birth = (N==3) & (Z[1:-1,1:-1]==0)
survive = ((N==2) | (N==3)) & (Z[1:-1,1:-1]==1)
Z[...] = 0
Z[1:-1,1:-1][birth | survive] = 1
return Z
Z = np.random.randint(0,2,(50,50))
for i in range(100): Z = iterate(Z)
print(Z)
```
#### 89. How to get the n largest values of an array (★★★)
```
Z = np.arange(10000)
np.random.shuffle(Z)
n = 5
# Slow
print (Z[np.argsort(Z)[-n:]])
# Fast
print (Z[np.argpartition(-Z,n)[:n]])
```
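Note that `np.argpartition` does not order the selected values; if the top n are needed in descending order, sorting just those n elements is cheap:
```
# top-n values in descending order: partition first, then sort only the n selected values
top = Z[np.argpartition(-Z, n)[:n]]
print(np.sort(top)[::-1])
```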
#### 90. Given an arbitrary number of vectors, build the cartesian product (every combination of every item) (★★★)
```
# Author: Stefan Van der Walt
def cartesian(arrays):
arrays = [np.asarray(a) for a in arrays]
shape = (len(x) for x in arrays)
ix = np.indices(shape, dtype=int)
ix = ix.reshape(len(arrays), -1).T
for n, arr in enumerate(arrays):
ix[:, n] = arrays[n][ix[:, n]]
return ix
print (cartesian(([1, 2, 3], [4, 5], [6, 7])))
```
#### 91. How to create a record array from a regular array? (★★★)
```
Z = np.array([("Hello", 2.5, 3),
("World", 3.6, 2)])
R = np.core.records.fromarrays(Z.T,
names='col1, col2, col3',
formats = 'S8, f8, i8')
print(R)
```
#### 92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)
```
# Author: Ryan G.
x = np.random.rand(int(5e7))  # rand needs an integer size
%timeit np.power(x,3)
%timeit x*x*x
%timeit np.einsum('i,i,i->i',x,x,x)
```
#### 93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)
```
# Author: Gabe Schwartz
A = np.random.randint(0,5,(8,3))
B = np.random.randint(0,5,(2,2))
C = (A[..., np.newaxis, np.newaxis] == B)
rows = np.where(C.any((3,1)).all(1))[0]
print(rows)
```
#### 94. Considering a 10x3 matrix, extract rows with unequal values (e.g. \[2,2,3\]) (★★★)
```
# Author: Robert Kern
Z = np.random.randint(0,5,(10,3))
print(Z)
# solution for arrays of all dtypes (including string arrays and record arrays)
E = np.all(Z[:,1:] == Z[:,:-1], axis=1)
U = Z[~E]
print(U)
# solution for numerical arrays only; will work for any number of columns in Z
U = Z[Z.max(axis=1) != Z.min(axis=1),:]
print(U)
```
#### 95. Convert a vector of ints into a matrix binary representation (★★★)
```
# Author: Warren Weckesser
I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128])
B = ((I.reshape(-1,1) & (2**np.arange(8))) != 0).astype(int)
print(B[:,::-1])
# Author: Daniel T. McDonald
I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128], dtype=np.uint8)
print(np.unpackbits(I[:, np.newaxis], axis=1))
```
#### 96. Given a two dimensional array, how to extract unique rows? (★★★)
```
# Author: Jaime Fernández del Río
Z = np.random.randint(0,2,(6,3))
T = np.ascontiguousarray(Z).view(np.dtype((np.void, Z.dtype.itemsize * Z.shape[1])))
_, idx = np.unique(T, return_index=True)
uZ = Z[idx]
print(uZ)
# Author: Andreas Kouzelis
# NumPy >= 1.13
uZ = np.unique(Z, axis=0)
print(uZ)
```
#### 97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★)
```
# Author: Alex Riley
# Make sure to read: http://ajcr.net/Basic-guide-to-einsum/
A = np.random.uniform(0,1,10)
B = np.random.uniform(0,1,10)
np.einsum('i->', A) # np.sum(A)
np.einsum('i,i->i', A, B) # A * B
np.einsum('i,i', A, B) # np.inner(A, B)
np.einsum('i,j->ij', A, B) # np.outer(A, B)
```
#### 98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)?
```
# Author: Bas Swinckels
phi = np.arange(0, 10*np.pi, 0.1)
a = 1
x = a*phi*np.cos(phi)
y = a*phi*np.sin(phi)
dr = (np.diff(x)**2 + np.diff(y)**2)**.5 # segment lengths
r = np.zeros_like(x)
r[1:] = np.cumsum(dr) # integrate path
r_int = np.linspace(0, r.max(), 200) # regular spaced path
x_int = np.interp(r_int, r, x)       # interpolate path
y_int = np.interp(r_int, r, y)
```
#### 99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★)
```
# Author: Evgeni Burovski
X = np.asarray([[1.0, 0.0, 3.0, 8.0],
[2.0, 0.0, 1.0, 1.0],
[1.5, 2.5, 1.0, 0.0]])
n = 4
M = np.logical_and.reduce(np.mod(X, 1) == 0, axis=-1)
M &= (X.sum(axis=-1) == n)
print(X[M])
```
#### 100. Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★)
```
# Author: Jessica B. Hamrick
X = np.random.randn(100) # random 1D array
N = 1000 # number of bootstrap samples
idx = np.random.randint(0, X.size, (N, X.size))
means = X[idx].mean(axis=1)
confint = np.percentile(means, [2.5, 97.5])
print(confint)
```
```
!apt-get install p7zip
!p7zip -d -f -k ../input/mercari-price-suggestion-challenge/train.tsv.7z
!unzip -o ../input/mercari-price-suggestion-challenge/sample_submission_stg2.csv.zip
!unzip -o ../input/mercari-price-suggestion-challenge/test_stg2.tsv.zip
!p7zip -d -f -k ../input/mercari-price-suggestion-challenge/test.tsv.7z
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold
train = pd.read_csv('train.tsv', sep='\t').sample(frac=0.015).reset_index()
kf = KFold(n_splits=3, random_state=1001,shuffle=True)
for i, (train_index, val_index) in enumerate(kf.split(train)):
trn= train.iloc[train_index].reset_index()
val= train.iloc[val_index].reset_index()
trn = trn.drop(columns=['index'])
val = val.drop(columns=['index'])
val.to_csv('sub_val.csv',index=False)
trn.to_csv('sub_train.csv',index=False)
# !rm -r ./autox
!git clone https://github.com/4paradigm/autox.git
!pip install ./autox
from autox.autox_nlp import NLP_feature
import pandas as pd
import numpy as np
import os
from tqdm import tqdm
df_train = pd.read_csv('sub_train.csv')
df_test = pd.read_csv('sub_val.csv')
use_Toknizer=True
emb_mode = 'Bert'# TFIDF / Word2Vec / Glove / FastText / Bert
encode_mode = 'supervise' # unsupervise / supervise
text_columns_name = ['name','category_name','item_description']
target_column = df_train['price']
candidate_labels=None
nlp = NLP_feature()
# nlp.do_mlm = True
# nlp.mlm_epochs=3
# nlp.model_name = 'microsoft/deberta-v3-base'
nlp.emb_size=100
nlp.n_clusters=20
df = nlp.fit(df_train,
text_columns_name,
use_Toknizer,
emb_mode,
encode_mode,
target_column,
candidate_labels)
for column in df.columns:
df_train[column] = df[column]
df_train = df_train.drop(columns=text_columns_name)
test = nlp.transform(df_test)
for column in test.columns:
df_test[column] = test[column]
df_test = df_test.drop(columns=text_columns_name)
df_train.to_csv(f'{emb_mode}_{encode_mode}_autox_trn.csv',index=False)
df_test.to_csv(f'{emb_mode}_{encode_mode}_autox_val.csv',index=False)
df_val=pd.read_csv(f'{emb_mode}_{encode_mode}_autox_val.csv').drop(columns=['price'])
df_val.to_csv(f'{emb_mode}_{encode_mode}_autox_tst.csv',index=False)
from autox import AutoX
path = f'.'
autox = AutoX(target = 'price', train_name = f'{emb_mode}_{encode_mode}_autox_trn.csv', test_name = f'{emb_mode}_{encode_mode}_autox_tst.csv', id = [], path = path)
sub = autox.get_submit()
val = pd.read_csv(f'sub_val.csv')
from sklearn.metrics import mean_squared_error
RMSE = np.sqrt(mean_squared_error(val['price'], sub['price']))
RMSE
```
# Wavelets in Jupyter Notebooks
> A notebook to show off the power of fastpages and jupyter.
- toc: false
- branch: master
- badges: true
- comments: true
- categories: [wavelets, jupyter]
- image: images/some_folder/your_image.png
- hide: false
- search_exclude: true
- metadata_key1: metadata_value1
- metadata_key2: metadata_value2
This is a notebook cobbled together from information and code from the following sources:
* [pywavelets](https://github.com/PyWavelets/pywt)
* [Ahmet Taspinar's guide for using wavelet in ML](http://ataspinar.com/2018/12/21/a-guide-for-using-the-wavelet-transform-in-machine-learning/)
* [Ahmet's example visualization using scaleogram](https://github.com/taspinar/siml/blob/master/notebooks/WV2%20-%20Visualizing%20the%20Scaleogram%2C%20time-axis%20and%20Fourier%20Transform.ipynb)
* [Alexander Sauve's introduction to wavelet for EDA](https://www.kaggle.com/asauve/a-gentle-introduction-to-wavelet-for-data-analysis/execution)
## Why?
This notebook was created from the links above to test how fastpages handles a combination of data and images within a notebook when that notebook is converted for easy web viewing by Jekyll.
The above authors' code seemed like a good dry run and test of fastpages' conversion from Jupyter notebook to blog post.
```
# This is a comment
import numpy as np
import pandas as pd
from scipy.fftpack import fft
import matplotlib.pyplot as plt
import pywt
def plot_wavelet(time, signal, scales,
# waveletname = 'cmor1.5-1.0',
waveletname = 'gaus5',
cmap = plt.cm.seismic,
title = 'Wavelet Transform (Power Spectrum) of signal',
ylabel = 'Period (years)',
xlabel = 'Time'):
dt = time[1] - time[0]
[coefficients, frequencies] = pywt.cwt(signal, scales, waveletname, dt)
power = (abs(coefficients)) ** 2
period = 1. / frequencies
levels = [0.0625, 0.125, 0.25, 0.5, 1, 2, 4, 8]
contourlevels = np.log2(levels)
fig, ax = plt.subplots(figsize=(15, 10))
im = ax.contourf(time, np.log2(period), np.log2(power), contourlevels, extend='both',cmap=cmap)
ax.set_title(title, fontsize=20)
ax.set_ylabel(ylabel, fontsize=18)
ax.set_xlabel(xlabel, fontsize=18)
yticks = 2**np.arange(np.ceil(np.log2(period.min())), np.ceil(np.log2(period.max())))
ax.set_yticks(np.log2(yticks))
ax.set_yticklabels(yticks)
ax.invert_yaxis()
ylim = ax.get_ylim()
ax.set_ylim(ylim[0], -1)
cbar_ax = fig.add_axes([0.95, 0.5, 0.03, 0.25])
fig.colorbar(im, cax=cbar_ax, orientation="vertical")
plt.show()
def get_ave_values(xvalues, yvalues, n = 5):
signal_length = len(xvalues)
if signal_length % n == 0:
padding_length = 0
else:
padding_length = n - signal_length//n % n
xarr = np.array(xvalues)
yarr = np.array(yvalues)
xarr.resize(signal_length//n, n)
yarr.resize(signal_length//n, n)
xarr_reshaped = xarr.reshape((-1,n))
yarr_reshaped = yarr.reshape((-1,n))
x_ave = xarr_reshaped[:,0]
y_ave = np.nanmean(yarr_reshaped, axis=1)
return x_ave, y_ave
def plot_signal_plus_average(time, signal, average_over = 5):
fig, ax = plt.subplots(figsize=(15, 3))
time_ave, signal_ave = get_ave_values(time, signal, average_over)
ax.plot(time, signal, label='signal')
ax.plot(time_ave, signal_ave, label = 'time average (n={})'.format(5))
ax.set_xlim([time[0], time[-1]])
ax.set_ylabel('Signal Amplitude', fontsize=18)
ax.set_title('Signal + Time Average', fontsize=18)
ax.set_xlabel('Time', fontsize=18)
ax.legend()
plt.show()
def get_fft_values(y_values, T, N, f_s):
f_values = np.linspace(0.0, 1.0/(2.0*T), N//2)
fft_values_ = fft(y_values)
fft_values = 2.0/N * np.abs(fft_values_[0:N//2])
return f_values, fft_values
def plot_fft_plus_power(time, signal):
dt = time[1] - time[0]
N = len(signal)
fs = 1/dt
fig, ax = plt.subplots(figsize=(15, 3))
variance = np.std(signal)**2
f_values, fft_values = get_fft_values(signal, dt, N, fs)
fft_power = variance * abs(fft_values) ** 2 # FFT power spectrum
ax.plot(f_values, fft_values, 'r-', label='Fourier Transform')
ax.plot(f_values, fft_power, 'k--', linewidth=1, label='FFT Power Spectrum')
ax.set_xlabel('Frequency [Hz / year]', fontsize=18)
ax.set_ylabel('Amplitude', fontsize=18)
ax.legend()
plt.show()
dataset = "http://paos.colorado.edu/research/wavelets/wave_idl/sst_nino3.dat"
df_nino = pd.read_table(dataset)
N = df_nino.shape[0]
t0=1871
dt=0.25
time = np.arange(0, N) * dt + t0
signal = df_nino.values.squeeze()
scales = np.arange(1, 128)
plot_signal_plus_average(time, signal)
plot_fft_plus_power(time, signal)
plot_wavelet(time, signal, scales)
# Create some fake data sets and show their fourier transforms (fft).
t_n = 1
N = 100000
T = t_n / N
f_s = 1/T
xa = np.linspace(0, t_n, num=int(N))
xb = np.linspace(0, t_n/4, num=int(N/4))
frequencies = [4, 30, 60, 90]
y1a, y1b = np.sin(2*np.pi*frequencies[0]*xa), np.sin(2*np.pi*frequencies[0]*xb)
y2a, y2b = np.sin(2*np.pi*frequencies[1]*xa), np.sin(2*np.pi*frequencies[1]*xb)
y3a, y3b = np.sin(2*np.pi*frequencies[2]*xa), np.sin(2*np.pi*frequencies[2]*xb)
y4a, y4b = np.sin(2*np.pi*frequencies[3]*xa), np.sin(2*np.pi*frequencies[3]*xb)
composite_signal1 = y1a + y2a + y3a + y4a
composite_signal2 = np.concatenate([y1b, y2b, y3b, y4b])
f_values1, fft_values1 = get_fft_values(composite_signal1, T, N, f_s)
f_values2, fft_values2 = get_fft_values(composite_signal2, T, N, f_s)
fig, axarr = plt.subplots(nrows=2, ncols=2, figsize=(12,8))
axarr[0,0].plot(xa, composite_signal1)
axarr[1,0].plot(xa, composite_signal2)
axarr[0,1].plot(f_values1, fft_values1)
axarr[1,1].plot(f_values2, fft_values2)
axarr[0,1].set_xlim(0, 150)
axarr[1,1].set_xlim(0, 150)
plt.tight_layout()
plt.show()
# The El Nino Dataset
df_nino
df_nino.describe()
df_nino.hist()
```
<a href="https://colab.research.google.com/github/paulowe/ml-lambda/blob/main/colab-train1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Import packages
```
import sklearn
import pandas as pd
import numpy as np
import csv as csv
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report
from sklearn import metrics
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn releases; import joblib directly
from sklearn.preprocessing import label_binarize
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import roc_auc_score
```
- Verify you are running version 0.23.1 of sklearn or higher. Some of the metrics used for model evaluation only work from that version on.
- Run `pip install --upgrade scikit-learn` to upgrade sklearn
```
sklearn.__version__
```
## Import Data
X - all training examples
y - all true labels
```
data = pd.read_csv('./syntheticData.csv')
X, y = data.iloc[:, 1:], data.iloc[:,0]
```
## Visualize Data
(80100 × 377) training matrix
(80100 × 1) label vector (801 distinct classes)
```
print(X.head())
print(X.shape)
print(y.head())
print(y.shape)
```
## Split into training, cross validation and test sets
- Shuffle dataset
- Perform Split (60-20-20)
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.40, stratify=y)
X_cv, X_test, y_cv, y_test = train_test_split(X_test, y_test, test_size=0.5, stratify=y_test)
print("Training data dimensions")
print(X_train.shape)
print(y_train.shape)
print("Cross validation data dimensions")
print(X_cv.shape)
print(y_cv.shape)
print("Test data dimensions")
print(X_test.shape)
print(y_test.shape)
```
## Train default MLP Classifier
```
clf = MLPClassifier()
clf = clf.fit(X_train, y_train)
```
## Training Variant: Bottom Up implementation
In this variant I will implement a classifier equivalent to the one trained above. The objective here is to expose the underlying components of the training process and allow direct optimization and monitoring.
- Random initialization for weights
- Feedforward Propagation - Prediction function
- Neural Network Cost Function
- Backpropagation
- Sigmoid Gradient
### Random initialization
Select values for $\Theta^{(l)}$ uniformly in the range $[-\epsilon_{init} , \epsilon_{init}]$
One effective strategy for choosing $\epsilon_{init}$ is to base it on the number of units in the network
$\epsilon_{init} = \frac{\sqrt{6}}{\sqrt{L_{in} + L_{out}}}$
```
def randInitializeWeights(L_in, L_out):
"""
randomly initializes the weights of a layer with L_in incoming connections and L_out outgoing connections.
"""
    epi = np.sqrt(6) / np.sqrt(L_in + L_out)
    W = np.random.rand(L_out, L_in + 1) * (2 * epi) - epi
return W
```
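As a quick sanity check on this implementation (a sketch, assuming the 400-input, 25-hidden layer sizes used below), every weight should land inside $[-\epsilon_{init}, \epsilon_{init}]$ and the matrix should include an extra column for the bias unit:
```
# verify the shape (one column per input plus the bias) and the weight range
W_check = randInitializeWeights(400, 25)
eps_check = np.sqrt(6) / np.sqrt(400 + 25)
print(W_check.shape)                                             # (25, 401)
print(np.all((W_check >= -eps_check) & (W_check <= eps_check)))  # True
```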
Initialize Theta Vectors
Here we will randomly initialize theta vectors for each layer
```
input_layer_size = 400
hidden_layer_size = 25
num_labels = 801
Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size)
Theta2 = randInitializeWeights(hidden_layer_size, num_labels)
nn_params = np.append(Theta1.flatten(),Theta2.flatten())
def sigmoid(z):
    """
    computes the sigmoid of z (needed by predict and nnCostFunction below)
    """
    return 1/(1 + np.exp(-z))
def sigmoidGradient(z):
    """
    computes the gradient of the sigmoid function
    """
    s = sigmoid(z)
    return s * (1 - s)
def predict(Theta1, Theta2, X):
"""
Predict the label of an input given a trained neural network
"""
m= X.shape[0]
X = np.hstack((np.ones((m,1)),X))
a1 = sigmoid(X @ Theta1.T)
a1 = np.hstack((np.ones((m,1)), a1)) # hidden layer
a2 = sigmoid(a1 @ Theta2.T) # output layer
#find out why its +1
return np.argmax(a2,axis=1)+1
pred = predict(Theta1, Theta2, X)
# numEx is the number of examples in the training set
numEx = X.shape[0]
print("Training Set Accuracy:", np.sum(pred == y.values)/numEx*100, "%")
```
## Computing Neural Network Cost function
$J(\Theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[-y_k^{(i)} \log\left(h_\Theta(x^{(i)})_k\right) - \left(1 - y_k^{(i)}\right) \log\left(1 - h_\Theta(x^{(i)})_k\right)\right] + \frac{\lambda}{2m}\left[\sum_{j=1}^{25} \sum_{k=1}^{400} \left(\Theta_{j,k}^{(1)}\right)^2 + \sum_{j=1}^{K} \sum_{k=1}^{25} \left(\Theta_{j,k}^{(2)}\right)^2\right]$

where $K$ is the number of output classes, and 400 and 25 are the input and hidden layer sizes.
## Computing Backpropagation
Implementation of Backpropagation to compute gradients.
```
def nnCostFunction(nn_params,input_layer_size, hidden_layer_size, num_labels,X, y,Lambda):
"""
nn_params contains the parameters unrolled into a vector
compute the cost and gradient of the neural network
"""
# Reshape nn_params back into the parameters Theta1 and Theta2
Theta1 = nn_params[:((input_layer_size+1) * hidden_layer_size)].reshape(hidden_layer_size,input_layer_size+1)
Theta2 = nn_params[((input_layer_size +1)* hidden_layer_size ):].reshape(num_labels,hidden_layer_size+1)
m = X.shape[0]
J=0
X = np.hstack((np.ones((m,1)),X))
y10 = np.zeros((m,num_labels))
a1 = sigmoid(X @ Theta1.T)
a1 = np.hstack((np.ones((m,1)), a1)) # hidden layer
a2 = sigmoid(a1 @ Theta2.T) # output layer
for i in range(1,num_labels+1):
y10[:,i-1][:,np.newaxis] = np.where(y==i,1,0)
for j in range(num_labels):
J = J + sum(-y10[:,j] * np.log(a2[:,j]) - (1-y10[:,j])*np.log(1-a2[:,j]))
cost = 1/m* J
reg_J = cost + Lambda/(2*m) * (np.sum(Theta1[:,1:]**2) + np.sum(Theta2[:,1:]**2))
# Implement the backpropagation algorithm to compute the gradients
grad1 = np.zeros((Theta1.shape))
grad2 = np.zeros((Theta2.shape))
for i in range(m):
xi= X[i,:] # 1 X 401
a1i = a1[i,:] # 1 X 26
a2i =a2[i,:] # 1 X 10
d2 = a2i - y10[i,:]
d1 = Theta2.T @ d2.T * sigmoidGradient(np.hstack((1,xi @ Theta1.T)))
grad1= grad1 + d1[1:][:,np.newaxis] @ xi[:,np.newaxis].T
grad2 = grad2 + d2.T[:,np.newaxis] @ a1i[:,np.newaxis].T
grad1 = 1/m * grad1
grad2 = 1/m*grad2
grad1_reg = grad1 + (Lambda/m) * np.hstack((np.zeros((Theta1.shape[0],1)),Theta1[:,1:]))
grad2_reg = grad2 + (Lambda/m) * np.hstack((np.zeros((Theta2.shape[0],1)),Theta2[:,1:]))
return cost, grad1, grad2,reg_J, grad1_reg,grad2_reg
def sigmoidGradient(z):
"""
computes the gradient of the sigmoid function
"""
sigmoid = 1/(1 + np.exp(-z))
return sigmoid *(1-sigmoid)
```
## In Action: Cost Function
Piece up different components defined above to compute cost of our Neural Network (regularized and unregularized)
Note: with randomly initialized weights this is effectively an untrained (underfitted) model, so the cost will be large.
```
J,reg_J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, num_labels, X, y, 1)[0:4:3]
print("Cost at parameters (non-regularized):",J,"\nCost at parameters (Regularized):",reg_J)
```
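Before trusting these gradients for training, a numerical gradient check is a quick sanity test. The sketch below assumes a tiny 3-2-2 network and a handful of synthetic examples so the finite-difference loop stays fast; if backpropagation is implemented correctly the two gradients should agree to roughly 1e-9.
```
# Gradient check: compare backprop gradients with centered finite differences
in_sz, hid_sz, out_sz, m_chk = 3, 2, 2, 5
X_chk = np.random.rand(m_chk, in_sz)
y_chk = np.random.randint(1, out_sz + 1, (m_chk, 1))
t1 = randInitializeWeights(in_sz, hid_sz)
t2 = randInitializeWeights(hid_sz, out_sz)
params = np.append(t1.flatten(), t2.flatten())

# analytic (backprop) gradient, no regularization
cost_chk, g1, g2, _, _, _ = nnCostFunction(params, in_sz, hid_sz, out_sz, X_chk, y_chk, 0)
backprop_grad = np.append(g1.flatten(), g2.flatten())

# numerical gradient by centered differences
eps = 1e-4
num_grad = np.zeros_like(params)
for i in range(params.size):
    step = np.zeros_like(params)
    step[i] = eps
    cost_plus = nnCostFunction(params + step, in_sz, hid_sz, out_sz, X_chk, y_chk, 0)[0]
    cost_minus = nnCostFunction(params - step, in_sz, hid_sz, out_sz, X_chk, y_chk, 0)[0]
    num_grad[i] = (cost_plus - cost_minus) / (2 * eps)

print(np.max(np.abs(num_grad - backprop_grad)))  # should be tiny (~1e-9) if backprop is correct
```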
## Model Evaluation
Model evaluation is an important part of understanding your model's performance.
It is crucial to choose a good evaluation metric you can monitor; in our case accuracy makes the most sense.
We will monitor:
- Accuracy on Test (clf)
- AUC (implementation requires sklearn v0.23.1 +)
- Accuracy on Test (eng)
- AUC
- Accuracy on other variants (vnt)
- AUC
```
# Accuracy
testsetPred = clf.predict(X_test)
accuracy_score(y_test, testsetPred)
#AUC
#roc_auc_score(y_test, testsetPred, multi_class='ovr')
```
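The commented-out AUC call needs per-class probability scores rather than hard label predictions. A minimal sketch, assuming every class actually appears in `y_test` (which the stratified split is meant to guarantee):
```
# multiclass AUC needs probability estimates, not hard labels
test_probs = clf.predict_proba(X_test)
print(roc_auc_score(y_test, test_probs, multi_class='ovr', average='macro'))
```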
## Serialize Model Variant
Serialize whichever classifier you prefer:
(1) Default Sklearn Model (clf)
(2) Variant 1 (eng)
(3) Variant 2
(4) Variant 3
```
"""
Serialize Model
"""
joblib.dump(clf, 'mlp.pkl')
```
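As a quick round-trip check on the serialized artifact, the pickle can be loaded back and scored (a sketch using the filename from the cell above):
```
# reload the serialized model and confirm it still predicts sensibly
clf_restored = joblib.load('mlp.pkl')
print(accuracy_score(y_test, clf_restored.predict(X_test)))
```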
# Periodic Motion: Kinematic Exploration of Pendulum
Working with observations to develop a conceptual representation of periodic motion in the context of a pendulum.
### Dependencies
This is my usual set of dependencies that seems to be generally useful; we'll see if I need additional ones. When needed I will use the newer version of the random number generator since the older version is being deprecated. If you have trouble with any random numbers, check for Python updates for your Anaconda install.
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import default_rng
rng = default_rng()
```
### Conceptual Observations
From the video in the [Periodic Motion breadcrumb](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH213/PH213Materials/PH213Breadcrumbs/PH213BCHarmonic.html) we sought to extract a sense of position, velocity, and acceleration at different points in the process. In a simplified (not overthinking it) sense there are three general ways to describe the motion of the pendulum: horizontal, vertical, and angular. In each case there is a midpoint or neutral position that one might reasonably label as 0 in x, y, or $\theta$, and an extreme in each direction from that neutral position which is repeated. Where I start counting time from is up in the air, but it seems like each section of the motion takes the same amount of time. Dropping some rough data into an array and plotting it would look something like this....
```
conceptX = [-5., 0., 5., 0., -5., 0]
conceptY = [-.6, 0.,0.6,0.,-0.6,0.]
conceptTheta = [- 15. , 0., 15., 0., -15.,0.]
conceptTime = [0., 1.,2.,3.,4.,5.]
fig, ax = plt.subplots()
ax.scatter(conceptTime, conceptX ,s = 150, marker = '+',color = 'blue', label = 'x motion')
ax.scatter(conceptTime, conceptY , s = 150, marker = 'o', color = 'red', label = 'y motion')
ax.scatter(conceptTime, conceptTheta , s = 150, marker = '*', color = 'green', label = 'theta motion')
plt.rcParams.update({'font.size': 12}) # make labels easier to read
ax.set(xlabel='conceptual time', ylabel='position',
title='Conceptual motion')
fig.set_size_inches(10, 5)
plt.legend(loc= 2)
ax.grid()
fig.savefig("images/allThree.png")
plt.show()
```
This seems to match the general sense of the motion: the horizontal motion is 'larger' than the vertical motion, and the angular motion also goes back and forth in its own units. To minimize confusion with multiple data sets I'll move to a single data set for a bit -- the horizontal one.
```
fig2, ax2 = plt.subplots()
ax2.scatter(conceptTime, conceptX ,s = 150, marker = '.',color = 'blue', label = 'x motion')
plt.rcParams.update({'font.size': 12}) # make labels easier to read
ax2.set(xlabel='conceptual time', ylabel='position',
title='Conceptual motion')
fig2.set_size_inches(10, 5)
plt.legend(loc= 2)
ax2.grid()
fig2.savefig("images/justHorizontal.png")
plt.show()
```
### Velocity at Each Point
In examining the video one hopefully notices that the pendulum stops at each extreme of the motion. After some thought (particularly in the horizontal direction) it seems plausible that gravity is pulling the pendulum 'down' until it reaches its lowest point (speeding it up) and then pulls it 'back' (slowing it down) during the next part of the cycle. One can also arrive at this observation by considering the gravitational potential energy and our energy bar charts. All of this leads to the idea that the velocity reaches a maximum when the pendulum is at its 'neutral' point. If we were to draw lines to indicate the local slope of the x(t) function we might see something like this.....
Consider which of the three possibilities illustrated in the set of plots is consistent with a function that speeds up and slows down as you observe.
```
# first extreme
min0x = [-.4,0.,.4]
min0y = [-5.,-5.,-5.]
# first neutral
min1xa = [.5,1.,1.5]
min1ya = [-1.,0.,1.]
min1xb = [.5,1.,1.5]
min1yb = [-2.5,0.,2.5]
min1xc = [.8,1.,1.2]
min1yc = [-2.5,0.,2.5]
# second extreme
min2x = [1.6,2.,2.4]
min2y = [5.,5.,5.]
# second neutral
min3xa = [2.5,3.,3.5]
min3ya = [1.,0.,-1.]
min3xb = [2.5,3.,3.5]
min3yb = [2.5,0.,-2.5]
min3xc = [2.8,3.,3.2]
min3yc = [2.5,0.,-2.5]
# third extreme
min4x = [3.6,4.,4.4]
min4y = [-5.,-5.,-5.]
fig3, (bx1,bx2,bx3) = plt.subplots(3,1)
# First, the low max velocity
bx1.scatter(conceptTime, conceptX , marker = '.',color = 'blue', label = 'x motion')
# 0 velocity points
bx1.plot(min0x, min0y, color = 'red')
bx1.plot(min2x, min2y, color = 'red')
bx1.plot(min4x, min4y, color = 'red')
# max velocity points first
bx1.plot(min1xa, min1ya, color = 'red')
# max velocity points second
bx1.plot(min3xa, min3ya, color = 'red')
# second, max velocity as constant
bx2.scatter(conceptTime, conceptX , marker = '.',color = 'blue', label = 'x motion')
# 0 velocity points
bx2.plot(min0x, min0y, color = 'red')
bx2.plot(min2x, min2y, color = 'red')
bx2.plot(min4x, min4y, color = 'red')
# max velocity points first
bx2.plot(min1xb, min1yb, color = 'red')
# max velocity points second
bx2.plot(min3xb, min3yb, color = 'red')
# Third, max velocity with 'room to decrease'
bx3.scatter(conceptTime, conceptX , marker = '.',color = 'blue', label = 'x motion')
# 0 velocity points
bx3.plot(min0x, min0y, color = 'red')
bx3.plot(min2x, min2y, color = 'red')
bx3.plot(min4x, min4y, color = 'red')
# max velocity points first
bx3.plot(min1xc, min1yc, color = 'red')
# max velocity points second
bx3.plot(min3xc, min3yc, color = 'red')
plt.rcParams.update({'font.size': 12}) # make labels easier to read
bx1.set(xlabel='', ylabel='position',
title='low velocity at neutral point')
bx2.set(xlabel='', ylabel='position',
title='average velocity at neutral point')
bx3.set(xlabel='conceptual time', ylabel='position',
title='higher velocity at neutral point')
fig3.set_size_inches(10, 15)
#plt.legend(loc= 2)
bx1.grid()
bx2.grid()
bx3.grid()
fig3.savefig("images/possibleConcepts.png")
plt.show()
```
### Which one is consistent?
Hopefully you see that only the last plot with a higher velocity at the neutral point is a possible representation of the kinematics.
### Elapsed Time for a Cycle
Counting 'elephants', the time to complete one cycle in the horizontal direction seems to be between 5 and 6 seconds. If I take 6 s to be the cycle time, then each section of the plot above takes 1.5 s.
### Scale of Horizontal Motion
It is hard to know how to scale the horizontal motion, though it seems likely that it isn't 10 m from side to side. It's hard to say, but it feels more reasonable to assume the maximum horizontal distance from the neutral point is more like 1.2 m. I wouldn't argue with you if you thought it was a bit more or less.
### Tidying Up
Let's put all of this last estimation into the plot and see what happens.....
```
scaledX = [-1.2, 0., 1.2, 0., -1.2, 0]
scaledTime = [0., 1.5,3.,4.5,6.,7.5]
# s on end indicates scaled values of position and time
min0xs = [-.4,0.,.4]
min0ys = [-1.2,-1.2,-1.2]
# first neutral
min1xs = [1.2,1.5,1.8]
min1ys = [-.4,0.,.4]
# second extreme
min2xs = [2.6,3.,3.4]
min2ys = [1.2,1.2,1.2]
# second neutral
min3xs = [4.2,4.5,4.8]
min3ys = [.4,0.,-.4]
# third extreme
min4xs = [5.6,6.,6.4]
min4ys= [-1.2,-1.2,-1.2]
fig4, ax4 = plt.subplots()
ax4.scatter(scaledTime, scaledX ,s = 150, marker = '.',color = 'blue', label = 'x motion')
# 0 velocity points
ax4.plot(min0xs, min0ys, color = 'red')
ax4.plot(min2xs, min2ys, color = 'red')
ax4.plot(min4xs, min4ys, color = 'red')
# max velocity points first
ax4.plot(min1xs, min1ys, color = 'red')
# max velocity points second
ax4.plot(min3xs, min3ys, color = 'red')
plt.rcParams.update({'font.size': 12}) # make labels easier to read
ax4.set(xlabel='scaled time (s)', ylabel='position (m)',
title='Estimated Motion')
fig4.set_size_inches(10, 5)
plt.legend(loc= 2)
ax4.grid()
fig4.savefig("images/bestConcept.png")
plt.show()
```
## Adding a Model
So what sort of mathematical function might be plausible to model this with? No surprise that sine and cosine are very reasonable choices. This particular set of data looks most like the negative of the cosine function, so I will use that. Because cos($\theta$) ranges between 1 and -1, I need to multiply by some scalar to get it to hit the peaks and troughs of my data. This factor is called the amplitude and is often abbreviated A.
$$ \large x(t) = A \cos(\theta) $$
In this expression the time dependence must be hiding in the $\theta$ term: $\theta = \theta (t)$. Then there is the question of how to stretch or shrink the cosine function so that it completes a full cycle in 6 s. You could go back to your trig class and try to remember this, but I'll save you the trouble. Since the cosine function completes a full cycle in 2$\pi$ radians, what we need is:
$$ \large \theta (t) \: = \: \frac{2\pi}{T}\:t $$
T is called the period of the motion which we have estimated to be 6 s. Notice that when t = 6 s then t/T = 1 and $\theta$ = 2$\pi$.
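Putting the amplitude and the time-dependent angle together (with a minus sign, since the data starts at its lowest point and looks like the negative of the cosine) gives the model I'll use:
$$ \large x(t) = -A \cos \left( \frac{2\pi}{T} \: t \right) $$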
This is implemented in the next cell and then plotted on top of our conceptual sketch....
```
amplitude = 1.2
omega = 2*np.pi/6.
modelTime = np.linspace(0, 8, 500)
modelX = -amplitude * np.cos(omega*modelTime)
fig5, ax5 = plt.subplots()
ax5.scatter(scaledTime, scaledX ,s = 150, marker = '.',color = 'blue', label = 'x motion')
ax5.plot(modelTime, modelX, color = 'green', label = 'harmonic model')
# 0 velocity points
ax5.plot(min0xs, min0ys, color = 'red')
ax5.plot(min2xs, min2ys, color = 'red')
ax5.plot(min4xs, min4ys, color = 'red')
# max velocity points first
ax5.plot(min1xs, min1ys, color = 'red')
# max velocity points second
ax5.plot(min3xs, min3ys, color = 'red')
plt.rcParams.update({'font.size': 12}) # make labels easier to read
ax5.set(xlabel='scaled time (s)', ylabel='position (m)',
title='Estimated Motion')
fig5.set_size_inches(10, 5)
plt.legend(loc= 2)
ax5.grid()
fig5.savefig("images/dataWithModel.png")
plt.show()
```
# Let's play with "tflite micro"!
## Original notebook: [@dansitu](https://twitter.com/dansitu)
### Japanese version: [@proppy](https://twitter.com/proppy)
# What is "tflite micro"?
- "tflite" running on a microcontroller
![img](https://wiki.stm32duino.com/images/thumb/d/db/STM32_Blue_Pill_perspective.jpg/800px-STM32_Blue_Pill_perspective.jpg)
- https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro
```
! python -m pip install --pre tensorflow
! python -m pip install matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200
```
# Let's build the simplest possible model!
## 1000 samples of sin()
```
import numpy as np
import math
import matplotlib.pyplot as plt
x_values = np.random.uniform(low=0, high=2*math.pi, size=1000)
np.random.shuffle(x_values)
y_values = np.sin(x_values)
plt.plot(x_values, y_values, 'b.')
plt.show()
```
## Add some noise
```
y_values += 0.1 * np.random.randn(*y_values.shape)
plt.plot(x_values, y_values, 'b.')
plt.show()
```
## Split the dataset properly
```
x_train, x_test, x_validate = x_values[:600], x_values[600:800], x_values[800:]
y_train, y_test, y_validate = y_values[:600], y_values[600:800], y_values[800:]
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
```
## Warm it up for 10 seconds with Keras
```
from tensorflow.keras import layers
import tensorflow as tf
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(1,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
history = model.fit(x_train, y_train, epochs=200, batch_size=16,
validation_data=(x_validate, y_validate), verbose=1)
```
## Try out the model
```
predictions = model.predict(x_test)
plt.clf()
plt.plot(x_test, y_test, 'bo', label='Test')
plt.plot(x_test, predictions, 'ro', label='Keras')
plt.legend()
plt.show()
```
## Gently convert it to tflite
```
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
open("sine_model_quantized.tflite", "wb").write(tflite_model)
```
## One final check before loading it onto the microcontroller
```
interpreter = tf.lite.Interpreter('sine_model_quantized.tflite')
interpreter.allocate_tensors()
input = interpreter.tensor(interpreter.get_input_details()[0]["index"])
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
lite_predictions = np.empty(x_test.size)
for i in range(x_test.size):
input()[0] = x_test[i]
interpreter.invoke()
lite_predictions[i] = output()[0]
plt.plot(x_test, y_test, 'bo', label='Test')
plt.plot(x_test, predictions, 'ro', label='Keras')
plt.plot(x_test, lite_predictions, 'kx', label='TFLite')
plt.legend()
plt.show()
```
## Convert it to "ANSI C" so it can go on the microcontroller
```
! xxd -i sine_model_quantized.tflite > sine_model_data.h
```
```
unsigned char sine_model_quantized_tflite[] = {
0x18, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, 0x00, 0x00, 0x0e, 0x00,
0x18, 0x00, 0x04, 0x00, 0x08, 0x00, 0x0c, 0x00, 0x10, 0x00, 0x14, 0x00,
0x0e, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x10, 0x0a, 0x00, 0x00,
0xb8, 0x05, 0x00, 0x00, 0xa0, 0x05, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
0x0b, 0x00, 0x00, 0x00, 0x90, 0x05, 0x00, 0x00, 0x7c, 0x05, 0x00, 0x00,
0x24, 0x05, 0x00, 0x00, 0xd4, 0x04, 0x00, 0x00, 0xc4, 0x00, 0x00, 0x00,
// ...
}
unsigned int sine_model_quantized_tflite_len = 2640;
```
## Arduino time
`SineSerial.ino`
```
#include "TfLiteMicroArduino.h"
#include "sine_model_data.h"
float angle = 0;
void setup() {
// ...
}
void loop() {
// ...
}
```
## Set up the tflite micro interpreter
```
void setup() {
Serial.begin(9600);
TfLiteMicro.begin(g_sine_model_data);
}
```
## Invoke the model
```
void loop() {
// ...
TfLiteMicro.inputFloat(0)[0] = angle;
TfLiteMicro.invoke();
Serial.println(TfLiteMicro.outputFloat(0)[0]);
angle += 0.1f;
if (angle > 2 * M_PI) {
angle = 0.0f;
}
}
```
![plotter](plotter.png)
## Light up an LED
```
void setup() {
// ...
TfLiteMicro.begin(g_sine_model_data);
pinMode(PB9, PWM);
}
void loop() {
// ...
TfLiteMicro.inputFloat(0)[0] = angle;
TfLiteMicro.invoke();
float y = TfLiteMicro.outputFloat(0)[0];
pwmWrite(PB9, 65535 * (y + 1.0f) / 2.0f);
angle += 0.01f;
if (angle > 2 * M_PI) {
angle = 0.0f;
}
}
```
![leds](leds.gif)
```
%matplotlib inline
```
A Gentle Introduction to ``torch.autograd``
---------------------------------
``torch.autograd`` is PyTorch’s automatic differentiation engine that powers
neural network training. In this section, you will get a conceptual
understanding of how autograd helps a neural network train.
Background
~~~~~~~~~~
Neural networks (NNs) are a collection of nested functions that are
executed on some input data. These functions are defined by *parameters*
(consisting of weights and biases), which in PyTorch are stored in
tensors.
Training a NN happens in two steps:
**Forward Propagation**: In forward prop, the NN makes its best guess
about the correct output. It runs the input data through each of its
functions to make this guess.
**Backward Propagation**: In backprop, the NN adjusts its parameters
proportionate to the error in its guess. It does this by traversing
backwards from the output, collecting the derivatives of the error with
respect to the parameters of the functions (*gradients*), and optimizing
the parameters using gradient descent. For a more detailed walkthrough
of backprop, check out this `video from
3Blue1Brown <https://www.youtube.com/watch?v=tIeHLnjs5U8>`__.
Usage in PyTorch
~~~~~~~~~~~
Let's take a look at a single training step.
For this example, we load a pretrained resnet18 model from ``torchvision``.
We create a random data tensor to represent a single image with 3 channels, and height & width of 64,
and its corresponding ``label`` initialized to some random values.
```
import torch, torchvision
model = torchvision.models.resnet18(pretrained=True)
data = torch.rand(1, 3, 64, 64)
labels = torch.rand(1, 1000)
```
Next, we run the input data through the model through each of its layers to make a prediction.
This is the **forward pass**.
```
prediction = model(data) # forward pass
```
We use the model's prediction and the corresponding label to calculate the error (``loss``).
The next step is to backpropagate this error through the network.
Backward propagation is kicked off when we call ``.backward()`` on the error tensor.
Autograd then calculates and stores the gradients for each model parameter in the parameter's ``.grad`` attribute.
```
loss = (prediction - labels).sum()
loss.backward() # backward pass
```
Next, we load an optimizer, in this case SGD with a learning rate of 0.01 and momentum of 0.9.
We register all the parameters of the model in the optimizer.
```
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
```
Finally, we call ``.step()`` to initiate gradient descent. The optimizer adjusts each parameter by its gradient stored in ``.grad``.
```
optim.step() #gradient descent
```
At this point, you have everything you need to train your neural network.
The below sections detail the workings of autograd - feel free to skip them.
--------------
Differentiation in Autograd
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Let's take a look at how ``autograd`` collects gradients. We create two tensors ``a`` and ``b`` with
``requires_grad=True``. This signals to ``autograd`` that every operation on them should be tracked.
```
import torch
a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
```
We create another tensor ``Q`` from ``a`` and ``b``.
\begin{align}Q = 3a^3 - b^2\end{align}
```
Q = 3*a**3 - b**2
```
Let's assume ``a`` and ``b`` to be parameters of an NN, and ``Q``
to be the error. In NN training, we want gradients of the error
w.r.t. parameters, i.e.
\begin{align}\frac{\partial Q}{\partial a} = 9a^2\end{align}
\begin{align}\frac{\partial Q}{\partial b} = -2b\end{align}
When we call ``.backward()`` on ``Q``, autograd calculates these gradients
and stores them in the respective tensors' ``.grad`` attribute.
We need to explicitly pass a ``gradient`` argument in ``Q.backward()`` because it is a vector.
``gradient`` is a tensor of the same shape as ``Q``, and it represents the
gradient of Q w.r.t. itself, i.e.
\begin{align}\frac{dQ}{dQ} = 1\end{align}
Equivalently, we can also aggregate Q into a scalar and call backward implicitly, like ``Q.sum().backward()``.
```
external_grad = torch.tensor([1., 1.])
Q.backward(gradient=external_grad)
```
Gradients are now deposited in ``a.grad`` and ``b.grad``
```
# check if collected gradients are correct
print(9*a**2 == a.grad)
print(-2*b == b.grad)
```
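As a quick check of the equivalent scalar form mentioned above, here is a minimal sketch (using fresh tensors, since gradients would otherwise accumulate into the existing ``.grad`` attributes):
```
# Recreate the tensors so that .grad starts out empty
a2 = torch.tensor([2., 3.], requires_grad=True)
b2 = torch.tensor([6., 4.], requires_grad=True)
Q2 = 3*a2**3 - b2**2

# Aggregating Q to a scalar lets us call backward() without an explicit gradient argument
Q2.sum().backward()

print(a2.grad)  # tensor([36., 81.]) == 9*a2**2
print(b2.grad)  # tensor([-12., -8.]) == -2*b2
```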
Optional Reading - Vector Calculus using ``autograd``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mathematically, if you have a vector valued function
$\vec{y}=f(\vec{x})$, then the gradient of $\vec{y}$ with
respect to $\vec{x}$ is a Jacobian matrix $J$:
\begin{align}J
=
\left(\begin{array}{cc}
\frac{\partial \bf{y}}{\partial x_{1}} &
... &
\frac{\partial \bf{y}}{\partial x_{n}}
\end{array}\right)
=
\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)\end{align}
Generally speaking, ``torch.autograd`` is an engine for computing
vector-Jacobian product. That is, given any vector $\vec{v}$, compute the product
$J^{T}\cdot \vec{v}$
If $v$ happens to be the gradient of a scalar function
\begin{align}l
=
g\left(\vec{y}\right)
=
\left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)^{T}\end{align}
then by the chain rule, the vector-Jacobian product would be the
gradient of $l$ with respect to $\vec{x}$:
\begin{align}J^{T}\cdot \vec{v}=\left(\begin{array}{ccc}
\frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\
\vdots & \ddots & \vdots\\
\frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}}
\end{array}\right)\left(\begin{array}{c}
\frac{\partial l}{\partial y_{1}}\\
\vdots\\
\frac{\partial l}{\partial y_{m}}
\end{array}\right)=\left(\begin{array}{c}
\frac{\partial l}{\partial x_{1}}\\
\vdots\\
\frac{\partial l}{\partial x_{n}}
\end{array}\right)\end{align}
This characteristic of vector-Jacobian product is what we use in the above example;
``external_grad`` represents $\vec{v}$.
Computational Graph
~~~~~~~~~~~~~~~~~~~
Conceptually, autograd keeps a record of data (tensors) & all executed
operations (along with the resulting new tensors) in a directed acyclic
graph (DAG) consisting of
`Function <https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function>`__
objects. In this DAG, leaves are the input tensors, roots are the output
tensors. By tracing this graph from roots to leaves, you can
automatically compute the gradients using the chain rule.
In a forward pass, autograd does two things simultaneously:
- run the requested operation to compute a resulting tensor, and
- maintain the operation’s *gradient function* in the DAG.
The backward pass kicks off when ``.backward()`` is called on the DAG
root. ``autograd`` then:
- computes the gradients from each ``.grad_fn``,
- accumulates them in the respective tensor’s ``.grad`` attribute, and
- using the chain rule, propagates all the way to the leaf tensors.
Below is a visual representation of the DAG in our example. In the graph,
the arrows are in the direction of the forward pass. The nodes represent the backward functions
of each operation in the forward pass. The leaf nodes in blue represent our leaf tensors ``a`` and ``b``.
.. figure:: /_static/img/dag_autograd.png
<div class="alert alert-info"><h4>Note</h4><p>**DAGs are dynamic in PyTorch**
An important thing to note is that the graph is recreated from scratch; after each
``.backward()`` call, autograd starts populating a new graph. This is
exactly what allows you to use control flow statements in your model;
you can change the shape, size and operations at every iteration if
needed.</p></div>
Exclusion from the DAG
^^^^^^^^^^^^^^^^^^^^^^
``torch.autograd`` tracks operations on all tensors which have their
``requires_grad`` flag set to ``True``. For tensors that don’t require
gradients, setting this attribute to ``False`` excludes it from the
gradient computation DAG.
The output tensor of an operation will require gradients even if only a
single input tensor has ``requires_grad=True``.
```
x = torch.rand(5, 5)
y = torch.rand(5, 5)
z = torch.rand((5, 5), requires_grad=True)
a = x + y
print(f"Does `a` require gradients? : {a.requires_grad}")
b = x + z
print(f"Does `b` require gradients?: {b.requires_grad}")
```
In a NN, parameters that don't compute gradients are usually called **frozen parameters**.
It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters
(this offers some performance benefits by reducing autograd computations).
Another common usecase where exclusion from the DAG is important is for
`finetuning a pretrained network <https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html>`__
In finetuning, we freeze most of the model and typically only modify the classifier layers to make predictions on new labels.
Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.
```
from torch import nn, optim
model = torchvision.models.resnet18(pretrained=True)
# Freeze all the parameters in the network
for param in model.parameters():
param.requires_grad = False
```
Let's say we want to finetune the model on a new dataset with 10 labels.
In resnet, the classifier is the last linear layer ``model.fc``.
We can simply replace it with a new linear layer (unfrozen by default)
that acts as our classifier.
```
model.fc = nn.Linear(512, 10)
```
Now all parameters in the model, except the parameters of ``model.fc``, are frozen.
The only parameters that compute gradients are the weights and bias of ``model.fc``.
```
# Optimize only the classifier
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
```
Notice although we register all the parameters in the optimizer,
the only parameters that are computing gradients (and hence updated in gradient descent)
are the weights and bias of the classifier.
The same exclusionary functionality is available as a context manager in
`torch.no_grad() <https://pytorch.org/docs/stable/generated/torch.no_grad.html>`__
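For example, a minimal sketch of the context-manager form:
```
x = torch.rand(5, 5, requires_grad=True)

with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False: operations inside the context are excluded from the DAG

y = x * 2
print(y.requires_grad)  # True: tracked as usual outside the context manager
```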
--------------
Further readings:
~~~~~~~~~~~~~~~~~~~
- `In-place operations & Multithreaded Autograd <https://pytorch.org/docs/stable/notes/autograd.html>`__
- `Example implementation of reverse-mode autodiff <https://colab.research.google.com/drive/1VpeE6UvEPRz9HmsHh1KS0XxXjYu533EC>`__
# Exploratory Data Analysis
## Import libraries
```
import pandas as pd
```
### Load data
```
df = pd.read_csv('Train.csv', sep=';')
df.head()
len(df)
df['opinion'].str.len().mean()
df['opinion'].str.len().max()
df['opinion'].str.len().min()
df['opinion'].str.len().hist(bins=200)
len(df[df['opinion'].str.len() < 1000])
df[df['opinion'].str.len() < 1000]['opinion'].str.len().hist(bins=200)
df.groupby(by='rate').name.count()
df.plot.hist(by='rate')
df['op_len'] = df['opinion'].str.len()
print(df.corr())
print(df[df['op_len'] < 1000 ].corr())
df[df['op_len'] < 1000 ].corr().style.background_gradient(cmap='coolwarm')
```
# Data cleaning
```
from html import unescape
```
Remove HTML escaping from the opinion text
```
df.update(df[df['opinion'].str.contains('&')]['opinion'].apply(unescape))
df.opinion.head()
```
Remove the leading and trailing quotation marks
```
df['opinion'] = df['opinion'].str[1:-1]
df.opinion.head()
df[df['opinion'].str.len() < 2]
df = df[df['opinion'].str.len() > 2]
df[df['opinion'].str.contains('\n')].count()
```
# N-Grams & logistic regression
```
import sklearn
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
train, test = train_test_split(df, test_size=0.2)
print(train.shape, test.shape)
tfidf_vect = TfidfVectorizer(ngram_range=(1,2), max_features=10000)
X_train_tfidf = tfidf_vect.fit_transform(train['opinion'])
len(tfidf_vect.vocabulary_)
indices = np.argsort(tfidf_vect.idf_)[::-1]
features = tfidf_vect.get_feature_names()
[features[i] for i in indices[:100]]
lr = LogisticRegression(solver='saga',
multi_class='multinomial',
penalty='l1',
random_state=42)
lr.fit(X_train_tfidf, train['rate'])
X_test_tfidf = tfidf_vect.transform(test['opinion'])
predicted = lr.predict(X_test_tfidf)
accuracy = np.sum(predicted == test['rate']) / predicted.shape[0]
print(accuracy)
error = np.sum(np.abs(predicted - test['rate'])) / predicted.shape[0]
print(error)
pd.DataFrame(data=np.abs(predicted - test['rate'])).plot.hist()
```
Get drug and disease names for stop words
```
drug_names = df['name'].str.lower().unique()
drug_names
disease_names = df['condition'].str.lower().dropna()
disease_names = disease_names[~disease_names.str.contains('</span>')].unique()
stop_words = list(disease_names) + list(drug_names)
stop_words
tfidf_vect = TfidfVectorizer(ngram_range=(1,2), max_features=15000, sublinear_tf=True)
X_train_tfidf = tfidf_vect.fit_transform(train['opinion'])
len(tfidf_vect.vocabulary_)
lr = LogisticRegression(solver='saga',
multi_class='multinomial',
penalty='l1',
random_state=42)
lr.fit(X_train_tfidf, train['rate'])
X_test_tfidf = tfidf_vect.transform(test['opinion'])
predicted = lr.predict(X_test_tfidf)
accuracy = np.sum(predicted == test['rate']) / predicted.shape[0]
print(accuracy)
error = np.sum(np.abs(predicted - test['rate'])) / predicted.shape[0]
print(error)
indices = np.argsort(tfidf_vect.idf_)[::-1]
features = tfidf_vect.get_feature_names()
[(features[i], tfidf_vect.idf_[i]) for i in indices[13000:13100]]
```
Examples of BIG mistakes
```
def f(x):
if x > 7:
return 'high'
if x > 3:
return 'medium'
return 'low'
accuracy1 = np.sum(np.array([f(p) for p in predicted]) == test['rate1']) / predicted.shape[0]
print(accuracy1)
for p, r , r2 in zip(predicted, test['opinion'], test['rate']):
if abs(p-r2) > 6:
print('opinion=', r, 'predicted=', p, 'rate=',r2)
```
# Deploy machine learning models to Azure
description: (preview) deploy your machine learning or deep learning model as a web service in the Azure cloud.
## Connect to your workspace
```
from azureml.core import Workspace
# get workspace configurations
ws = Workspace.from_config()
# get subscription and resourcegroup from config
SUBSCRIPTION_ID = ws.subscription_id
RESOURCE_GROUP = ws.resource_group
RESOURCE_GROUP, SUBSCRIPTION_ID
!az account set -s $SUBSCRIPTION_ID
!az ml workspace list --resource-group=$RESOURCE_GROUP
```
## Register your model
A registered model is a logical container, stored in the cloud, for all of the files located at `model_path`; the container is associated with a version number and other metadata.
## Register a model from a local file
You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine.
```
!wget https://aka.ms/bidaf-9-model -O model.onnx
!az ml model register -n bidaf_onnx -p ./model.onnx
```
## Deploy your machine learning model
Replace bidaf_onnx:1 with the name of your model and its version number
```
!az ml model deploy -n myservice -m bidaf_onnx:1 --overwrite --ic dummyinferenceconfig.json --dc deploymentconfig.json
!az ml service get-logs -n myservice
```
## Call into your model
```
!curl -v http://localhost:32267
!curl -v -X POST -H "content-type:application/json" -d '{"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}' http://localhost:32267/score
```
Notice the use of the AZUREML_MODEL_DIR environment variable to locate your registered model. Now that you've added some pip packages, you also need to update your inference configuration with [new configurations](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=azcli#tabpanel_7_azcli) so those additional packages are included.
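The entry script itself isn't reproduced in this notebook. Purely as an illustration, a minimal `score.py` might look like the sketch below; the file name, the `model.onnx` path, and the dummy response are assumptions for this sketch, not the configuration used above.
```
# score.py -- a minimal, hypothetical entry script sketch (not the one referenced above)
import json
import os

model_path = None

def init():
    # Runs once when the service starts.
    # AZUREML_MODEL_DIR points at the folder containing the registered model files.
    global model_path
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.onnx")
    # A real script would load the model here (for example, with onnxruntime).

def run(raw_data):
    # Runs for every scoring request; raw_data is the request body as a string.
    data = json.loads(raw_data)
    # A real script would run inference here and return the model's answer.
    return {"query_received": data.get("query"), "model_file": model_path}
```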
## Deploy again and call your service
Now that we've deployed successfully with a dummy entry script, let's try deploying with a real one. Replace `bidaf_onnx:1` with the name of your model and its version number
```
!az ml model deploy -n myservice -m bidaf_onnx:1 --overwrite --ic inferenceconfig.json --dc deploymentconfig.json
!az ml service get-logs -n myservice
```
Then ensure you can send a post request to the service:
```
!curl -v -X POST -H "content-type:application/json" -d '{"query": "What color is the fox", "context": "The quick brown fox jumped over the lazy dog."}' http://localhost:32267/score
```
## Re-deploy to cloud
Once you've confirmed your service works locally and chosen a remote compute target, you are ready to deploy to the cloud.
Change your re-deploy configuration to correspond to the compute target you've chosen, in this case Azure Container Instances.
Deploy your service again
```
!az ml model deploy -n myaciservice -m bidaf_onnx:1 --overwrite --ic inferenceconfig.json --dc re-deploymentconfig.json
!az ml service get-logs -n myaciservice
```
## Call your remote webservice
When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request.
```
import requests
import json
from azureml.core import Webservice, Workspace
ws = Workspace.from_config()
service = Webservice(workspace=ws, name="myaciservice")
scoring_uri = service.scoring_uri
# If the service is authenticated, set the key or token
key, _ = service.get_keys()
# Set the appropriate headers
headers = {"Content-Type": "application/json"}
headers["Authorization"] = f"Bearer {key}"
# Make the request and display the response and logs
data = {
"query": "What color is the fox",
"context": "The quick brown fox jumped over the lazy dog.",
}
data = json.dumps(data)
resp = requests.post(scoring_uri, data=data, headers=headers)
print(resp.text)
print(service.get_logs())
```
# Delete resources
```
# Get the current model id
import os
stream = os.popen(
'az ml model list --model-name=bidaf_onnx --latest --query "[0].id" -o tsv'
)
MODEL_ID = stream.read()[0:-1]
MODEL_ID
!az ml service delete -n myservice
!az ml service delete -n myaciservice
!az ml model delete --model-id=$MODEL_ID
```
## Next Steps
Try reading [our documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=python)
# Financial Planning with APIs and Simulations
In this Challenge, you’ll create two financial analysis tools by using a single Jupyter notebook:
Part 1: A financial planner for emergencies. The members will be able to use this tool to visualize their current savings. The members can then determine if they have enough reserves for an emergency fund.
Part 2: A financial planner for retirement. This tool will forecast the performance of their retirement portfolio in 30 years. To do this, the tool will make an Alpaca API call via the Alpaca SDK to get historical price data for use in Monte Carlo simulations.
You’ll use the information from the Monte Carlo simulation to answer questions about the portfolio in your Jupyter notebook.
```
# Import the required libraries and dependencies
import os
import requests
import json
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
%matplotlib inline
# Load the environment variables from the .env file
#by calling the load_dotenv function
load_dotenv()
```
## Part 1: Create a Financial Planner for Emergencies
### Evaluate the Cryptocurrency Wallet by Using the Requests Library
In this section, you’ll determine the current value of a member’s cryptocurrency wallet. You’ll collect the current prices for the Bitcoin and Ethereum cryptocurrencies by using the Python Requests library. For the prototype, you’ll assume that the member holds 1.2 Bitcoins (BTC) and 5.3 Ethereum coins (ETH). To do all this, complete the following steps:
1. Create a variable named `monthly_income`, and set its value to `12000`.
2. Use the Requests library to get the current price (in US dollars) of Bitcoin (BTC) and Ethereum (ETH) by using the API endpoints that the starter code supplies.
3. Navigate the JSON response object to access the current price of each coin, and store each in a variable.
> **Hint** Note the specific identifier for each cryptocurrency in the API JSON response. The Bitcoin identifier is `1`, and the Ethereum identifier is `1027`.
4. Calculate the value, in US dollars, of the current amount of each cryptocurrency and of the entire cryptocurrency wallet.
```
# The current number of coins for each cryptocurrency asset held in the portfolio.
BTC_coins = 1.2
ETH_coins = 5.3
```
#### Step 1: Create a variable named `monthly_income`, and set its value to `12000`.
```
# The monthly amount for the member's household income
monthly_income = 12000
```
#### Review the endpoint URLs for the API calls to Free Crypto API in order to get the current pricing information for both BTC and ETH.
```
# The Free Crypto API Call endpoint URLs for the held cryptocurrency assets
BTC_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=USD"
ETH_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=USD"
```
#### Step 2. Use the Requests library to get the current price (in US dollars) of Bitcoin (BTC) and Ethereum (ETH) by using the API endpoints that the starter code supplied.
```
# Using the Python requests library, make an API call to access the current price of BTC
BTC_response = requests.get(BTC_url).json()
# Use the json.dumps function to review the response data from the API call
# Use the indent and sort_keys parameters to make the response object readable
print(json.dumps(BTC_response, indent=4, sort_keys=True))
# Using the Python requests library, make an API call to access the current price ETH
ETH_response = requests.get(ETH_url).json()
# Use the json.dumps function to review the response data from the API call
# Use the indent and sort_keys parameters to make the response object readable
print(json.dumps(ETH_response, indent=4, sort_keys=True))
```
#### Step 3: Navigate the JSON response object to access the current price of each coin, and store each in a variable.
```
# Navigate the BTC response object to access the current price of BTC
BTC_price = BTC_response['data']['1']['quotes']['USD']['price']
# Print the current price of BTC
print(f"The current price of Bitcoin is ${BTC_price}")
# Navigate the ETH response object to access the current price of ETH
ETH_price = ETH_response['data']['1027']['quotes']['USD']['price']
# Print the current price of ETH
print(f"The current price of Ethereum is ${ETH_price}")
```
### Step 4: Calculate the value, in US dollars, of the current amount of each cryptocurrency and of the entire cryptocurrency wallet.
```
# Compute the current value of the BTC holding
BTC_value = BTC_price * BTC_coins
# Print current value of your holding in BTC
print(f"The current value of Bitcoin is ${BTC_value}")
# Compute the current value of the ETH holding
ETH_value = ETH_price * ETH_coins
# Print current value of your holding in ETH
print(f"The current value of Ethereum is ${ETH_value}")
# Compute the total value of the cryptocurrency wallet
# Add the value of the BTC holding to the value of the ETH holding
crypto_wallet_value = BTC_value + ETH_value
# Print current cryptocurrency wallet balance
print(f"The current cryptocurrency wallet balance is ${crypto_wallet_value}")
```
### Evaluate the Stock and Bond Holdings by Using the Alpaca SDK
In this section, you’ll determine the current value of a member’s stock and bond holdings. You’ll make an API call to Alpaca via the Alpaca SDK to get the current closing prices of the SPDR S&P 500 ETF Trust (ticker: SPY) and of the iShares Core US Aggregate Bond ETF (ticker: AGG). For the prototype, assume that the member holds 110 shares of SPY, which represents the stock portion of their portfolio, and 200 shares of AGG, which represents the bond portion. To do all this, complete the following steps:
1. In the `Starter_Code` folder, create an environment file (`.env`) to store the values of your Alpaca API key and Alpaca secret key.
2. Set the variables for the Alpaca API and secret keys. Using the Alpaca SDK, create the Alpaca `tradeapi.REST` object. In this object, include the parameters for the Alpaca API key, the secret key, and the version number.
3. Set the following parameters for the Alpaca API call:
- `tickers`: Use the tickers for the member’s stock and bond holdings.
- `timeframe`: Use a time frame of one day.
- `start_date` and `end_date`: Use the same date for these parameters, and format them with the date of the previous weekday (or `2020-08-07`). This is because you want the one closing price for the most-recent trading day.
4. Get the current closing prices for `SPY` and `AGG` by using the Alpaca `get_barset` function. Format the response as a Pandas DataFrame by including the `df` property at the end of the `get_barset` function.
5. Navigating the Alpaca response DataFrame, select the `SPY` and `AGG` closing prices, and store them as variables.
6. Calculate the value, in US dollars, of the current amount of shares in each of the stock and bond portions of the portfolio, and print the results.
#### Review the total number of shares held in both (SPY) and (AGG).
```
# Current amount of shares held in both the stock (SPY) and bond (AGG) portion of the portfolio.
SPY_shares = 110
AGG_shares = 200
```
#### Step 1: In the `Starter_Code` folder, create an environment file (`.env`) to store the values of your Alpaca API key and Alpaca secret key.
#### Step 2: Set the variables for the Alpaca API and secret keys. Using the Alpaca SDK, create the Alpaca `tradeapi.REST` object. In this object, include the parameters for the Alpaca API key, the secret key, and the version number.
```
# Set the variables for the Alpaca API and secret keys
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
# Create the Alpaca tradeapi.REST object
alpaca = tradeapi.REST(
alpaca_api_key,
alpaca_secret_key,
api_version="v2")
```
#### Step 3: Set the following parameters for the Alpaca API call:
- `tickers`: Use the tickers for the member’s stock and bond holdings.
- `timeframe`: Use a time frame of one day.
- `start_date` and `end_date`: Use the same date for these parameters, and format them with the date of the previous weekday (or `2020-08-07`). This is because you want the one closing price for the most-recent trading day.
```
# Set the tickers for both the bond and stock portion of the portfolio
tickers = ["AGG", "SPY"]
# Set timeframe to 1D
timeframe = "1D"
# Format current date as ISO format
# Set both the start and end date at the date of your prior weekday
# This will give you the closing price of the previous trading day
# Alternatively you can use a start and end date of 2020-08-07
start_date = pd.Timestamp("2020-08-07", tz="America/New_York").isoformat()
end_date = pd.Timestamp("2020-08-07", tz="America/New_York").isoformat()
```
#### Step 4: Get the current closing prices for `SPY` and `AGG` by using the Alpaca `get_barset` function. Format the response as a Pandas DataFrame by including the `df` property at the end of the `get_barset` function.
```
# Use the Alpaca get_barset function to get current closing prices the portfolio
# Be sure to set the `df` property after the function to format the response object as a DataFrame
portfolio_closing_prices = alpaca.get_barset(
tickers,
timeframe,
start = start_date,
end = end_date
).df
# Review the first 5 rows of the Alpaca DataFrame
portfolio_closing_prices.head()
```
#### Step 5: Navigating the Alpaca response DataFrame, select the `SPY` and `AGG` closing prices, and store them as variables.
```
# Access the closing price for AGG from the Alpaca DataFrame
# Converting the value to a floating point number
AGG_close_price = float(portfolio_closing_prices["AGG"]["close"][0])
# Print the AGG closing price
print(AGG_close_price)
# Access the closing price for SPY from the Alpaca DataFrame
# Converting the value to a floating point number
SPY_close_price = float(portfolio_closing_prices["SPY"]["close"][0])
# Print the SPY closing price
print(SPY_close_price)
```
#### Step 6: Calculate the value, in US dollars, of the current amount of shares in each of the stock and bond portions of the portfolio, and print the results.
```
# Calculate the current value of the bond portion of the portfolio
AGG_value = AGG_close_price * AGG_shares
# Print the current value of the bond portfolio
print(f"The current value of the bond portfolio is ${AGG_value}")
# Calculate the current value of the stock portion of the portfolio
SPY_value = SPY_close_price * SPY_shares
# Print the current value of the stock portfolio
print(f"The current value of the stock portfolio is ${SPY_value}")
# Calculate the total value of the stock and bond portion of the portfolio
total_stocks_bonds = SPY_value + AGG_value
# Print the current balance of the stock and bond portion of the portfolio
print(f"The current balance of the stock and bond portion is ${total_stocks_bonds}")
# Calculate the total value of the member's entire savings portfolio
# Add the value of the cryptocurrency wallet to the value of the total stocks and bonds
total_portfolio = crypto_wallet_value + total_stocks_bonds
# Print the total value of the member's entire savings portfolio
print(f"The total value of the member's entire savings portfolio is ${total_portfolio}")
```
### Evaluate the Emergency Fund
In this section, you’ll use the valuations for the cryptocurrency wallet and for the stock and bond portions of the portfolio to determine if the credit union member has enough savings to build an emergency fund into their financial plan. To do this, complete the following steps:
1. Create a Python list named `savings_data` that has two elements. The first element contains the total value of the cryptocurrency wallet. The second element contains the total value of the stock and bond portions of the portfolio.
2. Use the `savings_data` list to create a Pandas DataFrame named `savings_df`, and then display this DataFrame. The function to create the DataFrame should take the following three parameters:
- `savings_data`: Use the list that you just created.
- `columns`: Set this parameter equal to a Python list with a single value called `amount`.
- `index`: Set this parameter equal to a Python list with the values of `crypto` and `stock/bond`.
3. Use the `savings_df` DataFrame to plot a pie chart that visualizes the composition of the member’s portfolio. The y-axis of the pie chart uses `amount`. Be sure to add a title.
4. Using Python, determine if the current portfolio has enough to create an emergency fund as part of the member’s financial plan. Ideally, an emergency fund should equal to three times the member’s monthly income. To do this, implement the following steps:
1. Create a variable named `emergency_fund_value`, and set it equal to three times the value of the member’s `monthly_income` of $12000. (You set this earlier in Part 1).
2. Create a series of three if statements to determine if the member’s total portfolio is large enough to fund the emergency portfolio:
1. If the total portfolio value is greater than the emergency fund value, display a message congratulating the member for having enough money in this fund.
2. Else if the total portfolio value is equal to the emergency fund value, display a message congratulating the member on reaching this important financial goal.
3. Else the total portfolio is less than the emergency fund value, so display a message showing how many dollars away the member is from reaching the goal. (Subtract the total portfolio value from the emergency fund value.)
#### Step 1: Create a Python list named `savings_data` that has two elements. The first element contains the total value of the cryptocurrency wallet. The second element contains the total value of the stock and bond portions of the portfolio.
```
# Consolidate financial assets data into a Python list
savings_data = [crypto_wallet_value, total_stocks_bonds]
# Review the Python list savings_data
savings_data
```
#### Step 2: Use the `savings_data` list to create a Pandas DataFrame named `savings_df`, and then display this DataFrame. The function to create the DataFrame should take the following three parameters:
- `savings_data`: Use the list that you just created.
- `columns`: Set this parameter equal to a Python list with a single value called `amount`.
- `index`: Set this parameter equal to a Python list with the values of `crypto` and `stock/bond`.
```
# Create a Pandas DataFrame called savings_df from the savings_data list
savings_df = pd.DataFrame(
    savings_data,
    columns=['Amount'],
    index=['Crypto', 'Stock/Bond']
)
# Display the savings_df DataFrame
savings_df
```
#### Step 3: Use the `savings_df` DataFrame to plot a pie chart that visualizes the composition of the member’s portfolio. The y-axis of the pie chart uses `amount`. Be sure to add a title.
```
# Plot the total value of the member's portfolio (crypto and stock/bond) in a pie chart
savings_df.plot.pie(y='Amount', title="Portfolio Composition - 2020-08-07", figsize=(7,8))
```
#### Step 4: Using Python, determine if the current portfolio has enough to create an emergency fund as part of the member’s financial plan. Ideally, an emergency fund should equal to three times the member’s monthly income. To do this, implement the following steps:
Step 1. Create a variable named `emergency_fund_value`, and set it equal to three times the value of the member’s `monthly_income` of 12000. (You set this earlier in Part 1).
Step 2. Create a series of three if statements to determine if the member’s total portfolio is large enough to fund the emergency portfolio:
* If the total portfolio value is greater than the emergency fund value, display a message congratulating the member for having enough money in this fund.
* Else if the total portfolio value is equal to the emergency fund value, display a message congratulating the member on reaching this important financial goal.
* Else the total portfolio is less than the emergency fund value, so display a message showing how many dollars away the member is from reaching the goal. (Subtract the total portfolio value from the emergency fund value.)
##### Step 4-1: Create a variable named `emergency_fund_value`, and set it equal to three times the value of the member’s `monthly_income` of 12000. (You set this earlier in Part 1).
```
# Create a variable named emergency_fund_value
emergency_fund_value = monthly_income * 3
emergency_fund_value
```
##### Step 4-2: Create a series of three if statements to determine if the member’s total portfolio is large enough to fund the emergency portfolio:
* If the total portfolio value is greater than the emergency fund value, display a message congratulating the member for having enough money in this fund.
* Else if the total portfolio value is equal to the emergency fund value, display a message congratulating the member on reaching this important financial goal.
* Else the total portfolio is less than the emergency fund value, so display a message showing how many dollars away the member is from reaching the goal. (Subtract the total portfolio value from the emergency fund value.)
```
# Calculate how many dollars the member is from the emergency fund goal
# (total_portfolio, the crypto wallet plus the stock/bond holdings, was set in Part 1)
amt_from_goal = emergency_fund_value - total_portfolio
print(amt_from_goal)
# Evaluate the possibility of creating an emergency fund with 3 conditions:
if total_portfolio > emergency_fund_value:
    print("Congrats! You have enough money in this fund.")
elif total_portfolio == emergency_fund_value:
    print("Congrats! You have reached this important financial goal.")
else:
    print(f"You are ${amt_from_goal:.2f} away from reaching the goal.")
```
## Part 2: Create a Financial Planner for Retirement
### Create the Monte Carlo Simulation
In this section, you’ll use the MCForecastTools library to create a Monte Carlo simulation for the member’s savings portfolio. To do this, complete the following steps:
1. Make an API call via the Alpaca SDK to get 3 years of historical closing prices for a traditional 60/40 portfolio split: 60% stocks (SPY) and 40% bonds (AGG).
2. Run a Monte Carlo simulation of 500 samples and 30 years for the 60/40 portfolio, and then plot the results. The following image shows the overlay line plot resulting from a simulation with these characteristics. However, because a random number generator is used to run each live Monte Carlo simulation, your image will differ slightly from this exact image:
![A screenshot depicts the resulting plot.](Images/5-4-monte-carlo-line-plot.png)
3. Plot the probability distribution of the Monte Carlo simulation. The following image shows the histogram plot resulting from a simulation with these characteristics. However, because a random number generator is used to run each live Monte Carlo simulation, your image will differ slightly from this exact image:
![A screenshot depicts the histogram plot.](Images/5-4-monte-carlo-histogram.png)
4. Generate the summary statistics for the Monte Carlo simulation.
#### Step 1: Make an API call via the Alpaca SDK to get 3 years of historical closing prices for a traditional 60/40 portfolio split: 60% stocks (SPY) and 40% bonds (AGG).
```
# Set start and end dates of 3 years back from your current date
# Alternatively, you can use an end date of 2020-08-07 and work 3 years back from that date
start_date = pd.Timestamp("2017-08-07", tz="America/New_York").isoformat()
end_date = pd.Timestamp("2020-08-07", tz="America/New_York").isoformat()
# Set number of rows to 1000 to retrieve the maximum amount of rows
limit_rows = 1000
# Use the Alpaca get_barset function to make the API call to get the 3 years worth of pricing data
# The tickers and timeframe parameters should have been set in Part 1 of this activity
# The start and end dates should be updated with the information set above
# Remember to add the df property to the end of the call so the response is returned as a DataFrame
three_year_pricing = alpaca.get_barset(
tickers,
timeframe,
start=start_date,
end=end_date,
limit=limit_rows
).df
# Display both the first and last five rows of the DataFrame
three_year_pricing.head()
three_year_pricing.tail()
```
#### Step 2: Run a Monte Carlo simulation of 500 samples and 30 years for the 60/40 portfolio, and then plot the results.
```
# Configure the Monte Carlo simulation to forecast 30 years cumulative returns
# The weights should be split 40% to AGG and 60% to SPY.
# Run 500 samples.
MC_thirtyyear = MCSimulation(
portfolio_data = three_year_pricing,
weights = [.40,.60],
num_simulation = 500,
num_trading_days = 252*30
)
# Review the simulation input data
MC_thirtyyear.portfolio_data
# Run the Monte Carlo simulation to forecast 30 years cumulative returns
MC_thirtyyear.calc_cumulative_return()
# Visualize the 30-year Monte Carlo simulation by creating an
# overlay line plot
MC_simulation_lineplot = MC_thirtyyear.plot_simulation()
MC_simulation_lineplot.get_figure().savefig("MC_thirtyyear_sim_plot.png", bbox_inches="tight")
```
#### Step 3: Plot the probability distribution of the Monte Carlo simulation.
```
# Visualize the probability distribution of the 30-year Monte Carlo simulation
# by plotting a histogram
MC_distribution_plot = MC_thirtyyear.plot_distribution()
MC_distribution_plot.get_figure().savefig("MC_thirtyyear_dist_plot.png", bbox_inches="tight")
```
#### Step 4: Generate the summary statistics for the Monte Carlo simulation.
```
# Generate summary statistics from the 30-year Monte Carlo simulation results
# Save the results as a variable
MC_summary_statistics = MC_thirtyyear.summarize_cumulative_return()
# Review the 30-year Monte Carlo summary statistics
MC_summary_statistics
```
### Analyze the Retirement Portfolio Forecasts
Using the current value of only the stock and bond portion of the member's portfolio and the summary statistics that you generated from the Monte Carlo simulation, answer the following question in your Jupyter notebook:
- What are the lower and upper bounds for the expected value of the portfolio with a 95% confidence interval?
```
# Print the current balance of the stock and bond portion of the members portfolio
total_stocks_bonds
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes for the current stock/bond portfolio
ci_lower_thirty_cumulative_return = MC_summary_statistics[8] * total_stocks_bonds
ci_upper_thirty_cumulative_return = MC_summary_statistics[9] * total_stocks_bonds
# Print the result of your calculations
print(f"There is a 95% chance that an initial investment of ${total_stocks_bonds:,.2f} in the portfolio"
      f" over the next 30 years will end within the range of"
      f" ${ci_lower_thirty_cumulative_return:,.2f} and ${ci_upper_thirty_cumulative_return:,.2f}.")
```
### Forecast Cumulative Returns in 10 Years
The CTO of the credit union is impressed with your work on these planning tools but wonders if 30 years is a long time to wait until retirement. So, your next task is to adjust the retirement portfolio and run a new Monte Carlo simulation to find out if the changes will allow members to retire earlier.
For this new Monte Carlo simulation, do the following:
- Forecast the cumulative returns for 10 years from now. Because of the shortened investment horizon (30 years to 10 years), the portfolio needs to invest more heavily in the riskier asset—that is, stock—to help accumulate wealth for retirement.
- Adjust the weights of the retirement portfolio so that the composition for the Monte Carlo simulation consists of 20% bonds and 80% stocks.
- Run the simulation over 500 samples, and use the same data that the API call to Alpaca generated.
- Based on the new Monte Carlo simulation, answer the following questions in your Jupyter notebook:
- Using the current value of only the stock and bond portion of the member's portfolio and the summary statistics that you generated from the new Monte Carlo simulation, what are the lower and upper bounds for the expected value of the portfolio (with the new weights) with a 95% confidence interval?
- Will weighting the portfolio more heavily toward stocks allow the credit union members to retire after only 10 years?
```
# Configure a Monte Carlo simulation to forecast 10 years cumulative returns
# The weights should be split 20% to AGG and 80% to SPY.
# Run 500 samples.
MC_tenyear = MCSimulation(
portfolio_data = three_year_pricing,
weights = [.20,.80],
num_simulation = 500,
num_trading_days = 252*10
)
# Review the simulation input data
MC_tenyear.portfolio_data
# Run the Monte Carlo simulation to forecast 10 years cumulative returns
MC_tenyear.calc_cumulative_return()
# Visualize the 10-year Monte Carlo simulation by creating an
# overlay line plot
MC_simulation_lineplot = MC_tenyear.plot_simulation()
MC_simulation_lineplot.get_figure().savefig("MC_tenyear_sim_plot.png", bbox_inches="tight")
# Visualize the probability distribution of the 10-year Monte Carlo simulation
# by plotting a histogram
MC_distribution_plot = MC_tenyear.plot_distribution()
MC_distribution_plot.get_figure().savefig("MC_tenyear_dist_plot.png", bbox_inches="tight")
# Generate summary statistics from the 10-year Monte Carlo simulation results
# Save the results as a variable
MC_summary_statistics = MC_tenyear.summarize_cumulative_return()
# Review the 10-year Monte Carlo summary statistics
MC_summary_statistics
```
### Answer the following questions:
#### Question: Using the current value of only the stock and bond portion of the member's portfolio and the summary statistics that you generated from the new Monte Carlo simulation, what are the lower and upper bounds for the expected value of the portfolio (with the new weights) with a 95% confidence interval?
```
# Print the current balance of the stock and bond portion of the members portfolio
total_stocks_bonds
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes for the current stock/bond portfolio
ci_lower_ten_cumulative_return = MC_summary_statistics[8] * total_stocks_bonds
ci_upper_ten_cumulative_return = MC_summary_statistics[9] * total_stocks_bonds
# Print the result of your calculations
print(f"There is a 95% chance that an initial investment of ${total_stocks_bonds:,.2f} in the portfolio"
      f" over the next 10 years will end within the range of"
      f" ${ci_lower_ten_cumulative_return:,.2f} and ${ci_upper_ten_cumulative_return:,.2f}.")
```
#### Question: Will weighting the portfolio more heavily to stocks allow the credit union members to retire after only 10 years?
| github_jupyter | # Import the required libraries and dependencies
import os
import requests
import json
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
%matplotlib inline
# Load the environment variables from the .env file
#by calling the load_dotenv function
load_dotenv()
# The current number of coins for each cryptocurrency asset held in the portfolio.
BTC_coins = 1.2
ETH_coins = 5.3
# The monthly amount for the member's household income
monthly_income = 12000
# The Free Crypto API Call endpoint URLs for the held cryptocurrency assets
BTC_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=USD"
ETH_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=USD"
# Using the Python requests library, make an API call to access the current price of BTC
BTC_response = requests.get(BTC_url).json()
# Use the json.dumps function to review the response data from the API call
# Use the indent and sort_keys parameters to make the response object readable
print(json.dumps(BTC_response, indent=4, sort_keys=True))
# Using the Python requests library, make an API call to access the current price ETH
ETH_response = requests.get(ETH_url).json()
# Use the json.dumps function to review the response data from the API call
# Use the indent and sort_keys parameters to make the response object readable
print(json.dumps(ETH_response, indent=4, sort_keys=True))
# Navigate the BTC response object to access the current price of BTC
BTC_price = BTC_response['data']['1']['quotes']['USD']['price']
# Print the current price of BTC
print(f"The current price of Bitcoin is ${BTC_price}")
# Navigate the ETH response object to access the current price of ETH
ETH_price = ETH_response['data']['1027']['quotes']['USD']['price']
# Print the current price of ETH
print(f"The current price of Ethereum is ${ETH_price}")
# Compute the current value of the BTC holding
BTC_value = BTC_price * BTC_coins
# Print current value of your holding in BTC
print(f"The current value of Bitcoin is ${BTC_value}")
# Compute the current value of the ETH holding
ETH_value = ETH_price * ETH_coins
# Print current value of your holding in ETH
print(f"The current value of Ethereum is ${ETH_value}")
# Compute the total value of the cryptocurrency wallet
# Add the value of the BTC holding to the value of the ETH holding
crypto_wallet_value = BTC_value + ETH_value
# Print current cryptocurrency wallet balance
print(f"The current cryptocurrency wallet balance is ${crypto_wallet_value}")
# Current amount of shares held in both the stock (SPY) and bond (AGG) portion of the portfolio.
SPY_shares = 110
AGG_shares = 200
# Set the variables for the Alpaca API and secret keys
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
# Create the Alpaca tradeapi.REST object
alpaca = tradeapi.REST(
alpaca_api_key,
alpaca_secret_key,
api_version="v2")
# Set the tickers for both the bond and stock portion of the portfolio
tickers = ["AGG", "SPY"]
# Set timeframe to 1D
timeframe = "1D"
# Format current date as ISO format
# Set both the start and end date at the date of your prior weekday
# This will give you the closing price of the previous trading day
# Alternatively you can use a start and end date of 2020-08-07
start_date = pd.Timestamp("2020-08-07")
end_date = pd.Timestamp("2020-08-07")
# Use the Alpaca get_barset function to get the current closing prices for the portfolio
# Be sure to set the `df` property after the function to format the response object as a DataFrame
portfolio_closing_prices = alpaca.get_barset(
tickers,
timeframe,
start = start_date,
end = end_date
).df
# Review the first 5 rows of the Alpaca DataFrame
portfolio_closing_prices.head()
# Access the closing price for AGG from the Alpaca DataFrame
# Converting the value to a floating point number
AGG_close_price = float(portfolio_closing_prices["AGG"]["close"][0])
# Print the AGG closing price
print(AGG_close_price)
# Access the closing price for SPY from the Alpaca DataFrame
# Converting the value to a floating point number
SPY_close_price = float(portfolio_closing_prices["SPY"]["close"][0])
# Print the SPY closing price
print(SPY_close_price)
# Calculate the current value of the bond portion of the portfolio
AGG_value = AGG_close_price * AGG_shares
# Print the current value of the bond portfolio
print(f"The current value of the bond portfolio is ${AGG_value}")
# Calculate the current value of the stock portion of the portfolio
SPY_value = SPY_close_price * SPY_shares
# Print the current value of the stock portfolio
print(f"The current value of the stock portfolio is ${SPY_value}")
# Calculate the total value of the stock and bond portion of the portfolio
total_stocks_bonds = SPY_value + AGG_value
# Print the current balance of the stock and bond portion of the portfolio
print(f"The current balance of the stock and bond portion is ${total_stocks_bonds}")
# Calculate the total value of the member's entire savings portfolio
# Add the value of the cryptocurrency wallet to the value of the total stocks and bonds
total_portfolio = crypto_wallet_value + total_stocks_bonds
# Print the total value of the member's entire savings portfolio
print(f"The total value of the member's entire savings portfolio is ${total_portfolio}")
# Consolidate financial assets data into a Python list
savings_data = [crypto_wallet_value, total_stocks_bonds]
# Review the Python list savings_data
savings_data
# Create a Pandas DataFrame called savings_df
savings_df = pd.DataFrame(
{'Amount': [crypto_wallet_value, total_stocks_bonds]},
index=['Crypto', 'Stock/Bond']
)
# Display the savings_df DataFrame
savings_df
# Plot the total value of the member's portfolio (crypto and stock/bond) in a pie chart
savings_df.plot.pie(y='Amount', title="Portfolio Composition - 2020-08-07", figsize=(7,8))
# Create a variable named emergency_fund_value
emergency_fund_value = monthly_income * 3
emergency_fund_value
# Calculate how many dollars the member is from the emergency fund goal
# (total_portfolio, the crypto wallet plus the stock/bond holdings, was set above)
amt_from_goal = emergency_fund_value - total_portfolio
print(amt_from_goal)
# Evaluate the possibility of creating an emergency fund with 3 conditions:
if total_portfolio > emergency_fund_value:
    print("Congrats! You have enough money in this fund.")
elif total_portfolio == emergency_fund_value:
    print("Congrats! You have reached this important financial goal.")
else:
    print(f"You are ${amt_from_goal:.2f} away from reaching the goal.")
# Set start and end dates of 3 years back from your current date
# Alternatively, you can use an end date of 2020-08-07 and work 3 years back from that date
start_date = pd.Timestamp("2017-08-07", tz="America/New_York").isoformat()
end_date = pd.Timestamp("2020-08-07", tz="America/New_York").isoformat()
# Set number of rows to 1000 to retrieve the maximum amount of rows
limit_rows = 1000
# Use the Alpaca get_barset function to make the API call to get the 3 years worth of pricing data
# The tickers and timeframe parameters should have been set in Part 1 of this activity
# The start and end dates should be updated with the information set above
# Remember to add the df property to the end of the call so the response is returned as a DataFrame
three_year_pricing = alpaca.get_barset(
tickers,
timeframe,
start=start_date,
end=end_date,
limit=limit_rows
).df
# Display both the first and last five rows of the DataFrame
three_year_pricing.head()
three_year_pricing.tail()
# Configure the Monte Carlo simulation to forecast 30 years cumulative returns
# The weights should be split 40% to AGG and 60% to SPY.
# Run 500 samples.
MC_thirtyyear = MCSimulation(
portfolio_data = three_year_pricing,
weights = [.40,.60],
num_simulation = 500,
num_trading_days = 252*30
)
# Review the simulation input data
MC_thirtyyear.portfolio_data
# Run the Monte Carlo simulation to forecast 30 years cumulative returns
MC_thirtyyear.calc_cumulative_return()
# Visualize the 30-year Monte Carlo simulation by creating an
# overlay line plot
MC_simulation_lineplot = MC_thirtyyear.plot_simulation()
MC_simulation_lineplot.get_figure().savefig("MC_thirtyyear_sim_plot.png", bbox_inches="tight")
# Visualize the probability distribution of the 30-year Monte Carlo simulation
# by plotting a histogram
MC_distribution_plot = MC_thirtyyear.plot_distribution()
MC_distribution_plot.get_figure().savefig("MC_thirtyyear_dist_plot.png", bbox_inches="tight")
# Generate summary statistics from the 30-year Monte Carlo simulation results
# Save the results as a variable
MC_summary_statistics = MC_thirtyyear.summarize_cumulative_return()
# Review the 30-year Monte Carlo summary statistics
MC_summary_statistics
# Print the current balance of the stock and bond portion of the members portfolio
total_stocks_bonds
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes for the current stock/bond portfolio
ci_lower_thirty_cumulative_return = MC_summary_statistics[8] * total_stocks_bonds
ci_upper_thirty_cumulative_return = MC_summary_statistics[9] * total_stocks_bonds
# Print the result of your calculations
print(f"There is a 95% chance that an initial investment of ${total_stocks_bonds:,.2f} in the portfolio"
      f" over the next 30 years will end within the range of"
      f" ${ci_lower_thirty_cumulative_return:,.2f} and ${ci_upper_thirty_cumulative_return:,.2f}.")
# Configure a Monte Carlo simulation to forecast 10 years cumulative returns
# The weights should be split 20% to AGG and 80% to SPY.
# Run 500 samples.
MC_tenyear = MCSimulation(
portfolio_data = three_year_pricing,
weights = [.20,.80],
num_simulation = 500,
num_trading_days = 252*10
)
# Review the simulation input data
MC_tenyear.portfolio_data
# Run the Monte Carlo simulation to forecast 10 years cumulative returns
MC_tenyear.calc_cumulative_return()
# Visualize the 10-year Monte Carlo simulation by creating an
# overlay line plot
MC_simulation_lineplot = MC_tenyear.plot_simulation()
MC_simulation_lineplot.get_figure().savefig("MC_tenyear_sim_plot.png", bbox_inches="tight")
# Visualize the probability distribution of the 10-year Monte Carlo simulation
# by plotting a histogram
MC_distribution_plot = MC_tenyear.plot_distribution()
MC_distribution_plot.get_figure().savefig("MC_tenyear_dist_plot.png", bbox_inches="tight")
# Generate summary statistics from the 10-year Monte Carlo simulation results
# Save the results as a variable
MC_summary_statistics = MC_tenyear.summarize_cumulative_return()
# Review the 10-year Monte Carlo summary statistics
MC_summary_statistics
# Print the current balance of the stock and bond portion of the members portfolio
total_stocks_bonds
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes for the current stock/bond portfolio
ci_lower_ten_cumulative_return = MC_summary_statistics[8] * total_stocks_bonds
ci_upper_ten_cumulative_return = MC_summary_statistics[9] * total_stocks_bonds
# Print the result of your calculations
print(f"There is a 95% chance that an initial investment of ${total_stocks_bonds:,.2f} in the portfolio"
      f" over the next 10 years will end within the range of"
      f" ${ci_lower_ten_cumulative_return:,.2f} and ${ci_upper_ten_cumulative_return:,.2f}.") | 0.729327 | 0.989254 |
```
import statsmodels.formula.api as smf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
indicepanel = pd.read_csv('../data/indice/indicepanel.csv', index_col=0, parse_dates=True)
indicepanel.head()
Train = indicepanel.iloc[-2000:-1000, :]
Test = indicepanel.iloc[-1000:, :]
formula = 'spy~spy_lag1+sp500+nasdaq+dji+cac40+aord+daxi+nikkei+hsi'
lm = smf.ols(formula=formula, data=Train).fit()
Train['PredictedY'] = lm.predict(Train)
Test['PredictedY'] = lm.predict(Test)
```
# Profit of Signal-based strategy
```
# Train
Train['Order'] = [1 if sig>0 else -1 for sig in Train['PredictedY']]
Train['Profit'] = Train['spy'] * Train['Order']
Train['Wealth'] = Train['Profit'].cumsum()
print('Total profit made in Train: ', Train['Profit'].sum())
plt.figure(figsize=(10, 10))
plt.title('Performance of Strategy in Train')
plt.plot(Train['Wealth'].values, color='green', label='Signal based strategy')
plt.plot(Train['spy'].cumsum().values, color='red', label='Buy and Hold strategy')
plt.legend()
plt.show()
# Test
Test['Order'] = [1 if sig>0 else -1 for sig in Test['PredictedY']]
Test['Profit'] = Test['spy'] * Test['Order']
Test['Wealth'] = Test['Profit'].cumsum()
print('Total profit made in Test: ', Test['Profit'].sum())
plt.figure(figsize=(10, 10))
plt.title('Performance of Strategy in Test')
plt.plot(Test['Wealth'].values, color='green', label='Signal based strategy')
plt.plot(Test['spy'].cumsum().values, color='red', label='Buy and Hold strategy')
plt.legend()
plt.show()
```
# Evaluation of model - Practical Standards
We introduce two common practical standards - **Sharpe Ratio** and **Maximum Drawdown** - to evaluate our model's performance.
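For reference, the quantities computed in the cells below are, with $r_t$ the daily log return of the wealth curve and $\bar{r}$, $\sigma_r$ its sample mean and standard deviation:

$$\text{Sharpe}_{daily} = \frac{\bar{r}}{\sigma_r}, \qquad \text{Sharpe}_{yearly} = \sqrt{252}\,\frac{\bar{r}}{\sigma_r}, \qquad \text{Max Drawdown} = \max_t \frac{\text{Peak}_t - \text{Wealth}_t}{\text{Peak}_t}$$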
```
Train['Wealth'] = Train['Wealth'] + Train.loc[Train.index[0], 'Price']
Test['Wealth'] = Test['Wealth'] + Test.loc[Test.index[0], 'Price']
# Sharpe Ratio on Train data
Train['Return'] = np.log(Train['Wealth']) - np.log(Train['Wealth'].shift(1))
dailyr = Train['Return'].dropna()
print('Daily Sharpe Ratio is ', dailyr.mean()/dailyr.std(ddof=1))
print('Yearly Sharpe Ratio is ', (252**0.5)*dailyr.mean()/dailyr.std(ddof=1))
# Sharpe Ratio in Test data
Test['Return'] = np.log(Test['Wealth']) - np.log(Test['Wealth'].shift(1))
dailyr = Test['Return'].dropna()
print('Daily Sharpe Ratio is ', dailyr.mean()/dailyr.std(ddof=1))
print('Yearly Sharpe Ratio is ', (252**0.5)*dailyr.mean()/dailyr.std(ddof=1))
# Maximum Drawdown in Train data
Train['Peak'] = Train['Wealth'].cummax()
Train['Drawdown'] = (Train['Peak'] - Train['Wealth'])/Train['Peak']
print('Maximum Drawdown in Train is ', Train['Drawdown'].max())
# Maximum Drawdown in Test data
Test['Peak'] = Test['Wealth'].cummax()
Test['Drawdown'] = (Test['Peak'] - Test['Wealth'])/Test['Peak']
print('Maximum Drawdown in Test is ', Test['Drawdown'].max())
```
| github_jupyter | import statsmodels.formula.api as smf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
indicepanel = pd.read_csv('../data/indice/indicepanel.csv', index_col=0, parse_dates=True)
indicepanel.head()
Train = indicepanel.iloc[-2000:-1000, :]
Test = indicepanel.iloc[-1000:, :]
formula = 'spy~spy_lag1+sp500+nasdaq+dji+cac40+aord+daxi+nikkei+hsi'
lm = smf.ols(formula=formula, data=Train).fit()
Train['PredictedY'] = lm.predict(Train)
Test['PredictedY'] = lm.predict(Test)
# Train
Train['Order'] = [1 if sig>0 else -1 for sig in Train['PredictedY']]
Train['Profit'] = Train['spy'] * Train['Order']
Train['Wealth'] = Train['Profit'].cumsum()
print('Total profit made in Train: ', Train['Profit'].sum())
plt.figure(figsize=(10, 10))
plt.title('Performance of Strategy in Train')
plt.plot(Train['Wealth'].values, color='green', label='Signal based strategy')
plt.plot(Train['spy'].cumsum().values, color='red', label='Buy and Hold strategy')
plt.legend()
plt.show()
# Test
Test['Order'] = [1 if sig>0 else -1 for sig in Test['PredictedY']]
Test['Profit'] = Test['spy'] * Test['Order']
Test['Wealth'] = Test['Profit'].cumsum()
print('Total profit made in Test: ', Test['Profit'].sum())
plt.figure(figsize=(10, 10))
plt.title('Performance of Strategy in Test')
plt.plot(Test['Wealth'].values, color='green', label='Signal based strategy')
plt.plot(Test['spy'].cumsum().values, color='red', label='Buy and Hold strategy')
plt.legend()
plt.show()
Train['Wealth'] = Train['Wealth'] + Train.loc[Train.index[0], 'Price']
Test['Wealth'] = Test['Wealth'] + Test.loc[Test.index[0], 'Price']
# Sharpe Ratio on Train data
Train['Return'] = np.log(Train['Wealth']) - np.log(Train['Wealth'].shift(1))
dailyr = Train['Return'].dropna()
print('Daily Sharpe Ratio is ', dailyr.mean()/dailyr.std(ddof=1))
print('Yearly Sharpe Ratio is ', (252**0.5)*dailyr.mean()/dailyr.std(ddof=1))
# Sharpe Ratio in Test data
Test['Return'] = np.log(Test['Wealth']) - np.log(Test['Wealth'].shift(1))
dailyr = Test['Return'].dropna()
print('Daily Sharpe Ratio is ', dailyr.mean()/dailyr.std(ddof=1))
print('Yearly Sharpe Ratio is ', (252**0.5)*dailyr.mean()/dailyr.std(ddof=1))
# Maximum Drawdown in Train data
Train['Peak'] = Train['Wealth'].cummax()
Train['Drawdown'] = (Train['Peak'] - Train['Wealth'])/Train['Peak']
print('Maximum Drawdown in Train is ', Train['Drawdown'].max())
# Maximum Drawdown in Test data
Test['Peak'] = Test['Wealth'].cummax()
Test['Drawdown'] = (Test['Peak'] - Test['Wealth'])/Test['Peak']
print('Maximum Drawdown in Test is ', Test['Drawdown'].max()) | 0.393152 | 0.77535 |
<a href="https://colab.research.google.com/github/chadeowen/DS-Sprint-03-Creating-Professional-Portfolios/blob/master/ChadOwen_DS_Unit_1_Sprint_Challenge_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Science Unit 1 Sprint Challenge 3
# Creating Professional Portfolios
For your Sprint Challenge, you will **write about your upcoming [data storytelling portfolio project](https://learn.lambdaschool.com/ds/module/recedjanlbpqxic2r)**.
(Don't worry, you don't have to choose your final idea now. For this challenge, you can write about any idea you're considering.)
# Part 1
**Describe an idea** you could work on for your upcoming data storytelling project. What's your hypothesis?
#### Write a [lede](https://www.thoughtco.com/how-to-write-a-great-lede-2074346) (first paragraph)
- Put the bottom line up front.
- Use 60 words or fewer. (The [Hemingway App](http://www.hemingwayapp.com/) gives you word count.)
[This is hard](https://quoteinvestigator.com/2012/04/28/shorter-letter/), but you can do it!
#### Stretch goals
- Write more about your idea. Tell us what the story's about. Show us why it's interesting. Continue to follow the inverted pyramid structure.
- Improve your readability. Post your "before & after" scores from the Hemingway App.
- Who what where when why = States, Population Growth Rates, USA, From Civil War until now, Multitude of factors
- I want my title to **pop**... maybe something like: '_A Second Civil War is Approaching; Why The North Might Be in More Trouble Than it Thinks_'
- Controversial, I'm aware... Must tread lightly and be cautious of language, I'm also aware...
- Lede: Population growth rates in Southern States have significantly outpaced those in Northern States since the Civil War; how __*might*__ this affect the current political divide?
# Part 2
#### Find sources
- Link to at least 2 relevant sources for your topic. Sources could include any data or writing about your topic.
- Use [Markdown](https://commonmark.org/help/) to format your links.
- Summarize each source in 1-2 sentences.
#### Stretch goals
- Find more sources.
- Use Markdown to add images from your sources.
- Critically evaluate your sources in writing.
[Facts and Trends Article](https://factsandtrends.net/2018/01/18/southern-states-continue-outpace-rest-u-s-population-growth/)
- Numerical Growth top ten list and Percentage Growth top ten list
[Bloomberg Opinion](https://www.bloomberg.com/opinion/articles/2018-01-04/america-s-heartland-has-moved-to-the-south-and-west)
- Opinion piece on Southern and Western Population Growth
- Some great visuals to keep in mind when making my report
# Part 3
#### Plan your next steps
- Describe at least 2 actions you'd take to get started with your project.
- Use Markdown headings and lists to organize your plan.
#### Stretch goals
- Add detail to your plan.
- Publish your project proposal on your GitHub Pages site.
# Story
- The story I want to tell with the data is the most important part. Although I do not intend to be a data scientist with 'clickbait' or 'attention grabbing' reports, I do not mind using those practices given time constraints (project timelines). After collecting regional growth rates and state growth rates (speaking strictly population), I will paint a picture as to *why* the shifts are occurring and *how* this could be a factor given today's two-party divide.
## Data
- The next step is to find the data. I'm finding that it's a bit trickier to track down historical census data (every ten years) and couple that with census estimates (yearly) in a clean csv file, but I should be able to build my own if worst comes to worst. I have a broad idea of what the data will look like, but this weekend and next is when I'll spend time finding it.
## Data Viz
- Last action is choosing my data visualizations. I intend to provide different visualizations to portray multiple perspectives. The Bloomberg article linked above looks helpful, but I will select data viz's that support my 'story'
| github_jupyter | <a href="https://colab.research.google.com/github/chadeowen/DS-Sprint-03-Creating-Professional-Portfolios/blob/master/ChadOwen_DS_Unit_1_Sprint_Challenge_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Science Unit 1 Sprint Challenge 3
# Creating Professional Portfolios
For your Sprint Challenge, you will **write about your upcoming [data storytelling portfolio project](https://learn.lambdaschool.com/ds/module/recedjanlbpqxic2r)**.
(Don't worry, you don't have to choose your final idea now. For this challenge, you can write about any idea you're considering.)
# Part 1
**Describe an idea** you could work on for your upcoming data storytelling project. What's your hypothesis?
#### Write a [lede](https://www.thoughtco.com/how-to-write-a-great-lede-2074346) (first paragraph)
- Put the bottom line up front.
- Use 60 words or fewer. (The [Hemingway App](http://www.hemingwayapp.com/) gives you word count.)
[This is hard](https://quoteinvestigator.com/2012/04/28/shorter-letter/), but you can do it!
#### Stretch goals
- Write more about your idea. Tell us what the story's about. Show us why it's interesting. Continue to follow the inverted pyramid structure.
- Improve your readability. Post your "before & after" scores from the Hemingway App.
- Who what where when why = States, Population Growth Rates, USA, From Civil War until now, Multitude of factors
- I want my title to **pop**... maybe something like: '_A Second Civil War is Approaching; Why The North Might Be in More Trouble Than it Thinks_'
- Controversial, I'm aware... Must tread lightly and be cautious of language, I'm also aware...
- Lede: Population growth rates in Southern States have significantly outpaced those in Northern States since the Civil War; how __*might*__ this affect the current political divide?
# Part 2
#### Find sources
- Link to at least 2 relevant sources for your topic. Sources could include any data or writing about your topic.
- Use [Markdown](https://commonmark.org/help/) to format your links.
- Summarize each source in 1-2 sentences.
#### Stretch goals
- Find more sources.
- Use Markdown to add images from your sources.
- Critically evaluate your sources in writing.
[Facts and Trends Article](https://factsandtrends.net/2018/01/18/southern-states-continue-outpace-rest-u-s-population-growth/)
- Numerical Growth top ten list and Percentage Growth top ten list
[Bloomberg Opinion](https://www.bloomberg.com/opinion/articles/2018-01-04/america-s-heartland-has-moved-to-the-south-and-west)
- Opinion piece on Southern and Western Population Growth
- Some great visuals to keep in mind when making my report
# Part 3
#### Plan your next steps
- Describe at least 2 actions you'd take to get started with your project.
- Use Markdown headings and lists to organize your plan.
#### Stretch goals
- Add detail to your plan.
- Publish your project proposal on your GitHub Pages site.
# Story
- The story I want to tell with the data is the most important part. Although I do not intend to be a data scientist with 'clickbait' or 'attention grabbing' reports, I do not mind using those practices given time constraints (project timelines). After collecting regional growth rates and state growth rates (speaking strictly population), I will paint a picture as to *why* the shifts are occurring and *how* this could be a factor given today's two-party divide.
## Data
- The next step is to find the data. I'm finding that it's a bit trickier to track down historical census data (every ten years) and couple that with census estimates (yearly) in a clean csv file, but I should be able to build my own if worst comes to worst. I have a broad idea of what the data will look like, but this weekend and next is when I'll spend time finding it.
## Data Viz
- Last action is choosing my data visualizations. I intend to provide different visualizations to portray multiple perspectives. The Bloomberg article linked above looks helpful, but I will select data viz's that support my 'story'
| 0.63273 | 0.747017 |
# Grid Search
Let's incorporate grid search into your modeling process. To start, include an import statement for `GridSearchCV` below.
```
import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
```
### View parameters in pipeline
Before modifying your build_model method to include grid search, view the parameters in your pipeline here.
```
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', RandomForestClassifier())
])
pipeline.get_params()
```
### Modify your `build_model` function to return a GridSearchCV object.
Try to grid search some parameters in your data transformation steps as well as those for your classifier! Browse the parameters you can search above.
```
def build_model():
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', RandomForestClassifier())
])
# specify parameters for grid search
parameters = {
'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'features__text_pipeline__vect__max_df': (0.5, 0.75, 1.0),
'features__text_pipeline__vect__max_features': (None, 5000, 10000),
'features__text_pipeline__tfidf__smooth_idf': (True, False),
'features__text_pipeline__tfidf__use_idf': (True, False),
'clf__max_depth': (None, 300, 500),
'clf__n_estimators': [50, 100, 200],
'clf__min_samples_split': [2, 3, 4],
'features__transformer_weights': (
{'text_pipeline': 1, 'starting_verb': 0.5},
{'text_pipeline': 0.5, 'starting_verb': 1},
{'text_pipeline': 0.8, 'starting_verb': 1},
)
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters, scoring='accuracy', verbose=0, n_jobs=-1)
return cv
```
### Run program to test
Running grid search can take a while, especially if you are searching over a lot of parameters! If you want to reduce it to a few minutes, try commenting out some of your parameters to grid search over just 1 or 2 parameters with a small number of values each. Once you know that works, feel free to add more parameters and see how well your final model can perform! You can try this out in the next page.
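For example, here is a minimal sketch of a trimmed-down search space (the parameter names mirror those in `build_model` above; the specific values and variable names are only illustrative):

```
# A much smaller grid: 2 x 2 = 4 candidate combinations instead of thousands
small_parameters = {
    'clf__n_estimators': [50, 100],
    'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
}
cv_small = GridSearchCV(pipeline, param_grid=small_parameters, scoring='accuracy', n_jobs=-1)
```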
```
def load_data():
df = pd.read_csv('corporate_messaging.csv', encoding='latin-1')
df = df[(df["category:confidence"] == 1) & (df['category'] != 'Exclude')]
X = df.text.values
y = df.category.values
return X, y
def display_results(cv, y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
print("\nBest Parameters:", cv.best_params_)
def main():
X, y = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y)
model = build_model()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
display_results(model, y_test, y_pred)
%%time
main()
```
| github_jupyter | import nltk
nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger'])
import re
import numpy as np
import pandas as pd
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
def tokenize(text):
detected_urls = re.findall(url_regex, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
clean_tokens = []
for tok in tokens:
clean_tok = lemmatizer.lemmatize(tok).lower().strip()
clean_tokens.append(clean_tok)
return clean_tokens
class StartingVerbExtractor(BaseEstimator, TransformerMixin):
def starting_verb(self, text):
sentence_list = nltk.sent_tokenize(text)
for sentence in sentence_list:
pos_tags = nltk.pos_tag(tokenize(sentence))
first_word, first_tag = pos_tags[0]
if first_tag in ['VB', 'VBP'] or first_word == 'RT':
return True
return False
def fit(self, x, y=None):
return self
def transform(self, X):
X_tagged = pd.Series(X).apply(self.starting_verb)
return pd.DataFrame(X_tagged)
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', RandomForestClassifier())
])
pipeline.get_params()
def build_model():
pipeline = Pipeline([
('features', FeatureUnion([
('text_pipeline', Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer())
])),
('starting_verb', StartingVerbExtractor())
])),
('clf', RandomForestClassifier())
])
# specify parameters for grid search
parameters = {
'features__text_pipeline__vect__ngram_range': ((1, 1), (1, 2)),
'features__text_pipeline__vect__max_df': (0.5, 0.75, 1.0),
'features__text_pipeline__vect__max_features': (None, 5000, 10000),
'features__text_pipeline__tfidf__smooth_idf': (True, False),
'features__text_pipeline__tfidf__use_idf': (True, False),
'clf__max_depth': (None, 300, 500),
'clf__n_estimators': [50, 100, 200],
'clf__min_samples_split': [2, 3, 4],
'features__transformer_weights': (
{'text_pipeline': 1, 'starting_verb': 0.5},
{'text_pipeline': 0.5, 'starting_verb': 1},
{'text_pipeline': 0.8, 'starting_verb': 1},
)
}
# create grid search object
cv = GridSearchCV(pipeline, param_grid=parameters, scoring='accuracy', verbose=0, n_jobs=-1)
return cv
def load_data():
df = pd.read_csv('corporate_messaging.csv', encoding='latin-1')
df = df[(df["category:confidence"] == 1) & (df['category'] != 'Exclude')]
X = df.text.values
y = df.category.values
return X, y
def display_results(cv, y_test, y_pred):
labels = np.unique(y_pred)
confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)
accuracy = (y_pred == y_test).mean()
print("Labels:", labels)
print("Confusion Matrix:\n", confusion_mat)
print("Accuracy:", accuracy)
print("\nBest Parameters:", cv.best_params_)
def main():
X, y = load_data()
X_train, X_test, y_train, y_test = train_test_split(X, y)
model = build_model()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
display_results(model, y_test, y_pred)
%%time
main() | 0.583085 | 0.790369 |
# Aries Basic Controller Example - Alice
# DID Exchange - Inviter
In this notebook we'll be initiating the aries [DID Exchange](https://github.com/hyperledger/aries-rfcs/tree/master/features/0023-did-exchange) protocol using the aries_basic_controller package.
This notebook has the following phases:
1. Pull in dependencies
2. Instantiate the controller for the aries agent (See the docker-compose.yml)
3. Set up a listener for basicmessages events emitted by the controller when it receives a webhook
4. Use the controller to create an invitation from our agent
5. Copy the invitation output from 4 and move over to Bob's [notebook](http://localhost:8889)
<b>Carry on in Bob's notebook</b>
12. Accept Request for Connection
13. Send Trust Ping to Bob
### 1. Pull in dependencies
```
%autoawait
import time
import asyncio
```
### 2. Instatiate the controller for our Agent
The arguments depend on how the aca-py agent was initiated. See the manage and docker-compose.yml files for more details.
```
from aries_basic_controller.aries_controller import AriesAgentController
WEBHOOK_HOST = "0.0.0.0"
WEBHOOK_PORT = 8022
WEBHOOK_BASE = ""
ADMIN_URL = "http://alice-agent:8021"
# Based on the aca-py agent you wish to control
agent_controller = AriesAgentController(webhook_host=WEBHOOK_HOST, webhook_port=WEBHOOK_PORT,
webhook_base=WEBHOOK_BASE, admin_url=ADMIN_URL, connections=True)
```
### 3. Listen for webhooks and register default listeners
Every time a webhook is received from the agent, the controller re-emits the hook using PyPubSub. The default listeners are used to update state and print logs.
```
loop = asyncio.get_event_loop()
loop.create_task(agent_controller.listen_webhooks())
agent_controller.register_listeners([], defaults=True)
```
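If you also want to react to connection-state changes yourself, a custom listener could be passed in place of the empty list in the cell above. The sketch below follows the `{"topic": ..., "handler": ...}` pattern used in other aries_basic_controller tutorials; the payload fields shown (`connection_id`, `state`) are assumptions based on standard aca-py connection webhooks:

```
def connections_handler(payload):
    # Called every time the agent posts a "connections" webhook
    print("Connection webhook received")
    print("Connection ID:", payload.get("connection_id"))
    print("State:", payload.get("state"))

connection_listener = {"topic": "connections", "handler": connections_handler}
# e.g. agent_controller.register_listeners([connection_listener], defaults=True)
```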
### 4. Use the controller to create an invitation from our agent
```
# Create Invitation
invite = await agent_controller.connections.create_invitation()
connection_id = invite["connection_id"]
print("Connection ID", connection_id)
print("Invitation")
print(invite)
```
### 5. Copy the invitation output from 4 and move over to the Bob notebook (localhost:8889)
### 11. Accept Request for Connection
```
# Accept Request for Invite created
connection = await agent_controller.connections.accept_request(connection_id)
print("ACCEPT REQUEST")
print(connection)
print("state", connection["state"])
```
### 12. Send Trust Ping to activate the connection
```
trust_ping = await agent_controller.messaging.trust_ping(connection_id, "hello")
print("Trust Ping", trust_ping)
```
#### 12.1 Check if connection is active
```
connection = await agent_controller.connections.get_connection(connection_id)
print(connection)
print("Is Active?", connection["state"])
```
## End of Tutorial
#### Terminate Controller & Stop Webhook Server
**Note: You will need to run this command when combining this example with others such as Issuer**
```
response = await agent_controller.terminate()
print(response)
```
| github_jupyter | %autoawait
import time
import asyncio
from aries_basic_controller.aries_controller import AriesAgentController
WEBHOOK_HOST = "0.0.0.0"
WEBHOOK_PORT = 8022
WEBHOOK_BASE = ""
ADMIN_URL = "http://alice-agent:8021"
# Based on the aca-py agent you wish to control
agent_controller = AriesAgentController(webhook_host=WEBHOOK_HOST, webhook_port=WEBHOOK_PORT,
webhook_base=WEBHOOK_BASE, admin_url=ADMIN_URL, connections=True)
loop = asyncio.get_event_loop()
loop.create_task(agent_controller.listen_webhooks())
agent_controller.register_listeners([], defaults=True)
# Create Invitation
invite = await agent_controller.connections.create_invitation()
connection_id = invite["connection_id"]
print("Connection ID", connection_id)
print("Invitation")
print(invite)
# Accept Request for Invite created
connection = await agent_controller.connections.accept_request(connection_id)
print("ACCEPT REQUEST")
print(connection)
print("state", connection["state"])
trust_ping = await agent_controller.messaging.trust_ping(connection_id, "hello")
print("Trust Ping", trust_ping)
connection = await agent_controller.connections.get_connection(connection_id)
print(connection)
print("Is Active?", connection["state"])
response = await agent_controller.terminate()
print(response) | 0.243193 | 0.889529 |
```
%matplotlib inline
```
Tensors
--------------------------------------------
Tensors are a specialized data structure that are very similar to arrays
and matrices. In PyTorch, we use tensors to encode the inputs and
outputs of a model, as well as the model’s parameters.
Tensors are similar to NumPy’s ndarrays, except that tensors can run on
GPUs or other specialized hardware to accelerate computing. If you’re familiar with ndarrays, you’ll
be right at home with the Tensor API. If not, follow along in this quick
API walkthrough.
```
import torch
import numpy as np
```
Tensor Initialization
~~~~~~~~~~~~~~~~~~~~~
Tensors can be initialized in various ways. Take a look at the following examples:
**Directly from data**
Tensors can be created directly from data. The data type is automatically inferred.
```
data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)
x_data.dtype
x_data
```
**From a NumPy array**
Tensors can be created from NumPy arrays (and vice versa - see `bridge-to-np-label`).
```
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
np_array
x_np
```
**From another tensor:**
The new tensor retains the properties (shape, datatype) of the argument tensor, unless explicitly overridden.
```
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")
x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")
```
**With random or constant values:**
``shape`` is a tuple of tensor dimensions. In the functions below, it determines the dimensionality of the output tensor.
```
shape = (2, 3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
```
--------------
Tensor Attributes
~~~~~~~~~~~~~~~~~
Tensor attributes describe their shape, datatype, and the device on which they are stored.
```
tensor = torch.rand(3, 4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
```
--------------
Tensor Operations
~~~~~~~~~~~~~~~~~
Over 100 tensor operations, including transposing, indexing, slicing,
mathematical operations, linear algebra, random sampling, and more are
comprehensively described
`here <https://pytorch.org/docs/stable/torch.html>`__.
Each of them can be run on the GPU (at typically higher speeds than on a
CPU). If you’re using Colab, allocate a GPU by going to Edit > Notebook
Settings.
```
# We move our tensor to the GPU if available
if torch.cuda.is_available():
tensor = tensor.to('cuda')
print(f"Device tensor is stored on: {tensor.device}")
```
Try out some of the operations from the list.
If you're familiar with the NumPy API, you'll find the Tensor API a breeze to use.
**Standard numpy-like indexing and slicing:**
```
tensor = torch.ones(4, 4)
tensor[:,1] = 0
print(tensor)
tensor[:,0]=0
print(tensor)
```
**Joining tensors** You can use ``torch.cat`` to concatenate a sequence of tensors along a given dimension.
See also `torch.stack <https://pytorch.org/docs/stable/generated/torch.stack.html>`__,
another tensor joining op that is subtly different from ``torch.cat``.
```
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
t2=torch.cat([tensor, tensor,tensor], dim=0)
print(t2)
```
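To make the difference with ``torch.stack`` concrete, here is a quick sketch using the same 4x4 ``tensor`` from above: ``cat`` joins along an existing dimension, while ``stack`` adds a new one.

```
cat_result = torch.cat([tensor, tensor, tensor], dim=0)      # shape: (12, 4)
stack_result = torch.stack([tensor, tensor, tensor], dim=0)  # shape: (3, 4, 4)
print(cat_result.shape, stack_result.shape)
```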
**Multiplying tensors**
```
# This computes the element-wise product
print(f"tensor.mul(tensor) \n {tensor.mul(tensor)} \n")
# Alternative syntax:
print(f"tensor * tensor \n {tensor * tensor}")
```
This computes the matrix multiplication between two tensors
```
print(f"tensor.matmul(tensor.T) \n {tensor.matmul(tensor.T)} \n")
# Alternative syntax:
print(f"tensor @ tensor.T \n {tensor @ tensor.T}")
```
**In-place operations**
Operations that have a ``_`` suffix are in-place. For example: ``x.copy_(y)``, ``x.t_()``, will change ``x``.
```
print(tensor, "\n")
tensor.add_(5)
print(tensor)
```
<div class="alert alert-info"><h4>Note</h4><p>In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss
of history. Hence, their use is discouraged.</p></div>
--------------
Bridge with NumPy
~~~~~~~~~~~~~~~~~
Tensors on the CPU and NumPy arrays can share their underlying memory
locations, and changing one will change the other.
Tensor to NumPy array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
```
A change in the tensor reflects in the NumPy array.
```
t.add_(1)
print(f"t: {t}")
print(f"n: {n}")
```
NumPy array to Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
n = np.ones(5)
t = torch.from_numpy(n)
t
```
Changes in the NumPy array reflects in the tensor.
```
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
```
| github_jupyter | %matplotlib inline
import torch
import numpy as np
data = [[1, 2], [3, 4]]
x_data = torch.tensor(data)
x_data.dtype
x_data
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
np_array
x_np
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")
x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")
shape = (2, 3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
tensor = torch.rand(3, 4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
# We move our tensor to the GPU if available
if torch.cuda.is_available():
tensor = tensor.to('cuda')
print(f"Device tensor is stored on: {tensor.device}")
tensor = torch.ones(4, 4)
tensor[:,1] = 0
print(tensor)
tensor[:,0]=0
print(tensor)
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
t2=torch.cat([tensor, tensor,tensor], dim=0)
print(t2)
# This computes the element-wise product
print(f"tensor.mul(tensor) \n {tensor.mul(tensor)} \n")
# Alternative syntax:
print(f"tensor * tensor \n {tensor * tensor}")
print(f"tensor.matmul(tensor.T) \n {tensor.matmul(tensor.T)} \n")
# Alternative syntax:
print(f"tensor @ tensor.T \n {tensor @ tensor.T}")
print(tensor, "\n")
tensor.add_(5)
print(tensor)
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
t.add_(1)
print(f"t: {t}")
print(f"n: {n}")
n = np.ones(5)
t = torch.from_numpy(n)
t
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}") | 0.671255 | 0.983518 |
# Example 04: General Use of XGBoostRegressorHyperOpt
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/slickml/slick-ml/blob/master/examples/optimization/example_04_XGBoostRegressorrHyperOpt.ipynb)
### Google Colab Configuration
```
# !git clone https://github.com/slickml/slick-ml.git
# %cd slick-ml
# !pip install -r requirements.txt
```
### Local Environment Configuration
```
# # Change path to project root
%cd ../..
```
### Import Python Libraries
```
%load_ext autoreload
# widen the screen
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
# change the path and loading class
import os, sys
import pandas as pd
import numpy as np
import seaborn as sns
%autoreload
from slickml.optimization import XGBoostRegressorHyperOpt
```
----
# XGBoostRegressorHyperOpt Docstring
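The class docstring (parameters, attributes, and methods) can be displayed with Python's built-in help once the import above has run; this is purely a convenience cell:

```
# Show the XGBoostRegressorHyperOpt docstring
help(XGBoostRegressorHyperOpt)
```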
```
# loading data; note this is a multi-target regression dataset
df = pd.read_csv("data/reg_data.csv")
df.head(2)
# define X, y based on one of the targets
y = df.TARGET1.values
X = df.drop(["TARGET1", "TARGET2"], axis=1)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
shuffle=True,
random_state=1367)
# define the parameters' bounds
from hyperopt import hp
def get_xgb_params():
""" Define Parameter Space"""
    params = {
        "nthread": 4,
        "booster": "gbtree",
        "tree_method": "hist",
        "objective": "reg:squarederror",
        "max_depth": hp.choice("max_depth", range(2, 8)),
        "learning_rate": hp.quniform("learning_rate", 0.01, 0.50, 0.01),
        "min_child_weight": hp.quniform("min_child_weight", 1, 20, 1),
        "subsample": hp.quniform("subsample", 0.1, 1.0, 0.01),
        "colsample_bytree": hp.quniform("colsample_bytree", 0.1, 1.0, 0.01),
        "gamma": hp.quniform("gamma", 0.0, 1.0, 0.01),
        "reg_alpha": hp.quniform("reg_alpha", 0.0, 1.0, 0.01),
        "reg_lambda": hp.quniform("reg_lambda", 0.0, 1.0, 0.01),
    }
return params
hp.choice("max_depth", range(2, 10, 1))
# initialize XGBoostRegressorHyperOpt
xho = XGBoostRegressorHyperOpt(num_boost_round=200,
metrics="rmse",
n_splits=4,
shuffle=True,
early_stopping_rounds=20,
func_name="xgb_cv",
space=get_xgb_params(),
max_evals=100,
verbose=False
)
# fit
xho.fit(X_train,y_train)
```
### Best set of parameters from all runs
```
xho.get_optimization_results()
```
### Results from each trial
```
import pprint
pprint.pprint(xho.get_optimization_trials().trials)
```
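If you only want a quick summary rather than the full trial history, the object returned by `get_optimization_trials()` can be queried directly. This is a small sketch that assumes it behaves like a standard `hyperopt.Trials` container (an assumption based on the `.trials` attribute access above):
```
trials = xho.get_optimization_trials()
print("Number of trials run:", len(trials.trials))
print("Best CV RMSE found:", trials.best_trial["result"]["loss"])
print("Raw hyperopt values for the best trial:", trials.best_trial["misc"]["vals"])
```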
```
import pandas
# load the up-regulated gene lists from the three GEO expression datasets
df1 = pandas.read_csv('GSE6631-Upregulated.csv')
df2 = pandas.read_csv('GSE113282-Upregulated.csv')
df3 = pandas.read_csv('GSE12452-Upregulated50.csv')
a = []
aa = []
b = []
c = []
common = []
for i in range(0, len(df1)):
a.append(df1['Gene'][i])
aa.append(df1['Gene'][i])
for i in range(0, len(df2)):
b.append(df2['Gene'][i])
for i in range(0, len(df3)):
c.append(df3['Gene'][i])
print('Total A: ' + str(len(df1)))
print('Total B: ' + str(len(df2)))
print('Total C: ' + str(len(df3)))
tmp = 0
stack = []
for i in a:
if i in b:
if i not in c:
if i not in stack:
stack.append(i)
tmp = tmp+1
print('A intersection B, not in C: ' + str(tmp))
tmp = 0
stack = []
for i in a:
if str(i)!='nan':
if i in c:
if i not in b:
if i not in stack:
stack.append(i)
tmp = tmp+1
print('A intersection C, not in B: ' + str(tmp))
tmp = 0
stack = []
for i in b:
if str(i)!='nan':
if i in c:
if i not in a:
if i not in stack:
stack.append(i)
tmp = tmp+1
print('B intersection C, not in A: ' + str(tmp))
upCommon = 0
for i in c:
if str(i)!='nan':
if i in a:
if i in b:
if i not in common:
common.append(i)
upCommon=upCommon+1
print(upCommon)
filename = 'Upregulated-Common-Genes-Pval-logFC.csv'
log = open(filename, 'w')
log.write('Gene-symbol,GSE6631-Adj-Pval,GSE6631-logFC,GSE113282-Adj-Pval,GSE113282-logFC,GSE12452-Adj-Pval,GSE12452-logFC\n')
for i in common:
log.write(str(i) + ',')
for j in range(0,len(df1)):
if str(df1['Gene'][j])==str(i):
log.write(str(df1['P.adjusted.BH'][j]) + ',' + str(df1['logFC'][j]) + ',')
break
for j in range(0,len(df2)):
if str(df2['Gene'][j])==str(i):
log.write(str(df2['P.adjusted.BH'][j]) + ',' + str(df2['logFC'][j]) + ',')
break
for j in range(0,len(df3)):
if str(df3['Gene'][j])==str(i):
log.write(str(df3['P.adjusted.BH'][j]) + ',' + str(df3['logFC'][j]))
break
log.write('\n')
log.close()
# repeat the same analysis for the down-regulated gene lists
df1 = pandas.read_csv('GSE6631-Downregulated.csv')
df2 = pandas.read_csv('GSE113282-Downregulated.csv')
df3 = pandas.read_csv('GSE12452-Downregulated50.csv')
a = []
b = []
c = []
common2 = []
for i in range(0, len(df1)):
a.append(df1['Gene'][i])
for i in range(0, len(df2)):
b.append(df2['Gene'][i])
for i in range(0, len(df3)):
c.append(df3['Gene'][i])
print('Total A: ' + str(len(df1)))
print('Total B: ' + str(len(df2)))
print('Total C: ' + str(len(df3)))
tmp = 0
stack = []
for i in a:
if i in b:
if i not in c:
if i not in stack:
stack.append(i)
tmp = tmp+1
print('A intersection B, not in C: ' + str(tmp))
tmp = 0
stack = []
for i in a:
if str(i)!='nan':
if i in c:
if i not in b:
if i not in stack:
stack.append(i)
tmp = tmp+1
print('A intersection C, not in B: ' + str(tmp))
tmp = 0
stack = []
for i in b:
if str(i)!='nan':
if i in c:
if i not in a:
if i not in stack:
stack.append(i)
tmp = tmp+1
print('B intersection C, not in A: ' + str(tmp))
downCommon = 0
for i in c:
if str(i)!='nan':
if i in a:
if i in b:
if i not in common2:
common2.append(i)
downCommon=downCommon+1
print(downCommon)
filename = 'Downregulated-Common-Genes-Pval-logFC.csv'
log = open(filename, 'w')
log.write('Gene-symbol,GSE6631-Adj-Pval,GSE6631-logFC,GSE113282-Adj-Pval,GSE113282-logFC,GSE12452-Adj-Pval,GSE12452-logFC\n')
for i in common2:
log.write(str(i) + ',')
for j in range(0,len(df1)):
if str(df1['Gene'][j])==str(i):
log.write(str(df1['P.adjusted.BH'][j]) + ',' + str(df1['logFC'][j]) + ',')
break
for j in range(0,len(df2)):
if str(df2['Gene'][j])==str(i):
log.write(str(df2['P.adjusted.BH'][j]) + ',' + str(df2['logFC'][j]) + ',')
break
for j in range(0,len(df3)):
if str(df3['Gene'][j])==str(i):
log.write(str(df3['P.adjusted.BH'][j]) + ',' + str(df3['logFC'][j]))
break
log.write('\n')
log.close()
filename = 'Common Genes.csv'
CommonGenes = open(filename, 'w')
CommonGenes.write('Type,Gene\n')
for i in common:
CommonGenes.write('Upregulated,'+str(i)+'\n')
for i in common2:
CommonGenes.write('Downregulated,'+str(i)+'\n')
CommonGenes.close()
df1 = pandas.read_csv('Target_Genes.csv')
tg = []
for i in range(0,len(df1)):
if df1['Gene'][i] not in tg:
tg.append(df1['Gene'][i])
print(len(tg))
for i in tg:
if i in common:
print(i)
elif i in common2:
print(i)
filename = 'Final Target Genes.csv'
TargetGenes = open(filename, 'w')
TargetGenes.write('Gene\n')
for i in tg:
TargetGenes.write(str(i)+'\n')
TargetGenes.close()
```
```
%matplotlib inline
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8, 4.5))
plt.subplots_adjust(left=0.02, right=0.98, top=0.98, bottom=0.00)
m = Basemap(projection='robin',lon_0=0,resolution='c')
m.fillcontinents(color='gray',lake_color='white')
m.drawcoastlines()
plt.savefig('world.png',dpi=75)
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
from matplotlib.collections import PathCollection
from matplotlib.path import Path
fig = plt.figure(figsize=(8, 4.5))
plt.subplots_adjust(left=0.02, right=0.98, top=0.98, bottom=0.00)
# readshapefile expects the shapefile path without the .shp extension, so this loads D:\\ne_10m_land.shp (plus its .shx/.dbf companions)
m = Basemap(projection='robin',lon_0=0,resolution='c')
shp_info = m.readshapefile('D:\\ne_10m_land', 'scalerank', drawbounds=True)
ax = plt.gca()
ax.cla()
paths = []
for line in shp_info[4]._paths:
paths.append(Path(line.vertices, codes=line.codes))
coll = PathCollection(paths, linewidths=0, facecolors='grey', zorder=2)
m = Basemap(projection='robin',lon_0=0,resolution='c')
# drawing something seems necessary to 'initiate' the map properly
m.drawcoastlines(color='white', zorder=0)
ax = plt.gca()
ax.add_collection(coll)
plt.savefig('world.png',dpi=75)
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
# setup Lambert Conformal basemap.
# set resolution=None to skip processing of boundary datasets.
m = Basemap(width=12000000,height=9000000,projection='lcc',
resolution=None,lat_1=45.,lat_2=55,lat_0=50,lon_0=-107.)
m.bluemarble()
plt.show()
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
# setup Lambert Conformal basemap.
# set resolution=None to skip processing of boundary datasets.
m = Basemap(width=1200000,height=900000,projection='lcc',
resolution=None,lat_1=45.,lat_2=65,lat_0=55,lon_0=-3.)
m.etopo()
plt.show()
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
# set up orthographic map projection with
# perspective of satellite looking down at 45N, 0E (matching lat_0/lon_0 below).
# use low resolution coastlines.
map = Basemap(projection='ortho',lat_0=45,lon_0=0,resolution='l')
# draw coastlines, country boundaries, fill continents.
map.drawcoastlines(linewidth=0.25)
map.drawcountries(linewidth=0.25)
map.fillcontinents(color='coral',lake_color='aqua')
# draw the edge of the map projection region (the projection limb)
map.drawmapboundary(fill_color='aqua')
# draw lat/lon grid lines every 30 degrees.
map.drawmeridians(np.arange(0,360,30))
map.drawparallels(np.arange(-90,90,30))
# make up some data on a regular lat/lon grid.
nlats = 73; nlons = 145; delta = 2.*np.pi/(nlons-1)
lats = (0.5*np.pi-delta*np.indices((nlats,nlons))[0,:,:])
lons = (delta*np.indices((nlats,nlons))[1,:,:])
wave = 0.75*(np.sin(2.*lats)**8*np.cos(4.*lons))
mean = 0.5*np.cos(2.*lats)*((np.sin(2.*lats))**2 + 2.)
# compute native map projection coordinates of lat/lon grid.
x, y = map(lons*180./np.pi, lats*180./np.pi)
# contour data over the map.
cs = map.contour(x,y,wave+mean,15,linewidths=1.5)
plt.title('contour lines over filled continent background')
plt.show()
```
# Spherical coordinates in shenfun
The Helmholtz equation is given as
$$
-\nabla^2 u + \alpha u = f.
$$
In this notebook we will solve this equation on the unit sphere, using spherical coordinates. To verify the implementation we use a smooth manufactured solution.
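On the unit sphere (constant radius $r=1$) the Laplacian reduces to the Laplace-Beltrami operator, so in the spherical angles $(\theta, \phi)$ introduced below the equation we discretise reads
$$
-\frac{1}{\sin \theta}\frac{\partial}{\partial \theta}\left(\sin \theta \, \frac{\partial u}{\partial \theta}\right) - \frac{1}{\sin^2 \theta}\frac{\partial^2 u}{\partial \phi^2} + \alpha u = f.
$$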
We start the implementation by importing necessary functionality from shenfun and sympy:
```
from shenfun import *
from shenfun.la import SolverGeneric1ND
import sympy as sp
```
Define spherical coordinates $(r, \theta, \phi)$
$$
\begin{align}
x &= r \sin \theta \cos \phi \\
y &= r \sin \theta \sin \phi \\
z &= r \cos \theta
\end{align}
$$
using sympy. The radius `r` will be constant `r=1`. We create the three-dimensional position vector `rv` as a function of the two new coordinates $(\theta, \phi)$.
```
r = 1
theta, phi = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
```
We define bases with the domains $\theta \in [0, \pi]$ and $\phi \in [0, 2\pi]$, and create a `TensorProductSpace` together with a test function and a trial function. Note that the new coordinates and the position vector are fed to the `TensorProductSpace` and not to the individual spaces:
```
N, M = 256, 256
L0 = FunctionSpace(N, 'L', domain=(0, np.pi))
F1 = FunctionSpace(M, 'F', dtype='D')
T = TensorProductSpace(comm, (L0, F1), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
v = TestFunction(T)
u = TrialFunction(T)
print(T.coors.sg)
```
Use a smooth manufactured solution (the spherical harmonic in the commented-out lines below would work equally well)
```
#sph = sp.functions.special.spherical_harmonics.Ynm
#ue = sph(6, 3, theta, phi)
ue = sp.cos(8*(sp.sin(theta)*sp.cos(phi) + sp.sin(theta)*sp.sin(phi) + sp.cos(theta)))
```
Compute the right hand side on the quadrature mesh and take the scalar product
```
alpha = 1000
g = (-div(grad(u))+alpha*u).tosympy(basis=ue, psi=psi)
gj = Array(T, buffer=g*T.coors.sg)
g_hat = Function(T)
g_hat = inner(v, gj, output_array=g_hat)
```
Note that we can use the `shenfun` operators `div` and `grad` on a trialfunction `u`, and then switch the trialfunction for a sympy function `ue`. The operators will then make use of sympy's [derivative method](https://docs.sympy.org/latest/tutorial/calculus.html#derivatives) on the function `ue`. Here `(-div(grad(u))+alpha*u)` corresponds to the equation we are trying to solve:
```
from IPython.display import Math
Math((-div(grad(u))+alpha*u).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
```
Evaluated with `u=ue` and you get the exact right hand side `f`.
Tensor product matrices that make up the Helmholtz equation are then assembled as
```
mats = inner(v, (-div(grad(u))+alpha*u)*T.coors.sg)
```
And the linear system of equations can be solved using the generic `SolverGeneric1ND`, that can be used for any problem that only has non-periodic boundary conditions in one dimension.
```
u_hat = Function(T)
Sol1 = SolverGeneric1ND(mats)
u_hat = Sol1(g_hat, u_hat)
```
Transform back to real space and compute the error.
```
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.sqrt(inner(1, (uj-uq)**2)), np.linalg.norm(uj-uq))
import matplotlib.pyplot as plt
%matplotlib notebook
plt.spy(Sol1.MM[1].diags())
```
## Postprocessing
Since we used relatively few quadrature points in solving this problem, we refine the solution for a nicer plot. Note that `refine` simply pads Functions with zeros, which gives exactly the same accuracy, but more quadrature points in real space. `u_hat` has `NxM` quadrature points; here we refine using 3 times as many points along both dimensions
```
u_hat2 = u_hat.refine([N*3, M*3])
ur = u_hat2.backward(kind='uniform')
```
The periodic solution does not contain the periodic points twice, i.e., the computational mesh contains $0$, but not $2\pi$. It looks better if we wrap the periodic dimension all around to $2\pi$, and this is achieved with
```
xx, yy, zz = u_hat2.function_space().local_cartesian_mesh(uniform=True)
xx = np.hstack([xx, xx[:, 0][:, None]])
yy = np.hstack([yy, yy[:, 0][:, None]])
zz = np.hstack([zz, zz[:, 0][:, None]])
ur = np.hstack([ur, ur[:, 0][:, None]])
```
In the end the solution is plotted using mayavi
```
from mayavi import mlab
mlab.init_notebook('x3d', 400, 400)
mlab.figure(bgcolor=(1, 1, 1))
m = mlab.mesh(xx, yy, zz, scalars=ur.real, colormap='jet')
m
```
# Biharmonic equation
A biharmonic equation is given as
$$
\nabla^4 u + \alpha u = f.
$$
This equation is extremely messy in spherical coordinates. I cannot even find it posted anywhere. Nevertheless, we can solve it trivially with shenfun, and we can also see what it looks like
```
Math((div(grad(div(grad(u))))+alpha*u).tolatex(funcname='u', symbol_names={theta: '\\theta', phi: '\\phi'}))
```
Remember that this equation uses constant radius `r=1`. We now solve the equation using the same manufactured solution as for the Helmholtz equation.
```
g = (div(grad(div(grad(u))))+alpha*u).tosympy(basis=ue, psi=psi)
gj = Array(T, buffer=g)
# Take scalar product
g_hat = Function(T)
g_hat = inner(v, gj, output_array=g_hat)
mats = inner(v, div(grad(div(grad(u)))) + alpha*u)
# Solve
u_hat = Function(T)
Sol1 = SolverGeneric1ND(mats)
u_hat = Sol1(g_hat, u_hat)
# Transform back to real space.
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.sqrt(dx((uj-uq)**2)))
```
Want to see what the regular 3-dimensional biharmonic equation looks like in spherical coordinates? This is extremely tedious to derive by hand, but in shenfun you can get there with the following few lines of code
```
r, theta, phi = psi = sp.symbols('x,y,z', real=True, positive=True)
rv = (r*sp.sin(theta)*sp.cos(phi), r*sp.sin(theta)*sp.sin(phi), r*sp.cos(theta))
L0 = FunctionSpace(20, 'L', domain=(0, 1))
F1 = FunctionSpace(20, 'L', domain=(0, np.pi))
F2 = FunctionSpace(20, 'F', dtype='D')
T = TensorProductSpace(comm, (L0, F1, F2), coordinates=(psi, rv, sp.Q.positive(sp.sin(theta))))
u = TrialFunction(T)
Math((div(grad(div(grad(u))))).tolatex(funcname='u', symbol_names={r: 'r', theta: '\\theta', phi: '\\phi'}))
v = TestFunction(T)
A = inner(div(grad(v)), div(grad(u)), level=2)
```
I don't know if this is actually correct, because I haven't derived it by hand and I haven't seen it printed anywhere, but at least I know the Cartesian equation is correct:
```
L0 = FunctionSpace(8, 'C', domain=(0, np.pi))
F1 = FunctionSpace(8, 'F', dtype='D')
F2 = FunctionSpace(8, 'F', dtype='D')
T = TensorProductSpace(comm, (L0, F1, F2))
u = TrialFunction(T)
Math((div(grad(div(grad(u))))).tolatex(funcname='u'))
```
# Examples of basic allofplos functions
```
import datetime
from allofplos.plos_regex import (validate_doi, show_invalid_dois, find_valid_dois)
from allofplos.samples.corpus_analysis import (get_random_list_of_dois, get_all_local_dois,
get_all_plos_dois)
from allofplos.corpus.plos_corpus import (get_uncorrected_proofs, get_all_solr_dois)
from allofplos import Article
```
## Get example DOIs: get_random_list_of_dois()
```
example_dois = get_random_list_of_dois(count=10)
example_doi = example_dois[0]
article = Article(example_doi)
example_file = article.filename
example_url = article.url
print("Three ways to represent an article\nArticle as DOI: {}\nArticle as local file: {}\nArticle as url: {}" \
.format(example_doi, example_file, example_url))
example_corrections_dois = ['10.1371/journal.pone.0166537',
'10.1371/journal.ppat.1005301',
'10.1371/journal.pone.0100397']
example_retractions_dois = ['10.1371/journal.pone.0180272',
'10.1371/journal.pone.0155388',
'10.1371/journal.pone.0102411']
example_vor_doi = '10.1371/journal.ppat.1006307'
example_uncorrected_proofs = get_uncorrected_proofs()
```
## Validate PLOS DOI format: validate_doi(string), show_invalid_dois(list)
```
validate_doi('10.1371/journal.pbio.2000797')
validate_doi('10.1371/journal.pone.12345678') # too many trailing digits
doi_list = ['10.1371/journal.pbio.2000797', '10.1371/journal.pone.12345678', '10.1371/journal.pmed.1234567']
show_invalid_dois(doi_list)
```
## Check if a DOI resolves correctly: article.check_if_doi_resolves()
```
article = Article('10.1371/journal.pbio.2000797') # working DOI
article.check_if_doi_resolves()
article = Article('10.1371/annotation/b8b66a84-4919-4a3e-ba3e-bb11f3853755') # working DOI
article.check_if_doi_resolves()
article = Article('10.1371/journal.pone.1111111') # valid DOI structure, but article doesn't exist
article.check_if_doi_resolves()
```
## Check if uncorrected proof: article.proof
```
article = Article(next(iter(example_uncorrected_proofs)))
article.proof
article = Article(example_vor_doi)
article.proof
```
## Find PLOS DOIs in a string: find_valid_dois(string)
```
find_valid_dois("ever seen 10.1371/journal.pbio.2000797, it's great! or maybe 10.1371/journal.pone.1234567?")
```
## Get article pubdate: article.pubdate
```
# returns a datetime object
article = Article(example_doi)
article.pubdate
# datetime object can be transformed into any string format
article = Article(example_doi)
dates = article.get_dates(string_=True, string_format='%Y-%b-%d')
print(dates['epub'])
```
## Check (JATS) article type of article file: article.type_
```
article = Article(example_doi)
article.authors
article = Article(example_corrections_dois[0])
article.type_
article = Article(example_retractions_dois[0])
article.type_
```
## Get related DOIs: article.related_dois
For corrections and retractions, get the DOI(s) of the PLOS articles being retracted or corrected.
```
article = Article(example_corrections_dois[0])
article.related_dois
article = Article(example_retractions_dois[0])
article.related_dois
```
# Working with many articles at once
## Get list of every article DOI indexed on the PLOS search API, Solr: get_all_solr_dois()
```
solr_dois = get_all_solr_dois()
print(len(solr_dois), "articles indexed on Solr.")
```
## Get list of every PLOS article you have downloaded: get_all_local_dois()
```
all_articles = get_all_local_dois()
print(len(all_articles), "articles on local computer.")
```
## Combine local and solr articles: get_all_plos_dois()
```
plos_articles = get_all_plos_dois()
# download_updated_xml is not imported above; import it from allofplos before running this line
download_updated_xml('allofplos_xml/journal.pcbi.0030158.xml')
```
# 250-D Multivariate Normal
Let's go for broke here.
## Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
```
# Python 3 compatability
from __future__ import division, print_function
from builtins import range
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
import math
from numpy import linalg
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# seed the random number generator
np.random.seed(2018)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
```
Here we will quickly demonstrate that slice sampling is able to cope with very high-dimensional problems without the use of gradients. Our target will in this case be a 250-D uncorrelated multivariate normal distribution with an identical prior.
```
from scipy.special import ndtri
ndim = 250 # number of dimensions
C = np.identity(ndim) # set covariance to identity matrix
Cinv = linalg.inv(C) # precision matrix
lnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C))) # ln(normalization)
# 250-D iid standard normal log-likelihood
def loglikelihood(x):
"""Multivariate normal log-likelihood."""
return -0.5 * np.dot(x, np.dot(Cinv, x)) + lnorm
# prior transform (iid standard normal prior)
def prior_transform(u):
"""Transforms our unit cube samples `u` to a flat prior between -10. and 10. in each variable."""
return ndtri(u)
# ln(evidence)
lnz_truth = lnorm - 0.5 * ndim * np.log(2)
print(lnz_truth)
```
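For reference, `lnz_truth` follows from a short analytic calculation: since both the likelihood and the prior are iid standard normals,
$$
\mathcal{Z} = \int \mathcal{L}(\mathbf{x})\,\pi(\mathbf{x})\,\mathrm{d}\mathbf{x}
= \prod_{i=1}^{d}\int \frac{1}{2\pi}\,e^{-x_i^{2}}\,\mathrm{d}x_i
= (4\pi)^{-d/2},
$$
so that $\ln\mathcal{Z} = \mathrm{lnorm} - \tfrac{d}{2}\ln 2 \approx -316.4$ for $d = 250$, which is exactly what the cell above evaluates.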
We will use Hamiltonian Slice Sampling (`'hslice'`) to sample in high dimensions. We will also utilize a small number of overall particles ($K < N$) to demonstrate that we can be quite sparsely sampled in this regime and still perform decently well.
```
# hamiltonian slice sampling ('hslice')
sampler = dynesty.DynamicNestedSampler(loglikelihood, prior_transform, ndim,
bound='none', sample='hslice', slices=10)
sampler.run_nested(nlive_init=100, nlive_batch=100)
res = sampler.results
```
Let's dump our results to disk to avoid losing all that work!
```
import pickle
# dump results
output = open('250d_gauss.pkl', 'wb')
pickle.dump(sampler.results, output)
output.close()
import pickle
output = open('250d_gauss.pkl', 'rb')
res = pickle.load(output)
output.close()
```
Now let's see how our sampling went.
```
from dynesty import plotting as dyplot
# evidence check
fig, axes = dyplot.runplot(res, color='red', lnz_truth=lnz_truth, truth_color='black', logplot=True)
fig.tight_layout()
# posterior check
dims = [-1, -2, -3, -4, -5]
fig, ax = plt.subplots(5, 5, figsize=(25, 25))
samps, samps_t = res.samples, res.samples[:,dims]
res.samples = samps_t
fg, ax = dyplot.cornerplot(res, color='red', truths=np.zeros(ndim), truth_color='black',
span=[(-3.5, 3.5) for i in range(len(dims))],
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, ax))
res.samples = samps
# the posterior is N(0, 1/2) in every dimension, so the expected standard deviation is 1/sqrt(2)
print(1./np.sqrt(2))
```
That looks good! Obviously we can't plot the full 250x250 plot, but 5x5 subplots should do.
Now we can finally check how well our mean and covariances agree.
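Recall that nested sampling returns importance-weighted samples: with weights $w_i \propto \exp(\mathtt{logwt}_i - \ln\hat{\mathcal{Z}})$, the estimates computed below via `utils.mean_and_cov` are (up to the finite-sample correction applied internally)
$$
\hat{\mu} = \frac{\sum_i w_i\, x_i}{\sum_i w_i}, \qquad
\hat{C} = \frac{\sum_i w_i\,(x_i - \hat{\mu})(x_i - \hat{\mu})^{T}}{\sum_i w_i}.
$$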
```
# let's confirm we actually got the entire distribution
from dynesty import utils
weights = np.exp(res.logwt - res.logz[-1])
mu, cov = utils.mean_and_cov(samps, weights)
# plot residuals
from scipy.stats.kde import gaussian_kde
mu_kde = gaussian_kde(mu)
xgrid = np.linspace(-0.5, 0.5, 1000)
mu_pdf = mu_kde.pdf(xgrid)
cov_kde = gaussian_kde((cov - C).flatten())
xgrid2 = np.linspace(-0.3, 0.3, 1000)
cov_pdf = cov_kde.pdf(xgrid2)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(xgrid, mu_pdf, lw=3, color='black')
plt.xlabel('Mean Offset')
plt.ylabel('PDF')
plt.subplot(1, 2, 2)
plt.plot(xgrid2, cov_pdf, lw=3, color='red')
plt.xlabel('Covariance Offset')
plt.ylabel('PDF')
# print values
print('Means (0.):', np.mean(mu), '+/-', np.std(mu))
print('Variance (0.5):', np.mean(np.diag(cov)), '+/-', np.std(np.diag(cov)))
cov_up = np.triu(cov, k=1).flatten()
cov_low = np.tril(cov,k=-1).flatten()
cov_offdiag = np.append(cov_up[abs(cov_up) != 0.], cov_low[cov_low != 0.])
print('Covariance (0.):', np.mean(cov_offdiag), '+/-', np.std(cov_offdiag))
plt.tight_layout()
# plot individual values
plt.figure(figsize=(20,6))
plt.subplot(1, 3, 1)
plt.plot(mu, 'k.')
plt.ylabel('Mean')
plt.xlabel('Dimension')
plt.tight_layout()
plt.subplot(1, 3, 2)
plt.plot(np.diag(cov) - 0.5, 'r.')
plt.ylabel('Variance')
plt.xlabel('Dimension')
plt.tight_layout()
plt.subplot(1, 3, 3)
plt.plot(cov_low[cov_low != 0.], 'b.')
plt.plot(cov_up[cov_up != 0.], 'b.')
plt.ylabel('Covariance')
plt.xlabel('Cross-Term')
plt.tight_layout()
```
# Transfer Learning
In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).
ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).
Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.
With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```
Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
```
data_dir = 'Cat_Dog_data'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
```
We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.
```
model = models.densenet121(pretrained=True)
model
```
This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
```
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
```
With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.
PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
```
import time
for device in ['cpu', 'cuda']:
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device)
for ii, (inputs, labels) in enumerate(trainloader):
# Move input and label tensors to the GPU
inputs, labels = inputs.to(device), labels.to(device)
start = time.time()
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if ii==3:
break
print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
```
You can write device agnostic code which will automatically use CUDA if it's enabled like so:
```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```
From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.
>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, which is also a good model to try out first. Make sure you are only training the classifier and that the parameters for the features part are frozen.
```
## TODO: Use a pretrained model to classify the cat and dog images
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet50(pretrained=True)
# Turn off gradients for the pretrained feature extractor
for param in model.parameters():
param.requires_grad = False
# Define a new classifier head for the two classes (cat vs. dog)
classifier = nn.Sequential(nn.Linear(2048, 512),
                           nn.ReLU(),
                           nn.Dropout(p=0.2),
                           nn.Linear(512, 2),
                           nn.LogSoftmax(dim=1))
model.fc = classifier
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
model.to(device)
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for images, labels in trainloader:
steps += 1
        if steps == 50:
            # stop after 50 batches; this is only a short demonstration run
            break
images, labels = images.to(device), labels.to(device)
optimizer.zero_grad()
logps = model(images)
loss = criterion(logps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
            model.eval()
            test_loss = 0
            accuracy = 0
            # evaluate on the test set without tracking gradients
            with torch.no_grad():
                for images, labels in testloader:
                    images, labels = images.to(device), labels.to(device)
                    logps = model(images)
                    loss = criterion(logps, labels)
                    test_loss += loss.item()
                    # calculate the accuracy
                    ps = torch.exp(logps)
                    top_ps, top_class = ps.topk(1, dim=1)
                    equality = top_class == labels.view(*top_class.shape)
                    accuracy += torch.mean(equality.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
```
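As a quick sanity check (not part of the original exercise), you can push a single test batch through the fine-tuned network and compare predictions against the labels; this assumes the training cell above has already been run:
```
model.eval()
images, labels = next(iter(testloader))
with torch.no_grad():
    ps = torch.exp(model(images.to(device)))
top_p, top_class = ps.topk(1, dim=1)
print("Predicted:", top_class[:10].flatten().tolist())
print("Actual:   ", labels[:10].tolist())
```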
```
import os,sys
import numpy as np
import yaml
import scipy.integrate as integrate
import matplotlib.pyplot as plt
import math
```
## Source Matrix
### Parameters
```
with open('configure.yml','r') as conf_para:
conf_para = yaml.load(conf_para,Loader=yaml.FullLoader)
```
### wavefront_initialize
```
def wavefront_initialize(pixelsize_x = 55e-06,pixelsize_y=55e-06,fs_size = 2000,ss_size = 20000,focus_x = 1.2e-3,focus_y = 1.0e-3,defocus = 400e-6, det_dist = 14e-03, ap_x = 40e-06, ap_y= 40e-6,wl = 7.29e-11,amplitude_value=0.0):
wf_dec = np.zeros((ss_size,fs_size),dtype='complex')
wf_dec += amplitude_value
# the range of detector plane(x-axis,y-axis)
xx_span = fs_size * pixelsize_x
yy_span = ss_size * pixelsize_y
# the range of object plane(x-axis,y-axis)
x_span = 1.6 * ap_x / focus_x * defocus
y_span = 1.6 * ap_y / focus_y * defocus
# the sample rate in the object plane
n_x = int(x_span * xx_span / wl / det_dist)
n_y = int(y_span * yy_span / wl / det_dist)
# Initializing coordinate arrays
# coordinate in object plane
x_arr = np.linspace(-x_span / 2, x_span / 2, n_x)
y_arr = np.linspace(-y_span / 2, y_span / 2, n_y)
# coordinate in detector plan
xx_arr = np.linspace(-xx_span / 2, xx_span / 2, fs_size, endpoint=False)
yy_arr = np.linspace(-yy_span / 2, yy_span / 2, ss_size, endpoint=False)
return x_arr,y_arr,xx_arr,yy_arr,wf_dec
# fresnel number : fn(ap,wl,det_dist)
# ap: aperture size
# wl: wavelength (initial 16.9keV)
# det_dist : propagation distance
def fn(ap_x = 40e-6,wl = 7.29e-11,det_dist = 14e-03):
fnum = int(ap_x **2 / wl / det_dist)
return fnum
```
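With the default aperture, wavelength and propagation distance the Fresnel number is well above one, $(40\,\mu\mathrm{m})^2 / (7.29\times10^{-11}\,\mathrm{m}\times 14\,\mathrm{mm}) \approx 1.6\times10^{3}$, so the aperture-to-detector propagation sits firmly in the near-field (Fresnel) regime:
```
print(fn())  # ~1567 with the default parameters defined above
```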
### Lens wavefront
```
"""
Parameters:
------------
r : coordinates
f : focus of lens
df: defocus of the object
a : alpha, Third order abberations coefficient [rad/mrad^3]
cen_ab : center point of the lens' abberations
"""
def lens_wf(x_arr, y_arr, wf_dec,ap_x = 40e-06,ap_y = 40e-06, focus_x = 1.2e-3, focus_y=1.0e-3, x_abcen = 0.5, y_abcen = 0.5, alpha_x = -0.05, alpha_y = -0.05, wl = 7.29e-11,defocus =400e-06):
xx_arr = x_arr.copy()
yy_arr = y_arr.copy()
wf_lens = np.array(np.meshgrid(y_arr,x_arr))
    wf_obj = np.array(np.meshgrid(yy_arr, xx_arr))
wavefront_lens = np.zeros_like(wf_dec,dtype='complex')
wavenumber = 2*np.pi / wl
z_dis = focus_y + defocus
M_x = (focus_x+defocus)/focus_x
M_y = (focus_y+defocus)/focus_y
A = wavenumber/1.j/2/np.pi/z_dis
ph_0 = wavenumber* 1.j / 2 / z_dis * (xx_arr**2 + yy_arr**2) + i.j*wavenumber*z_dis
ph_x = -wavenumber / 2 / M_x / focus_x * x_arr**2
ph_ab_x = alpha_x * 1e9 * ((x_arr - x_abcen) / focus_x) **3
ph_y = -wavenumber / 2 / M_y / focus_y * y_arr**2
ph_ab_y= alpha_y * 1e9 * ((y_arr - y_abcen) / focus_y) **3
ph_mix = wavenumber / defocus * (xx_arr*x_arr + yy_arr*y_arr)
func = np.exp(1.j (ph_x + ph_ab_x + ph_y + ph_ab_y + ph_mix) )
wavefront_lens,err = integrate.dblquad(func,-ap_x/2,ap_x/2,-ap_y/2,ap_y/2)
wavefront_lens *= A*exp(ph_0)
return wavefront_lens,err
def propagator2d_integrate(x_arr, y_arr, xx_arr, yy_arr, wavefront_obj, image, wf_dec, det_dist = 14e-03, wl = 7.29e-11):
    # convolving with the Fresnel kernel (direct integration over the object plane)
    p_xy = np.array(np.meshgrid(y_arr, x_arr))
    det_xy = np.array(np.meshgrid(yy_arr, xx_arr))
    wf_propagated = np.zeros_like(wf_dec, dtype='complex')
    wavenumber = 2 * np.pi / wl
    ph = wavenumber / 2 / det_dist
    for i in range(yy_arr.size):
        for j in range(xx_arr.size):
            ph_x = wavenumber / det_dist * p_xy[0, :, :] * det_xy[0, j, i]
            ph_y = wavenumber / det_dist * p_xy[1, :, :] * det_xy[1, j, i]
            value = wavefront_obj * image * np.exp(-ph_x - ph_y)
            wf_propagated[i][j] = np.exp(ph) * integrate.simps(integrate.simps(value, ph_y), ph_x)
    return wf_propagated
def main():
    x_arr, y_arr, xx_arr, yy_arr, wf_dec = wavefront_initialize()
    wavefront_lens = np.zeros((len(y_arr), len(x_arr)), dtype='complex')

if __name__ == "__main__":
    main()
```
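A minimal sketch of how the initialization above can be exercised, relying only on the defaults already defined in this notebook (the expensive lens integral itself is not called here):
```
# Build the coordinate grids and the empty detector wavefront with the defaults.
x_arr, y_arr, xx_arr, yy_arr, wf_dec = wavefront_initialize()
print("object-plane grid:", x_arr.size, "x", y_arr.size)
print("detector-plane grid:", xx_arr.size, "x", yy_arr.size)
# A large Fresnel number indicates the near-field (Fresnel) regime assumed here.
print("Fresnel number:", fn())
```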
```
"""
RDF generator for the PREDICT drug indication gold standard (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s4.xls)
@version 1.0
@author Remzi Celebi
"""
import pandas as pd
from csv import reader
from src.util import utils
from src.util.utils import Dataset, DataResource
from rdflib import Graph, URIRef, Literal, RDF, ConjunctiveGraph
from rdflib import Namespace
import datetime
mapping_df = pd.read_excel('https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s4.xls')
mapping_df.head()
#save the original file
mapping_df.to_csv('data/external/msb201126-s4.csv', index=False)
mapping_df['OMIM disease name'].replace({'Neuropathy, Hereditary Sensory And Autonomic, Type I, With Cough And':
'Neuropathy, Hereditary Sensory And Autonomic, Type I, With Cough And Gastroesophageal Reflux'}, inplace=True)
goldstd_df = pd.read_excel('https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s1.xls')
goldstd_df.head()
goldstd_df['Drug name'].replace({'Divalproex Sodium':'Valproic Acid',
'Bismuth':'Bismuth subsalicylate',
'Clobetasol':'Clobetasol propionate',
'Guanadrel Sulfate':'Guanadrel',
'Marinol':'Dronabinol',
'Medroxyprogesterone':'Medroxyprogesterone acetate',
'Megestrol':'Megestrol acetate',
'Propoxyphene':'Dextropropoxyphene',
'Salicyclic Acid':'Salicylic acid',
'Ipratropium':'Ipratropiumbromid',
'Adenosine Monophosphate':'Adenosine monophosphate',
'Arsenic Trioxide':'Arsenic trioxide',
'Ethacrynic Acid':'Ethacrynic acid',
'Fondaparinux Sodium':'Fondaparinux sodium',
'Meclofenamic Acid':'Meclofenamic acid',
'Methyl Aminolevulinate':'Methyl aminolevulinate'},inplace=True)
merged_df = goldstd_df.merge(mapping_df, left_on='Disease name', right_on='OMIM disease name')
merged_df.head()
sparql_endpoint="http://graphdb.dumontierlab.com/repositories/openpredict"
!curl -H "Accept: text/csv" --data-urlencode query@data/sparql/drugbank-drug-synonym.rq {sparql_endpoint} > data/input/drugbank-drug-synonym.csv
drug_synonym_df = pd.read_csv('data/input/drugbank-drug-synonym.csv')
drug_synonym_df.head()
merged_df = merged_df.merge(drug_synonym_df, left_on='Drug name', right_on='name')
print ('# of drug-disease associations',len(merged_df[['drugid','OMIM ID']].drop_duplicates()))
gold_std_mapped_df = merged_df[['drugid','OMIM ID']].drop_duplicates()
gold_std_mapped_df['drugid'] = gold_std_mapped_df['drugid'].map(lambda x: 'http://bio2rdf.org/drugbank:'+str(x))
gold_std_mapped_df['OMIM ID'] = gold_std_mapped_df['OMIM ID'].map(lambda x: 'http://bio2rdf.org/omim:'+str(x))
gold_std_mapped_df.rename(columns={'OMIM ID':'http://bio2rdf.org/openpredict_vocabulary:indication'},inplace=True)
gold_std_mapped_df= gold_std_mapped_df.set_index('drugid', drop=True)
column_types ={'http://bio2rdf.org/openpredict_vocabulary:indication':'URI'}
graphURI ='http://w3id.org/fairworkflows/dataset.openpredict.indications.R1'
g = ConjunctiveGraph(identifier = URIRef(graphURI))
g= utils.to_rdf(g, gold_std_mapped_df, column_types, 'http://bio2rdf.org/drugbank:Drug' )
g.serialize('data/rdf/predict_gold_standard_omim.nq', format='nquads')
def addMetaData(g, graphURI):
    #generate dataset
    data_source = Dataset(qname=graphURI, graph = g)
    data_source.setURI(graphURI)
    data_source.setTitle('Supplementary data used in the PREDICT')
    data_source.setDescription('Drug indications gold standard and mappings used in the study of "PREDICT: a method for inferring novel drug indications with application to personalized medicine" ')
    data_source.setPublisher('https://www.embopress.org/journal/17444292')
    data_source.setPublisherName('Molecular Systems Biology')
    data_source.addRight('use-share-modify')
    data_source.addTheme('http://www.wikidata.org/entity/Q56863002')
    data_source.setLicense('https://www.embopress.org/page/journal/17444292/about')
    data_source.setHomepage('https://dx.doi.org/10.1038%2Fmsb.2011.26')
    data_source.setVersion('1.0')
    #generate dataset distribution
    data_dist1 = DataResource(qname=graphURI, graph = data_source.toRDF())
    data_dist1.setURI('http:/w3id.org/fairworkflows/dataset.openpredict.mapping/version/1/source')
    data_dist1.setTitle('Mapping between OMIM diseases and UMLS concepts used in the PREDICT study (msb201126-s4.xls)')
    data_dist1.setDescription('This file contains the mappings between OMIM diseases and UMLS concepts used in the PREDICT study')
    data_dist1.setLicense('https://creativecommons.org/publicdomain/zero/1.0/')
    data_dist1.setVersion('1.0')
    data_dist1.setFormat('application/vnd.ms-excel')
    data_dist1.setMediaType('application/vnd.ms-excel')
    data_dist1.setPublisher('https://www.embopress.org/journal/17444292')
    data_dist1.addRight('use-share-modify')
    data_dist1.setDownloadURL('https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s4.xls')
    data_dist1.setRetrievedDate(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    data_dist1.setDataset(data_source.getURI())
    #generate dataset distribution
    data_dist2 = DataResource(qname=graphURI, graph = data_dist1.toRDF())
    data_dist2.setURI('http:/w3id.org/fairworkflows/dataset.openpredict.indications/version/1/source')
    data_dist2.setTitle('Drug indications gold standard used in the PREDICT study (msb201126-s1.xls)')
    data_dist2.setDescription('This file contains the gold standard drug indications used in the PREDICT study')
    data_dist2.setLicense('https://creativecommons.org/publicdomain/zero/1.0/')
    data_dist2.setVersion('1.0')
    data_dist2.setFormat('application/vnd.ms-excel')
    data_dist2.setMediaType('application/vnd.ms-excel')
    data_dist2.setPublisher('https://www.embopress.org/journal/17444292')
    data_dist2.addRight('use-share-modify')
    data_dist2.setDownloadURL('https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s1.xls')
    data_dist2.setRetrievedDate(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    data_dist2.setDataset(data_source.getURI())
    #generate RDF data distribution
    rdf_dist = DataResource(qname=graphURI, graph = data_dist2.toRDF() )
    rdf_dist.setURI('http:/w3id.org/fairworkflows/dataset.openpredict.indications/version/1/rdf/data')
    rdf_dist.setTitle('RDF version of PREDICT drug indication gold standard')
    rdf_dist.setDescription('This file is the RDF version of PREDICT drug indication gold standard')
    rdf_dist.setLicense('http://creativecommons.org/licenses/by/3.0/')
    rdf_dist.setVersion('1.0')
    rdf_dist.setFormat('application/n-quads')
    rdf_dist.setMediaType('application/n-quads')
    rdf_dist.addRight('use-share-modify')
    rdf_dist.addRight('by-attribution')
    rdf_dist.addRight('restricted-by-source-license')
    rdf_dist.setCreateDate(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    rdf_dist.setCreator('https://github.com/fair-workflows/openpredict/src/MappingPREDICTGoldstandard.py')
    rdf_dist.setDownloadURL('https://github.com/fair-workflows/openpredict/known_associations/predict-gold-standard-omim.nq.gz')
    rdf_dist.setDataset(data_dist2.getURI())
    rdf_dist.addSource('https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s1.xls')
    rdf_dist.addSource('https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979/bin/msb201126-s4.xls')
    return rdf_dist.toRDF()
g = ConjunctiveGraph(identifier = graphURI)
g= addMetaData(g, graphURI)
outfile ='data/rdf/predict_gold_standard_omim_metadata.nq'
g.serialize(outfile, format='nquads')
print('RDF is generated at '+outfile)
```
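A quick way to sanity-check the generated N-Quads file is to read it back with rdflib and count the triples (this assumes the serialization step above has been run):
```
from rdflib import ConjunctiveGraph

check = ConjunctiveGraph()
check.parse('data/rdf/predict_gold_standard_omim.nq', format='nquads')
print(len(check), 'triples loaded')
```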
# Seminar 1. Python, numpy
## Starter pack (not part of the course)
👒 Get comfortable with GitHub. Clone the course [repo](https://github.com/AsyaKarpova/ml_nes_2021). Optional [tips](https://t.me/KarpovCourses/213) on formatting your work.
👒 [Leetcode](https://leetcode.com/problemset/all/): a place to practise algorithm problems in Python (and not only Python)
👒 If you have decided you need to understand algorithms but don't want to grind through a whole course, read "Grokking Algorithms".
👒 You can install Slack and join ods ([link](https://ods.ai/join-community)).
## A 10-minute Python refresher
### list comprehensions — [expression for member in iterable]
- Create a list with the squares of the numbers from 1 to 10 using a list comprehension (a possible solution is sketched after the empty cell below)
```
# code
```
- Using a list comprehension, build a new list with the squares of only those numbers from 1 to 10 that are divisible by 3 without a remainder.
```
# code
```
- Using a list comprehension, replace all negative values in the list with 0
```
candy_prices = [35.4, 26.7, -33.8, 41.9, -100, 25]
# code
```
- walrus operator
Generate 20 values of a random variable and keep only those greater than 28 (a possible solution is sketched after the cell below).
```
import random
def get_number():
    return random.randrange(17, 35)
# code
```
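One possible solution, using the walrus operator inside the comprehension's condition:
```
values = [n for _ in range(20) if (n := get_number()) > 28]
print(values)
```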
### lambda function
- Sort the ids in descending order (a possible solution is sketched after the cell below)
```
ids = ['id1', 'id2', 'id30', 'id3','id100', 'id22']
# code
```
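One possible solution with a lambda key, sorting by the numeric part of each id in descending order:
```
sorted_ids = sorted(ids, key=lambda s: int(s[2:]), reverse=True)
print(sorted_ids)  # ['id100', 'id30', 'id22', 'id3', 'id2', 'id1']
```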
### Indexing
- Recall how to output the last element; everything except the last element; every other element starting from the first; the elements in reverse order
```
elems = [1, 2, 3, 'b||']
# code
# code
# code
```
## Problems
- Write a function that produces a vector of cumulative sums. The function must modify the old vector rather than return a new one (a possible solution is sketched after the cell below).
`Input: [1,2,3,4]`
`Output: [1,3,6,10]`, since `[1, 1+2, 1+2+3, 1+2+3+4]`
```
from typing import List
def runningSum(nums: List[int]) -> List[int]:
    pass
nums = [1, 3, 2]
runningSum(nums)
nums
```
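One possible in-place solution (it mutates `nums`, as the task requires):
```
def runningSum(nums: List[int]) -> List[int]:
    # accumulate the sums directly in the input list
    for i in range(1, len(nums)):
        nums[i] += nums[i - 1]
    return nums

nums = [1, 2, 3, 4]
runningSum(nums)
print(nums)  # [1, 3, 6, 10]
```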
- You are given a vector `prices` of share prices. The price on day `[i]` is `prices[i]`.
You may buy the share once and sell it once. Find the maximum profit from this operation. If no profit is possible, return 0 (buy nothing). (A possible solution is sketched after the cell below.)
`Input: [1,10,8,16]`
`Output: 15`, since `16-1`
```
def maxProfit(prices: List[int]) -> int:
    pass
maxProfit([1,10,8,16])
maxProfit([15, 12, 11])
```
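One possible single-pass solution that keeps track of the lowest price seen so far:
```
def maxProfit(prices: List[int]) -> int:
    best, min_price = 0, float('inf')
    for p in prices:
        min_price = min(min_price, p)    # cheapest buy so far
        best = max(best, p - min_price)  # best profit selling at today's price
    return best

print(maxProfit([1, 10, 8, 16]))  # 15
print(maxProfit([15, 12, 11]))    # 0
```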
- You are given an array of numbers in which every number appears twice and only one value appears exactly once.
Return that number (a possible solution is sketched after the cell below).
`Input: [1,1,2,3,3]`
`Output: 2`
```
def singleNumber(nums: List[int]) -> int:
    pass
singleNumber([1,1,2,3,3])
```
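One possible solution via XOR: values that appear twice cancel out, leaving the unique one:
```
from functools import reduce
from operator import xor

def singleNumber(nums: List[int]) -> int:
    return reduce(xor, nums)

print(singleNumber([1, 1, 2, 3, 3]))  # 2
```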
- Determine whether a sequence of brackets is valid (a possible solution is sketched after the cell below).
Possible characters: `'(', ')', '{', '}', '[', ']'.`
`Input: '()[]'`
`Output: True`
===
`Input: '([{}])'`
`Output: True`
===
`Input: '([{])'`
`Output: False`
```
def ValidPar(seq: str) -> bool:
    pass
ValidPar(')')
ValidPar('()')
ValidPar('()[{}]')
```
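One possible stack-based solution (it assumes the string contains only bracket characters, as stated in the task):
```
def ValidPar(seq: str) -> bool:
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in seq:
        if ch in '([{':
            stack.append(ch)                        # opening bracket: remember it
        elif not stack or stack.pop() != pairs[ch]:
            return False                            # closing bracket without a match
    return not stack

print(ValidPar(')'))       # False
print(ValidPar('()[{}]'))  # True
print(ValidPar('([{])'))   # False
```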
## numpy
```
import numpy as np
np.random.seed(10)
```
- Let's recall how to create matrices of zeros, of ones, a diagonal matrix, and a matrix of random values :)
```
np.diag([1, 2, 3])
np.zeros((5,5))
np.ones((5, 5))
np.random.poisson(lam=5, size=(5,3))
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```
- Collect all the values of the matrix into a single list
```
# code
```
- From the two-dimensional array, select into one list the elements with indices `(0,0), (1,2), (2,2)`
```
# code
```
- Create a multiplication table from 1 to 10 (a possible solution is sketched after the cell below)
A warm-up before going into battle :)
```
x = np.arange(4)
y = np.arange(5)
y.shape
y1 = y[:, None]
y1.shape
y[:, np.newaxis] # the same as y1
def mult_table(n: int) -> np.ndarray:
    pass
mult_table(10)
```
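One possible solution with broadcasting (an outer product of two ranges):
```
def mult_table(n: int) -> np.ndarray:
    v = np.arange(1, n + 1)
    return v[:, np.newaxis] * v  # column vector times row vector

mult_table(10)
```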
- Replace the zeros with ones
```
purr = np.array([1, 2, 6, 0, 0 , 7])
# code
```
- One-hot encode the vector. That is, create a matrix whose number of columns equals the number of unique values in the vector and whose number of rows equals the length of the vector. Put a one in a cell of the matrix if the corresponding value occurs at that position of the vector. (A possible solution is sketched after the cell below.)
```
vec2 = np.array([1, 2, 5, 5, 4])
# code
```
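One possible solution that compares the vector with its unique values via broadcasting:
```
values = np.unique(vec2)                              # one column per unique value
one_hot = (vec2[:, np.newaxis] == values).astype(int)
print(one_hot)
```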
- Output the vector of min-to-max ratios for each row of the matrix
```
a = np.random.normal(loc=3, scale=4, size=(4,3))
# code
```
- Remove all missing values from vec
```
vec = np.array([1, 2, 3, np.nan, np.nan, 3])
# code
```
- Swap the first and second columns
```
arr = np.arange(9).reshape(3,3)
# code
```
- Find the value in the vector that is closest to a given one (by absolute difference); a possible solution is sketched after the cell below
```
def find_nearest(array: np.array, val: int) -> int:
    pass
find_nearest(np.array([-1, 2, 3]), 0)
from PIL import Image
import matplotlib.pyplot as plt
%matplotlib inline
```
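One possible implementation of `find_nearest` via the index of the smallest absolute difference:
```
def find_nearest(array: np.array, val: int) -> int:
    idx = np.abs(array - val).argmin()
    return array[idx]

print(find_nearest(np.array([-1, 2, 3]), 0))  # -1
```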
### About images
![alt text](image1.png "Image")
The picture has 4 strips of pixels :) Each strip contains five pixels. Each pixel carries three channels (red, green, blue).
Such an image is usually stored as a three-dimensional array of shape $height \times width \times numChannels$, where the number of channels is typically 3 (or 4 in the case of RGBA). The elements of this array are 8-bit unsigned integers (so the possible values run from 0 to 2^8-1 :) The numbers control how saturated the colour is, from black (0) to maximally saturated (255).
- Draw an RGB picture with numpy :) For example, a pink square
```
picture = np.zeros((512, 512, 3), dtype=np.uint8)
picture[:, :] = [120, 0, 60]
img = Image.fromarray(picture, 'RGB')
img
```
#### For those who didn't follow everything or want more detail
- [Broadcasting](https://numpy.org/devdocs/user/theory.broadcasting.html)
- [Indexing](https://numpy.org/doc/stable/reference/arrays.indexing.html)
```
import matplotlib.pyplot as plt
import numpy as np
import sys;
import power_law_analysis as pl
import glob
import pandas as pd
import power_spectrum as pow_spec
import scipy.interpolate as interpolate
ptomm = 216/1920 # px to mm factor for Samsung T580
def load_trace(filename):
    d = pd.read_csv(filename, sep=" ", names = ["t", "x", "y"])
    d.t = d.t/1000.0 # convert ms to seconds
    d.y = 1200.0 - d.y # invert y axis, tablet's (0,0) is in upper left corner
    d['xmm'] = d.x * ptomm
    d['ymm'] = d.y * ptomm
    d['filename'] = filename
    return d
def load_path(fname):
    d = pd.read_csv(fname, sep=" ", names = ["x", "y"])
    d.y = 1200.0 - d.y # invert y axis, tablet's (0,0) is in upper left corner
    d['xmm'] = d.x * ptomm
    d['ymm'] = d.y * ptomm
    return d
fdraw = glob.glob("data-new/*tracing lemniscate*")[0]
d = load_trace(fdraw)
ftemp = glob.glob("data-new/*LemniscatePath*")[0]
tt = load_path(ftemp)
c = 200
ptom = 216/1920
plt.plot(d.xmm[c:-c], d.ymm[c:-c], '.', color="gray", label="trace")
plt.plot(tt.xmm, tt.ymm, color="black", lw=6, alpha = 0.5, label="template")
plt.legend()
plt.title("Tracing a figure")
plt.axis("equal")
plt.xlim([0, 1920*ptom]); plt.ylim([0, 1200*ptom])
plt.xlabel("x (mm)")
plt.ylabel("y (mm)")
#plt.savefig("trace lemniscate.svg", format="svg")
plt.show()
# analyze lemniscate trace
d.r = pl.analyze([d.xmm, d.ymm, d.t], butter=15, cut=1)
N = len(d.r["logC"])
filt = [i for i in range(N) if d.r["RC"][i] < 1500*10]
logC= d.r["logC"][filt]
logA = d.r["logA"][filt]
plt.title("Angular speed vs curvature log-log plot")
plt.plot(logC, logA, '.', color="gray")
reg_line1 = [d.r['beta'] * i + d.r['offset'] for i in logC]
plt.plot(logC, reg_line1, color="black", label=r"$\beta$={:.3f}".format(d.r["beta"]))
plt.plot([],[], color="white", label="$r^2$={:.3f}".format(d.r["r2"]))
plt.legend()
plt.xlabel("log C")
plt.ylabel("log A")
#plt.savefig("power law.svg", format="svg")
plt.show()
take = 700
c1 = "tab:blue"
c2 = "tab:red"
fig, ax1 = plt.subplots()
ax1.set_xlabel("time (s)")
ax1.set_ylabel("angular speed (rad/s)", color=c1)
ax1.plot(d.r["t"][:take], d.r["Avel"][:take], ".", color=c1)
ax2 = ax1.twinx()
ax2.plot(d.r["t"][:take], d.r["C"][:take], ".", color= c2)
ax2.set_ylabel(r"curvature (mm$^{-1}$)", color=c2)
plt.title("Angular speed and curvature in time")
plt.savefig("velcurvtime.svg", format="svg")
def resample(ts_raw, xs, dt, start=0, end=25):
    xc = np.copy(xs)
    x_spline = interpolate.UnivariateSpline(ts_raw, xc, k=3, s=0)
    new_ts = np.arange(start, end, dt)
    new_xs = x_spline(new_ts)
    return new_ts, new_xs
def ellipse_lead_follow(user_file, target_file, dt = 0.020, start = 2, end = 25):
    ud = load_trace(user_file)
    td = load_trace(target_file)
    cx = (0.5 * (np.min(td.x) + np.max(td.x)))
    cy = (0.5 * (np.min(td.y) + np.max(td.y)))
    N = len(ud.x)
    ut = np.arange(start, end, dt)
    ux = interpolate.UnivariateSpline(ud.t, ud.x, k=3, s=0)(ut)
    uy = interpolate.UnivariateSpline(ud.t, ud.y, k=3, s=0)(ut)
    tt = ut
    tx = interpolate.UnivariateSpline(td.t, td.x, k=3, s=0)(tt)
    ty = interpolate.UnivariateSpline(td.t, td.y, k=3, s=0)(tt)
    pht = np.unwrap(np.arctan2(ty - cy, tx - cx))
    phu = np.unwrap(np.arctan2(uy - cy, ux - cx))
    dph = pht - phu
    return pd.DataFrame({"ut": ut, "ux": ux, "uy": uy,
                         "tt": tt, "tx": tx, "ty": ty,
                         "pht": pht, "phu": phu, "dph": dph})
fs = glob.glob("data-new/*track*10.4.2019*.txt")
fs
r = ellipse_lead_follow(fs[2], fs[0], 0.04, 4.1, 28) ## hiponatural
fig, ax = plt.subplots()
m = ax.scatter(r.ux *ptomm, r.uy * ptomm, c= r.dph, marker='.', cmap="RdBu")
ax.set_title(r"Lead-follow analysis, target $\nu = 2, \beta$ = 1/3")
ax.set_xlabel("x (mm)")
ax.set_ylabel("y (mm)")
plt.axis("equal")
CB = fig.colorbar(m, orientation='vertical', shrink = 0.8)
CB.ax.set_ylabel("phase difference")
ax.set_xlim([0, 1920*ptomm]);
ax.set_ylim([0, 1200*ptomm])
plt.savefig("lead-follow.svg", format="svg")
lims = [i for i in range(len(r.dph)) if r.dph[i] > 0]
plt.plot([4, 28], [0,0], "--", color="gray" )
plt.plot(r.tt, r.dph, color = "gray", label="target lead")
plt.plot(r.tt[lims], r.dph[lims], ".", color="black", label="finger lead")
plt.title("Finger-target phase difference")
plt.xlabel("time (s)")
plt.ylabel("phase difference (rad)")
plt.legend()
plt.savefig("lead-follow2.svg", format="svg")
def load_get_PS(f):
    d = load_path(f)
    d.t = np.arange(len(d.x)) * 0.001
    d.F, d.Y = pow_spec.get_power_spectrum_c(d.x, d.y, d.t)
    return d
fs = glob.glob("data-new/*tracing flower4*.txt")[:1] + \
glob.glob("data-new/*scribble*.txt")[:-1] + \
glob.glob("data-new/*tracing flower3*.txt")[:1] + \
glob.glob("data-new/*tracing ellipse*.txt")[:1]
fs
ds = {}
ds["flower4_user"] = load_get_PS(fs[0])
ds["scribble"] = load_get_PS(fs[1])
ds["flower3_user"] = load_get_PS(fs[2])
ds["ellipse_user"] = load_get_PS(fs[3])
plt.plot(ds["flower4_user"].F, ds["flower4_user"].Y, label=r"$\nu$=0.8", color="green")
plt.plot(ds["flower3_user"].F, ds["flower3_user"].Y, label=r"$\nu$=1.5", color="tab:blue")
plt.plot(ds["ellipse_user"].F, ds["ellipse_user"].Y, label=r"$\nu$=2", color="tab:orange")
plt.plot(ds["scribble"].F, ds["scribble"].Y, label="scribble", color = "darkred")
plt.xlim([0, 8])
plt.xlabel(r"frequency($\nu$)")
plt.ylabel(r"amplitude, |$ \mathcal{F}[log \mathcal{C}(\theta) ] $| ")
plt.title("Curvature power spectrum")
plt.legend()
plt.savefig("pure_freq2.svg", format="svg")
plt.show()
take = -1500
step = 2
plt.plot(ds["flower4_user"].x[:take:step], ds["flower4_user"].y[:take:step], color="green")
take = -2000
step = 2
plt.plot(ds["flower3_user"].x[:take:step] + 1800, ds["flower3_user"].y[:take:step], color="tab:blue")
take = -2000
step = 2
plt.plot(ds["ellipse_user"].x[:take:step] + 3500, ds["ellipse_user"].y[:take:step], color="tab:orange")
take = -1500
step = 2
plt.plot(ds["scribble"].x[:take:step] + 5500, ds["scribble"].y[:take:step], color="darkred")
plt.axis("equal")
plt.savefig("c:/dev/baw/plots/spectrum_traces.svg", format="svg")
### segmentation
cx = 963.0
cy = 466.0
r = 1500.0 # area triangle size
angles = np.arange(0.5, 7.5, 1.0) * (np.pi*2.0) /6.0
x_points = [cx + r*np.cos(a) for a in angles]
y_points = [cy + r*np.sin(a) for a in angles]
rm = 250 # center radius size for exclusion
distance = lambda x1, y1, x2, y2: np.sqrt((x2-x1)**2 + (y2-y1)**2)
triangles = [ [(cx, cy), (x_points[i], y_points[i]), (x_points[i+1], y_points[i+1])] for i in range(6)]
def subtract(A, B):
    return (A[0] - B[0], A[1] - B[1])
def SameSide(p1, p2, a, b):
    cp1 = np.cross(subtract(b, a), subtract(p1, a))
    cp2 = np.cross(subtract(b, a), subtract(p2, a))
    return (np.dot(cp1, cp2) >= 0)
def PointInTriangle(p, a, b, c):
    return (SameSide(p, a, b, c) and SameSide(p, b, a, c) and SameSide(p, c, a, b))
def whichTriangle(p):
    if distance(cx, cy, p[0], p[1]) < rm:
        return 6
    for i in range(6):
        if (PointInTriangle(p, *(triangles[i]))):
            return i
    return 6
colors = ["red", "green", "blue", "cyan", "yellow", "brown", "black"]
def get_segments(t):
    return np.asarray([whichTriangle((a, b)) for a, b in zip(t.x, t.y)])
def get_sequence_no_center(t):
    segment0 = get_segments(t)
    index = np.argwhere(segment0==6) # 6 is center
    no_center = np.delete(segment0, index)
    repeats = np.diff(no_center)
    final = no_center[:-1][repeats != 0]
    return final
def get_sequence(t):
    segment0 = get_segments(t)
    repeats = np.diff(segment0)
    final = segment0[:-1][repeats != 0]
    return final
fs = glob.glob("c:/dev/baw/data-new/sequences/*flower3*.txt")
for f in fs[:]:
    print(f)
    d = pd.read_csv(f, sep=" ", names = ["t", "x", "y"])
    c = get_segments(d)
    #filt = [i for i in range(len(c)) if c[i] != 6]
    #plt.scatter(d.x[filt], d.y[filt], c=c[filt], cmap="Accent", marker='.')
    plt.scatter(d.x, d.y, c=c, cmap="Dark2", marker='.')
    plt.axis("equal")
    plt.savefig("c:/dev/baw/plots/Segmentation_two.svg", format="svg")
    plt.show()
fs = glob.glob("c:/dev/baw/data-new/sequences/*flower3*2019*.txt")
segs = [ get_sequence(pd.read_csv(f, sep=" ", names = ["t", "x", "y"])) for f in fs]
a = segs[2].argmax(0)
b = segs[0].argmax(0)
c = segs[1].argmax(0)
q1 = pd.DataFrame([(segs[0][b+1:b+10])[::-1]])
q2 = pd.DataFrame([segs[1][c:c+9]])
plt.figure(figsize=[12, 4])
plt.imshow(q1, cmap="Dark2")
plt.savefig("c:/dev/baw/plots/Segmentation-sequence.svg", format="svg")
plt.show()
plt.figure(figsize=[12, 4])
plt.imshow(q2, cmap="Dark2")
plt.savefig("c:/dev/baw/plots/Segmentation-sequence-two.svg", format="svg")
plt.show()
f = r"C:/dev/baw/data-new\February1986MasDer scribble 10.4.2019. 15.23.12.txt"
d = load_trace(f)
d.r = pl.analyze([d.xmm, d.ymm, d.t], butter=8, cut=1)
N = len(d.r["x"])
phi0 = [np.arctan2(d.r["y"][i] - d.r["y"][i-1], d.r["x"][i] - d.r["x"][i-1]) for i in range(1, N)]
phi = np.unwrap(phi0)
cc = np.sign(np.diff(phi))
colors = lambda q: ["tab:red", "black", "tab:blue"][int(q)]
plt.title("Accumulated local angle")
for i in np.arange(0, -32, -2*np.pi):
    plt.plot([5, 8], [i, i], color="gray")
for i in range(1000, 1600):
    plt.scatter(d.r["t"][i], phi[i] - phi[1000], c=colors(cc[i] + 1), marker = '.')
i = 1286
plt.scatter([],[], c=colors(0), marker = '.', label="CW")
plt.scatter([],[], c=colors(2), marker = '.', label="CCW")
plt.plot([], [], c="gray", label=r"n * 2$\pi$")
plt.scatter(d.r["t"][i], phi[i] - phi[1000], color="black", marker = 'o', label="direction change")
plt.ylabel("angle (rad)")
plt.xlabel("time (s)")
plt.legend()
plt.savefig("c:/dev/baw/plots/local_angle.svg", format="svg")
plt.show()
x = d.r["x"][1:-1]
y = d.r["y"][1:-1]
for i in range(1000, 1600):
    plt.scatter(x[i], y[i], c=colors(cc[i] + 1), marker = '.')
plt.xlim([0, 1920*ptom]); plt.ylim([0, 1200*ptom])
plt.scatter([],[], c=colors(0), marker = '.', label="CW")
plt.scatter([],[], c=colors(2), marker = '.', label="CCW")
i = 1286
plt.scatter(x[i], y[i], color="black", marker = 'o', label="direction change")
plt.title("Clockwise and counterclockwise scribbling")
plt.xlabel("x (mm)")
plt.ylabel("y (mm)")
plt.legend()
plt.savefig("c:/dev/baw/plots/cwccw.svg", format="svg")
plt.show()
fs = glob.glob("C:/dev/baw/data-new/*user*tracking*"); fs[0]
duser = pd.read_csv(fs[0], sep=" ", names = ["t", "x", "y"]);
fss = glob.glob("C:/dev/baw/data-new/*target*tracking*"); fss[0]
dtarget = pd.read_csv(fss[0], sep=" ", names = ["t", "x", "y"]);
plt.title("Timestamp differences")
take = 180
plt.plot(dtarget.t[-take:-1] / 1000.0, np.diff(dtarget.t[-take:]), ".", label="screen refresh", color="gray")
take = 250
plt.plot(duser.t[-take:-1] / 1000.0, np.diff(duser.t[-take:]), ".", label="touch events", color="black")
plt.ylim([0, 18])
plt.ylabel("timestamp difference (ms)")
plt.xlabel("time (s)")
plt.legend()
plt.savefig("C:/dev/baw/plots/timestamps.svg", format="svg")
plt.show()
```
### Loading and combining data
```
#importing libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
#reading the files as dataframes
df1 = pd.read_csv('ml_case_training_data.csv')
df2 = pd.read_csv('ml_case_training_hist_data.csv')
df3 = pd.read_csv('ml_case_training_output.csv')
pd.DataFrame({"Missing Value (%) ":df1.isnull().sum()/len(df1.index)*100})
for i in df1.columns:
    if (df1[i].isnull().sum()/len(df1.index)*100) > 75:
        del df1[i]
df1.describe()
df1.dtypes
df3.describe()
pd.DataFrame({"Missing Value (%) ":df3.isnull().sum()/len(df3.index)*100})
pd.DataFrame({"Missing Value (%) ":df2.isnull().sum()/len(df2.index)*100})
df1.columns
#past consumption data
p_cons = df1[['id','cons_12m', 'cons_gas_12m',
'cons_last_month', 'has_gas', 'imp_cons']]
```
### Connecting SQL
```
from pandasql import sqldf
p_cons = sqldf("""select p_cons.*, df3.churn
from p_cons
inner join df3
on p_cons.id = df3.id""")
p_cons['churn'] = p_cons['churn'].replace({0:'No', 1:'Yes'})
sns.pairplot(p_cons, hue = 'churn')
p_cons.columns
```
### Data Visualization
```
def stacked_bars(dataframe, title_, size_ = (18,10), rot_=0,
                 legend = "upper right"):
    ax = dataframe.plot(kind = "bar",
                        stacked = True,
                        figsize = size_,
                        rot = rot_,
                        title = title_)
    ann_stacked_bars(ax, textsize=14)
    plt.legend(["Retention", "Churn"], loc=legend)
    plt.ylabel("Company base (%)")
    plt.show()

def ann_stacked_bars(ax, pad=0.99, colour = 'white', textsize=13):
    for p in ax.patches:
        value = str(round(p.get_height(),1))
        if value == '0.0':
            continue
        ax.annotate(value,
                    ((p.get_x()+p.get_width()/2)*pad-0.05,
                     (p.get_y()+p.get_height()/2)*pad),
                    color= colour,
                    size=textsize,
                    )
p_cons['churn'] = pd.Series(np.where((p_cons['churn']==0), "No", "Yes"))
churn_sum = df3.groupby(df3['churn']).count()
churn_per = churn_sum/churn_sum.sum()*100
stacked_bars(churn_per.transpose(), "Churn Status", (5,5), legend= "lower right")
```
10% of customers have already churned.
```
sales = df1[['id','channel_sales']]
sales = sqldf("""select sales.*, df3.churn
from sales
inner join df3
on sales.id = df3.id""")
sales = sales.groupby([sales['channel_sales'], sales['churn']])['id'].count().unstack(level=1).fillna(0)
sales = (sales.div(sales.sum(axis=1), axis=0)*100).sort_values(by=[1], ascending=False)
sales
stacked_bars(sales, "Churn Status", legend= "lower right", rot_=30)
```
### SME Activity
```
activity = df1[['id','activity_new']]
activity = pd.merge(activity, df3, on='id')
activity
activity = activity.groupby([activity["activity_new"],
activity["churn"]])["id"].count().unstack(level=1).sort_values(by=[0], ascending=False)
activity
activity.plot(kind='bar',
figsize=(18,10),
width = 4,
stacked = True,)
plt.xlabel("Activity")
plt.ylabel("Number of Companies")
plt.xticks([])
plt.show();
activity_tot = activity.fillna(0)[0]+activity.fillna(0)[1]
activity_per = activity.fillna(0)[1]/(activity_tot)*100
pd.DataFrame({"Percentage churn": activity_per,
"Total companies": activity_tot }).sort_values(by="Percentage churn",
ascending=False).head(10)
```
### Consumption
```
cons = df1[["id","cons_12m", "cons_gas_12m","cons_last_month", "imp_cons", "has_gas" ]]
cons = pd.merge(cons, df3, on='id')
def cons_plot(dataframe, column, bins_=50, figsize=(18,25)):
    temp = pd.DataFrame({"Retention": dataframe[dataframe["churn"]==0][column],
                         "Churn": dataframe[dataframe["churn"]==1][column]})
    temp[['Retention', 'Churn']].plot(kind='hist', bins=bins_, stacked = True)
    plt.xlabel(column)
    plt.ticklabel_format(style='plain', axis='x')
cons_plot(cons, 'cons_last_month')
cons_plot(cons, 'imp_cons')
cons_plot(cons, 'cons_12m')
plt.xticks(rotation = 45);
fig, axs = plt.subplots(nrows=4, figsize=(18,25))
# Plot histogram
sns.boxplot(cons["cons_12m"], ax=axs[0])
sns.boxplot(cons[cons["has_gas"] == "t"]["cons_gas_12m"], ax=axs[1])
sns.boxplot(cons["cons_last_month"], ax=axs[2])
sns.boxplot(cons["imp_cons"], ax=axs[3])
# Remove scientific notation
for ax in axs:
    ax.ticklabel_format(style='plain', axis='x')
# Set x-axis limit
axs[0].set_xlim(-200000, 2000000)
axs[1].set_xlim(-200000, 2000000)
axs[2].set_xlim(-20000, 100000)
plt.show()
```
### Forecast
```
forc = df1[["id","forecast_cons_12m",
"forecast_cons_year","forecast_discount_energy","forecast_meter_rent_12m",
"forecast_price_energy_p1","forecast_price_energy_p2",
"forecast_price_pow_p1"]]
forc = pd.merge(forc, df3, on='id')
forc
def forc_plot(dataframe, column, ax, bins_=50, figsize=(18,25)):
    temp = pd.DataFrame({"Retention": dataframe[dataframe["churn"]==0][column],
                         "Churn": dataframe[dataframe["churn"]==1][column]})
    temp[['Retention', 'Churn']].plot(kind='hist', ax=ax, bins=bins_, stacked = True)
    ax.set_xlabel(column)
    ax.ticklabel_format(style='plain', axis='x')

fig, axs = plt.subplots(nrows = 7, figsize=(18,25))
k = 0
for i in forc.columns:
    if i == 'id' or i == 'churn':
        continue
    forc_plot(forc, i, axs[k])
    k = k + 1
```
### Contract Type
```
contract_type = df1[["id", "has_gas"]]
contract_type = pd.merge(contract_type, df3, on='id')
contract = contract_type.groupby([contract_type["churn"],
contract_type["has_gas"]])["id"].count().unstack(level=0)
contract_percentage = (contract.div(contract.sum(axis=1), axis=0)*100).sort_values(by=[1], ascending=False)
contract_percentage
stacked_bars(contract_percentage, "Contract Type")
df1.dtypes
pd.DataFrame({"Missing Value (%) ":df1.isnull().sum()/len(df1.index)*100})
df2.dtypes
pd.DataFrame({"Missing Value (%) ":df2.isnull().sum()/len(df2.index)*100})
df1.loc[df1["date_modif_prod"].isnull(),"date_modif_prod"] = df1["date_modif_prod"].value_counts().index[0]
df1.loc[df1["date_end"].isnull(),"date_end"] = df1["date_end"].value_counts().index[0]
df1.loc[df1["date_renewal"].isnull(),"date_renewal"] = df1["date_renewal"].value_counts().index[0]
df2.loc[df2["price_p1_var"].isnull(),"price_p1_var"] = df2["price_p1_var"].median()
df2.loc[df2["price_p2_var"].isnull(),"price_p2_var"] = df2["price_p2_var"].median()
df2.loc[df2["price_p3_var"].isnull(),"price_p3_var"] = df2["price_p3_var"].median()
df2.loc[df2["price_p1_fix"].isnull(),"price_p1_fix"] = df2["price_p1_fix"].median()
df2.loc[df2["price_p2_fix"].isnull(),"price_p2_fix"] = df2["price_p2_fix"].median()
df2.loc[df2["price_p3_fix"].isnull(),"price_p3_fix"] = df2["price_p3_fix"].median()
pd.DataFrame({"Missing Value (%) ":df1.isnull().sum()/len(df1.index)*100})
df2.loc[df2["price_p1_fix"] < 0,"price_p1_fix"] = df2["price_p1_fix"].median()
df2.loc[df2["price_p2_fix"] < 0,"price_p2_fix"] = df2["price_p2_fix"].median()
df2.loc[df2["price_p3_fix"] < 0,"price_p3_fix"] = df2["price_p3_fix"].median()
pd.DataFrame({"Missing Value (%) ":df2.isnull().sum()/len(df2.index)*100})
df1["date_activ"] = pd.to_datetime(df1["date_activ"], format='%Y-%m-%d')
df1["date_end"] = pd.to_datetime(df1["date_end"], format='%Y-%m-%d')
df1["date_modif_prod"] = pd.to_datetime(df1["date_modif_prod"], format='%Y-%m-%d')
df1["date_renewal"] = pd.to_datetime(df1["date_renewal"], format='%Y-%m-%d')
df2["price_date"] = pd.to_datetime(df2["price_date"], format='%Y-%m-%d')
df1.dtypes
df2.dtypes
pd.DataFrame({"Missing Value (%) ":df1.isnull().sum()/len(df1.index)*100})
df1.columns
df1.loc[df1["forecast_price_energy_p2"].isnull(),"forecast_price_energy_p2"] = df1["forecast_price_energy_p2"].median()
df1.loc[df1["forecast_price_energy_p1"].isnull(),"forecast_price_energy_p1"] = df1["forecast_price_energy_p1"].median()
df1.loc[df1["forecast_price_pow_p1"].isnull(),"forecast_price_pow_p1"] = df1["forecast_price_pow_p1"].median()
df1.loc[df1["forecast_discount_energy"].isnull(),"forecast_discount_energy"] = df1["forecast_discount_energy"].median()
df1.to_csv(r'C:\Users\juhia\New folder\Desktop\Groups\Proj_Data\BCG\df1_clean')
df2.to_csv(r'C:\Users\juhia\New folder\Desktop\Groups\Proj_Data\BCG\df2_clean')
```
```
#default_exp database
%load_ext autoreload
%autoreload 2
```
# database
> helpers to get and query a sqlalchemy engine for DB containing metadata on experiments
```
#export
from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, select
import pandas as pd
import getpass
import json
#export
def get_db_engine(username, password, ip_adress, model_name, rdbms="mysql"):
"""
Creates a sqlalchemy engine to query a database.
params:
- username: Username used to connect
- password: Password of the user
        - ip_adress: IP address of the database
- model_name: Name of the model of the database to connect to
- rdbms: Backend engine of the database
return:
- A sqlalchemy engine connected to the database
"""
engine = create_engine("%s://%s:%s@%s/%s" % (rdbms, username, password, ip_adress, model_name),echo = False)
test_query = "SELECT * FROM Project"
pd.read_sql_query(test_query, engine)
return engine
def prompt_credentials(user=None, db_adress=None):
"""
    Helper function to prompt for the password, and additionally for the user and the database IP address
    when they are left as None.
    params:
        - user: None to prompt, or the name of the user.
        - db_adress: None to prompt, or the database IP address
    return:
        - username, password and database IP address
    """
    if user is None:
        user = input('Username: ')  # the builtin input() takes the prompt positionally
    passwd = getpass.getpass(prompt='Password: ')
    if db_adress is None:
        db_adress = input('DB IP: ')
return user, passwd, db_adress
#export
def get_record_essentials(engine, record_id):
"""
    Retrieves the essential information about a record.
params:
- engine: Database engine
- record_id: ID of the record
return:
        - Dictionary of pandas DataFrames with the record's essential information
"""
q_select_record = "SELECT * FROM Record WHERE id = %d" % record_id
q_select_cell = "SELECT * FROM Cell WHERE record_id = %d" % record_id
df_record = pd.read_sql_query(q_select_record, engine)
df_cell = pd.read_sql_query(q_select_cell, engine)
experiment_id = df_record["experiment_id"][0]
q_select_experiment = "SELECT * FROM Experiment WHERE id = %d" % experiment_id
df_experiment = pd.read_sql_query(q_select_experiment, engine)
mouse_id = df_experiment["mouse_id"][0]
q_select_mouse = "SELECT * FROM Mouse WHERE id = %d" % mouse_id
df_mouse = pd.read_sql_query(q_select_mouse, engine)
tool_id = df_record["tool_id"][0]
q_select_tool = "SELECT * FROM Tool WHERE id = %d" % tool_id
df_tool = pd.read_sql_query(q_select_tool, engine)
q_select_map = "SELECT * FROM Map WHERE tool_id = %d" % tool_id
df_map = pd.read_sql_query(q_select_map, engine)
res_dict = {"record": df_record, "cell": df_cell,
"experiment": df_experiment, "mouse": df_mouse,
"tool": df_tool, "map": df_map}
return res_dict
#export
def get_stim_params(engine, stim_hashes):
"""
Retrieves the parameters of a stimulus specified by its hash key.
params:
- engine: Database engine
        - stim_hashes: Stimulus hash, or a list of hashes
return:
- Pandas Dataframe of stimulus parameters
"""
    # Writing a single joined query speeds up the function compared to querying all the
    # individual tables and filtering each of them afterwards
if not isinstance(stim_hashes, list):
stim_hashes = [stim_hashes]
if len(stim_hashes)==1:
str_hashes = "('"+stim_hashes[0]+"')"
else:
str_hashes=str(tuple(stim_hashes))
query = """SELECT Stim.name AS stim_name, description, barcode, stim_comment, stimulus_id,
screen_id, hash, date AS date_compiled, compiled_comment, compiled_id, parameter_id,
Parameter.name as param_name, value as param_value
FROM (SELECT Compiled.id as comp_id, name, description, barcode, Stimulus.comment AS stim_comment, stimulus_id, screen_id, hash, date, Compiled.comment AS compiled_comment FROM Stimulus
LEFT JOIN Compiled ON stimulus_id=Stimulus.id WHERE hash IN """+str_hashes+""") AS Stim
LEFT JOIN Compiled_Parameter ON compiled_id = comp_id
LEFT JOIN Parameter ON parameter_id = Parameter.id"""
df_params = pd.read_sql_query(query, engine)
return df_params
#export
def get_table(engine, table_name):
"""
Return the entire content of a table in a pandas Dataframe.
params:
- engine: Database engine
- table_name: Name of the table
return:
- Pandas Dataframe of the whole table
"""
query = """SELECT * FROM """+str(table_name)
df_table = pd.read_sql_query(query, engine)
return df_table
#export
def stim_param_to_dict(param_df, md5):
    """
    Converts the parameter rows of a single stimulus (selected by its md5 hash)
    into a plain dictionary, JSON-decoding the values where possible.
    """
    param_dict = {}
    stim_mask = param_df["hash"] == md5
    for _, row in param_df[stim_mask][["param_name", "param_value"]].iterrows():
        try:
            param = json.loads(row["param_value"])
        except (ValueError, TypeError):
            # keep the raw value if it is not valid JSON
            param = row["param_value"]
        param_dict[row["param_name"]] = param
    return param_dict
#hide
from nbdev.export import *
notebook2script()
```
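A minimal usage sketch chaining the helpers above. The model name `neuro_db` and the record id are placeholders for illustration, not values taken from this project.
```
# Hypothetical usage of the helpers defined in this notebook; connection details are placeholders.
user, passwd, db_ip = prompt_credentials()                 # interactive prompts
engine = get_db_engine(user, passwd, db_ip, "neuro_db")    # "neuro_db" is an assumed model name
essentials = get_record_essentials(engine, record_id=1)    # record id 1 is illustrative
print(essentials["record"].head())
projects = get_table(engine, "Project")                    # the table used by the connection test above
```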
# Chapter 7. Text document categorization - (6) Multi-class classification with a CNN model
- Now let's apply the same model to multi-class classification.
- For this we use the 20 Newsgroups dataset.
- The 20 Newsgroups data is fetched through an sklearn function call, so no separate download is needed.
- The model only applies embedding initialization from a pre-trained GloVe model.
```
import os
import config
from dataloader.loader import Loader
from preprocessing.utils import Preprocess, remove_empty_docs
from dataloader.embeddings import GloVe
from model.cnn_document_model import DocumentModel, TrainingParameters
from keras.callbacks import ModelCheckpoint, EarlyStopping
import numpy as np
from keras.utils import to_categorical
import keras.backend as K
from sklearn.manifold import TSNE
from utils import scatter_plot
from sklearn.metrics import classification_report,accuracy_score,confusion_matrix
```
## Loading the 20 Newsgroups dataset
- The 20 Newsgroups dataset is downloaded automatically through sklearn's fetch function.
```
dataset = Loader.load_20newsgroup_data(subset='train')
corpus, labels = dataset.data, dataset.target
corpus, labels = remove_empty_docs(corpus, labels)
test_dataset = Loader.load_20newsgroup_data(subset='test')
test_corpus, test_labels = test_dataset.data, test_dataset.target
test_corpus, test_labels = remove_empty_docs(test_corpus, test_labels)
print(f'corpus size : {len(corpus)}')
print(f'labels size : {len(labels)}')
print(f'test_corpus size : {len(test_corpus)}')
print(f'test_labels size : {len(test_labels)}')
```
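For reference, a minimal sketch of the direct scikit-learn call that the project's `Loader.load_20newsgroup_data` helper presumably wraps (an assumption about that helper):
```
# Hypothetical equivalent of Loader.load_20newsgroup_data, using scikit-learn directly.
from sklearn.datasets import fetch_20newsgroups

train_bunch = fetch_20newsgroups(subset='train')   # downloaded and cached on first use
test_bunch = fetch_20newsgroups(subset='test')
print(len(train_bunch.data), len(test_bunch.data))
print(train_bunch.target_names[:5])
```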
## Mapping the 20 groups to 6 super-groups
```
six_groups = {
'comp.graphics':0,'comp.os.ms-windows.misc':0,'comp.sys.ibm.pc.hardware':0,
'comp.sys.mac.hardware':0, 'comp.windows.x':0,
'rec.autos':1, 'rec.motorcycles':1, 'rec.sport.baseball':1, 'rec.sport.hockey':1,
'sci.crypt':2, 'sci.electronics':2,'sci.med':2, 'sci.space':2,
'misc.forsale':3,
'talk.politics.misc':4, 'talk.politics.guns':4, 'talk.politics.mideast':4,
'talk.religion.misc':5, 'alt.atheism':5, 'soc.religion.christian':5
}
map_20_2_6 = [six_groups[dataset.target_names[i]] for i in range(20)]
labels = [six_groups[dataset.target_names[i]] for i in labels]
test_labels = [six_groups[dataset.target_names[i]] for i in test_labels]
map_20_2_6
```
## Converting the dataset to index sequences
```
Preprocess.MIN_WD_COUNT=5
preprocessor = Preprocess(corpus=corpus)
corpus_to_seq = preprocessor.fit()
test_corpus_to_seq = preprocessor.transform(test_corpus)
```
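`Preprocess` is a project helper; as a rough sketch of what converting a corpus into padded index sequences typically looks like with Keras (an assumption, not the helper's actual implementation):
```
# Rough sketch of corpus -> padded index sequences; the maxlen value is purely illustrative.
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)                      # build the word_index from the training corpus
train_seq = pad_sequences(tokenizer.texts_to_sequences(corpus), maxlen=512)
test_seq = pad_sequences(tokenizer.texts_to_sequences(test_corpus), maxlen=512)
```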
## Embedding initialization
Load the GloVe model stored in the data directory and build the initialized embedding matrix.
```
glove=GloVe(50)
initial_embeddings = glove.get_embedding(preprocessor.word_index)
```
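`GloVe` is also a project helper; below is a sketch of how an initialized embedding matrix is typically assembled from pre-trained 50-dimensional GloVe vectors (the file name and location are assumptions):
```
# Sketch: build an embedding matrix from pre-trained GloVe vectors (file path is assumed).
import numpy as np

embedding_dim = 50
glove_vectors = {}
with open('data/glove.6B.50d.txt', encoding='utf-8') as f:
    for line in f:
        parts = line.split()
        glove_vectors[parts[0]] = np.asarray(parts[1:], dtype='float32')

embedding_matrix = np.zeros((len(preprocessor.word_index) + 1, embedding_dim))
for word, idx in preprocessor.word_index.items():
    vector = glove_vectors.get(word)
    if vector is not None:
        embedding_matrix[idx] = vector   # words missing from GloVe keep all-zero rows
```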
## Building the model
```
newsgrp_model = DocumentModel(vocab_size=preprocessor.get_vocab_size(),
sent_k_maxpool = 5,
sent_filters = 20,
word_kernel_size = 5,
word_index = preprocessor.word_index,
num_sentences=Preprocess.NUM_SENTENCES,
embedding_weights=initial_embeddings,
conv_activation = 'relu',
train_embedding = True,
learn_word_conv = True,
learn_sent_conv = True,
sent_dropout = 0.4,
hidden_dims=64,
input_dropout=0.2,
hidden_gaussian_noise_sd=0.5,
final_layer_kernel_regularizer=0.1,
num_hidden_layers=2,
num_units_final_layer=6)
```
## Saving the training parameters
```
if not os.path.exists(os.path.join(config.MODEL_DIR, '20newsgroup')):
os.makedirs(os.path.join(config.MODEL_DIR, '20newsgroup'))
train_params = TrainingParameters('6_newsgrp_largeclass',
model_file_path = config.MODEL_DIR+ '/20newsgroup/model_6_01.hdf5',
model_hyper_parameters = config.MODEL_DIR+ '/20newsgroup/model_6_01.json',
model_train_parameters = config.MODEL_DIR+ '/20newsgroup/model_6_01_meta.json',
num_epochs=20,
batch_size = 128,
validation_split=.10,
learning_rate=0.01)
train_params.save()
newsgrp_model._save_model(train_params.model_hyper_parameters)
```
## Training the model
```
# compile the model
newsgrp_model._model.compile(loss="categorical_crossentropy",
optimizer=train_params.optimizer,
metrics=["accuracy"])
# callback: model checkpointing
checkpointer = ModelCheckpoint(filepath=train_params.model_file_path,
verbose=1,
save_best_only=True,
save_weights_only=True)
# callback: early stopping
early_stop = EarlyStopping(patience=2)
x_train = np.array(corpus_to_seq)
y_train = to_categorical(np.array(labels))
x_test = np.array(test_corpus_to_seq)
y_test = to_categorical(np.array(test_labels))
print(f'x_train.shape : {x_train.shape}')
print(f'y_train.shape : {y_train.shape}')
print(f'x_test.shape : {x_test.shape}')
print(f'y_test.shape : {y_test.shape}')
# set the learning rate
K.set_value(newsgrp_model.get_classification_model().optimizer.lr, train_params.learning_rate)
# start training
newsgrp_model.get_classification_model().fit(x_train, y_train,
batch_size=train_params.batch_size,
epochs=train_params.num_epochs,
verbose=2,
validation_split=train_params.validation_split,
callbacks=[checkpointer,early_stop])
# evaluate the model
newsgrp_model.get_classification_model().evaluate( x_test, y_test, verbose=2)
# predictions on the test set
preds = newsgrp_model.get_classification_model().predict(x_test)
preds_test = np.argmax(preds, axis=1)
```
## Evaluating the model
```
print(classification_report(test_labels, preds_test))
# in the confusion matrix, rows are the actual classes and columns are the predicted classes
print(confusion_matrix(test_labels, preds_test))
print(accuracy_score(test_labels, preds_test))
```
## Visualizing the document embeddings
The document CNN model introduced earlier contains a document embedding layer. Let's visualize the features the model has learned in this layer.
```
doc_embeddings = newsgrp_model.get_document_model().predict(x_test)
print(doc_embeddings.shape)
doc_proj = TSNE(n_components=2, random_state=42, ).fit_transform(doc_embeddings)
f, ax, sc, txts, plt = scatter_plot(doc_proj, np.array(test_labels))
f.savefig('assets/handson06_nws_grp_embd.png')
```
- The scattered labels (0-5) represent the six classes.
- The model learned a suitable embedding and separates the six classes well in the 80-dimensional space.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.insert(0, '/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/src')
import read_video
control0 = pd.read_csv("/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/results/results_DsRed2-HeLa_10_26_Cell0.csv")
control1 = pd.read_csv("/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/results/results_DsRed2-HeLa_10_26_Cell1.csv")
control2 = pd.read_csv("/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/results/results_DsRed2-HeLa_10_26_Cell2.csv")
control3 = pd.read_csv("/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/results/results_DsRed2-HeLa_10_26_Cell3.csv")
LLO0 = pd.read_csv("/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/results/results_DsRed2-HeLa_2_21_LLO_Cell1.csv")
LLO1 = pd.read_csv("/Users/BrittanyDorsey/Desktop/my-notebook/MyRepo/ornet-reu-2018/results/results_DsRed2-HeLa_4_5_LLO1_Cell0.csv")
c0 = read_video.read_video("/Users/BrittanyDorsey/Desktop/my-notebook/OrNet-Data/DsRed2-HeLa_10_26_Cell0.mov", first_only=True)
c1 = read_video.read_video("/Users/BrittanyDorsey/Desktop/my-notebook/OrNet-Data/DsRed2-HeLa_10_26_Cell1.mov", first_only=True)
c2 = read_video.read_video("/Users/BrittanyDorsey/Desktop/my-notebook/OrNet-Data/DsRed2-HeLa_10_26_Cell2.mov", first_only=True)
c3 = read_video.read_video("/Users/BrittanyDorsey/Desktop/my-notebook/OrNet-Data/DsRed2-HeLa_10_26_Cell3.mov", first_only=True)
l0 = read_video.read_video("/Users/BrittanyDorsey/Desktop/my-notebook/OrNet-Data/DsRed2-HeLa_2_21_LLO_Cell1.mov", first_only=True)
l1 = read_video.read_video("/Users/BrittanyDorsey/Desktop/my-notebook/OrNet-Data/DsRed2-HeLa_4_5_LLO1_Cell0.mov", first_only=True)
```
## The cells
Below is the first frame of each of the videos used, to serve as a comparison for the results we got
```
plt.subplot(2, 2, 1)
plt.imshow(c0[1])
plt.subplot(2, 2, 2)
plt.imshow(c1[1])
plt.subplot(2, 2, 3)
plt.imshow(c2[1])
plt.subplot(2, 2, 4)
plt.imshow(c3[1])
plt.subplot(1, 2, 1)
plt.imshow(l0[1])
plt.subplot(1, 2, 2)
plt.imshow(l1[1])
```
# LLR
I will first test the log-likelihood ratio to compare the control cells to LLO cells and see if there is a pattern that distinguishes the two.
```
cLLR0 = control0['LLR']
cLLR1 = control1['LLR']
cLLR2 = control2['LLR']
cLLR3 = control3['LLR']
lLLR0 = LLO0['LLR']
lLLR1 = LLO1['LLR']
c_LLR0 = cLLR0.values.reshape(512, 512)
c_LLR1 = cLLR1.values.reshape(512, 512)
c_LLR2 = cLLR2.values.reshape(512, 512)
c_LLR3 = cLLR3.values.reshape(512, 512)
l_LLR0 = lLLR0.values.reshape(512, 512)
l_LLR1 = lLLR1.values.reshape(512, 512)
```
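The plotting cells below repeat the same subplot/imshow pattern for each map; a small helper like this sketch (not part of the original notebook) could express those comparisons more compactly:
```
# Sketch of a reusable grid plot for the reshaped 512x512 statistic maps.
def show_maps(maps, ncols=2):
    """Display a list of (title, 2-D array) pairs as an image grid with colorbars."""
    nrows = int(np.ceil(len(maps) / ncols))
    fig, axes = plt.subplots(nrows, ncols, figsize=(6 * ncols, 5 * nrows))
    for ax, (title, img) in zip(np.ravel(axes), maps):
        im = ax.imshow(img)
        ax.set_title(title)
        fig.colorbar(im, ax=ax)
    plt.show()

# Example: show_maps([("Cell0", c_LLR0), ("Cell1", c_LLR1), ("Cell2", c_LLR2), ("Cell3", c_LLR3)])
```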
## Control cells
```
plt.subplot(2, 2, 1)
plt.imshow(c_LLR0)
plt.colorbar()
plt.subplot(2, 2, 2)
plt.imshow(c_LLR1)
plt.colorbar()
plt.subplot(2, 2, 3)
plt.imshow(c_LLR2)
plt.colorbar()
plt.subplot(2, 2, 4)
plt.imshow(c_LLR3)
plt.colorbar()
```
I would think that the control cells would at least have some similarity, but their values are completely different, even just for the background portion of the video that doesn't contain a cell, which is concerning.
```
plt.subplot(2, 2, 1)
plt.hist(cLLR0)
plt.subplot(2, 2, 2)
plt.hist(cLLR1)
plt.subplot(2, 2, 3)
#plt.hist(cLLR2) # this one throws an error for some reason
plt.subplot(2, 2, 4)
plt.hist(cLLR3)
```
## LLO cells
```
plt.subplot(1, 2, 1)
plt.imshow(l_LLR0)
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(l_LLR1)
plt.colorbar()
```
While there is a difference between some of the control cells and some of the LLO cells, there is no pattern since the control cells aren't similar to each other.
```
plt.subplot(2, 2, 1)
plt.hist(lLLR0)
plt.subplot(2, 2, 2)
plt.hist(lLLR1)
```
# FSV
I will do the same with fraction of spatial variance next.
```
cFSV0 = control0['FSV']
cFSV1 = control1['FSV']
cFSV2 = control2['FSV']
cFSV3 = control3['FSV']
lFSV0 = LLO0['FSV']
lFSV1 = LLO1['FSV']
c_FSV0 = cFSV0.values.reshape(512, 512)
c_FSV1 = cFSV1.values.reshape(512, 512)
c_FSV2 = cFSV2.values.reshape(512, 512)
c_FSV3 = cFSV3.values.reshape(512, 512)
l_FSV0 = lFSV0.values.reshape(512, 512)
l_FSV1 = lFSV1.values.reshape(512, 512)
```
## Control cells
```
plt.subplot(2, 2, 1)
plt.imshow(c_FSV0)
plt.colorbar()
plt.subplot(2, 2, 2)
plt.imshow(c_FSV1)
plt.colorbar()
plt.subplot(2, 2, 3)
plt.imshow(c_FSV2)
plt.colorbar()
plt.subplot(2, 2, 4)
plt.imshow(c_FSV3)
plt.colorbar()
```
More of a pattern here, but the third one seems to be an outlier in many cases.
```
plt.subplot(2, 2, 1)
plt.hist(cFSV0)
plt.subplot(2, 2, 2)
plt.hist(cFSV1)
plt.subplot(2, 2, 3)
plt.hist(cFSV2)
plt.subplot(2, 2, 4)
plt.hist(cFSV3)
```
## LLO cells
```
plt.subplot(1, 2, 1)
plt.imshow(l_FSV0)
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(l_FSV1)
plt.colorbar()
```
Despite the control cells having more consistency, the LLO cell pattern looks almost identical to the control.
```
plt.subplot(2, 2, 1)
plt.hist(lFSV0)
plt.subplot(2, 2, 2)
plt.hist(lFSV1)
```
# qval
```
cq0 = control0['qval']
cq1 = control1['qval']
cq2 = control2['qval']
cq3 = control3['qval']
lq0 = LLO0['qval']
lq1 = LLO1['qval']
c_q0 = cq0.values.reshape(512, 512)
c_q1 = cq1.values.reshape(512, 512)
c_q2 = cq2.values.reshape(512, 512)
c_q3 = cq3.values.reshape(512, 512)
l_q0 = lq0.values.reshape(512, 512)
l_q1 = lq1.values.reshape(512, 512)
```
## Control cells
```
plt.subplot(2, 2, 1)
plt.imshow(c_q0)
plt.colorbar()
plt.subplot(2, 2, 2)
plt.imshow(c_q1)
plt.colorbar()
plt.subplot(2, 2, 3)
plt.imshow(c_q2)
plt.colorbar()
plt.subplot(2, 2, 4)
plt.imshow(c_q3)
plt.colorbar()
```
Hard to discern any patterns here; they look very similar to me.
```
plt.subplot(2, 2, 1)
plt.hist(cq0)
plt.subplot(2, 2, 2)
plt.hist(cq1)
plt.subplot(2, 2, 3)
#plt.hist(cq2) # again, throws an error
plt.subplot(2, 2, 4)
plt.hist(cq3)
```
## LLO cells
```
plt.subplot(1, 2, 1)
plt.imshow(l_q0)
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(l_q1)
plt.colorbar()
```
Again, the LLO cells look very similar to the control cells, with no discernible difference.
```
plt.subplot(2, 2, 1)
plt.hist(lq0)
plt.subplot(2, 2, 2)
plt.hist(lq1)
```
```
from tensorflow import keras
from tensorflow.keras import *
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from tensorflow.keras.regularizers import l2  # L2 regularization
import tensorflow as tf
import numpy as np
import pandas as pd
normal = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.12-未发生缠绕前\2013-9.12振动\2013-9-12振动-1450rmin-mat\1450r_normalvibz.txt', delimiter=',')
chanrao = np.loadtxt(r'F:\张老师课题学习内容\code\数据集\试验数据(包括压力脉动和振动)\2013.9.17-发生缠绕后\振动\9-17下午振动1450rmin-mat\1450r_chanraovibz.txt', delimiter=',')
print(normal.shape,chanrao.shape,"***************************************************")
data_normal=normal[14:16] # take two rows (indices 14-15)
data_chanrao=chanrao[14:16] # take two rows (indices 14-15)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
data_normal=data_normal.reshape(1,-1)
data_chanrao=data_chanrao.reshape(1,-1)
print(data_normal.shape,data_chanrao.shape)
print(data_normal,"\r\n",data_chanrao,"***************************************************")
# two pump signal classes: normal (healthy) and chanrao (entanglement fault)
data_normal=data_normal.reshape(-1, 512)  # (65536,) -> (128, 512)
data_chanrao=data_chanrao.reshape(-1,512)
print(data_normal.shape,data_chanrao.shape)
import numpy as np
def yuchuli(data,label): # preprocess: split into train/test (102 train : 26 test, roughly 4:1)
    # shuffle the data order
np.random.shuffle(data)
train = data[0:102,:]
test = data[102:128,:]
label_train = np.array([label for i in range(0,102)])
label_test =np.array([label for i in range(0,26)])
return train,test ,label_train ,label_test
def stackkk(a,b,c,d,e,f,g,h):
aa = np.vstack((a, e))
bb = np.vstack((b, f))
cc = np.hstack((c, g))
dd = np.hstack((d, h))
return aa,bb,cc,dd
x_tra0,x_tes0,y_tra0,y_tes0 = yuchuli(data_normal,0)
x_tra1,x_tes1,y_tra1,y_tes1 = yuchuli(data_chanrao,1)
tr1,te1,yr1,ye1=stackkk(x_tra0,x_tes0,y_tra0,y_tes0 ,x_tra1,x_tes1,y_tra1,y_tes1)
x_train=tr1
x_test=te1
y_train = yr1
y_test = ye1
# shuffle the data
state = np.random.get_state()
np.random.shuffle(x_train)
np.random.set_state(state)
np.random.shuffle(y_train)
state = np.random.get_state()
np.random.shuffle(x_test)
np.random.set_state(state)
np.random.shuffle(y_test)
# standardize the training and test sets
def ZscoreNormalization(x):
"""Z-score normaliaztion"""
x = (x - np.mean(x)) / np.std(x)
return x
x_train=ZscoreNormalization(x_train)
x_test=ZscoreNormalization(x_test)
# print(x_test[0])
# reshape into 1-D sequences
x_train = x_train.reshape(-1,512,1)
x_test = x_test.reshape(-1,512,1)
print(x_train.shape,x_test.shape)
def to_one_hot(labels,dimension=2):
results = np.zeros((len(labels),dimension))
for i,label in enumerate(labels):
results[i,label] = 1
return results
one_hot_train_labels = to_one_hot(y_train)
one_hot_test_labels = to_one_hot(y_test)
x = layers.Input(shape=[512,1])  # input shape matches the (512, 1) samples produced above
Flatten=layers.Flatten()(x)
Dense1=layers.Dense(12, activation='relu')(Flatten)
Dense2=layers.Dense(6, activation='relu')(Dense1)
Dense3=layers.Dense(2, activation='softmax')(Dense2)
model = keras.Model(x, Dense3)
model.summary()
# define the loss and optimizer
model.compile(loss='categorical_crossentropy',
optimizer='adam',metrics=['accuracy'])
import time
time_begin = time.time()
history = model.fit(x_train,one_hot_train_labels,
validation_split=0.1,
epochs=50,batch_size=10,
shuffle=True)
time_end = time.time()
time = time_end - time_begin
print('time:', time)
import time
time_begin = time.time()
score = model.evaluate(x_test,one_hot_test_labels, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
time_end = time.time()
time = time_end - time_begin
print('time:', time)
# plot the accuracy/loss curves
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['val_loss'],color='g')
plt.plot(history.history['accuracy'],color='b')
plt.plot(history.history['val_accuracy'],color='k')
plt.title('model loss and acc')
plt.ylabel('Accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'test_loss','train_acc', 'test_acc'], loc='center right')
# plt.legend(['train_loss','train_acc'], loc='upper left')
#plt.savefig('1.png')
plt.show()
import matplotlib.pyplot as plt
plt.plot(history.history['loss'],color='r')
plt.plot(history.history['accuracy'],color='b')
plt.title('model loss and accuracy')
plt.ylabel('loss/accuracy')
plt.xlabel('epoch')
plt.legend(['train_loss', 'train_accuracy'], loc='center right')
plt.show()
```
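As a note on the standardization step above: a common alternative is to fit the statistics on the training set only and reuse them for the test set, as in this sketch (a suggestion, not what the code above does):
```
# Sketch: per-feature z-scoring fitted on the training set and applied to the test set.
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_flat = x_train.reshape(len(x_train), -1)
x_test_flat = x_test.reshape(len(x_test), -1)
x_train_std = scaler.fit_transform(x_train_flat).reshape(-1, 512, 1)
x_test_std = scaler.transform(x_test_flat).reshape(-1, 512, 1)
```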
```
import gym
import math
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import time as t
from gym import envs
envids = [spec.id for spec in envs.registry.all()]
'''for envid in sorted(envids):
print(envid) '''
# To improve speed, run it on the GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('Device ', device)
# Create the environment
env = gym.make('LunarLander-v2')
env.seed(101) #To ensure it's the same situation always simulated, even if it's random
np.random.seed(101)
# check the properties of the environment
print('observation space:', env.observation_space) # states: an 8-dimensional continuous Box
print('action space:', env.action_space) # actions: one discrete action with 4 possible values
# print(' - low:', env.action_space.low)   # only defined for continuous (Box) action spaces
# print(' - high:', env.action_space.high)
#t.sleep(10)
print( env.observation_space.shape[0] ) # number of states = size of the input layer
h_size=16
# One discrete action: 0 do nothing, 1 fire left engine, 2 fire main engine, 3 fire right engine
# Definition of the policy: a class that chooses the actions
class Agent(nn.Module):
def __init__(self, env, h_size=16):
super(Agent, self).__init__() #Equivalent to super().__init__()
        # Means that this class inherits the __init__ from the nn.Module class
        # nn.Module is the base class for all neural network modules
# https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
# https://pytorch.org/docs/stable/nn.html
        self.env = env # Save the Gym environment
# state, hidden layer, action sizes
self.s_size = env.observation_space.shape[0] #First layer number of states+
self.h_size = h_size #hidden layer
        self.a_size = 1 # output layer: a single value, later discretized into the 4 actions (0 nothing, 1 left, 2 main, 3 right)
# define layers
self.fc1 = nn.Linear(self.s_size, self.h_size) # A linear layer that connect the states with the hidden layer
self.fc2 = nn.Linear(self.h_size, self.a_size) # Hidden layer the from hidden layer to actions
def set_weights(self, weights):
s_size = self.s_size
h_size = self.h_size
a_size = self.a_size
# print(s_size)
# print(h_size)
# print(a_size)
# Are linear layers so
# weight _w and b it's the bias.
#https://medium.com/datathings/linear-layers-explained-in-a-simple-way-2319a9c2d1aa
#The bias learns a constant value, independent of the input.
# Learns that all the positive states need at least a bias constant + something dependant
# a linear layer learns that the output ,it's the Activation Function ( input * pendent (the weight) + constant)
# linear neuron output = input *w + b
# separate the weights for each layer
        # so we are saying that (state1 * wl1 + bl1)*wl2 + bl2, i.e. we believe a linear mapping per layer, combined with activation functions, is enough
fc1_end = (s_size*h_size)+h_size
        # The first s_size*h_size values are the first layer's weights; each hidden neuron has one weight per input state
        fc1_W = torch.from_numpy(weights[:s_size*h_size].reshape(s_size, h_size))
        # The next h_size values are the first layer's biases; each neuron has a single bias that does not depend on the input state
fc1_b = torch.from_numpy(weights[s_size*h_size:fc1_end])
#Every neuron has a weight for each action output
fc2_W = torch.from_numpy(weights[fc1_end:fc1_end+(h_size*a_size)].reshape(h_size, a_size))
fc2_b = torch.from_numpy(weights[fc1_end+(h_size*a_size):])
# set the weights for each layer
self.fc1.weight.data.copy_(fc1_W.view_as(self.fc1.weight.data))
self.fc1.bias.data.copy_(fc1_b.view_as(self.fc1.bias.data))
self.fc2.weight.data.copy_(fc2_W.view_as(self.fc2.weight.data))
self.fc2.bias.data.copy_(fc2_b.view_as(self.fc2.bias.data))
def get_weights_dim(self):
#In reality its returning the weights + bias dimensions, the +1 its the bias
return (self.s_size+1)*self.h_size + (self.h_size+1)*self.a_size
def forward(self, x):
        # forward is the method that passes batches of data through the network,
        # applying the layers as matrix operations together with the activation functions
        # (see the link below for an overview of activation functions)
#https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
        x = F.relu(self.fc1(x)) # Only positive values pass
#x = F.tanh(self.fc2(x))
x = torch.sigmoid(self.fc2(x)) # Only from 0 to 1, to have easy to do 4 groups
return x.cpu().data
def evaluate(self, weights, gamma=1.0, max_t=5000):
# Obtain the accumulative reward from the actions selected by the neural net
self.set_weights(weights)
episode_return = 0.0
state = self.env.reset()
for t in range(max_t):
state = torch.from_numpy(state).float().to(device)
action = self.forward(state)
#print(action)
action = action *3
if(action >= 2.5):
action = 3
elif (action >= 1.5):
action = 2
elif (action >= 0.5):
action = 1
else:
action = 0
state, reward, done, _ = self.env.step(action)
episode_return += reward * math.pow(gamma, t)
if done:
break
return episode_return
#End of class
agent = Agent(env).to(device) # Creation of a neural net in the device, in my case the GPU
#Cross Entrophy Method, to choose the weights
def cem(n_iterations=1000, max_t=1000, gamma=1.0, print_every=10, pop_size=50, elite_frac=0.2, sigma=0.5):
"""PyTorch implementation of the cross-entropy method.
Params
======
n_iterations (int): maximum number of training iterations
max_t (int): maximum number of timesteps per episode
gamma (float): discount rate
print_every (int): how often to print average score (over last 100 episodes)
pop_size (int): size of population at each iteration
elite_frac (float): percentage of top performers to use in update
sigma (float): standard deviation of additive noise
"""
    # fraction of best-performing weight sets to keep as the elite
    n_elite=int(pop_size*elite_frac)
    # double-ended queue holding the last 100 scores
    scores_deque = deque(maxlen=100)
    # initial (empty) list of scores
    scores = []
    # Initial best weights: small random values (sigma-scaled standard normal).
    # Small weights help avoid overfitting, and non-zero values allow them to be updated.
    best_weight = sigma*np.random.randn(agent.get_weights_dim())
    # Each iteration: perturb the best weights with random noise to build a population,
    # evaluate the reward obtained with each candidate set of weights,
    # sort the rewards and keep the elite candidates,
    # set the new best weights to the mean of the elite weights,
    # then evaluate the mean weights to track how good the policy is.
for i_iteration in range(1, n_iterations+1):
weights_pop = [best_weight + (sigma*np.random.randn(agent.get_weights_dim())) for i in range(pop_size)]
rewards = np.array([agent.evaluate(weights, gamma, max_t) for weights in weights_pop])
elite_idxs = rewards.argsort()[-n_elite:]
elite_weights = [weights_pop[i] for i in elite_idxs]
best_weight = np.array(elite_weights).mean(axis=0)
reward = agent.evaluate(best_weight, gamma=1.0)
scores_deque.append(reward)
scores.append(reward)
#save the check point
torch.save(agent.state_dict(), 'checkpointLunar.pth')
if i_iteration % print_every == 0:
print('Episode {}\tAverage Score: {:.2f}'.format(i_iteration, np.mean(scores_deque)))
if np.mean(scores_deque)>=90.0:
print('\nEnvironment solved in {:d} iterations!\tAverage Score: {:.2f}'.format(i_iteration-100, np.mean(scores_deque)))
break
return scores
#Execute the cross entrophy method with default Values
#scores = cem()
# To reduce the load on the GPU, lower pop_size (the number of candidate weight sets tried per iteration)
scores = cem(pop_size=30)
#
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# load the weights from file
agent.load_state_dict(torch.load('checkpointLunar.pth'))
state = env.reset()
while True:
state = torch.from_numpy(state).float().to(device)
with torch.no_grad():
action = agent(state)
action = action *3
if(action >= 2.5):
action = 3
elif (action >= 1.5):
action = 2
elif (action >= 0.5):
action = 1
else:
action = 0
env.render()
t.sleep(0.1)
next_state, reward, done, _ = env.step(action)
state = next_state
if done:
break
env.close()
env.close()
#save the check point
torch.save(agent.state_dict(), 'checkpointLunar.pth')
```
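The sigmoid-to-discrete-action mapping is written out twice (inside `evaluate` and again in the final rollout); a small helper like this sketch could factor it out:
```
# Sketch: shared helper mapping the network's sigmoid output to one of the 4 LunarLander actions.
def discretize_action(raw):
    """Map a value in [0, 1] to a discrete action: 0 nothing, 1 left, 2 main, 3 right engine."""
    scaled = float(raw) * 3
    if scaled >= 2.5:
        return 3
    elif scaled >= 1.5:
        return 2
    elif scaled >= 0.5:
        return 1
    return 0
```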
```
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
```
# Homework 7
**Instructions:** Complete the notebook below. Download the completed notebook in HTML format. Upload assignment using Canvas.
**Due:** Feb. 23 at **2pm.**
## Exercise: The Labor-Leisure Tradeoff
\begin{align}
\frac{\varphi}{1-L_t} & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} \tag{1}
\end{align}
**Questions**
1. Explain in words why the left-hand side of equation (1) represents the marginal cost to the household of working. A complete answer will make use of the term *marginal utility*.
2. Explain in words why the right-hand side of equation (1) represents the marginal benefit to the household of supplying labor (i.e., working). A complete answer will make use of the terms *marginal utility* and *marginal product*.
3. Holding everything else constant, according to equation (1), what effect will an increase in TFP have on equilibrium labor? Explain the economic intuition behind your answer.
4. Holding everything else constant, according to equation (1), what effect will an increase in household consumption have on equilibrium labor? Explain the economic intuition behind your answer.
**Answers**
1. The left-hand side is the derivative of the household's period $t$ utility flow with respect to $1-L_t$ and is therefore the marginal utility of leisure. A marginal increase in work effort leads to a marginal decrease in leisure of equal magnitude so the left-hand side of equation (1) reflects the utility cost at the margin of working. <!-- answer -->
2. The right-hand side of equation (1) is the marginal product of labor times the marginal utility of consumption. A marginal increase in work effort raises household income, and therefore consumption, by the marginal product of labor: $(1-\alpha)A_t K_t^{\alpha}L_t^{-\alpha}$. According to the chain rule: $\partial u /\partial L= (\partial u /\partial C)(\partial C /\partial L)$, and so equation (1) is the additional utility at the margin of working.<!-- answer -->
3. Multiply both sides of equation (1) by $L_t^{\alpha}$ to obtain $\varphi L_t^{\alpha}/(1-L_t) = (1-\alpha)A_tK_t^{\alpha}/C_t$. Increasing TFP increases the right-hand side and therefore increases labor supplied, because the left-hand side is an increasing function of $L_t$. Intuitively, an increase in TFP increases the marginal product of labor, which effectively raises the price of leisure relative to consumption, so the household cuts back on leisure (see the numerical sketch below). <!-- answer -->
4. Increasing consumption decreases labor supplied. Intuitively, an increase in consumption reduces the marginal utility of income, reduces the household's incentive to earn income from working, and so the household enjoys more leisure. <!-- answer -->
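The comparative statics in answers 3 and 4 can be checked numerically. The sketch below solves equation (1) for $L_t$ with a root finder; the parameter values for $\varphi$, $\alpha$, $A_t$, $K_t$ and $C_t$ are arbitrary choices made only for this illustration.
```
from scipy.optimize import brentq

def labor(A, K, C, alpha=0.35, varphi=1.75):
    """Solve varphi/(1-L) = (1-alpha)*A*K**alpha*L**(-alpha)/C for L in (0,1)."""
    g = lambda L: varphi/(1 - L) - (1 - alpha)*A*K**alpha*L**(-alpha)/C
    return brentq(g, 1e-6, 1 - 1e-6)

print(labor(A=1, K=10, C=1))     # baseline labor supply
print(labor(A=1.2, K=10, C=1))   # higher TFP -> labor rises
print(labor(A=1, K=10, C=1.2))   # higher consumption -> labor falls
```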
## Exercise: The Euler Equation
\begin{align}
\frac{1}{C_t} & = \beta \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right]\tag{2}
\end{align}
**Questions**
1. Explain in words why the left-hand side of equation (2) represents the marginal cost to the household of saving (i.e., building new capital). A complete answer will make use of the term *marginal utility*.
2. Explain in words why the right-hand side of equation (2) represents the marginal benefit to the household of saving. A complete answer will make use of the terms *marginal utility* and *marginal product*.
3. Holding everything else constant, according to equation (2), what effect will an increase in TFP in period $t+1$ have on the household's choice for capital in period $t+1$? Explain the economic intuition behind your answer.
4. Holding everything else constant, according to equation (2), what effect will an increase in consumption in period $t$ have on the household's choice for capital in period $t+1$? Explain the economic intuition behind your answer.
5. Holding everything else constant, according to equation (2), what effect will an increase in consumption in period $t+1$ have on the household's choice for capital in period $t+1$? Explain the economic intuition behind your answer.
**Answers**
1. The left-hand side is the derivative of the household's period $t$ utility flow with respect to $C_t$ and is therefore the marginal utility of consumption in period $t$. A marginal increase in saving reduces current consumption by an equal magnitude, and so the left-hand side of equation (2) reflects the utility cost at the margin of saving. <!-- answer -->
2. The right-hand side of equation (2) represents the marginal benefit to the household of saving because a marginal increase in period $t+1$ capital increases period $t+1$ consumption by the marginal product of capital plus the share of that capital that doesn't depreciate and so, by the chain rule, the increase in the $t+1$ utility flow is: $(\alpha A_{t+1} K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha}+1-\delta)/C_{t+1}$. Since the household realizes this change in the future, it is discounted by the subjective discount factor $\beta$. <!-- answer -->
3. An increase in TFP in $t+1$ raises the $t+1$ marginal product of capital and so, since the left-hand side is unchanged, period $t+1$ capital increases so that the marginal product of capital returns to its original level. Intuitively, an increase in future TFP raises the return to saving (in terms of future consumption), and so the household saves more (see the rearrangement below). <!-- answer -->
4. An increase in consumption in $t$ lowers the marginal utility of consumption in period $t$. Therefore the household will try to save more, and so period $t+1$ capital increases. Intuitively, a household that suddenly gains more consumption in the current period will save more, which brings down the future marginal benefit of saving until equation (2) holds again. <!-- answer -->
5. An increase in consumption in $t+1$ lowers the marginal utility of consumption in period $t+1$. Therefore the household will try to save less, and so period $t+1$ capital decreases. Intuitively, a household that suddenly gains more consumption in the next period will save less in order to keep the marginal benefit of saving unchanged. <!-- answer -->
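A compact way to see answers 3–5, restating the reasoning above, is to rearrange equation (2) so that period $t+1$ capital appears only on the left:

$$\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} + 1-\delta = \frac{C_{t+1}}{\beta C_t}$$

The left-hand side is decreasing in $K_{t+1}$, so anything that raises the right-hand side (higher $C_{t+1}$) lowers $K_{t+1}$, anything that lowers it (higher $C_t$) raises $K_{t+1}$, and a higher $A_{t+1}$ raises the left-hand side at any given $K_{t+1}$ and must be offset by a higher $K_{t+1}$.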
```
import metpy.calc as mpcalc
from metpy.units import units
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from cartopy.feature import ShapelyFeature,NaturalEarthFeature
import matplotlib as mpl
import numpy.ma as ma
import metpy
from metpy.plots import USCOUNTIES # Make sure metpy is updated to latest version.
%matplotlib inline
plt.rcParams.update({"font.size":30})
mpl.rcParams['legend.fontsize'] = 'large'
from pathlib import Path
HRRR= Path("/export/home/mbrewer/HRRR/hrrr.t18z.wrfprsf00_1108.nc")
el = Path("/export/home/mbrewer/Documents/GMTED2010_15n030_0125deg.nc")
ds = xr.open_dataset(HRRR)
elev = xr.open_dataset(el)
data = ds.metpy.parse_cf()
wp_lat = 39.697254
wp_lon = -121.574221
lat = data.gridlat_0
lon = data.gridlon_0
abslat = np.abs(lat-wp_lat)
abslon= np.abs(lon-wp_lon)
c = np.maximum(abslon,abslat)
xx, yy = np.where(c == np.min(c))
print(xx,yy)
height = ds.HGT_P0_L100_GLC0
uwind_pres = ds.UGRD_P0_L100_GLC0[:,0:int(yy+1000),0:int(xx+500)]
vwind_pres =ds.VGRD_P0_L100_GLC0[:,0:int(yy+1000),0:int(xx+500)]
uwind_10m = ds.UGRD_P0_L103_GLC0[0,0:int(yy+1000),0:int(xx+500)]
vwind_10m = ds.VGRD_P0_L103_GLC0[0,0:int(yy+1000),0:int(xx+500)]
RH = ds.RH_P0_L100_GLC0[:,0:int(yy+1000),0:int(xx+500)]
RH_2m =ds.RH_P0_L103_GLC0[0:int(yy+1000),0:int(xx+500)]
DPT = ds.DPT_P0_L103_GLC0[0:int(yy+1000),0:int(xx+500)]
#lon_2d, lat_2d = np.meshgrid(ds['gridlon_0'], ds['gridlat_0'])
# Function used to create the map subplots
def plot_background(ax):
ax.set_extent([-108+360.,-126.5+360., 32., 49.])
ax.coastlines(resolution='10m', linewidth=2, color = 'black', zorder = 4)
political_boundaries = NaturalEarthFeature(category='cultural',
name='admin_0_boundary_lines_land',
scale='10m', facecolor='none')
states = NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lines',
scale='50m', facecolor='none')
ax.add_feature(political_boundaries, linestyle='-', edgecolor='black', zorder =4)
ax.add_feature(states, linestyle='-', edgecolor='black',linewidth=2, zorder =4)
#gl = ax.gridlines(draw_labels=True)
#gl.xlabels_top = gl.ylabels_right = False
#gl.xformatter = LONGITUDE_FORMATTER
#gl.yformatter = LATITUDE_FORMATTER
return ax
RH = data.metpy.parse_cf('RH_P0_L100_GLC0')
RH = RH[:,0:int(yy+1000),0:int(xx+500)]
x, y = RH.metpy.coordinates('x', 'y')
lccProjParams_HRRR = { 'central_latitude' : 38.5, # same as lat_0 in proj4 string (38.5, not -38.5, for the HRRR grid)
'central_longitude' : -97.5, # same as lon_0
'standard_parallels' : (38.5, 38.5), # same as (lat_1, lat_2)
}
crs = ccrs.LambertConformal(**lccProjParams_HRRR)
var = RH
fig, ax = plt.subplots(figsize = (20,20),subplot_kw={'projection': crs})
plot_background(ax)
clevs = np.arange(0.,105.,1)
levs = np.arange(0,6000.,500)
levs2 = np.arange(1,3000.,250)
cf = ax.contourf(x,y,var[26], clevs, transform = ccrs.PlateCarree(), cmap = 'viridis_r', alpha = .7, zorder = 2, vmax = 80)
cs1 =ax.contour(elev.longitude,elev.latitude, elev.elevation, levs, transform = ccrs.PlateCarree(), colors = '#333333', zorder = 1)
#cs =ax.contour(lon_2d,lat_2d, heights_700,levs, transform = ccrs.PlateCarree(), linewidths = 3, colors = '#116000')
#ax.clabel(cs, cs.levels, fontsize=20, colors='k')
ax.scatter(-121.6219, 39.7596, s =300, marker = '*', label = 'Paradise, California', transform = ccrs.PlateCarree(), color = 'tab:red', zorder =6)
sknum = 15
skip=(slice(None,None,sknum),slice(None,None,sknum))
ax.barbs(x[skip].values,y[skip].values, uwind_pres[26,0:int(yy+1000),0:int(xx+600)][skip].values, vwind_pres[26,0:int(yy+1000),0:int(xx+600)][skip].values, length=6,
sizes=dict(emptybarb=0.25, spacing=.2, height=0.5),
zorder = 5,
linewidth=0.95, transform= ccrs.PlateCarree())
#ax.barbs(x[::50].values,y[::50].values, uwind_pres[26][::50].values, vwind_pres[26][::50].values, transform = ccrs.PlateCarree(), zorder = 5)
#ax.set_title('201]-11-08 0000UTC GFS 0.5°', fontsize = 30)
ax.add_feature(USCOUNTIES.with_scale('500k'), edgecolor='black', linewidth=.2, zorder = 4)
ax.legend(loc = 1,fontsize = '18')
cb = fig.colorbar(cf, shrink=0.74, pad=0)
font_size = 20 # Adjust as appropriate.
cb.ax.tick_params(labelsize=font_size)
cb.set_label('RH (%)', size = 'x-large', fontsize = 22 )
for label in (ax.get_xticklabels() + ax.get_yticklabels()):
label.set_fontname('Arial')
label.set_fontsize(30)
label.set_fontweight('bold');
plt.title('HRRR Model Output', loc='left', fontweight='bold', fontsize = 22)
plt.title('Field: %s' % (var.attrs['long_name']), loc='center', fontsize = 18)
plt.title('Valid Time: %s\nLevel: %s hPa' % (var.attrs['initial_time'], int(var[26].coords['lv_ISBL0'].data)/100), loc='right', fontsize = 18)
plt.savefig('HRRR_RH_%s_%sz_%s.png'% (var.attrs['initial_time'][3:5],var.attrs['initial_time'][12:14], int(var[26].coords['lv_ISBL0'].data)/100), dpi = 800, bbox_inches = 'tight')
```
```
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, Bidirectional, Dropout
from sklearn.model_selection import train_test_split
from keras.utils.np_utils import to_categorical
import re
import matplotlib.pyplot as plt
from google.colab import drive
drive.mount('/content/drive')
train = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/mal_full_offensive_train.csv', delimiter='\t', names=['sentence','classes','nan'])
train = train.drop(columns=['nan'])
train.head()
val = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/mal_full_offensive_dev.csv', delimiter='\t', names=['sentence','classes','nan'])
val = val.drop(columns=['nan'])
test = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/mal_full_offensive_test.csv',delimiter='\t',names=['sentence'])
test.head(4)
train.count()
train['classes'].apply(len).max()
train['sentence'].apply(len).max()
set(train['classes'])
encode_dict = {}
def encode_cat(x):
if x not in encode_dict.keys():
encode_dict[x] = len(encode_dict)
return encode_dict[x]
train['encode_cat'] = train['classes'].apply(lambda x: encode_cat(x))
val['encode_cat'] = val['classes'].apply(lambda x: encode_cat(x))
train.head(9)
y_train = train['encode_cat']
y_val = val['encode_cat']
def clean(df):
df['sentence'] = df['sentence'].apply(lambda x: x.lower())
df['sentence'] = df['sentence'].apply(lambda x: re.sub(r' +', ' ',x))
df['sentence'] = df['sentence'].apply(lambda x: re.sub("[!@#$+%*:()'-]", ' ',x))
    df['sentence'] = df['sentence'].str.replace(r'\d+', '')
clean(train)
clean(val)
clean(test)
max_features = 2000
max_len = 512
tokenizer = Tokenizer(num_words=max_features, split=' ')
tokenizer.fit_on_texts(train['sentence'].values)
X_train = tokenizer.texts_to_sequences(train['sentence'].values)
# vocab_size = len(tokenizer.word_index) + 1
X_train = pad_sequences(X_train,padding = 'post', maxlen=max_len)
tokenizer.fit_on_texts(val['sentence'].values)
X_val = tokenizer.texts_to_sequences(val['sentence'].values)
# vocab_size = len(tokenizer.word_index) + 1
X_val = pad_sequences(X_val,padding = 'post', maxlen=max_len)
tokenizer.fit_on_texts(test['sentence'].values)
X_test = tokenizer.texts_to_sequences(test['sentence'].values)
# vocab_size = len(tokenizer.word_index) + 1
X_test = pad_sequences(X_test,padding = 'post', maxlen=max_len)
train['sentence'].apply(len).max()
train.describe()
Y_train = pd.get_dummies(y_train).values
Y_val = pd.get_dummies(y_val).values
print(X_train.shape,Y_train.shape)
print(X_val.shape,Y_val.shape)
print(X_test.shape)
from keras.layers import Layer
from keras.layers import Input
from keras.models import Model
from tensorflow.keras import backend as K
class attention(Layer):
def __init__(self):
super(attention,self).__init__()
def build(self,input_shape):
self.W=self.add_weight(name="att_weight",shape=(input_shape[-1],1),initializer="normal")
self.b=self.add_weight(name="att_bias",shape=(input_shape[1],1),initializer="zeros")
super(attention, self).build(input_shape)
def call(self,x):
et=K.squeeze(K.tanh(K.dot(x,self.W)+self.b),axis=-1)
at=K.softmax(et)
at=K.expand_dims(at,axis=-1)
output=x*at
return K.sum(output,axis=1)
def compute_output_shape(self,input_shape):
return (input_shape[0],input_shape[-1])
def get_config(self):
return super(attention,self).get_config()
!wget --header="Host: nlp.stanford.edu" --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8" --header="Accept-Language: en-US,en;q=0.9" --header="Cookie: _ga=GA1.2.456156586.1539718115; _gid=GA1.2.491677602.1539718115; _gat=1" --header="Connection: keep-alive" "https://nlp.stanford.edu/data/glove.6B.zip" -O "glove.6B.zip" -c
!unzip glove.6B.zip
from numpy import array
from numpy import asarray
from numpy import zeros
embeddings_index = dict()
glove_file = open('glove.6B.100d.txt', encoding="utf8")
for line in glove_file:
records = line.split()
word = records[0]
vector_dimensions = asarray(records[1:], dtype='float32')
embeddings_index[word] = vector_dimensions
glove_file.close()
print('Found %s word vectors.' %len(embeddings_index))
word_index = tokenizer.word_index
print(len(word_index))
num_words = min(max_features, len(word_index)) + 1
print(num_words)
embedding_dim = 100
embedding_matrix = np.zeros((num_words, embedding_dim))
for word, i in word_index.items():
if i > max_features:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
else:
embedding_matrix[i] = np.random.randn(embedding_dim)
K.clear_session()
from keras.regularizers import l2
from keras.initializers import Constant
embed_dim = 100
lstm_out = 128
# model = Sequential()
inputs = Input(shape=(512,))
x = Embedding(num_words, embed_dim,embeddings_initializer=Constant(embedding_matrix),input_length = X_train.shape[1])(inputs)
att_in = Bidirectional(LSTM(lstm_out,return_sequences=True, dropout=0.2))(x)
att_out = attention()(att_in)
d = Dropout(0.2)(att_out)
outputs = Dense(5, activation='softmax')(d)
model = Model(inputs,outputs)
print(model.summary())
import numpy as np
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced',
np.unique(train.encode_cat.values),
train.encode_cat.values)
class_weights = dict(enumerate(class_weights))
model.compile(loss = 'categorical_crossentropy', optimizer='adam',metrics = ['accuracy'])
history = model.fit(X_train, Y_train,batch_size = 64,class_weight=class_weights, validation_data=(X_val,Y_val), epochs=10, verbose=2)
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train','val'])
#plt.show()
plt.savefig('Model_accuracy.png', dpi=600)
# score = model.evaluate(X_test,verbose=1)
predictions = np.argmax(model.predict(X_val),axis = -1)
# print("val score is {}".format(score[0]))
# print("val Accuracy is {}".format(score[1]))
_, train_acc = model.evaluate(X_train, Y_train, verbose=0)
_, val_acc = model.evaluate(X_val, Y_val, verbose=0)
print(val_acc)
print(train_acc)
rounded_predictions = np.argmax(model.predict(X_test, batch_size=128, verbose=0),axis = -1)
print(rounded_predictions)
import numpy as np
rounded_labels=np.argmax(Y_val, axis=1)
from sklearn.metrics import classification_report
print(classification_report(rounded_labels, predictions))
a = {'id':[i for i in range(2001)]}
a = pd.DataFrame(a)
df = pd.DataFrame({'id':a.id,'labels':rounded_predictions})
df.labels = df.labels.apply({0:'Not_offensive',4:'Offensive_Untargetede',3:'Offensive_Targeted_Insult_Group',
1:'Offensive_Targeted_Insult_Individual',2:'not-malayalam'}.get)
df
df.to_csv('LSTM_with_attention_Malayalam_submission.csv',index=False)
from google.colab import files
files.download("LSTM_with_attention_Malayalam_submission.csv")
```
```
def pow(x, n, I, mult):
# https://sahandsaba.com/five-ways-to-calculate-fibonacci-numbers-with-python-code.html
"""
Returns x to the power of n. Assumes I to be identity relative to the
multiplication given by mult, and n to be a positive integer.
"""
if n == 0:
return I
elif n == 1:
return x
else:
y = pow(x, n // 2, I, mult)
y = mult(y, y)
if n % 2:
y = mult(x, y)
return y
def identity_matrix(n):
"""Returns the n by n identity matrix."""
r = list(range(n))
return [[1 if i == j else 0 for i in r] for j in r]
def matrix_multiply(A, B):
BT = list(zip(*B))
return [[sum(a * b
for a, b in zip(row_a, col_b))
for col_b in BT]
for row_a in A]
def fib(n):
F = pow([[1, 1], [1, 0]], n, identity_matrix(2), matrix_multiply)
return F[0][1]
# github.com/nsc9's code begins here
from sympy import *
from sympy.abc import x,y,z
fib2 = []
poly1 = []
poly2 = []
poly3 = []
poly4 = []
poly5 = []
poly6 = []
poly7 = []
for i in range(0,2000):
#print(-fib(i),"---------------------------------------------------------------------------------------------------------",fib(i))
fib2.append(fib(i))
poly1.append(x+fib2[i])
poly2.append(fib2[i]*x+fib2[i])
poly3.append(fib2[i]*x**2+fib2[i]*x+fib2[i])
poly4.append(fib2[i]*x**3+fib2[i]*x**2+fib2[i]*x+fib2[i])
poly5.append(fib2[i]*x**4+fib2[i]*x**3+fib2[i]*x**2+fib2[i]*x+fib2[i])
poly6.append(fib2[i]*x**5+fib2[i]*x**4+fib2[i]*x**3+fib2[i]*x**2+fib2[i]*x+fib2[i])
poly7.append(fib2[i]*x**8+fib2[i]*x**7+fib2[i]*x**6+fib2[i]*x**5+fib2[i]*x**4+fib2[i]*x**3+fib2[i]*x**2+fib2[i]*x**1+fib2[i])
print(poly7[0:10])
eqs = []
for i in range(1,100,2):
#print(i)
#plot(poly3[i]/10)
#print(poly3[i])
eqs.append(poly7[i])
#print(" ")
#'--Fibonacci numbers------------------------------------------------------------',fib2,'--x + Fibnumbr------------------------------------------------------------ ',poly1,'--Fibnumr*x+Fibnumbr----------------------------------------------------------------- ',poly2,'--Fibnumbr*x**2+Fibnumbrx+Fibnumbr----------------------------------------------------------------- ',poly3
diff(x**x)
poly7[11]
plot(eqs[0],eqs[2],eqs[3],eqs[4],eqs[5],eqs[6],eqs[7],eqs[8],eqs[9],E**x)
plot(eqs[6]*0.85,E**x,legend = True)
plot(eqs[6]*0.851,E**x,legend = True, xlim = (0,11),size = (13,4))
plot(eqs[0],eqs[1],eqs[2],eqs[3],eqs[4],eqs[5],E**x,x**x,legend = True,size = (23,4),yscale='log',ylim=(0.1,10**11),xlim = (-10,10))
from sympy import Sum
from sympy.abc import k,m
print(Sum(eqs[0], (x, 1, 10)).doit(), "=")
Sum(eqs[0], (x, 1, 10))
print(Sum(eqs[1], (x, 1, 10)).doit(), "=")
Sum(eqs[1], (x, 1, 10))
print(Sum(eqs[2], (x, 1, 10)).doit(), "=")
Sum(eqs[2], (x, 1, 10))
print(Sum(eqs[3], (x, 1, 10)).doit(), "=")
Sum(eqs[3], (x, 1, 10))
print(Sum(eqs[4], (x, 1, 10)).doit(), "=")
Sum(eqs[4], (x, 1, 10))
6393353064/2444517348
2444517348/940198980
940198980/376079592
print(Sum(eqs[5], (x, 1, 10)).doit(), "=")
Sum(eqs[5], (x, 1, 10))
16735541844/6393353064
Sum(eqs[5], (x, 1, 10)).doit()/Sum(eqs[5-1], (x, 1, 10)).doit()
for i in range(1,25):
print((Sum(eqs[i], (x, 1, i)).doit()/Sum(eqs[i-1], (x, 1, i)).doit()).evalf())
for i in range(1,50):
print(i," ",fib2[i+1]/fib2[i])
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots() # Create a figure containing a single axes.
digsumfibs = []
def sum_digits(n):
#https://stackoverflow.com/questions/14939953/sum-the-digits-of-a-number
s = 0
while n:
s += n % 10
n //= 10
return s
for i in range(0,26):
print("The digit sum of fib number",fib2[i], "=",sum_digits(fib2[i]))
digsumfibs.append(sum_digits(fib2[i]))
ax.plot(digsumfibs)  # plot the digit sums of the first 26 Fibonacci numbers
```
# Logging
We can track events in a software application; this is known as **logging**. Let's start with a simple example: we will log a warning message.
Logging can be configured to disable output or to save messages to a file, which is a big advantage over simply printing the errors.
```
import logging
# print a log message to the console.
logging.warning('This is a warning!')
```
We can easily output to a file:
```
import logging
logging.basicConfig(filename='program.log',level=logging.DEBUG)
logging.warning('An example message.')
logging.warning('Another message')
```
The importance of a log message depends on the severity.
## Level of severity
The logging module has several levels of severity. We set the level of severity using this line of code:
```
logging.basicConfig(level=logging.DEBUG)
```
These are the levels of severity:
Type | Description
--- | ---
DEBUG | Information only for problem diagnostics
INFO | The program is running as expected
WARNING | Indicate something went wrong
ERROR | The software will no longer be able to function
CRITICAL | Very serious error
The default logging level is WARNING, which means that messages with lower severity (DEBUG and INFO) are ignored.
If you want to print debug or info log messages you have to change the logging level like so:
```
# note: basicConfig only has an effect the first time it is called in a process
logging.basicConfig(level=logging.DEBUG)
logging.debug('Debug message')
```
## Time in log
You can include the time in log messages using this format configuration:
```
logging.basicConfig(format='%(asctime)s %(message)s')
#logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
logging.info('Logging app started')
logging.warning('An example logging message.')
logging.warning('Another log message')
```
# Testing
**Unit testing** is a type of software testing where individual units or components of a software application are tested.
Unit Testing of software applications is done during the development (coding) of an application.
Unit Tests isolate a section of code and verify its correctness.
In procedural programming, a unit may be an individual function or procedure. Unit Testing is usually performed by the developer.
## Doctest
The tests of the doctest module look like interactive Python sessions embedded in the Python docstrings. In the following code snippet we extend our running example with a test that consists of two lines: a function call (starting with **>>>**) and the expected output.
```
def add(a, b):
"""Return the sum of a and b.
>>> add(2, 2)
4
>>> add(1,1)
2
"""
return a + b
import doctest
doctest.testmod(verbose=True)
```
## Unittest
The unittest framework looks and works similarly to the unit testing frameworks in other languages. It allows for more complex testing scenarios than doctest, but also requires writing more code.
The following code snippet contains a test case for the add() function. A test case is created by subclassing unittest.TestCase. A test case contains one or more tests that are implemented with methods whose names start with `test`. The tests use assert methods to check for an expected result.
```
import unittest
class TestNotebook(unittest.TestCase):
def test_add(self):
self.assertEqual(add(2, 2), 4)
unittest.main(argv=[''], verbosity=2, exit=False)
```
We need the argv=[''] argument because we run the tests from a notebook and not from a command line. The exit=False argument prevents unittest from shutting down the notebook kernel. verbosity adjusts the verbosity of the output (higher values = more verbose output).
## Debugging a Failed Test
If a test fails it is often useful to halt the test case execution at some point and run a debugger to inspect the state of the program to find clues about a possible bug.
For this example, the next time you run the code the execution will halt just before the return statement and the Python debugger (pdb) will start. You will get a pdb prompt directly in the notebook, which will allow you to inspect the values of variables, step over lines, etc.
```
def add(a, b):
"""Return the sum of a and b."""
sum = a +b
import pdb; pdb.set_trace()
return sum
print(add(2,2))
```
(IN)=
# 1.7 Numerical Integration
```{admonition} Notes for the docker container:
Docker command to run this note locally:
note: replace `<ruta a mi directorio>` with the path of the directory you want to map to `/datos` inside the docker container.
`docker run --rm -v <ruta a mi directorio>:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:2.1.4`
password for jupyterlab: `qwerty`
Stop the docker container:
`docker stop jupyterlab_optimizacion`
Documentation of the docker image `palmoreck/jupyterlab_optimizacion:2.1.4` in this [link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).
```
---
Note generated from [link 1](https://www.dropbox.com/s/jfrxanjls8kndjp/Diferenciacion_e_Integracion.pdf?dl=0) and [link 2](https://www.dropbox.com/s/k3y7h9yn5d3yf3t/Integracion_por_Monte_Carlo.pdf?dl=0).
```{admonition} By the end of this note the reader:
:class: tip
* Will learn that numerical integration is a numerically stable method with respect to rounding.
* Will learn to approximate integrals numerically with the Monte Carlo method and will have an alternative to the Newton-Cotes methods for the case of more than one dimension.
```
```{admonition} Comment
The numerical integration methods reviewed in this note will be used later on when reviewing Python tools for **code profiling: CPU and memory usage**. They will also be referenced in the chapter on **parallel computing**.
```
In what follows we assume that the integrand functions are in $\mathcal{C}^2$ on the integration set (see {ref}`Definition of a function, continuity and derivative <FCD>` for the definition of $\mathcal{C}^2$).
Quadrature rules or methods help us to approximate integrals with sums of the form:
$$\displaystyle \int_a^bf(x)dx \approx \displaystyle \sum_{i=0}^nw_if(x_i)$$
where $w_i$ is the **weight** for the **node** $x_i$, $f$ is called the integrand and $[a,b]$ the integration interval. The values $f(x_i)$ are assumed to be known.
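For instance, the simple trapezoid rule (one of the Newton-Cotes rules mentioned below) is one such sum with the two nodes $x_0=a$, $x_1=b$ and equal weights $w_0=w_1=\frac{b-a}{2}$:

$$\int_a^bf(x)dx \approx \frac{b-a}{2}\left(f(a)+f(b)\right).$$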
A large number of quadrature rules or methods are obtained with polynomial interpolants of the integrand (for example using the Lagrange representation) or with Taylor's theorem (see the note {ref}`Taylor polynomials and numerical differentiation <PTDN>` for this theorem).
Numerical approximations are carried out because of:
* Lack of knowledge of the function on the whole interval $[a,b]$; its value is only known at the nodes.
* Non-existence of an antiderivative or primitive of the integrand. For example:
$$\displaystyle \int_a^be^{-\frac{x^2}{2}}dx$$ with $a,b$ real numbers.
```{admonition} Observation
:class: tip
If an antiderivative or primitive of the integrand exists, symbolic or algebraic computation can be used to obtain the result of the integral and then evaluate it. A Python package that helps us with this is [SymPy](https://www.sympy.org/en/index.html); see the short sketch below.
```
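As a minimal sketch of the previous observation (the integrand $x^2$ and the limits $0,1$ are chosen here only for illustration), SymPy can compute an antiderivative symbolically and evaluate the definite integral exactly:
```
import sympy

x = sympy.symbols("x")
# x**2 has an elementary antiderivative, so the symbolic result is exact
print(sympy.integrate(x**2, x))          # antiderivative: x**3/3
print(sympy.integrate(x**2, (x, 0, 1)))  # definite integral: 1/3
```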
Depending on the location of the nodes and weights, the resulting quadrature method is:
* Newton-Cotes if the nodes are equally spaced, as in the rectangle, trapezoid and Simpson rules (such formulas can be obtained with Taylor's theorem or with interpolation).
* Gaussian quadrature if we want rules or formulas with the highest possible accuracy (the nodes and weights are chosen to achieve this). Examples of this type of quadrature are the Gauss-Legendre rule on $[-1,1]$ (which uses [Legendre polynomials](https://en.wikipedia.org/wiki/Legendre_polynomials)) or Gauss-Hermite (which uses [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials)) for the case of integrals on $[-\infty, \infty]$ with integrand $e^{-x^2}f(x)$; a small Gauss-Legendre sketch follows this list.
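As a small sketch of Gaussian quadrature (the number of nodes is an arbitrary choice for the example), NumPy provides the Gauss-Legendre nodes and weights on $[-1,1]$, and a change of variable maps them to $[0,1]$ to approximate the integral $\int_0^1e^{-x^2}dx$ used later in this note:
```
import numpy as np

f = lambda x: np.exp(-x**2)
nodes, weights = np.polynomial.legendre.leggauss(5)  # 5 nodes and weights on [-1,1]
# change of variable x = (t+1)/2 maps [-1,1] to [0,1] and contributes a factor dx = dt/2
approx = 0.5*np.sum(weights*f((nodes + 1)/2))
print(approx)  # close to 0.7468...
```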
```{margin}
This drawing shows that the integration interval can be subdivided into a larger number of subintervals, which for the function $f$ shown is beneficial since a better approximation is obtained (will this be good in practice? recall the rounding errors of the note {ref}`Floating point system <SPF>`).
```
<img src="https://dl.dropboxusercontent.com/s/baf7eauuwm347zk/integracion_numerica.png?dl=0" heigth="500" width="500">
In the drawing: a), b) and c) integrate numerically by Newton-Cotes; d) is by Gaussian quadrature.
```{admonition} Observation
:class: tip
If the Newton-Cotes formula involves the value of the function at the endpoints it is called closed; if it does not involve them it is called open. In the drawing, d) is open.
```
```{admonition} Definition
The methods that use the previous idea of dividing into subintervals are known as **composite numerical integration methods**, in contrast with the simple ones:
For the composite rules the interval $[a,b]$ is divided into $n_\text{sub}$ subintervals $[a_{i-1},a_i], i=1,\dots,n_\text{sub}$ with $a_0=a<a_1<\dots<a_{n_\text{sub}-1}<a_{n_\text{sub}}=b$, and a regular partition is considered, that is: $a_i-a_{i-1}=\hat{h}$ with $\hat{h}=\frac{h}{n_\text{sub}}$ and $h=b-a$. In this context the approximation is:
$$\displaystyle \int_a^bf(x)dx = \sum_{i=1}^{n_\text{sub}}\int_{a_{i-1}}^{a_i}f(x)dx.$$
```
```{admonition} Comment
The numerical integration methods based on Newton-Cotes or Gaussian quadrature can be extended to more dimensions; however, they suffer from what is known as the **curse of dimensionality**, which for numerical integration consists of the huge number of evaluations of the integrand that must be performed to obtain a small accuracy. For example, with a number of nodes equal to $10^4$, a spacing of $.1$ between them and an integral in $4$ dimensions, the Newton-Cotes rectangle rule attains an accuracy of $2$ digits. As an alternative to the quadrature methods above for integrals in more dimensions we have the {ref}`Monte Carlo integration methods <IMC>`, which generate approximations with a moderate accuracy (of order $\mathcal{O}(n^{-1/2})$ with $n$ the number of nodes) for a moderate number of points, **independent** of the dimension; a one-dimensional sketch is shown below.
```
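As a minimal one-dimensional sketch of the Monte Carlo approach mentioned in the comment (the sample size and the seed are arbitrary choices for the example), the integral $\int_0^1e^{-x^2}dx$ used later in this note can be approximated by averaging the integrand over uniform samples in $[0,1]$:
```
import numpy as np

rng = np.random.default_rng(1234)  # seed fixed only for reproducibility
f = lambda x: np.exp(-x**2)
n = 10**6
samples = rng.uniform(0, 1, n)
approx = np.mean(f(samples))  # (b-a)*average with b-a=1
print(approx)  # approximately 0.7468, with error of order n**(-1/2)
```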
## Newton-Cotes
If the nodes $x_i, i=0,1,\dots,$ satisfy $x_{i+1}-x_i=h, \forall i=0,1,\dots,$ with $h$ (the spacing) constant, and the integrand $f$ is approximated with a polynomial through $(x_i,f(x_i)) \forall i=0,1,\dots,$ then we have a Newton-Cotes numerical integration method (or Newton-Cotes rules or formulas).
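As an aside, NumPy ships a composite trapezoid rule, `np.trapz`, which is a closed Newton-Cotes formula on equally spaced nodes; a quick sketch on the integral used below (the number of nodes is an arbitrary choice):
```
import numpy as np

f = lambda x: np.exp(-x**2)
nodes = np.linspace(0, 1, 11)      # 11 equally spaced nodes, spacing h = 0.1
print(np.trapz(f(nodes), nodes))   # composite trapezoid approximation of the integral
```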
## Example of an integral that has no antiderivative
In the following rules we will consider the function $f(x)=e^{-x^2}$, which has the form:
```
import math
import numpy as np
import pandas as pd
from scipy.integrate import quad
import matplotlib.pyplot as plt
f=lambda x: np.exp(-x**2)
x=np.arange(-1,1,.01)
plt.plot(x,f(x))
plt.title('f(x)=exp(-x^2)')
plt.show()
```
The value of the integral $\int_0^1e^{-x^2}dx$ is:
```
obj, err = quad(f, 0, 1)
print((obj,err))
```
```{admonition} Observation
:class: tip
The second returned value, `err`, is an estimate (upper bound) of the absolute error.
```
## Simple rectangle rule
We will denote this rule as $Rf$. In this case the integrand $f$ is approximated by a polynomial of degree **zero** with node at $x_1 = \frac{a+b}{2}$. Then:
$$\displaystyle \int_a^bf(x)dx \approx \int_a^bf(x_1)dx = (b-a)f(x_1)=(b-a)f\left( \frac{a+b}{2} \right ) = hf(x_1)$$
with $h=b-a, x_1=\frac{a+b}{2}$.
<img src="https://dl.dropboxusercontent.com/s/mzlmnvgnltqamz3/rectangulo_simple.png?dl=0" heigth="200" width="200">
### Example implementation of the simple rectangle rule: using math
Use the simple rectangle rule to approximate the integral $\displaystyle \int_0^1e^{-x^2}dx$.
```
f=lambda x: math.exp(-x**2) #using math library
def Rf(f,a,b):
"""
Compute numerical approximation using simple rectangle or midpoint method in
an interval.
"""
node=a+(b-a)/2.0 #mid point formula to minimize rounding errors
return f(node) #zero degree polynomial
rf_simple = Rf(f,0,1)
print(rf_simple)
```
```{admonition} Observation
:class: tip
For any computed approximation it is always a very good idea to report its relative error whenever the target value is available. Do not forget this :)
```
**To compute the error we use the {ref}`formulas for absolute and relative errors <FORERRABSERRREL>`:**
$$\text{ErrRel(aprox)} = \frac{|\text{aprox}-\text{obj}|}{|\text{obj}|}$$
**The following function computes the relative error with respect to a value `obj`:**
```
def compute_error(obj,approx):
'''
Relative or absolute error between obj and approx.
'''
if math.fabs(obj) > np.finfo(float).eps:
Err = math.fabs(obj-approx)/math.fabs(obj)
else:
Err = math.fabs(obj-approx)
return Err
print(compute_error(obj, rf_simple))
```
**The relative error is approximately $4.2\%$.**
## Composite rectangle rule
On each subinterval $[a_{i-1},a_i]$ with $i=1,\dots,n_{\text{sub}}$ the simple rule $Rf$ is applied, that is:
$$\displaystyle \int_{a_{i-1}}^{a_i}f(x)dx \approx R_i(f) \forall i=1,\dots,n_{\text{sub}}.$$
It is straightforward to see that the composite rectangle rule $R_c(f)$ can be written as:
$$\begin{eqnarray}
R_c(f) &=& \displaystyle \sum_{i=1}^{n_\text{sub}}(a_i-a_{i-1})f\left( \frac{a_i+a_{i-1}}{2}\right) \nonumber\\
&=& \frac{h}{n_\text{sub}}\sum_{i=1}^{n_\text{sub}}f\left( \frac{a_i+a_{i-1}}{2}\right) \nonumber\\
&=&\frac{h}{n_\text{sub}}\sum_{i=1}^{n_\text{sub}}f\left( x_i\right) \nonumber
\end{eqnarray}
$$
with $h=b-a$ and $n_\text{sub}$ the number of subintervals.
<img src="https://dl.dropboxusercontent.com/s/j2wmiyoms7gxrzp/rectangulo_compuesto.png?dl=0" heigth="200" width="200">
```{admonition} Observation
:class: tip
For the rectangle rule the nodes are obtained with the formula $x_i = a +(i+\frac{1}{2})\hat{h}, \forall i=0,\dots,n_\text{sub}-1, \hat{h}=\frac{h}{n_\text{sub}}$. For example, if $a=1, b=2$ and $\hat{h}=\frac{1}{4}$ (hence $n_\text{sub}=4$ subintervals) then:
The subintervals are: $\left[1,\frac{5}{4}\right], \left[\frac{5}{4}, \frac{6}{4}\right], \left[\frac{6}{4}, \frac{7}{4}\right]$ and $\left[\frac{7}{4}, 2\right]$.
The nodes are given by:
$$x_0 = 1 + \left(0 + \frac{1}{2} \right)\frac{1}{4} = 1 + \frac{1}{8} = \frac{9}{8}$$
$$x_1 = 1 + \left(1 + \frac{1}{2}\right)\frac{1}{4} = 1 + \frac{3}{2}\cdot \frac{1}{4} = \frac{11}{8}$$
$$x_2 = 1 + \left(2 + \frac{1}{2}\right)\frac{1}{4} = 1 + \frac{5}{2}\cdot \frac{1}{4} = \frac{13}{8}$$
$$x_3 = 1 + \left(3 + \frac{1}{2}\right)\frac{1}{4} = 1 + \frac{7}{2}\cdot \frac{1}{4} = \frac{15}{8}$$
```
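The node formula is easy to check in code; the lines below (a small sketch we add for illustration) reproduce the four nodes of the example above:
```
import numpy as np

a, b, n_sub = 1, 2, 4
h_hat = (b - a) / n_sub
nodes = a + (np.arange(n_sub) + 0.5) * h_hat   # x_i = a + (i + 1/2) * h_hat
print(nodes)                                   # [1.125 1.375 1.625 1.875] = 9/8, 11/8, 13/8, 15/8
```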
```{admonition} Observation
:class: tip
Note that for the rectangle rule Rcf we have $n = n_\text{sub}$, with $n$ the number of nodes.
```
### Example implementation of the composite rectangle rule: using math
Use the composite rectangle rule to approximate the integral $\int_0^1e^{-x^2}dx$.
```
f=lambda x: math.exp(-x**2) #using math library
def Rcf(f,a,b,n): #Rcf: composite rectangle rule for f
"""
Compute numerical approximation using rectangle or mid-point method in
an interval.
Nodes are generated via formula: x_i = a+(i+1/2)h_hat for i=0,1,...,n-1 and h_hat=(b-a)/n
Args:
f (function): function expression of integrand
a (float): left point of interval
b (float): right point of interval
n (float): number of subintervals
Returns:
sum_res (float): numerical approximation to integral of f in the interval a,b
"""
h_hat=(b-a)/n
nodes=[a+(i+1/2)*h_hat for i in range(0,n)]
sum_res=0
for node in nodes:
sum_res=sum_res+f(node)
return h_hat*sum_res
a = 0; b = 1
```
**1 node**
```
n = 1
rcf_1 = Rcf(f,a, b, n)
print(rcf_1)
```
**2 nodes**
```
n = 2
rcf_2 = Rcf(f,a, b, n)
print(rcf_2)
```
**$10^3$ nodes**
```
n = 10**3
rcf_3 = Rcf(f, a, b, n)
print(rcf_3)
```
**Relative errors:**
```
rel_err_rcf_1 = compute_error(obj, rcf_1)
rel_err_rcf_2 = compute_error(obj, rcf_2)
rel_err_rcf_3 = compute_error(obj, rcf_3)
dic = {"Aproximaciones Rcf": [
"Rcf_1",
"Rcf_2",
"Rcf_3"
],
"Número de nodos" : [
1,
2,
1e3
],
"Errores relativos": [
rel_err_rcf_1,
rel_err_rcf_2,
rel_err_rcf_3
]
}
print(pd.DataFrame(dic))
```
### Comment: `pytest`
Another way to assess the computed approximations is to use Python modules or packages created for this purpose, instead of writing our own functions such as `compute_error`. One of them is the [pytest](https://docs.pytest.org/en/latest/) package and its [approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx) function:
```
from pytest import approx
print(rcf_1 == approx(obj))
print(rcf_2 == approx(obj))
print(rcf_3 == approx(obj))
```
We can also pass an explicit tolerance for the test (by default the tolerance is $10^{-6}$):
```
print(rcf_1 == approx(obj, abs=1e-1, rel=1e-1))
```
### Question
**Is the rectangle method numerically stable under rounding?** See the note {ref}`Condition of a problem and stability of an algorithm <CPEA>` for the definition of numerical stability of an algorithm.
To answer this question we approximate the integral with more nodes: $10^5$ nodes.
```
n = 10**5
rcf_4 = Rcf(f, a, b, n)
print(rcf_4)
print(compute_error(obj, rcf_4))
```
At least for this example with $10^5$ nodes the method appears to be **numerically stable...**
## Composite trapezoidal rule
On each subinterval the simple rule $Tf$ is applied, that is:
$$\displaystyle \int_{a_{i-1}}^{a_i}f(x)dx \approx T_i(f) \forall i=1,\dots,n_\text{sub}.$$
Here $T_i(f) = \frac{(a_i-a_{i-1})}{2}(f(a_i)+f(a_{i-1}))$ for $i=1,\dots,n_\text{sub}$.
It is straightforward to see that the composite trapezoidal rule $T_c(f)$ can be written as:
$$T_c(f) = \displaystyle \frac{h}{2n_\text{sub}}\left[f(x_0)+f(x_{n_\text{sub}})+2\displaystyle\sum_{i=1}^{n_\text{sub}-1}f(x_i)\right]$$
with $h=b-a$ and $n_\text{sub}$ the number of subintervals.
<img src="https://dl.dropboxusercontent.com/s/4dl2btndrftdorp/trapecio_compuesto.png?dl=0" heigth="200" width="200">
```{admonition} Observations
:class: tip
* For the trapezoidal rule the nodes are obtained with the formula $x_i = a +i\hat{h}, \forall i=0,\dots,n_\text{sub}, \hat{h}=\frac{h}{n_\text{sub}}$.
* Note that for the trapezoidal rule Tcf we have $n = n_\text{sub}+1$, with $n$ the number of nodes.
```
### Example implementation of the composite trapezoidal rule: using numpy
We will use the composite trapezoidal rule to approximate the integral $\int_0^1e^{-x^2}dx$, compute the relative error, and plot $n_\text{sub}$ vs relative error for $n_\text{sub}=1,10,100,1000,10000$.
```
f=lambda x: np.exp(-x**2) #using numpy library
def Tcf(n,f,a,b): #Tcf: composite trapezoidal rule for f
"""
Compute numerical approximation using trapezoidal method in
an interval.
Nodes are generated via formula: x_i = a+ih_hat for i=0,1,...,n and h_hat=(b-a)/n
Args:
f (function): function expression of integrand
a (float): left point of interval
b (float): right point of interval
n (float): number of subintervals
Returns:
sum_res (float): numerical approximation to integral of f in the interval a,b
"""
h=b-a
nodes=np.linspace(a,b,n+1)
sum_res=sum(f(nodes[1:-1]))
return h/(2*n)*(f(nodes[0])+f(nodes[-1])+2*sum_res)
```
We plot the results:
```
numb_of_subintervals=(1,10,100,1000,10000)
tcf_approx = np.array([Tcf(n,f,0,1) for n in numb_of_subintervals])
```
**To compute the error we use the {ref}`formulas for absolute and relative errors <FORERRABSERRREL>`:**
$$\text{ErrRel(aprox)} = \frac{|\text{aprox}-\text{obj}|}{|\text{obj}|}$$
**The following function computes the relative error of several approximations to a value `obj`:**
```
def compute_error_point_wise(obj,approx):
'''
Relative or absolute error between obj and approx.
'''
if np.abs(obj) > np.nextafter(0,1):
Err = np.abs(obj-approx)/np.abs(obj)
else:
Err = np.abs(obj-approx)
return Err
relative_errors = compute_error_point_wise(obj, tcf_approx)
print(relative_errors)
plt.plot(numb_of_subintervals, relative_errors,'o')
plt.xlabel('number of subintervals')
plt.ylabel('Relative error')
plt.title('Error relativo en la regla del Trapecio')
plt.show()
```
If we are not interested in the numerical values of the relative errors but only in the plot, we can use the following option:
```
from functools import partial
```
See [functools.partial](https://docs.python.org/2/library/functools.html#functools.partial) for documentation, [this link](https://stackoverflow.com/questions/15331726/how-does-functools-partial-do-what-it-does) for an explanation of `partial`, and [link2](https://stackoverflow.com/questions/10834960/how-to-do-multiple-arguments-to-map-function-where-one-remains-the-same-in-pytho), [link3](https://stackoverflow.com/questions/47859209/how-to-map-over-a-function-with-multiple-arguments-in-python) for usage examples.
```
tcf_approx_2 = map(partial(Tcf,f=f,a=a,b=b),
numb_of_subintervals) #map returns an iterator
```
**To compute the error we use the {ref}`formulas for absolute and relative errors <FORERRABSERRREL>`:**
$$\text{ErrRel(aprox)} = \frac{|\text{aprox}-\text{obj}|}{|\text{obj}|}$$
**The following function computes the relative error of several approximations to a value `obj`:**
```
def compute_error_point_wise_2(obj, approx):
for ap in approx:
yield math.fabs(ap-obj)/math.fabs(obj) #using math library
```
```{admonition} Observation
:class: tip
The function `compute_error_point_wise_2` above is a [generator](https://wiki.python.org/moin/Generators); see [this link](https://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do) for the use of `yield`.
```
```
relative_errors_2 = compute_error_point_wise_2(obj, tcf_approx_2)
plt.plot(numb_of_subintervals,list(relative_errors_2),'o')
plt.xlabel('number of subintervals')
plt.ylabel('Relative error')
plt.title('Error relativo en la regla del Trapecio')
plt.show()
```
**Another way, using [scatter](https://matplotlib.org/3.3.0/api/_as_gen/matplotlib.pyplot.scatter.html):**
```
tcf_approx_2 = map(partial(Tcf,f=f,a=a,b=b),
numb_of_subintervals) #map returns an iterator
relative_errors_2 = compute_error_point_wise_2(obj, tcf_approx_2)
[plt.scatter(n,rel_err) for n,rel_err in zip(numb_of_subintervals,relative_errors_2)]
plt.xlabel('number of subintervals')
plt.ylabel('Relative error')
plt.title('Error relativo en la regla del Trapecio')
plt.show()
```
## Composite Simpson rule
On each subinterval the simple rule $Sf$ is applied, that is:
$$\displaystyle \int_{a_{i-1}}^{a_i}f(x)dx \approx S_i(f) \forall i=1,\dots,n_\text{sub}$$
with $S_i(f) = \frac{\hat{h}}{6}\left[f(x_{2i})+f(x_{2i-2})+4f(x_{2i-1})\right]$ on the subinterval $[a_{i-1},a_i]$ for $i=1,\dots,n_\text{sub}$, where $\hat{h}=\frac{h}{n_\text{sub}}$ is the width of each subinterval.
It is straightforward to see that the composite Simpson rule $S_c(f)$ can be written as:
$$S_c(f) = \displaystyle \frac{h}{3(2n_\text{sub})} \left [ f(x_0) + f(x_{2n_\text{sub}}) + 2 \sum_{i=1}^{n_\text{sub}-1}f(x_{2i}) + 4 \sum_{i=1}^{n_\text{sub}}f(x_{2i-1})\right ]$$
with $h=b-a$ and $n_\text{sub}$ the number of subintervals.
<img src="https://dl.dropboxusercontent.com/s/8rx32vdtulpdflm/Simpson_compuesto.png?dl=0" heigth="200" width="200">
```{admonition} Observations
:class: tip
* For the Simpson rule the nodes are obtained with the formula $x_i = a +\frac{i}{2}\hat{h}, \forall i=0,\dots,2n_\text{sub}, \hat{h}=\frac{h}{n_\text{sub}}$.
* Note that for the Simpson rule Scf we have $n = 2n_\text{sub}+1$, with $n$ the number of nodes.
```
```{margin}
This [link](https://www.dropbox.com/s/qrbcs5n57kp5150/Simpson-6-subintervalos.pdf?dl=0) contains a visual aid for the Scf rule.
```
```{admonition} Exercise
:class: tip
Implement the composite Simpson rule to approximate the integral $\int_0^1e^{-x^2}dx$. Compute the relative error and plot $n$ vs relative error for $n=1,10,100,1000,10000$, using *Numpy* and `iterators`.
```
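For comparison after attempting the exercise, here is a minimal plain-numpy sketch of the composite Simpson rule (the name `Scf` and its signature are our own choice, mirroring `Rcf` and `Tcf`; it does not use iterators as the exercise requests):
```
import numpy as np

def Scf(f, a, b, n):
    """Composite Simpson rule with n subintervals (2n+1 equally spaced nodes)."""
    h_hat = (b - a) / n
    nodes = np.linspace(a, b, 2 * n + 1)   # x_i = a + (i/2) * h_hat, i = 0, ..., 2n
    fx = f(nodes)
    return h_hat / 6 * (fx[0] + fx[-1] + 2 * np.sum(fx[2:-1:2]) + 4 * np.sum(fx[1::2]))

f = lambda x: np.exp(-x**2)
print(Scf(f, 0, 1, 10))   # ≈ 0.746824..., close to the value reported by quad above
```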
## Error expressions for the composite rectangle, trapezoidal and Simpson rules
The error terms of the rectangle, trapezoidal and Simpson rules can be obtained via interpolation or via Taylor's theorem. See [Diferenciación e Integración](https://www.dropbox.com/s/jfrxanjls8kndjp/Diferenciacion_e_Integracion.pdf?dl=0) for details and {ref}`Taylor polynomials and numerical differentiation <PTDN>` for the theorem. Assuming $f$ satisfies suitable conditions on its derivatives, these errors are:
$$\text{Err}Rc(f) = \frac{b-a}{6}f^{(2)}(\xi_r)\hat{h}^2, \xi_r \in [a,b]$$
$$\text{Err}Tc(f)=-\frac{b-a}{12}f^{(2)}(\xi_t)\hat{h}^2, \xi_t \in [a,b]$$
$$\text{Err}Sc(f)=-\frac{b-a}{180}f^{(4)}(\xi_S)\hat{h}^4, \xi_S \in [a,b].$$
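These orders can be checked empirically: halving $\hat{h}$ (doubling $n_\text{sub}$) should reduce the error of the rectangle and trapezoidal rules by a factor of about $4$, and that of Simpson by about $16$. A small sketch for the rectangle rule (the vectorized helper `Rcf_np` is our own variant of `Rcf`) could look like this:
```
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)
obj, _ = quad(f, 0, 1)

def Rcf_np(f, a, b, n):
    """Composite rectangle (midpoint) rule, vectorized with numpy."""
    h_hat = (b - a) / n
    nodes = a + (np.arange(n) + 0.5) * h_hat
    return h_hat * np.sum(f(nodes))

for n in (10, 20, 40):
    print(n, abs(Rcf_np(f, 0, 1, n) - obj))   # each error is roughly 1/4 of the previous one
```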
(IMC)=
## Monte Carlo integration
Monte Carlo numerical integration methods are similar to quadrature methods in that points are chosen at which the integrand is evaluated and its values are added up. The essential difference with quadrature methods is that in Monte Carlo integration the points are **selected *at random*** (in fact pseudo-randomly, since they are generated by a computer program) instead of being generated by a formula.
### Problem
In this section $n$ denotes the number of nodes.
Numerically approximate the integral $\displaystyle \int_{\Omega}f(x)dx$ for $x \in \mathbb{R}^\mathcal{D}, \Omega \subseteq \mathbb{R}^\mathcal{D}, f: \mathbb{R}^\mathcal{D} \rightarrow \mathbb{R}$ a function such that the integral is well defined on $\Omega$.
For example, for $\mathcal{D}=2:$
<img src="https://dl.dropboxusercontent.com/s/xktwjmgbf8aiekw/integral_2_dimensiones.png?dl=0" heigth="500" width="500">
To solve this problem when $\Omega$ is a rectangle, we could use the Newton-Cotes or Gaussian quadrature rules in one dimension while keeping the other dimension fixed. However, consider the following situation:
The rectangle (or midpoint) and trapezoidal rules have an error of order $\mathcal{O}(h^2)$ regardless of whether integrals in one or several dimensions are being approximated. Suppose $n$ nodes are used to obtain a spacing $\hat{h}$ in one dimension; then in $\mathcal{D}$ dimensions $N=n^\mathcal{D}$ evaluations of the integrand are required. Put differently, if $N$ equals $10,000$ and $\mathcal{D}=4$, the error is of order $\mathcal{O}(N^{-2/\mathcal{D}})$, which corresponds to $\hat{h}=.1$ and only about **two correct digits** in the approximation (recall that $\hat{h}$ is proportional to $n^{-1}$ and $n = N^{1/\mathcal{D}}$). This enormous effort of evaluating the integrand $N$ times for a small accuracy stems from the problem of generating enough points to *fill* a $\mathcal{D}$-dimensional space and is known as [***the curse of dimensionality***](https://en.wikipedia.org/wiki/Curse_of_dimensionality).
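The arithmetic of the previous paragraph can be spelled out in a few lines (a small illustrative calculation added here, using the numbers quoted above):
```
N = 10_000                    # total evaluations of the integrand
D = 4                         # number of dimensions
n = N ** (1 / D)              # nodes per dimension: 10
h_hat = 1 / n                 # spacing on [0, 1]: 0.1
error_order = N ** (-2 / D)   # O(N^(-2/D)) = O(h_hat^2) = 0.01 -> about two correct digits
print(n, h_hat, error_order)
```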
An option for dealing with the situation above when high precision is not required (for example, when a precision of $10^{-4}$, i.e. $4$ digits, is enough) is the Monte Carlo integration method (so named because of its use of random numbers). Monte Carlo integration is based on the geometric interpretation of integrals: computing the integral of the initial problem requires computing the **hypervolume** of $\Omega$.
### Example
Suppose we want to approximate the area of the circle of radius $1$ centred at the origin:
<img src="https://dl.dropboxusercontent.com/s/xmtcxw3wntfxuau/monte_carlo_1.png?dl=0" heigth="300" width="300">
whose area is $\pi r^2 = \pi$.
To do this we **enclose** the circle in a square of side $2$:
<img src="https://dl.dropboxusercontent.com/s/igsn57vuahem0il/monte_carlo_2.png?dl=0" heigth="200" width="200">
If we have $n$ points in the square:
<img src="https://dl.dropboxusercontent.com/s/a4krdneo0jaerqz/monte_carlo_3.png?dl=0" heigth="200" width="200">
and we consider the $m$ points that fall inside the circle:
<img src="https://dl.dropboxusercontent.com/s/pr4c5e57r4fawdt/monte_carlo_4.png?dl=0" heigth="200" width="200">
then $\frac{\text{circle area}}{\text{square area}} \approx \frac{m}{n}$, hence circle area $\approx$ square area $\cdot \frac{m}{n}$, and the approximation improves as $n$ grows.
Numerical test:
```
density_p=int(2.5*10**3)
x_p=np.random.uniform(-1,1,(density_p,2))
plt.scatter(x_p[:,0],x_p[:,1],marker='.',color='g')
density=1e-5
x=np.arange(-1,1,density)
y1=np.sqrt(1-x**2)
y2=-np.sqrt(1-x**2)
plt.plot(x,y1,'r',x,y2,'r')
plt.title('Integración por Monte Carlo')
plt.grid()
plt.show()
f=lambda x: np.sqrt(x[:,0]**2 + x[:,1]**2) #norm2 definition
ind=f(x_p)<=1
x_p_subset=x_p[ind]
plt.scatter(x_p_subset[:,0],x_p_subset[:,1],marker='.',color='r')
plt.title('Integración por Monte Carlo')
plt.grid()
plt.show()
```
The area of the circle is approximately:
```
square_area = 4
print(square_area*len(x_p_subset)/len(x_p))
```
If we increase the number of points...
```
density_p=int(10**4)
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
print(square_area*len(x_p_subset)/len(x_p))
density_p=int(10**5)
x_p=np.random.uniform(-1,1,(density_p,2))
ind=f(x_p)<=1
x_p_subset=x_p[ind]
print(square_area*len(x_p_subset)/len(x_p))
```
```{admonition} Comments
* The Monte Carlo method reviewed in the previous example tells us that we must enclose the integration region $\Omega$. For a more general region $\Omega$:
<img src="https://dl.dropboxusercontent.com/s/ke6hngwue3ovpaz/monte_carlo_5.png?dl=0" heigth="300" width="300">
Monte Carlo integration then reads:
$$\displaystyle \int_\Omega f d\Omega \approx V \overline{f}$$
where $V$ is the hypervolume of a region $\Omega_E$ that encloses $\Omega$, that is $\Omega \subseteq \Omega_E$, $\{x_1,\dots,x_n\}$ is a set of points uniformly distributed in $\Omega_E$, and $\overline{f}=\frac{1}{n}\displaystyle \sum_{i=1}^nf(x_i)$
* We use $\overline{f}$ because $\displaystyle \sum_{i=1}^nf(x_i)$ plays the role of $m$ if we think of $f$ as a restriction that the $n$ points must satisfy, as in the circle-area example: circle area $\approx$ square area $\cdot\frac{m}{n}$ (there, the square area is the hypervolume $V$).
* Desirable features of a region $\Omega_E$ enclosing $\Omega$ are that:
    * it is easy to generate uniform random numbers in it;
    * it is easy to obtain its hypervolume.
```
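As a sketch of the formula $\int_\Omega f d\Omega \approx V \overline{f}$ for a non-rectangular region (an example of our own, not part of the original note), the following lines approximate $\int_\Omega (x^2+y^2)\,dA$ over the unit disk $\Omega$, enclosing it in the square $\Omega_E=[-1,1]^2$ with hypervolume (area) $V=4$; points falling outside $\Omega$ contribute $0$ to $\overline{f}$:
```
import numpy as np

rng = np.random.default_rng(0)
n = 10**5
points = rng.uniform(-1, 1, (n, 2))     # uniform samples in the enclosing square Omega_E
V = 4                                   # area (hypervolume) of Omega_E = [-1, 1]^2

r2 = points[:, 0]**2 + points[:, 1]**2
f_vals = np.where(r2 <= 1, r2, 0.0)     # f(x, y) = x^2 + y^2 inside Omega, 0 outside
print(V * f_vals.mean())                # exact value is pi/2 ≈ 1.5708
```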
### Examples
**Approximate the following integrals:**
```
density_p=int(10**4)
```
* $\displaystyle \int_0^1\frac{4}{1+x^2}dx = \pi$
```
f = lambda x: 4/(1+x**2)
x_p = np.random.uniform(0,1,density_p)
obj = math.pi
a = 0
b = 1
vol = b-a
ex_1 = vol*np.mean(f(x_p))
print("error relativo: {:0.4e}".format(compute_error(obj, ex_1)))
```
* $\displaystyle \int_1^2 \frac{1}{x}dx = \log{2}$.
```
f = lambda x: 1/x
x_p = np.random.uniform(1,2,density_p)
obj = math.log(2)
a = 1
b = 2
vol = b-a
ex_2 = vol*np.mean(f(x_p))
print("error relativo: {:0.4e}".format(compute_error(obj, ex_2)))
```
* $\displaystyle \int_{-1}^1 \int_0^1x^2+y^2dxdy = \frac{4}{3}$.
```
f = lambda x,y:x**2+y**2
a1 = -1
b1 = 1
a2 = 0
b2 = 1
x_p = np.random.uniform(a1,b1,density_p)
y_p = np.random.uniform(a2,b2,density_p)
obj = 4/3
vol = (b1-a1)*(b2-a2)
ex_3 = vol*np.mean(f(x_p,y_p))
print("error relativo: {:0.4e}".format(compute_error(obj, ex_3)))
```
* $\displaystyle \int_0^{\frac{\pi}{2}} \int_0^{\frac{\pi}{2}}\cos(x)\sin(y)dxdy=1$.
```
f = lambda x,y:np.cos(x)*np.sin(y)
a1 = 0
b1 = math.pi/2
a2 = 0
b2 = math.pi/2
x_p = np.random.uniform(a1,b1,density_p)
y_p = np.random.uniform(a2,b2,density_p)
obj = 1
vol = (b1-a1)*(b2-a2)
ex_4 = vol*np.mean(f(x_p,y_p))
print("error relativo: {:0.4e}".format(compute_error(obj, ex_4)))
```
* $\displaystyle \int_0^1\int_{\frac{-1}{2}}^0\int_0^{\frac{1}{3}}(x+2y+3z)^2dxdydz =\frac{1}{12}$.
```
f = lambda x,y,z:(x+2*y+3*z)**2
a1 = 0
b1 = 1
a2 = -1/2
b2 = 0
a3 = 0
b3 = 1/3
x_p = np.random.uniform(a1,b1,density_p)
y_p = np.random.uniform(a2,b2,density_p)
z_p = np.random.uniform(a3,b3,density_p)
obj = 1/12
vol = (b1-a1)*(b2-a2)*(b3-a3)
ex_5 = vol*np.mean(f(x_p,y_p,z_p))
print("error relativo: {:0.4e}".format(compute_error(obj, ex_5)))
```
### What is the error of the Monte Carlo integration approximation?
To obtain the expression of the error of this approximation, suppose $x_1, x_2,\dots x_n$ are independent, uniformly distributed random variables. Then:
$$\text{Err}(\overline{f})=\sqrt{\text{Var}(\overline{f})}=\sqrt{\text{Var}\left( \frac{1}{n} \displaystyle \sum_{i=1}^nf(x_i)\right)}=\dots=\sqrt{\frac{\text{Var}(f(x))}{n}}$$
with $x$ a uniformly distributed random variable.
An estimator of $\text{Var}(f(x))$ is $\frac{1}{n}\displaystyle \sum_{i=1}^n(f(x_i)-\overline{f})^2=\overline{f^2}-\overline{f}^2$, so $\hat{\text{Err}}(\overline{f}) = \sqrt{\frac{\overline{f^2}-\overline{f}^2}{n}}$.
It follows that $\displaystyle \int_\Omega f d\Omega$ lies in the interval:
$$V(\overline{f} \pm \text{Err}(\overline{f})) \approx V(\overline{f} \pm \hat{\text{Err}}(\overline{f}))=V\overline{f} \pm V\sqrt{\frac{\overline{f^2}-\overline{f}^2}{n}}$$
```{admonition} Comments
* The $\pm$ signs in the approximation error do **not** represent a rigorous bound; they represent one standard deviation.
* Unlike the quadrature-rule approximations, with $n$ points we obtain a given accuracy independently of the dimension $\mathcal{D}$.
* Even as $\mathcal{D} \rightarrow \infty$ we have $\hat{\text{Err}}(\overline{f}) = \mathcal{O}\left(\frac{1}{\sqrt{n}} \right)$, so gaining one extra decimal digit of accuracy in Monte Carlo integration requires increasing the number of points by a factor of $10^2$.
```
```{admonition} Observation
:class: tip
Note that if $f$ is constant then $\hat{\text{Err}}(\overline{f})=0$. This implies that if $f$ is nearly constant and $\Omega_E$ encloses $\Omega$ tightly, we obtain a very precise estimate of $\displaystyle \int_\Omega f d\Omega$; this is why in Monte Carlo integration one performs changes of variable that make $f$ approximately constant and that, in addition, yield regions $\Omega_E$ enclosing $\Omega$ almost exactly (and in which it is also easy to generate pseudo-random numbers!).
```
### Example
For the earlier example $\displaystyle \int_0^1\frac{4}{1+x^2}dx = \pi$ we have:
```
f = lambda x: 4/(1+x**2)
x_p = np.random.uniform(0,1,density_p)
obj = math.pi
a = 0
b = 1
vol = b-a
f_bar = np.mean(f(x_p))
ex_6 = vol*f_bar
print("error relativo: {:0.4e}".format(compute_error(obj,ex_6 )))
error_std = math.sqrt(sum((f(x_p)-f_bar)**2)/density_p**2)
print(error_std)
```
The interval is:
```
print((ex_6-vol*error_std, ex_6+vol*error_std))
```
```{admonition} Exercises
:class: tip
Approximate the following integrals and report relative errors and the estimation interval in a table:
* $\displaystyle \int_0^1\int_0^1\sqrt{x+y}dydx=\frac{2}{3}\left(\frac{2}{5}2^{5/2}-\frac{4}{5}\right)$.
* $\displaystyle \int_D \int \sqrt{x+y}dydx=8\frac{\sqrt{2}}{15}$ where $D=\{(x,y) \in \mathbb{R}^2 | 0 \leq x \leq 1, -x \leq y \leq x\}$.
* $\displaystyle \int_D \int \exp{(x^2+y^2)}dydx = \pi(e^9-1)$ where $D=\{(x,y) \in \mathbb{R}^2 | x^2+y^2 \leq 9\}$.
* $\displaystyle \int_0^2 \int_{-1}^1 \int_0^1 (2x+3y+z)dzdydx = 10$.
```
### Approximating characteristics of random variables
Monte Carlo integration is also used to approximate characteristics of continuous random variables. For example, if $X$ is a continuous random variable, its mean is given by:
$$E_f[h(X)] = \displaystyle \int_{S_X}h(x)f(x)dx$$
where $f$ is the density function of $X$, $S_X$ is the support of $X$ and $h$ is a transformation. Then:
$$E_f[h(X)] \approx \frac{1}{n} \displaystyle \sum_{i=1}^nh(x_i)=\overline{h}_n$$
with $\{x_1,x_2,\dots,x_n\}$ a sample from $f$. By the law of large numbers:
$$\overline{h}_n \xrightarrow{n \rightarrow \infty} E_f[h(X)]$$
with **almost sure convergence**. Moreover, if $E_f[h^2(X)] < \infty$ then the approximation error of $\overline{h}_n$ is of order $\mathcal{O}\left(\frac{1}{\sqrt{n}} \right)$ and an estimate of this error is $\hat{\text{Err}}(\overline{h}) = \sqrt{\frac{\overline{h^2}-\overline{h}^2}{n}}$. By the central limit theorem:
$$\frac{\overline{h}_n-E_f[h(X)]}{\hat{\text{Err}}(\overline{h})} \xrightarrow{n \rightarrow \infty} N(0,1)$$
with $N(0,1)$ a Normal distribution with $\mu=0,\sigma=1$; therefore, as $n \rightarrow \infty$, a $95\%$ confidence interval for $E_f[h(X)]$ is $\overline{h}_n \pm z_{.975} \hat{\text{Err}}(\overline{h})$.
One of the delicate steps in the development above is obtaining a sample from $f$. For continuous variables one can use the inverse transform (probability integral transform) theorem, as in the sketch below. Other methods are the so-called [Markov chain Monte Carlo](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo) (MCMC) methods.
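A small end-to-end sketch (distribution and sample size chosen by us for illustration): estimate $E[X]$ for $X \sim \text{Exp}(1)$, drawing the sample with the inverse transform $x = -\log(1-u)$, $u \sim U(0,1)$, and reporting the $95\%$ confidence interval $\overline{h}_n \pm z_{.975} \hat{\text{Err}}(\overline{h})$:
```
import numpy as np

rng = np.random.default_rng(2021)
n = 10**5
u = rng.uniform(0, 1, n)
x = -np.log(1 - u)                 # inverse transform sampling: X ~ Exp(1), so E[X] = 1

h_bar = x.mean()                   # Monte Carlo estimate of E_f[h(X)] with h(x) = x
err_hat = np.sqrt((np.mean(x**2) - h_bar**2) / n)
z = 1.96                           # z_{.975}
print(h_bar, (h_bar - z * err_hat, h_bar + z * err_hat))
```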
```{admonition} Exercises
:class: tip
1. Solve the exercises and answer the questions in this note.
```
**References**
1. R. L. Burden, J. D. Faires, Numerical Analysis, Brooks/Cole Cengage Learning, 2005.
2. M. T. Heath, Scientific Computing. An Introductory Survey, McGraw-Hill, 2002.
# Use Case 1: Kögur
In this example we will subsample a dataset stored on SciServer using methods resembling field-work procedures.
Specifically, we will estimate volume fluxes through the [Kögur section](http://kogur.whoi.edu) using (i) mooring arrays, and (ii) ship surveys.
```
# Import oceanspy
import oceanspy as ospy
# Import additional packages used in this notebook
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
```
The following cell starts a dask client (see the [Dask Client section in the tutorial](Tutorial.ipynb#Dask-Client)).
```
# Start client
from dask.distributed import Client
client = Client()
client
```
This command opens one of the datasets available on SciServer.
```
# Open dataset stored on SciServer.
od = ospy.open_oceandataset.from_catalog('EGshelfIIseas2km_ASR_full')
```
The following cell changes the default parameters used by the plotting functions.
```
import matplotlib as mpl
%matplotlib inline
mpl.rcParams['figure.figsize'] = [10.0, 5.0]
```
## Mooring array
The following diagram shows the instrumentation deployed by observational oceanographers to monitor the Kögur section (source: http://kogur.whoi.edu/img/array_boxes.png).
![Kögur Array](http://kogur.whoi.edu/img/array_boxes.png)
The analogous OceanSpy function (`subsample.mooring_array`) extracts vertical sections from the model using two criteria:
* Vertical sections follow great circle paths (unless cartesian coordinates are used).
* Vertical sections follow the grid of the model (extracted moorings are adjacent to each other, and the native grid of the model is preserved).
```
# Kögur information
lats_Kogur = [ 68.68, 67.52, 66.49]
lons_Kogur = [-26.28, -23.77, -22.99]
depth_Kogur = [0, -1750]
# Select time range:
# September 2007, extracting one snapshot every 3 days
timeRange = ['2007-09-01', '2007-09-30T18']
timeFreq = '3D'
# Extract mooring array and fields used by this notebook
od_moor = od.subsample.mooring_array(Xmoor=lons_Kogur,
Ymoor=lats_Kogur,
ZRange=depth_Kogur,
timeRange=timeRange,
timeFreq=timeFreq,
varList=['Temp', 'S',
'U', 'V',
'dyG', 'dxG',
'drF',
'HFacS', 'HFacW'])
```
The following cell shows how to store the mooring array in a NetCDF file. In this use case, we only use this feature to create a checkpoint. Another option could be to move the file to other servers or computers. If the NetCDF is re-opened using OceanSpy (as shown below), all OceanSpy functions are enabled and can be applied to the `oceandataset`.
```
# Store the new mooring dataset
filename = 'Kogur_mooring.nc'
od_moor.to_netcdf(filename)
# The NetCDF can now be re-opened with oceanspy at any time,
# and on any computer
od_moor = ospy.open_oceandataset.from_netcdf(filename)
# Print size
print('Size:')
print(' * Original dataset: {0:.1f} TB'.format(od.dataset.nbytes*1.E-12))
print(' * Mooring dataset: {0:.1f} MB'.format(od_moor.dataset.nbytes*1.E-6))
print()
```
The following map shows the location of the moorings forming the Kögur section.
```
# Plot map and mooring locations
fig = plt.figure(figsize=(5, 5))
ax = od.plot.horizontal_section(varName='Depth')
XC = od_moor.dataset['XC'].squeeze()
YC = od_moor.dataset['YC'].squeeze()
line = ax.plot(XC, YC, 'r.',
transform=ccrs.PlateCarree())
```
The following figure shows the grid structure of the mooring array. The original grid structure of the model is unchanged, and each mooring is associated with one C-gridpoint (e.g., hydrography), two U-gridpoints and two V-gridpoints (e.g., velocities), and four G-gridpoints (e.g., vertical component of relative vorticity).
```
# Print grid
print(od_moor.grid)
print()
print(od_moor.dataset.coords)
print()
# Plot 10 moorings and their grid points
fig, ax = plt.subplots(1, 1)
n_moorings = 10
# Markers:
for _, (pos, mark, col) in enumerate(zip(['C', 'G', 'U', 'V'],
['o', 'x', '>', '^'],
['k', 'm', 'r', 'b'])):
X = od_moor.dataset['X'+pos].values[:n_moorings].flatten()
Y = od_moor.dataset['Y'+pos].values[:n_moorings].flatten()
ax.plot(X, Y, col+mark, markersize=20, label=pos)
if pos == 'C':
for i in range(n_moorings):
ax.annotate(str(i), (X[i], Y[i]),
size=15, weight="bold", color='w', ha='center', va='center')
ax.set_xticks(X, minor=False)
ax.set_yticks(Y, minor=False)
elif pos == 'G':
ax.set_xticks(X, minor=True)
ax.set_yticks(Y, minor=True)
ax.legend(prop={'size': 20})
ax.grid(which='major', linestyle='-')
ax.grid(which='minor', linestyle='--')
```
## Plots
### Vertical sections
We can now use OceanSpy to plot vertical sections. Here we plot isopycnal contours on top of the mean meridional velocities (`V`). Although there are two V-points associated with each mooring, the plot can be displayed because OceanSpy automatically performs a linear interpolation using the grid object.
```
# Plot time mean
ax = od_moor.plot.vertical_section(varName='V', contourName='Sigma0', meanAxes='time',
robust=True, cmap='coolwarm')
```
It is possible to visualize all the snapshots by omitting the `meanAxes='time'` argument:
```
# Plot all snapshots
ax = od_moor.plot.vertical_section(varName='V', contourName='Sigma0',
robust=True, cmap='coolwarm', col_wrap=5)
# Alternatively, use the following command to produce a movie:
# anim = od_moor.animate.vertical_section(varName='V', contourName='Sigma0', ...)
```
### TS-diagrams
Here we use OceanSpy to plot a Temperature-Salinity diagram.
```
ax = od_moor.plot.TS_diagram()
# Alternatively, use the following command
# to explore how the water masses change with time:
# anim = od_moor.animate.TS_diagram()
```
We can also color each TS point using any field in the original dataset, or any field computed by OceanSpy. Fields that are not on the same grid of temperature and salinity are automatically regridded by OceanSpy.
```
ax = od_moor.plot.TS_diagram(colorName='V',
meanAxes='time',
cmap_kwargs={'robust': True,
'cmap': 'coolwarm'})
```
## Volume flux
OceanSpy can be used to compute accurate volume fluxes through vertical sections.
The function `compute.mooring_volume_transport` calculates the inflow/outflow through all grid faces of the vertical section.
This function creates a new dimension named `path` because transports can be computed using two paths (see the plot below).
```
# Show volume flux variables
ds_Vflux = ospy.compute.mooring_volume_transport(od_moor)
od_moor = od_moor.merge_into_oceandataset(ds_Vflux)
print(ds_Vflux)
# Plot 10 moorings and volume flux directions.
fig, ax = plt.subplots(1, 1)
ms = 10
s = 100
ds = od_moor.dataset
_ = ax.step(ds['XU'].isel(Xp1=0).squeeze().values,
ds['YV'].isel(Yp1=0).squeeze().values, 'C0.-', ms=ms, label='path0')
_ = ax.step(ds['XU'].isel(Xp1=1).squeeze().values,
ds['YV'].isel(Yp1=1).squeeze().values, 'C1.-', ms=ms, label='path1')
_ = ax.plot(ds['XC'].squeeze(),
ds['YC'].squeeze(), 'k.', ms=ms, label='mooring')
_ = ax.scatter(ds['X_Vtransport'].where(ds['dir_Vtransport'] == 1),
ds['Y_Vtransport'].where(ds['dir_Vtransport'] == 1),
s=s, c='k', marker='^', label='meridional direction')
_ = ax.scatter(ds['X_Utransport'].where(ds['dir_Utransport'] == 1),
ds['Y_Utransport'].where(ds['dir_Utransport'] == 1),
s=s, c='k', marker='>', label='zonal direction')
_ = ax.scatter(ds['X_Vtransport'].where(ds['dir_Vtransport'] == -1),
ds['Y_Vtransport'].where(ds['dir_Vtransport'] == -1),
s=s, c='k', marker='v', label='meridional direction')
_ = ax.scatter(ds['X_Utransport'].where(ds['dir_Utransport'] == -1),
ds['Y_Utransport'].where(ds['dir_Utransport'] == -1),
s=s, c='k', marker='<', label='zonal direction')
# Only show a few moorings
m_start = 50
m_end = 70
xlim = ax.set_xlim(sorted([ds['XC'].isel(mooring=m_start).values,
ds['XC'].isel(mooring=m_end).values]))
ylim = ax.set_ylim(sorted([ds['YC'].isel(mooring=m_start).values,
ds['YC'].isel(mooring=m_end).values]))
ax.legend()
```
Here we compute and plot the cumulative mean transport through the Kögur mooring array.
```
# Compute cumulative transport
tran_moor = od_moor.dataset['transport']
cum_tran_moor = tran_moor.sum('Z').mean('time').cumsum('mooring')
cum_tran_moor.attrs = tran_moor.attrs
fig, ax = plt.subplots(1, 1)
lines = cum_tran_moor.squeeze().plot.line(hue='path', linewidth=3)
tot_mean_tran_moor = cum_tran_moor.isel(mooring=-1).mean('path')
title = ax.set_title('TOTAL MEAN TRANSPORT: {0:.1f} Sv'
''.format(tot_mean_tran_moor.values))
```
Here we compute the transport of the overflow, defined as water with density greater than 27.8 kg m$^{-3}$.
```
# Mask transport using density
od_moor = od_moor.compute.potential_density_anomaly()
density = od_moor.dataset['Sigma0'].squeeze()
oflow_moor = tran_moor.where(density>27.8)
# Compute cumulative transport as before
cum_oflow_moor = oflow_moor.sum('Z').mean('time').cumsum('mooring')
cum_oflow_moor.attrs = oflow_moor.attrs
fig, ax = plt.subplots(1, 1)
lines = cum_oflow_moor.squeeze().plot.line(hue='path', linewidth=3)
tot_mean_oflow_moor = cum_oflow_moor.isel(mooring=-1).mean('path')
title = ax.set_title('TOTAL MEAN OVERFLOW TRANSPORT: {0:.1f} Sv'
''.format(tot_mean_oflow_moor.values))
```
## Ship survey
The following picture shows the NATO Research Vessel Alliance, a ship designed to carry out research at sea (source: http://www.marina.difesa.it/noi-siamo-la-marina/mezzi/forze-navali/PublishingImages/_alliance.jpg).
![Survey ship](http://www.marina.difesa.it/noi-siamo-la-marina/mezzi/forze-navali/PublishingImages/_alliance.jpg)
The OceanSpy function analogous to a ship survey (`subsample.survey_stations`) extracts vertical sections from the model using two criteria:
* Vertical sections follow great circle paths (unless cartesian coordinates are used) with constant horizontal spacing between stations.
* Interpolation is performed and all fields are returned at the same locations (the native grid of the model is NOT preserved).
```
# Spacing between interpolated stations
delta_Kogur = 2 # km
# Extract survey stations
# Reduce dataset to speed things up:
od_surv = od.subsample.survey_stations(Xsurv=lons_Kogur,
Ysurv=lats_Kogur,
delta=delta_Kogur,
ZRange=depth_Kogur,
timeRange=timeRange,
timeFreq=timeFreq,
varList=['Temp', 'S',
'U', 'V',
'drC', 'drF',
'HFacC', 'HFacW', 'HFacS'])
# Plot map and survey stations
fig = plt.figure(figsize=(5, 5))
ax = od.plot.horizontal_section(varName='Depth')
XC = od_surv.dataset['XC'].squeeze()
YC = od_surv.dataset['YC'].squeeze()
line = ax.plot(XC, YC, 'r.',
transform=ccrs.PlateCarree())
```
## Orthogonal velocities
We can use OceanSpy to compute the velocity components orthogonal and tangential to the Kögur section.
```
od_surv = od_surv.compute.survey_aligned_velocities()
```
The following animation shows isopycnal contours on top of the velocity component orthogonal to the Kögur section.
```
anim = od_surv.animate.vertical_section(varName='ort_Vel', contourName='Sigma0',
robust=True, cmap='coolwarm',
display=False)
# The following code is necessary to display the animation in the documentation.
# When the notebook is executed, remove the code below and set
# display=True in the command above to show the animation.
import matplotlib.pyplot as plt
dirName = '_static'
import os
try:
os.mkdir(dirName)
except FileExistsError:
pass
anim.save('{}/Kogur.mp4'.format(dirName))
plt.close()
!ffmpeg -loglevel panic -y -i _static/Kogur.mp4 -filter_complex "[0:v] fps=12,scale=480:-1,split [a][b];[a] palettegen [p];[b][p] paletteuse" _static/Kogur.gif
!rm -f _static/Kogur.mp4
```
![Kogur gif](_static/Kogur.gif)
Finally, we can infer the volume flux by integrating the orthogonal velocities.
```
# Integrate along Z
od_surv = od_surv.compute.integral(varNameList='ort_Vel',
axesList=['Z'])
# Compute transport using weights
od_surv = od_surv.compute.weighted_mean(varNameList='I(ort_Vel)dZ',
axesList=['station'])
transport_surv = (od_surv.dataset['I(ort_Vel)dZ'] *
od_surv.dataset['weight_I(ort_Vel)dZ'])
# Convert in Sverdrup
transport_surv = transport_surv * 1.E-6
# Compute cumulative transport
cum_transport_surv = transport_surv.cumsum('station').rename('Horizontal volume transport')
cum_transport_surv.attrs['units'] = 'Sv'
```
Here we plot the cumulative transport for each snapshot.
```
# Plot
fig, ax = plt.subplots(figsize=(13,5))
lines = cum_transport_surv.squeeze().plot.line(hue='time', linewidth=3)
tot_mean_transport = cum_transport_surv.isel(station=-1).mean('time')
title = ax.set_title('TOTAL MEAN TRANSPORT: {0:.1f} Sv'.format(tot_mean_transport.values))
```
# CS229: Problem Set 4
## Problem 4: Independent Component Analysis
**C. Combier**
This IPython Notebook provides solutions to Stanford's CS229 (Machine Learning, Fall 2017) graduate course problem set 4, taught by Andrew Ng.
The problem set can be found here: [./ps4.pdf](ps4.pdf)
I chose to write the solutions to the coding questions in Python, whereas the Stanford class is taught with Matlab/Octave.
## Notation
- $x_i$ is the $i^{th}$ feature vector
- $y_i$ is the expected outcome for the $i^{th}$ training example
- $z_i$'s are the latent (hidden) variables
- $m$ is the number of training examples
- $n$ is the number of features
For clarity, I've inlined the code of the provided helper script ```belsej.py```.
## Dependencies
I installed ```sounddevice``` to Anaconda with the following command:
```conda install -c conda-forge python-sounddevice ```
First, let's set up the environment and write helper functions:
- ```normalize``` ensures all mixes have the same volume
- ```load_data``` loads the mix
- ```play``` plays the audio using ```sounddevice```
```
### Independent Components Analysis
###
### This program requires a working installation of:
###
### On Mac:
### conda install -c conda-forge python-sounddevice
###
import sounddevice as sd
import numpy as np
Fs = 11025
def normalize(dat):
return 0.99 * dat / np.max(np.abs(dat))
def load_data():
mix = np.loadtxt('data/mix.dat')
return mix
def play(vec):
sd.play(vec, Fs, blocking=True)
```
Next we write a numerically stable sigmoid function, to avoid overflows:
```
# Numerically stable sigmoid
def sigmoid(x):
return np.where(x >= 0, 1 / (1 + np.exp(-x)), np.exp(x) / (1 + np.exp(x)))
```
The following function calculates the weights that separate the independent components of the five mixes, using stochastic gradient updates with an annealed learning-rate schedule to speed up convergence.
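For reference, the weight update implemented below can be written as the following gradient step (with $g$ the sigmoid and $x^{(i)}$ the current training example), which is exactly what the line inside the inner loop computes:

$$W \leftarrow W + \alpha \left[ \left(1 - 2\,g\big(W x^{(i)}\big)\right) {x^{(i)}}^{\top} + \left(W^{\top}\right)^{-1} \right]$$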
```
def unmixer(X):
M, N = X.shape
W = np.eye(N)
anneal = [0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.02, 0.02, 0.01, 0.01,
0.005, 0.005, 0.002, 0.002, 0.001, 0.001]
print('Separating tracks ...')
for alpha in anneal:
for xi in X:
W += alpha * (np.outer(1 - 2 * sigmoid(np.dot(W, xi.T)), xi) + np.linalg.inv(W.T))
return W
```
Finally, this last function unmixes the 5 mixes to extract the independent components.
```
def unmix(X, W):
S = np.zeros(X.shape)
S = X.dot(W.T)
return S
```
Now, we load the mix data:
```
X = normalize(load_data())
for i in range(X.shape[1]):
print('Playing mixed track %d' % i)
play(X[:, i])
```
Next, we run Independent Component Analysis and separate the components in the mix:
```
W = unmixer(X)
S = normalize(unmix(X, W))
```
Finally, we play the separated components:
```
for i in range(S.shape[1]):
print('Playing separated track %d' % i)
play(S[:, i])
```
# Helm 201
A deep-dive into Helm (v3) and details like
* Templating
* Charts and Subcharts
* Usage and internal structure in Kubernetes
* Integrations
```
wd_init = "work/helm-init2"
!helm version
```
---
---
## Init
* Create a new template / Helm Chart
```
!echo $wd_init
!mkdir -p $wd_init
!helm create $wd_init/demo-helm-201
!tree $wd_init
```
## Chart Structure and Overview
* `Chart.yaml` common meta information
* avoid using `appVersion` - handle the version in env-specific value files or as a field that is overwritten during Helm execution
```
!cat $wd_init/demo-helm-201/Chart.yaml | grep -B2 -i 'version:'
```
----
* Templating
* include output/result from namespaced functions
* Whitespaces and new lines, indent
* Action (00)
* modify service template
```
!echo "Render template and generate Kubernetes resource files"
!helm template demo-helm-201-common $wd_init/demo-helm-201 -f $wd_init/demo-helm-201/values.yaml --output-dir=work/out/common
```
----
* Templating, Variables
* `.Values.*` holds all variables (from file and command line)
* `values.yaml` and any additional values-file will be merged
* Action (01)
* `image.version`
* env-specific values file
```
!echo "Render template and generate Kubernetes resource files for *TEST* stage"
!helm template demo-helm-201-test $wd_init/demo-helm-201 -f $wd_init/demo-helm-201/values.test.yaml --output-dir=work/out/test
```
---
* Templating, Functions
* `_helpers.tpl` holds set of helper functions used in template files
* common Go template functions are included (and, or, len, ...)
----
* Templating, Functions
* `with` set the scope
* `range` iterate
* Action (02)
* new `sa.yaml`
* values in `values.test.yaml`
```
!echo "Render template and generate Kubernetes resource files for *TEST* stage"
!helm template demo-helm-201-test $wd_init/demo-helm-201 -f $wd_init/demo-helm-201/values.test.yaml --output-dir=work/out/test
```
----
* Charts and Subcharts
* Subchart is a standalone chart
* only parent knows the subcharts / childrens
* ...and only parent can override fields
* Action
* checkout existing chart
* create a new chart in `parent/chart` directory
* configure dependency
```
!echo "Print out existing chart (without subchart)"
!tree $wd_init
!echo "Create new (sub)chart in the *charts* dir"
!helm create $wd_init/demo-helm-201/charts/demo-subchart
!rm -rf $wd_init/demo-helm-201/charts/demo-subchart/templates/*
!tree $wd_init
```
* Action (03)
* template for `ConfigMap` in subchart
* value file in subchart
* override options
```
!echo "Dry-Run subchart - with local value file"
!helm install --generate-name --dry-run --debug $wd_init/demo-helm-201/charts/demo-subchart
!echo "Render entire chart - with *pre* value file, to override subchart fields"
!helm template demo-helm-201-pre $wd_init/demo-helm-201 -f $wd_init/demo-helm-201/values.pre.yaml --output-dir=work/out/pre
!echo "Render entire chart - with *test* value file, to *NOT* override subchart fields"
!helm template demo-helm-201-test $wd_init/demo-helm-201 -f $wd_init/demo-helm-201/values.test.yaml --output-dir=work/out/test
```
---
---
## Life Cycle and Hooks
Hooks allow you to execute specific resource definitions at defined points in the Helm life cycle.
Available hooks
* `pre-install`
* `post-install`
* `pre-delete`
* `post-delete`
* `pre-upgrade`
* `post-upgrade`
* `pre-rollback`
* `post-rollback`
* `test`
Examples
* prepare the installation and create specific resources beforehand (ConfigMap, Job etc)
* clean up database before uninstalling application
* return a license or deregister/unsubscribe
### Pre-Install and Post-Delete
Example for `pre-install` and `post-delete` hook. Hooks are usual Kubernetes resource definitions with special annotations.
```
metadata:
annotations:
"helm.sh/hook": pre-install, post-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
```
```
!echo "Create new (sub)chart in the *charts* dir"
!helm create $wd_init/demo-helm-201/charts/demo-hook
!rm -rf $wd_init/demo-helm-201/charts/demo-hook/templates/ingress.yaml
!rm -rf $wd_init/demo-helm-201/charts/demo-hook/templates/service.yaml
!rm -rf $wd_init/demo-helm-201/charts/demo-hook/templates/tests
!tree $wd_init
```
---
* Action (04)
* create 2 jobs for different hooks
* install and upgrade
```
!echo "Install subchart for hook testing"
!echo "some OCP permissions adjustments (for nginx)..."
!oc adm policy add-scc-to-user anyuid -z demo-hook-dev
!helm upgrade --install demo-hook-dev $wd_init/demo-helm-201/charts/demo-hook -n demo-helm
!echo "Update subchart for hook testing now with job definitions"
!helm upgrade --install demo-hook-dev $wd_init/demo-helm-201/charts/demo-hook -n demo-helm
```
---
* Action
* current `pods` and `jobs`
* events
* clean-up and check out the job with `post-delete`
```
!echo "Delete subchart which triggers hook"
!helm delete demo-hook-dev -n demo-helm
```
---
---
## OCI Registry
Helm 3 provides the option to store Helm charts not only on an HTTP server, but also in an OCI registry. This makes it possible to hold the artifacts (container images and Helm charts) in the same registry.
### Testing
* Action
* use a local registry
* push a local helm chart
* install from OCI registry
```
!echo "Run a local registry"
!docker run -dp 5000:5000 --restart=always --name registry registry
!echo "Create helm package from the hook subchart"
!echo "...copy subchart in own dir for testing..."
!cp -r $wd_init/demo-helm-201/charts/demo-hook $wd_init/
!echo "...helm package..."
!helm package $wd_init/demo-hook --version 1.0.3 --destination $wd_init
!helm package $wd_init/demo-hook --version 1.0.4 --destination $wd_init
!echo "Push helm package to registry"
!echo "...helm registry login..."
!helm registry login -u testuser -p testpassword localhost:5000
!echo "...helm package push..."
!helm push $wd_init/demo-hook-1.0.3.tgz oci://localhost:5000/helm-charts
!helm push $wd_init/demo-hook-1.0.4.tgz oci://localhost:5000/helm-charts
!echo "Verify helm package in OCI registry"
!echo "...show chart details..."
!helm show chart oci://localhost:5000/helm-charts/demo-hook
!echo "Install (render) helm package directly from OCI registry"
!echo "...render existing chart from registry..."
!helm template release-demo-hook-oci oci://localhost:5000/helm-charts/demo-hook --version 1.0.3 --output-dir=work/out/oci
!echo "...try to render non-existing chart from registry..."
!helm template release-demo-hook-oci oci://localhost:5000/helm-charts/demo-hook --version 2.2.2 --output-dir=work/out/oci
```
---
---
## Misc
Some additional details
### Internal/Release State
With Helm 3, Tiller no longer exists. All configuration and release state, including value files, is now stored in a `Secret` instead of a `ConfigMap`.
```
!echo "Install subchart"
!helm upgrade --install release-demo-hook-state $wd_init/demo-helm-201/charts/demo-hook -n demo-helm
!echo "List all existing helm releases"
!helm list
!echo "Print out release information from Helm Release Secret"
!oc get secret sh.helm.release.v1.release-demo-hook-state.v1 -o jsonpath='{.data.release}' | base64 --decode | base64 --decode | gunzip -c | jq
```
### Helm Upgrade and force re-deployment
Helm determines the relevant changes and updates only the necessary resources. If the content of a `ConfigMap` changes but the image version does not, a redeployment or restart of the deployment will not be triggered.
To trigger a redeployment, a relation between the `ConfigMap` content and the `Deployment` is needed:
```
kind: Deployment
spec:
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```
The idea is that the `Deployment` contains an annotation with a checksum of the related `ConfigMap`. Any change in the content results in a different `sha256sum`, hence a different annotation, which results in a modified `Deployment` definition.
### IBM Cloud Container Registry
The IBM Cloud Container Registry is OCI compliant and could also be used for Helm packages
```
!echo "Push chart to IBM CR"
!ibmcloud login --sso
!ibmcloud cr login
!echo "...use API key to login..."
!helm registry login -u iamapikey
!helm push $wd_init/demo-hook-1.0.1.tgz oci://de.icr.io/demo-helm
```
### VS Code Extensions
* `yaml`
* `kubernetes` understands also the helm template syntax
## Summary
This was a walkthrough for Helm 201 with covering topics like
* templating
* charts and subcharts
* OCI Registry
## References
* [Helm - Storage Provider - Secret instead ConfigMap](https://helm.sh/docs/topics/advanced/#storage-backends)
* [Helm - (OCI) Registry](https://helm.sh/docs/topics/registries/)
* [Helm - Best Practices](https://helm.sh/docs/chart_best_practices/conventions/)
* [Helm - Template Guide](https://helm.sh/docs/chart_template_guide/getting_started/)
Lorenz equations as a model of atmospheric convection:
This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters (σ, β, ρ) are varied.
$$\dot{x} = \sigma(y - x)$$
$$\dot{y} = \rho x - y - xz$$
$$\dot{z} = -\beta z + xy$$
The Lorenz equations also arise in simplified models for lasers, dynamos, thermosyphons, brushless DC motors, electric circuits, chemical reactions, and forward osmosis.
The Lorenz system is nonlinear, non-periodic, three-dimensional and deterministic.
The Lorenz equations are derived from the Oberbeck-Boussinesq approximation to the equations describing fluid circulation in a shallow layer of fluid, heated uniformly from below and cooled uniformly from above. This fluid circulation is known as Rayleigh-Bénard convection. The fluid is assumed to circulate in two dimensions (vertical and horizontal) with periodic rectangular boundary conditions.
The partial differential equations modeling the system's stream function and temperature are subjected to a spectral Galerkin approximation: the hydrodynamic fields are expanded in Fourier series, which are then severely truncated to a single term for the stream function and two terms for the temperature. This reduces the model equations to a set of three coupled, nonlinear ordinary differential equations.
```
%matplotlib inline
from ipywidgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
#Computing the trajectories and plotting the result.
def solve_lorenz(
N =10,
angle = 0.0,
max_time = 4.0,
sigma = 10.0,
beta = 8./3,
rho = 28.0):
'''
We define a function that can integrate the differential
equations numerically and then plot the solutions.
This function has arguments that control the parameters of the
differential equation (σ, β, ρ),
the numerical integration (N, max_time),
and the visualization (angle).
'''
fig = plt.figure();
ax = fig.add_axes([0, 0, 1, 1], projection = '3d');
ax.axis('on')
#Prepare the axes limits.
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
def lorenz_deriv(
x_y_z,
t0,
sigma = sigma,
beta = beta,
rho = rho):
'''
        Computes the time-derivative of the Lorenz system.
'''
x, y, z = x_y_z
return[
sigma * (y - x),
x * (rho - z) - y,
x * y - beta * z]
#Choose random starting points, uniformly distributed from -15 to 15.
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
#Solve for the trajectories.
t = np.linspace(0, max_time, int(250*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t) for x0i in x0])
#Choose a different color for each trajectory.
colors = plt.cm.jet(np.linspace(0, 1, N));
for i in range(N):
x, y, z = x_t[i,:,:].T
lines = ax.plot(x, y, z, '-', c = colors[i])
_ = plt.setp(lines, linewidth = 2);
ax.view_init(30, angle)
_ = plt.show();
return t, x_t
t, x_t = solve_lorenz(angle = 0, N = 10)
w = interactive(
solve_lorenz,
angle = (0., 360.),
N = (0, 50),
sigma = (0.0, 50.0),
rho = (0.0, 50.0),
)
display(w)
```
## Neural Networks
- This was adopted from the PyTorch Tutorials.
- http://pytorch.org/tutorials/beginner/pytorch_with_examples.html
## Neural Networks
- Neural networks are the foundation of deep learning, which has revolutionized the field of machine learning.
```In the mathematical theory of artificial neural networks, the universal approximation theorem states[1] that a feed-forward network with a single hidden layer containing a finite number of neurons (i.e., a multilayer perceptron), can approximate continuous functions on compact subsets of Rn, under mild assumptions on the activation function.```
### Generate Fake Data
- `D_in` is the number of dimensions of an input variable.
- `D_out` is the number of dimensions of an output variable.
- Here we are learning some special "fake" data that represents the xor problem.
- Here, the dependent variable is 1 if either the first or the second input variable is 1.
```
# -*- coding: utf-8 -*-
import numpy as np
#This is our independent and dependent variables.
x = np.array([ [0,0,0],[1,0,0],[0,1,0],[0,0,0] ])
y = np.array([[0,1,1,0]]).T
print("Input data:\n",x,"\n Output data:\n",y)
```
### A Simple Neural Network
- Here we are going to build a neural network with a single hidden layer (two weight matrices).
```
np.random.seed(seed=83832)
#D_in is the number of input variables.
#H is the hidden dimension.
#D_out is the number of dimensions for the output.
D_in, H, D_out = 3, 2, 1
# Randomly initialize the weights of our network with one hidden layer.
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)
bias = np.random.randn(H, 1)
```
### Learn the Appropriate Weights via Backpropagation
- The learning rate adjusts how quickly the model updates its parameters.
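For reference, the gradients used in the loop below follow from differentiating the squared-error loss $L = \lVert \hat{y} - y \rVert^2$, with $h = x W_1$, $h_{\mathrm{relu}} = \max(h, 0)$ and $\hat{y} = h_{\mathrm{relu}} W_2$:

$$\frac{\partial L}{\partial W_2} = h_{\mathrm{relu}}^{\top}\, 2(\hat{y} - y), \qquad \frac{\partial L}{\partial W_1} = x^{\top} \Big[ \big(2(\hat{y} - y)\, W_2^{\top}\big) \odot \mathbb{1}[h > 0] \Big]$$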
```
# -*- coding: utf-8 -*-
learning_rate = .01
for t in range(500):
# Forward pass: compute predicted y
h = x.dot(w1)
    # ReLU activation: zero out the negative pre-activations.
h_relu = np.maximum(h, 0)
y_pred = h_relu.dot(w2)
# Compute and print loss
loss = np.square(y_pred - y).sum()
print(t, loss)
# Backprop to compute gradients of w1 and w2 with respect to loss
grad_y_pred = 2.0 * (y_pred - y)
grad_w2 = h_relu.T.dot(grad_y_pred)
grad_h_relu = grad_y_pred.dot(w2.T)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0
grad_w1 = x.T.dot(grad_h)
# Update weights
w1 -= learning_rate * grad_w1
w2 -= learning_rate * grad_w2
```
### Fully Connected Forward Pass
```
pred = np.maximum(x.dot(w1),0).dot(w2)
print (pred, "\n", y)
```
### Hidden Layers are Often Viewed as Unknown
- Just a weighting matrix
```
# However, we can inspect the learned weights directly:
w1
w2
# Relu just removes the negative numbers.
h_relu
```
# Image Operators And Transforms
Nina Miolane, UC Santa Barbara
<center><img src="figs/02_main.png" width=1200px alt="default"/></center>
# Last Lecture
- **01: Image Formation Models (Ch. 2)**
- 02: Image Operators and Transforms (Ch. 3)
- 03: Feature Detection, Matching, Segmentation (Ch. 7)
- 04: Image Alignment and Stitching (Ch. 8)
- 05: 3D Reconstruction (Ch. 13)
We have seen how images are formed based on:
- 3D scene elements,
- camera intrinsic parameters,
- camera extrinsic parameters.
# This Lecture
- 01: Image Formation Models (Ch. 2)
- **02: Image Operators and Transforms (Ch. 3)**
- 03: Feature Detection, Matching, Segmentation (Ch. 7)
- 04: Image Alignment and Stitching (Ch. 8)
- 05: 3D Reconstruction (Ch. 13)
We will look at our first image processing operators and transforms, which:
- convert an image into another image,
- make the image more suited to answer a question in the downstream analysis.
# Continuous and Discrete Images
$\color{#EF5645}{\text{Continuous Images}}$: A continuous image is represented as a function over a 2D continuous domain $\Omega^2$, i.e. $f: \Omega^2 \rightarrow \mathbb{R}^c$, where $c$ is the number of ``channels" of the images, typically the three colors R, G, B. The elements $x$ of the continuous domain are represented by their 2D coordinates.
$\color{#EF5645}{\text{Discrete Images}}$: A discrete (sampled) image is represented as a function over a 2D discrete domain $[1, ..., n] \times [1, ..., m] $, i.e. $f: [1, ..., n] \times [1, ..., m] \rightarrow \mathbb{R}^c$, where $c$ is the number of channels. The elements $x$ of the discrete domain are denoted by the indices of the pixel that they represent $x=(i, j)$.
```
from skimage import data, io
image = data.astronaut()
print(type(image))
print(image.shape)
# io.imshow(image);
io.imshow(image[10:300, 50:200, 2]);
```
# Operators
$\color{#EF5645}{\text{Operator}}$: An operator is a function $H$ that takes one image $f$ as input and produces an output image $g$.
$\color{#EF5645}{\text{Linear Operator}}$: The operator $H$ is said to be linear if we have $H(af+bg) = aH(f) +bH(g)$ for all images $f$ and $g$, and for all constants $a$ and $b$.
$\color{#EF5645}{\text{Shift-Invariant Operator}}$: The operator $H$ is said to be shift-invariant if we have:
- Discrete images: $g(i, j) = f(i + k, j + l) \Leftrightarrow H(g)(i, j) = H(f)(i + k, j + l)$,
- Continuous images: $g(x, y) = f(x + a, y + b) \Leftrightarrow H(g)(x, y) = H(f)(x + a, y + b)$.
$\color{#EF5645}{\text{Linear Shift-Invariant Operator}}$ are denoted LSI operators.
# Image Operators
- **[Point Operators](#sec-syllabus)**
- [Neighborhood Operators and Linear Filtering](#sec-ece)
- [Fourier Transforms](#sec-ece)
- [Pyramids and Wavelets](#sec-ece)
# Point Operators
$\color{#EF5645}{\text{Point Operator}}$: A general point operator is an operator $H$ that takes one image $f$ as input and produces an output image $g=H(f)$, with the same domain, such that:
- For continuous images and $x \in \Omega^2$ a 2D continuous domain:
$$g(x) = h(f(x))$$
- For discrete images and $x=(i, j) \in [1, n]^2$ a 2D discrete domain:
$$g(i, j) = h(f(i ,j))$$
In other words, each output pixel’s value depends on only the corresponding input pixel value. The operator $H$ is entirely defined by the function $h$ that only acts on output values from $f$.
# Examples
$\color{#047C91}{\text{Example}}$: The multiplicative gain operator is an example of a point operator:
$$g(x) = af(x) + b,$$
where $a >0$ and $b$ are called the gain and bias parameters.
$\color{#047C91}{\text{Example}}$: Spatially varying multiplicative gain operator is an example of a point operator:
$$g(x) = a(x)f(x) + b(x),$$
where the bias and gain parameters can also be spatially varying.
```
from skimage import data, io
image = data.astronaut()
image = image / 255
gain = 1.8 # a
bias = 0. # b
mult_image = gain * image + bias
io.imshow(mult_image);
```
$\color{#047C91}{\text{Example}}$: The linear blend operator is an example of point operator:
$$g(x) = (1- \alpha)f_0(x) + \alpha f_1(x)$$
Varying $\alpha$ from $0$ to $1$, this operator can be used to perform a temporal cross-dissolve between two images or videos, as seen in slide shows and film production.
```
from skimage import data, io
from skimage.transform import resize
image0 = data.coins() / 255
image1 = data.brick() / 255
image0 = resize(image0, (128, 128))
image1 = resize(image1, (128, 128))
alpha = 1.
blend_image = (1 - alpha) * image0 + alpha * image1
io.imshow(blend_image);
```
# Linear Operators
$\color{#047C91}{\text{Exercise}}$: Among the operators defined above, which are linear operators? Which are shift-invariant operators?
# Histogram Equalization
Can we automatically determine their "best" values to enhance the appearance of an image?
$\rightarrow$ Histogram equalization.
<center><img src="figs/02_before.jpeg" width=400px alt="default"/></center>
<center><img src="figs/02_after.jpeg" width=400px alt="default"/></center>
# Image Histogram
$\color{#EF5645}{\text{Image histogram}}$: An image histogram is a graphical representation of the number of pixels in an image as a function of their intensity. Histograms are made up of bins, each bin representing a certain intensity value range. Intensities vary between 0 and 255.
$\color{#EF5645}{\text{Remark}}$: For colored images, we have one histogram per color channel.
```
import numpy as np
import skimage
import matplotlib.pyplot as plt
image = skimage.data.chelsea()
# print(image.shape)
#skimage.io.imshow(image)
fig, ax = plt.subplots(2, 3)
bins = np.arange(-0.5, 255 + 1, 1)
for ci, c in enumerate('rgb'):
ax[0, ci].imshow(image[:,:, ci], cmap='gray')
ax[1, ci].hist(image[:,:, ci].flatten(), bins = bins, color=c)
```
# Histogram Equalization
$\color{#EF5645}{\text{Histogram equalization}}$: is a method of contrast adjustment using the image's histogram. It amounts to find an operator such that the output histogram is flat.
<center><img src="figs/02_hist.png" width=600px alt="default"/></center>
# Normalized Histograms and Cumulative Distribution Function
How can we find such an operator? Statistics provide methods to map a probability distribution to another.
$\color{#EF5645}{\text{Normalized Histogram}}$: is the histogram of the image, when the intensity values have been scaled, or "normalized" in $[0, 1]$. In this context, we can understand the histogram of the image as a probability distribution, showing the probability $p(a)$ of finding intensity $a$ in the image.
$\color{#EF5645}{\text{Cumulative Distribution Function}}$: is the function $CDF$ defined as: $CDF(a) = \sum_{b=0}^a p(b)$ for any $p$ a discrete probability distribution.
# Method for Histogram Equalization
$\color{#6D7D33}{\text{Proposition (Statistics)}}$: Consider a random variable $X$ with CDF $F$.
- If $Y$ is another random variable that is uniformly distributed (flat histogram),
- then $F^{-1}(Y)$ is distributed as $F$.
Histogram equalization aims to perform the reverse operation:
- Given an image distributed as $F$,
- convert it into an image with a flat histogram.
Thus, histogram equalization amounts to performing the operation: $a \rightarrow CDF(a)$, for every intensity $a$ found in the image.
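A minimal sketch of this mapping in Python (assuming a grayscale image with 8-bit intensities); `skimage.exposure.equalize_hist` implements the same operation:
```
import numpy as np
from skimage import data, exposure, io

image = data.camera()  # grayscale image, uint8 intensities in [0, 255]

# Histogram and normalized cumulative distribution function of the intensities.
hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
cdf = hist.cumsum() / hist.sum()

# Point operator a -> CDF(a), rescaled back to [0, 255].
equalized = (cdf[image] * 255).astype(np.uint8)

# Equivalent scikit-image call (returns floats in [0, 1]).
equalized_skimage = exposure.equalize_hist(image)
io.imshow(equalized);
```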
# From Gray-Scale Images to Color Images
- The above describes histogram equalization on a grayscale image.
- On color images: apply the same method separately to the R, G, B components.
$\color{#EF5645}{\text{Remark}}$: This may give dramatic changes in the image's color balance since the relative distributions of the color channels change as a result of applying the algorithm.
# Extension: Locally Adaptive Histogram Equalization
$\color{#EF5645}{\text{Locally adaptive histogram equalization}}$: For some images it might be preferable to apply different kinds of equalization in different regions of the image, also called "neighborhood". This leads to "locally adaptive histogram equalization", see textbook p. 117.
Comparison:
- Histogram equalization is a point operator.
- Locally adaptive histogram equalization is a neighborhood operator.
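For reference, a sketch of the locally adaptive variant using scikit-image's CLAHE implementation; the clip limit below is an illustrative choice:
```
from skimage import data, exposure, io

image = data.moon()
# Contrast Limited Adaptive Histogram Equalization: equalizes over local tiles.
adapted = exposure.equalize_adapthist(image, clip_limit=0.03)
io.imshow(adapted);
```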
# This Lecture
- 01: Image Formation Models (Ch. 2)
- **02: Image Operators and Transforms (Ch. 3)**
- 03: Feature Detection, Matching, Segmentation (Ch. 7)
- 04: Image Alignment and Stitching (Ch. 8)
- 05: 3D Reconstruction (Ch. 13)
# Image Operators
- [Point Operators](#sec-syllabus)
- **[Neighborhood Operators and Linear Filtering](#sec-ece)**
- [Fourier Transforms](#sec-ece)
- [Pyramids and Wavelets](#sec-ece)
# Neighborhood Operators
$\color{#EF5645}{\text{Neighborhood operators}}$ use a collection of pixels in the vicinity of a given location in input image $f$ to determine the intensity at this location in the output image $g$. Neighborhood operators are used to filter images to:
- add blur or sharpen details,
- accentuate edges,
- remove noise.
<center><img src="figs/02_point_neighborhood.jpg" width=300px alt="default"/></center>
# Linear Filters
We first consider linear neighbordhood operators, also called linear filters.
$\color{#EF5645}{\text{A linear filter}}$: generates an output pixel's value that is calculated using a weighted sum of input pixel values within small neighborhood.
<center><img src="figs/02_point_neighborhood.jpg" width=300px alt="default"/></center>
# Convolution
$\color{#EF5645}{\text{The convolution operator}}$ with impulse function $h$, denoted $g = f * h$ is defined as:
- Discrete images: $g(i, j) = (f * h) (i, j) = \sum_{k}\sum_{l} h(k, l)\,f(i-k, j-l),$ where the sum runs over the support of the kernel $h$.
- Continuous images: $g(x) = (f * h) (x) = \int_u h(u) f(x-u)du.$
$\color{#EF5645}{\text{Remarks}}$:
- $h$ is also called the filter or kernel.
- Changing $h$ produces different filtering, i.e. different operators.
<center><img src="figs/02_conv.jpg" width=1000px alt="default"/></center>
```
107*0.1+91*0.1+63*0.1 + 115*0.1+96*0.2+65*0.1 + 123*0.1+98*0.1+65*0.1
```
# Cross-correlation
$\color{#EF5645}{\text{The cross-correlation operator}}$ with kernel $h$, denoted $g = f \otimes h$, is defined as:
- For discrete images: $g(i, j) = (f \otimes h) (i, j) = \sum_{k}\sum_{l} h(k, l)\, f(i + k, j + l).$
- For continuous images: $g(x) = (f \otimes h) (x) = \int_u h(u) f(x+u)du.$
$\color{#EF5645}{\text{Remark}}$:
- Changing the filter $h$ produces different filtering, i.e. different operators.
- Note the + sign that differentiates this operation from convolution.
<center><img src="figs/02_conv.jpg" width=800px alt="default"/></center>
```
65*0.1+98*0.1+123*0.1 + 65*0.1+96*0.2+115*0.1 +63*0.1+91*0.1+107*0.1
```
<center><img src="figs/02_cross_conv.png" width=800px alt="default"/></center>
$\color{#047C91}{\text{Example}}$:
Show that both correlation and convolution with any kernel $h$ are linear shift-invariant (LSI) operators.
# Examples: Smoothing Kernels
Filtering can be used to blur an image.
$\color{#EF5645}{\text{The Moving Average}}$ averages the pixel values in a $K × K$ window, i.e. convolves/cross-correlates the image with a:
- normalized constant kernel.
<center><img src="figs/02_moving_average.jpg" width=400px alt="default"/></center>
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.ndimage as ndi
%precision 2
bright_square = np.zeros((7, 7)); bright_square[2:5, 2:5] = 1
mean_kernel = np.full((3, 3), 1/9)
print(bright_square); print(mean_kernel)
fig, axes = plt.subplots(nrows=1, ncols=2)
axes[0].imshow(bright_square); axes[1].imshow(ndi.correlate(bright_square, mean_kernel));
```
# Examples: Smoothing Kernels
$\color{#EF5645}{\text{Barlett Filter or Bilinear filter}}$ convolves/cross-correlates the image with a:
- piecewise linear "tent" function (known as a Bartlett filter).
$\color{#EF5645}{\text{Remark}}$: It is called "bilinear" because it is the outer product of two linear splines (special functions defined piecewise by polynomials).
<center><img src="figs/02_bilinear.jpg" width=350px alt="default"/></center>
# Examples: Smoothing Kernels
$\color{#EF5645}{\text{Gaussian Filter}}$ convolves/cross-correlates the image with a the
- Gaussian kernel,
which is obtained by convolving the linear tent function with itself.
<center><img src="figs/02_gaussian.jpg" width=350px alt="default"/></center>
# Smoothing = Low-Pass Filters
$\color{#EF5645}{\text{Low-Pass Filters}}$: The previous kernels are also called:
- low-pass filters,
since they accept lower frequencies while attenuating higher frequencies.
# Example: Edge Extraction
Filtering can perform edge extraction and interest point detection.
$\color{#EF5645}{\text{Sobel Operator}}$ is an edge extractor which is a separable combination of a horizontal central difference (so called because the horizontal derivative is centered on the pixel) and a vertical tent filter (to smooth the results).
<center><img src="figs/02_sobel.jpg" width=300px alt="default"/></center>
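A sketch of Sobel filtering with the explicit kernel (scikit-image's `filters.sobel` gives an equivalent gradient magnitude):
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage as ndi
from skimage import data

image = data.camera() / 255.

# Sobel kernel: horizontal central difference smoothed by a vertical tent filter.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]]) / 8.
grad_x = ndi.correlate(image, sobel_x)    # horizontal derivative
grad_y = ndi.correlate(image, sobel_x.T)  # vertical derivative
magnitude = np.hypot(grad_x, grad_y)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, im, title in zip(axes, [grad_x, grad_y, magnitude],
                         ['grad x', 'grad y', 'magnitude']):
    ax.imshow(im, cmap='gray'); ax.set_title(title); ax.axis('off')
```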
# Example: Corner Detector
$\color{#EF5645}{\text{The Corner Detector}}$ looks for simultaneous horizontal and vertical second derivatives.
<center><img src="figs/02_corner.jpg" width=300px alt="default"/></center>
<center><img src="figs/02_corner_init.jpg" width=410px alt="default"/></center>
<center><img src="figs/02_corner_image.jpg" width=400px alt="default"/></center>
# Band-Pass Filters
$\color{#EF5645}{\text{Band-Pass Filters}}$: The Sobel and corner operators are examples of:
- band-pass filters and high-pass filters,
since they filter out both low and high frequencies, or only low frequencies.
# Image Operators
- [Point Operators](#sec-syllabus)
- [Neighborhood Operators and Linear Filtering](#sec-ece)
- **[Fourier Transforms](#sec-ece)**
- [Pyramids and Wavelets](#sec-ece)
# Frequencies in Images
We saw linear neighborhood operators, also called linear filters, that were:
- low-pass filters,
- band-pass and high-pass filters,
and gave an intuitive explanation of it.
Now, we show how Fourier analysis can give use insights into analyzing:
- the frequencies within images,
- and with this, the frequency characteristics of various filters.
$\color{#EF5645}{\text{Fourier series}}$: Consider a real-valued function $s$ that is integrable on a interval of length $P$. The following expansion of $s$ is called its Fourier series:
$$s_N(x) = \frac{a_0}{2} + \sum_{n=1}^N \left(a_n \cos(2\pi\frac{n}{P}x) + b_n \sin(2\pi\frac{n}{P}x) \right) = \sum_{n=-N}^N c_n \exp(i2\pi\frac{n}{P}x),$$
- $a_n, b_n$ are the Fourier coefficients from the sine-cosine form
- $c_n$ are the Fourier coefficients from the exponential form, also denoted $S[n]$.
- $\omega = 2\pi\frac{n}{P}$ are the (angular) frequencies, unique in $[0, 2\pi]$,
- $f = \frac{n}{P}$ are the frequencies.
<center><img src="figs/02_fourier_series_1.png" width=300px alt="default"/></center>
<center><img src="figs/02_fourier_series_2.png" width=300px alt="default"/></center>
Can we get the coefficients from the original function, and back? $s \leftrightarrow \{c_n = S[n]\}_n$.
<center><img src="figs/02_fourier_series_2.png" width=300px alt="default"/></center>
# 1D (Continuous) Fourier Transform
$\color{#EF5645}{\text{Fourier analysis}}$: is a method for expressing a function as a sum of periodic components, and for recovering the signal from those components.
$\color{#EF5645}{\text{The Fourier transform }} \mathcal{F}$ of a periodic function, $s_P$ with period P is:
$$\mathcal{F}(s)[n] = S[n]={\frac {1}{P}}\int_{P}s_{P}(t)\cdot e^{-i2 \pi {\frac{n}{P}}t}dt,\quad n\in \mathbb{Z} \quad \text{or}\quad \mathcal{F}(s)[\omega] = S[\omega]={\frac {1}{P}}\int_{P}s_{P}(t)\cdot e^{-i\omega t}dt, \quad \omega = 2\pi\frac{n}{P}, n \in \mathbb{Z}$$
<center><img src="figs/02_fourier_series_2.png" width=300px alt="default"/></center>
We say that $s$ is defined in real space and $S$ is defined in Fourier space.
# Amplitude and Phase
$\color{#EF5645}{\text{Amplitude and Phase}}$: Fourier transforms are complex-valued in general. As is common with complex numbers, it is often convenient to express them in terms of magnitude $A$ and phase $\phi$: $S[n] = Ae^{iφ}$. Roughly:
- $A$ tells "how much" of a certain frequency component is present,
- $\phi$ tells "where" the frequency component is.
# 1D Inverse Fourier Transform
$\color{#EF5645}{\text{The inverse Fourier transform }} \mathcal{F}^{-1}$ gives the Fourier series that represents $s_P$ as a sum of a potentially infinite number of complex exponential functions, each with an amplitude and phase specified by one of the coefficients:
$$\mathcal{F}^{-1}(S)[t] = s_{P}(t) = \sum _{n=-\infty }^{\infty }S[n]\cdot e^{i2\pi {\frac {n}{P}}t} =\sum_{\omega} S[\omega]\cdot e^{i\omega t} .$$
<center><img src="figs/02_fourier_series_2.png" width=300px alt="default"/></center>
# 1D Discrete Fourier Transform (DFT)
When both the function and its Fourier transform are replaced with discretized counterparts, it is called the discrete Fourier transform (DFT).
- The discrete 1D image is an array of size $P$: $[s(0), ..., s(P-1)]$.
$\color{#EF5645}{\text{The discrete Fourier transform (DFT) }} \mathcal{F}$ is:
$$S[n]= \frac{1}{P} \sum_{p=0}^{P-1} s(p)e^{−i 2\pi \frac{n}{P}p} \quad \text{or} \quad S[\omega]= \frac{1}{P} \sum_{p=0}^{P-1} s(p)e^{−i \omega p}.$$
Note that we use $p$ to denote the 1D discrete spatial coordinate here, as opposed to $i, j$, in order to avoid confusion with the imaginary $i$ (sometimes also denoted $j$).
# 2D Fourier Transforms
We have 2 coordinates of space $(x, y)$, and 2 coordinates of frequencies $(ω_x, ω_y)$.
$\color{#EF5645}{\text{2D Fourier Transforms}}$
- Continuous functions:
$$S[ω_x, ω_y]= \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} s(x, y)e^{-i(ω_xx+ω_yy)}dxdy$$
- Discrete functions:
$$S[\omega_x, \omega_y]= \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} s(x, y)e^{-i(ω_xx+ω_yy)},$$
where M and N are the width and height of the image.
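A minimal sketch computing and displaying the log-magnitude of the 2D DFT of an image with NumPy's FFT:
```
import numpy as np
import matplotlib.pyplot as plt
from skimage import data

image = data.camera() / 255.

F = np.fft.fft2(image)           # 2D discrete Fourier transform
F = np.fft.fftshift(F)           # move the zero frequency to the center
magnitude = np.log1p(np.abs(F))  # log scale makes the spectrum visible

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(image, cmap='gray'); axes[0].set_title('image')
axes[1].imshow(magnitude, cmap='gray'); axes[1].set_title('log |F|')
```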
# Remark on FT Images
$\color{#EF5645}{\text{Remark:}}$ FT images often only consider the magnitude $A$.
<center><img src="figs/02_ft_image.png" width=300px alt="default"/></center>
The images displayed are horizontal cosines of 8 cycles, where one is shifted laterally from the other by 1/2 cycle, i.e. $\pi$ in phase:
- same FT magnitude images,
- FT phase images would have been different, but we often do not display them.
When we look at a FT image, we only have a partial information!
# Properties of Fourier Transforms
$\color{#6D7D33}{\text{Properties}}$: The Fourier transform has the following properties:
- Linearity: $\mathscr{F}( af_1 + bf_2) = aF_1(u,v) + bF_2(u,v)$
- Scaling: $\mathscr{F}( f(\alpha x, \beta y))= \frac{1}{|\alpha \beta|} F(u/\alpha, v/\beta)$
- Shift:
- $\mathscr{F}(f(x-\alpha, y-\beta)) = F(u,v) e^{-i2\pi (u\alpha + v\beta)}$,
  - $\mathscr{F}(f(x,y)e^{i 2\pi (u_0 x + v_0 y)}) = F(u-u_0 , v-v_0)$
- Double transform: $\mathscr{F}(\mathscr{F}(f(x,y)))= f(-x, -y)$
where we use $F_1, F_2, F$ to denote the Fourier transforms of $f_1, f_2, f$.
# Convolution Theorem
$\color{#6D7D33}{\text{Theorem}}$: The following result constitutes the Convolution Theorem:
$$\mathscr{F}(f_1 * f_2) = F_1. F_2$$
$$ \mathscr{F}(f_1 .f_2) = F_1 * F_2, $$
where we use $F_1, F_2$ to denote the Fourier transforms of $f_1, f_2$.
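A quick numerical sanity check of the first identity (a sketch using circular convolution, for which the discrete identity holds exactly):
```
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)
f = rng.random((16, 16))
h = rng.random((3, 3))

# Spatial domain: circular (wrap-around) convolution.
g_spatial = ndi.convolve(f, h, mode='wrap')

# Fourier domain: zero-pad the kernel and roll its center to index (0, 0).
h_pad = np.zeros_like(f)
h_pad[:3, :3] = h
h_pad = np.roll(h_pad, shift=(-1, -1), axis=(0, 1))

g_fourier = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h_pad)))
print(np.allclose(g_spatial, g_fourier))  # True: F(f1 * f2) = F1 . F2
```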
# 1D Fourier Transform to Analyze Filters
How can we analyze what a given filter does to high, medium, and low frequencies?
- pass a periodic signal through the filter: $s(p)=e^{i \omega p}$
- observe by how much it is attenuated.
Convolving $s(p)$ with a filter of kernel $h$ gives: $o(p)= (h * s)(p) = A_h e^{i \phi_h}e^{i \omega p }$, i.e. a signal with the same frequency but different:
- Magnitude $A$: gain or magnitude of the filter, which is the magnitude of $\mathcal{F}(h)[\omega]$
- Phase $\phi$: shift or phase of the filter, which is the phase of $\mathcal{F}(h)[\omega]$
$\color{#047C91}{\text{Exercise}}$: Show the above assertion (about "eigenfunctions" of convolutions).
# Fourier Transform of a Filter
In this example:
- $S$ has a single nonzero amplitude $A=1$ with $\phi = 0$ at frequency $\omega$
- Sending that signal tells us the value of $\mathcal{F}(h)[\omega]$:
- = amplitude $A=A_h$ and phase $\phi = \phi_h$ at frequency $\omega$ for $\mathcal{F}[h]$.
<center><img src="figs/02_sinus.png" width=700px alt="default"/></center>
In general:
- The whole $\mathcal{F}(h)$ tells us which frequencies are amplified or attenuated by the filter.
# Computational Considerations
$\color{#EF5645}{\text{Remark}}$: By denoting DFT the Discrete Fourier Transform:
- At face value, the DFT takes $O(P^2)$ operations (multiply-adds) to evaluate.
- The algorithm Fast Fourier Transform (FFT) requires only $O(P \log_2 P )$ operations.
for a 1D discrete signal of length $P$.
# 1D Fourier Transforms of Filters
Low-pass filters considered previously, in their 1D version:
<center><img src="figs/02_ft_lowpass.png" width=700px alt="default"/></center>
# 1D Fourier Transforms of Filters
Edge and corner detection filters considered previously, in their 1D version:
<center><img src="figs/02_ft_bandpass.png" width=700px alt="default"/></center>
# 2D Fourier Transforms of Filters
<center><img src="figs/02_ft_low.png" width=600px alt="default"/></center>
# 2D Fourier Transforms of Filters
<center><img src="figs/02_ft_high.png" width=600px alt="default"/></center>
# 2D Fourier Transforms of Filters
<center><img src="figs/02_ft_deriv.png" width=600px alt="default"/></center>
# Image Operators
- [Point Operators](#sec-syllabus)
- [Neighborhood Operators and Linear Filtering](#sec-ece)
- [Fourier Transforms](#sec-ece)
- **[Pyramids and Wavelets](#sec-ece)**
# We Have Seen
- how to modify the image's intensity characteristics:
- point by point
- neighborhood by neighbordhood:
- led us to convolution and linear filters
- the Fourier transform, explaining what the filters were doing to images.
$\rightarrow$ image is mapped to an image (of the same size) through an operator.
<center><img src="figs/02_main.png" width=500px alt="default"/></center>
<center><img src="figs/02_edges.png" width=500px alt="default"/></center>
# Pyramids of Images
We transform one image into a pyramid of images:
- Upsample or downsample images
- upsample: might be needed to compare it to another image
- downsample: for compression
- Pyramid of images, because:
- additional multiscale
- useful in practice for multiscale detection
<center><img src="figs/02_mona_pyr.png" width=400px alt="default"/></center>
<center>Traditional image pyramid: each level has half the resolution (width and height), and hence a quarter of the pixels, of its parent level.</center>
# Interpolation or Upsampling
$\color{#EF5645}{\text{Interpolation}}$ of an image $f$, also called upsampling of $f$, is performed by:
- selecting an upsampling rate $r$ and an "interpolation kernel" $h$,
- convolving the image with it such as:
$$ g(i,j )= \sum_{k,l} f(k, l)h(i − rk,j − rl).$$
$\color{#EF5645}{\text{Remark}}$: This formula is related to the discrete convolution formula, except that we now have $r$ that multiplies $k$ and $l$.
<center><img src="figs/02_upsampling.png" width=400px alt="default"/></center>
# What Kernels Make Good Interpolators?
- Depends on the application and the computational time
- Linear smoothing kernels (e.g. the bilinear kernel) can be used
- Most graphics cards use the bilinear kernel
- Most photo editing packages using bicubic interpolation, with a often set to -1.
<center><img src="figs/02_bicubic.png" width=900px alt="default"/></center>
- High quality interpolator: "windowed sinc function".
# What Kernels Make Good Interpolators?
<center><img src="figs/02_interpolation_tire.png" width=900px alt="default"/></center>
# Decimation or Downsampling
$\color{#EF5645}{\text{Decimation}}$ of an image $f$, also called downsampling, is performed by:
- selecting an downsampling rate $r$ and a low-pass filter $h$,
- convolving the image $f$ with the filter,
- keep every r-th sample (or only evaluate the convolution at every r-th sample):
$$ g(i,j )= \sum_{k,l} f(k, l)h(i − \frac{k}{r},j −\frac{l}{r}).$$
$\color{#EF5645}{\text{Remark}}$: Convolution avoids aliasing (distortion/artifact on a reconstructed signal).
<center><img src="figs/02_downsampling.png" width=600px alt="default"/></center>
# Which Smoothing Kernels are Good Decimators?
- Depends on the application (downstream task, or display to the user) and computational time
- Bilinear filter: commonly used
- Higher quality filter: "windowed sinc"
<center><img src="figs/02_decimation_filters.png" width=750px alt="default"/></center>
# Pyramids
Uses for Pyramids:
- speed-up coarse-to-fine search algorithms,
- to look for objects or patterns at different scales,
- perform multi-resolution blending operations.
![image.png](attachment:image.png)
# Pyramids
Two main types in increasing level of complexity:
- Gaussian
- Laplacian
# Gaussian Pyramid
$\color{#003660}{\text{Algorithm:}}$ At each iteration $k$, an operator $G_k$ transforms $g_k$ into image $g_{k+1}$:
1. Convolve the image with a low-pass filter
- E.g. 4-th binomial filter $b_4 = [1, 4, 6, 4, 1] / 16$, normalized to sum to 1.
2. Subsample by a factor of 2 the result.
$\rightarrow$ generates a sequence of images, subsequent ones being smaller, lower resolution versions of the earlier ones.
<center><img src="figs/02_gaussian_block.png" width=300px alt="default"/></center>
# Gaussian Pyramid
<center><img src="figs/02_gaussian_pyr.png" width=900px alt="default"/></center>
# From Gaussian to Laplacian Pyramid
Gaussian pyramid:
- each level losses some of the fine image details available in the previous level.
Laplacian pyramid:
- represent, at each level, what is in a Gaussian pyramid image of one level, but not at the level below it.
# Laplacian Pyramid
$\color{#003660}{\text{Algorithm}}$: At each iteration $k$, starting from an image $g_k$:
- Compute $g_{k+1}$ using the block from the Gaussian pyramid,
- Expanding the lower-resolution image $g_{k+1}$ to the same pixel resolution as the neighboring higher-resolution image $g_k$.
- Subtract the two.
<center><img src="figs/02_laplace_block.png" width=220px alt="default"/></center>
# Laplacian Pyramid
<center><img src="figs/02_laplace_pyr0.png" width=500px alt="default"/></center>
<center><img src="figs/02_laplace_pyr1.png" width=1200px alt="default"/></center>
# Laplace Pyramid: Application
<center><img src="figs/02_lap_app0.png" width=1000px alt="default"/></center>
<center><img src="figs/02_lap_app1.png" width=300px alt="default"/></center>
<center><img src="figs/02_lap_app2.png" width=300px alt="default"/></center>
# Towards Wavelets
Gaussian and Laplacian pyramids:
- used extensively in computer vision applications
- a method to represent an image at different scales
Wavelet decompositition
- = an alternative way to represent an image at different scales
# Wavelets
$\color{#EF5645}{\text{Wavelets}}$ are filters that:
- localize a signal in both space and frequency,
- are defined over a hierarchy of scales.
# Drawbacks of Fourier Analysis
- Location information is stored in the phases and is difficult to extract.
- The Fourier transform is very sensitive to changes in the function:
- Change of $O(\epsilon)$ in one point of a discrete function...
- ...can cause as much as an $O(\epsilon)$ change in every Fourier coefficient (see the numerical check after this list).
- Similarly:
- a change in any one Fourier coefficient...
- ...can cause a change of similar magnitude at every point in physical space.
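A quick numerical check of this sensitivity, in plain NumPy:
```
import numpy as np

x = np.zeros(64)
x_perturbed = x.copy()
x_perturbed[10] += 1e-3                     # O(eps) change at a single sample
F = np.fft.fft(x)
F_perturbed = np.fft.fft(x_perturbed)
print(np.abs(F_perturbed - F).max())        # every coefficient moves by eps (1e-3)
```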
# Wavelets: Definition
$\color{#EF5645}{\text{A wavelet}}$ is a function $\psi$ that satisfies:
- $\int_{-\infty}^{+\infty} \psi(x) dx = 0$
- $\int_{-\infty}^{+\infty} \frac{|\hat \psi(\omega)|^2}{|\omega|} d\omega = C_\psi < \infty$, where $\hat \psi$ denotes the Fourier transform of $\psi$.
The second condition is necessary to ensure that a function can be reconstructed from a decomposition into wavelets.
# Wavelet Families
$\color{#EF5645}{\text{A wavelet family}}$ is a collection of functions obtained by shifting and dilating the graph of a wavelet. Specifically, a wavelet family with mother wavelet $\psi$ consists of functions $\psi_{a,b}$ of the form:
$$\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\psi\left(\frac{x - b}{a}\right),$$
where:
- $b$ is the shift or center of $\psi_{a, b}$ and $a > 0 $ is the scale.
- If $a > 1$, then $\psi_{a,b}$ is obtained by stretching the graph of $\psi$.
- If $a < 1$, then the graph of $\psi$ is contracted.
- The scale $a$ plays the role of inverse frequency in Fourier analysis: large $a$ corresponds to low frequencies and small $a$ to high frequencies (a small sketch of such a family follows).
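A small NumPy sketch of generating members of a wavelet family from a mother wavelet (here the "Mexican hat" function, shown on a later slide); the helper names are illustrative:
```
import numpy as np

def mexican_hat(x):
    # Mother wavelet: second derivative of a Gaussian (up to normalization).
    return (1 - x**2) * np.exp(-x**2 / 2)

def wavelet_family(psi, a, b):
    # psi_{a,b}(x) = psi((x - b) / a) / sqrt(a)
    return lambda x: psi((x - b) / a) / np.sqrt(a)

x = np.linspace(-8, 8, 1001)
psi_stretched  = wavelet_family(mexican_hat, a=2.0, b=1.0)(x)    # a > 1: stretched
psi_contracted = wavelet_family(mexican_hat, a=0.5, b=-1.0)(x)   # a < 1: contracted
```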
# Well-Known Wavelets
- Haar: first wavelet, introduced in 1909.
<center><img src="figs/02_example_haar.jpg" width=700px alt="default"/></center>
# Well-Known Wavelets
- Mexican hat: useful for detection in computer vision.
<center><img src="figs/02_example_mexican.jpg" width=700px alt="default"/></center>
# Other Wavelets
<center><img src="figs/02_more_waves.png" width=700px alt="default"/></center>
# Continuous Wavelet Transform (CWT)
$\color{#EF5645}{\text{The continuous wavelet transform (CWT)}}$ of a function $f$ is defined by:
$$Wf(a, b) = \int_{-\infty}^{+\infty} f(x) \psi_{a, b}(x)dx.$$
The inverse transform is given by:
$$ f(x) = \frac{1}{C_\psi}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\frac{1}{|a|^{1/2}}Wf(a, b)\psi_{a, b}(x) da db,$$
where $C_\psi$ is the constant coming from the definition of a wavelet.
<center><img src="figs/02_fourier_wavelets.jpg" width=600px alt="default"/></center>
# Discrete Wavelet Transform (DWT)
$\color{#EF5645}{\text{The discrete wavelet transform (DWT)}}$ is defined just like the continuous wavelet transform, except that only particular values of $a$ and $b$ are used.
For specific values of $a$ and $b$, it can be computed using the Fast Wavelet Transform, developed by Mallat.
# An Orthogonal Family of Wavelets
Given a mother wavelet $\psi$, an orthogonal family of wavelets can be obtained by:
- Choosing $a = 2^m$ and $b = n 2^m$, where $m$ and $n$ are integers.
With these choices of $a$ and $b$, the DWT is given by:
$$ Wf(m, n) = \langle \psi_{m, n}, f \rangle = \sum_{k=0}^{p-1} \psi_{m, n}(t_k)f(t_k),$$
where: $\psi_{m, n}(x) = 2^{-m/2}\psi\left( \frac{x - n 2^m}{2^m} \right).$
The inverse transform is given by:
$$ f(x) = \sum_{m, n}\psi_{m,n}(x) Wf(m, n).$$
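A minimal NumPy sketch of a single-level discrete Haar transform and its inverse, illustrating that the signal is reconstructed exactly from its wavelet coefficients:
```
import numpy as np

def haar_dwt_1d(x):
    # Single-level orthonormal Haar transform of a 1-D signal of even length.
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (scaling) coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (wavelet) coefficients
    return approx, detail

def haar_idwt_1d(approx, detail):
    # Inverse of the single-level Haar transform.
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt_1d(signal)
print(np.allclose(haar_idwt_1d(a, d), signal))  # True: perfect reconstruction
```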
# Image Operators
- [Point Operators](#sec-syllabus)
- [Neighborhood Operators and Linear Filtering](#sec-ece)
- [Fourier Transforms](#sec-ece)
- [Pyramids and Wavelets](#sec-ece)
# Saving and Loading Models
In this notebook, I'll show you how to save and load models with PyTorch. This is important because you'll often want to load previously trained models to use in making predictions or to continue training on new data.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms
import helper
import fc_model
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
# Train a network
To make things more concise here, I moved the model architecture and training code from the last part to a file called `fc_model`. Importing this, we can easily create a fully-connected network with `fc_model.Network`, and train the network using `fc_model.train`. I'll use this model (once it's trained) to demonstrate how we can save and load models.
```
# Create the network, define the criterion and optimizer
model = fc_model.Network(784, 10, [512, 256, 128])
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
fc_model.train(model, trainloader, testloader, criterion, optimizer, epochs=2)
```
## Saving and loading networks
As you can imagine, it's impractical to train a network every time you need to use it. Instead, we can save trained networks then load them later to train more or use them for predictions.
The parameters for PyTorch networks are stored in a model's `state_dict`. We can see the state dict contains the weight and bias matrices for each of our layers.
```
print("Our model: \n\n", model, '\n')
print("The state dict keys: \n\n", model.state_dict().keys())
```
The simplest thing to do is to save the state dict with `torch.save`. For example, we can save it to a file `'checkpoint.pth'`.
```
torch.save(model.state_dict(), 'checkpoint.pth')
```
Then we can load the state dict with `torch.load`.
```
state_dict = torch.load('checkpoint.pth')
print(state_dict.keys())
```
And to load the state dict into the network, you do `model.load_state_dict(state_dict)`.
```
model.load_state_dict(state_dict)
```
Seems pretty straightforward, but as usual it's a bit more complicated. Loading the state dict works only if the model architecture is exactly the same as the checkpoint architecture. If I create a model with a different architecture, this fails.
```
# Try this
model = fc_model.Network(784, 10, [400, 200, 100])
# This will throw an error because the tensor sizes are wrong!
model.load_state_dict(state_dict)
```
This means we need to rebuild the model exactly as it was when trained. Information about the model architecture needs to be saved in the checkpoint, along with the state dict. To do this, you build a dictionary with all the information you need to completely rebuild the model.
```
checkpoint = {'input_size': 784,
'output_size': 10,
'hidden_layers': [each.out_features for each in model.hidden_layers],
'state_dict': model.state_dict()}
torch.save(checkpoint, 'checkpoint.pth')
```
Now the checkpoint has all the necessary information to rebuild the trained model. You can easily make that a function if you want. Similarly, we can write a function to load checkpoints.
```
def load_checkpoint(filepath):
checkpoint = torch.load(filepath)
model = fc_model.Network(checkpoint['input_size'],
checkpoint['output_size'],
checkpoint['hidden_layers'])
model.load_state_dict(checkpoint['state_dict'])
return model
model = load_checkpoint('checkpoint.pth')
print(model)
```
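For symmetry, a small helper that writes such a checkpoint could look like the sketch below (the name `save_checkpoint` and the hard-coded sizes are just illustrative; they mirror the dictionary built above):
```
def save_checkpoint(model, filepath):
    # Store the architecture parameters alongside the learned weights,
    # so the network can be rebuilt before the state dict is loaded.
    checkpoint = {'input_size': 784,
                  'output_size': 10,
                  'hidden_layers': [each.out_features for each in model.hidden_layers],
                  'state_dict': model.state_dict()}
    torch.save(checkpoint, filepath)

save_checkpoint(model, 'checkpoint.pth')
```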
### Regression: Predicting continuous labels
In contrast with the discrete labels of a classification algorithm, we will next look at a simple *regression* task in which the labels are continuous quantities.
Consider the data shown in the following figure, which consists of a set of points each with a continuous label:
<img src="https://github.com/soltaniehha/Business-Analytics/blob/master/figs/12-01-regression-1.png?raw=true" width="600" align="center"/>
As with the classification example, we have two-dimensional data: that is, there are two features describing each data point.
The color of each point represents the continuous label for that point.
There are a number of possible regression models we might use for this type of data, but here we will use simple linear regression.
This simple linear regression model assumes that if we treat the label as a third spatial dimension, we can fit a plane to the data.
This is a higher-level generalization of the well-known problem of fitting a line to data with two coordinates.
We can visualize this setup as shown in the following figure:
<img src="https://github.com/soltaniehha/Business-Analytics/blob/master/figs/12-01-regression-2.png?raw=true" width="800" align="center"/>
Notice that the *feature 1-feature 2* plane here is the same as in the two-dimensional plot from before; in this case, however, we have represented the labels by both color and three-dimensional axis position.
From this view, it seems reasonable that fitting a plane through this three-dimensional data would allow us to predict the expected label for any set of input parameters.
Returning to the two-dimensional projection, when we fit such a plane we get the result shown in the following figure:
<img src="https://github.com/soltaniehha/Business-Analytics/blob/master/figs/12-01-regression-3.png?raw=true" width="600" align="center"/>
This plane of best fit gives us what we need to predict labels for new points.
Visually, we find the results shown in the following figure:
<img src="https://github.com/soltaniehha/Business-Analytics/blob/master/figs/12-01-regression-4.png?raw=true" width="900" align="center"/>
As with the classification example, this may seem rather trivial in a low number of dimensions.
But the power of these methods is that they can be straightforwardly applied and evaluated in the case of data with many, many features.
For example, for the task of computing the apparent ("feels like") temperature, we might use the following features and labels:
- *feature 1*, *feature 2*, etc. $\to$ temperature, humidity, or wind speed
- *label* $\to$ apparent temperature
The apparent temperature for a small number of data points might be determined through an independent set of (typically more expensive) observations.
The apparent temperature for the remaining data points could then be estimated using a suitable regression model, without the need to employ the more expensive observations across the entire set.
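As a minimal sketch of the idea on synthetic data (not the weather dataset used below; the variable names are illustrative), fitting a plane to two-feature data with scikit-learn might look like this:
```
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 2))                                 # two features per point
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.5 + 0.05 * rng.randn(200)    # continuous label

plane = LinearRegression().fit(X, y)
print(plane.coef_, plane.intercept_)      # close to [3, -2] and 0.5
print(plane.predict([[0.2, 0.7]]))        # label predicted from the fitted plane
```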
# Example Linear Regression - Apparent Temperature
Goal: predict the apparent temperature from a series of measurements.
Data was downloaded from [Kaggle](https://www.kaggle.com/budincsevity/szeged-weather#weatherHistory.csv) and can be loaded directly from the course's GitHub. It includes hourly weather data for the Szeged, Hungary area from 2006 to 2016:
```
from IPython.display import Pretty as disp
hint = 'https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/docs/hints/' # path to hints on GitHub
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white")
df = pd.read_csv('https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/data/weatherHistory.csv')
df.head(3)
df.info()
```
## Preprocessing
1. There are a small number of NAs in "Precip Type"; we will drop all the NAs
2. We won't need the following fields, so we will drop them: 'Formatted Date', 'Summary', 'Daily Summary'
3. We will convert 'Precip Type' to dummy variables. Note that we've used `drop_first=True`, so we won't have to manually drop one of the categories
```
df = df.dropna()
df = df.drop(['Formatted Date','Summary','Daily Summary'], axis=1)
df = pd.get_dummies(df, ['Precip Type'], drop_first=True)
df.info()
```
Create a feature DataFrame called `X`; our target variable is 'Apparent Temperature (C)' and that's what we need to exclude in the feature DataFrame:
```
# Your answer goes here
# Don't run this cell to keep the outcome as your frame of reference
# SOLUTION: Uncomment and execute the line below to get help
#disp(hint + '11-03-X')
```
Create a target vector with 'Apparent Temperature (C)' and call it `y`:
```
# Your answer goes here
# Don't run this cell to keep the outcome as your frame of reference
# SOLUTION: Uncomment and execute the line below to get help
#disp(hint + '11-03-y')
```
We would like to evaluate the model on data it has not seen before, and so we will split the data into a training set and a testing set. Use a 30% split for test. You can use seed value 833 if you would like to get similar values as this notebook:
```
# Your answer goes here
# SOLUTION: Uncomment and execute the line below to get help
#disp(hint + '11-03-split')
print("Xtrain shape:", Xtrain.shape)
print("Xtest shape:", Xtest.shape)
```
With the data arranged, we can follow our recipe to predict the labels:
First, instantiate a simple linear regression model. You would first need to import `LinearRegression`; it can be found under the `linear_model` module in `sklearn`. Call this model `model`.
We will instantiate the model with all the default parameters:
```
# Your answer goes here
# SOLUTION: Uncomment and execute the line below to get help
#disp(hint + '11-03-model')
```
Fit model to the training data:
```
# Your answer goes here
# SOLUTION: Uncomment and execute the line below to get help
#disp(hint + '11-03-fit')
print("Model coefficients: ", model.coef_)
print("Model intercept:", model.intercept_)
```
Predict on new (test) data and store the results as `y_model`:
```
# Your answer goes here
# SOLUTION: Uncomment and execute the line below to get help
#disp(hint + '11-03-predict')
```
Now that our predictions are ready, we can merge them, along with the ground-truth 'Apparent Temperature (C)' field, with the test features and visually inspect our model's performance:
```
test = Xtest.join(ytest).reset_index()
test.join(pd.Series(y_model, name='predicted')).head()
```
Calculating the mean absolute error:
```
from sklearn.metrics import mean_absolute_error
mean_absolute_error(ytest, y_model)
```
```
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import scipy.io
from tensorflow.python.framework import function
import os, re
import claude.utils as cu
import claude.tx as tx
import claude.claudeflow.autoencoder as ae
import claude.claudeflow.helper as cfh
import claude.claudeflow.training as cft
# Parameters
# Channel Parameters
chParam = cu.AttrDict()
chParam.M = 16
chParam.w = int(np.log2(chParam.M))
if chParam.M == 16:
chParam.SNR = 10
elif chParam.M == 64:
chParam.SNR = 18
else:
chParam.SNR = 22
# Auto-Encoder Parameters
aeParam = cu.AttrDict()
aeParam.seed = 1337
aeParam.constellationDim = 2
aeParam.constellationOrder = chParam.M
aeParam.nLayers = 4
aeParam.nHidden = 256
aeParam.activation = tf.nn.relu
aeParam.dropout = False
aeParam.dtype = tf.float32
# Training Parameters
trainingParam = cu.AttrDict()
trainingParam.sampleSize = 16*chParam.M # Increase for better results (especially if M>16)
trainingParam.batchSize = 1*chParam.M # Increase for better results (especially if M>16)
trainingParam.learningRate = 0.001
trainingParam.displayStep = 25
trainingParam.path = 'results_GMI_AWGN'
trainingParam.filename = 'M{:03d}_seed{:04d}_SNR{:02d}'.format(chParam.M,aeParam.seed,chParam.SNR)
trainingParam.earlyStopping = 25
trainingParam.iterations = 500
trainingParam.summaries = True
if trainingParam.summaries:
# tensorboard directory
chHyperParam = ['M','SNR']
aeHyperParam = ['seed']
trainingHyperParam = []
trainingParam.summaryString = ','.join( [ '{}={}'.format(item,chParam[item]) for item in chHyperParam ]
+[ '{}={}'.format(item,trainingParam[item]) for item in trainingHyperParam ]
+[ '{}={}'.format(item,aeParam[item]) for item in aeHyperParam ] )
print(trainingParam.summaryString,flush=True)
# TF constants
one = tf.constant(1,aeParam.dtype)
two = tf.constant(2,aeParam.dtype)
DIM = tf.constant(aeParam.constellationDim,aeParam.dtype)
PI = tf.constant(np.pi,aeParam.dtype)
tf.set_random_seed(aeParam.seed)
np.random.seed(aeParam.seed)
# Tx Graph
allCombinations = cu.generateUniqueBitVectors(chParam.M)
xSeed = tf.constant(allCombinations, aeParam.dtype)
X = tf.placeholder( aeParam.dtype, shape=(None, chParam.w) )
enc, enc_seed = ae.encoder(X, aeParam, bits=True)
# Channel Graph
SNR_lin = cfh.dB2lin(tf.constant(chParam.SNR,aeParam.dtype),'dB')
sigma2_noise = one / SNR_lin
noise = tf.sqrt(sigma2_noise) * tf.rsqrt(two) * tf.random_normal(shape=tf.shape(enc),dtype=aeParam.dtype)
channel = enc + noise
# Rx Graph
decoder = ae.decoder(channel, aeParam, bits=True)
decoder_sigmoid = tf.sigmoid(decoder)
# Neural Network GMI metric
# the output of the neural network with sigmoid activation can serve as an LLR estimation :)
# we basically assume that the decoder neural network has learned a probability distribution of the channel
# which we use as auxiliary channel within the receiver
sigmoid_LLRs = tf.linalg.transpose( tf.log( (one-decoder_sigmoid) / decoder_sigmoid ) )
sigmoid_GMI = cfh.GMI( tf.linalg.transpose(X), sigmoid_LLRs )
# Gaussian GMI metric
# here we just use a Gaussian auxiliary channel assumption
constellation = tf.expand_dims( tf.complex( enc_seed[:,0], enc_seed[:,1]), axis=0 )
channel_complex = tf.expand_dims( tf.complex( channel[:,0], channel[:,1]), axis=0 )
gaussian_LLRs = cfh.gaussianLLR( constellation, tf.linalg.transpose(xSeed), channel_complex, SNR_lin, chParam.M )
gaussian_GMI = cfh.GMI( tf.linalg.transpose(X), gaussian_LLRs )
# In this script, the channel is a Gaussian channel, so a Gaussian auxiliary channel assumption is optimal
# Therefore: Gaussian GMI > Neural Network GMI
# bit errors and ber
input_bits = tf.cast( X , tf.int32 )
output_bits = tf.cast( tf.round( tf.nn.sigmoid( decoder ) ), tf.int32 )
bit_compare = tf.not_equal( output_bits, input_bits )
bit_errors = tf.reduce_sum( tf.cast( bit_compare, tf.int32 ) )
bit_error_rate = tf.reduce_mean( tf.cast( bit_compare, aeParam.dtype ) )
# loss
loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits( labels=X, logits=decoder ) )
optimizer = tf.train.AdamOptimizer(learning_rate=trainingParam.learningRate)
d_sigmoid_loss = optimizer.minimize(loss)
metricsDict = {'loss_metric':loss,\
'ber_metric':bit_error_rate,\
'gaussian_gmi_metric':gaussian_GMI,\
'sigmoid_gmi_metric':sigmoid_GMI}
meanMetricOpsDict, updateOps, resetOps = cft.create_mean_metrics(metricsDict)
sess = tf.Session()
if trainingParam.summaries:
weights_summaries = tf.summary.merge_all() # without weight/bias histograms
# Summaries
s = [tf.summary.scalar('BER', metricsDict['ber_metric']),
tf.summary.scalar('loss', metricsDict['loss_metric']),
tf.summary.scalar('gaussian_GMI', metricsDict['gaussian_gmi_metric']),
tf.summary.scalar('sigmoid_GMI', metricsDict['sigmoid_gmi_metric'])]
epoche_summaries = tf.summary.merge(s) # without weight/bias histograms
summaries_dir = os.path.join(trainingParam.path,'tboard{}'.format(chParam.M),trainingParam.summaryString)
os.makedirs(summaries_dir, exist_ok=True)
train_writer = tf.summary.FileWriter(summaries_dir + '/train', sess.graph)
else:
train_writer = None
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
checkpoint_path = os.path.join(trainingParam.path,'checkpoint',trainingParam.filename,'best')
if not os.path.exists(checkpoint_path):
os.makedirs(checkpoint_path)
else:
pass
# print("Restoring checkpoint...", flush=True)
# saver.restore(sess=sess,save_path=checkpoint_path)
# constellation before training
[constellation,constellation_bits] = sess.run([enc_seed,xSeed])
plt.figure(figsize=(8,8))
plt.plot(constellation[:,0],constellation[:,1],'.')
for ii in range(constellation.shape[0]):
bit_string = ''.join( [ str(int(x)) for x in allCombinations[ii,:].tolist()] )
plt.text(constellation[ii,0], constellation[ii,1], bit_string, fontsize=12)
plt.axis('square');
lim_ = 1.6
plt.xlim(-lim_,lim_);
plt.ylim(-lim_,lim_);
bestLoss = 100000
bestAcc = 0
lastImprovement = 0
epoche = 0
nBatches = int(trainingParam.sampleSize/trainingParam.batchSize)
batchSizeMultiples = 1
batchSize = batchSizeMultiples * trainingParam.batchSize
np_loss = []
np_ber = []
np_gaussian_gmi = []
np_sigmoid_gmi = []
```
### Comment on the training procedure:
The training gets stuck early when a large batch size is chosen. For this reason we start with a low batch size and iteratively increase it after temporary convergence. Training with a low batch size gives a more stochastic gradient estimate, which helps the optimizer escape local minima.
```
print( 'START TRAINING ... ', flush=True )
while(True):
epoche = epoche + 1
sess.run(resetOps)
# train AE with iteratively increasing batch size
for batch in range(0,nBatches):
feedDict = {X: cu.generateBitVectors(batchSize,chParam.M)}
sess.run(d_sigmoid_loss, feed_dict=feedDict)
# gather performance metrics with large batch size
for batch in range(0,nBatches):
feedDict = {X: cu.generateBitVectors(trainingParam.sampleSize,chParam.M)}
sess.run(updateOps, feed_dict=feedDict)
[outAvgLoss, outAvgBer, outAvgGaussianGmi, outAvgSigmoidGmi] = sess.run(list(meanMetricOpsDict.values()), feed_dict=feedDict)
np_loss.append( outAvgLoss )
np_ber.append( outAvgBer )
np_gaussian_gmi.append( outAvgGaussianGmi )
np_sigmoid_gmi.append( outAvgSigmoidGmi )
if trainingParam.summaries:
epocheSummaries = sess.run(epoche_summaries, feed_dict=feedDict)
train_writer.add_summary(epocheSummaries,epoche)
if outAvgLoss < bestLoss:
bestLoss = outAvgLoss
lastImprovement = epoche
saver.save(sess=sess,save_path=checkpoint_path)
# convergence check and increase empirical evidence
if epoche - lastImprovement > trainingParam.earlyStopping:
saver.restore(sess=sess,save_path=checkpoint_path)
bestLoss = 10000
lastImprovement = epoche
# increase empirical evidence
batchSizeMultiples = batchSizeMultiples + 2
batchSize = batchSizeMultiples * trainingParam.batchSize
if batchSizeMultiples >= 17:
break;
print("batchSize: {}, batchSizeMultiples: {}".format(batchSize,batchSizeMultiples))
if epoche%trainingParam.displayStep == 0:
print('epoche: {:04d} - avgLoss: {:.2f} - avgBer: {:.2e} - avgGaussianGmi: {:.2f} - avgSigmoidGmi: {:.2f}'.format(epoche,outAvgLoss,outAvgBer,outAvgGaussianGmi,outAvgSigmoidGmi),flush=True)
saver.restore(sess=sess,save_path=checkpoint_path)
np_loss = np.array( np_loss )
np_ber = np.array( np_ber )
np_gaussian_gmi = np.array( np_gaussian_gmi )
np_sigmoid_gmi = np.array( np_sigmoid_gmi )
plt.plot( np_loss )
plt.plot( np_gaussian_gmi )
plt.plot( np_sigmoid_gmi )
# constellation after training
[constellation,constellation_bits] = sess.run([enc_seed,xSeed])
plt.figure(figsize=(8,8))
plt.plot(constellation[:,0],constellation[:,1],'x')
for ii in range(constellation.shape[0]):
bit_string = ''.join( [ str(int(x)) for x in allCombinations[ii,:].tolist()] )
plt.text(constellation[ii,0]+0.01, constellation[ii,1]+0.01, bit_string, fontsize=12)
plt.axis('square');
lim_ = 1.6
plt.xlim(-lim_,lim_);
plt.ylim(-lim_,lim_);
sess.run(resetOps)
for batch in range(0,100):
feedDict = {X: cu.generateBitVectors(1000,chParam.M)}
sess.run(updateOps, feed_dict=feedDict)
[outAvgLoss, outAvgBer, outAvgGaussianGmi, outAvgSigmoidGmi] = sess.run(list(meanMetricOpsDict.values()), feed_dict=feedDict)
finalMetrics = { 'GaussianGMI': outAvgGaussianGmi, 'SigmoidGMI': outAvgSigmoidGmi, 'BER': outAvgBer, 'xentropy': outAvgLoss }
print( 'finalMetrics:', finalMetrics )
```