| Column | Type and range |
|---|---|
| text | stringlengths 87 to 777k |
| meta.hexsha | stringlengths 40 to 40 |
| meta.size | int64 682 to 1.05M |
| meta.ext | stringclasses, 1 value |
| meta.lang | stringclasses, 1 value |
| meta.max_stars_repo_path | stringlengths 8 to 226 |
| meta.max_stars_repo_name | stringlengths 8 to 109 |
| meta.max_stars_repo_head_hexsha | stringlengths 40 to 40 |
| meta.max_stars_repo_licenses | sequencelengths 1 to 5 |
| meta.max_stars_count | int64 1 to 23.9k (nullable) |
| meta.max_stars_repo_stars_event_min_datetime | stringlengths 24 to 24 (nullable) |
| meta.max_stars_repo_stars_event_max_datetime | stringlengths 24 to 24 (nullable) |
| meta.max_issues_repo_path | stringlengths 8 to 226 |
| meta.max_issues_repo_name | stringlengths 8 to 109 |
| meta.max_issues_repo_head_hexsha | stringlengths 40 to 40 |
| meta.max_issues_repo_licenses | sequencelengths 1 to 5 |
| meta.max_issues_count | int64 1 to 15.1k (nullable) |
| meta.max_issues_repo_issues_event_min_datetime | stringlengths 24 to 24 (nullable) |
| meta.max_issues_repo_issues_event_max_datetime | stringlengths 24 to 24 (nullable) |
| meta.max_forks_repo_path | stringlengths 8 to 226 |
| meta.max_forks_repo_name | stringlengths 8 to 109 |
| meta.max_forks_repo_head_hexsha | stringlengths 40 to 40 |
| meta.max_forks_repo_licenses | sequencelengths 1 to 5 |
| meta.max_forks_count | int64 1 to 6.05k (nullable) |
| meta.max_forks_repo_forks_event_min_datetime | stringlengths 24 to 24 (nullable) |
| meta.max_forks_repo_forks_event_max_datetime | stringlengths 24 to 24 (nullable) |
| meta.avg_line_length | float64 15.5 to 967k |
| meta.max_line_length | int64 42 to 993k |
| meta.alphanum_fraction | float64 0.08 to 0.97 |
| meta.converted | bool, 1 class |
| meta.num_tokens | int64 33 to 431k |
| meta.lm_name | stringclasses, 1 value |
| meta.lm_label | stringclasses, 3 values |
| meta.lm_q1_score | float64 0.56 to 0.98 |
| meta.lm_q2_score | float64 0.55 to 0.97 |
| meta.lm_q1q2_score | float64 0.5 to 0.93 |
| text_lang | stringclasses, 53 values |
| text_lang_conf | float64 0.03 to 1 |
| label | float64 0 to 1 |
<div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Discontinuous Galerkin Method with Physics based numerical flux - 1D Elastic Wave Equation </div>
</div>
</div>
</div>
This notebook is based on the paper [A new discontinuous Galerkin spectral element method for elastic waves with physically motivated numerical fluxes](https://www.geophysik.uni-muenchen.de/~gabriel/kduru_waves2017.pdf)
Published in the [13th International Conference on Mathematical and Numerical Aspects of Wave Propagation](https://cceevents.umn.edu/waves-2017)
##### Authors:
* Kenneth Duru
* Ashim Rijal ([@ashimrijal](https://github.com/ashimrijal))
* Sneha Singh
---
## Basic Equations ##
The source-free elastic wave equation in a heterogeneous 1D medium is
\begin{align}
\rho(x)\partial_t v(x,t) -\partial_x \sigma(x,t) & = 0\\
\frac{1}{\mu(x)}\partial_t \sigma(x,t) -\partial_x v(x,t) & = 0
\end{align}
with $\rho(x)$ the density, $\mu(x)$ the shear modulus and $x = [0, L]$. At the boundaries $ x = 0, x = L$ we pose the general well-posed linear boundary conditions
\begin{equation}
\begin{split}
B_0(v, \sigma, Z_{s}, r_0): =\frac{Z_{s}}{2}\left({1-r_0}\right){v} -\frac{1+r_0}{2} {\sigma} = 0, \quad \text{at} \quad x = 0, \\
B_L(v, \sigma, Z_{s}, r_n): =\frac{Z_{s}}{2} \left({1-r_n}\right){v} + \frac{1+r_n}{2}{\sigma} = 0, \quad \text{at} \quad x = L.
\end{split}
\end{equation}
with the reflection coefficients $r_0$, $r_n$ being real numbers and $|r_0|, |r_n| \le 1$.
Note that at $x = 0$, while $r_0 = -1$ yields a clamped wall, $r_0 = 0$ yields an absorbing boundary, and with $r_0 = 1$ we have a free-surface boundary condition. Similarly, at $x = L$, $r_n = -1$ yields a clamped wall, $r_n = 0$ yields an absorbing boundary, and $r_n = 1$ gives a free-surface boundary condition.
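For illustration only (this helper is not part of the notebook's code base), the two boundary operators can be written as small Python functions; `v`, `sigma` and `Zs` stand for the velocity, stress and shear impedance at the corresponding boundary point, and setting the returned expression to zero enforces the boundary condition:
```python
# Sketch of the boundary operators B_0 and B_L defined above (illustrative only).
def B_0(v, sigma, Zs, r0):
    # r0 = -1: clamped wall, r0 = 0: absorbing boundary, r0 = 1: free surface
    return 0.5 * Zs * (1 - r0) * v - 0.5 * (1 + r0) * sigma

def B_L(v, sigma, Zs, rn):
    # rn = -1: clamped wall, rn = 0: absorbing boundary, rn = 1: free surface
    return 0.5 * Zs * (1 - rn) * v + 0.5 * (1 + rn) * sigma
```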
1) Discretize the spatial domain $x$ into $K$ elements and denote the ${k}^{th}$ element $e^k = [x_{k}, x_{k+1}]$ and the element width $\Delta{x}_k = x_{k+1}-x_{k}$. Consider two adjacent elements $e^k = [x_{k}, x_{k+1}]$ and $e^{k+1} = [x_{k+1}, x_{k+2}]$ with an interface at $x_{k+1}$. At the interface we pose the physical conditions for a locked interface
\begin{align}
\text{force balance}: \quad &\sigma^{-} = \sigma^{+} = \sigma, \nonumber \\
\text{no slip}: \quad & [\![ v]\!] = 0,
\end{align}
where $[\![ v]\!] = v^{+} - v^{-}$, and $v^{-}, \sigma^{-}$ and $v^{+}, \sigma^{+}$ are the fields in $e^k = [x_{k}, x_{k+1}]$ and $e^{k+1} = [x_{k+1}, x_{k+2}]$, respectively.
2) Within the element derive the weak form of the equation by multiplying both sides by an arbitrary test function and integrating over the element.
3) Next map the $e^k = [x_{k}, x_{k+1}]$ to a reference element $\xi = [-1, 1]$
4) Inside the transformed element $\xi \in [-1, 1]$, approximate the solution and material parameters by a polynomial interpolant, and write
\begin{equation}
v^k(\xi, t) = \sum_{j = 1}^{N+1}v_j^k(t) \mathcal{L}_j(\xi), \quad \sigma^k(\xi, t) = \sum_{j = 1}^{N+1}\sigma_j^k(t) \mathcal{L}_j(\xi),
\end{equation}
\begin{equation}
\rho^k(\xi) = \sum_{j = 1}^{N+1}\rho_j^k \mathcal{L}_j(\xi), \quad \mu^k(\xi) = \sum_{j = 1}^{N+1}\mu_j^k \mathcal{L}_j(\xi),
\end{equation}
where $ \mathcal{L}_j$ is the $j$th interpolating polynomial of degree $N$. If we consider a nodal basis, then the interpolating polynomials satisfy $ \mathcal{L}_j(\xi_i) = \delta_{ij}$.
The interpolating nodes $\xi_i$, $i = 1, 2, \dots, N+1$ are the nodes of a Gauss quadrature with
\begin{equation}
\sum_{i = 1}^{N+1} f(\xi_i)w_i \approx \int_{-1}^{1}f(\xi) d\xi,
\end{equation}
where $w_i$ are quadrature weights.
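As a minimal sketch (the notebook itself uses its own `quadraturerules` module), Gauss-Legendre nodes and weights on the reference element can be obtained from NumPy and checked against an exact integral:
```python
import numpy as np

N = 4                                             # polynomial degree
xi, w = np.polynomial.legendre.leggauss(N + 1)    # N+1 nodes and weights on [-1, 1]

f = lambda x: x**4                                # smooth test function
print(np.sum(f(xi) * w), 2.0/5.0)                 # quadrature sum vs. exact integral of x^4
```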
5) At the element boundaries $\xi = \pm 1$, we generate $\widehat{v}^{k}(\pm 1, t)$ and $\widehat{\sigma}^{k}(\pm 1, t)$ by solving a Riemann problem and constraining the solutions against interface and boundary conditions. Then numerical fluctuations $F^k(-1, t)$ and $G^k(1, t)$ are obtained by penalizing the hat variables against the incoming characteristics only.
6) Finally, the flux fluctuations are appended to the semi-discrete PDE with special penalty weights and we have
\begin{equation}
\begin{split}
\frac{d \boldsymbol{v}^k( t)}{ d t} &= \frac{2}{\Delta{x}_k} W^{-1}({\boldsymbol{\rho}}^{k})\left(Q \boldsymbol{\sigma}^k( t) - \boldsymbol{e}_{1}F^k(-1, t)- \boldsymbol{e}_{N+1}G^k(1, t)\right),
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\frac{d \boldsymbol{\sigma}^k( t)}{ d t} &= \frac{2}{\Delta{x}_k} W^{-1}(1/{\boldsymbol{\mu}^{k}})\left(Q \boldsymbol{v}^k( t) + \boldsymbol{e}_{1}\frac{1}{Z_{s}^{k}(-1)}F^k(-1, t)- \boldsymbol{e}_{N+1}\frac{1}{Z_{s}^{k}(1)}G^k(1, t)\right),
\end{split}
\end{equation}
where
\begin{align}
\boldsymbol{e}_{1} = [ \mathcal{L}_1(-1), \mathcal{L}_2(-1), \dots, \mathcal{L}_{N+1}(-1) ]^T, \quad \boldsymbol{e}_{N+1} = [ \mathcal{L}_1(1), \mathcal{L}_2(1), \dots, \mathcal{L}_{N+1}(1) ]^T,
\end{align}
and
\begin{align}
G^k(1, t):= \frac{Z_{s}^{k}(1)}{2} \left(v^{k}(1, t)-\widehat{v}^{k}(1, t) \right) + \frac{1}{2}\left(\sigma^{k}(1, t)- \widehat{\sigma}^{k}(1, t)\right),
\end{align}
\begin{align}
F^{k}(-1, t):= \frac{Z_{s}^{k}(-1)}{2} \left(v^{k}(-1, t)-\widehat{v}^{k}(-1, t) \right) - \frac{1}{2}\left(\sigma^{k}(-1, t)- \widehat{\sigma}^{k}(-1, t)\right).
\end{align}
And the weighted elemental mass matrix $W^N(a)$ and the stiffness matrix $Q^N $ are defined by
\begin{align}
W_{ij}(a) = \sum_{m = 1}^{N+1} w_m \mathcal{L}_i(\xi_m) {\mathcal{L}_j(\xi_m)} a(\xi_m), \quad Q_{ij} = \sum_{m = 1}^{N+1} w_m \mathcal{L}_i(\xi_m) {\mathcal{L}_j^{\prime}(\xi_m)}.
\end{align}
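Because the nodal basis satisfies $\mathcal{L}_i(\xi_m) = \delta_{im}$, the sums above collapse: $W(a)$ is diagonal and $Q$ is a weighted differentiation matrix. A minimal sketch, assuming a differentiation matrix `D` with entries $D_{ij} = \mathcal{L}_j^{\prime}(\xi_i)$ (the convention assumed here for the notebook's `specdiff.derivative_GL`):
```python
import numpy as np

def elemental_matrices(w, a, D):
    """Illustrative assembly of W(a) and Q for a nodal basis at the quadrature nodes."""
    W = np.diag(w * a)      # W_ij = w_i * a(xi_i) * delta_ij
    Q = np.diag(w) @ D      # Q_ij = w_i * L_j'(xi_i)
    return W, Q
```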
7) Time extrapolation can be performed using any stable time-stepping scheme, such as Runge-Kutta or ADER. This notebook implements both Runge-Kutta and ADER schemes for solving the source-free version of the elastic wave equation in a homogeneous medium. To keep the problem simple, we use as the spatial initial condition a Gaussian with half-width $\delta$
\begin{equation}
v(x,t=0) = e^{-1/\delta^2 (x - x_{o})^2}, \quad \sigma(x,t=0) = 0
\end{equation}
**Exercises**
1. Lagrange polynomials are used to interpolate the solution and the material parameters. First use polynomial degree 2 and then 6. Compare the simulation results in terms of accuracy of the solution (the third and fourth figures give errors). The time required to complete the simulation is also printed at the end; compare the run time of both simulations as well.
2. We use two quadrature rules: Gauss-Legendre-Lobatto and Gauss-Legendre. Run simulations once using the Lobatto and once using the Legendre rule. Compare the difference.
3. Now fix the order of the polynomial to be 6, for example. Then use 100 degrees of freedom and, for another simulation, 250. What happens? Also compare the time required to complete both simulations.
4. Experiment with the boundary conditions by changing the reflection coefficients $r_0$ and $r_n$.
5. You can also play around with a sinusoidal initial solution instead of the Gaussian.
6. Change the time-integrator from RK to ADER. Observe if there are changes in the solution or the CFL number. Vary the polynomial order N.
```python
# Parameters initialization and plotting the simulation
# Import necessary routines
import Lagrange
import numpy as np
import timeintegrate
import quadraturerules
import specdiff
import utils
import matplotlib.pyplot as plt
import timeit # to check the simulation time
#plt.switch_backend("TkAgg") # plots in external window
plt.switch_backend("nbagg") # plots within this notebook
```
```python
# Simulation start time
start = timeit.default_timer()
# Tic
iplot = 20
# Physical domain x = [ax, bx] (km)
ax = 0.0 # (km)
bx = 20.0 # (km)
# Choose quadrature rules and the corresponding nodes
# We use Gauss-Legendre-Lobatto (Lobatto) or Gauss-Legendre (Legendre) quadrature rule.
#node = 'Lobatto'
node = 'Legendre'
if node not in ('Lobatto', 'Legendre'):
print('quadrature rule not implemented. choose node = Legendre or node = Lobatto')
exit(-1)
# Polynomial degree N: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
N = 4 # Lagrange polynomial degree
NP = N+1 # quadrature nodes per element
if N < 1 or N > 12:
print('polynomial degree not implemented. choose N>= 1 and N <= 12')
exit(-1)
# Degrees of freedom to resolve the wavefield
deg_of_freedom = 400 # = num_element*NP
# Estimate the number of elements needed for a given polynomial degree and degrees of freedom
num_element = round(deg_of_freedom/NP) # = deg_of_freedom/NP
# Initialize the mesh
y = np.zeros(NP*num_element)
# Generate num_element dG elements in the interval [ax, bx]
x0 = np.linspace(ax,bx,num_element+1)
dx = np.diff(x0) # element sizes
# Generate Gauss quadrature nodes (psi): [-1, 1] and weights (w)
if node == 'Legendre':
GL_return = quadraturerules.GL(N)
psi = GL_return['xi']
w = GL_return['weights'];
if node == 'Lobatto':
gll_return = quadraturerules.gll(N)
psi = gll_return['xi']
w = gll_return['weights']
# Use the Gauss quadrature nodes (psi) generate the mesh (y)
for i in range (1,num_element+1):
for j in range (1,(N+2)):
y[j+(N+1)*(i-1)-1] = dx[i-1]/2.0 * (psi[j-1] + 1.0) +x0[i-1]
# Override with the exact degrees of freedom
deg_of_freedom = len(y);
# Generate the spectral difference operator (D) in the reference element: [-1, 1]
D = specdiff.derivative_GL(N, psi, w)
# Boundary condition reflection coefficients
r0 = 0 # r=0:absorbing, r=1:free-surface, r=-1: clamped
rn = 0 # r=0:absorbing, r=1:free-surface, r=-1: clamped
# Initialize the wave-fields
L = 0.5*(bx-ax)
delta = 0.01*(bx-ax)
x0 = 0.5*(bx+ax)
omega = 4.0
#u = np.sin(omega*np.pi*y/L) # Sine function
u = 1/np.sqrt(2.0*np.pi*delta**2)*np.exp(-(y-x0)**2/(2.0*delta**2)) # Gaussian
u = np.transpose(u)
v = np.zeros(len(u))
U = np.zeros(len(u))
V = np.zeros(len(u))
print('points per wavelength: ', round(delta*deg_of_freedom/(2*L)))
# Material parameters
cs = 3.464 # shear wave speed (km/s)
rho = 0*y + 2.67 # density (g/cm^3)
mu = 0*y + rho * cs**2 # shear modulus (GPa)
Zs = rho*cs # shear impedance
# Time stepping parameters
cfl = 0.5 # CFL number
dt = (0.25/(cs*(2*N+1)))*min(dx) # time-step (s)
t = 0.0 # initial time
Tend = 2 # final time (s)
n = 0 # counter
# Difference between analytical and numerical solutions
EV = [0] # initialize errors in V (stress)
EU = [0] # initialize errors in U (velocity)
T = [0] # later append every time steps to this
# Initialize animated plot for velocity and stress
fig1 = plt.figure(figsize=(10,10))
ax1 = fig1.add_subplot(4,1,1)
line1 = ax1.plot(y, u, 'r', y, U, 'k--')
plt.title('numerical vs exact')
plt.xlabel('x[km]')
plt.ylabel('velocity [m/s]')
ax2 = fig1.add_subplot(4,1,2)
line2 = ax2.plot(y, v, 'r', y, V, 'k--')
plt.title('numerical vs exact')
plt.xlabel('x[km]')
plt.ylabel('stress [MPa]')
# Initialize error plot (for velocity and stress)
ax3 = fig1.add_subplot(4,1,3)
line3 = ax3.plot(T, EU, 'r')
plt.title('relative error in particle velocity')
plt.xlabel('time[t]')
ax3.set_ylim([10**-5, 1])
plt.ylabel('error')
ax4 = fig1.add_subplot(4,1,4)
line4 = ax4.plot(T, EV, 'r')
plt.ylabel('error')
plt.xlabel('time[t]')
ax4.set_ylim([10**-5, 1])
plt.title('relative error in stress')
plt.tight_layout()
plt.ion()
plt.show()
A = (np.linalg.norm(1/np.sqrt(2.0*np.pi*delta**2)*0.5*Zs*(np.exp(-(y+cs*(t+1*0.5)-x0)**2/(2.0*delta**2))\
- np.exp(-(y-cs*(t+1*0.5)-x0)**2/(2.0*delta**2)))))
B = (np.linalg.norm(u))
# Loop through time and evolve the wave-fields using ADER time-stepping scheme of N+1 order of accuracy
time_integrator = 'ADER'
for t in utils.drange (0.0, Tend+dt,dt):
n = n+1
# ADER time-integrator
if time_integrator in ('ADER'):
ADER_Wave_dG_return = timeintegrate.ADER_Wave_dG(u,v,D,NP,num_element,dx,w,psi,t,r0,rn,dt,rho,mu)
u = ADER_Wave_dG_return['Hu']
v = ADER_Wave_dG_return['Hv']
# Runge-Kutta time-integrator
if time_integrator in ('RK'):
RK4_Wave_dG_return = timeintegrate.RK4_Wave_dG(u,v,D,NP,num_element,dx,w,psi,t,r0,rn,dt,rho,mu)
u = RK4_Wave_dG_return['Hu']
v = RK4_Wave_dG_return['Hv']
# Analytical sine wave (use it when the sine function is chosen above)
#U = 0.5*(np.sin(omega*np.pi/L*(y+cs*(t+1*dt))) + np.sin(omega*np.pi/L*(y-cs*(t+1*dt))))
#V = 0.5*Zs*(np.sin(omega*np.pi/L*(y+cs*(t+1*dt))) - np.sin(omega*np.pi/L*(y-cs*(t+1*dt))))
# Analytical Gaussian
U = 1/np.sqrt(2.0*np.pi*delta**2)*0.5*(np.exp(-(y+cs*(t+1*dt)-x0)**2/(2.0*delta**2))\
+ np.exp(-(y-cs*(t+1*dt)-x0)**2/(2.0*delta**2)))
V = 1/np.sqrt(2.0*np.pi*delta**2)*0.5*Zs*(np.exp(-(y+cs*(t+1*dt)-x0)**2/(2.0*delta**2))\
- np.exp(-(y-cs*(t+1*dt)-x0)**2/(2.0*delta**2)))
EV.append(np.linalg.norm(V-v)/A)
EU.append(np.linalg.norm(U-u)/B)
T.append(t)
# Updating plots
if n % iplot == 0:
for l in line1:
l.remove()
del l
for l in line2:
l.remove()
del l
for l in line3:
l.remove()
del l
for l in line4:
l.remove()
del l
# Display lines
line1 = ax1.plot(y, u, 'r', y, U, 'k--')
ax1.legend(iter(line1),('Numerical', 'Analytical'))
line2 = ax2.plot(y, v, 'r', y, V, 'k--')
ax2.legend(iter(line2),('Numerical', 'Analytical'))
line3 = ax3.plot(T, EU, 'k--')
ax3.set_yscale("log")#, nonposx='clip')
line4 = ax4.plot(T, EV, 'k--')
ax4.set_yscale("log")#, nonposx='clip')
plt.gcf().canvas.draw()
plt.ioff()
plt.show()
# Simulation end time
stop = timeit.default_timer()
print('total simulation time = ', stop - start) # print the time required for simulation
print('polynomial degree = ', N) # print the polynomial degree used
print('degree of freedom = ', deg_of_freedom) # print the degree of freedom
print('maximum relative error in particle velocity = ', max(EU)) # max. relative error in particle velocity
print('maximum relative error in stress = ', max(EV)) # max. relative error in stress
```
```python
```
Row metadata: hexsha 234a4c3ae7c66228ce84555a68a11484d4aeb447; size 58,009; ext ipynb (Jupyter Notebook); path notebooks/Earthquake Physics/DiscontinuousGalerkin/dg_elastic_physicalfluxes.ipynb; repo krischer/seismo_live_build (head e4e8e59d9bf1b020e13ac91c0707eb907b05b34f); licenses ["CC-BY-3.0"]; stars 3 (2020-07-11T10:01:39Z to 2020-12-16T14:26:03Z); issues null; forks 3 (2020-11-11T05:05:41Z to 2022-03-12T09:36:24Z); avg_line_length 122.900424; max_line_length 37,152; alphanum_fraction 0.808288; converted true; num_tokens 5,053; lm_name Qwen/Qwen-72B; lm_label "1. YES / 2. YES"; lm_q1/q2/q1q2 scores 0.879147 / 0.795658 / 0.6995; text_lang __label__eng_Latn (conf 0.785711); label 0.463505
**Notebook Outline:**
- [Setup with libraries](#Set-up-Cells)
- [Fundamental equations for Binomial MGWR](#Fundamental-equations-for-Binomial-MGWR)
- [Example Dataset](#Example-Dataset)
- [Helper functions](#Helper-functions)
- [Univariate example](#Univariate-example)
- [Parameter check](#Parameter-check)
- [Bandwidths check](#Bandwidths-check)
### Set up Cells
```python
import sys
sys.path.append("C:/Users/msachde1/Downloads/Research/Development/mgwr")
```
```python
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
import numpy as np
from mgwr.gwr import GWR
from spglm.family import Gaussian, Binomial, Poisson
from mgwr.gwr import MGWR
from mgwr.sel_bw import Sel_BW
import multiprocessing as mp
pool = mp.Pool()
from scipy import linalg
import numpy.linalg as la
from scipy import sparse as sp
from scipy.sparse import linalg as spla
from spreg.utils import spdot, spmultiply
from scipy import special
import libpysal as ps
import seaborn as sns
import matplotlib.pyplot as plt
from copy import deepcopy
import copy
from collections import namedtuple
import spglm
```
### Fundamental equations for Binomial MGWR
\begin{align}
l = \log_b \left( \frac{p}{1-p} \right) = \sum_{k} \beta_k x_{k,i}
\end{align}
where $x_{k,i}$ is the $k$th explanatory variable at location $i$, the $\beta_k$ are the parameters and $p$ is the probability such that $p = P(Y = 1)$.
By exponentiating the log-odds:
$p / (1-p) = b^{\beta_0 + \beta_1 x_1 + \beta_2 x_2}$
It follows from this that the probability that $Y = 1$ is:
$p = \dfrac{b^{\beta_0 + \beta_1 x_1 + \beta_2 x_2}}{b^{\beta_0 + \beta_1 x_1 + \beta_2 x_2} + 1} = \dfrac{1}{1 + b^{-(\beta_0 + \beta_1 x_1 + \beta_2 x_2)}}$
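A small numeric sketch (illustrative only, with made-up coefficients and natural base $b = e$) of this link function, matching the `1/(1+np.exp(-...))` expression used in the backfitting code further below:
```python
import numpy as np

beta = np.array([0.5, -1.2, 0.8])     # hypothetical coefficients (beta_0, beta_1, beta_2)
x = np.array([1.0, 0.3, 2.0])         # intercept term plus two covariate values

log_odds = x @ beta                   # l = log(p / (1 - p))
p = 1.0 / (1.0 + np.exp(-log_odds))   # invert the logit to recover P(Y = 1)
print(log_odds, p)
```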
### Example Dataset
#### Clearwater data - downloaded from link: https://sgsup.asu.edu/sparc/multiscale-gwr
```python
data_p = pd.read_csv("C:/Users/msachde1/Downloads/logistic_mgwr_data/landslides.csv")
coords = list(zip(data_p['X'],data_p['Y']))
y = np.array(data_p['Landslid']).reshape((-1,1))
elev = np.array(data_p['Elev']).reshape((-1,1))
slope = np.array(data_p['Slope']).reshape((-1,1))
SinAspct = np.array(data_p['SinAspct']).reshape(-1,1)
CosAspct = np.array(data_p['CosAspct']).reshape(-1,1)
X = np.hstack([elev,slope,SinAspct,CosAspct])
x = slope
X_std = (X-X.mean(axis=0))/X.std(axis=0)
x_std = (x-x.mean(axis=0))/x.std(axis=0)
y_std = (y-y.mean(axis=0))/y.std(axis=0)
```
### Helper functions
Hardcoded here for simplicity in the notebook workflow
Please note: a separate bw_func_b will not be required once the corresponding changes are made in the repository
```python
kernel='bisquare'
fixed=False
spherical=False
search_method='golden_section'
criterion='AICc'
interval=None
tol=1e-06
max_iter=200
X_glob=[]
```
```python
def gwr_func(y, X, bw,family=Gaussian(),offset=None):
return GWR(coords, y, X, bw, family,offset,kernel=kernel,
fixed=fixed, constant=False,
spherical=spherical, hat_matrix=False).fit(
lite=True, pool=pool)
def bw_func_b(coords,y, X):
selector = Sel_BW(coords,y, X,family=Binomial(),offset=None, X_glob=[],
kernel=kernel, fixed=fixed,
constant=False, spherical=spherical)
return selector
def bw_func_p(coords,y, X):
selector = Sel_BW(coords,y, X,family=Poisson(),offset=off, X_glob=[],
kernel=kernel, fixed=fixed,
constant=False, spherical=spherical)
return selector
def bw_func(coords,y,X):
selector = Sel_BW(coords,y,X,X_glob=[],
kernel=kernel, fixed=fixed,
constant=False, spherical=spherical)
return selector
def sel_func(bw_func, bw_min=None, bw_max=None):
return bw_func.search(
search_method=search_method, criterion=criterion,
bw_min=bw_min, bw_max=bw_max, interval=interval, tol=tol,
max_iter=max_iter, pool=pool, verbose=False)
```
### Univariate example
#### GWR model with independent variable, x = slope
```python
bw_gwbr=Sel_BW(coords,y_std,x_std,family=Binomial(),constant=False).search()
```
```python
gwbr_model=GWR(coords,y_std,x_std,bw=bw_gwbr,family=Binomial(),constant=False).fit()
```
```python
bw_gwbr
```
198.0
#### MGWR Binomial loop with one independent variable, x = slope
##### Edited multi_bw function - original function in https://github.com/pysal/mgwr/blob/master/mgwr/search.py#L167
```python
def multi_bw(init,coords,y, X, n, k, family=Gaussian(),offset=None, tol=1e-06, max_iter=200, multi_bw_min=[None], multi_bw_max=[None],rss_score=False,bws_same_times=3,
verbose=False):
if multi_bw_min==[None]:
multi_bw_min = multi_bw_min*X.shape[1]
if multi_bw_max==[None]:
multi_bw_max = multi_bw_max*X.shape[1]
if isinstance(family,spglm.family.Poisson):
bw = sel_func(bw_func_p(coords,y,X))
optim_model=gwr_func(y,X,bw,family=Poisson(),offset=offset)
err = optim_model.resid_response.reshape((-1, 1))
param = optim_model.params
#This change for the Poisson model follows from equation (1) above
XB = offset*np.exp(np.multiply(param, X))
elif isinstance(family,spglm.family.Binomial):
bw = sel_func(bw_func_b(coords,y,X))
optim_model=gwr_func(y,X,bw,family=Binomial())
err = optim_model.resid_response.reshape((-1, 1))
param = optim_model.params
#This change for the Binomial model follows from equation above
XB = 1/(1+np.exp(-1*np.multiply(optim_model.params,X)))
#print(XB)
else:
bw=sel_func(bw_func(coords,y,X))
optim_model=gwr_func(y,X,bw)
err = optim_model.resid_response.reshape((-1, 1))
param = optim_model.params
XB = np.multiply(param, X)
bw_gwr = bw
XB=XB
if rss_score:
rss = np.sum((err)**2)
iters = 0
scores = []
delta = 1e6
BWs = []
bw_stable_counter = np.ones(k)
bws = np.empty(k)
try:
from tqdm.auto import tqdm #if they have it, let users have a progress bar
except ImportError:
def tqdm(x, desc=''): #otherwise, just passthrough the range
return x
for iters in tqdm(range(1, max_iter + 1), desc='Backfitting'):
new_XB = np.zeros_like(X)
params = np.zeros_like(X)
for j in range(k):
temp_y = XB[:, j].reshape((-1, 1))
temp_y = temp_y + err
temp_X = X[:, j].reshape((-1, 1))
#The step below will not be necessary once the bw_func is changed in the repo to accept family and offset as attributes
if isinstance(family,spglm.family.Poisson):
bw_class = bw_func_p(coords,temp_y, temp_X)
elif isinstance(family,spglm.family.Binomial):
bw_class = bw_func_b(coords,temp_y, temp_X)
else:
bw_class = bw_func(coords,temp_y, temp_X)
if np.all(bw_stable_counter == bws_same_times):
#If in backfitting, all bws not changing in bws_same_times (default 3) iterations
bw = bws[j]
else:
bw = sel_func(bw_class, multi_bw_min[j], multi_bw_max[j])
if bw == bws[j]:
bw_stable_counter[j] += 1
else:
bw_stable_counter = np.ones(k)
#Changed gwr_func to accept family and offset as attributes
optim_model = gwr_func(temp_y, temp_X, bw,family,offset)
err = optim_model.resid_response.reshape((-1, 1))
param = optim_model.params.reshape((-1, ))
new_XB[:, j] = optim_model.predy.reshape(-1)
params[:, j] = param
bws[j] = bw
num = np.sum((new_XB - XB)**2) / n
den = np.sum(np.sum(new_XB, axis=1)**2)
score = (num / den)**0.5
XB = new_XB
if rss_score:
predy = np.sum(np.multiply(params, X), axis=1).reshape((-1, 1))
new_rss = np.sum((y - predy)**2)
score = np.abs((new_rss - rss) / new_rss)
rss = new_rss
scores.append(deepcopy(score))
delta = score
BWs.append(deepcopy(bws))
if verbose:
print("Current iteration:", iters, ",SOC:", np.round(score, 7))
print("Bandwidths:", ', '.join([str(bw) for bw in bws]))
if delta < tol:
break
print("iters = "+str(iters))
opt_bws = BWs[-1]
print("opt_bws = "+str(opt_bws))
return (opt_bws, np.array(BWs), np.array(scores), params, err, bw_gwr)
```
##### Running the function with family = Binomial()
```python
bw_mgwbr = multi_bw(init=None,coords=coords,y=y_std, X=x_std, n=239, k=x.shape[1], family=Binomial())
```
iters = 1
opt_bws = [198.]
##### Running without family and offset attributes runs the normal MGWR loop
```python
bw_mgwr = multi_bw(init=None, coords=coords,y=y_std, X=x_std, n=262, k=x.shape[1])
```
iters = 1
opt_bws = [125.]
### Parameter check
#### Difference in parameters from the GWR - Binomial model and MGWR Binomial model
```python
(bw_mgwbr[3]==gwbr_model.params).all()
```
True
The parameters are identical
### Bandwidths check
```python
bw_gwbr
```
235.0
```python
bw_mgwbr[0]
```
array([235.])
The bandwidth from both models is the same
Row metadata: hexsha 998d672a62ef7522ab9f51475e459f43a6d5cb80; size 15,527; ext ipynb (Jupyter Notebook); max_stars path Notebooks/.ipynb_checkpoints/Binomial_MGWR_univariate_check-checkpoint.ipynb, repo TaylorOshan/MGWR_workshop_book (head 4c0be5cb08dfc669c8da0d1c074f3c5052a81c0a), license MIT-0, stars 6 (2021-01-21T08:30:01Z to 2021-07-24T05:40:43Z); max_issues/max_forks path Notebooks/Binomial_MGWR_univariate_check.ipynb, repo TaylorOshan/MGWR_book (head c59db902b34d625af4d0e1b90fbc95018a3de579), license MIT-0, issues null, forks 4 (2020-07-20T19:43:36Z to 2021-06-07T23:41:08Z); avg_line_length 28.179673; max_line_length 176; alphanum_fraction 0.503188; converted true; num_tokens 2,642; lm_name Qwen/Qwen-72B; lm_label "1. YES / 2. YES"; lm_q1/q2/q1q2 scores 0.857768 / 0.771843 / 0.662063; text_lang __label__eng_Latn (conf 0.59159); label 0.376524
# Subspace
**Subspace.** The set of vectors $V$ is a linear subspace of $\mathbb{R}^n \iff$ null vector $\in V$ and $V$ is closed under scalar multiplication and addition.
⚠️ *Union of subspaces is not a subspace*
> The reason why this can happen is that all vector spaces, and hence subspaces too, must be closed under addition (and scalar multiplication). The union of two subspaces takes all the elements already in those spaces, and nothing more.
>
> In the union of subspaces $W_1$ and $W_2$ there are new combinations of vectors we can add together that we couldn't before, like $v_1 + v_2$ where $v_1 \in W_1$ and $v_2 \in W_2$.
>
> For example, take $W_1$ to be the $x$-axis and $W_2$ the $y$-axis, both subspaces of $\mathbb{R}^2$.
Their union includes both $(3,0)$ and $(0,5)$, whose sum, $(3,5)$, is not in the union. Hence, the union is not a vector space.
>
> http://math.stackexchange.com/a/71875/402625
# Linear Dependence and Independence
**Linearly Dependent.** A set of vectors is linearly dependent if there exists a vector in the set that *can be written as a linear combination of the others*.
**Linearly Independent.** A set of vectors is linearly independent $\iff$ the only linear combination that gives the null vector is the linear combination with all coefficients equal to $0$.
$$\Sigma_{i=0}^k \lambda_i v_i = 0 \iff \lambda_1 = \lambda_2 = \dots = \lambda_i = 0$$
_**Proof.**_ ($\Rightarrow$, by contraposition) Suppose there exists a linear combination of that set ($D$) that evaluates to the null vector with not all coefficients equal to $0$; then there exists a linear combination $\Sigma_{i=1}^{n}\lambda_i v_i=0$ with $\lambda_i\in\mathbb{R}, v_i\in D$ and, without loss of generality, $\lambda_1\neq0$. So $v_1=\frac{-1}{\lambda_1}\left( \Sigma_{i=2}^{n}\lambda_i v_i \right)$ and the set is linearly dependent.
# Span
**Span**. The *span of a given subset ($D$) of a vector space ($V$)* is *the vector space ($\text{span}(D)$) containing all possible linear combinations of the vectors in the subset ($D$)*.
##### _Example_
\begin{equation}
A_1=
\begin{bmatrix}
1 \\
0
\end{bmatrix},
A_2=
\begin{bmatrix}
0 \\
1
\end{bmatrix}
\end{equation}
$\text{span}(\{A_1,A_2\})=\langle\{A_1,A_2\}\rangle=\text{vct}(\{A_1,A_2\})= \{\alpha_1A_1 + \alpha_2A_2\mid\alpha_1,\alpha_2\in\mathbb{R}\}$
Two 2-vectors span $\mathbb{R}^2 \iff$ they are linearly independent.
$\text{span}(\{A_1,A_2\})=\mathbb{R}^2$
##### Thoughts
*Q*: If the set of 2 vectors is linearly dependent, can each one be written as a linear combination of the other? What about 3 vectors?
*A*: Trivial for 2 vectors (as long as neither is the null vector). If the set of vectors $A_1$ and $A_2$ is linearly dependent, there exists a vector in that set that can be written as a linear combination of the other. Say $A_1$ is that vector, then $A_1=\alpha A_2$ with $\alpha\in\mathbb{F}$, and provided $\alpha\neq0$, $A_2=\frac{1}{\alpha}A_1$. Analogue if $A_2$ is that vector.
For 3 vectors, this isn't necessarily possible (e.g.: $\{(1,0,0),(2,0,0),(0,1,0)\}$). The set of 3 vectors is linearly dependent if (at least) one of them can be written as a linear combination of the others.
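A quick numerical check of the 3-vector example above (added for illustration): the set is linearly dependent exactly when the rank of the matrix whose rows are the vectors is smaller than the number of vectors.
```python
import numpy as np

vectors = np.array([[1, 0, 0],
                    [2, 0, 0],
                    [0, 1, 0]])

rank = np.linalg.matrix_rank(vectors)
print(rank)                        # 2 < 3, so the set is linearly dependent
print(rank == vectors.shape[0])    # False: the vectors are not linearly independent
```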
# Basis & Dimension
**Basis.** A set of *linearly independent* vectors that *spans a vector space* is a *basis* of that space.
⚠️ *Lemma of Steinitz*
Say $(\mathbb{R},V,+)$ a vector space. Then:
1. if a subset $\subset V$ of $m$ elements exists that spans $V$, then every subset of $V$ with more than $m$ elements is linearly dependent;
2. if a subset $\subset V$ of $n$ elements exists that is linearly independent, then every subset of $V$ with less than $n$ elements cannot span $V$.
**Dimension.** The *dimension of a vector space* equals the *number of elements in a (finite) basis* for that vector space. (_Notation:_ $\text{dim}_{\mathbb{R}}V$)
⚠️
Say $(\mathbb{R},V,+)$ a vector space of dimension $n$. Then:
1. every linear independent subset $\subset V$ can be extended to a basis of $V$;
2. every finite subset $\subset V$ that spans $V$ can be reduced (by removing vectors) to a basis of $V$.
```python
```
Row metadata: hexsha a3398163ff03b7c95028982f28b01a7b3d173e4a; size 6,023; ext ipynb (Jupyter Notebook); path H3_vector_spaces.ipynb; repo jppgks/linear-algebra-notebooks (head 08ce0ad8e7a8fb65ddd872d3084d1f8946b65fe0); license MIT; stars 1 (2016-11-30T09:55:17Z); issues null; forks null; avg_line_length 38.363057; max_line_length 420; alphanum_fraction 0.58343; converted true; num_tokens 1,285; lm_name Qwen/Qwen-72B; lm_label "1. YES / 2. YES"; lm_q1/q2/q1q2 scores 0.923039 / 0.874077 / 0.806808; text_lang __label__eng_Latn (conf 0.995481); label 0.712817
# Haverly's Pooling Problem
## Objective and Prerequisites
One of the new features of Gurobi 9.0 is the addition of a bilinear solver, which enables finding the optimal solution of non-convex quadratic programming problems (i.e. QPs, QCQPs, MIQPs, and MIQCQPs). This notebook will show you how to use this feature by tackling the Haverly's Pooling Problem.
To fully understand the content of this notebook, the reader should be familiar with the following:
- Python.
- Gurobi Python interface.
- Knowledge of building mathematical optimization models.
**Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*.
---
## Motivation
The Pooling Problem is a challenging problem in the petrochemical refining, wastewater treatment and mining industries. This problem can be regarded as a generalization of the minimum-cost flow problem and the blending problem. It is indeed important because of the significant savings it can generate, so it comes as no surprise that it has been studied extensively since Haverly pointed out the non-linear structure of this problem in 1978 [3].
---
## Problem Description
The Minimum-Cost Flow Problem (MCFP) seeks to find the cheapest way of sending a certain amount of flow from a set of source nodes to a set of target nodes, possibily via transshipment nodes, in a directed capacitated network. The Blending Problem is a type of MCFP with only source and target nodes, where raw materials with different attribute qualities are blended together to create end products in such a way that their attribute qualities are within tolerances.
The Pooling Problem combines features of both problems, as flow streams from different sources are mixed at intermediate pools and blended again at the target nodes. The non-linearity is in fact the direct result of considering pools, as the quality of a given attribute at a pool —defined as the weighted average of the qualities of the incoming streams— is an unknown quantity and thus needs to be captured by a decision variable. We refer to this problem as the Standard Pooling Problem when the network can be represented by a tripartite graph, i.e. three disjoint sets of nodes such that no nodes within the same set are adjacent. In a nutshell, it can be stated as follows: Given a list of source nodes with raw materials containing known attribute qualities, what is the cheapest way of mixing these materials at intermediate pools so as to meet the demand and tolerances at multiple target nodes? (Gupte et al., 2017) [2].
---
## Define Data
In this example, there are three sources of oil, each of which contains a different percentage of sulfur. Given the pooling layout, these materials are to be blended in such a way to create two separate oil products. These final products have requirements on the percentage of sulfur they can contain, as well as how much of the product can be produced.
Now, we define the data used in Haverly's 1978 paper:
```python
import numpy as np
import pandas as pd
from itertools import product
from gurobipy import *
attrs = {'sulfur'}
sources, cost, content = multidict({
"A": [6, {'sulfur': 3}],
"B": [16, {'sulfur': 1}],
"C": [10, {'sulfur': 2}]
})
targets, price, demand, max_tol = multidict({
"X": [9, 100, {'sulfur': 2.5}],
"Y": [15, 200, {'sulfur': 1.5}]
})
pools = {"P"}
s2p = {("A", "P"),
("B", "P")}
# The function `product` deploys the Cartesian product of elements in sets A and B
p2t = set(product(pools, targets))
s2t = {("C", "X"),
("C", "Y")}
```
---
## Mathematical Formulation
Mathematical programming is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science.
A mathematical optimization model has five components, namely:
- Sets and indices.
- Parameters.
- Decision variables.
- Objective function(s).
- Constraints.
A quadratic constraint that involves only products of disjoint pairs of variables is often called a bilinear constraint, and a model that contains bilinear constraints is often called a Bilinear Program. Bilinear constraints are a special case of non-convex quadratic constraints. This type of problem is typically solved using spatial Branch and Bound (sB&B). This algorithm explores the entire search space, so it provides a globally valid lower bound on the optimal objective value and —given enough time— it will find a globally optimal solution (subject to tolerances). The interested reader is referred to [references](#references) [1], [4] and [5].
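As an illustrative aside (following the McCormick-envelope idea surveyed in reference [1]; the relaxation is handled internally by the solver, not written by the user), sB&B bounds each bilinear term $w = xy$, with $x \in [x^L, x^U]$ and $y \in [y^L, y^U]$, by its convex envelope and then branches on the variable bounds to tighten it:
\begin{align}
w \geq x^L y + x y^L - x^L y^L, \qquad w \geq x^U y + x y^U - x^U y^U, \\
w \leq x^U y + x y^L - x^U y^L, \qquad w \leq x^L y + x y^U - x^L y^U.
\end{align}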
We now present a Bilinear Program for the Standard Pooling Problem:
#### Sets and Indices
$G=(V,E)$: Directed graph.
$i,j \in V$: Set of nodes.
$(i,j) \in E \subset V \times V$: Set of edges.
$N(i)^+ = \{j \in V \mid (i,j) \in E \}$: Set of successor nodes receiving outflow from node $i$.
$N(j)^- = \{i \in V \mid (i,j) \in E \}$: Set of predecessor nodes sending inflow to node $i$.
$k \in \text{Attrs}$: Set of attributes.
$s \in \text{Sources} \subset V$: Set of source nodes, i.e. $N(s)^-= \emptyset$.
$t \in \text{Targets} \subset V$: Set of target nodes, i.e. $N(t)^+= \emptyset$.
$p \in \text{Pools} = V \setminus (\text{Sources} \cup \text{Targets})$: Set of pools.
#### Parameters
$\text{Cost}_s \in \mathbb{R}^+$: Cost of acquiring one unit of raw material at source node $s$.
$\text{Content}_{s,k} \in \mathbb{R}^+$: Content of attribute $k$ in raw material at source node $s$.
$\text{Price}_t \in \mathbb{R}^+$: Price for selling one unit of final blend at target node $t$.
$\text{Demand}_t \in \mathbb{R}^+$: Minimum number of units required of final blend at target node $t$.
$\text{Max_tol}_{t,k} \in \mathbb{R}^+$: Maximum tolerance for attribute $k$ in final blend at target node $t$.
#### Decision Variables
$\text{flow}_{i,j} \in [0, \text{UB}_{i,j}]$: Flow from node $i$ to node $j$.
$\text{quality}_{p,k} \in \mathbb{R}^+$: Concentration of attribute $k$ at pool $p$.
#### Objective Function
- **Profit**: Maximize total profits.
\begin{equation}
\text{Max} \quad Z = \sum_{t \in \text{Targets}}{\sum_{i \in N(t)^-}{\text{Price}_t \cdot \text{flow}_{i,t}}} - \sum_{s \in \text{Sources}}{\sum_{j \in N(s)^+}{\text{Cost}_s \cdot \text{flow}_{s,j}}}
\tag{0}
\end{equation}
#### Constraints
- **Flow conservation**: Total inflow of pool $p$ must be equal to its toal outflow (nothing is stored in them).
\begin{equation}
\sum_{t \in N(p)^+}{\text{flow}_{p,t}} - \sum_{s \in N(p)^-}{\text{flow}_{s,p}} = 0 \quad \forall p \in \text{Pools}
\tag{1}
\end{equation}
- **Target demand**: Total inflow of target $t$ cannot exceed maximum demand.
\begin{equation}
\sum_{i \in N(t)^-}{\text{flow}_{i,t}} \leq \text{Demand}_t \quad \forall t \in \text{Targets}
\tag{2}
\end{equation}
- **Pool concentration**: Concentration of attribute $k$ at pool $p$ is expressed as the weighted average (linear blending) of the concentrations associated to the incoming flows (notice the bilinear terms on the right-hand side).
\begin{equation}
\sum_{s \in N(p)^-}{\text{Content}_{s,k} \cdot \text{flow}_{s,p}} = \text{quality}_{p,k} \cdot \sum_{t \in N(p)^+}{\text{flow}_{p,t}} \quad \forall (p,k) \in \text{Pools} \times \text{Attrs}
\tag{3}
\end{equation}
- **Target tolerances**: Concentration of attribute $k$ at target $t$ is also the result of linear blending, and must be within tolerances (notice the bilinear terms on the second expression of the left-hand side).
\begin{equation}
\sum_{s \in N(t)^- \cap \text{Sources}}{\text{Content}_{s,k} \cdot \text{flow}_{s,t}}+ \sum_{p \in N(t)^- \cap \text{Pools}}{\text{quality}_{p,k} \cdot \text{flow}_{p,t}} \leq \text{Max_tol}_{t,k} \cdot \sum_{i \in N(t)^-}{\text{flow}_{i,t}} \quad \forall (t,k) \in \text{Targets} \times \text{Attrs}
\tag{4}
\end{equation}
---
## Python Implementation
Solving Bilinear Programs with Gurobi is as easy as configuring the global parameter `nonConvex`. When setting this parameter to a value of 2, non-convex quadratic problems are solved by means of translating them into bilinear form and applying spatial Branch and Bound (sB&B).
```python
haverly = Model("Pooling")
# Set global parameters
haverly.params.nonConvex = 2
# Declare decision variables
# flow
ik = haverly.addVars(s2t, name="Source2Target")
ij = haverly.addVars(s2p, name="Source2Pool")
jk = haverly.addVars(p2t, name="Pool2Target")
# quality
prop = haverly.addVars(pools, attrs, name="Proportion")
# Deploy constraint sets
# 1. Flow conservation
haverly.addConstrs((ij.sum('*',j) == jk.sum(j,'*') for j in pools),
name="Flow_conservation")
# 2. Target demand
haverly.addConstrs((ik.sum('*',k) + jk.sum('*',k) <= demand[k] for k in targets),
name="Target_demand")
# 3. Pool concentration
haverly.addConstrs((quicksum(content[i][attr]*ij[i,j]
for i in sources if (i,j) in s2p)
== prop[j,attr]*jk.sum(j,'*') for j in pools for attr in attrs),
name="Pool_concentration")
# 4. Target (max) tolerances
haverly.addConstrs((quicksum(content[i][attr]*ik[i,k]
for i in sources if (i,k) in s2t)
+ quicksum(prop[j,attr]*jk[j,k]
for j in pools if (j,k) in p2t)
<= max_tol[k][attr]*(ik.sum('*',k) + jk.sum('*',k))
for k in targets for attr in max_tol[k].keys()),
name="Target_max_tolerances")
# Deploy Objective Function
# 0. Total profit
obj = quicksum(price[k]*(ik.sum('*',k) + jk.sum('*',k))
for k in targets) \
- quicksum(cost[i]*(ij.sum(i,'*') + ik.sum(i,'*'))
for i in sources)
haverly.setObjective(obj, GRB.MAXIMIZE)
# Find the optimal solution
haverly.optimize()
```
Using license file /Users/orojuan/gurobi.lic
Set parameter TokenServer to value Juans-MacBook-Pro-3.local
Changed value of parameter nonConvex to 2
Prev: -1 Min: -1 Max: 2 Default: -1
Gurobi Optimizer version 9.0.0 build v9.0.0rc2 (mac64)
Optimize a model with 3 rows, 7 columns and 8 nonzeros
Model fingerprint: 0x777f4d01
Model has 3 quadratic constraints
Coefficient statistics:
Matrix range [1e+00, 1e+00]
QMatrix range [1e+00, 1e+00]
QLMatrix range [5e-01, 3e+00]
Objective range [1e+00, 2e+01]
Bounds range [0e+00, 0e+00]
RHS range [1e+02, 2e+02]
Continuous model is non-convex -- solving as a MIP.
Found heuristic solution: objective -0.0000000
Presolve time: 0.00s
Presolved: 16 rows, 10 columns, 34 nonzeros
Presolved model has 4 bilinear constraint(s)
Variable types: 10 continuous, 0 integer (0 binary)
Root relaxation: objective 2.100000e+03, 4 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 2100.00000 0 3 -0.00000 2100.00000 - - 0s
0 0 2100.00000 0 3 -0.00000 2100.00000 - - 0s
0 0 1937.83784 0 3 -0.00000 1937.83784 - - 0s
0 0 1921.35002 0 4 -0.00000 1921.35002 - - 0s
0 0 1921.23235 0 4 -0.00000 1921.23235 - - 0s
0 0 1921.17768 0 4 -0.00000 1921.17768 - - 0s
0 2 1921.17768 0 4 -0.00000 1921.17768 - - 0s
* 6 6 2 321.5204987 1882.20468 485% 3.0 0s
* 12 4 3 346.2144981 1846.72927 433% 2.4 0s
* 14 4 4 348.0276050 1846.72927 431% 3.1 0s
* 22 10 5 355.0833492 1778.85855 401% 3.0 0s
* 23 5 5 363.4227077 1778.85855 389% 2.9 0s
* 24 5 6 363.6955948 1778.85855 389% 2.9 0s
* 25 5 6 370.7666965 1778.85855 380% 2.8 0s
* 26 5 7 372.0510296 1778.85855 378% 2.8 0s
* 32 2 7 377.9877375 1746.46493 362% 2.5 0s
* 34 2 8 380.1494347 1746.46493 359% 2.4 0s
* 36 4 8 383.8453897 1746.46493 355% 2.5 0s
* 40 5 9 387.9905769 1715.10036 342% 2.5 0s
* 41 5 9 390.8479187 1715.10036 339% 2.4 0s
* 45 5 10 395.5742065 1684.76584 326% 2.4 0s
* 47 5 10 397.5729772 1684.76584 324% 2.3 0s
* 52 7 12 399.9188266 1684.76584 321% 2.3 0s
* 60 2 16 399.9999622 1655.46244 314% 2.2 0s
* 70 2 18 400.0000031 1627.19132 307% 2.1 0s
Cutting planes:
RLT: 3
Explored 105 nodes (257 simplex iterations) in 0.06 seconds
Thread count was 8 (of 8 available processors)
Solution count 10: 400 400 399.919 ... 377.988
Optimal solution found (tolerance 1.00e-04)
Best objective 4.000000031377e+02, best bound 4.000000031377e+02, gap 0.0000%
### Analysis
Let's see the optimal flows found:
```python
def _print_table(rows, columns, variables):
table = pd.DataFrame(columns=columns, index=rows, data=0.0)
for row, col in variables.keys():
value = variables[row, col].x
if abs(value) > 1e-6:
table.loc[row, col] = np.round(value, 1)
print(table)
def print_solution():
print("Flows from Sources to Targets")
_print_table(sources.copy(), targets.copy(), ik)
print("Flows from Pools to Targets")
_print_table(pools.copy(), targets.copy(), jk)
print("Flows from Sources to Pools")
_print_table(sources.copy(), pools.copy(), ij)
```
```python
print_solution()
```
Flows from Sources to Targets
X Y
A 0.0 0.0
B 0.0 0.0
C 0.0 100.0
Flows from Pools to Targets
X Y
P 0.0 100.0
Flows from Sources to Pools
P
A 0.0
B 100.0
C 0.0
---
## Changing the Data
We now consider how the optimal solution changes by modifying some parameters in the model.
### Increasing Demand
The maximum demand of product $\text{X}$ increased from 100 to 600:
```python
dem_x = haverly.getConstrByName("Target_demand[X]")
dem_x.setAttr("rhs", 600)
haverly.optimize()
```
Gurobi Optimizer version 9.0.0 build v9.0.0rc2 (mac64)
Optimize a model with 3 rows, 7 columns and 8 nonzeros
Model fingerprint: 0xf2ff1488
Model has 3 quadratic constraints
Variable types: 7 continuous, 0 integer (0 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
QMatrix range [1e+00, 1e+00]
QLMatrix range [5e-01, 3e+00]
Objective range [1e+00, 2e+01]
Bounds range [0e+00, 0e+00]
RHS range [2e+02, 6e+02]
MIP start from previous solve produced solution with objective 324 (0.01s)
MIP start from previous solve produced solution with objective 526.694 (0.02s)
MIP start from previous solve produced solution with objective 599.488 (0.02s)
MIP start from previous solve produced solution with objective 599.976 (0.02s)
MIP start from previous solve produced solution with objective 600 (0.02s)
MIP start from previous solve produced solution with objective 600 (0.03s)
Loaded MIP start from previous solve with objective 600
Presolve time: 0.00s
Presolved: 16 rows, 10 columns, 34 nonzeros
Presolved model has 4 bilinear constraint(s)
Variable types: 10 continuous, 0 integer (0 binary)
Root relaxation: objective 3.600000e+03, 4 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 3600.00000 0 3 600.00000 3600.00000 500% - 0s
0 0 3600.00000 0 3 600.00000 3600.00000 500% - 0s
0 0 3381.81818 0 4 600.00000 3381.81818 464% - 0s
0 0 3230.76924 0 3 600.00000 3230.76924 438% - 0s
0 0 3217.40285 0 4 600.00000 3217.40285 436% - 0s
0 0 3207.95528 0 4 600.00000 3207.95528 435% - 0s
0 2 3207.95528 0 4 600.00000 3207.95528 435% - 0s
* 126 5 54 600.0000064 600.70747 0.12% 3.5 0s
Cutting planes:
RLT: 3
Explored 141 nodes (472 simplex iterations) in 0.06 seconds
Thread count was 8 (of 8 available processors)
Solution count 6: 600 600 599.976 ... 324
Optimal solution found (tolerance 1.00e-04)
Best objective 6.000000063584e+02, best bound 6.000000063584e+02, gap 0.0000%
```python
print_solution()
```
Flows from Sources to Targets
X Y
A 0.0 0.0
B 0.0 0.0
C 300.0 0.0
Flows from Pools to Targets
X Y
P 300.0 0.0
Flows from Sources to Pools
P
A 300.0
B 0.0
C 0.0
### Decreasing Cost
The price of extracting crude oil from source $\text{B}$ decreased from $\$16$ to $\$13$:
```python
dem_x.setAttr("rhs", 100) # reinstate the model to its initial state
ij["B", "P"].obj = -13 # the coefficient is negative since we're modifying a cost
haverly.optimize()
```
Gurobi Optimizer version 9.0.0 build v9.0.0rc2 (mac64)
Optimize a model with 3 rows, 7 columns and 8 nonzeros
Model fingerprint: 0x94aace23
Model has 3 quadratic constraints
Variable types: 7 continuous, 0 integer (0 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
QMatrix range [1e+00, 1e+00]
QLMatrix range [5e-01, 3e+00]
Objective range [1e+00, 2e+01]
Bounds range [0e+00, 0e+00]
RHS range [1e+02, 2e+02]
MIP start from previous solve produced solution with objective -14.0625 (0.01s)
MIP start from previous solve produced solution with objective -0 (0.02s)
MIP start from previous solve produced solution with objective 1.88255 (0.03s)
MIP start from previous solve produced solution with objective 721.662 (0.03s)
MIP start from previous solve produced solution with objective 726.254 (0.03s)
Loaded MIP start from previous solve with objective 726.254
Presolve time: 0.00s
Presolved: 16 rows, 10 columns, 34 nonzeros
Presolved model has 4 bilinear constraint(s)
Variable types: 10 continuous, 0 integer (0 binary)
Root relaxation: objective 2.100000e+03, 4 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 2100.00000 0 3 726.25404 2100.00000 189% - 0s
0 0 2100.00000 0 3 726.25404 2100.00000 189% - 0s
0 0 1935.80379 0 3 726.25404 1935.80379 167% - 0s
0 0 1919.29441 0 4 726.25404 1919.29441 164% - 0s
0 0 1918.94856 0 4 726.25404 1918.94856 164% - 0s
0 0 1918.78695 0 4 726.25404 1918.78695 164% - 0s
0 2 1918.78695 0 4 726.25404 1918.78695 164% - 0s
* 4 6 2 740.2106233 1900.53786 157% 2.8 0s
* 10 4 3 750.0000000 1844.51703 146% 2.6 0s
Cutting planes:
RLT: 3
Explored 49 nodes (143 simplex iterations) in 0.08 seconds
Thread count was 8 (of 8 available processors)
Solution count 7: 750 740.211 726.254 ... -14.0625
Optimal solution found (tolerance 1.00e-04)
Best objective 7.500000000000e+02, best bound 7.500000000000e+02, gap 0.0000%
```python
print_solution()
```
Flows from Sources to Targets
X Y
A 0.0 0.0
B 0.0 0.0
C 0.0 0.0
Flows from Pools to Targets
X Y
P 0.0 200.0
Flows from Sources to Pools
P
A 50.0
B 150.0
C 0.0
---
## Conclusion
This notebook showed how easy it is to solve Bilinear Programs using Gurobi.
Haverly's pooling problem is indeed a simple instance of the Standard Pooling Problem, as it considers only one attribute. It was modeled using what the literature calls the P-Formulation, where the number of bilinear terms is proportional to the number of attributes. For instances with more specifications, it may be worth considering alternative formulations —such as the Q-Formulation— to improve the performance of the optimization process. The Jupyter Notebook `Standard Pooling Problem` presents and compares both formulations with a more challenging scenario.
---
## References
1. Dombrowski, J. (2015, June 07). McCormick envelopes. Retrieved from https://optimization.mccormick.northwestern.edu/index.php/McCormick_envelopes
2. Gupte, A., Ahmed, S., Dey, S. S., & Cheon, M. S. (2017). Relaxations and discretizations for the pooling problem. Journal of Global Optimization, 67(3), 631-669.
3. Haverly, C. A. (1978). Studies of the behavior of recursion for the pooling problem. Acm sigmap bulletin, (25), 19-28.
4. Liberti, L. (2008). Introduction to global optimization. Ecole Polytechnique.
5. Zhuang E. (2015, June 06). Spatial branch and bound method. Retrieved from
https://optimization.mccormick.northwestern.edu/index.php/Spatial_branch_and_bound_method
Copyright © 2019 Gurobi Optimization, LLC
Row metadata: hexsha 63ec521c2c160a2316e904a0ac8ff705b095bf9d; size 29,255; ext ipynb (Jupyter Notebook); path haverly/haverly.ipynb; repo Zhong-HY/modeling-examples (head 66355938dde764a320f0a6a70a9e5a69c696a18f); license Apache-2.0; stars, issues and forks null; avg_line_length 43.664179; max_line_length 939; alphanum_fraction 0.553375; converted true; num_tokens 7,146; lm_name Qwen/Qwen-72B; lm_label "1. YES / 2. YES"; lm_q1/q2/q1q2 scores 0.927363 / 0.843895 / 0.782597; text_lang __label__eng_Latn (conf 0.935615); label 0.656568
```python
import numpy as np
from scipy import optimize
```
Starting with an initial flow over a horizontal surface where $M_1 = 2.2$, an oblique shock forms at an angle of $\theta = 35^{\circ}$, which deflects the flow by the angle $\delta$. The flow now has Mach number $M_2$.
To satisfy the boundary condition of the surface, the oblique shock reflects at an angle $\beta$ from the surface, which deflects the flow back to the horizontal direction with a flow at a Mach number of $M_3$.
What is the angle of reflection ($\beta$)? How do the strengths of each shock compare?
First, let's determine the deflection angle of the first oblique shock using
\begin{equation}
\tan \delta = 2 (\cot \theta) \left[ \frac{M_1^2 \sin^2 \theta - 1}{M_1^2 (\gamma + \cos 2\theta) + 2}\right]
\end{equation}
```python
# known values
gamma = 1.4
M1 = 2.2
theta = np.radians(35.0) # convert from degrees to radians
delta = np.arctan(
2 * (1 / np.tan(theta)) * (M1**2 * np.sin(theta)**2 - 1) / (M1**2 * (gamma + np.cos(2*theta)) + 2)
)
print(f'Deflection angle: {np.degrees(delta):.3}')
```
Deflection angle: 9.21
Now we can determine the conditions after the oblique shock, using
\begin{equation}
M_{1n} = M_1 \sin \theta
\end{equation}
and
\begin{equation}
M_{2n}^2 = \frac{M_{1n}^2 + \frac{2}{\gamma-1}}{\frac{2\gamma}{\gamma-1} M_{1n}^2 - 1}
\end{equation}
```python
M1n = M1 * np.sin(theta)
M2n = np.sqrt((M1n**2 + (2/(gamma - 1))) / (M1n**2 * (2*gamma)/(gamma - 1) - 1))
M2 = M2n / np.sin(theta - delta)
print(f'M_2 = {M2:.3}')
```
M_2 = 1.85
Now, the reflected oblique shock must turn the flow back to the horizontal direction to satisfy flow tangency with the wall, so $\delta_2 = \delta$. We can thus determine the angle of the second shock, $\theta_2$:
```python
def oblique(theta, M1, delta, gamma):
return (
np.tan(delta) -
2 * (1 / np.tan(theta)) * (M1**2 * np.sin(theta)**2 - 1) / (M1**2 * (gamma + np.cos(2*theta)) + 2)
)
root = optimize.root_scalar(oblique, # function we are solving
args=(M2, delta, gamma),
# give range for weak shocks
bracket=[np.radians(0.0001), np.radians(45.0)]
)
theta_2 = root.root
print(f'theta_2 = {np.degrees(theta_2):.3} degrees')
```
theta_2 = 41.7 degrees
**But** this shock angle is defined with respect to the flow direction in the middle region, so $\beta = \theta_2 - \delta$:
```python
beta = theta_2 - delta
print(f'beta = {np.degrees(beta):.3} degrees')
```
beta = 32.5 degrees
Thus, the angle of reflection $\beta$ is **smaller** than the angle of incidence $\theta = 35^{\circ}$. Regarding the shock strength, we can compare the normal Mach numbers of the flow prior to each shock:
```python
print(f'M1n = {M1n:.4}')
M3n = M2 * np.sin(theta_2)
print(f'M2n = {M3n:.4}')
```
M1n = 1.262
M2n = 1.233
We can see that the second shock occurs at a smaller Mach number, so it is **weaker**.
Row metadata: hexsha f43ce2b9dded71a17c27009fd4a2c533689b90f3; size 5,742; ext ipynb (Jupyter Notebook); path oblique_shock.ipynb; repo kyleniemeyer/gasdynamics (head 50dce7a030daa2757aa55cf6c9a5ae66d079ab72); license MIT; stars 4 (2019-10-17T18:21:23Z to 2021-08-17T19:30:07Z); issues null; forks null; avg_line_length 26.219178; max_line_length 230; alphanum_fraction 0.512887; converted true; num_tokens 977; lm_name Qwen/Qwen-72B; lm_label "1. YES / 2. YES"; lm_q1/q2/q1q2 scores 0.92079 / 0.875787 / 0.806416; text_lang __label__eng_Latn (conf 0.951164); label 0.711906
# Week 10 of Introduction to Biological System Design
## Compiling Chemical Reaction Network Models for Biological Systems
### Ayush Pandey
Pre-requisite: To get the best out of this notebook, make sure that you have a basic understanding of chemical reaction networks and ordinary differential equations (ODEs). Further, we also use Hill functions to build models of biological systems. Refer to the [E164 class material](pages.hmc.edu/pandey/) for background on any topics in this notebook.
This notebook discusses a Python package called [BioCRNpyler](https://github.com/BuildACell/BioCRNPyler) (pronounced Bio-Compiler) that can be used to compile chemical reaction network models for biological systems.
Disclaimer: The content in this notebook is taken from the BioCRNpyler Github examples.
Copyright: Build-A-Cell.
Package Authors: William Poole, Ayush Pandey, Andrey Shur, Zoltan Tuza, and Richard M. Murray
## Building Chemical Reaction Networks (CRNs) Directly with BioCRNpyler
### What is a CRN?
A CRN is a widely established model of chemistry and biochemistry.
* A set of species $S$
* A set of reactions $R$ interconvert species $I_r$ to $O_r$
\begin{align}
\\
I \xrightarrow[]{\rho(s)} O
\\
\end{align}
* $I$ and $O$ are multisets of species $S$.
* $\rho(s): S \to \mathbb{R}$ is a function that determines how fast the reaction occurs.
```python
# Running this notebook for the first time?
# Make sure you have biocrnpyler installed in your environment.
# To install biocrnpyler uncomment the following and run:
# !pip install biocrnpyler
```
```python
#Import everything from biocrnpyler
from biocrnpyler import *
```
C:\Users\apand\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\compat\_optional.py:138: UserWarning: Pandas requires version '2.7.0' or newer of 'numexpr' (version '2.6.9' currently installed).
warnings.warn(msg, UserWarning)
## Combining Species and Reactions into a CRN
The following code defines a species called 'S' made out of material 'material'. Species can also have attributes to help identify them. Note that Species with the same name, but different materials or attributes are considered different species in terms of the reactions they participate in.
S = Species('name', material_type = 'material', attributes = [])
The collowing code produces a reaction R
R = Reaction(Inputs, Outputs, k)
here Inputs and Outputs must both be a list of Species. the parameter k is the rate constant of the reaction. By default, propensities in BioCRNpyler are massaction:
### $\rho(S) = k \Pi_{s} s^{I_s}$
Note: for stochastic models mass action propensities are $\rho(S) = k \Pi_{s} s!/(s - I_s)!$.
Massaction reactions can be made reversible with the k_rev keyword:
R_reversible = Reaction(Inputs, Outputs, k, k_rev = krev)
is the same as two reactions:
R = Reaction(Inputs, Outputs, k)
Rrev = Reaction(Outputs, Inputs, krev)
Finally, a CRN can be made by combining species and reactions:
CRN = ChemicalReactionNetwork(species = species, reactions = reactions, initial_condition_dict = {})
Here, initial_condition_dict is an optional dictionary to store the initial values of different species.
initial_condition_dict = {Species:value}
Species without an initial condition will default to 0.
### An example:
```python
#Example: Model the CRN consisting of: A --> 2B,
# 2B <--> B + C where C has the same name as B but a new material
A = Species("A", material_type = "m1",
attributes = ["attribute"])
B = Species("B", material_type = "m1")
C = Species("B", material_type = "m2")
D = Species("D")
print("Species can be printed to show"\
"their string representation:", A, B, C, D)
#Reaction Rates
k1 = 3.
k2 = 1.4
k2rev = 0.15
#Reaction Objects
R1 = Reaction.from_massaction([A], [B, B], k_forward = k1, k_reverse = 0.9)
R2 = Reaction.from_massaction([B], [C, D], k_forward = k2)
print("\nReactions can be printed as well:\n", R1,"\n", R2)
#create an initial condition so A has a non-zero value
initial_concentration_dict = {A:10}
#Make a CRN
CRN = ChemicalReactionNetwork(species = [A, B, C, D],
reactions = [R1, R2],
initial_concentration_dict =
initial_concentration_dict)
#CRNs can be printed in two different ways
print("\nDirectly printing a CRN shows the string"\
"representation of the species used in BioCRNpyler:")
print(CRN)
print("\nCRN.pretty_print(...) is a function that prints"\
"a more customizable version of the CRN, but doesn't"\
"show the proper string representation of species.")
print(CRN.pretty_print(show_materials = True,
show_rates = True, show_attributes = True))
```
Species can be printed to show their string representation: m1_A_attribute m1_B m2_B D
Reactions can be printed as well:
m1[A(attribute)] <--> 2m1[B]
m1[B] --> m2[B]+D
Directly printing a CRN shows the string representation of the species used in BioCRNpyler:
Species = m1_A_attribute, m1_B, m2_B, D
Reactions = [
m1[A(attribute)] <--> 2m1[B]
m1[B] --> m2[B]+D
]
CRN.pretty_print(...) is a function that prints a more customizable version of the CRN, but doesn't show the proper string representation of species.
Species(N = 4) = {
m1[A(attribute)] (@ 10), D (@ 0), m2[B] (@ 0), m1[B] (@ 0),
}
Reactions (2) = [
0. m1[A(attribute)] <--> 2m1[B]
Kf=k_forward * m1_A_attribute
Kr=k_reverse * m1_B^2
k_forward=3.0
k_reverse=0.9
1. m1[B] --> m2[B]+D
Kf=k_forward * m1_B
k_forward=1.4
]
### CRNs can be saved as SBML and simulated
To save a CRN as SBML:
CRN.write_sbml_file("file_name.xml")
To simulate a CRN with bioscrape:
Results, Model = CRN_expression.simulate_with_bioscrape(timepoints, initial_condition_dict = x0)
Where x0 is a dictionary: x0 = {species_name:initial_value}
```python
# To simulate the CRN, install Bioscrape, a Python-based simulator
# Uncomment the following line to install Bioscrape
# !pip install bioscrape
```
```python
#Saving and simulating a CRN
CRN.write_sbml_file("build_crns_directly.xml")
try:
import bioscrape
import numpy as np
import pylab as plt
import pandas as pd
#Initial conditions can be set with a dictionary:
x0 = {A:120}
#Timepoints to simulate over
timepoints = np.linspace(0, 1, 100)
#This function can also take a filename keyword to
# save the file at the same time
R = CRN.simulate_with_bioscrape_via_sbml(timepoints = timepoints,
initial_condition_dict = x0)
#Check to ensure simulation worked
#Results are in a Pandas Dictionary and can be accessed
# via string-names of species
plt.plot(R['time'], R[str(A)], label = "A")
plt.plot(R['time'], R[str(B)], label = "B")
plt.plot(R['time'], R[str(C)], "--", label = "C")
plt.plot(R['time'], R[str(D)],":", label = "D")
plt.xlabel('Time')
plt.ylabel('Species')
plt.legend()
except ModuleNotFoundError:
print("Plotting Modules not installed.")
```
## Hill Functions with BioCRNpyler
### HillPositive:
$\rho(s) = k \frac{s_1^n}{K^n+s_1^n}$
Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s1".
```python
#create the propensity
R = Species("R")
hill_pos = HillPositive(k=1, s1=R, K=5, n=2)
#create the reaction
r_hill_pos = Reaction([A], [B], propensity_type = hill_pos)
#print the reaction
print(r_hill_pos.pretty_print())
```
m1[A(attribute)] --> m1[B]
Kf = k R^n / ( K^n + R^n )
k=1
K=5
n=2
### HillNegative:
$\rho(s) = k \frac{1}{K^n+s_1^n}$
Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s1".
```python
#create the propensity
R = Species("R")
hill_neg = HillNegative(k=1, s1=R, K=5, n=2)
#create the reaction
r_hill_neg = Reaction([A], [B], propensity_type = hill_neg)
#print the reaction
print(r_hill_neg.pretty_print())
```
m1[A(attribute)] --> m1[B]
 Kf = k / ( 1 + (R/K)^2 )
k=1
K=5
n=2
### ProportionalHillPositive:
$\rho(s, d) = k d \frac{s_1^n}{K^n + s_1^n}$
Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s1", proportional species "d"
```python
#create the propensity
R = Species("R")
D = Species("D")
prop_hill_pos = ProportionalHillPositive(k=1, s1=R, K=5, n=2, d = D)
#create the reaction
r_prop_hill_pos = Reaction([A], [B], propensity_type = prop_hill_pos)
#print the reaction
print(r_prop_hill_pos.pretty_print())
```
m1[A(attribute)] --> m1[B]
Kf = k D R^n / ( K^n + R^n )
k=1
K=5
n=2
### ProportionalHillNegative:
$\rho(s, d) = k d \frac{1}{K^n + s_1^n}$
Required parameters: rate constant "k", offset "K", hill coefficient "n", hill species "s1", proportional species "d"
```python
#create the propensity
R = Species("R")
D = Species("D")
prop_hill_neg = ProportionalHillNegative(k=1, s1=R, K=5, n=2, d = D)
#create the reaction
r_prop_hill_neg = Reaction([A], [B], propensity_type = prop_hill_neg)
#print the reaction
print(r_prop_hill_neg.pretty_print())
```
m1[A(attribute)] --> m1[B]
Kf = k D / ( 1 + (R/K)^2 )
k=1
K=5
n=2
### General Propensity:
$\rho(s) = $ function of your choice
For general propensities, the function must be written out as a string with all species and parameters declared.
```python
#create species
# create some parameters - note that parameters will be discussed in the next lecture
k1 = ParameterEntry("k1", 1.11)
k2 = ParameterEntry("k2", 2.22)
S = Species("S")
#type the string as a rate then declare the species and parameters
general = GeneralPropensity(f'k1*2 - k2/{S}^2', propensity_species=[S], propensity_parameters=[k1, k2])
r_general = Reaction([A, B], [], propensity_type = general)
print(r_general.pretty_print())
```
m1[A(attribute)]+m1[B] -->
k1*2 - k2/S^2
k1=1.11
k2=2.22
## Next week:
### 1. Compiling CRNs with Enzymes Catalysis and Binding
### 2. DNA Assemblies gene expression transcription and translation
### 3. Promoters Transcriptional Regulation and Gene Regulatory Networks
### 4. Simulating and Analyzing SBML models
| 8efdca74ef0870d08f6de6dfcbc58c5707093197 | 31,459 | ipynb | Jupyter Notebook | reading/week10_compiling_crn_models.ipynb | BioSysDesign/E164 | 69f6236de2d8172e541a5b56f7807d4767f20979 | [
"BSD-3-Clause"
] | null | null | null | reading/week10_compiling_crn_models.ipynb | BioSysDesign/E164 | 69f6236de2d8172e541a5b56f7807d4767f20979 | [
"BSD-3-Clause"
] | null | null | null | reading/week10_compiling_crn_models.ipynb | BioSysDesign/E164 | 69f6236de2d8172e541a5b56f7807d4767f20979 | [
"BSD-3-Clause"
] | null | null | null | 58.911985 | 15,260 | 0.745097 | true | 2,936 | Qwen/Qwen-72B | 1. YES
2. YES | 0.73412 | 0.721743 | 0.529846 | __label__eng_Latn | 0.934364 | 0.069339 |
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, C.D. Cooper, G.F. Forsyth.
# Riding the wave
## Numerical schemes for hyperbolic PDEs
Welcome back! This is the second notebook of *Riding the wave: Convection problems*, the third module of ["Practical Numerical Methods with Python"](https://openedx.seas.gwu.edu/courses/course-v1:MAE+MAE6286+2017/about).
The first notebook of this module discussed conservation laws and developed the non-linear traffic equation. We learned about the effect of the wave speed on the stability of the numerical method, and on the CFL number. We also realized that the forward-time/backward-space difference scheme really has many limitations: it cannot deal with wave speeds that move in more than one direction. It is also first-order accurate in space and time, which often is just not good enough. This notebook will introduce some new numerical schemes for conservation laws, continuing with the traffic-flow problem as motivation.
## Red light!
Let's explore the behavior of different numerical schemes for a moving shock wave. In the context of the traffic-flow model of the previous notebook, imagine a very busy road and a red light at $x=4$. Cars accumulate quickly in the front, where we have the maximum allowed density of cars between $x=3$ and $x=4$, and there is an incoming traffic of 50% the maximum allowed density $(\rho = 0.5\rho_{\rm max})$.
Mathematically, this is:
$$
\begin{equation}
\rho(x,0) = \left\{
\begin{array}{cc}
0.5 \rho_{\rm max} & 0 \leq x < 3 \\
\rho_{\rm max} & 3 \leq x \leq 4 \\
\end{array}
\right.
\end{equation}
$$
Let's find out what the initial condition looks like.
```python
import numpy
from matplotlib import pyplot
%matplotlib inline
```
```python
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
```
```python
def rho_red_light(x, rho_max):
"""
Computes the "red light" initial condition with shock.
Parameters
----------
    x : numpy.ndarray
Locations on the road as a 1D array of floats.
rho_max : float
The maximum traffic density allowed.
Returns
-------
rho : numpy.ndarray
The initial car density along the road
as a 1D array of floats.
"""
rho = rho_max * numpy.ones_like(x)
mask = numpy.where(x < 3.0)
rho[mask] = 0.5 * rho_max
return rho
```
```python
# Set parameters.
nx = 81 # number of locations on the road
L = 4.0 # length of the road
dx = L / (nx - 1) # distance between two consecutive locations
nt = 40 # number of time steps to compute
rho_max = 10.0 # maximum traffic density allowed
u_max = 1.0 # maximum traffic speed
# Get the road locations.
x = numpy.linspace(0.0, L, num=nx)
# Compute the initial traffic density.
rho0 = rho_red_light(x, rho_max)
```
```python
# Plot the initial traffic density.
fig = pyplot.figure(figsize=(6.0, 4.0))
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$\rho$')
pyplot.grid()
line = pyplot.plot(x, rho0,
color='C0', linestyle='-', linewidth=2)[0]
pyplot.xlim(0.0, L)
pyplot.ylim(4.0, 11.0)
pyplot.tight_layout()
```
The question we would like to answer is: **How will cars accumulate at the red light?**
We will solve this problem using different numerical schemes, to see how they perform. These schemes are:
* Lax-Friedrichs
* Lax-Wendroff
* MacCormack
Before we do any coding, let's think about the equation a little bit. The wave speed $u_{\rm wave} = u_{\rm max}\left(1 - 2\rho/\rho_{\rm max}\right)$ is $-1$ for $\rho = \rho_{\rm max}$ and $0$ for $\rho = \rho_{\rm max}/2$, so the wave speed is negative (or zero) everywhere in this problem. We should see a solution moving left, maintaining the shock geometry.
#### Figure 1. The exact solution is a shock wave moving left.
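As a quick sanity check (a small added sketch, not part of the original lesson), we can evaluate the wave speed $u_{\rm max}\left(1 - 2\rho/\rho_{\rm max}\right)$ for the initial condition and confirm that it is nowhere positive:
```python
# Sanity check: the wave speed dF/drho for the initial condition is never positive,
# so the shock should indeed travel to the left.
u_wave = u_max * (1.0 - 2.0 * rho0 / rho_max)
print('Maximum wave speed:', u_wave.max())
print('Minimum wave speed:', u_wave.min())
```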
Now to some coding! First, let's define some useful functions and prepare to make some nice animations later.
```python
def flux(rho, u_max, rho_max):
"""
Computes the traffic flux F = V * rho.
Parameters
----------
rho : numpy.ndarray
Traffic density along the road as a 1D array of floats.
u_max : float
Maximum speed allowed on the road.
rho_max : float
Maximum car density allowed on the road.
Returns
-------
F : numpy.ndarray
The traffic flux along the road as a 1D array of floats.
"""
F = rho * u_max * (1.0 - rho / rho_max)
return F
```
Before we investigate different schemes, let's create the function to update the Matplotlib figure during the animation.
```python
from matplotlib import animation
from IPython.display import HTML
```
```python
def update_plot(n, rho_hist):
"""
Update the line y-data of the Matplotlib figure.
Parameters
----------
n : integer
The time-step index.
rho_hist : list of numpy.ndarray objects
The history of the numerical solution.
"""
fig.suptitle('Time step {:0>2}'.format(n))
line.set_ydata(rho_hist[n])
```
## Lax-Friedrichs scheme
Recall the conservation law for vehicle traffic, resulting in the following equation for the traffic density:
$$
\begin{equation}
\frac{\partial \rho}{\partial t} + \frac{\partial F}{\partial x} = 0
\end{equation}
$$
$F$ is the *traffic flux*, which in the linear traffic-speed model is given by:
$$
\begin{equation}
F = \rho u_{\rm max} \left(1-\frac{\rho}{\rho_{\rm max}}\right)
\end{equation}
$$
In the time variable, the natural choice for discretization is always a forward-difference formula; time invariably moves forward!
$$
\begin{equation}
\frac{\partial \rho}{\partial t}\approx \frac{1}{\Delta t}( \rho_i^{n+1}-\rho_i^n )
\end{equation}
$$
As is usual, the discrete locations on the 1D spatial grid are denoted by indices $i$ and the discrete time instants are denoted by indices $n$.
In a convection problem, using first-order discretization in space leads to excessive numerical diffusion (as you probably observed in [Lesson 1 of Module 2](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/02_spacetime/02_01_1DConvection.ipynb)). The simplest approach to get second-order accuracy in space is to use a central difference:
$$
\begin{equation}
\frac{\partial F}{\partial x} \approx \frac{1}{2\Delta x}( F_{i+1}-F_{i-1})
\end{equation}
$$
But combining these two choices for time and space discretization in the convection equation has catastrophic results! The "forward-time, central scheme" (FTCS) is **unstable**. (Go on: try it; you know you want to!)
The Lax-Friedrichs scheme was proposed by Lax (1954) as a clever trick to stabilize the forward-time, central scheme. The idea was to replace the solution value at $\rho^n_i$ by the average of the values at the neighboring grid points. If we do that replacement, we get the following discretized equation:
$$
\begin{equation}
\frac{\rho_i^{n+1}-\frac{1}{2}(\rho^n_{i+1}+\rho^n_{i-1})}{\Delta t} = -\frac{F^n_{i+1}-F^n_{i-1}}{2 \Delta x}
\end{equation}
$$
Take a careful look: the difference formula no longer uses the value at $\rho^n_i$ to obtain $\rho^{n+1}_i$. The stencil of the Lax-Friedrichs scheme is slightly different than that for the forward-time, central scheme.
#### Figure 2. Stencil of the forward-time/central scheme.
#### Figure 3. Stencil of the Lax-Friedrichs scheme.
This numerical discretization is **stable**. Unfortunately, substituting $\rho^n_i$ by the average of its neighbors introduces a first-order error. _Nice try, Lax!_
To implement the scheme in code, we need to isolate the value at the next time step, $\rho^{n+1}_i$, so we can write a time-stepping loop:
$$
\begin{equation}
\rho_i^{n+1} = \frac{1}{2}(\rho^n_{i+1}+\rho^n_{i-1}) - \frac{\Delta t}{2 \Delta x}(F^n_{i+1}-F^n_{i-1})
\end{equation}
$$
The function below implements Lax-Friedrichs for our traffic model. All the schemes in this notebook are wrapped in their own functions to help with displaying animations of the results. This is also good practice for developing modular, reusable code.
In order to display animations, we're going to hold the results of each time step in the list `rho_hist`: each entry is a 1D array of length `nx`, and there are `nt + 1` entries (the initial condition plus one entry per time step).
```python
def lax_friedrichs(rho0, nt, dt, dx, bc_values, *args):
"""
Computes the traffic density on the road
at a certain time given the initial traffic density.
Integration using Lax-Friedrichs scheme.
Parameters
----------
rho0 : numpy.ndarray
The initial traffic density along the road
as a 1D array of floats.
nt : integer
The number of time steps to compute.
dt : float
The time-step size to integrate.
dx : float
The distance between two consecutive locations.
bc_values : 2-tuple of floats
The value of the density at the first and last locations.
args : list or tuple
Positional arguments to be passed to the flux function.
Returns
-------
rho_hist : list of numpy.ndarray objects
The history of the car density along the road.
"""
rho_hist = [rho0.copy()]
rho = rho0.copy()
for n in range(nt):
# Compute the flux.
F = flux(rho, *args)
# Advance in time using Lax-Friedrichs scheme.
rho[1:-1] = (0.5 * (rho[:-2] + rho[2:]) -
dt / (2.0 * dx) * (F[2:] - F[:-2]))
# Set the value at the first location.
rho[0] = bc_values[0]
# Set the value at the last location.
rho[-1] = bc_values[1]
# Record the time-step solution.
rho_hist.append(rho.copy())
return rho_hist
```
### Lax-Friedrichs with $\frac{\Delta t}{\Delta x}=1$
We are now all set to run! First, let's try with CFL=1
```python
# Set the time-step size based on CFL limit.
sigma = 1.0
dt = sigma * dx / u_max # time-step size
# Compute the traffic density at all time steps.
rho_hist = lax_friedrichs(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
u_max, rho_max)
```
```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(rho_hist,),
interval=100)
# Display the video.
HTML(anim.to_html5_video())
```
##### Think
* What do you see in the animation above? How does the numerical solution compare with the exact solution (a left-traveling shock wave)?
* What types of errors do you think we see?
* What do you think of the Lax-Friedrichs scheme, so far?
### Lax-Friedrichs with $\frac{\Delta t}{\Delta x} = 0.5$
Would the solution improve if we use smaller time steps? Let's check that!
```python
# Set the time-step size based on CFL limit.
sigma = 0.5
dt = sigma * dx / u_max # time-step size
# Compute the traffic density at all time steps.
rho_hist = lax_friedrichs(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
u_max, rho_max)
```
```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(rho_hist,),
interval=100)
# Display the video.
HTML(anim.to_html5_video())
```
##### Dig deeper
Notice the strange "staircase" behavior on the leading edge of the wave? You may be interested to learn more about this: a feature typical of what is sometimes called "odd-even decoupling." Last year we published a collection of lessons in Computational Fluid Dynamics, called _CFD Python_, where we discuss [odd-even decoupling](https://nbviewer.jupyter.org/github/barbagroup/CFDPython/blob/14b56718ac1508671de66bab3fe432e93cb59fcb/lessons/19_Odd_Even_Decoupling.ipynb).
* How does this solution compare with the previous one, where the Courant number was $\frac{\Delta t}{\Delta x}=1$?
## Lax-Wendroff scheme
The Lax-Friedrichs method uses a clever trick to stabilize the central difference in space for convection, but loses an order of accuracy in doing so. First-order methods are just not good enough for convection problems, especially when you have sharp gradients (shocks).
The Lax-Wendroff (1960) method was the _first_ scheme ever to achieve second-order accuracy in both space and time. It is therefore a landmark in the history of computational fluid dynamics.
To develop the Lax-Wendroff scheme, we need to do a bit of work. Sit down, grab a notebook and grit your teeth. We want you to follow this derivation in your own hand. It's good for you! Start with the Taylor series expansion (in the time variable) about $\rho^{n+1}$:
$$
\begin{equation}
\rho^{n+1} = \rho^n + \frac{\partial\rho^n}{\partial t} \Delta t + \frac{(\Delta t)^2}{2}\frac{\partial^2\rho^n}{\partial t^2} + \ldots
\end{equation}
$$
For the conservation law with $F=F(\rho)$, and using our beloved chain rule, we can write:
$$
\begin{equation}
\frac{\partial \rho}{\partial t} = -\frac{\partial F}{\partial x} = -\frac{\partial F}{\partial \rho} \frac{\partial \rho}{\partial x} = -J \frac{\partial \rho}{\partial x}
\end{equation}
$$
where
$$
\begin{equation}
J = \frac{\partial F}{\partial \rho} = u _{\rm max} \left(1-2\frac{\rho}{\rho_{\rm max}} \right)
\end{equation}
$$
is the _Jacobian_ for the traffic model. Next, we can do a little trickery:
$$
\begin{equation}
\frac{\partial F}{\partial t} = \frac{\partial F}{\partial \rho} \frac{\partial \rho}{\partial t} = J \frac{\partial \rho}{\partial t} = -J \frac{\partial F}{\partial x}
\end{equation}
$$
In the last step above, we used the differential equation of the traffic model to replace the time derivative by a spatial derivative. These equivalences imply that
$$
\begin{equation}
\frac{\partial^2\rho}{\partial t^2} = \frac{\partial}{\partial x} \left( J \frac{\partial F}{\partial x} \right)
\end{equation}
$$
Let's use all this in the Taylor expansion:
$$
\begin{equation}
\rho^{n+1} = \rho^n - \frac{\partial F^n}{\partial x} \Delta t + \frac{(\Delta t)^2}{2} \frac{\partial}{\partial x} \left(J\frac{\partial F^n}{\partial x} \right)+ \ldots
\end{equation}
$$
We can now reorganize this and discretize the spatial derivatives with central differences to get the following discrete equation:
$$
\begin{equation}
\frac{\rho_i^{n+1} - \rho_i^n}{\Delta t} = -\frac{F^n_{i+1}-F^n_{i-1}}{2 \Delta x} + \frac{\Delta t}{2} \left(\frac{(J \frac{\partial F}{\partial x})^n_{i+\frac{1}{2}}-(J \frac{\partial F}{\partial x})^n_{i-\frac{1}{2}}}{\Delta x}\right)
\end{equation}
$$
Now, approximate the rightmost term (inside the parenthesis) in the above equation as follows:
\begin{equation} \frac{J^n_{i+\frac{1}{2}}\left(\frac{F^n_{i+1}-F^n_{i}}{\Delta x}\right)-J^n_{i-\frac{1}{2}}\left(\frac{F^n_i-F^n_{i-1}}{\Delta x}\right)}{\Delta x}\end{equation}
Then evaluate the Jacobian at the midpoints by using averages of the points on either side:
\begin{equation}\frac{\frac{1}{2 \Delta x}(J^n_{i+1}+J^n_i)(F^n_{i+1}-F^n_i)-\frac{1}{2 \Delta x}(J^n_i+J^n_{i-1})(F^n_i-F^n_{i-1})}{\Delta x}.\end{equation}
Our equation now reads:
\begin{align}
&\frac{\rho_i^{n+1} - \rho_i^n}{\Delta t} =
-\frac{F^n_{i+1}-F^n_{i-1}}{2 \Delta x} + \cdots \\ \nonumber
&+ \frac{\Delta t}{4 \Delta x^2} \left( (J^n_{i+1}+J^n_i)(F^n_{i+1}-F^n_i)-(J^n_i+J^n_{i-1})(F^n_i-F^n_{i-1})\right)
\end{align}
Solving for $\rho_i^{n+1}$:
\begin{align}
&\rho_i^{n+1} = \rho_i^n - \frac{\Delta t}{2 \Delta x} \left(F^n_{i+1}-F^n_{i-1}\right) + \cdots \\ \nonumber
&+ \frac{(\Delta t)^2}{4(\Delta x)^2} \left[ (J^n_{i+1}+J^n_i)(F^n_{i+1}-F^n_i)-(J^n_i+J^n_{i-1})(F^n_i-F^n_{i-1})\right]
\end{align}
with
\begin{equation}J^n_i = \frac{\partial F}{\partial \rho} = u_{\rm max} \left(1-2\frac{\rho^n_i}{\rho_{\rm max}} \right).\end{equation}
Lax-Wendroff is a little bit long. Remember that you can use backslashes (`\`) to split up a statement across several lines. This can help make code easier to parse (and also easier to debug!).
```python
def jacobian(rho, u_max, rho_max):
"""
Computes the Jacobian for our traffic model.
Parameters
----------
rho : numpy.ndarray
Traffic density along the road as a 1D array of floats.
u_max : float
Maximum speed allowed on the road.
rho_max : float
Maximum car density allowed on the road.
Returns
-------
J : numpy.ndarray
The Jacobian as a 1D array of floats.
"""
J = u_max * (1.0 - 2.0 * rho / rho_max)
return J
```
```python
def lax_wendroff(rho0, nt, dt, dx, bc_values, *args):
"""
Computes the traffic density on the road
at a certain time given the initial traffic density.
Integration using Lax-Wendroff scheme.
Parameters
----------
rho0 : numpy.ndarray
The initial traffic density along the road
as a 1D array of floats.
nt : integer
The number of time steps to compute.
dt : float
The time-step size to integrate.
dx : float
The distance between two consecutive locations.
bc_values : 2-tuple of floats
The value of the density at the first and last locations.
args : list or tuple
Positional arguments to be passed to the
        flux and Jacobian functions.
Returns
-------
rho_hist : list of numpy.ndarray objects
The history of the car density along the road.
"""
rho_hist = [rho0.copy()]
rho = rho0.copy()
for n in range(nt):
# Compute the flux.
F = flux(rho, *args)
# Compute the Jacobian.
J = jacobian(rho, *args)
# Advance in time using Lax-Wendroff scheme.
rho[1:-1] = (rho[1:-1] -
dt / (2.0 * dx) * (F[2:] - F[:-2]) +
dt**2 / (4.0 * dx**2) *
((J[1:-1] + J[2:]) * (F[2:] - F[1:-1]) -
(J[:-2] + J[1:-1]) * (F[1:-1] - F[:-2])))
# Set the value at the first location.
rho[0] = bc_values[0]
# Set the value at the last location.
rho[-1] = bc_values[1]
# Record the time-step solution.
rho_hist.append(rho.copy())
return rho_hist
```
Now that we've defined a function for the Lax-Wendroff scheme, we can use the same procedure as above to animate and view our results.
### Lax-Wendroff with $\frac{\Delta t}{\Delta x}=1$
```python
# Set the time-step size based on CFL limit.
sigma = 1.0
dt = sigma * dx / u_max # time-step size
# Compute the traffic density at all time steps.
rho_hist = lax_wendroff(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
u_max, rho_max)
```
```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(rho_hist,),
interval=100)
# Display the video.
HTML(anim.to_html5_video())
```
Interesting! The Lax-Wendroff method captures the sharpness of the shock much better than the Lax-Friedrichs scheme, but there is a new problem: a strange wiggle appears right at the tail of the shock. This is typical of many second-order methods: they introduce _numerical oscillations_ where the solution is not smooth. Bummer.
### Lax-Wendroff with $\frac{\Delta t}{\Delta x} =0.5$
How do the oscillations at the shock front vary with changes to the CFL condition? You might think that the solution will improve if you make the time step smaller ... let's see.
```python
# Set the time-step size based on CFL limit.
sigma = 0.5
dt = sigma * dx / u_max # time-step size
# Compute the traffic density at all time steps.
rho_hist = lax_wendroff(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
u_max, rho_max)
```
```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(rho_hist,),
interval=100)
# Display the video.
HTML(anim.to_html5_video())
```
Eek! The numerical oscillations got worse. Double bummer!
Why do we observe oscillations with second-order methods? This is a question of fundamental importance!
## MacCormack Scheme
The numerical oscillations that you observed with the Lax-Wendroff method on the traffic model can become severe in some problems. But actually the main drawback of the Lax-Wendroff method is having to calculate the Jacobian in every time step. With more complicated equations (like the Euler equations), calculating the Jacobian is a large computational expense.
Robert W. MacCormack introduced the first version of his now-famous method at the 1969 AIAA Hypervelocity Impact Conference, held in Cincinnati, Ohio, but the paper did not at first catch the attention of the aeronautics community. The next year, however, he presented at the 2nd International Conference on Numerical Methods in Fluid Dynamics at Berkeley. His paper there (MacCormack, 1971) was a landslide. MacCormack got a promotion and continued to work on applications of his method to the compressible Navier-Stokes equations. In 1973, NASA gave him the prestigious H. Julian Allen award for his work.
The MacCormack scheme is a two-step method, in which the first step is called a _predictor_ and the second step is called a _corrector_. It achieves second-order accuracy in both space and time. One version is as follows:
$$
\begin{equation}
\rho^*_i = \rho^n_i - \frac{\Delta t}{\Delta x} (F^n_{i+1}-F^n_{i}) \ \ \ \ \ \ \text{(predictor)}
\end{equation}
$$
$$
\begin{equation}
\rho^{n+1}_i = \frac{1}{2} (\rho^n_i + \rho^*_i - \frac{\Delta t}{\Delta x} (F^*_i - F^{*}_{i-1})) \ \ \ \ \ \ \text{(corrector)}
\end{equation}
$$
If you look closely, it appears like the first step is a forward-time/forward-space scheme, and the second step is like a forward-time/backward-space scheme (these can also be reversed), averaged with the first result. What is so cool about this? You can compute problems with left-running waves and right-running waves, and the MacCormack scheme gives you a stable method (subject to the CFL condition). Nice! Let's try it.
```python
def maccormack(rho0, nt, dt, dx, bc_values, *args):
"""
Computes the traffic density on the road
at a certain time given the initial traffic density.
Integration using MacCormack scheme.
Parameters
----------
rho0 : numpy.ndarray
The initial traffic density along the road
as a 1D array of floats.
nt : integer
The number of time steps to compute.
dt : float
The time-step size to integrate.
dx : float
The distance between two consecutive locations.
bc_values : 2-tuple of floats
The value of the density at the first and last locations.
args : list or tuple
Positional arguments to be passed to the flux function.
Returns
-------
rho_hist : list of numpy.ndarray objects
The history of the car density along the road.
"""
rho_hist = [rho0.copy()]
rho = rho0.copy()
rho_star = rho.copy()
for n in range(nt):
# Compute the flux.
F = flux(rho, *args)
# Predictor step of the MacCormack scheme.
rho_star[1:-1] = (rho[1:-1] -
dt / dx * (F[2:] - F[1:-1]))
# Compute the flux.
F = flux(rho_star, *args)
# Corrector step of the MacCormack scheme.
rho[1:-1] = 0.5 * (rho[1:-1] + rho_star[1:-1] -
dt / dx * (F[1:-1] - F[:-2]))
# Set the value at the first location.
rho[0] = bc_values[0]
# Set the value at the last location.
rho[-1] = bc_values[1]
# Record the time-step solution.
rho_hist.append(rho.copy())
return rho_hist
```
### MacCormack with $\frac{\Delta t}{\Delta x} = 1$
```python
# Set the time-step size based on CFL limit.
sigma = 1.0
dt = sigma * dx / u_max # time-step size
# Compute the traffic density at all time steps.
rho_hist = maccormack(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
u_max, rho_max)
```
```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(rho_hist,),
interval=100)
# Display the video.
HTML(anim.to_html5_video())
```
### MacCormack with $\frac{\Delta t}{\Delta x}= 0.5$
Once again, we ask: how does the CFL number affect the errors? Which one gives better results? You just have to try it.
```python
# Set the time-step size based on CFL limit.
sigma = 0.5
dt = sigma * dx / u_max # time-step size
# Compute the traffic density at all time steps.
rho_hist = maccormack(rho0, nt, dt, dx, (rho0[0], rho0[-1]),
u_max, rho_max)
```
```python
# Create an animation of the traffic density.
anim = animation.FuncAnimation(fig, update_plot,
frames=nt, fargs=(rho_hist,),
interval=100)
# Display the video.
HTML(anim.to_html5_video())
```
##### Dig Deeper
You can also obtain a MacCormack scheme by reversing the predictor and corrector steps. For shocks, the best resolution will occur when the difference in the predictor step is in the direction of propagation. Try it out! Was our choice here the ideal one? In which case is the shock better resolved?
##### Challenge task
In the *red light* problem, $\rho \geq \rho_{\rm max}/2$, making the wave speed negative at all points. You might be wondering why we introduced these new methods; couldn't we have just used a forward-time/forward-space scheme? But, what if $\rho_{\rm in} < \rho_{\rm max}/2$? Now, a whole region has positive wave speeds and forward-time/forward-space is unstable.
* How do Lax-Friedrichs, Lax-Wendroff and MacCormack behave in this case? Try it out!
* As you decrease $\rho_{\rm in}$, what happens to the velocity of the shock? Why do you think that happens?
## References
* Peter D. Lax (1954), "Weak solutions of nonlinear hyperbolic equations and their numerical computation," _Commun. Pure and Appl. Math._, Vol. 7, pp. 159–193.
* Peter D. Lax and Burton Wendroff (1960), "Systems of conservation laws," _Commun. Pure and Appl. Math._, Vol. 13, pp. 217–237.
* R. W. MacCormack (1969), "The effect of viscosity in hypervelocity impact cratering," AIAA paper 69-354. Reprinted on _Journal of Spacecraft and Rockets_, Vol. 40, pp. 757–763 (2003). Also on _Frontiers of Computational Fluid Dynamics_, edited by D. A. Caughey, M. M. Hafez (2002), chapter 2: [read on Google Books](http://books.google.com/books?id=QBsnMOz_8qcC&lpg=PA27&ots=uqCeuH1U6S&lr&pg=PA27#v=onepage&q&f=false).
* R. W. MacCormack (1971), "Numerical solution of the interaction of a shock wave with a laminar boundary layer," _Proceedings of the 2nd Int. Conf. on Numerical Methods in Fluid Dynamics_, Lecture Notes in Physics, Vol. 8, Springer, Berlin, pp. 151–163.
---
###### The cell below loads the style of the notebook.
```python
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, 'r').read())
```
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Arvo:400,700,400italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=PT+Mono' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Shadows+Into+Light' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Nixie+One' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Source+Code+Pro' rel='stylesheet' type='text/css'>
<style>
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
#notebook_panel { /* main background */
background: rgb(245,245,245);
}
div.cell { /* set cell width */
width: 750px;
}
div #notebook { /* centre the content */
background: #fff; /* white background for content */
width: 1000px;
margin: auto;
padding-left: 0em;
}
#notebook li { /* More space between bullet points */
margin-top:0.8em;
}
/* draw border around running cells */
div.cell.border-box-sizing.code_cell.running {
border: 1px solid #111;
}
/* Put a solid color box around each cell and its output, visually linking them*/
div.cell.code_cell {
background-color: rgb(256,256,256);
border-radius: 0px;
padding: 0.5em;
margin-left:1em;
margin-top: 1em;
}
div.text_cell_render{
font-family: 'Alegreya Sans' sans-serif;
line-height: 140%;
font-size: 125%;
font-weight: 400;
width:600px;
margin-left:auto;
margin-right:auto;
}
/* Formatting for header cells */
.text_cell_render h1 {
font-family: 'Nixie One', serif;
font-style:regular;
font-weight: 400;
font-size: 45pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h2 {
font-family: 'Nixie One', serif;
font-weight: 400;
font-size: 30pt;
line-height: 100%;
color: rgb(0,51,102);
margin-bottom: 0.1em;
margin-top: 0.3em;
display: block;
}
.text_cell_render h3 {
font-family: 'Nixie One', serif;
margin-top:16px;
font-size: 22pt;
font-weight: 600;
margin-bottom: 3px;
font-style: regular;
color: rgb(102,102,0);
}
.text_cell_render h4 { /*Use this for captions*/
font-family: 'Nixie One', serif;
font-size: 14pt;
text-align: center;
margin-top: 0em;
margin-bottom: 2em;
font-style: regular;
}
.text_cell_render h5 { /*Use this for small titles*/
font-family: 'Nixie One', sans-serif;
font-weight: 400;
font-size: 16pt;
color: rgb(163,0,0);
font-style: italic;
margin-bottom: .1em;
margin-top: 0.8em;
display: block;
}
.text_cell_render h6 { /*use this for copyright note*/
font-family: 'PT Mono', sans-serif;
font-weight: 300;
font-size: 9pt;
line-height: 100%;
color: grey;
margin-bottom: 1px;
margin-top: 1px;
}
.CodeMirror{
font-family: "Source Code Pro";
font-size: 90%;
}
.alert-box {
padding:10px 10px 10px 36px;
margin:5px;
}
.success {
color:#666600;
background:rgb(240,242,229);
}
</style>
| a54310ae786ab6c5c8c9a4e7404d9e6a23efc791 | 231,202 | ipynb | Jupyter Notebook | lessons/03_wave/03_02_convectionSchemes.ipynb | Fluidentity/numerical-mooc | 083bbe9dc923b0ada6db2ebfbe13392fb66c6fbc | [
"CC-BY-3.0"
] | null | null | null | lessons/03_wave/03_02_convectionSchemes.ipynb | Fluidentity/numerical-mooc | 083bbe9dc923b0ada6db2ebfbe13392fb66c6fbc | [
"CC-BY-3.0"
] | null | null | null | lessons/03_wave/03_02_convectionSchemes.ipynb | Fluidentity/numerical-mooc | 083bbe9dc923b0ada6db2ebfbe13392fb66c6fbc | [
"CC-BY-3.0"
] | null | null | null | 69.576286 | 6,552 | 0.778367 | true | 8,569 | Qwen/Qwen-72B | 1. YES
2. YES | 0.835484 | 0.718594 | 0.600374 | __label__eng_Latn | 0.965553 | 0.2332 |
# The One-Dimensional Particle in a Box
## 🥅 Learning Objectives
- Determine the energies and eigenfunctions of the particle-in-a-box.
- Learn how to normalize a wavefunction.
- Learn how to compute expectation values for quantum-mechanical operators.
- Learn the postulates of quantum mechanics
## Cyanine Dyes
Cyanine dye molecules are often modelled as one-dimensional particles in a box. To understand why, start by thinking classically. You learn in organic chemistry that electrons can move “freely” along alternating double bonds. If this is true, then you can imagine that the electrons can move from one Nitrogen to the other, almost without resistance. On the other hand, there are sp<sup>3</sup>-hybridized functional groups attached to the Nitrogen atom, so once the electron gets to a Nitrogen atom, it has to turn around and go back whence it came. A very, very, very simple model would be to imagine that the electron is totally free between the Nitrogen atoms, and totally forbidden from going much beyond the Nitrogen atoms. This suggests modeling these systems with a potential energy function like:
$$
V(x) =
\begin{cases}
+\infty & x\leq 0\\
0 & 0\lt x \lt a\\
+\infty & a \leq x
\end{cases}
$$
where $a$ is the length of the box. A reasonable approximate formula for $a$ is
$$
a = \left(5.67 + 2.49 (k + 1)\right) \cdot 10^{-10} \text{ m}
$$
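As a rough illustration (a minimal sketch; the helper `box_length` is ours, not part of the original notebook), the formula above gives box lengths on the order of a nanometer for small values of $k$:
```python
# Minimal sketch: evaluate the approximate box-length formula above (result in meters).
def box_length(k):
    return (5.67 + 2.49 * (k + 1)) * 1.0e-10

for k in (1, 2, 3):
    print(f"k = {k}:  a = {box_length(k):.3e} m")
```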
## Postulate: The squared magnitude of the wavefunction is proportional to probability
What is the interpretation of the wavefunction? The Born postulate indicates that the squared magnitude of the wavefunction is proportional to the probability of observing the system at that location. E.g., if $\psi(x)$ is the wavefunction for an electron as a function of $x$, then
$$
p(x) = |\psi(x)|^2
$$
is the probability of observing an electron at the point $x$. This is called the Born Postulate.
## The Wavefunctions of the Particle in a Box (boundary conditions)
The nice thing about this “particle in a box” model is that it is easy to solve the time-independent Schrödinger equation in this case. Because there is no chance that the particle could ever “escape” an infinite box like this (such an electron would have infinite potential energy!), $|\psi(x)|^2$ must equal zero outside the box. Therefore the wavefunction can only be nonzero inside the box. In addition, the wavefunction should be zero at the edges of the box, because otherwise the wavefunction will not be continuous. So we should have a wavefunction like
$$
\psi(x) =
\begin{cases}
0& x\le 0\\
\text{?????} & 0 < x < a\\
0 & a \le x
\end{cases}
$$
## Postulate: The wavefunction of a system is determined by solving the Schrödinger equation
How do we find the wavefunction for the particle-in-a-box or, for that matter, any other system? The wavefunction can be determined by solving the time-independent (when the potential is time-independent) or time-dependent (when the potential is time-dependent) Schrödinger equation.
## The Wavefunctions of the Particle in a Box (solution)
To find the wavefunctions for a system, one solves the Schrödinger equation. For a particle of mass $m$ in a one-dimensional box, the (time-independent) Schrödinger equation is:
$$
\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) \right)\psi_n(x) = E_n \psi_n(x)
$$
where
$$
V(x) =
\begin{cases}
+\infty & x\leq 0\\
0 & 0\lt x \lt a\\
+\infty & a \leq x
\end{cases}
$$
We already deduced that $\psi(x) = 0$ except when the electron is inside the box ($0 < x < a$), so we only need to consider the Schrödinger equation inside the box:
$$
\left(-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \right)\psi_n(x) = E_n \psi_n(x)
$$
There are systematic ways to solve this equation, but let's solve it by inspection. That is, we need to know:
> Question: What function(s), when differentiated twice, are proportional to themselves?
This suggests that the eigenfunctions of the 1-dimensional particle-in-a-box must be some linear combination of sine and cosine functions,
$$
\psi_n(x) = A \sin(cx) + B \cos(dx)
$$
We know that the wavefunction must be zero at the edges of the box, $\psi(0) = 0$ and $\psi(a) = 0$. These are called the *boundary conditions* for the problem. Examining the first boundary condition,
$$
0 = \psi(0) = A \sin(0) + B \cos(0) = 0 + B
$$
indicates that $B=0$. The second boundary condition
$$
0 = \psi(a) = A \sin(ca)
$$
requires us to recall that $\sin(x) = 0$ whenever $x$ is an integer multiple of $\pi$. So $ca=n\pi$, i.e., $c=\tfrac{n \pi}{a}$ where $n=1,2,3,\ldots$. The wavefunction for the particle in a box is thus,
$$
\psi_n(x) = A_n \sin\left(\tfrac{n \pi x}{a}\right) \qquad \qquad n=1,2,3,\ldots
$$
## Normalization of Wavefunctions
As seen in the previous section, if a wavefunction solves the Schrödinger equation, any constant multiple of the wavefunction also solves the Schrödinger equation,
$$
\hat{H} \psi(x) = E \psi(x) \quad \longleftrightarrow \quad \hat{H} \left(A\psi(x)\right) = E \left(A\psi(x)\right)
$$
Owing to the Born postulate, the complex square of the wavefunction can be interpreted as probability. Since the probability of a particle being at *some* point in space is one, we can define the normalization constant, $A$, for the wavefunction through the requirement that:
$$
\int_{-\infty}^{\infty} \left|\psi(x)\right|^2 dx = 1.
$$
In the case of a particle in a box, this is:
$$
\begin{align}
1 &= \int_{-\infty}^{\infty} \left|\psi_n(x)\right|^2 dx \\
&= \int_0^a \psi_n(x) \psi_n^*(x) dx \\
&= \int_0^a A_n \sin\left(\tfrac{n \pi x}{a}\right) \left(A_n \sin\left(\tfrac{n \pi x}{a}\right) \right)^* dx \\
&= \left|A_n\right|^2\int_0^a \sin^2\left(\tfrac{n \pi x}{a}\right) dx
\end{align}
$$
To evaluate this integral, it is useful to remember some [trigonometric identities](https://en.wikipedia.org/wiki/List_of_trigonometric_identities). (You can learn more about how I remember trigonometric identities [here](../linkedFiles/TrigIdentities.md).) The specific identity we need here is $\sin^2 x = \tfrac{1}{2}(1-\cos 2x)$:
$$
\begin{align}
1 &= \left|A_n\right|^2\int_0^a \sin^2\left(\tfrac{n \pi x}{a}\right) \,dx \\
&= \left|A_n\right|^2\int_0^a \tfrac{1}{2}\left(1-\cos \left(\tfrac{2n \pi x}{a}\right)\right) \,dx \\
&=\tfrac{\left|A_n\right|^2}{2} \left( \int_0^a 1 \,dx - \int_0^a \cos \left(\tfrac{2n \pi x}{a}\right)\,dx \right) \\
&=\tfrac{\left|A_n\right|^2}{2} \left( \left[ x \right]_0^a - \left[\frac{-a}{2 n \pi}\sin \left(\tfrac{2n \pi x}{a}\right) \right]_0^a \right) \\
&=\tfrac{\left|A_n\right|^2}{2} \left( a - 0 \right)
\end{align}
$$
So
$$
\left|A_n\right|^2 = \tfrac{2}{a}
$$
Note that this does not completely determine $A_n$. For example, any of the following normalization constants are allowed,
$$
A_n = \sqrt{\tfrac{2}{a}}
= - \sqrt{\tfrac{2}{a}}
= i \sqrt{\tfrac{2}{a}}
= -i \sqrt{\tfrac{2}{a}}
$$
In general, any complex number with unit magnitude (a pure phase factor) can be used,
$$
A_n = \left(\cos(\theta) \pm i \sin(\theta) \right) \sqrt{\tfrac{2}{a}}
$$
where $\theta$ is any real number. The arbitrariness of the *phase* of the wavefunction is an important feature. Because the wavefunction can be imaginary (e.g., if you choose $A_n = i \sqrt{\tfrac{2}{a}}$), it is obvious that the wavefunction is not an observable property of a system. **The wavefunction is only a mathematical tool for quantum mechanics; it is not a physical object.**
Summarizing, the (normalized) wavefunction for a particle with mass $m$ confined to a one-dimensional box with length $a$ can be written as:
$$
\psi_n(x) = \sqrt{\tfrac{2}{a}} \sin\left(\tfrac{n \pi x}{a}\right) \qquad \qquad n=1,2,3,\ldots
$$
Note that in this case, the normalization constant is the same for all $n$; that is an unusual property of the particle-in-a-box wavefunction.
While this normalization convention is used 99% of the time, there are some cases where it is more convenient to make a different choice for the amplitude of the wavefunctions. I say this to remind you that normalizing the wavefunction is something we do for convenience; it is not required by physics!
## Normalization Check
One advantage of using Jupyter is that we can easily check our (symbolic) mathematics. Let's confirm that the wavefunction is normalized by evaluating,
$$
\int_0^a \left| \psi_n(x) \right|^2 \, dx
$$
```python
# Execute this code block to import required objects.
# Note: The numpy library from autograd is imported, which behaves the same as
# importing numpy directly. However, to make automatic differentiation work,
# you should NOT import numpy directly by `import numpy as np`.
import autograd.numpy as np
from autograd import elementwise_grad as egrad
# import numpy as np
from scipy.integrate import trapz, quad
from scipy import constants
import ipywidgets as widgets
import matplotlib.pyplot as plt
# set the size of the plot
# plt.rcParams['figure.figsize'] = [10, 5]
```
```python
# Define a function for the wavefunction
def compute_wavefunction(x, n, a):
"""Compute 1-dimensional particle-in-a-box wave-function value(s).
Parameters
----------
x: float or np.ndarray
Position of the particle.
n: int
Quantum number value.
a: float
Length of the box.
"""
# check argument n
if not (isinstance(n, int) and n > 0):
raise ValueError("Argument n should be a positive integer.")
# check argument a
if a <= 0.0:
raise ValueError("Argument a should be positive.")
# check argument x
if not (isinstance(x, float) or hasattr(x, "__iter__")):
raise ValueError("Argument x should be a float or an array!")
# compute wave-function
value = np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
# set wave-function values out of the box equal to zero
if hasattr(x, "__iter__"):
value[x > a] = 0.0
value[x < 0] = 0.0
else:
if x < 0.0 or x > a:
value = 0.0
return value
# Define a function for the wavefunction squared
def compute_probability(x, n, a):
"""Compute 1-dimensional particle-in-a-box probablity value(s).
See `compute_wavefunction` parameters.
"""
return compute_wavefunction(x, n, a)**2
#This next bit of code just prints out the normalization error
def check_normalization(a, n):
#check the computed values of the moments against the analytic formula
normalization,error = quad(compute_probability, 0, a, args=(n, a))
print("Normalization of wavefunction = ", normalization)
#Principal quantum number:
n = 1
#Box length:
a = 1
check_normalization(a, n)
```
Normalization of wavefunction = 1.0000000000000002
## The Energies of the Particle in a Box
How do we compute the energy of a particle in a box? All we need to do is substitute the eigenfunctions of the Hamiltonian, $\psi_n(x)$ back into the Schrödinger equation to determine the eigenenergies, $E_n$. That is, from
$$
\hat{H} \psi_n(x) = E_n \psi_n(x)
$$
we deduce
$$
\begin{align}
-\frac{\hbar^2}{2m} \frac{d^2}{dx^2} \left( A_n \sin \left( \frac{n \pi x}{a}\right) \right)
&= E_n \left( A_n \sin \left( \frac{n \pi x}{a}\right) \right) \\
-A_n \frac{\hbar^2}{2m} \frac{d}{dx} \left( \frac{n \pi}{a} \cos \left( \frac{n \pi x}{a}\right) \right)
&= E_n \left( A_n \sin \left( \frac{n \pi x}{a}\right) \right) \\
A_n \frac{\hbar^2}{2m} \left( \frac{n \pi}{a} \right)^2 \sin \left( \frac{n \pi x}{a}\right)
&= E_n \left( A_n \sin \left( \frac{n \pi x}{a}\right) \right) \\
\frac{\hbar^2 n^2 \pi^2}{2ma^2}
&= E_n
\end{align}
$$
Using the definition of $\hbar$, we can rearrange this to:
$$
\begin{align}
E_n &= \frac{\hbar^2 n^2 \pi^2}{2ma^2} \qquad \qquad n=1,2,3,\ldots\\
&= \frac{h^2 n^2}{8ma^2}
\end{align}
$$
Notice that only certain energies are allowed. This is a fundamental principle of quantum mechanics, and it is related to the "waviness" of particles. Certain "frequencies" are resonant, and other "frequencies" cannot be observed. *The **only** energies that can be observed for a particle-in-a-box are the ones given by the above formula.*
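We can double-check this eigenvalue relation symbolically (a small sketch assuming `sympy` is available; it is not used elsewhere in this notebook):
```python
# Symbolic check that psi_n(x) = A sin(n pi x / a) satisfies
# -hbar^2/(2m) d^2(psi)/dx^2 = E_n psi with E_n = hbar^2 n^2 pi^2 / (2 m a^2).
import sympy as sp

x, a, A, m, hbar = sp.symbols("x a A m hbar", positive=True)
n = sp.symbols("n", integer=True, positive=True)

psi = A * sp.sin(n * sp.pi * x / a)
H_psi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)   # V(x) = 0 inside the box
E_n = hbar**2 * n**2 * sp.pi**2 / (2 * m * a**2)

print(sp.simplify(H_psi - E_n * psi))   # prints 0 when the eigenvalue equation holds
```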
## Zero-Point Energy
Naïvely, you might expect that the lowest-energy state of a particle in a box has zero energy. (The potential in the box is zero, after all, so shouldn't the lowest-energy state be the state with zero kinetic energy? And if the kinetic energy were zero and the potential energy were zero, then the total energy would be zero.)
But this doesn't happen. It turns out that you can never "stop" a quantum particle; it always has a zero-point motion, typically a resonant oscillation about the lowest-potential-energy location(s). Indeed, the more you try to confine a particle to stop it, the bigger its kinetic energy becomes. This is clear in the particle-in-a-box, which has only kinetic energy. There the (kinetic) energy increases rapidly, as $a^{-2}$, as the box becomes smaller:
$$
T_n = E_n = \frac{h^2n^2}{8ma^2}
$$
The residual energy in the electronic ground state is called the **zero-point energy**,
$$
E_{\text{zero-point energy}} = \frac{h^2}{8ma^2}
$$
The existence of the zero-point energy, and the fact that zero-point kinetic energy is always positive, is a general feature of quantum mechanics.
> **Zero-Point Energy Principle:** Let $V(x)$ be a nonnegative potential. The ground-state energy is always greater than zero.
More generally, for any potential that is bound from below,
$$
V_{\text{min}}= \min_x V(x)
$$
the ground-state energy of the system satisfies $E_{\text{zero-point energy}} > V_{\text{min}}$.
>Nuance: There is a tiny mathematical footnote here; there are some $V(x)$ for which there are *no* bound states. In such cases, e.g., $V(x) = \text{constant}$, it is possible for $E = V_{\text{min}}$.)
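To make the $a^{-2}$ scaling concrete, here is a small sketch (not part of the original notebook) that evaluates the zero-point energy of an electron for a few box lengths in SI units:
```python
# Minimal sketch: E_1 = h^2 / (8 m_e a^2) for an electron; halving the box length
# quadruples the zero-point energy.
from scipy import constants

def zero_point_energy(a):
    """Ground-state energy (Joules) of an electron in a box of length a (meters)."""
    return constants.h**2 / (8 * constants.m_e * a**2)

for a in (2.0e-9, 1.0e-9, 0.5e-9):
    print(f"a = {a:.1e} m  ->  E_zpe = {zero_point_energy(a):.3e} J")
```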
## Atomic Units
Because Planck's constant and the electron mass are tiny numbers, it is often useful to use [atomic units](https://en.wikipedia.org/wiki/Hartree_atomic_units) when performing calculations. We'll learn more about atomic units later but, for now, we only need to know that in atomic units $\hbar$, the mass of the electron, $m_e$, the charge of the electron, $e$, and the Bohr radius, $a_0$ (the most probable distance of the electron from the nucleus in the ground state of the Hydrogen atom), are all defined to be equal to 1.0.
$$
\begin{align}
\hbar &= \frac{h}{2 \pi} = 1.0545718 \times 10^{-34} \text{ J s}= 1 \text{ a.u.} \\
m_e &= 9.10938356 \times 10^{-31} \text{ kg} = 1 \text{ a.u.} \\
e &= 1.602176634 \times 10^{-19} \text{ C} = 1 \text{ a.u.} \\
a_0 &= 5.291772109 \times 10^{-11} \text{ m} = 1 \text{ Bohr} = 1 \text{ a.u.}
\end{align}
$$
The unit of energy in atomic units is called the Hartree,
$$
E_h =4.359744722 \times 10^{-18} \text{ J} = 1 \text{ Hartree} = 1 \text{ a.u.}
$$
and the ground-state (zero-point) energy of the Hydrogen atom is $-\tfrac{1}{2} E_h$.
We can now define functions for the eigenenergies of the 1-dimensional particle in a box:
```python
# Define a function for the energy of a particle in a box
# with length a and quantum number n [in atomic units!]
# The length is input in Bohr (atomic units)
def compute_energy(n, a):
"Compute 1-dimensional particle-in-a-box energy."
return n**2 * np.pi**2 / (2 * a**2)
# Define a function for the energy of an electron in a box
# with length a and quantum number n [in SI units!].
# The length is input in meters.
def compute_energy_si(n, a):
"Compute 1-dimensional particle-in-a-box energy."
return n**2 * constants.h**2 / (8 * constants.m_e* a**2)
#Define variable for atomic unit of length in meters
a0 = constants.value('atomic unit of length')
#This next bit of code just prints out the energy in atomic and SI units
def print_energy(a, n):
print(f'The energy of an electron in a box of length {a:.2f} a.u. with '
f'quantum number {n} is {compute_energy(n, a):.2f} a.u..')
print(f'The energy of an electron in a box of length {a*a0:.2e} m with '
f'quantum number {n} is {compute_energy_si(n, a*a0):.2e} Joules.')
#Principal quantum number:
n = 1
#Box length:
a = 0.1
print_energy(a, n)
```
The energy of an electron in a box of length 0.10 a.u. with quantum number 1 is 493.48 a.u..
The energy of an electron in a box of length 5.29e-12 m with quantum number 1 is 2.15e-15 Joules.
#### 📝 Exercise: Write a function that returns the length, $a$, of a box for which the lowest-energy-excitation of the ground state, $n = 1 \rightarrow n=2$, corresponds to the system absorbing light with a given wavelength, $\lambda$. The input is $\lambda$; the output is $a$.
## Postulate: The wavefunction contains all the physically meaningful information about a system.
While the wavefunction is not itself observable, all observable properties of a system can be determined from the wavefunction. However, just because the wavefunction encapsulates all the *observable* properties of a system does not mean that it contains *all information* about a system. In quantum mechanics, some things are not observable. Consider that for the ground ($n=1$) state of the particle in a box, the root-mean-square average momentum,
$$
\bar{p}_{rms} = \sqrt{2m \cdot T} = \sqrt{(2m)\frac{h^2n^2}{8ma^2}} = \frac{hn}{2a}
$$
increases as you squeeze the box. That is, the more you try to constrain the particle in space, the faster it moves. You can't "stop" the particle no matter how hard you squeeze it, so it's impossible to exactly know where the particle is located. You can only determine its *average* position.
## Postulate: Observable Quantities Correspond to Linear, Hermitian Operators.
The *correspondence* principle says that for every classical observable there is a linear, Hermitian, operator that allows computation of the quantum-mechanical observable. An operator, $\hat{C}$ is linear if for any complex numbers $a$ and $b$, and any wavefunctions $\psi_1(x)$ and $\psi_2(x)$,
$$
\hat{C} \left(a \psi_1(x,t) + b \psi_2(x,t) \right) = a \hat{C} \psi_1(x,t) + b \hat{C} \psi_2(x,t)
$$
Similarly, an operator is Hermitian if it satisfies the relation,
$$
\int \psi_1^*(x,t) \hat{C} \psi_2(x,t) \, dx = \int \left( \hat{C}\psi_1(x,t) \right)^* \psi_2(x,t) \, dx
$$
or, equivalently,
$$
\int \psi_1^*(x,t) \left( \hat{C} \psi_2(x,t) \right) \, dx = \int \psi_2(x,t)\left( \hat{C} \psi_1(x,t) \right)^* \, dx
$$
That is, for a linear operator, the operator applied to a sum of wavefunctions is equal to the sum of the operator applied to each wavefunction separately, and the operator applied to a constant times a wavefunction is the constant times the operator applied directly to the wavefunction. A Hermitian operator can be applied forward (towards $\psi_2(x,t)$) or backwards (towards $\psi_1(x,t)$). This is very useful, because sometimes it is much easier to apply an operator in one direction.
We've already been introduced to the quantum-mechanical operators for the momentum,
$$
\hat{p} = -i \hbar \tfrac{d}{dx}
$$
and the kinetic energy,
$$
\hat{T} = -\tfrac{\hbar^2}{2m} \tfrac{d^2}{dx^2}
$$
These operators are linear because the derivative of a sum is the sum of the derivatives, and the derivative of a constant times a function is that constant times the derivative of the function. These operators are also Hermitian. For example, to show that the momentum operator is Hermitian:
$$
\begin{align}
\int_{-\infty}^{\infty} \psi_1^*(x,t) \hat{p} \psi_2(x,t) dx &= \int_{-\infty}^{\infty} \psi_1^*(x,t) \left( -i \hbar \tfrac{d}{dx} \right) \psi_2(x,t) dx \\
&= -i \hbar \int_{-\infty}^{\infty} \tfrac{d}{dx} \left(\psi_1^*(x,t)\psi_2(x,t) \right) - \left(
\psi_2(x,t) \tfrac{d}{dx} \psi_1^*(x,t)\right) dx
\end{align}
$$
Here we used the product rule for derivatives, $f(x)\tfrac{dg}{dx} = \tfrac{d f(x) g(x)}{dx} - g(x) \tfrac{df}{dx}$. Using the fundamental theorem of calculus and the fact that the probability of observing a particle at $\pm \infty$ is zero, and therefore the wavefunctions at infinity are also zero, one knows that
$$
\int_{-\infty}^{\infty} \tfrac{d}{dx} \left(\psi_1^*(x,t)\psi_2(x,t) \right) = \left[ \psi_1^*(x,t)\psi_2(x,t)\right]_{-\infty}^{\infty} = 0
$$
Therefore the above equation can be simplified to
$$
\begin{align}
\int_{-\infty}^{\infty} \psi_1^*(x,t) \hat{p} \psi_2(x,t) dx &=i \hbar \int_{-\infty}^{\infty} \psi_2(x,t) \tfrac{d}{dx} \psi_1^*(x,t) dx \\
&= \int_{-\infty}^{\infty} \psi_2(x,t) i \hbar \tfrac{d}{dx} \psi_1^*(x,t) dx \\
&= \int_{-\infty}^{\infty} \psi_2(x,t) \left( -i \hbar \tfrac{d}{dx} \psi_1^*(x,t)\right) dx \\
&= \int_{-\infty}^{\infty} \psi_2(x,t) \left( \hat{p} \psi_1(x,t)\right)^* dx
\end{align}
$$
The expectation value of the momentum of a particle-in-a-box is always zero. This is intuitive, since electrons (on average) are neither moving to the right nor to the left inside the box: if they were, then the box itself would need to be moving. Indeed, for any real wavefunction, the average momentum is always zero. This follows directly from the previous derivation with $\psi_1^*(x,t) = \psi_2(x,t)$. Thus:
$$
\begin{align}
\int_{-\infty}^{\infty} \psi_2(x,t) \hat{p} \psi_2(x,t) dx
&=i \hbar \int_{-\infty}^{\infty} \psi_2(x,t) \tfrac{d}{dx} \psi_2(x,t) dx \\
&=-i \hbar \int_{-\infty}^{\infty} \psi_2(x,t) \left( \tfrac{d}{dx} \psi_2(x,t)\right) dx \\
&= 0
\end{align}
$$
The last line follows because the only number that is equal to its negative is zero. (That is, $x=-x$ if and only if $x=0$.) It is a subtle feature that the eigenfunctions of a real-valued Hamiltonian operator can always be chosen to be real-valued themselves, so their average momentum is clearly zero. We often denote quantum-mechanical expectation values with the shorthand,
$$
\langle \hat{p} \rangle =0
$$
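We can confirm this result symbolically. The following check is an addition to the notebook (it assumes SymPy is available and uses units where $\hbar = 1$); it evaluates $\int_0^a \psi_n \left(-i\tfrac{d}{dx}\right) \psi_n \, dx$ directly.
```python
import sympy as sp

xs, box = sp.symbols("x a", positive=True)
ns = sp.symbols("n", integer=True, positive=True)
psi = sp.sqrt(2 / box) * sp.sin(ns * sp.pi * xs / box)

# <p> for a particle-in-a-box eigenstate (hbar = 1): integral of psi * (-i d/dx) psi over the box
p_avg = sp.integrate(psi * (-sp.I) * sp.diff(psi, xs), (xs, 0, box))
print(sp.simplify(p_avg))  # 0
```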
The momentum-squared of the particle-in-a-box is easily computed from the kinetic energy,
$$
\langle \hat{p}^2 \rangle = \int_0^a \psi_n(x) \hat{p}^2 \psi_n(x) dx = \int_0^a \psi_n(x) \left(2m\hat{T}\right) \psi_n(x) dx = 2m E_n = \frac{h^2n^2}{4a^2}
$$
Intuitively, since the box is symmetric about $x=\tfrac{a}{2}$, the particle has equal probability of being in the first-half and the second-half of the box. So the average position is expected to be
$$
\langle x \rangle =\tfrac{a}{2}
$$
We can confirm this by explicit integration,
$$
\begin{align}
\langle x \rangle &= \int_0^a \psi_n^*(x)\, x \,\psi_n(x) dx \\
&= \int_0^a \left(\sqrt{\tfrac{2}{a}} \sin\left(\tfrac{n \pi x}{a} \right)\right) x \left(\sqrt{\tfrac{2}{a}}\sin\left(\tfrac{n \pi x}{a} \right)\right) dx \\
&= \tfrac{2}{a} \int_0^a x \sin^2\left(\tfrac{n \pi x}{a} \right) dx \\
&= \tfrac{2}{a} \left[ \tfrac{x^2}{4} - x \tfrac{\sin \tfrac{2n \pi x}{a}}{\tfrac{4 n \pi}{a}}
- \tfrac{\cos \tfrac{2n \pi x}{a}}{\tfrac{8 n^2 \pi^2}{a^2}}
\right]_0^a \\
&= \tfrac{2}{a} \left[ \tfrac{a^2}{4} - 0 - 0 \right] \\
&= \tfrac{a}{2}
\end{align}
$$
Similarly, we expect that the expectation value of $\langle x^2 \rangle$ will be proportional to $a^2$. We can confirm this by explicit integration,
$$
\begin{align}
\langle x^2 \rangle &= \int_0^a \psi_n^*(x)\, x^2 \,\psi_n(x) dx \\
&= \int_0^a \left(\sqrt{\tfrac{2}{a}} \sin\left(\tfrac{n \pi x}{a} \right)\right) x^2 \left(\sqrt{\tfrac{2}{a}}\sin\left(\tfrac{n \pi x}{a} \right)\right) dx \\
&= \tfrac{2}{a} \int_0^a x^2 \sin^2\left(\tfrac{n \pi x}{a} \right) dx \\
&= \tfrac{2}{a} \left[ \tfrac{x^3}{6}
- x^2 \tfrac{\sin \tfrac{2n \pi x}{a}}{\tfrac{4 n \pi}{a}}
- x \tfrac{\cos \tfrac{2n \pi x}{a}}{\tfrac{4 n^2 \pi^2}{a^2}}
+ \tfrac{\sin \tfrac{2n \pi x}{a}}{\tfrac{8 n^3 \pi^3}{a^3}}
\right]_0^a \\
&= \tfrac{2}{a} \left[ \tfrac{a^3}{6} - 0 - \tfrac{a}{{\tfrac{4 n^2 \pi^2}{a^2}}} - 0 \right] \\
&= \tfrac{2}{a} \left[ \tfrac{a^3}{6} - \tfrac{a^3}{4 n^2 \pi^2} \right] \\
&= a^2\left[ \tfrac{1}{3} - \tfrac{1}{2 n^2 \pi^2} \right]
\end{align}
$$
We can check these formulas by numerical integration.
```python
#Compute <x^power>, the expectation value of x^power
def compute_moment(x, n, a, power):
"""Compute the x^power moment of the 1-dimensional particle-in-a-box.
See `compute_wavefunction` parameters.
"""
return compute_probability(x, n, a)*x**power
#This next bit of code just prints out the values.
def check_moments(a, n):
#check the computed values of the moments against the analytic formula
avg_r,error = quad(compute_moment, 0, a, args=(n, a, 1))
avg_r2,error = quad(compute_moment, 0, a, args=(n, a, 2))
print(f"<r> computed = {avg_r:.5f}")
print(f"<r> analytic = {a/2:.5f}")
print(f"<r^2> computed = {avg_r2:.5f}")
print(f"<r^2> analytic = {a**2*(1/3 - 1./(2*n**2*np.pi**2)):.5f}")
#Principal quantum number:
n = 1
#Box length:
a = 1
check_moments(a, n)
```
<r> computed = 0.50000
<r> analytic = 0.50000
<r^2> computed = 0.28267
<r^2> analytic = 0.28267
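The same moments can also be verified symbolically. This snippet is an addition to the notebook (it assumes SymPy); it reproduces the analytic formulas derived above.
```python
import sympy as sp

xs, box = sp.symbols("x a", positive=True)
ns = sp.symbols("n", integer=True, positive=True)
psi = sp.sqrt(2 / box) * sp.sin(ns * sp.pi * xs / box)

x_avg = sp.simplify(sp.integrate(psi * xs * psi, (xs, 0, box)))
x2_avg = sp.simplify(sp.integrate(psi * xs**2 * psi, (xs, 0, box)))
print(x_avg)   # a/2
print(x2_avg)  # equal to a**2*(1/3 - 1/(2*n**2*pi**2))
```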
## Heisenberg Uncertainty Principle
The previous example gives a first instance of the more general [Heisenberg Uncertainty Principle](https://en.wikipedia.org/wiki/Uncertainty_principle). One specific manifestation of the Heisenberg Uncertainty Principle is that the variance of the position, $\sigma_x^2 = \langle x^2 \rangle - \langle x \rangle^2$, times the variance of the momentum, $\sigma_p^2 = \langle \hat{p}^2 \rangle - \langle \hat{p} \rangle^2$, is at least $\tfrac{\hbar^2}{4}$. We can verify this for the particle in a box.
$$
\begin{align}
\tfrac{\hbar^2}{4} &\le \sigma_x^2 \sigma_p^2 \\
&= \left( a^2\left[ \tfrac{1}{3} - \tfrac{1}{2 n^2 \pi^2} \right] - \tfrac{a^2}{4} \right)
\left( \frac{h^2n^2}{4a^2} - 0 \right) \\
&= \left( a^2\left[ \tfrac{1}{3} - \tfrac{1}{2 n^2 \pi^2} \right] - \tfrac{a^2}{4} \right)
\left( \frac{\hbar^2 \pi^2 n^2}{a^2} \right) \\
&= \hbar^2 \pi^2 n^2 \left(\tfrac{1}{12} - \tfrac{1}{2 n^2 \pi^2} \right)
\end{align}
$$
The right-hand side grows as $n$ increases, so it is smallest for $n=1$; even there the bound is satisfied:
$$
\begin{align}
\tfrac{\hbar^2}{4} &\le \hbar^2 \pi^2 \left(\tfrac{1}{12} - \tfrac{1}{2 \pi^2} \right) \\
&= 0.32247 \hbar^2
\end{align}
$$
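To make this concrete, here is a small added check (plain NumPy, atomic units with $\hbar = 1$) that evaluates $\sigma_x \sigma_p$ for the first few quantum numbers and compares it with the bound $\hbar/2$.
```python
import numpy as np

hbar = 1.0        # atomic units
box_length = 1.0  # the product sigma_x*sigma_p is independent of the box length
for nn in range(1, 6):
    sigma_x = np.sqrt(box_length**2 * (1/3 - 1/(2 * nn**2 * np.pi**2)) - (box_length/2)**2)
    sigma_p = hbar * np.pi * nn / box_length
    print(f"n={nn}: sigma_x*sigma_p = {sigma_x * sigma_p:.5f}  (bound hbar/2 = 0.5)")
```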
## Double-Checking the Energy of a Particle-in-a-Box
To check the energy of the particle in a box, we can compute the kinetic energy density, then integrate it over all space. That is, we define:
$$
\tau_n(x) = \psi_n^*(x) \left(-\tfrac{\hbar^2}{2m}\tfrac{d^2}{dx^2}\right) \psi_n(x)
$$
and then the kinetic energy (which is the energy for the particle in a box) is
$$
T_n = \int_0^a \tau_n(x) dx
$$
> *Note:* In fact there are several different equivalent definitions of the kinetic energy density, but this is not very important in an introductory quantum chemistry course. All of the kinetic energy densities give the same total kinetic energy. However, because the kinetic energy density, $\tau(x)$, represents the kinetic energy of a particle at the point $x$, and it is impossible to know the momentum (or the momentum-squared, and therefore the kinetic energy) exactly at a given point in space (according to the Heisenberg uncertainty principle), there can be no unique definition for $\tau(x)$.
```python
#Helper function used by the kinetic energy density: derivatives of the wavefunction.
def compute_wavefunction_derivative(x, n, a, order=1):
"""Compute the `order`-th derivative of the 1-dimensional particle-in-a-box wavefunction.
See `compute_wavefunction` parameters.
"""
if not (isinstance(order, int) and order > 0):
raise ValueError("Argument order is expected to be a positive integer!")
def wavefunction(x):
v = np.sqrt(2 / a) * np.sin(n * np.pi * x / a)
return v
# compute derivative
deriv = egrad(wavefunction)
for _ in range(order - 1):
deriv = egrad(deriv)
# return zero for x values out of the box
deriv = deriv(x)
# deriv[x < 0] = 0.0
# deriv[x > a] = 0.0
if hasattr(x, "__iter__"):
deriv[x > a] = 0.0
deriv[x < 0] = 0.0
else:
if x < 0.0 or x > a:
deriv = 0.0
return deriv
def compute_kinetic_energy_density(x, n, a):
"""Compute 1-dimensional particle-in-a-box kinetic energy density.
See `compute_wavefunction` parameters.
"""
# evaluate wave-function and its 2nd derivative w.r.t. x
wf = compute_wavefunction(x, n, a)
d2 = compute_wavefunction_derivative(x, n, a, order=2)
return -0.5 * wf * d2
#This next bit of code just prints out the values.
def check_energy(a, n):
#check the computed values of the moments against the analytic formula
ke,error = quad(compute_kinetic_energy_density, 0, a, args=(n, a))
energy = compute_energy(n, a)
print(f"The energy computed by integrating the k.e. density is {ke:.5f}")
print(f"The energy computed directly is {energy:.5f}")
#Principal quantum number:
n = 1
#Box length:
a = 17
check_energy(a, n)
```
The energy computed by integrating the k.e. density is 0.01708
The energy computed directly is 0.01708
## Visualizing the Particle-in-a-Box Wavefunctions, Probabilities, etc.
In the next code block, the wavefunction, probability density, derivative, second derivative, and kinetic energy density for the particle-in-a-box are shown. Notice that the kinetic-energy density is proportional to the probability density, and that the first and second derivatives are not zero at the edge of the box, while the wavefunction and probability density are. It is useful to change the parameters in the figures below to build your intuition for the particle-in-a-box.
```python
#This next bit of code makes the plots and prints out the energy
def make_plots(a, n):
#check the computed values of the moments against the analytic formula
energy = compute_energy(n, a)
print(f"The energy computed directly is {energy:.5f}")
# sample x coordinates
x = np.arange(-0.6, a + 0.6, 0.01)
# evaluate wave-function & probability
wf = compute_wavefunction(x, n, a)
pr = compute_probability(x, n, a)
# evaluate 1st & 2nd derivative of wavefunction w.r.t. x
d1 = compute_wavefunction_derivative(x, n, a, order=1)
d2 = compute_wavefunction_derivative(x, n, a, order=2)
# evaluate kinetic energy density
kin = compute_kinetic_energy_density(x, n, a)
#print("Integrate KED = ", trapz(kin, x))
# set the size of the plot
plt.rcParams['figure.figsize'] = [15, 10]
plt.rcParams['font.family'] = 'DejaVu Sans'
plt.rcParams['font.serif'] = ['Times New Roman']
plt.rcParams['mathtext.fontset'] = 'stix'
plt.rcParams['xtick.labelsize'] = 15
plt.rcParams['ytick.labelsize'] = 15
# define four subplots
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
fig.suptitle(f'a={a} n={n} E={compute_energy(n, a):.4f} a.u.', fontsize=35, fontweight='bold')
# plot 1
ax1.plot(x, wf, linestyle='--', label=r'$\psi(x)$', linewidth=3)
ax1.plot(x, pr, linestyle='-', label=r'$\|\psi(x)\|^2$', linewidth=3)
ax1.legend(frameon=False, fontsize=20)
ax1.set_xlabel('x coordinates', fontsize=20)
# plot 2
ax2.plot(x, d1, linewidth=3, c='k')
ax2.set_xlabel('x coordinates', fontsize=20)
ax2.set_ylabel(r'$\frac{d\psi(x)}{dx}$', fontsize=30, rotation=0, labelpad=25)
# plot 3
ax3.plot(x, d2, linewidth=3, c='g', )
ax3.set_xlabel('x coordinates', fontsize=20)
ax3.set_ylabel(r'$\frac{d^2\psi(x)}{dx^2}$', fontsize=25, rotation=0, labelpad=25)
# plot 4
ax4.plot(x, kin, linewidth=3, c='r')
ax4.set_xlabel('x coordinates', fontsize=20)
ax4.set_ylabel('Kinetic Energy Density', fontsize=16)
# adjust spacing between plots
plt.subplots_adjust(left=0.125,
bottom=0.1,
right=0.9,
top=0.9,
wspace=0.35,
hspace=0.35)
#Show Plot
plt.show()
#Principal quantum number:
n = 1
#Box length:
a = 1
make_plots(a, n)
```
## 🪞 Self-Reflection
- Can you think of other physical or chemical systems where the particle-in-a-box Hamiltonian would be appropriate?
- Can you think of another property density, besides the kinetic energy density, that cannot be uniquely defined in quantum mechanics?
## 🤔 Thought-Provoking Questions
- How would the wavefunction and ground-state energy change if you made a small change in the particle-in-a-box Hamiltonian, so that the right-hand-side of the box was a little higher than the left-hand-side?
- Any system with a zero-point energy of zero is classical. Why?
- How would you compute the probability of observing an electron at the center of a box if the box contained 2 electrons? If it contained 4 electrons? If it contained 8 electrons? The probability of observing an electron at the center of a box with 3 electrons is sometimes *lower* than the probability of observing an electron at the center of a box with 2 electrons. Why?
- Demonstrate that the kinetic energy operator is linear and Hermitian.
- What is the lowest-energy excitation energy for the particle-in-a-box?
- Suppose you wanted to design a one-dimensional box containing a single electron that absorbed blue light? How long would the box be?
## ❓ Knowledge Tests
- Questions about the Particle-in-a-Box and related concepts. [GitHub Classroom Link](https://classroom.github.com/a/1Y48deKP)
## 👩🏽‍💻 Assignments
- Compute and understand expectation values by computing moments of the particle-in-a-box [assignment](https://github.com/McMasterQM/PinBox-Moments/blob/main/moments.ipynb). [GitHub classroom link](https://classroom.github.com/a/9yzWI5Vt).
- This [assignment](https://github.com/McMasterQM/Sudden-Approximation/blob/main/SuddenPinBox.ipynb) on the sudden approximation provides an introduction to time-dependent phenomena. [Github classroom link](https://classroom.github.com/a/yBzABlb-).
## 🔁 Recapitulation
- Write the Hamiltonian, time-independent Schrödinger equation, eigenfunctions, and eigenvalues for the one-dimensional particle in a box.
- Play around with the eigenfunctions and eigenenergies of the particle-in-a-box to build intuition for them.
- How would you compute the uncertainty in $x^4$?
- Practice your calculus by explicitly computing, using integration by parts, $\langle x \rangle$ and $\langle x^2 \rangle$. This can be implemented in a [Jupyter notebook](x4_mocked.ipynb).
## 🔮 Next Up...
- Postulates of Quantum Mechanics
- Multielectron particle-in-a-box.
- Multidimensional particle-in-a-box.
- Harmonic Oscillator
## 📚 References
My favorite sources for this material are:
- [Randy's book](https://github.com/PaulWAyers/IntroQChem/blob/main/documents/DumontBook.pdf) has an excellent treatment of the particle-in-a-box model, including several extensions to the material covered here. (Chapter 3)
- Also see my (pdf) class [notes](https://github.com/PaulWAyers/IntroQChem/blob/main/documents/PinBox.pdf).
- Also see my notes on the [mathematical structure of quantum mechanics](https://github.com/PaulWAyers/IntroQChem/blob/main/documents/LinAlgAnalogy.pdf).
- [Davit Potoyan's](https://www.chem.iastate.edu/people/davit-potoyan) Jupyter-book covers the particle-in-a-box in [chapter 4](https://dpotoyan.github.io/Chem324/intro.html), which is especially relevant here.
- D. A. McQuarrie, Quantum Chemistry (University Science Books, Mill Valley, California, 1983)
- [An excellent explanation of the link to the spectrum of cyanine dyes](https://pubs.acs.org/doi/10.1021/ed084p1840)
- Chemistry Libre Text: [one dimensional](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_for_the_Biosciences_(Chang)/11%3A_Quantum_Mechanics_and_Atomic_Structure/11.08%3A_Particle_in_a_One-Dimensional_Box)
and [multi-dimensional](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Quantum_Mechanics/05.5%3A_Particle_in_Boxes/Particle_in_a_3-Dimensional_box)
- [McQuarrie and Simon summary](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_(McQuarrie_and_Simon)/03%3A_The_Schrodinger_Equation_and_a_Particle_in_a_Box)
There are also some excellent wikipedia articles:
- [Particle in a Box](https://en.wikipedia.org/wiki/Particle_in_a_box)
- [Particle on a Ring](https://en.wikipedia.org/wiki/Particle_in_a_ring)
- [Postulates of Quantum Mechanics](https://en.wikipedia.org/wiki/Mathematical_formulation_of_quantum_mechanics#Postulates_of_quantum_mechanics)
```python
```
# PDE Project: Flow of hydroalcoholic gel with the Stokes equation
(If you run into any problem executing the code or with the images, please contact me: matthieu.briet@student-cs.fr)
### Introduction
The current health crisis pushes us to use hydroalcoholic gel more and more, sometimes from a bottle before entering a shop, for example. However, part of the gel sometimes hardens and forms a solid deposit in the outlet nozzle, which sprays some of the gel all over the place, occasionally even onto our clothes or shoes. One may then wonder whether it is possible to explain or predict these unwanted "jets" of gel.
We propose to simulate the flow of the gel in this part of the bottle:
### Modelling
We are interested here in solving the Stokes problem, which describes the flow of an incompressible viscous fluid in a narrow region where viscous effects dominate over inertial effects. The equations are given by:
\begin{equation}
\begin{cases}
-\nu \Delta \vec u +\vec {grad} ~p= \vec f \text{ on the domain } \Omega \\
div ~\vec u =0 \text{ on the domain } \Omega ~\text{(this is the incompressibility condition)}\\
\vec u=0 ~\text{on the boundary} ~\partial \Omega ~\text{(homogeneous Dirichlet boundary conditions)}
\end{cases}
\end{equation}
with
\begin{equation}
\begin{cases}
\Omega ~\text{the computational domain, specified later}\\
\vec u ~\text{the velocity field}\\
p ~\text{the pressure}\\
\vec f ~\text{the body force density}\\
\nu >0 ~\text{the kinematic viscosity of the fluid}
\end{cases}
\end{equation}
Following the PDE course, together with some additional reading (we have two equations here because of the incompressibility condition), we can write the weak formulation of the problem, obtained by multiplying by a test function and integrating:
\begin{equation}
\begin{cases}
\int_\Omega \nabla u .\nabla \Phi +\frac{1}{\nu}.\int_\Omega \nabla p .\Phi = \frac{1}{\nu}. \int_\Omega f.\Phi \\
\int_\Omega div(u).q=0
\end{cases}
\end{equation}
with $\Phi$ and $q$ test functions.
Let us now write the variational formulation of the problem with:
\begin{equation}
\begin{array}{l|rcl}
a : & W \times W \longrightarrow \mathbf{R} \\
& ((u,p),(\Phi,q)) & \longmapsto \int_{\Omega} \nabla u .\nabla \Phi +\frac{1}{\nu}. \nabla p .\Phi+div(u).q
\end{array}
\end{equation}
with $W= U \times P $ such that $u\in U $ and $p\in P$
and
\begin{equation}
\begin{array}{l|rcl}
l: & W \longrightarrow \mathbf{R} \\
& (\Phi,q) & \longmapsto \frac{1}{\nu}.\int_{\Omega} f.\Phi
\end{array}
\end{equation}
We then seek to solve the problem:
Find $(u,p)\in W $ such that $ \forall (\Phi,q) \in W , a((u,p),(\Phi,q)) =l((\Phi,q))$
One can then show that $a$ is a continuous bilinear form and $l$ a continuous linear form and, using the theory of mixed (saddle-point) problems, that the problem admits a unique solution. We now solve this problem numerically with FEniCS.
It remains to specify the values used in our study:
\begin{equation}
\begin{cases}
\nu= 3500 mm^{2}/s \text{ (kinematic viscosity of the gel)}\\
u_{0} = 0.1 m/s \text{ (initial velocity of the gel)}\\
p_{sortie}= 1 bar \text{ (outlet pressure)} \\
\end{cases}
\end{equation}
### Numerical study with FEniCS
<p style="color:red;"> Loading the modules </p>
```python
from dolfin import *
from fenics import *
import matplotlib.pyplot as plt
from mshr import *
from __future__ import print_function
import random
```
<p style="color:red;"> Définition des constantes </p>
```python
# define the constants here
nx=10
ny=10
X1=0
Y1=2
X2=1.5
Y2=3
X3=1.5
Y3=2
X4=3
Y4=2.5
nbre_pts=50
nu=0.035
```
<p style="color:red;"> Définition du maillage </p>
```python
rectangle1=Rectangle(Point(X1,Y1),Point(X2,Y2))
rectangle2=Rectangle((Point(X3,Y3)),Point(X4,Y4))
mon_maillage=generate_mesh(rectangle1+rectangle2,nbre_pts)
plt.figure(1)
plot(mon_maillage,title="mesh ")
```
<p style ="color:red;"> Définition de l'espace de travail </p>
```python
U=VectorElement("Lagrange",mon_maillage.ufl_cell(),2) # pour des raisons de stabilité la vitesse ne peux pas être modélisés par des elements finis P1
P=FiniteElement("Lagrange",mon_maillage.ufl_cell(),1)
W=FunctionSpace(mon_maillage,U*P) # W.sub(0) for the velocities and W.sub(1) for the pressures
```
<p style ="color:red;"> Condition aux limites </p>
```python
tol=1e-14
def condition_gauche(x,on_boundary):
return on_boundary and (x[0]<X1+tol and (x[1]<3+tol or x[1]>2-tol))
def condition_droite(x,on_boundary):
return on_boundary and (x[0]>X4-tol)
def condition_haut_bas(x,on_boundary):
return on_boundary and ((x[1]<tol+2) or (x[1]>3-tol and (x[0]<1.5+tol or x[0]>-tol)) or (x[1]>2.5-tol and (x[0]<3+tol or x[0]>1.5-tol)))
#the gel leaves the nozzle of the bottle at an imposed pressure p_sortie
p_sortie=Constant(1e5)
bc_pression=DirichletBC(W.sub(1),p_sortie,condition_droite) #the pressure is applied on the pressure subspace, here W.sub(1)
#zero velocity on the horizontal edges and on the vertical edge
v0=Expression(("0.0","0.0"),degree=2)
bcV0_haut_bas=DirichletBC(W.sub(0),v0,condition_haut_bas)
vitesse_entree=0.1
gel_entree=Expression((str(vitesse_entree),"0.0"),degree=2)
bc_gel_entree=DirichletBC(W.sub(0),gel_entree,condition_gauche)
bc_total=[bc_pression,bcV0_haut_bas,bc_gel_entree]
```
<p style ="color:red;"> Formulation variationnelle </p>
```python
u,p=TrialFunctions(W)
v,q= TestFunctions(W)
f=Constant((0.0,0.0))
a= inner(grad(u),grad(v))*dx+1/nu*inner(v,grad(p))*dx+1/nu*q*div(u)*dx
l=1/nu*inner(f,v)*dx
```
<p style ="color:red;"> Résolution du système </p>
```python
u=Function(W)
solve(a==l,u,bc_total)
u_f=u.split()[0]
plot(u_f,title="vitesse",scale=2)
```
Here we see a normal flow of the gel in the dispensing nozzle, with a strong increase of the fluid velocity in the narrow neck. This corresponds to the Venturi effect observed in fluid mechanics, so our fluid behaves quite classically here, without the flow being disturbed by any deposit. Note in the simulation above that the fluid velocity is zero on the walls and is therefore, logically, highest along the centreline of the flow.
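Since the Venturi effect is above all a statement about pressure, it can also be instructive to look at the pressure field. The following short snippet is an addition to the original notebook; it simply reuses the mixed solution `u` computed above and the same legacy `plot` helper.
```python
# Extract and plot the pressure component of the mixed solution computed above
p_f = u.split()[1]
plot(p_f, title="pressure")
```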
### Taking a random parameter into account
```python
Ox1=2.7
Oy1=2
Ox2=2.9
Oy2=random.uniform(2,2.5)
```
```python
rectangle1=Rectangle(Point(X1,Y1),Point(X2,Y2))
rectangle2=Rectangle((Point(X3,Y3)),Point(X4,Y4))
obstacle= Rectangle(Point(Ox1,Oy1),Point(Ox2,Oy2))
mon_maillage2=generate_mesh(rectangle1+rectangle2-obstacle,nbre_pts)
plt.figure(2)
plot(mon_maillage2,title="mesh2")
```
```python
U=VectorElement("Lagrange",mon_maillage2.ufl_cell(),2) # pour des raisons de stabilité la vitesse ne peux pas être modélisés par des elements finis P1
P=FiniteElement("Lagrange",mon_maillage2.ufl_cell(),1)
W=FunctionSpace(mon_maillage2,U*P) # W.sub(0) for the velocities and W.sub(1) for the pressures
tol=1e-14
def condition_gauche(x,on_boundary):
return on_boundary and (x[0]<X1+tol)
def condition_droite(x,on_boundary):
return on_boundary and (x[0]>X4-tol)
def condition_haut_bas(x,on_boundary):
return on_boundary and ((x[1]<tol+2) or (x[1]>3-tol and (x[0]<1.5+tol or x[0]>-tol)) or (x[1]>2.5-tol and (x[0]<3+tol or x[0]>1.5-tol)))
def condition_obstacle(x,on_boundary):
return on_boundary and x[0]<Ox2+tol and x[0]>Ox1-tol and x[1]<Oy2+tol and x[1]>Oy1-tol
#the gel leaves the nozzle of the bottle at an imposed pressure p_sortie
p_sortie=Constant(1e5)
bc_pression=DirichletBC(W.sub(1),p_sortie,condition_droite) #the pressure is applied on the pressure subspace, here W.sub(1)
#zero velocity on the horizontal edges, on the vertical edge and on the obstacle
v0=Expression(("0.0","0.0"),degree=2)
bcV0_haut_bas=DirichletBC(W.sub(0),v0,condition_haut_bas)
bc_obstacle=DirichletBC(W.sub(0),v0,condition_obstacle)
#velocity vector for the gel at the inlet
vitesse_entree=0.1
gel_sortie=Expression((str(vitesse_entree),"0.0"),degree=2)
bc_gel_entree=DirichletBC(W.sub(0),gel_sortie,condition_gauche)
bc_total=[bc_pression,bcV0_haut_bas,bc_obstacle,bc_gel_entree]
u,p=TrialFunctions(W)
v,q= TestFunctions(W)
f=Constant((0.0,0.0))
a= inner(grad(u),grad(v))*dx+1/nu*inner(v,grad(p))*dx+1/nu*q*div(u)*dx
l=1/nu*inner(f,v)*dx
u=Function(W)
solve(a==l,u,bc_total)
u_f=u.split()[0]
p=u.split()[1]
plot(u_f,title="vitesse",scale=2)
```
In the case of a solid deposit of variable size (the random component here: we vary the position, hence the size, of the solid deposit), we see the flow being disturbed, with velocity vectors pointing in directions that are no longer aligned with the flow (I recommend running the code several times to see how things change with the size of the deposit). The behaviour of the fluid is therefore perturbed, which can explain gel being sprayed in unwanted directions.
A follow-up project could look at the force exerted on this solid deposit and how it evolves over repeated uses (it may eventually be flushed out), and thus determine when the flow becomes "normal" again, i.e. without spraying. Having tried the experiment myself, the solid deposit cleared after about 5 uses of the gel (no gel was wasted for this experiment). Moreover, a new solid deposit started to form in my bottle after roughly 3h30. Of course, all these figures can vary with the type of gel, its composition, the temperature and pressure conditions, and the force applied to the bottle to dispense the gel.
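As a pointer for that follow-up question, here is a hypothetical sketch (not part of the original project) of how the horizontal force exerted by the gel on the deposit could be estimated with legacy FEniCS, by integrating the traction $\sigma \cdot n$ over the obstacle boundary. It reuses the mesh `mon_maillage2`, the marker `condition_obstacle` and the mixed solution `u` from the cells above; the dynamic viscosity `mu_dyn` is an assumed placeholder value.
```python
# Hypothetical post-processing sketch: horizontal force of the gel on the solid deposit.
mu_dyn = 1.0  # assumed dynamic viscosity (rho*nu in consistent units) - placeholder value
u_f, p_f = u.split()
n = FacetNormal(mon_maillage2)
# mark the obstacle boundary so the traction is integrated over it only
boundaries = MeshFunction("size_t", mon_maillage2, mon_maillage2.topology().dim() - 1, 0)
AutoSubDomain(condition_obstacle).mark(boundaries, 1)
ds_obs = Measure("ds", domain=mon_maillage2, subdomain_data=boundaries)
# Cauchy stress of an incompressible Newtonian fluid: sigma = 2*mu*sym(grad(u)) - p*I
sigma = 2.0 * mu_dyn * sym(grad(u_f)) - p_f * Identity(2)
force_x = assemble(dot(sigma, n)[0] * ds_obs(1))
print("Horizontal force on the deposit (arbitrary units):", force_x)
```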
### Bibliography:
https://fr.wikipedia.org/wiki/Écoulement_de_Stokes
http://www.iecl.univ-lorraine.fr/~Jean-Francois.Scheid/Enseignement/polyNS2017_18.pdf
https://courses.ex-machina.ma/downloads/gm-2/s7/M_E_F/EFM_Stokes.pdf
https://fenicsproject.org/olddocs/dolfin/1.3.0/python/demo/documented/stokes-iterative/python/documentation.html
```python
```
# Numerical differentiation
Although the derivative of a function can be obtained algorithmically in analytic form, the numerical algorithms we use may depend on many derivatives, or may have access to nothing but the original function.
Therefore, we will commonly use a numerical approximation of the derivative instead of the true value.
## Mathematical prelude: Taylor series
Let $f:\mathbb{R} \to \mathbb{R}$ be $k$ times differentiable at a point $a$. Then:
$$
f(x) = f(a) + f'(a)(x-a)+ \frac{f''(a)}{2!} (x-a)^2 + \ldots + \frac{f^{(k)}(a)}{k!}(x-a)^k + R_{k}(x)
$$
and $R_{k}(x)$ satisfies
\begin{equation}
\lim_{x\to a} \frac{R_{k}(x)}{(x-a)^k} = 0 \quad \left( R_k(x) \sim (x-a)^{k+1}\right)
\label{eq1}
\end{equation}
In approximate form, we have:
$$
f(x) \approx f(a) + f'(a)(x-a)+ \frac{f''(a)}{2!} (x-a)^2 + \ldots + \frac{f^{(k)}(a)}{k!}(x-a)^k
$$
### Explicit form of $R_k(x)$
Using the mean value theorem, we can express $R_k(x)$ as
$$
R_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!} (x-a)^{k+1}
$$
with $\xi$ a **fixed** real number between $x$ and $a$. With this expression, it is clear that the condition $\lim_{x\to a} \frac{R_{k}(x)}{(x-a)^k} = 0 $ is satisfied.
### Rewriting the theorem:
Substitution: $x-a = h \implies x = a+h$
$$
f(a+h) = f(a) + f'(a)(h)+ \frac{f''(a)}{2!} (h)^2 + \ldots + \frac{f^{(k)}(a)}{k!}(h)^k + R_{k}(a+h)
$$
and $R_{k}(a+h)$ satisfies
$$
\lim_{h\to 0} \frac{R_{k}(a+h)}{h^k} = 0 \quad \left( R_k(a+h) \sim h^{k+1}\right)
$$
## How do we approximate the derivative?
$$
f'(a) = \lim_{h\to 0 } \frac{f(a+h) - f(a)}{h}
$$
**We cannot take limits on a computer**. So what do we do? We fix a very small value of $h$ and approximate the value. This is a **finite difference** approximation.
## Computational prelude: functions as objects
Although in previous classes we have understood functions as abstract objects, we can treat them as objects with an arbitrary, well-defined type. For now we cannot explain in depth what the type of a function is, so we will simply proceed to use them as objects: assigning a function to a variable, passing it to another function as an argument, etc.
```julia
# example of assignment
function f(x)
return sin(3*x^2)
end
```
f (generic function with 1 method)
```julia
println(f(45))
# assign to the variable `g` the function represented by `f`
g = f
# `g` prints the same value as `f` when evaluated
println(g(45))
```
-0.7447712875753175
-0.7447712875753175
```julia
# example of using a function as the argument of another function
function evaluarEn3(func)
return func(3)
end
```
evaluarEn3 (generic function with 1 method)
```julia
println(evaluarEn3(sin))
println(sin(3))
```
0.1411200080598672
0.1411200080598672
```julia
println(evaluarEn3(exp))
println(exp(3))
```
20.085536923187668
20.085536923187668
## One-dimensional functions
### First approximation: forward difference
Let $h > 0$, $h \ll a$
$$
f(a+h) = f(a) + f'(a)h + R_{1}(a+h)
$$
Solving for the derivative:
$$
f'(a) = \frac{f(a+h)-f(a)}{h} - \frac{R_{1}(a+h)}{h} \approx \frac{f(a+h)-f(a)}{h}
$$
$$
\lim_{h \to 0} \frac{R_1(a+h)}{h} = 0
$$
Note that the absolute error of our approximation ($f'(a) - \frac{f(a+h) - f(a)}{h}$) is given by the term $ \frac{R_1 (a+h)}{h}$, which we know is proportional to $h$ (since $R_1(a+h) \sim h^2$).
## Notation:
In the complexity class, we introduced the notation $\mathcal{O}(f(n))$ to say that the time or space complexity of an algorithm is bounded above by a function of the form $C \cdot f(n)$.
In **numerical analysis**, the same notation is used for numerical errors. In the case of the forward difference, from the form of the absolute error, we know that it is $\mathcal{O}(h)$.
In general, when the **approximation error** is proportional to $h^p$, we say that our numerical algorithm is of order $\mathcal{O}(h^p)$.
You can find more information about these notations [at this link](https://courses.engr.illinois.edu/cs357/fa2019/references/ref-2-error/).
```julia
using Plots
```
```julia
function primera(x)
return sin(2*pi*x)
end
```
primera (generic function with 1 method)
```julia
xs = range(0,stop=1,length=100)
ys = [primera(x) for x in xs]
plot(xs,ys,title="funcion a derivar")
```
We want to implement the approximation
$$
f'(a) = \frac{f(a+h) - f(a)}{h}
$$
```julia
# this function computes the derivative of f at a point a using the forward difference with step h
function difAdelante(f,a,h)
return (f(a+h) - f(a)) / h
end
```
difAdelante (generic function with 1 method)
We know that
$$
primera'(x) = (\sin{2\pi x})' = 2 \pi \cos{2 \pi x}
$$
We can compare the numerical result with the analytic one
```julia
# analytic derivative of the function `primera`
function dprimera(x)
return 2*pi*cos(2*pi*x)
end
```
dprimera (generic function with 1 method)
```julia
println(difAdelante(primera,0,0.01))
println(dprimera(0))
# look at the absolute error
println(abs(dprimera(0)-difAdelante(primera,0,0.01)))
```
6.279051952931337
6.283185307179586
0.004133354248248899
```julia
# changing the value of $h$ changes the approximation
h = 0.001
println(difAdelante(primera,0,h))
println(dprimera(0))
# look at the absolute error
println(abs(dprimera(0)-difAdelante(primera,0,h)))
```
6.283143965558951
6.283185307179586
4.134162063529345e-5
```julia
# look at the derivative as a function
xs = range(0,stop=1,length=101)
ys1 = [dprimera(x) for x in xs]
h = 0.001
ys2 = [difAdelante(primera,x,h) for x in xs]
plot(xs,ys1,label="Valor analítico",title="Derivada")
plot!(xs,ys2,label="Diferencia hacia adelante")
```
## Exercises
1. Make a plot comparing the analytic derivative of the function $f(x) = \sin{x} \cdot \cos{x}$ with the one obtained by forward difference. Take the interval $[-\pi,\pi]$ as the domain.
2. For the function of the previous item, fix a value of $x$ and plot the absolute error between the analytic and the numerical derivative as a function of $h$. What do you observe? Is it what you expected? Use a log-log scale if necessary.
#### What other approximations are there?
### Second approximation: backward difference
Let $h > 0$, $h \ll a$
$$
f(a-h) = f(a) + f'(a)(-h) + R_{1}(a-h)
$$
Solving for the derivative:
$$
f'(a) = \frac{f(a-h)-f(a)}{-h} - \frac{R_{1}(a-h)}{-h} = \frac{f(a)-f(a-h)}{h} + \frac{R_{1}(a-h)}{h}
$$
We then obtain:
$$
f'(a) \approx \frac{f(a)-f(a-h)}{h}
$$
$$
\lim_{h \to 0} \frac{R_1(a-h)}{h} = 0
$$
## Exercises
3. Define a function called `diferenciaAtras(f,a,h)` that approximates the derivative of $f$ at $a$, $f'(a)$, using a backward difference of step $h$
### Third approximation: central difference
First we expand $f(a+h)$ and $f(a-h)$ in Taylor polynomials of order 2:
$$
f(a+h) = f(a) + f'(a) h + \frac{f''(a)}{2!} h^2 + R_2 (a+h)
$$
$$
\begin{split}
f(a-h) &= f(a) + f'(a)(- h) + \frac{f''(a)}{2!} (-h)^2 + R_2 (a-h) \\
&= f(a) - f'(a)h + \frac{f''(a)}{2!} h^2 + R_2 (a-h)
\end{split}
$$
We can subtract the two expansions:
$$
\begin{split}
f(a+h) - f(a-h) &= (f(a) - f(a)) + (f'(a)h - (-f'(a) h)) + (\frac{f''(a)}{2!} h^2 - \frac{f''(a)}{2!} h^2) + (R_2(a+h) - R_2(a-h)) \\
&= 2 f'(a) h + (R_2(a+h) - R_2(a-h))
\end{split}
$$
Solving for the derivative:
$$
f'(a) = \frac{f(a+h) - f(a-h)}{2h} - \frac{R_2(a+h) - R_2(a-h)}{2h} \approx \frac{f(a+h) - f(a-h)}{2h}
$$
$$
\lim_{h \to 0} \frac{R_2(a+h) - R_2(a-h)}{2h} = 0
$$
## Exercises:
4. Define a function called `diferenciaCentrada(f,a,h)` that approximates the derivative of $f$ at $a$, $f'(a)$, using a central difference of step $h$
For the following functions, make two plots: one comparing the derivatives obtained analytically, by forward difference, by backward difference and by central difference, and another analysing the absolute error as a function of $h$ at a fixed point. You may choose the domain of the first plot.
5. $f_1(x) = 10x^2 + 6x - 1$
6. $f_2(x) = \sin{\left( \cos{(6x+2)} \right)}$
7. $f_3(x) = 2^{x \cdot \sin{x}}$
```julia
function ejer7(x)
return 2^(x*sin(x))
end
```
```julia
using Plots
```
```julia
function difCentrada(f,a,h)
return (f(a+h) - f(a-h))/(2*h)
end
```
## Higher-order derivatives
We can follow the same scheme to obtain approximations of higher-order derivatives. This is not a simple process in general, but for order 2 it can be done very easily.
## Exercises
8. Add the order-2 Taylor expansions of $f(a+h)$ and $f(a-h)$ to obtain an expression for the second derivative $f''(a)$. What is its approximation error?
9. Check the approximation error you obtained by analysing the absolute error in the second derivative of the function $\sin{(6x^2+1)}$ at a fixed point of the interval $[1,3]$
## Multidimensional functions
We can also use finite differences to approximate the partial derivatives of a function $f:\mathbb{R^n} \to \mathbb{R}$. Writing expressions for higher-order derivatives becomes complicated, since it requires multidimensional Taylor expansions, but for the first partial derivatives it is enough to recall the definition
$$
\frac{\partial f}{\partial x_k} (a_1, \ldots, a_n) = \lim_{h \to 0} \frac{f(a_1, \ldots, a_k + h, \ldots, a_n) - f(a_1, \ldots, a_k, \ldots, a_n)}{h}
$$
## Exercises
10. Define a function `devParcial(f,i,A,h)`, where $f:\mathbb{R^n} \to \mathbb{R}$ is a multivariate function that takes a single array as argument, $A$ is an array of length $n$ and $1\leq i \leq n$ is a natural number, that returns the partial derivative $\frac{\partial f}{\partial x_i} (A)$. Test it with a function of your choice.
11. Let $g:\mathbb{R}^2 \to \mathbb{R}$; obtain a finite-difference expression for the mixed derivative $\frac{\partial^2 g}{\partial x \partial y}$. What is the order of its error?
## The big picture:
There is a way to derive arbitrary finite-difference approximations that are $\mathcal{O}(h^p)$ for whatever $p \in \mathbb{N}$ we want. That method is beyond the scope of this course, but you can find it in LeVeque's book.
There is also a way to compute derivatives exactly, called **automatic differentiation**. We will not cover that topic, as it requires more advanced programming knowledge.
```julia
```
```python
%reset -f
```
```python
from sympy import *
```
```python
init_printing()
```
## Define variables
```python
x,y,z = symbols('x y z')
```
```python
f = sin(x)
```
## Differentiate
```python
diff(f,x)
```
## Integrate
```python
integrate(f, x)
```
```python
integrate(f, [x,0,pi])
```
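## Partial derivatives
The symbols `y` and `z` declared above work the same way; this extra cell (an addition to the original note) shows a partial derivative and a mixed second derivative.
```python
g = x**2*y + sin(z)

# dg/dy and the mixed derivative d^2 g / (dx dy)
diff(g, y), diff(g, x, y)
```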
```python
```
# Lecture 4
## Differentiation II:
### Product, Chain and Quotient Rules
```python
import numpy as np
import sympy as sp
sp.init_printing()
##################################################
##### Matplotlib boilerplate for consistency #####
##################################################
from ipywidgets import interact
from ipywidgets import FloatSlider
from matplotlib import pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
global_fig_width = 10
global_fig_height = global_fig_width / 1.61803399
font_size = 12
plt.rcParams['axes.axisbelow'] = True
plt.rcParams['axes.edgecolor'] = '0.8'
plt.rcParams['axes.grid'] = True
plt.rcParams['axes.labelpad'] = 8
plt.rcParams['axes.linewidth'] = 2
plt.rcParams['axes.titlepad'] = 16.0
plt.rcParams['axes.titlesize'] = font_size * 1.4
plt.rcParams['figure.figsize'] = (global_fig_width, global_fig_height)
plt.rcParams['font.sans-serif'] = ['Computer Modern Sans Serif', 'DejaVu Sans', 'sans-serif']
plt.rcParams['font.size'] = font_size
plt.rcParams['grid.color'] = '0.8'
plt.rcParams['grid.linestyle'] = 'dashed'
plt.rcParams['grid.linewidth'] = 2
plt.rcParams['lines.dash_capstyle'] = 'round'
plt.rcParams['lines.dashed_pattern'] = [1, 4]
plt.rcParams['xtick.labelsize'] = font_size
plt.rcParams['xtick.major.pad'] = 4
plt.rcParams['xtick.major.size'] = 0
plt.rcParams['ytick.labelsize'] = font_size
plt.rcParams['ytick.major.pad'] = 4
plt.rcParams['ytick.major.size'] = 0
##################################################
```
## Wake Up Exercise
When $y = ax^n$, $y' = a n x^{n-1}$. So find $y'(x)$, when:
(a) $y = 6$
(b) $y = x$
(c) $y = 13x -1$
(d) $y = \sqrt{x^7}$
(e) $y = -\frac{2}{x}$
## Linear approximation and the derivative
By definition, the derivative of a function $f(x)$ is
$f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h}$
This means that for small $h$, this expression approximates the derivative. By rearranging this, we have
$f(x + h) \approx f(x) + h f'(x)$
In other words, $f(x+h)$ can be approximated by starting at $f(x)$ and moving a distance $h$ along the tangent $f'(x)$.
### Example
Estimate $\sqrt{5218}$
To do this, we first need a point where $f(x)$ is easy to calculate and which is close to 5218. For this we can use the fact that $70^2 = 4900$.
To calculate the approximation, we need $\;f'(x)\;$, where $\;f(x) = \sqrt{x}\;$.
$$f'(x) = \frac{1}{2 \sqrt{x}}$$
Since 5218 = 4900 + 318, we can set $x = 4900$ and $h = 318$.
Using the approximation, we have:
$f(5218) \approx f(4900) + 318 \times f'(4900)$
$f(5218) \approx 70 + 318 \times \frac{1}{140} \approx 72.27$
$\sqrt{5218} = 72.2357252$ - not a bad approximation!
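We can check this quickly in code; this added cell just evaluates the linear-approximation formula (NumPy was imported at the top of the notebook).
```python
import numpy as np

f = np.sqrt
f_prime = lambda x: 1 / (2 * np.sqrt(x))

a0, h0 = 4900, 318
approx = f(a0) + h0 * f_prime(a0)
print(approx, np.sqrt(5218), abs(approx - np.sqrt(5218)))
```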
## Standard derivatives
It's useful to know the derivatives of all the standard functions, and some basic rules.
$\frac{d}{dx} (x^n) = n x^{n-1}$
$\frac{d}{dx} (\sin x) = \cos x$
$\frac{d}{dx} (\cos x) = -\sin x$
$\frac{d}{dx} (e^x) = e^x$
To understand the derivative of sin and cos, consider their graphs, and when they are changing positively (increasing), negatively (decreasing) or not at all (no rate of change).
```python
x = np.linspace(0, 8, 100)
y_1 = np.sin(x)
y_2 = np.cos(x)
plt.plot(x, y_1, label = 'sin(x)')
plt.plot(x, y_2, label = 'cos(x)')
plt.legend()
```
## Other Differentiation Rules
## Differentiation of sums, and scalar multiples:
$(f(x) \pm g(x))' = f'(x) \pm g'(x)$
$(a f(x))' = a f'(x) $
## Differentiation of products
While differentiating sums, and scalar multiples is straightforward, differentiating products is more complex
$(f(x) g(x) )' \neq f'(x) g'(x)$
$(f(x) g(x) )' = f'(x) g(x) + g'(x) f(x)$
### Example
To illustrate that this works, consider $y = (2x^3 - 1)(3x^3 + 2x)$
If we expand this out, we have that $y = 6x^6 + 4x^4 - 3x^3 - 2x$
From this, clearly, $y' = 36 x^5 + 16x^3 - 9 x^2 - 2$
To use the product rule, instead we say $y = f \times g$, where $f = 2x^3 - 1$, and $g = 3x^3 + 2x$. Therefore
$f'(x) = 6x^2$
$g'(x) = 9x^2 + 2$
$y' = f'g + g'f = 6x^2 (3x^3 + 2x) + (9x^2 + 2)(2x^3 - 1)$
$y' = 18x^5 + 12x^3 + 18x^5 + 4x^3 - 9x^2 - 2 = 36x^5 + 16x^3 - 9x^2 - 2$
So both rules produce the same result. While for simple examples the product rule requires more work, as functions get more complex it saves a lot of time.
## Differentiating a function of a function - The Chain Rule
One of the most useful rules is differentiating a function that has another function inside it $y = f(g(x))$. For this we use the chain rule:
$y = f(g(x))$
$y'(x) = f'(g(x))\; g'(x) = \frac{df}{dg} \frac{dg}{dx}$
### Example 1: $y = (5x^2 + 2)^4$
We can write this as $y = g^4$, where $g = 5x^2 + 2$. Given this, we have that
$\frac{dy}{dg} = 4g^3 = 4(5x^2 + 2)^3$
$\frac{dg}{dx} = 10x$
This means that
$\frac{dy}{dx} = \frac{dy}{dg} \frac{dg}{dx} = 4 (5x^2 + 2)^3 \cdot 10 x = 40 x (5x^2 + 2)^3$
This extends infinitely to nested functions, meaning
$\frac{d}{dx}(a(b(c)) = \frac{d a}{d b} \frac{d}{dx} (b(c)) = \frac{d a}{db} \frac{d b}{dc}\frac{dc}{dx}$
## Differentiating the ratio of two functions - The Quotient Rule
If $y(x) = \frac{f(x)}{g(x)}$, then by using the product rule, and setting $h(x) = (g(x))^{-1}$, we can show that
$y'(x) = \frac{f'g - g'f}{g^2}$
### Example
$y = \frac{3x-1}{4x + 2}$
$f = 3x - 1, \rightarrow f' = 3$
$g = 4x + 2, \rightarrow g' = 4$
$y' = \frac{f'g - g'f}{g^2} = \frac{3(4x+2) - 4(3x-1)}{(4x+2)^2}$
$y' = \frac{12x + 6 - 12 x + 4}{(4x+2)^2} = \frac{10}{(4x+2)^2}$
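All three worked examples can be double-checked with SymPy (added cell; `sp` was imported at the top of this notebook).
```python
x = sp.symbols('x')

# product rule example
print(sp.expand(sp.diff((2*x**3 - 1)*(3*x**3 + 2*x), x)))   # 36*x**5 + 16*x**3 - 9*x**2 - 2
# chain rule example
print(sp.factor(sp.diff((5*x**2 + 2)**4, x)))                # 40*x*(5*x**2 + 2)**3
# quotient rule example
print(sp.simplify(sp.diff((3*x - 1)/(4*x + 2), x)))          # equivalent to 10/(4*x + 2)**2
```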
## Differentiating inverses - implicit differentiation
For any function $y = f(x)$, with a well defined inverse $f^{-1}(x)$ (not to be confused with $(f(x))^{-1}$), we have by definition that
$x = f^{-1}(f(x)) = f^{-1}(y)$.
This means that we can apply the chain rule
$\frac{d}{dx}(x) = \frac{d}{dx}(f^{-1}(y)) = \frac{d}{dy}(f^{-1}(y)) \frac{dy}{dx}$
But since $\frac{d}{dx}(x) = 1$
$\frac{d}{dy}(f^{-1}(y)) = \frac{1}{\frac{dy}{dx}}$
### Example: $y = ln(x)$
If $y = ln(x)$, this means that $f^{-1}(y) = e^y = x$
By definition ($f^{-1}(y))' = e^y$, as $e^y$ doesn't change under differentiation. This means that
$\frac{d}{dx}(ln(x)) = \frac{1}{\frac{d}{dy}(f^{-1}(y))} = \frac{1}{e^y}$
But since $y = ln(x)$:
$\frac{d}{dx}(ln(x)) = \frac{1}{e^{ln(x)}} = \frac{1}{x}$
### Example - Differentiating using sympy.
In Python, there is a special package for calculating derivatives symbolically, called sympy.
This can quickly and easily calculate derivatives (as well as do all sorts of other analytic calculations).
```python
import sympy as sp
x = sp.symbols('x') #This creates a variable x, which is symbolically represented as the string x.
# Calculate the derivative of x^2
sp.diff(x**2, x)
```
```python
sp.diff(sp.cos(x), x)
```
```python
f = (x+1)**3 * sp.cos(x**2 - 5)
sp.diff(f,x)
```
```python
f = (x+1)**3 * (x-2)**2 * (x**2 + 4*x + 1)**4
sp.diff(f, x)
```
```python
sp.expand(sp.diff(f, x)) # expand out in polynomial form
```
You can look at the documentation for Sympy to see many other possibilities (e.g. we will use Sympy to do symbolic integration later on in this course)
- https://docs.sympy.org/latest/index.html
Try out Sympy to verify your pen & paper answers to the problem sheets
# Introduction to orthogonal coordinates
In $\mathbb{R}^3$, we can think that each point is given by the
intersection of three surfaces. Thus, we have three families of curved
surfaces that intersect each other at right angles. These surfaces are
orthogonal locally, but not (necessarily) globally, and are
defined by
$$u_1 = f_1(x, y, z)\, ,\quad u_2 = f_2(x, y, z)\, ,\quad u_3=f_3(x, y, z) \, .$$
These functions should be invertible, at least locally, and we can also write
$$x = x(u_1, u_2, u_3)\, ,\quad y = y(u_1, u_2, u_3)\, ,\quad z = z(u_1, u_2, u_3)\, ,$$
where $x, y, z$ are the usual Cartesian coordinates. The curve defined by the intersection of two of the surfaces gives us
one of the coordinate curves.
## Scale factors
Since we are interested in how these surface intersect each other locally,
we want to express differential vectors in terms of the coordinates. Thus,
the differential for the position vector ($\mathbf{r}$) is given by
$$\mathrm{d}\mathbf{r} = \frac{\partial\mathbf{r}}{\partial u_1}\mathrm{d}u_1
+ \frac{\partial\mathbf{r}}{\partial u_2}\mathrm{d}u_2
+ \frac{\partial\mathbf{r}}{\partial u_3}\mathrm{d}u_3\, ,
$$
or
$$\mathrm{d}\mathbf{r} = \sum_{i=1}^3 \frac{\partial\mathbf{r}}{\partial u_i}\mathrm{d}u_i\, .$$
The factor $\partial \mathbf{r}/\partial u_i$ is a non-unitary vector that takes
into account the variation of $\mathbf{r}$ in the direction of $u_i$, and is then
tangent to the coordinate curve $u_i$. We can define a normalized basis $\hat{\mathbf{e}}_i$
using
$$\frac{\partial\mathbf{r}}{\partial u_i} = h_i \hat{\mathbf{e}}_i\, .$$
The coefficients $h_i$ are functions of $u_i$ and we call them _scale factors_. They
are really important since they allow us to _measure_ distances while we move along
our coordinates. We would need them to define vector operators in orthogonal coordinates.
When the coordinates are not orthogonal we would need to use the [metric tensor](https://en.wikipedia.org/wiki/Metric_tensor), but we are going to restrict ourselves to orthogonal systems.
Hence, we have the following
$$\begin{align}
&h_i = \left|\frac{\partial\mathbf{r}}{\partial u_i}\right|\, ,\\
&\hat{\mathbf{e}}_i = \frac{1}{h_i} \frac{\partial \mathbf{r}}{\partial u_i}\, .
\end{align}$$
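Before using the ready-made helpers below, it can help to apply these definitions once by hand. The following added sketch computes the scale factors of ordinary spherical coordinates directly from $h_i = |\partial\mathbf{r}/\partial u_i|$ with plain SymPy.
```python
import sympy as sym

r, theta, phi = sym.symbols("r theta phi", positive=True)

# position vector of spherical coordinates written in Cartesian components
pos = sym.Matrix([r*sym.sin(theta)*sym.cos(phi),
                  r*sym.sin(theta)*sym.sin(phi),
                  r*sym.cos(theta)])

# scale factors h_i = |d r / d u_i|
h = [sym.simplify(pos.diff(u).norm()) for u in (r, theta, phi)]
h  # expect [1, r, r*sin(theta)] (the last may appear as r*Abs(sin(theta)))
```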
## Curvilinear coordinates available
The following coordinate systems are available:
- Cartesian;
- Cylindrical;
- Spherical;
- Parabolic cylindrical;
- Parabolic;
- Paraboloidal;
- Elliptic cylindrical;
- Oblate spheroidal;
- Prolate spheroidal;
- Ellipsoidal;
- Bipolar cylindrical;
- Toroidal;
- Bispherical; and
- Conical.
To obtain the transformation for a given coordinate system we can use
the function `transform_coords` in the `vector` module.
```python
import sympy as sym
from continuum_mechanics import vector
```
First, we define the variables for the coordinates $(u, v, w)$.
```python
sym.init_printing()
u, v, w = sym.symbols("u v w")
```
And, we compute the coordinates for the **parabolic** system using ``transform_coords``.
The first parameter is a string defining the coordinate system and the second is
a tuple with the coordinates.
```python
vector.transform_coords("parabolic", (u, v, w))
```
The scale factors for the coordinate systems mentioned above are available.
We can compute them for bipolar cylindrical coordinates. The coordinates
are defined by
$$\begin{align}
&x = a \frac{\sinh\tau}{\cosh\tau - \cos\sigma}\, ,\\
&y = a \frac{\sin\sigma}{\cosh\tau - \cos\sigma}\, ,\\
&z = z\, ,
\end{align}$$
and have the following scale factors
$$h_\sigma = h_\tau = \frac{a}{\cosh\tau - \cos\sigma}\, ,$$
and $h_z = 1$.
```python
sigma, tau, z, a = sym.symbols("sigma tau z a")
z = sym.symbols("z")
scale = vector.scale_coeff_coords("bipolar_cylindrical", (sigma, tau, z), a=a)
scale
```
Finally, we can compute vector operators for different coordinates.
The Laplace operator for the bipolar cylindrical system is given by
$$
\nabla^2 \phi =
\frac{1}{a^2} \left( \cosh \tau - \cos\sigma \right)^{2}
\left(
\frac{\partial^2 \phi}{\partial \sigma^2} +
\frac{\partial^2 \phi}{\partial \tau^2}
\right) +
\frac{\partial^2 \phi}{\partial z^2}\, ,$$
and we can compute it using the function ``lap``. For this function,
the first parameter is the expression that we want to compute the
Laplacian for, the second parameter is a tuple with the coordinates
and the third parameter is a tuple with the scale factors.
```python
phi = sym.symbols("phi", cls=sym.Function)
lap = vector.lap(phi(sigma, tau, z), coords=(sigma, tau, z), h_vec=scale)
sym.simplify(lap)
```
## 1. A Numerical Solution to The Heat Equation
*By Parnian Kassraie*
***
***
*For solving this problem you don't need to know anything outside the course's syllabus. But if you are interested and you haven't passed Engineering Mathematics yet, you can read about the Heat Equation from [here.](https://en.wikipedia.org/wiki/Heat_equation)*
```latex
%%latex
\begin{equation}
\frac{\partial^{2} T}{\partial x^{2}}+\frac{\partial^{2} T}{\partial y^{2}}=0 \\
x,y\in[0,10] \\
T(x,10)=100,\space T(x,0)=T(0,y)=T(10,y)=0 \\
\end{equation}
```
In this problem, we want to solve the heat equation for a square metal plate. As stated above, the edges are kept at constant temperatures and we are solving the problem in the steady state.
### 1.1 Discretizing the Equation
***
We have to choose a finite number of points inside the metal square and calculate the temperature for each of these points. In other words, we break the continuous intervals, $x,y\in[0,10]$, and set $x,y$ to be :
$$x=n\Delta x, \space y = m \Delta y: \space \space n,m\in \mathbb{N},\space \text{and} \space n,m \leq K$$
* a) Set $\Delta x, \Delta y$.
```python
Deltax =
Deltay =
```
* b) Write the heat equation for discrete coordinates.
```latex
%%latex
\begin{equation}
....
\end{equation}
```
* c) We define $T_{i,j} = T(x_i,y_j)$ where $x_i = i\Delta x,\space y_j=j\Delta y$. Write $T_{i,j}$ using only: $ T_{i+1,j}, T_{i-1,j},T_{i,j+1},T_{i,j-1}$
```latex
%%latex
\begin{equation}
....
\end{equation}
```
### 1.2 Solving The Discrete Equation
***
* a) We should choose an initial value for each of the points to solve the equation iteratively. Wisely choose a constant value for all the points inside the metal square!
```python
T0 =
```
* b) State why you chose the value above. (Bonus point for wiser choices)
* c) Using what you have calculated, write a program that solves the given PDE.
```python
# Import Packages:
#Initialize the rest of the variables:
# Write the iterative code that solves the equation:
```
### 1.3 Plotting The Results
---
* a) Plot the steady state solution using a heatmap. Your result should look like [this](https://github.com/svarthafnyra/Numercial-Methods-for-Understanding-Stock-Market/blob/master/untitled.png), with a higher resolution of course.
```python
```
```python
import numpy as np
from scipy.integrate import odeint
import numpy as np
from sympy import symbols,sqrt,sech,Rational,lambdify,Matrix,exp,cosh,cse,simplify,cos,sin
from sympy.vector import CoordSysCartesian
from theano.scalar.basic_sympy import SymPyCCode
from theano import function
from theano.scalar import floats
from IRI import *
from Symbolic import *
from ENUFrame import ENU
import astropy.coordinates as ac
import astropy.units as au
import astropy.time as at
class Fermat(object):
def __init__(self,nFunc=None,neFunc=None,frequency = 120e6, type = 'r'):
self.frequency = frequency#Hz
self.type = type
if nFunc is not None:
self.nFunc = nFunc
self.neFunc = self.n2ne(nFunc)
self.eulerLambda, self.jacLambda = self.generateEulerEqnsSym(self.nFunc)
return
if neFunc is not None:
self.neFunc = neFunc
self.nFunc = self.ne2n(neFunc)
self.eulerLambda, self.jacLambda = self.generateEulerEqnsSym(self.nFunc)
return
def ne2n(self,neFunc):
'''Analytically turn electron density to refractive index. Assume ne in m^-3'''
self.neFunc = neFunc
#wp = 5.63e4*np.sqrt(ne/1e6)/2pi#Hz^2 m^3 lightman p 226
fp2 = 8.980**2 * neFunc
self.nFunc = sqrt(Rational(1) - fp2/self.frequency**2)
return self.nFunc
def n2ne(self,nFunc):
"""Get electron density in m^-3 from refractive index"""
self.nFunc = nFunc
self.neFunc = (Rational(1) - nFunc**2)*self.frequency**2/8.980**2
return self.neFunc
def euler(self,pr,ptheta,pphi,r,theta,phi,s):
N = np.size(pr)
euler = np.zeros([7,N])
i = 0
while i < 7:
euler[i,:] = self.eulerLambda[i](pr,ptheta,pphi,r,theta,phi,s)
i += 1
return euler
def eulerODE(self,y,r):
'''return prdot,pthetadot,pphidot,rdot,thetadot,phidot,sdot for the state vector y = [pr,ptheta,pphi,r,theta,phi,s]'''
e = self.euler(y[0],y[1],y[2],y[3],y[4],y[5],y[6]).flatten()
return e
def jac(self,pr,ptheta,pphi,r,theta,phi,s):
N = np.size(pr)
jac = np.zeros([7,7,N])
i = 0
while i < 7:
j = 0
while j < 7:
jac[i,j,:] = self.jacLambda[i][j](pr,ptheta,pphi,r,theta,phi,s)
j += 1
i += 1
return jac
def jacODE(self,y,z):
'''return d ydot / d y'''
j = self.jac(y[0],y[1],y[2],y[3],y[4],y[5],y[6]).reshape([7,7])
#print('J:',j)
return j
def generateEulerEqnsSym(self,nFunc=None):
'''Generate function with call signature f(t,y,*args)
and accompanying jacobian jac(t,y,*args), jac[i,j] = d f[i] / d y[j]'''
if nFunc is None:
nFunc = self.nFunc
r,phi,theta,pr,pphi,ptheta,s = symbols('r phi theta pr pphi ptheta s')
if self.type == 'r':
sdot = sqrt(Rational(1) + (ptheta / r / pr)**Rational(2) + (pphi / r / sin(theta) / pr)**Rational(2))
prdot = nFunc.diff('r')*sdot + nFunc/sdot * ((ptheta/r/pr)**Rational(2)/r + (pphi/r /sin(theta)/pr)**Rational(2)/r)
pthetadot = nFunc.diff('theta')*sdot + nFunc/sdot * cos(theta) /sin(theta) *(pphi/r/sin(theta))**Rational(2)
pphidot = nFunc.diff('phi')*sdot
rdot = Rational(1)
thetadot = ptheta/pr/r**Rational(2)
phidot = pphi/pr/(r*sin(theta))**Rational(2)
if self.type == 's':
sdot = Rational(1)
prdot = nFunc.diff('r') + (ptheta**Rational(2) + (pphi/sin(theta))**Rational(2))/nFunc/r**Rational(3)
pthetadot = nFunc.diff('theta') + cos(theta) / sin(theta)**Rational(3) * (pphi/r)**Rational(2)/nFunc
pphidot = nFunc.diff('phi')
rdot = pr/nFunc
thetadot = ptheta/nFunc/r**Rational(2)
phidot = pphi/nFunc/(r*sin(theta))**Rational(2)
eulerEqns = (prdot,pthetadot,pphidot,rdot,thetadot,phidot,sdot)
euler = [lambdify((pr,ptheta,pphi,r,theta,phi,s),eqn,"numpy") for eqn in eulerEqns]
self.eulerLambda = euler
jac = []
for eqn in eulerEqns:
#print([eqn.diff(var) for var in (px,py,pz,x,y,z,s)])
jac.append([lambdify((pr,ptheta,pphi,r,theta,phi,s),eqn.diff(var),"numpy") for var in (pr,ptheta,pphi,r,theta,phi,s)])
self.jacLambda = jac
return self.eulerLambda, self.jacLambda
def integrateRay(self,x0,direction,tmax,N=100):
'''Integrate rays from x0 in initial direction where coordinates are (r,theta,phi)'''
direction /= np.linalg.norm(direction)
r0,theta0,phi0 = x0
        rdot0,thetadot0,phidot0 = direction
        sdot = np.sqrt(rdot0**2 + r0**2 * (thetadot0**2 + np.sin(theta0)**2 * phidot0**2))
        pr0 = rdot0/sdot
        ptheta0 = r0**2 * thetadot0/sdot
        pphi0 = (r0 * np.sin(theta0))**2 * phidot0/sdot
init = [pr0,ptheta0,pphi0,r0,theta0,phi0,0]
if self.type == 'r':
tarray = np.linspace(r0,tmax,N)
if self.type == 's':
tarray = np.linspace(0,tmax,N)
#print("Integrating from {0} in direction {1} until {2}".format(x0,direction,tmax))
Y,info = odeint(self.eulerODE, init, tarray, Dfun = self.jacODE, col_deriv = 0, full_output=1)
r = Y[:,3]
theta = Y[:,4]
phi = Y[:,5]
s = Y[:,6]
return r,theta,phi,s
def testSweep():
import pylab as plt
x,y,z = symbols('x y z')
sol = SolitonModel(4)
sol.generateSolitonModel()
neFunc = sol.solitonModel
f = Fermat(neFunc = neFunc)
n = f.nFunc
theta = np.linspace(-np.pi/4.,np.pi/4.,5)
rays = []
for t in theta:
origin = ac.SkyCoord(0*au.km,0*au.km,0*au.km,frame=sol.enu).transform_to('itrs').cartesian.xyz.to(au.km).value
direction = ac.SkyCoord(np.cos(t+np.pi/2.),0,np.sin(t+np.pi/2.),frame=sol.enu).transform_to('itrs').cartesian.xyz.value
x,y,z,s = integrateRay(origin,direction,f,origin[2],7000)
rays.append({'x':x,'y':y,'z':z})
#plt.plot(x,z)
plotFuncCube(n.subs({'t':0}), *getSolitonCube(sol),rays=rays)
#plt.show()
def testSmoothify():
octTree = OctTree([0,0,500],dx=100,dy=100,dz=1000)
octTree = subDivide(octTree)
octTree = subDivide(octTree)
s = SmoothVoxel(octTree)
model = s.smoothifyOctTree()
plotCube(model ,-50.,50.,-50.,50.,0.,1000.,N=128,dx=None,dy=None,dz=None)
if __name__=='__main__':
#testSquare()
testSweep()
#testSmoothify()
#testcseLam()
```
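As a rough usage sketch (assuming the custom modules `IRI`, `Symbolic` and `ENUFrame` imported at the top of this notebook are available), the `Fermat` class can also be driven directly with a SymPy expression for the electron density; the Gaussian-like profile below is purely illustrative and not a physical ionosphere model:
```python
# Minimal usage sketch of the Fermat class defined above (illustrative values only).
import numpy as np
from sympy import symbols, exp

r, theta, phi = symbols('r theta phi')
# Illustrative electron-density profile in m^-3, peaked near r = 6650 km.
ne = 1e12 * exp(-((r - 6650.0)/100.0)**2)

fermat = Fermat(neFunc=ne, frequency=120e6, type='r')
x0 = np.array([6371.0, np.pi/2, 0.0])        # (r, theta, phi), r in km
direction = np.array([1.0, 1e-4, 1e-4])      # mostly radial initial direction
r_, th_, ph_, s_ = fermat.integrateRay(x0, direction, 7000.0, N=200)
```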
```python
```
| d9812952884bd5d32fb2e44880e5d20c760d9d61 | 13,931 | ipynb | Jupyter Notebook | src/ionotomo/notebooks/FermatPrincipleSpherical.ipynb | Joshuaalbert/IonoTomo | 9f50fbac698d43a824dd098d76dce93504c7b879 | [
"Apache-2.0"
] | 7 | 2017-06-22T08:47:07.000Z | 2021-07-01T12:33:02.000Z | src/ionotomo/notebooks/FermatPrincipleSpherical.ipynb | Joshuaalbert/IonoTomo | 9f50fbac698d43a824dd098d76dce93504c7b879 | [
"Apache-2.0"
] | 1 | 2019-04-03T15:21:19.000Z | 2019-04-03T15:48:31.000Z | src/ionotomo/notebooks/FermatPrincipleSpherical.ipynb | Joshuaalbert/IonoTomo | 9f50fbac698d43a824dd098d76dce93504c7b879 | [
"Apache-2.0"
] | 2 | 2020-03-01T16:20:00.000Z | 2020-07-07T15:09:02.000Z | 58.288703 | 3,622 | 0.565932 | true | 2,132 | Qwen/Qwen-72B | 1. YES
2. YES | 0.91611 | 0.718594 | 0.658311 | __label__kor_Hang | 0.211049 | 0.367808 |
# A quick Python tutorial for mathematicians
© Ricardo Miranda Martins, 2022 - http://www.ime.unicamp.br/~rmiranda/
## Contents
1. [Introduction](1-intro.html)
2. [Python is a good calculator!](2-calculadora.html) [(source code)](2-calculadora.ipynb)
3. [Solving equations](3-resolvendo-eqs.html) [(source code)](3-resolvendo-eqs.ipynb)
4. [Graphs](4-graficos.html) [(source code)](4-graficos.ipynb)
5. [Linear systems and matrices](5-lineares-e-matrizes.html) [(source code)](5-lineares-e-matrizes.ipynb)
6. **[Limits, derivatives and integrals](6-limites-derivadas-integrais.html)** [(source code)](6-limites-derivadas-integrais.ipynb)
7. [Differential equations](7-equacoes-diferenciais.html) [(source code)](7-equacoes-diferenciais.ipynb)
# Limits
Do you know that famous joke? "Everything has limits, except $1/x$ with $x\rightarrow 0$." Mathematicians have a rather peculiar sense of humour.
Python handles limits very well, thanks to the SymPy package. Computing limits by building "little tables" or "approximations" is not recommended at all, so the symbolic computation package is the ideal tool.
To compute $$\lim_{x\rightarrow a} f(x)$$ the command is ```sp.limit(f,x,a)```. The variable $x$ must be defined beforehand.
```python
import sympy as sp
x = sp.symbols('x')
sp.limit(x**2,x,2)
```
Well, we are not going to keep using Python to compute limits that we can do in our heads just by plugging in values, right? Let's move on to some more complicated ones. For example, how about computing $$\lim_{x\rightarrow 0} \dfrac{\sin(x)}{x}?$$
```python
sp.limit(sp.sin(x)/x,x,0)
```
This means that the functions $f(x)=\sin(x)$ and $g(x)=x$ are very similar for small values of $x$. Let's make a plot to check this:
```python
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-0.5, 0.5, 100)
y = np.sin(x)
plt.plot(x, y)
plt.plot(x, x)
```
What about limits that do not exist? They can be evaluated as well!
```python
import sympy as sp
x = sp.symbols('x')
sp.limit(1/(x-2),x,2)
```
We can also compute limits with $x\rightarrow\infty$. In SymPy, the symbol for infinity is ```oo```, remembering the ```sp``` prefix. Suggestive, isn't it?
```python
sp.limit(1/x,x,sp.oo)
```
We can also compute one-sided limits in Python, using the symbols ```+``` or ```-``` in the command.
```python
sp.limit(1/x, x, 0, '+')
```
```python
sp.limit(1/x, x, 0, '-')
```
We are about to start talking about derivatives, but we can already compute them from the definition, using limits. Below we work through an example of this.
```python
import sympy as sp
x, h = sp.symbols('x h', real=True)
f=x**2
fh=f.subs(x,x+h)
sp.limit( (fh-f)/h , h, 0)
```
Python works very well with piecewise-defined functions. Below we go through an example.
```python
import sympy as sp
x = sp.symbols('x', real=True)
g=sp.Piecewise((x**2+1, x<1),(3*x, x>1))
```
```python
sp.limit(g,x,1,'-')
```
```python
sp.limit(g,x,1,'+')
```
Important warning: Python is not great with piecewise-defined functions, so be careful with the results.
# Derivatives
If you have already taken a calculus course, you must have noticed that differentiation is an extremely mechanical process. It is not hard to implement a "formal differentiator". Python computes derivatives of functions of one or more variables very efficiently. Once again we will use SymPy.
Without further ado, let's compute the derivative of $f(x)=x^2+x$ with respect to the variable $x$.
```python
import sympy as sp
x = sp.symbols('x')
f=x**2+x
sp.diff(f,x)
```
That was quick, wasn't it? The function can be much more complicated, for example $g(x)=e^{x^3+x}+1/x$.
```python
g=sp.exp(x**3+x)+1/x
sp.diff(g,x)
```
The functions can have more than one variable:
```python
y = sp.symbols('y')
h=sp.cos(x*y)+(3*x+y)/(y**2+1)
```
```python
# derivative with respect to x
sp.diff(h,x)
```
```python
# derivative with respect to y
sp.diff(h,y)
```
```python
# mixed derivative, in x and then in y
sp.diff(h,x,y)
```
Of course, higher-order derivatives can also be requested iteratively:
```python
sp.diff(sp.diff(h,x),y)
```
Higher-order derivatives are also easy. Below we compute the third-order derivative of $h(x,y)$ with respect to $x$.
```python
sp.diff(h,x,x,x)
```
If you want the derivative at a point, you can use ```subs``` to evaluate the derivative there:
```python
sp.diff(h,x).subs(x,0).subs(y,0)
```
Of course, the function needs to be defined at the point, or things can go wrong..
```python
sp.diff(g,x).subs(x,0).subs(y,0)
```
## Series expansions
An important application of derivatives is the computation of Taylor series expansions. SymPy has two commands for this, ```fps``` and ```series```. Let's test them with the function $f(x)=x\cos(x)$.
```python
import sympy as sp
x = sp.symbols('x')
f=x*sp.cos(x)
```
```python
# computing the formal power series expansion of f
# with respect to the variable x, at the point x=0, truncating
# at order 10 using fps
fn=sp.fps(f,x,0).truncate(10)
display(fn)
```
```python
# the series command also works: the syntax is
# series(f(x),x,x0,k), where x0
fm=sp.series(f,x,0,20).as_expr()
display(fm)
```
If the name does not ring a bell, an interesting application of the power series of a function is being able to compute approximations of values of a function $f(x)$ that does not allow "direct" evaluation.
For example, we cannot evaluate $e^2$ directly, but using the series of the exponential we can approximate this value as well as we like. We will use the suffix ```removeO``` to strip from the Taylor series the part containing the $O(n)$ terms.
```python
import sympy as sp
x = sp.symbols('x')
g = sp.exp(x)
gk = sp.series(g,x,0,10).removeO()
gk.subs(x,2)
```
Therefore, $20947/2835$ is a good rational approximation of $e^2$. Well, what we are doing here is rather pointless, since we are doing it on a computer and a single command could evaluate it directly. But it is the famous "toy problem". Just out of curiosity, note that:
```python
20947//2835
```
Therefore, $e^2$ is close to 7. As a bonus, if you memorize the series below, you can show off at the end-of-year barbecue and compute approximations of $e^x$ for values of $x$ close to $0$. It will certainly be a hit.
```python
sp.series(g,x,0,20).as_expr().removeO()
```
# Integrals
And we arrive at integrals. SymPy's integrator is very fast and efficient, and it is the companion we would all like to have during a calculus exam. Solving an integral in Python is very easy: the command is ```integrate(f,x)```, or ```integrate(f,(x,a,b))``` for a definite integral with $x\in[a,b]$.
```python
import sympy as sp
x = sp.symbols('x')
f=x**2
sp.integrate(f,x)
```
```python
import sympy as sp
x = sp.symbols('x')
g=x*sp.exp(x)
sp.integrate(g,x)
```
```python
import sympy as sp
x = sp.symbols('x')
h=sp.exp(x)*sp.cos(x)
sp.integrate(h,x)
```
Well, as you noticed, SymPy would have lost $0.1$ points on each of the previous integrals - it forgot the constant of integration.. shame on you, Python. Some examples of definite integrals:
```python
p=x**3+x
sp.integrate(p,(x,0,1))
```
```python
# shall we try to fool Python?
q=1/(x-2)
sp.integrate(q,(x,0,3))
```
Clever one..
Python can also evaluate improper integrals, remembering that the symbol ```oo``` is used for "infinity".
Warning: if you are under 18, do not run the next command!!
```python
sp.integrate(sp.exp(-x**2), (x, 0, sp.oo))
```
One more example of an improper integral:
```python
sp.integrate(1/(x**(6)), (x, 1,sp.oo))
```
Integration of functions of several variables, especially when the domain is a rectangle, can also be done in Python without any trouble:
```python
x, y, a, b, c, d = sp.symbols("x y a b c d")
f = x*y
sp.integrate(f, (y, a, b), (x, c, d))
```
We can also integrate over slightly more general regions, the so-called type I/type II regions, or $R_x$ and $R_y$ regions:
```python
x, y, a, b, c, d = sp.symbols("x y a b c d")
f = x**2+y
sp.integrate(f, (y, 0, x+1), (x, 0, 1))
```
## Riemann sums
In the first lecture on integrals, we start with definite integrals, drawing some pictures about Riemann sums. Believe me: it is complicated to do this by hand, since the calculations are messy and the drawings are tedious to make. How about using Python to make our life easier? It will help both the teacher and the student, who will understand it better.
The implementation below was adapted from Mathematical Python [(this site)](https://personal.math.ubc.ca/~pwalls/math-python/integration/riemann-sums/).
```python
import numpy as np
import matplotlib.pyplot as plt
# originally, the function was defined with a lambda expression,
# which is in fact simpler in this case - if you want to know
# more about it, read here:
# https://stackabuse.com/lambda-functions-in-python/
# f = lambda x : x**2
# defining the function
def f(x):
return x**2
# interval
a = 0; b = 10;
# number of rectangles
N = 10
n = 10 # Use n*N+1 points to plot the function smoothly
# discretizing the variables
x = np.linspace(a,b,N+1)
y = f(x)
X = np.linspace(a,b,n*N+1)
Y = f(X)
# starting the plot
plt.figure(figsize=(15,5))
plt.subplot(1,3,1)
plt.plot(X,Y,'b')
x_left = x[:-1]
y_left = y[:-1]
plt.plot(x_left,y_left,'b.',markersize=10)
plt.bar(x_left,y_left,width=(b-a)/N,alpha=0.2,align='edge',edgecolor='b')
plt.title('Left-endpoint Riemann sum, N = {}'.format(N))
plt.show()
```
```python
```
| 7cf27dbb03ddb8e8765f4ea307e44e1259d2296d | 141,394 | ipynb | Jupyter Notebook | 6-limites-derivadas-integrais.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
] | null | null | null | 6-limites-derivadas-integrais.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
] | null | null | null | 6-limites-derivadas-integrais.ipynb | rmiranda99/tutorial-math-python | 6fe211f9cd0b8b93d4a0543a690ca124fee6a8b2 | [
"CC-BY-4.0"
] | null | null | null | 105.439224 | 23,320 | 0.852038 | true | 3,058 | Qwen/Qwen-72B | 1. YES
2. YES | 0.822189 | 0.839734 | 0.69042 | __label__por_Latn | 0.997348 | 0.442409 |
# Variables
When you create a new Jupyter notebook, choose `Python 3.6` as the notebook type.
Inside the notebook you then work with Python version 3.6. To understand what variables mean, you therefore have to understand variables in Python 3.6.
In Python, a variable is a short name for a region of memory in which data can be stored. This data can then be accessed under the name of the variable. In principle we have already done this in the first exercises. Examples:
`d_i = 20 # inner diameter of a pipe`
The name of a variable must be unique. It must not start with a digit and must not contain operator characters such as `'+', '-', '*', ':', '^'` or `'#'`. Assigning a value to a variable is done with the equals sign, see above.
The `'#'` character starts a comment. Everything that follows this character is purely informative and is ignored by Python.
As soon as a variable has been created, it can be used in calculations, e.g.
```python
import math
```
```python
d_i = 20e-3 # inner diameter of a pipe in m
A_i = math.pi*d_i**2/4 # clear cross-sectional area in m**2
```
A cell may contain several operations, see above. A cell can have an output, which is always the result of the last operation.
In the cell above, the last statement is an assignment `A_i = ...`, which produces no output.
To be able to work interactively, you should not carry out overly long calculations in individual cells. Instead, you should display intermediate results after a few meaningful steps so that the course of the work can be followed.
If you spot a mistake, values can be changed and the cell executed again.
To display the result of the calculation carried out above, you can call the variable created last. The cell would then look like this:
```python
d_i = 20e-3 # inner diameter of a pipe in m
A_i = math.pi*d_i**2/4 # clear cross-sectional area in m**2
A_i
```
Variables have a type, which can be displayed with the function `type(variable)`, e.g.:
```python
type(A_i)
```
# Exercise
Investigate which type the variables
`a=1`
`x=5.0`
`Name = 'Bernd'`
and
`Punkt = (3,4)`
have.
Investigate what effect the command
`2*Name`
has with the name defined above. What effect does, analogously,
`2*Punkt`
have? State a conjecture about which type the product `a*x` and the sum `a+x` have with the values of `a` and `x` fixed above, and verify it. One possible way to check this is sketched below.
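A possible way to check these types interactively (the values are taken from the exercise above):
```python
a = 1
x = 5.0
Name = 'Bernd'
Punkt = (3, 4)

# Inspect the types of the four variables
print(type(a), type(x), type(Name), type(Punkt))

# Repetition of a string and of a tuple
print(2*Name)    # 'BerndBernd'
print(2*Punkt)   # (3, 4, 3, 4)

# Mixed int/float arithmetic yields floats
print(type(a*x), type(a + x))
```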
So as not to lose sight of the applications in mathematics, work through the following exercise:
# Exercise
Compute the weight of 250 m of copper pipe CU15$\times$1 in kg. Take the density $\varrho$ of copper from your reference tables. The relationships are given by the following formulas:
\begin{align}
A &= \dfrac{\pi\,(d_a^2 - d_i^2)}{4}
\\[2ex]
V &= A\, l
\\[2ex]
m &= \varrho\, V
\end{align}
```python
import math
```
```python
# Your solution starts here
```
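A sketch of one possible solution, assuming a copper density of roughly 8900 kg/m³ (the exact value should be taken from the reference tables as instructed):
```python
import math

rho = 8900.0   # density of copper in kg/m**3 (assumed table value)
d_a = 15e-3    # outer diameter in m (CU15x1)
d_i = 13e-3    # inner diameter in m (1 mm wall thickness)
l = 250.0      # length in m

A = math.pi*(d_a**2 - d_i**2)/4   # cross-sectional area of the pipe wall in m**2
V = A*l                           # volume in m**3
m = rho*V                         # mass in kg
print(round(m, 1), 'kg')
```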
| 677a1e10b2fc6f7c511a776fde7a147943fc807d | 5,675 | ipynb | Jupyter Notebook | src/03-Variablen.ipynb | w-meiners/anb-first-steps | 6cb3583f77ae853922acd86fa9e48e9cf5188596 | [
"MIT"
] | null | null | null | src/03-Variablen.ipynb | w-meiners/anb-first-steps | 6cb3583f77ae853922acd86fa9e48e9cf5188596 | [
"MIT"
] | null | null | null | src/03-Variablen.ipynb | w-meiners/anb-first-steps | 6cb3583f77ae853922acd86fa9e48e9cf5188596 | [
"MIT"
] | null | null | null | 42.350746 | 621 | 0.595595 | true | 964 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.936285 | 0.815127 | __label__deu_Latn | 0.999312 | 0.732146 |
# Laboratory 2: Mold and Fungicide
### Related to Chapter 6
In this laboratory, let $x(t)$ be the concentration of mold that we want to reduce over a fixed period of time. We assume that $x$ grows with rate $r$ and carrying capacity $M.$ Let $u(t)$ be the fungicide, which reduces the population by $u(t)x(t).$ Thus
$$
x'(t) = r(M - x(t)) - u(t)x(t), x(0) = x_0 > 0
$$
The effects of both the mold and the fungicide are harmful to the people around, so we want to minimize both. Our objective will therefore be
$$
\min_u \int_0^T Ax(t)^2 + u(t)^2 dt
$$
where $A$ is the parameter that balances the importance of the terms in the functional, that is, the larger its value, the more weight is put on minimizing $x$.
**Existence result:** $f(t,x,u) = Ax^2 + u^2 \implies f_{xx}(t,x,u) = 2A, f_{uu}(t,x,u) = 2$, so $f$ is continuously differentiable in all three variables and convex in $x$ and $u$. $g(t,x,u) = r(M - x(t)) - u(t)x(t) \implies g_{xx} = g_{uu} = 0$, and likewise it is continuously differentiable and convex in $x$ and $u$.
Thus, once we find a $\lambda$ satisfying the necessary conditions, we have an existence result, provided the integral is finite.
## Necessary Conditions
### Hamiltonian
$$
H = Ax^2 + u^2 + \lambda(r(M - x) - ux)
$$
### Optimality condition
$$
0 = H_u = 2u - \lambda x \implies u^{*}(t) = \frac{1}{2}\lambda(t)x(t)
$$
### Adjoint equation
$$
\lambda '(t) = - H_x = -2Ax(t) + \lambda(t)(r + u(t))
$$
### Transversality condition
$$
\lambda(T) = 0
$$
We should verify that $\lambda(t) \ge 0$, but this is not done here. Note that we have formed a nonlinear system of differential equations, which makes an analytical solution much more complex. For this reason, we will solve this problem iteratively.
### Importing the libraries
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import sympy as sp
import sys
sys.path.insert(0, '../pyscripts/')
from optimal_control_class import OptimalControl
```
With the symbolic mathematics library `sympy`, it is possible to obtain the necessary conditions without doing the calculations by hand. To this end, let's write the Hamiltonian as a symbolic expression.
```python
x_sp,u_sp,lambda_sp, r_sp, A_sp, M_sp = sp.symbols('x u lambda r A M')
H = A_sp*x_sp**2 + u_sp**2 + lambda_sp*(r_sp*(M_sp - x_sp) - u_sp*x_sp)
H
```
$\displaystyle A x^{2} + \lambda \left(r \left(M - x\right) - u x\right) + u^{2}$
This way we can obtain its derivatives and, therefore, the necessary conditions
```python
print('H_x = {}'.format(sp.diff(H,x_sp)))
print('H_u = {}'.format(sp.diff(H,u_sp)))
print('H_lambda = {}'.format(sp.diff(H,lambda_sp)))
```
H_x = 2*A*x + lambda*(-r - u)
H_u = -lambda*x + 2*u
H_lambda = r*(M - x) - u*x
We can solve the equation $H_u = 0$, but it is important to check this step manually as well.
```python
eq = sp.Eq(sp.diff(H,u_sp), 0)
sp.solve(eq,u_sp)
```
[lambda*x/2]
This time we will use a class written in Python that encodes the algorithm presented in Chapter 5 and in Laboratory 1.
First we need to define the important equations from the necessary conditions. It is important to write them in the format described in this notebook. `par` is a dictionary with the model-specific parameters.
```python
parameters = {'r': None, 'M': None, 'A': None}
diff_state = lambda t, x, u, par: par['r']*(par['M'] - x) - u*x # derivative of x
diff_lambda = lambda t, x, u, lambda_, par: -2*par['A']*x + lambda_*(par['r'] + u) # derivative of lambda_
update_u = lambda t, x, lambda_, par: 0.5*lambda_*x # updates u via H_u = 0
```
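For reference, the iterative scheme encoded by `OptimalControl` is essentially a forward-backward sweep. A minimal sketch of that idea (not the actual implementation of the class) could look like this, reusing the three functions defined above:
```python
import numpy as np

def forward_backward_sweep(x0, T, par, h=0.001, max_iter=200, tol=1e-6):
    """Minimal forward-backward sweep sketch (illustrative, not the OptimalControl code)."""
    t = np.arange(0, T + h, h)
    N = len(t)
    x = np.full(N, float(x0))
    lam = np.zeros(N)
    u = np.zeros(N)
    for _ in range(max_iter):
        u_old = u.copy()
        # forward sweep for the state (explicit Euler)
        for i in range(N - 1):
            x[i+1] = x[i] + h*diff_state(t[i], x[i], u[i], par)
        # backward sweep for the adjoint, with lambda(T) = 0
        lam[-1] = 0.0
        for i in range(N - 1, 0, -1):
            lam[i-1] = lam[i] - h*diff_lambda(t[i], x[i], u[i], lam[i], par)
        # update the control from the optimality condition, with mild relaxation
        u = 0.5*(update_u(t, x, lam, par) + u_old)
        if np.max(np.abs(u - u_old)) < tol:
            break
    return t, x, u, lam
```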
## Applying the class to the example
Let's run some experiments. Feel free to vary the parameters at the end of the notebook.
```python
problem = OptimalControl(diff_state, diff_lambda, update_u)
```
```python
x0 = 1
T = 5
parameters['r'] = 0.3
parameters['M'] = 10
parameters['A'] = 1
```
```python
t,x,u,lambda_ = problem.solve(x0, T, parameters, h = 0.001)
ax = problem.plotting(t,x,u,lambda_)
```
The control initially increases until it reaches a constant value, and so does the state. We say they are in equilibrium. Eventually the control decreases to 0. Note that the state does not decrease, which happens because of the equal weighting we give to the negative effects of the mold and of the fungicide. For this reason, we can suggest increasing $A$.
```python
parameters['A'] = 10
```
```python
t,x,u,lambda_ = problem.solve(x0, T, parameters, h = 0.001)
ax = problem.plotting(t,x,u,lambda_)
```
Here much more fungicide is used. As we would like, the amount of mold is also lower at its plateau, but at the end of the interval, when the amount of fungicide decreases, the mold level grows considerably. For comparison, we can visualize the difference between $u \equiv 0$ and the optimal control. To do this, we have to integrate the derivative of $x$ over the interval.
```python
integration = solve_ivp(fun = diff_state,
t_span = (0,T),
y0 = (x0,),
t_eval = np.linspace(0,T,len(u)),
args = (0,parameters))
```
```python
plt.plot(t, x, integration.t, integration.y[0])
plt.title('Comparison of the amount of mold')
plt.legend(['Optimal control', 'No control'])
plt.grid(alpha = 0.5)
```
## Experimentation
Uncomment the cell below and vary the parameters to see their effects:
1. Increase $r$ to see the mold grow faster. How will the control behave?
2. Does varying $T$ make any difference? What does that difference look like?
3. What effect does varying $M$, the carrying capacity, have on the state?
```python
#x0 = 1
#T = 5
#parameters['r'] = 0.3
#parameters['M'] = 10
#parameters['A'] = 1
#
#t,x,u,lambda_ = problem.solve(x0, T, parameters, h = 0.001)
#problem.plotting(t,x,u,lambda_)
```
### This is the end of the notebook
| a98189579c32fa33c9f981f6b3bfeab8fc22fb12 | 106,567 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/Laboratory2-checkpoint.ipynb | lucasmoschen/optimal-control-biological | 642a12b6a3cb351429018120e564b31c320c44c5 | [
"MIT"
] | 1 | 2021-11-03T16:27:39.000Z | 2021-11-03T16:27:39.000Z | notebooks/.ipynb_checkpoints/Laboratory2-checkpoint.ipynb | lucasmoschen/optimal-control-biological | 642a12b6a3cb351429018120e564b31c320c44c5 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/Laboratory2-checkpoint.ipynb | lucasmoschen/optimal-control-biological | 642a12b6a3cb351429018120e564b31c320c44c5 | [
"MIT"
] | null | null | null | 260.555012 | 42,656 | 0.924517 | true | 1,900 | Qwen/Qwen-72B | 1. YES
2. YES | 0.853913 | 0.894789 | 0.764072 | __label__por_Latn | 0.996749 | 0.613528 |
# Symbolic Regression
This example combines neural differential equations with regularised evolution to discover the equations
$\frac{\mathrm{d} x}{\mathrm{d} t}(t) = \frac{y(t)}{1 + y(t)}$
$\frac{\mathrm{d} y}{\mathrm{d} t}(t) = \frac{-x(t)}{1 + x(t)}$
directly from data.
**References:**
This example appears as an example in:
```bibtex
@phdthesis{kidger2021on,
title={{O}n {N}eural {D}ifferential {E}quations},
author={Patrick Kidger},
year={2021},
school={University of Oxford},
}
```
Whilst drawing heavy inspiration from:
```bibtex
@inproceedings{cranmer2020discovering,
title={{D}iscovering {S}ymbolic {M}odels from {D}eep {L}earning with {I}nductive
{B}iases},
author={Cranmer, Miles and Sanchez Gonzalez, Alvaro and Battaglia, Peter and
Xu, Rui and Cranmer, Kyle and Spergel, David and Ho, Shirley},
booktitle={Advances in Neural Information Processing Systems},
publisher={Curran Associates, Inc.},
year={2020},
}
@software{cranmer2020pysr,
title={PySR: Fast \& Parallelized Symbolic Regression in Python/Julia},
author={Miles Cranmer},
publisher={Zenodo},
url={http://doi.org/10.5281/zenodo.4041459},
year={2020},
}
```
This example is available as a Jupyter notebook [here](https://github.com/patrick-kidger/diffrax/blob/main/examples/symbolic_regression.ipynb).
```python
import tempfile
from typing import List
import equinox as eqx # https://github.com/patrick-kidger/equinox
import jax
import jax.numpy as jnp
import optax # https://github.com/deepmind/optax
import pysr # https://github.com/MilesCranmer/PySR
import sympy
# Note that PySR, which we use for SymbolicRegression, uses Julia as a backend.
# You'll need to install a recent version of Julia if you don't have one.
# (And can get funny errors if you have a too-old version of Julia already.)
# You may also need to restart Python after running `pysr.install()` the first time.
pysr.silence_julia_warning()
pysr.install(quiet=True)
```
Now for a bunch of helpers. We'll use these in a moment; skip over them for now.
```python
def quantise(expr, quantise_to):
if isinstance(expr, sympy.Float):
return expr.func(round(float(expr) / quantise_to) * quantise_to)
elif isinstance(expr, sympy.Symbol):
return expr
else:
return expr.func(*[quantise(arg, quantise_to) for arg in expr.args])
class SymbolicFn(eqx.Module):
fn: callable
parameters: jnp.ndarray
def __call__(self, x):
# Dummy batch/unbatching. PySR assumes its JAX'd symbolic functions act on
# tensors with a single batch dimension.
return jnp.squeeze(self.fn(x[None], self.parameters))
class Stack(eqx.Module):
modules: List[eqx.Module]
def __call__(self, x):
return jnp.stack([module(x) for module in self.modules], axis=-1)
def expr_size(expr):
return sum(expr_size(v) for v in expr.args) + 1
def _replace_parameters(expr, parameters, i_ref):
if isinstance(expr, sympy.Float):
i_ref[0] += 1
return expr.func(parameters[i_ref[0]])
elif isinstance(expr, sympy.Symbol):
return expr
else:
return expr.func(
*[_replace_parameters(arg, parameters, i_ref) for arg in expr.args]
)
def replace_parameters(expr, parameters):
i_ref = [-1] # Distinctly sketchy approach to making this conversion.
return _replace_parameters(expr, parameters, i_ref)
```
Now for the main program, which we can run with `main()`. We discuss what's happening at each step in the comments -- read on:
```python
def main(
symbolic_dataset_size=2000,
symbolic_num_populations=100,
symbolic_population_size=20,
symbolic_migration_steps=4,
symbolic_mutation_steps=30,
symbolic_descent_steps=50,
pareto_coefficient=2,
fine_tuning_steps=500,
fine_tuning_lr=3e-3,
quantise_to=0.01,
):
#
# First obtain a neural approximation to the dynamics.
# We begin by running the previous example.
#
# Runs the Neural ODE example.
# This defines the variables `ts`, `ys`, `model`.
print("Training neural differential equation.")
%run neural_ode.ipynb
#
# Now symbolically regress across the learnt vector field, to obtain a Pareto
# frontier of symbolic equations, that trades loss against complexity of the
# equation. Select the "best" from this frontier.
#
print("Symbolically regressing across the vector field.")
vector_field = model.func.mlp # noqa: F821
dataset_size, length_size, data_size = ys.shape # noqa: F821
in_ = ys.reshape(dataset_size * length_size, data_size) # noqa: F821
in_ = in_[:symbolic_dataset_size]
out = jax.vmap(vector_field)(in_)
with tempfile.TemporaryDirectory() as tempdir:
symbolic_regressor = pysr.PySRRegressor(
niterations=symbolic_migration_steps,
ncyclesperiteration=symbolic_mutation_steps,
populations=symbolic_num_populations,
npop=symbolic_population_size,
optimizer_iterations=symbolic_descent_steps,
optimizer_nrestarts=1,
procs=1,
verbosity=0,
tempdir=tempdir,
temp_equation_file=True,
output_jax_format=True,
)
symbolic_regressor.fit(in_, out)
best_equations = symbolic_regressor.get_best()
expressions = [b.sympy_format for b in best_equations]
symbolic_fns = [
SymbolicFn(b.jax_format["callable"], b.jax_format["parameters"])
for b in best_equations
]
#
# Now the constants in this expression have been optimised for regressing across
# the neural vector field. This was good enough to obtain the symbolic expression,
# but won't quite be perfect -- some of the constants will be slightly off.
#
# To fix this we now plug our symbolic function back into the original dataset
# and apply gradient descent.
#
print("Optimising symbolic expression.")
symbolic_fn = Stack(symbolic_fns)
flat, treedef = jax.tree_flatten(
model, is_leaf=lambda x: x is model.func.mlp # noqa: F821
)
flat = [symbolic_fn if f is model.func.mlp else f for f in flat] # noqa: F821
symbolic_model = jax.tree_unflatten(treedef, flat)
@eqx.filter_grad
def grad_loss(symbolic_model):
vmap_model = jax.vmap(symbolic_model, in_axes=(None, 0))
pred_ys = vmap_model(ts, ys[:, 0]) # noqa: F821
return jnp.mean((ys - pred_ys) ** 2) # noqa: F821
optim = optax.adam(fine_tuning_lr)
opt_state = optim.init(eqx.filter(symbolic_model, eqx.is_inexact_array))
@eqx.filter_jit
def make_step(symbolic_model, opt_state):
grads = grad_loss(symbolic_model)
updates, opt_state = optim.update(grads, opt_state)
symbolic_model = eqx.apply_updates(symbolic_model, updates)
return symbolic_model, opt_state
for _ in range(fine_tuning_steps):
symbolic_model, opt_state = make_step(symbolic_model, opt_state)
#
# Finally we round each constant to the nearest multiple of `quantise_to`.
#
trained_expressions = []
for module, expression in zip(symbolic_model.func.mlp.modules, expressions):
expression = replace_parameters(expression, module.parameters.tolist())
expression = quantise(expression, quantise_to)
trained_expressions.append(expression)
print(f"Expressions found: {trained_expressions}")
```
```python
main()
```
| 85b477be66dee5b2c2b83bb2e7b5ae3fadbf4d4e | 59,376 | ipynb | Jupyter Notebook | examples/symbolic_regression.ipynb | FedericoV/diffrax | 98b010242394491fea832e77dc94f456b48495fa | [
"Apache-2.0"
] | null | null | null | examples/symbolic_regression.ipynb | FedericoV/diffrax | 98b010242394491fea832e77dc94f456b48495fa | [
"Apache-2.0"
] | null | null | null | examples/symbolic_regression.ipynb | FedericoV/diffrax | 98b010242394491fea832e77dc94f456b48495fa | [
"Apache-2.0"
] | null | null | null | 164.933333 | 46,340 | 0.878082 | true | 1,932 | Qwen/Qwen-72B | 1. YES
2. YES | 0.737158 | 0.771844 | 0.568971 | __label__eng_Latn | 0.839384 | 0.16024 |
```python
from __future__ import print_function
import sisl
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
In this analysis example we will show how to plot the wavefunction for a periodic system (the same scheme may be used to plot molecular orbitals).
The basic principle of plotting the real-space wave functions may be written as this (for a given $\mathbf k$-point):
\begin{equation}
\psi_i(\mathbf k, \mathbf r) = \sum_\nu e^{i\mathbf k \cdot \mathbf r_\nu}c_{i\nu}\phi_\nu(\mathbf r - \mathbf r_\nu),
\end{equation}
where $c_{i\nu}=\langle\tilde\phi_\nu|\psi_i\rangle$ is the coefficient for the $i$th eigenstate and $\nu$ basis orbital and $\phi_\nu(\mathbf r - \mathbf r_\nu)$ is the basis orbital $\nu$ centered at $\mathbf r_\nu$.
`sisl` will in most cases, automatically read the necessary information from a Siesta run to be able to construct the $\phi_\nu$ basis functions in real-space. If the basis-information is not available `sisl` will inform you when trying to calculate real-space quantities.
In this example we will calculate the eigenstates for a given $\mathbf k$-point for graphene and plot a cut through the real-space wavefunction at a fixed distance above the graphene plane.
## Exercises
1. Run Siesta.
2. Read in the Hamiltonian using the `RUN.fdf` file, see e.g. [S 1](../S_01/run.ipynb).
3. Calculate the eigenstate for the $\Gamma$-point (see. `Hamiltonian.eigenstate`)
4. Read the entry about the `EigenstateElectron` in the [sisl](http://zerothi.github.io/sisl/docs/latest/api-generated/sisl.physics.html#electrons-electron).
Figure out which method you should use in order to calculate the real-space wavefunction.
Use the below method (`plot_grid`) to plot a cut through the wavefunction grid.
*HINT*: this may be a useful grid `grid = Grid(0.05, sc=H.geometry.sc)`
5. If you **don't** see a warning like this:
info:0: SislInfo: wavefunction: summing 18 different state coefficients, will continue silently!
then you have done it correctly! Look in the documentation and figure out how to take a *sub*set of the eigenstates and only plot a single one of them on the grid. This is important since otherwise you are plotting the super-position of all eigenstates at the $\mathbf k$-point.
*HINT*: If you have VMD/XCrySDen on your laptop it may be *fun* to plot the real-space quantities using cube files, if you want, figure out how to save the `Grid` into a cube/xsf file.
6. Try and *play* with the supercell you pass to the `Grid` initialization and plot the wavefunction for different sizes of the grid.
What do you see for increasing size of grids? Are there any periodicites, if so, what is the periodicity and why?
7. Calculate the eigenstates such that the wavefunctions has a periodicity of $3$ along the first lattice vector and $2$ along the second lattice vector (only plot one of the eigenstates)
*HINT*: $e^{i\mathbf k \cdot \mathbf R}$
```python
def plot_grid(grid, plane_dist=1):
""" Plot the grid in either 1 (real-only) or 2 (complex wavefunction) graphs
A cut through the grid will be plotted corresponding to `plane_dist` above the z-coordinate
"""
z_index = grid.index(plane_dist, 2)
x, y = np.mgrid[:grid.shape[0], :grid.shape[1]]
dcell = grid.dcell
x, y = x * dcell[0, 0] + y * dcell[1, 0], x * dcell[0, 1] + y * dcell[1, 1]
if grid.dtype in [np.complex64, np.complex128]:
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
axs[0].contourf(x, y, grid.grid[:, :, z_index].real)
im = axs[1].contourf(x, y, grid.grid[:, :, z_index].imag)
for ax in axs:
ax.set_xlabel(r'$x$ [Ang]'); ax.set_ylabel(r'$y$ [Ang]')
axs[0].set_title('Real part')
axs[1].set_title('Imaginary part')
else:
fig, ax = plt.subplots(1, 1) ; axs = [ax]
axs = [ax]
im = ax.contourf(x, y, grid.grid[:, :, z_index])
ax.set_xlabel(r'$x$ [Ang]'); ax.set_ylabel(r'$y$ [Ang]')
# Also plot the atomic coordinates
try:
xyz = grid.geometry.xyz
for ax in axs:
ax.scatter(xyz[:, 0], xyz[:, 1], 50, 'k', alpha=.6)
except: pass
fig.colorbar(im);
```
```python
# Add code here to 1) read in Hamiltonian with basis orbitals, 2) calculate the eigenstate for a given k-point,
# 3) create a grid to plot the wavefunction on and 4) plot the grid using the above method
```
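A rough sketch of those four steps (the file name `RUN.fdf` and the eigenstate index are just examples, and the Siesta outputs with basis information are assumed to be present):
```python
# Sketch only: 1) read the Hamiltonian, 2) eigenstates at Gamma,
# 3) create a real-space grid, 4) plot a cut through one wavefunction.
H = sisl.get_sile('RUN.fdf').read_hamiltonian()

es = H.eigenstate()                      # default k = [0, 0, 0] (Gamma)

grid = sisl.Grid(0.05, sc=H.geometry.sc) # grid spanning the unit cell

es.sub(0).wavefunction(grid)             # put a single eigenstate on the grid

plot_grid(grid, plane_dist=1)
```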
```python
```
| 6fd15e919fe13f51ca4b4d17dca31df7717756f6 | 6,077 | ipynb | Jupyter Notebook | ts-tbt-sisl-tutorial-master/S_03/run.ipynb | rwiuff/QuantumTransport | 5367ca2130b7cf82fefd4e2e7c1565e25ba68093 | [
"MIT"
] | 1 | 2021-09-25T14:05:45.000Z | 2021-09-25T14:05:45.000Z | ts-tbt-sisl-tutorial-master/S_03/run.ipynb | rwiuff/QuantumTransport | 5367ca2130b7cf82fefd4e2e7c1565e25ba68093 | [
"MIT"
] | 1 | 2020-03-31T03:17:38.000Z | 2020-03-31T03:17:38.000Z | ts-tbt-sisl-tutorial-master/S_03/run.ipynb | rwiuff/QuantumTransport | 5367ca2130b7cf82fefd4e2e7c1565e25ba68093 | [
"MIT"
] | 2 | 2020-01-27T10:27:51.000Z | 2020-06-17T10:18:18.000Z | 45.014815 | 294 | 0.59898 | true | 1,253 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.70253 | 0.557181 | __label__eng_Latn | 0.984799 | 0.132847 |
# Eisntein Tensor calculations using Symbolic module
```python
import numpy as np
import pytest
import sympy
from sympy import cos, simplify, sin, sinh, tensorcontraction
from einsteinpy.symbolic import EinsteinTensor, MetricTensor, RicciScalar
sympy.init_printing()
```
### Defining the Anti-de Sitter spacetime Metric
```python
syms = sympy.symbols("t chi theta phi")
t, ch, th, ph = syms
m = sympy.diag(-1, cos(t) ** 2, cos(t) ** 2 * sinh(ch) ** 2, cos(t) ** 2 * sinh(ch) ** 2 * sin(th) ** 2).tolist()
metric = MetricTensor(m, syms)
```
### Calculating the Einstein Tensor (with both indices covariant)
```python
einst = EinsteinTensor.from_metric(metric)
einst.tensor()
```
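Since `RicciScalar` is already imported, the (constant) scalar curvature of this metric can be checked in the same way; this is a small illustrative addition, assuming the `from_metric` constructor of `RicciScalar`:
```python
R = RicciScalar.from_metric(metric)
simplify(R.expr)
```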
| c722970826c908ce91ed41926c7dc64c5f9e02e7 | 21,485 | ipynb | Jupyter Notebook | docs/source/examples/Einstein_Tensor_symbolic_calculation.ipynb | Varunvaruns9/einsteinpy | befc0879c65a53b811e7d6a9ec47675ae28a08c5 | [
"MIT"
] | 1 | 2019-03-08T16:13:56.000Z | 2019-03-08T16:13:56.000Z | docs/source/examples/Einstein_Tensor_symbolic_calculation.ipynb | Varunvaruns9/einsteinpy | befc0879c65a53b811e7d6a9ec47675ae28a08c5 | [
"MIT"
] | null | null | null | docs/source/examples/Einstein_Tensor_symbolic_calculation.ipynb | Varunvaruns9/einsteinpy | befc0879c65a53b811e7d6a9ec47675ae28a08c5 | [
"MIT"
] | 1 | 2022-03-19T18:46:13.000Z | 2022-03-19T18:46:13.000Z | 180.546218 | 17,424 | 0.847708 | true | 201 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944995 | 0.849971 | 0.803218 | __label__eng_Latn | 0.485465 | 0.704478 |
<h1><center> Computation of the modified equation of a numerical scheme (univariate PDE evolution equation)</center></h1>
<center>
Olivier Pannekoucke <br> 2020
# Introduction
In this illustration we compute the modified equation assowiated with the Euler discretization and the centered discretization of the advection equation
$$(1)\quad \partial_t u + c \partial_x u =0$$
whose numerical scheme is
$$(2)\quad u^{q+1}_k = u_k^q - c \delta t \frac{u^q_{k+1}-u^q_{k-1}}{2\delta x}$$
where $u^q_k$ stands for a numerical approximation of $u(q\delta t,k\delta x)$.
## Application to the Euler time scheme applied to the advection equation
#### Import of modules & functions
```python
from modequation import ModifiedEquation
import sympy
from sympy import symbols, Derivative, Function, Eq
```
```python
c = sympy.symbols('c')
coordinates = sympy.symbols('t x')
t, x = coordinates
dt, dx = sympy.symbols('dt dx')
u = Function('u')(*coordinates)
```
```python
u_qp=u.subs(t,t+dt)
u_kp=u.subs(x,x+dx)
u_km=u.subs(x,x-dx)
```
#### Definition of the time schemes
First we define the numerical schemes as an equation (`sympy.Eq`)
$$(3)\quad \frac{u^{q+1}_k-u_k^q }{\delta t} = - c \frac{u^q_{k+1}-u^q_{k-1}}{2\delta x}$$
```python
euler_centered_scheme = Eq( (u_qp-u)/dt, -c * (u_kp-u_km)/(2*dx))
```
#### Computation of the modified equations
```python
euler_centered = ModifiedEquation(euler_centered_scheme,u, order =3)
```
```python
euler_centered.consistant_equation
```
$\displaystyle \frac{\partial}{\partial t} u{\left(t,x \right)} = - c \frac{\partial}{\partial x} u{\left(t,x \right)}$
```python
euler_centered.modified_equation
```
$\displaystyle \frac{\partial}{\partial t} u{\left(t,x \right)} = - c \frac{\partial}{\partial x} u{\left(t,x \right)} - \frac{c dx^{2} \frac{\partial^{3}}{\partial x^{3}} u{\left(t,x \right)}}{6} - \frac{c^{2} dt \frac{\partial^{2}}{\partial x^{2}} u{\left(t,x \right)}}{2} + O\left(dx^{3}\right) + O\left(dt^{3}\right)$
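The same machinery can be reused for other discretizations. As a sketch, the first-order upwind scheme (assuming $c>0$) of the same advection equation can be analysed with the objects already defined above:
```python
# Upwind-in-space, Euler-in-time discretization of the advection equation (sketch).
euler_upwind_scheme = Eq((u_qp - u)/dt, -c*(u - u_km)/dx)
euler_upwind = ModifiedEquation(euler_upwind_scheme, u, order=2)
euler_upwind.modified_equation
```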
| b7a5068b3b2b37714893bc47f872d9628bd04b1e | 4,914 | ipynb | Jupyter Notebook | euler-centered-modified-equation.ipynb | opannekoucke/modified-equation | 11a0f18b24e3142a65976048e385e9def54628fd | [
"CECILL-B"
] | null | null | null | euler-centered-modified-equation.ipynb | opannekoucke/modified-equation | 11a0f18b24e3142a65976048e385e9def54628fd | [
"CECILL-B"
] | null | null | null | euler-centered-modified-equation.ipynb | opannekoucke/modified-equation | 11a0f18b24e3142a65976048e385e9def54628fd | [
"CECILL-B"
] | null | null | null | 24.326733 | 357 | 0.524013 | true | 649 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.839734 | 0.778738 | __label__eng_Latn | 0.777021 | 0.647603 |
<center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Roots of 1D equations </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.32</h2>
</center>
## Table of Contents
* [Introduction](#intro)
* [Bisection Method](#bisection)
* [Cobweb Plot](#cobweb)
* [Fixed Point Iteration](#fpi)
* [Newton Method](#nm)
* [Wilkinson Polynomial](#wilkinson)
* [Acknowledgements](#acknowledgements)
```python
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
%matplotlib inline
from ipywidgets import interact
from ipywidgets import widgets
sym.init_printing()
from scipy import optimize
```
<div id='intro' />
## Introduction
Hello again! In this document we're going to learn how to find a 1D equation's solution using numerical methods. First, let's start with the definition of a root:
<b>Definition</b>: The function $f(x)$ has a <b>root</b> in $x = r$ if $f(r) = 0$.
An example: Let's say we want to solve the equation $x + \log(x) = 3$. We can rearrange the equation: $x + \log(x) - 3 = 0$. That way, to find its solution we can find the root of $f(x) = x + \log(x) - 3$. Now let's study some numerical methods to solve these kinds of problems.
Defining a function $f(x)$
```python
f = lambda x: x+np.log(x)-3
```
Finding $r$ using sympy
```python
y = sym.Symbol('y')
fsym = lambda y: y+sym.log(y)-3
r_all=sym.solve(sym.Eq(fsym(y), 0), y)
r=r_all[0].evalf()
print(r)
print(r_all)
```
2.20794003156932
[LambertW(exp(3))]
```python
def find_root_manually(r=2.0):
x = np.linspace(1,3,1000)
plt.figure(figsize=(8,8))
plt.plot(x,f(x),'b-')
plt.grid()
plt.ylabel('$f(x)$',fontsize=16)
plt.xlabel('$x$',fontsize=16)
plt.title('What is r such that $f(r)='+str(f(r))+'$? $r='+str(r)+'$',fontsize=16)
plt.plot(r,f(r),'k.',markersize=20)
plt.show()
interact(find_root_manually,r=(1e-5,3,1e-3))
```
interactive(children=(FloatSlider(value=2.0, description='r', max=3.0, min=1e-05, step=0.001), Output()), _dom…
<function __main__.find_root_manually(r=2.0)>
<div id='bisection' />
## Bisection Method
The bisection method finds the root of a function $f$, where $f$ is a **continuous** function.
If we want to know if this has a root, we have to check if there is an interval $[a,b]$ for which
$f(a)\cdot f(b) < 0$. When these 2 conditions are satisfied, it means that there is a value $r$, between $a$ and $b$, for which $f(r) = 0$. To summarize how this method works, start with the aforementioned interval (checking that there's a root in it), and split it into two smaller intervals $[a,c]$ and $[c,b]$. Then, check which of the two intervals contains a root. Keep splitting each "eligible" interval until the algorithm converges or the tolerance is surpassed.
```python
def bisect(f, a, b, tol=1e-8):
fa = f(a)
fb = f(b)
i = 0
# Just checking if the sign is not negative => not root necessarily
if np.sign(f(a)*f(b)) >= 0:
print('f(a)f(b)<0 not satisfied!')
return None
#Printing the evolution of the computation of the root
print(' i | a | c | b | fa | fc | fb | b-a')
print('----------------------------------------------------------------------------------------')
while(b-a)/2 > tol:
c = (a+b)/2.
fc = f(c)
print('%2d | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f | %.7f' %
(i+1, a, c, b, fa, fc, fb, b-a))
# Did we find the root?
if fc == 0:
print('f(c)==0')
break
elif np.sign(fa*fc) < 0:
b = c
fb = fc
else:
a = c
fa = fc
i += 1
xc = (a+b)/2.
return xc
```
```python
## Finding a root of cos(x). What about if you change the interval?
#f = lambda x: np.cos(x)
## Another function
#f = lambda x: x**3-2*x**2+(4/3)*x-(8/27)
## Computing the cubic root of 7.
#f = lambda x: x**3-7
#bisect(f,0,2)
f = lambda x: x*np.exp(x)-3
#f2 = lambda x: np.cos(x)-x
bisect(f,0,3,tol=1e-13)
```
It's very important to define a concept called the **convergence rate**.
This rate shows how fast a method converges near a specified point.
The convergence rate of bisection is always 0.5, because the method halves the interval at each iteration.
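Since the interval is halved at every step, the number of iterations required for a given tolerance can be estimated directly; a small illustrative computation, reusing the interval and tolerance from the example above:
```python
# Rough estimate of the number of bisection steps needed so that (b-a)/2**(n+1) <= tol
a, b, tol = 0.0, 3.0, 1e-13
n_steps = int(np.ceil(np.log2((b - a)/tol) - 1))
print(n_steps)
```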
<div id='cobweb' />
## Cobweb Plot
```python
def cobweb(x,g=None):
min_x = np.amin(x)
max_x = np.amax(x)
plt.figure(figsize=(10,10))
ax = plt.axes()
plt.plot(np.array([min_x,max_x]),np.array([min_x,max_x]),'b-')
for i in np.arange(x.size-1):
delta_x = x[i+1]-x[i]
head_length = np.abs(delta_x)*0.04
arrow_length = delta_x-np.sign(delta_x)*head_length
ax.arrow(x[i], x[i], 0, arrow_length, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
ax.arrow(x[i], x[i+1], arrow_length, 0, head_width=1.5*head_length, head_length=head_length, fc='k', ec='k')
if g!=None:
y = np.linspace(min_x,max_x,1000)
plt.plot(y,g(y),'r')
plt.title('Cobweb diagram')
plt.grid(True)
plt.show()
```
<div id='fpi' />
## Fixed Point Iteration
To learn about the Fixed-Point Iteration we will first learn about the concept of Fixed Point.
A Fixed Point of a function $g$ is a real number $r$, where $g(r) = r$
The Fixed-Point Iteration is based on the Fixed Point concept and works like this to find the root of a function:
\begin{equation} x_{0} = initial\_guess \\ x_{i+1} = g(x_{i})\end{equation}
To find an equation's solution using this method you'll have to rearrange the equation into the form $x = g(x)$. That way, you'll be iterating over the function $g(x)$, but you will **not** find $g$'s root; rather, you find the root of $f(x) = g(x) - x$ (or $f(x) = x - g(x)$). In the following example, we'll find the solution of $f(x) = x - \cos(x)$ by iterating over the function $g(x) = \cos(x)$.
```python
def fpi(g, x0, k, flag_cobweb=False):
x = np.empty(k+1)
x[0] = x0
error_i = np.nan
print(' i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1}')
print('--------------------------------------------------------------')
for i in range(k):
x[i+1] = g(x[i])
error_iminus1 = error_i
error_i = abs(x[i+1]-x[i])
print('%2d | %.10f | %.10f | %.10f | %.10f' %
(i,x[i],x[i+1],error_i,error_i/error_iminus1))
if flag_cobweb:
cobweb(x,g)
return x[-1]
```
```python
g = lambda x: np.cos(x)
fpi(g, 1.5, 20, True)
```
Let's quickly explain the Cobweb Diagram we have here. The <font color="blue">blue</font> line is the function $x$ and the <font color="red">red</font> is the function $g(x)$. The point in which they meet is $g$'s fixed point. In this particular example, we start at <font color="blue">$y = x = 1.5$</font> (the top right corner) and then we "jump" **vertically** to <font color="red">$y = \cos(1.5) \approx 0.07$</font>. After this, we jump **horizontally** to <font color="blue">$x = \cos(1.5) \approx 0.07$</font>. Then, we jump again **vertically** to <font color="red">$y = \cos\left(\cos(1.5)\right) \approx 0.997$</font> and so on. See the pattern here? We're just iterating over $x = \cos(x)$, getting closer to the center of the diagram where the fixed point resides, in $x \approx 0.739$.
It's very important to mention that the algorithm will converge only if the rate of convergence $S < 1$, where $S = \left| g'(r) \right|$. If you want to use this method, you'll have to construct $g(x)$ starting from $f(x)$ accordingly. In this example, $g(x) = \cos(x) \Rightarrow g'(x) = -\sin(x)$ and $|-\sin(0.739)| \approx 0.67$.
### Another example. Source: https://divisbyzero.com/2008/12/18/sharkovskys-theorem/amp/?__twitter_impression=true
```python
g = lambda x: -(3/2)*x**2+(11/2)*x-2
gp = lambda x: -3*x+11/2
a=-1/2.7
g2 = lambda x: x+a*(x-g(x))
#x=np.linspace(2,3,100)
#plt.plot(x,gp(x),'-')
#plt.plot(x,gp(x)*0+1,'r-')
#plt.plot(x,gp(x)*0-1,'g-')
#plt.grid(True)
#plt.show()
fpi(g2, 2.45, 12, True)
```
<div id='nm' />
## Newton's Method
For this method, we want to iteratively find some function $f(x)$'s root, that is, the number $r$ for which $f(r) = 0$. The algorithm is as follows:
\begin{equation} x_0 = initial\_guess \end{equation}
\begin{equation} x_{i+1} = x_i - \cfrac{f(x_i)}{f'(x_i)} \end{equation}
which means that you won't be able to find $f$'s root if $f'(r) = 0$. In this case, you would have to use the modified version of this method, but for now let's focus on the unmodified version first. Newton's (unmodified) method is said to have quadratic convergence.
```python
def newton_method(f, fp, x0, rel_error=1e-8, m=1):
#Initialization of hybrid error and absolute
hybrid_error = 100
error_i = np.inf
print('i | x(i) | x(i+1) | |x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2')
print('----------------------------------------------------------------------------------------')
#Iteration counter
i = 1
while (hybrid_error > rel_error and hybrid_error < 1e12 and i < 1e4):
#Newton's iteration
x1 = x0-m*f(x0)/fp(x0)
#Checking if root was found
if f(x1) == 0.0:
hybrid_error = 0.0
break
#Computation of hybrid error
hybrid_error = abs(x1-x0)/np.max([abs(x1),1e-12])
#Computation of absolute error
error_iminus1 = error_i
error_i = abs(x1-x0)
#Increasing counter
i += 1
#Showing some info
print("%d | %.10f | %.10f | %.20f | %.10f | %.10f" %
(i, x0, x1, error_i, error_i/error_iminus1, error_i/(error_iminus1**2)))
#Updating solution
x0 = x1
#Checking if solution was obtained
if hybrid_error < rel_error:
return x1
elif i>=1e4:
print('Newton''s Method diverged. Too many iterations!!')
return None
else:
print('Newton''s Method diverged!')
return None
```
```python
f = lambda x: np.sin(x)
fp = lambda x: np.cos(x) # the derivative of f
newton_method(f, fp, 3.1,rel_error=1e-14)
```
```python
f = lambda x: x**2
fp = lambda x: 2*x # the derivative of f
newton_method(f, fp, 3.1,rel_error=1e-2, m=2)
```
<div id='wilkinson' />
## Wilkinson Polynomial
https://en.wikipedia.org/wiki/Wilkinson%27s_polynomial
**Final question: Why is the root far far away from $16$?**
```python
x = sym.symbols('x', real=True)
W=1
for i in np.arange(1,21):
W*=(x-i)
W # Printing W nicely
```
```python
# Expanding the Wilkinson polynomial
We=sym.expand(W)
We
```
```python
# Just computiong the derivative
Wep=sym.diff(We,x)
Wep
```
```python
# Lamdifying the polynomial to be used with sympy
P=sym.lambdify(x,We)
Pp=sym.lambdify(x,Wep)
```
```python
# Using scipy function to compute a root
root = optimize.newton(P,16)
print(root)
```
<div id='acknowledgements' />
# Acknowledgements
* _Material created by professor Claudio Torres_ (`ctorres@inf.utfsm.cl`) _and assistants: Laura Bermeo, Alvaro Salinas, Axel Simonsen and Martín Villanueva. DI UTFSM. March 2016._ v1.1.
* _Update April 2020 - v1.32 - C.Torres_ : Re-ordering the notebook.
# Propose Classwork
Build a FPI such that given $x$ computes $\displaystyle \frac{1}{x}$. Write down your solution below or go and see the [solution](#sol1)
```python
print('Please try to think and solve before you see the solution!!!')
```
# In class
### From the textbook
```python
g1 = lambda x: 1-x**3
g2 = lambda x: (1-x)**(1/3)
g3 = lambda x: (1+2*x**3)/(1+3*x**2)
fpi(g3, 0.5, 10, True)
```
```python
g1p = lambda x: -3*x**2
g2p = lambda x: -(1/3)*(1-x)**(-2/3)
g3p = lambda x: ((1+3*x**2)*(6*x**2)-(1+2*x**3)*6*x)/((1+3*x**2)**2)
r=0.6823278038280194
print(g3p(r))
```
### Adding another implementation of FPI including and extra column for analyzing quadratic convergence
```python
def fpi2(g, x0, k, flag_cobweb=False):
x = np.empty(k+1)
x[0] = x0
error_i = np.inf
print(' i | x(i) | x(i+1) ||x(i+1)-x(i)| | e_i/e_{i-1} | e_i/e_{i-1}^2')
print('-----------------------------------------------------------------------------')
for i in range(k):
x[i+1] = g(x[i])
error_iminus1 = error_i
error_i = abs(x[i+1]-x[i])
print('%2d | %.10f | %.10f | %.10f | %.10f | %.10f' %
(i,x[i],x[i+1],error_i,error_i/error_iminus1,error_i/(error_iminus1**2)))
if flag_cobweb:
cobweb(x,g)
return x[-1]
```
Which function shows quadratic convergence? Why?
```python
g1 = lambda x: (4./5.)*x+1./x
g2 = lambda x: x/2.+5./(2*x)
g3 = lambda x: (x+5.)/(x+1)
fpi2(g1, 3.0, 30, True)
```
### Building a FPI to compute the cubic root of 7
```python
# What is 'a'? Can we find another 'a'?
a = -3*(1.7**2)
print(a)
```
```python
f = lambda x: x**3-7
g = lambda x: f(x)/a+x
r=fpi(g, 1.7, 14, True)
print(f(r))
```
### Playing with some roots
```python
f = lambda x: 8*x**4-12*x**3+6*x**2-x
fp = lambda x: 32*x**3-36*x**2+12*x-1
x = np.linspace(-1,1,1000)
plt.figure(figsize=(10,10))
plt.title('What are we seeing with the semiloigy plot? Is this function differentiable?')
plt.semilogy(x,np.abs(f(x)),'b-')
plt.semilogy(x,np.abs(fp(x)),'r-')
plt.grid()
plt.ylabel('$f(x)$',fontsize=16)
plt.xlabel('$x$',fontsize=16)
plt.show()
```
```python
r=newton_method(f, fp, 0.3, rel_error=1e-8, m=1)
print([r,f(r)])
# Is this showing quadratic convergence? If not, can you fix it?
```
# Solutions
<div id='sol1' />
Problem: Build a FPI such that given $x$ computes $\displaystyle \frac{1}{x}$
```python
# We are finding the 1/a
# Solution code:
a = 2.1
g = lambda x: 2*x-a*x**2
gp = lambda x: 2-2*a*x
r=fpi2(g, 0.7, 7, flag_cobweb=True)
print([r,1/a])
# Are we seeing quadratic convergence?
```
### What is this plot telling us?
```python
xx=np.linspace(0.2,0.8,1000)
plt.figure(figsize=(10,10))
plt.plot(xx,g(xx),'-',label=r'$g(x)$')
plt.plot(xx,gp(xx),'r-',label=r'$gp(x)$')
plt.plot(xx,xx,'g-',label=r'$x$')
plt.plot(xx,0*xx+1,'k--')
plt.plot(xx,0*xx-1,'k--')
plt.legend(loc='best')
plt.grid()
plt.show()
```
# In-Class 20200420
```python
g = lambda x: (x**2-1.)/2.
gh = lambda x: x+0.7*(x-g(x))
#fpi(g, 2, 15, True)
fpi2(gh, 0, 15, True)
```
# In-Class 20200423
```python
g1 = lambda x: np.cos(x)
g2 = lambda x: np.cos(x)**2
g3 = lambda x: np.sin(x)
g4 = lambda x: 1-5*x+(15/2)*x**2-(5/2)*x**3 # (0,1) y (1,2)
interact(lambda x0, n: fpi2(g4, x0, n, True),x0=widgets.FloatSlider(min=0, max=3, step=0.01, value=0), n=widgets.IntSlider(min=10, max=100, step=1, value=10))
```
```python
```
| d4700a7da8ab0ff2e34cbace88bd018ccbb33691 | 267,638 | ipynb | Jupyter Notebook | SC1/04_roots_of_1D_equations.ipynb | maxaubel/Scientific-Computing | 57a04b5d3e3f7be2fe9b06127f7e569659698656 | [
"BSD-3-Clause"
] | 37 | 2017-06-05T21:01:15.000Z | 2022-03-17T12:51:55.000Z | SC1/04_roots_of_1D_equations.ipynb | maxaubel/Scientific-Computing | 57a04b5d3e3f7be2fe9b06127f7e569659698656 | [
"BSD-3-Clause"
] | null | null | null | SC1/04_roots_of_1D_equations.ipynb | maxaubel/Scientific-Computing | 57a04b5d3e3f7be2fe9b06127f7e569659698656 | [
"BSD-3-Clause"
] | 63 | 2017-10-02T21:21:30.000Z | 2022-03-23T02:23:22.000Z | 200.929429 | 37,024 | 0.886974 | true | 4,922 | Qwen/Qwen-72B | 1. YES
2. YES | 0.749087 | 0.793106 | 0.594106 | __label__eng_Latn | 0.845086 | 0.218636 |
<a href="https://colab.research.google.com/github/Jun-629/20MA573/blob/master/src/Hw4_Monotonicity_in_volatility.ipynb" target="_parent"></a>
- __Suppose $f$ is convex and $X$ is submartingale, prove that
$g(t) = \mathbb E[f(X_t)]$ is increasing.__
__Pf:__
Assume that $(X_t)$ is a submartingale with respect to the filtration $\{\mathcal{F}_t\}$, i.e. $\mathbb E[X_t \mid \mathcal{F}_s] \ge X_s$ for all $t \ge s$; in particular $$\mathbb E[X_t] \ge \mathbb E[X_s], \quad \forall t \ge s.$$
First note that convexity alone is not enough. Take $f(x) = -x$, which is convex, and any submartingale for which $\mathbb E[X_t]$ is strictly increasing (e.g. the deterministic process $X_t = t$). Then
\begin{equation}
\begin{aligned}
\mathbb E[f(X_t) - f(X_s) | \mathcal{F_s}] &= \mathbb E[-X_t - (-X_s)| \mathcal{F_s}] \\
&= X_s - \mathbb E[X_t| \mathcal{F_s}] \\
&\le 0, \quad \forall t \ge s,
\end{aligned}
\end{equation}
so $g(t) = \mathbb E[f(X_t)]$ is decreasing rather than increasing. This is a counterexample to the statement as written, and we must additionally assume that $f$ is increasing.
On the other hand, if $f$ is in addition increasing, then by the conditional Jensen inequality and the submartingale property we have
$$\mathbb E[f(X_t)|\mathcal{F_s}] \ge f(\mathbb E[X_t|\mathcal{F_s}]) \ge f(X_s), \forall t \ge s$$
Taking expectations on both sides and using the tower property of conditional expectation then gives
$$\mathbb E[f(X_t)] = \mathbb E[\mathbb E[f(X_t)|\mathcal{F_s}]] \ge \mathbb E[f(X_s)], \forall t \ge s$$
Therefore, $g(t) = \mathbb E[f(X_t)]$ is increasing.
__Q.E.D__
- __Let $t \mapsto e^{-rt}S_t$ be a martingale,
then prove that $C(t) = \mathbb E[e^{-rt}(S_t - K)^+]$ is increasing.__
__Pf:__
Assume that $e^{-rt}S_t$ is a martingale with respect to the filtration $\{\mathcal{F}_t\}$, which means
$$\mathbb E[e^{-rt}S_t|\mathcal{F_s}] = e^{-rs}S_s, \quad \forall t \ge s.$$
Since $x \mapsto x^+$ is convex and nondecreasing, the conditional Jensen inequality gives, for $t \ge s$ (using $r \ge 0$, so that $e^{-rt}K \le e^{-rs}K$),
\begin{equation}
\begin{split}
\mathbb E[e^{-rt}(S_t - K)^+|\mathcal{F_s}] &\ge \left(\mathbb E[e^{-rt}(S_t - K)|\mathcal{F_s}]\right)^+ \\
&= \left(e^{-rs}S_s - e^{-rt}K\right)^+ \\
&\ge \left(e^{-rs}S_s - e^{-rs}K\right)^+ \\
&= e^{-rs}(S_s - K)^+.
\end{split}
\end{equation}
Thus, $$\mathbb E[e^{-rt}(S_t - K)^+|\mathcal{F_s}] \ge e^{-rs}(S_s - K)^+, \quad \forall t \ge s.$$
Therefore, by taking expectation of both sides, we will have
$$\mathbb E[e^{-rt}(S_t - K)^+] = \mathbb E[ \mathbb E[e^{-rt}(S_t - K)^+|\mathcal{F_s}]] \ge \mathbb E[e^{-rs}(S_s - K)^+], \forall t \ge s,$$
which shows that $C(t) = \mathbb E[e^{-rt}(S_t - K)^+]$ is increasing.
__Q.E.D__
- __Suppose $r = 0$ and $S$ is martingale, prove that
$P(t) = \mathbb E [(S_t - K)^-]$ is increasing.__
__Pf:__
Assuming that $S_t$ is a martingale with respect to the filtration $\{\mathcal{F_n}\}_{n\in\mathbb{N}}$, which means
$$\mathbb E[S_t|\mathcal{F_s}] = S_s, \forall t \ge s.$$
Since $x \mapsto x^- = \max(-x, 0)$ is convex, the conditional Jensen inequality together with the martingale property gives, for $t \ge s$,
\begin{equation}
\begin{split}
\mathbb E[(S_t - K)^-|\mathcal{F_s}] &\ge \left(\mathbb E[S_t - K|\mathcal{F_s}]\right)^- \\
&= (S_s - K)^-.
\end{split}
\end{equation}
Thus,$$\mathbb E[(S_t - K)^-|\mathcal{F_s}] \ge (S_s - K)^-, \forall t \ge s.$$
Therefore, by taking expectation of both sides, we will have
$$\mathbb E[(S_t - K)^-] = \mathbb E[ \mathbb E[(S_t - K)^-|\mathcal{F_s}]] \ge \mathbb E[(S_s - K)^-], \forall t \ge s,$$
which shows that $P(t) = \mathbb E[(S_t - K)^-]$ is increasing.
__Q.E.D__
<table>
 <tr align="left">
  <td>Text provided under a Creative Commons Attribution license, CC-BY.  All code is made available under the FSF-approved MIT license.  (c) Kyle T. Mandli</td>
 </tr>
</table>
```python
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import warnings
import sympy
sympy.init_printing()
```
# Root Finding and Optimization
Our goal in this section is to develop techniques to approximate the roots of a given function $f(x)$, that is, to find solutions $x$ such that $f(x)=0$.  At first glance this may not seem like a particularly meaningful exercise; however, this problem arises in a wide variety of circumstances.
For example, suppose that you are trying to find a solution to the equation
$$
x^2 + x = \alpha\sin{x}.
$$
where $\alpha$ is a real parameter. Simply rearranging, the expression can be rewritten in the form
$$
f(x) = x^2 + x -\alpha\sin{x} = 0.
$$
Determining the roots of the function $f(x)$ is now equivalent to determining the solution to the original expression. Unfortunately, a number of other issues arise. In particular, with non-linear equations, there may be multiple solutions, or no real solutions at all.
The task of approximating the roots of a function can be a deceptively difficult thing to do. For much of the treatment here we will ignore many details such as existence and uniqueness, but you should keep in mind that they are important considerations.
**GOAL:**
For this section we will focus on multiple techniques for efficiently and accurately solving the fundamental problem $f(x)=0$ for functions of a single variable.
### Objectives
* Understand the general rootfinding problem as $f(x)=0$
* Understand the equivalent formulation as a fixed point problem $x = g(x)$
* Understand fixed point iteration and its stability analysis
* Understand definitions of convergence and order of convergence
* Understand practical rootfinding algorithms and their convergence
* Bisection
* Newton's method
* Secant method
* Hybrid methods and scipy.optimize routines (root_scalar)
* Understand basic Optimization routines
* Parabolic Interpolation
* Golden Section Search
* scipy.optimize routines (minimize_scalar and minimize)
### Example: Future Time Annuity
Can I ever retire?
$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$
* $A$ total value after $n$ years
* $P$ is payment amount per compounding period
* $m$ number of compounding periods per year
* $r$ annual interest rate
* $n$ number of years to retirement
#### Question:
For a fix monthly Payment $P$, what does the minimum interest rate $r$ need to be so I can retire in 20 years with \$1M.
Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.
$$
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
$$
```python
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.15, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, 10),label='10 years',linewidth=2)
axes.plot(r, total_value(P, m, r, 15),label='15 years',linewidth=2)
axes.plot(r, total_value(P, m, r, n),label='20 years',linewidth=2)
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)", fontsize=16)
axes.set_ylabel("A (total value)", fontsize=16)
axes.set_title("When can I retire?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((r.min(), r.max()))
axes.set_ylim((total_value(P, m, r.min(), 10), total_value(P, m, r.max(), n)))
axes.legend(loc='best')
axes.grid()
plt.show()
```
## Fixed Point Iteration
How do we go about solving this?
Could try to solve at least partially for $r$:
$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$
$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$
$$ r = g(r)$$
or
$$ g(r) - r = 0$$
#### Plot these
$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
```python
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(P, m, r, n, goal),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("$g(r)$",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim((g(P, m, 0.00, n, goal), g(P, m, 0.1, n, goal)))
axes.legend(fontsize=14)
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, g(P, m, r, n, goal)-r,label='$r - g(r)$')
axes.plot(r, numpy.zeros(r.shape), 'r--',label='$0$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("residual",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.legend(fontsize=14)
axes.grid()
plt.show()
```
### Question:
A single root $r>0$ clearly exists around $r=0.088$. But how to find it?
One option might be to take a guess say $r_0 = 0.088$ and form the iterative scheme
$$
\begin{align}
r_1 &= g(r_0)\\
r_2 &= g(r_1)\\
&\vdots \\
r_{k} &= g(r_{k-1})\\
\end{align}
$$
and hope this converges as $k\rightarrow\infty$ (or faster)
### Easy enough to code
```python
r = 0.088
K = 20
for k in range(K):
print(r)
r = g(P,m,r,n,goal)
```
### Example 2:
Let $f(x) = x - e^{-x}$, solve $f(x) = 0$
Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
```python
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, numpy.exp(-x), 'r',label='g(x)=exp(-x)$')
axes.plot(x, x, label='$x$')
axes.set_xlabel("$x$",fontsize=16)
axes.legend()
plt.grid()
f = lambda x : x - numpy.exp(-x)
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),label='$f(x) = x - g(x)$')
axes.plot(x, numpy.zeros(x.shape), 'r--',label='$0$')
axes.set_xlabel("$x$",fontsize=16)
axes.set_ylabel("residual",fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.legend(fontsize=14)
axes.grid()
plt.show()
plt.show()
```
#### Again, consider the iterative scheme
set $x_0$ then compute
$$
x_k = g(x_{k-1})\quad \mathrm{for}\quad k=1,2,3\ldots
$$
or again in code
```python
x = x0
for i in range(N):
x = g(x)
```
```python
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(fontsize=14)
x = 0.4
print('\tx\t exp(-x)\t residual')
for steps in range(6):
residual = numpy.abs(numpy.exp(-x) - x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, numpy.exp(-x), residual))
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x+0.01, numpy.exp(-x)+0.01, steps, fontsize="15")
x = numpy.exp(-x)
plt.grid()
plt.show()
```
### Example 3:
Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.
Note that this problem is equivalent to $x = e^{-x}$.
```python
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)=-\log(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_ylabel("f(x)",fontsize=16)
axes.set_ylim([0, 1.5])
axes.legend(loc='best',fontsize=14)
x = 0.55
print('\tx\t -log(x)\t residual')
for steps in range(5):
residual = numpy.abs(numpy.log(x) + x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, -numpy.log(x), residual))
axes.plot(x, -numpy.log(x),'kx')
axes.text(x + 0.01, -numpy.log(x) + 0.01, steps, fontsize="15")
x = -numpy.log(x)
plt.grid()
plt.show()
```
### These are equivalent problems!
Something is awry...
## Analysis of Fixed Point Iteration
Existence and uniqueness of fixed point problems
*Existence:*
Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] ~~ \forall ~~ x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
```python
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = e^{-x}$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
```
```python
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = -\ln(x)$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
plt.show()
```
*Uniqueness:*
Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that
$$
|g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)
$$
then $g$ has a unique fixed point $P \in [a,b]$
```python
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("$x$",fontsize=18)
axes.set_ylabel("$|g\,'(x)|$",fontsize=18)
axes.set_ylim((0.0, 1.1))
axes.set_title("$g(x) = e^{-x}$",fontsize=20)
axes.grid()
plt.show()
```
*Asymptotic convergence*: Behavior of fixed point iterations
$$x_{k+1} = g(x_k)$$
Assume that a fixed point $x^\ast$ exists, such that
$$
x^\ast = g(x^\ast)
$$
Then define
$$
x_{k+1} = x^\ast + e_{k+1} \quad \quad x_k = x^\ast + e_k
$$
substituting
$$
x^\ast + e_{k+1} = g(x^\ast + e_k)
$$
Evaluate $$
g(x^\ast + e_k)
$$
Taylor expand $g(x)$ about $x^\ast$ and substitute $$x = x_k = x^\ast + e_k$$
$$
g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)
$$
from our definition $$x^\ast + e_{k+1} = g(x^\ast + e_k)$$ we have
$$
x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)
$$
Note that because $x^* = g(x^*)$ these terms cancel leaving
$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$
So if $|g'(x^*)| \leq K < 1$ we can conclude that, to leading order,
$$|e_{k+1}| \approx |g'(x^*)| \, |e_k| \leq K |e_k|,$$
which shows convergence.  Also note that the rate constant is essentially $|g'(x^*)|$.
### Convergence of iterative schemes
Given any iterative scheme where
$$|e_{k+1}| = C |e_k|^n$$
If $C < 1$ and:
- $n=1$ then the scheme is **linearly convergent**
- $n=2$ then the scheme is **quadratically convergent**
- $n > 1$ the scheme can also be called **superlinearly convergent**
If $n = 1$ and $C > 1$ then the scheme is **divergent** (for $n > 1$ the scheme still converges provided the initial error is small enough that $C |e_0|^{n-1} < 1$)
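As a quick sanity check, the order $n$ and constant $C$ can be estimated numerically by fitting $\log|e_{k+1}|$ against $\log|e_k|$.  The sketch below does this for the fixed-point iteration $x_{k+1} = e^{-x_k}$ from the example above; the reference value for $x^\ast$ is simply a long-run iterate rather than an exact root, which is an approximation.

```python
# Sketch: estimate the order of convergence n and constant C empirically by fitting
#   log|e_{k+1}| = log(C) + n log|e_k|
# for the fixed point iteration x_{k+1} = exp(-x_k).
import numpy

g = lambda x: numpy.exp(-x)

# use a long run as a stand-in for the exact fixed point x*
x_star = 0.5
for _ in range(200):
    x_star = g(x_star)

# record the errors of a short run started elsewhere
x = 0.9
errors = []
for _ in range(15):
    errors.append(abs(x - x_star))
    x = g(x)
errors = numpy.array(errors)

n_fit, log_C = numpy.polyfit(numpy.log(errors[:-1]), numpy.log(errors[1:]), 1)
print("estimated order n = {:.2f}, C = {:.2f}".format(n_fit, numpy.exp(log_C)))
print("|g'(x*)|          = {:.2f}".format(numpy.exp(-x_star)))
```

For a linearly convergent fixed-point iteration the fitted slope should be close to $1$ and the constant close to $|g'(x^\ast)|$.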
### Examples Revisited
* Example 1:
$$
g(x) = e^{-x}\quad\mathrm{with}\quad x^* \approx 0.56
$$
$$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$
* Example 2:
$$g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$$
$$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$
* Example 3: The retirement problem
$$
r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
$$
```python
r, P, m, A, n = sympy.symbols('r P m A n')
g_sym = P * m / A * ((1 + r /m)**(m * n) - 1)
g_sym
```
```python
g_prime = g_sym.diff(r)
g_prime
```
```python
r_star = 0.08985602484084668
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
print("g(r*) - r* = {}".format(g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}) - r_star))
```
* Example 3: The retirement problem
$$
r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
$$
```python
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
g = sympy.lambdify(r, g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
r = numpy.linspace(-0.01, 0.1, 100)
fig = plt.figure(figsize=(7,5))
fig.set_figwidth(2. * fig.get_figwidth())
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(r),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=14)
axes.set_ylabel("$g(r)$",fontsize=14)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=14)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim(g(0.00), g(0.1))
axes.legend()
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$",fontsize=14)
axes.set_ylabel("$g'(r)$",fontsize=14)
axes.grid()
plt.show()
```
## Better ways for root-finding/optimization
If $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x) = g(x) - x$, i.e. $f(x^*) = 0$.
For instance:
$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$
or
$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$
## Classical Methods
- Bisection (linear convergence)
- Newton's Method (quadratic convergence)
- Secant Method (super-linear)
## Combined Methods
- RootSafe (Newton + Bisection)
- Brent's Method (Secant + Bisection)
### Bracketing and Bisection
A **bracket** is an interval $[a,b]$ that contains at least one zero or minima/maxima of interest.
In the case of a zero the bracket should satisfy
$$
\text{sign}(f(a)) \neq \text{sign}(f(b)).
$$
In the case of minima or maxima we need
$$
\text{sign}(f'(a)) \neq \text{sign}(f'(b))
$$
**Theorem**:
Let
$$
f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))
$$
then there exists a number
$$
c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.
$$
(proof uses intermediate value theorem)
**Example**: The retirement problem again. For fixed $A, P, m, n$
$$
f(r) = A - \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
$$
```python
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
```
Basic bracketing algorithms shrink the bracket while ensuring that the root/extrema remains within the bracket.
What ways could we "shrink" the bracket so that the end points converge to the root/extrema?
#### Bisection Algorithm
Given a bracket $[a,b]$ and a function $f(x)$ -
1. Initialize with bracket
2. Iterate
1. Cut bracket in half and check to see where the zero is
2. Set bracket to new bracket based on what direction we went
##### basic code
```python
def bisection(f, a, b, tol):
    MAX_STEPS = 100   # cap the number of bisection steps
    c = (a + b) / 2.0
    f_a = f(a)
    f_b = f(b)
    f_c = f(c)
    for step in range(1, MAX_STEPS + 1):
if numpy.abs(f_c) < tol:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
c = (a + b)/ 2.0
f_c = f(c)
return c
```
### Some real code
```python
# real code with standard bells and whistles
def bisection(f,a,b,tol = 1.e-6):
""" uses bisection to isolate a root x of a function of a single variable f such that f(x) = 0.
the root must exist within an initial bracket a < x < b
returns when f(x) at the midpoint of the bracket < tol
Parameters:
-----------
f: function of a single variable f(x) of type float
a: float
left bracket a < x
b: float
right bracket x < b
Note: the signs of f(a) and f(b) must be different to insure a bracket
tol: float
tolerance. Returns when |f((a+b)/2)| < tol
Returns:
--------
x: float
midpoint of final bracket
x_array: numpy array
history of bracket centers (for plotting later)
Raises:
-------
ValueError:
if initial bracket is invalid
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 1000
# initialize
c = (a + b)/2.
c_array = [ c ]
f_a = f(a)
f_b = f(b)
f_c = f(c)
# check bracket
if numpy.sign(f_a) == numpy.sign(f_b):
raise ValueError("no bracket: f(a) and f(b) must have different signs")
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < tol:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
c = (a + b)/2.
f_c = f(c)
c_array.append(c)
if step == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return c, numpy.array(c_array)
```
```python
# set up function as an inline lambda function
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
```
```python
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8)
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
```
```python
r = numpy.linspace(0.05, 0.11, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a), 'ko')
axes.plot([a, a], [0.0, f(a)], 'k--')
axes.text(a, f(a), str(0), fontsize="15")
axes.plot(b, f(b), 'ko')
axes.plot([b, b], [f(b), 0.0], 'k--')
axes.text(b, f(b), str(1), fontsize="15")
axes.grid()
# plot out the first N steps
N = 5
for k,r in enumerate(r_array[:N]):
# Plot iteration
axes.plot(r, f(r),'kx')
axes.text(r, f(r), str(k + 2), fontsize="15")
axes.plot(r_star, f(r_star), 'go', markersize=10)
axes.set_title('Bisection method: first {} steps'.format(N), fontsize=20)
plt.show()
```
What is the smallest tolerance that can be achieved with this routine? Why?
```python
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8 )
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
```
```python
# this might be useful
print(numpy.diff(r_array))
```
#### Convergence of Bisection
Generally have
$$
|e_{k+1}| = C |e_k|^n
$$
where we need $C < 1$ and $n > 0$.
Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with
$$
e_k \approx \Delta x_k
$$
and therefore
$$
e_{k+1} \approx \frac{1}{2} \Delta x_k.
$$
Due to the relationship then between $x_k$ and $e_k$ we then know
$$
|e_{k+1}| = \frac{1}{2} |e_k|
$$
so therefore the method is linearly convergent.
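This is easy to confirm from the bisection run above (a sketch reusing the `r_array` returned there): successive midpoints differ by a quarter of the current bracket width, so consecutive differences shrink by a factor of two.

```python
# Sketch: confirm the linear rate using the midpoint history r_array from the run above.
# c_{k+1} - c_k is (up to sign) a quarter of the current bracket width, so the
# magnitudes of consecutive differences should halve every step (C = 1/2).
dr = numpy.diff(r_array)
print(dr[1:] / dr[:-1])
```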
### Newton's Method (Newton-Raphson)
- Given a bracket, bisection is guaranteed to converge linearly to a root
- However bisection uses almost no information about $f(x)$ beyond its sign at a point
- Can we do "better"? <font color='red'>Newton's method</font>, *when well behaved* can achieve quadratic convergence.
**Basic Ideas**: There are multiple interpretations we can use to derive Newton's method
* Use Taylor's theorem to estimate a correction to minimize the residual $f(x)=0$
* A geometric interpretation that approximates $f(x)$ locally as a straight line to predict where $x^*$ might be.
* As a special case of a fixed-point iteration
Perhaps the simplest derivation uses Taylor series. Consider an initial guess at point $x_k$. For arbitrary $x_k$, it's unlikely $f(x_k)=0$. However we can hope there is a correction $\delta_k$ such that at
$$
x_{k+1} = x_k + \delta_k
$$
and
$$
f(x_{k+1}) = 0
$$
expanding in a Taylor series around point $x_k$
$$
f(x_k + \delta_k) \approx f(x_k) + f'(x_k) \delta_k + O(\delta_k^2)
$$
substituting into $f(x_{k+1})=0$ and dropping the higher order terms gives
$$
f(x_k) + f'(x_k) \delta_k =0
$$
or solving for the correction
$$
\delta_k = -f(x_k)/f'(x_k)
$$
which leads to the update for the next iteration
$$
x_{k+1} = x_k + \delta_k
$$
or
$$
x_{k+1} = x_k -f(x_k)/f'(x_k)
$$
rinse and repeat, as it's still unlikely that $f(x_{k+1})=0$ (but we hope the error will be reduced)
### Algorithm
1. Initialize $x = x_0$
1. While ( $f(x) > tol$ )
- solve $\delta = -f(x)/f'(x)$
- update $x \leftarrow x + \delta$
### Geometric interpretation
By truncating the taylor series at first order, we are locally approximating $f(x)$ as a straight line tangent to the point $f(x_k)$. If the function was linear at that point, we could find its intercept such that $f(x_k+\delta_k)=0$
```python
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r",fontsize=16)
axes.set_ylabel("f(r)",fontsize=16)
axes.set_title("Newton-Raphson Steps",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
```
Equivalently, the tangent line at $x_k$ has slope $f'(x_k)$; requiring it to cross zero at $x_{k+1}$ means
$$
    f'(x_k) = \frac{0 - f(x_k)}{x_{k+1} - x_k}
$$
and solving for $x_{k+1}$ gives
$$
    x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)},
$$
which is the classic Newton-Raphson iteration
### Some code
```python
def newton(f,f_prime,x0,tol = 1.e-6):
""" uses newton's method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
f_prime: function f'(x)
returns type: float
x0: float
initial guess
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
x = x0
x_array = [ x0 ]
for k in range(1, MAX_STEPS + 1):
x = x - f(x) / f_prime(x)
x_array.append(x)
if numpy.abs(f(x)) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x, numpy.array(x_array)
```
### Set the problem up
```python
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
```
### and solve
```python
x0 = 0.06
x, x_array = newton(f, f_prime, x0, tol=1.e-8)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
print(f_prime(x)*numpy.finfo('float').eps)
```
```python
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Newton-Raphson Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
```
What is the smallest tolerance that can be achieved with this routine? Why?
### Example:
$$f(x) = x - e^{-x}$$
$$f'(x) = 1 + e^{-x}$$
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$
#### setup in sympy
```python
x = sympy.symbols('x')
f = x - sympy.exp(-x)
f_prime = f.diff(x)
f, f_prime
```
#### and solve
```python
f = sympy.lambdify(x,f)
f_prime = sympy.lambdify(x,f_prime)
x0 = 0.
x, x_array = newton(f, f_prime, x0, tol = 1.e-9)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
```
```python
xa = numpy.linspace(-1,1,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx',markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x - e^{-x}$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
```
### Asymptotic Convergence of Newton's Method
Newton's method can be also considered a fixed point iteration
$$x_{k+1} = g(x_k)$$
with $g(x) = x - \frac{f(x)}{f'(x)}$
Again if $x^*$ is the fixed point and $e_k$ the error at iteration $k$:
$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$
Taylor Expansion around $x^*$
$$
x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + O(e_k^3)
$$
Note that as before $x^*$ and $g(x^*)$ cancel:
$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$
What about $g'(x^*)$ though?
$$\begin{aligned}
g(x) &= x - \frac{f(x)}{f'(x)} \\
g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}
\end{aligned}$$
which evaluated at $x = x^*$ becomes
$$
g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0
$$
since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved).
Back to our expansion we have again
$$
e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots
$$
which simplifies to
$$
e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots
$$
which leads to
$$
|e_{k+1}| < \left | \frac{g''(x^*)}{2!} \right | |e_k|^2
$$
Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative.
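A short numerical check makes the quadratic rate visible.  This is a sketch reusing the `newton` routine defined above, with the final iterate standing in for the exact root (an approximation): the ratio $|e_{k+1}|/|e_k|^2$ settles down to a constant.

```python
# Sketch: observe the quadratic error decay for the simple root of f(x) = x - exp(-x).
f = lambda x: x - numpy.exp(-x)
f_prime = lambda x: 1.0 + numpy.exp(-x)
x_star, iterates = newton(f, f_prime, 0.0, tol=1e-14)

e = numpy.abs(iterates - x_star)   # use the final iterate as the reference root
for k in range(len(e) - 2):
    # e_{k+1} / e_k^2 should level off near f''(x*) / (2 f'(x*))
    print("e_k = {:.3e},   e_k+1 / e_k^2 = {:.3f}".format(e[k], e[k + 1] / e[k]**2))
```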
#### Example: Convergence for a non-simple root
Consider our first problem
$$
f(x) = x^2 + x - \sin(x)
$$
the situation is, unfortunately, not as rosy.  Why might this be?  (Note that at the root $x^* = 0$ we also have $f'(x^*) = 0$, so the root is not simple.)
#### Setup the problem
```python
f = lambda x: x*x + x - numpy.sin(x)
f_prime = lambda x: 2*x + 1. - numpy.cos(x)
x0 = .9
x, x_array = newton(f, f_prime, x0, tol= 1.e-16)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
```
```python
xa = numpy.linspace(-2,2,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx', markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x^2 +x - sin(x)$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
```
### Convergence appears linear, can you show this?:
$$f(x) = x^2 + x -\sin (x)$$
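A numerical check (not a proof) is easy using the iterate history `x_array` from the run above: since the root is $x^* = 0$, the error is just $|x_k|$, and the ratio of successive errors should settle near $1/2$.

```python
# Sketch: error ratios for the double root at x* = 0, reusing x_array from the cell above.
e = numpy.abs(x_array)      # the root is x* = 0, so the error is |x_k|
print(e[1:] / e[:-1])       # the ratios tend towards ~0.5, i.e. linear convergence
```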
### Example: behavior of Newton with multiple roots
$f(x) = \sin (2 \pi x)$
$$x_{k+1} = x_k - \frac{\sin (2 \pi x_k)}{2 \pi \cos (2 \pi x_k)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x_k)$$
```python
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
x_kp = lambda x: x - f(x)/f_prime(x)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $x_{k+1}(x)$",fontsize=18)
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
```
### Basins of Attraction
Given a point $x_0$ can we determine if Newton-Raphson converges and to **which root** it converges to?
A *basin of attraction* $X$ for Newton's methods is defined as the set such that $\forall x \in X$ Newton iterations converges to the same root. Unfortunately this is far from a trivial thing to determine and even for simple functions can lead to regions that are complicated or even fractal.
```python
# calculate the basin of attraction for f(x) = sin(2\pi x)
x_root = numpy.zeros(x.shape)
N_steps = numpy.zeros(x.shape)
for i,xk in enumerate(x):
x_root[i], x_root_array = newton(f, f_prime, xk)
N_steps[i] = len(x_root_array)
```
```python
y = numpy.linspace(-2,2)
X,Y = numpy.meshgrid(x,y)
X_root = numpy.outer(numpy.ones(y.shape),x_root)
plt.figure(figsize=(8, 6))
plt.pcolor(X, Y, X_root,vmin=-5, vmax=5,cmap='seismic')
cbar = plt.colorbar()
cbar.set_label('$x_{root}$', fontsize=18)
plt.plot(x, f(x), 'k-')
plt.plot(x, numpy.zeros(x.shape),'k--', linewidth=0.5)
plt.xlabel('x', fontsize=16)
plt.title('Basins of Attraction: $f(x) = \sin{2\pi x}$', fontsize=18)
#plt.xlim(0.25-.1,0.25+.1)
plt.show()
```
### Fractal Basins of Attraction
If $f(x)$ is complex (for $x$ complex), then the basins of attraction can be beautiful and fractal
Plotted below are two fairly simple equations which demonstrate the issue:
1. $f(x) = x^3 - 1$
2. Kepler's equation $\theta - e \sin \theta = M$
```python
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
roots = numpy.roots([1., 0., 0., -1])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
#axes.contourf(X, Y, numpy.sign(numpy.imag(R))*numpy.abs(R),vmin = -10, vmax = 10)
axes.contourf(X, Y, R, vmin = -8, vmax= 8.)
axes.scatter(numpy.real(roots), numpy.imag(roots))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
axes.grid()
plt.show()
```
```python
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contourf(X, Y, R, vmin = 0, vmax = 10)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
plt.show()
```
#### Other Issues
Need to supply both $f(x)$ and $f'(x)$, could be expensive
Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$
Can use symbolic differentiation (`sympy`)
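For instance, a short `sympy` sketch that generates $f'(r)$ for the FTV equation instead of differentiating by hand (the symbol and variable names below are just illustrative choices):

```python
# Sketch: let sympy do the differentiation of the FTV residual f(r)
import sympy

r_s, A_s, m_s, P_s, n_s = sympy.symbols('r A m P n', positive=True)
f_sym = A_s - m_s * P_s / r_s * ((1 + r_s / m_s)**(m_s * n_s) - 1)
f_prime_sym = f_sym.diff(r_s)

# turn the symbolic derivative into a fast numerical function
f_prime = sympy.lambdify(r_s, f_prime_sym.subs({A_s: 1e6, m_s: 12, P_s: 1500.0, n_s: 20.0}))
print(f_prime(0.08))
```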
### Secant Methods
Is there a method with the convergence of Newton's method but without the extra derivatives? What way would you modify Newton's method so that you would not need $f'(x)$?
Given $x_k$ and $x_{k-1}$ represent the derivative as the approximation
$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$
Combining this with the Newton approach leads to
$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$
This leads to superlinear but not quite quadratic convergence:  the order of convergence is the golden ratio, $(1 + \sqrt{5})/2 \approx 1.618$.
Alternative interpretation, fit a line through two points and see where they intersect the x-axis.
$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$
$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$
$$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$
$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$
Now solve for $x_{k+1}$, which is where the line intersects the x-axis ($y=0$)
$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$
$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
#### Secant Method
$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
```python
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=14)
axes.set_title("Secant Method", fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
```
What would the algorithm look like for such a method?
#### Algorithm
Given $f(x)$, a `TOLERANCE`, and a `MAX_STEPS`
1. Initialize two points $x_0$, $x_1$, $f_0 = f(x_0)$, and $f_1 = f(x_1)$
2. Loop for k=2, to `MAX_STEPS` is reached or `TOLERANCE` is achieved
1. Calculate new update
$$x_{2} = x_1 - \frac{f(x_1) (x_1 - x_{0})}{f(x_1) - f(x_{0})}$$
2. Check for convergence and break if reached
3. Update parameters $x_0 = x_1$, $x_1 = x_{2}$, $f_0 = f_1$ and $f_1 = f(x_1)$
#### Some Code
```python
def secant(f, x0, x1, tol = 1.e-6):
""" uses a linear secant method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
x0: float
first point to initialize the algorithm
x1: float
second point to initialize the algorithm x1 != x0
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
ValueError:
if x1 is too close to x0
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
if numpy.isclose(x0, x1):
raise ValueError('Initial points are too close (preferably should be a bracket)')
x_array = [ x0, x1 ]
for k in range(1, MAX_STEPS + 1):
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
x_array.append(x2)
if numpy.abs(f(x2)) < tol:
break
x0 = x1
x1 = x2
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x2, numpy.array(x_array)
```
### Set the problem up
```python
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
```
### and solve
```python
x0 = 0.06
x1 = 0.07
x, x_array = secant(f, x0, x1, tol= 1.e-7)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
```
```python
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Secant Method Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
```
#### Comments
- Secant method as shown is equivalent to linear interpolation
- Can use higher order interpolation for higher order secant methods
- Convergence is not quite quadratic
- Not guaranteed to converge
- Does not preserve brackets
- Almost as good as Newton's method if your initial guess is good.
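
Before moving to hybrid schemes, a rough head-to-head comparison is instructive.  The sketch below reuses the `bisection`, `newton`, and `secant` routines defined earlier on the retirement problem; the tolerance, starting data, and the `hist_*` variable names are choices made here to match the examples above.

```python
# Sketch: iteration counts for the three classical methods on the retirement problem
f = lambda r: 1e6 - 12 * 1500.0 / r * ((1.0 + r / 12)**(12 * 20.0) - 1.0)
f_prime = lambda r: (-1500.0 * 12 * 20.0 * (1.0 + r / 12)**(12 * 20.0) / (r * (1.0 + r / 12))
                     + 1500.0 * 12 * ((1.0 + r / 12)**(12 * 20.0) - 1.0) / r**2)

r_b, hist_b = bisection(f, 0.07, 0.10, tol=1e-7)
r_n, hist_n = newton(f, f_prime, 0.06, tol=1e-7)
r_s, hist_s = secant(f, 0.06, 0.07, tol=1e-7)

print("bisection: {} iterates".format(len(hist_b)))
print("newton:    {} iterates".format(len(hist_n)))
print("secant:    {} iterates".format(len(hist_s)))
```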
### Hybrid Methods
Combine attributes of methods with others to make one great algorithm to rule them all (not really)
#### Goals
1. Robustness: Given a bracket $[a,b]$, maintain bracket
1. Efficiency: Use superlinear convergent methods when possible
#### Options
- Methods requiring $f'(x)$
- NewtSafe (RootSafe, Numerical Recipes)
- Newton's Method within a bracket, Bisection otherwise
- Methods not requiring $f'(x)$
- Brent's Algorithm (zbrent, Numerical Recipes)
- Combination of bisection, secant and inverse quadratic interpolation
- `scipy.optimize` package **new** root_scalar
```python
from scipy.optimize import root_scalar
#root_scalar?
```
### Set the problem up (again)
```python
def f(r,A,m,P,n):
return A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
def f_prime(r,A,m,P,n):
return (-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) +
P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2)
A = 1.e6
m = 12
P = 1500.
n = 20.
```
Try Brent's method
```python
a = 0.07
b = 0.1
sol = root_scalar(f,args=(A,m,P,n), bracket=(a, b), method='brentq')
print(sol)
```
Try Newton's method
```python
sol = root_scalar(f,args=(A,m,P,n), x0=.07, fprime=f_prime, method='newton')
print(sol)
```
```python
# Try something else
```
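One natural thing to try (a sketch, assuming a `scipy` version new enough to provide `root_scalar`, i.e. 1.2 or later): the secant method only needs two starting points and no derivative.

```python
# Secant method through root_scalar: two starting points, no fprime required
sol = root_scalar(f, args=(A, m, P, n), x0=0.07, x1=0.08, method='secant')
print(sol)
```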
## Optimization (finding extrema)
I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.
A few approaches:
- Interpolation Algorithms: Repeated parabolic interpolation
- Bracketing Algorithms: Golden-Section Search (linear)
- Hybrid Algorithms
### Interpolation Approach
Successive parabolic interpolation - similar to secant method
Basic idea: Fit polynomial to function using three points, find its minima, and guess new points based on that minima
1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?
2. How do we construct the polynomial $p_2(x)$?
3. Once we have constructed $p_2(x)$ how would we find the minimum?
#### Algorithm
Given $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.
1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$
1. Loop
1. Evaluate function $f(x)$ at the three points
1. Find the quadratic polynomial that interpolates those points:
$$p(x) = p_0 x^2 + p_1 x + p_2$$
3. Calculate the minimum:
$$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$
1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$
1. Check tolerance
### Demo
```python
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
```
```python
x0, x1 = 0.5, 0.2
x = numpy.array([x0, x1, (x0 + x1)/2.])
p = numpy.polyfit(x, f(x), 2)
parabola = lambda t: p[0]*t**2 + p[1]*t + p[2]
t_min = -p[1]/2./p[0]
```
```python
MAX_STEPS = 100
TOLERANCE = 1e-4
t = numpy.linspace(0., 2., 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t), label='$f(t)$')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
axes.plot(x[2], f(x[2]), 'ko')
axes.plot(t, parabola(t), 'r--', label='parabola')
axes.plot(t_min, parabola(t_min), 'ro' )
axes.plot(t_min, f(t_min), 'k+')
axes.legend(loc='best')
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
```
### Rinse and repeat
```python
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([x0, x1, (x0 + x1) / 2.0])
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
```
#### Some Code
```python
def parabolic_interpolation(f, bracket, tol = 1.e-6):
""" uses repeated parabolic interpolation to refine a local minimum of a function f(x)
this routine uses numpy functions polyfit and polyval to fit and evaluate the quadratics
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x1] containing an initial bracket that contains a minimum
tolerance: float
Returns when relative error of last two iterates < tol
Returns:
--------
x: float
final estimate of the minima
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
x = numpy.zeros(3)
x[:2] = bracket
x[2] = (x[0] + x[1])/2.
x_array = [ x[2] ]
for k in range(1, MAX_STEPS + 1):
poly = numpy.polyfit(x, f(x), 2)
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
x_array.append(x[2])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x[2], numpy.array(x_array)
```
#### set up problem
```python
bracket = numpy.array([0.5, 0.2])
x, x_array = parabolic_interpolation(f, bracket, tol = 1.e-6)
print("Extremum f(x) = {}, at x = {}, N steps = {}".format(f(x), x, len(x_array)))
```
```python
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.plot(x_array, f(x_array),'ro')
axes.plot(x, f(x), 'go')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.grid()
plt.show()
```
### Bracketing Algorithm (Golden Section Search)
Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).
Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit trickier in this case. In particular choosing your initial bracket is important!
#### Bracket Picking
Say we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extrema exists in it. We then can pick this new bracket with the following rules:
- If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$.
- If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
```python
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
```
#### Picking Brackets and Points
Again say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separates $[x_0,x_3]$ into two new overlapping brackets.
Define: the length of the line segments in the interval
\begin{align}
    a &= x_1 - x_0, \\
    b &= x_2 - x_1,\\
    c &= x_3 - x_2 \\
\end{align}
and the total bracket length
\begin{align}
    d &= x_3 - x_0. \\
\end{align}
```python
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
```
For **Golden Section Search** we require two conditions:
- The two new possible brackets are of equal length. i.e $[x_0, x_2] = [x_1, x_3]$ or
$$
a + b = b + c
$$
or simply $a = c$
```python
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
```
- The ratio of segment lengths is the same for every level of recursion so the problem is self-similar i.e.
$$
\frac{b}{a} = \frac{c}{a + b}
$$
These two requirements will allow maximum reuse of previous points and require adding only one new point $x^*$ at each iteration.
```python
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes[i].plot([x[1], x[1]], [-0.2, f(x[1])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = phi * (x[1] - x[0]) + x[0]
#axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
#axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
#axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
#axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
#axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
#axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1], f(x_new[1]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[0].text(x_new[i], -0.5, i, color='g',fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) * phi + x[2]
#axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
#axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
#axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
#axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
#axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[2])], 'c--')
#axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2], f(x_new[2]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[1].text(x_new[i], -0.5, i, color='r',fontsize='15')
axes[0].set_title('Choose left bracket', fontsize=18)
axes[1].set_title('Choose right bracket', fontsize=18)
plt.show()
```
As the first rule implies that $a = c$, we can substitute into the second rule to yield
$$
\frac{b}{a} = \frac{a}{a + b}
$$
or inverting and rearranging
$$
\frac{a}{b} = 1 + \frac{b}{a}
$$
if we let the ratio $b/a = x$, then
$$
x + 1 = \frac{1}{x} \quad \text{or} \quad x^2 + x - 1 = 0
$$
$$
x^2 + x - 1 = 0
$$
has a single positive root for
$$
x = \frac{\sqrt{5} - 1}{2} = \varphi = 0.6180339887498949
$$
where $\varphi$ is related to the "golden ratio" (which in most definitions is $1+\varphi$, but either works here, since $1 + \varphi = 1/\varphi$)
Subsequent proportionality implies that the distances between the 4 points at one iteration is proportional to the next. We can now use all of our information to find the points $x_1$ and $x_2$ given any overall bracket $[x_0, x_3]$
Given $b/a = \varphi$, $a = c$, and the known width of the bracket $d$ it follows that
$$ d = a + b + c = (2 + \varphi)a $$
or
$$ a = \frac{d}{2 + \varphi} = \frac{\varphi}{1 + \varphi} d$$
by the rather special properties of $\varphi$.
We could use this result immediately to find
\begin{align}
x_1 &= x_0 + a \\
x_2 &= x_3 - a \\
\end{align}
Equivalently, you can show that
$$a + b = (1 + \varphi)a = \varphi d$$
so
\begin{align}
x_1 &= x_3 - \varphi d \\
x_2 &= x_0 + \varphi d \\
\end{align}
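A two-line numerical check of the self-similarity property (a sketch; the bracket $[0, 1]$ and the local names `x0, x1, x2, x3` are just for illustration): if we keep the left sub-bracket $[x_0, x_2]$, the old interior point $x_1$ lands exactly where the new right interior point should go, so only one new function evaluation is needed per iteration.

```python
# Sketch: verify that interior points are reused when the bracket shrinks
phi = (numpy.sqrt(5.0) - 1.0) / 2.0
x0, x3 = 0.0, 1.0
x1 = x3 - phi * (x3 - x0)
x2 = x0 + phi * (x3 - x0)

# new bracket [x0, x2]: its right interior point coincides with the old x1
print(x0 + phi * (x2 - x0), x1)
```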
```python
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
```
#### Algorithm
1. Initialize bracket $[x_0,x_3]$
1. Initialize points $x_1 = x_3 - \varphi (x_3 - x_0)$ and $x_2 = x_0 + \varphi (x_3 - x_0)$
1. Loop
1. Evaluate $f_1$ and $f_2$
1. If $f_1 < f_2$ then we pick the left interval for the next iteration
1. and otherwise pick the right interval
1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE`
1. calculate the appropriate new point $x^*$ ($x_1$ on left, $x_2$ on right)
```python
def golden_section(f, bracket, tol = 1.e-6):
""" uses golden section search to refine a local minimum of a function f(x)
the bracket is reduced by a factor of the golden ratio at every iteration until its width is smaller than tol
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x3] containing an initial bracket that contains a minimum
tol: float
Returns when | x3 - x0 | < tol
Returns:
--------
x: float
final estimate of the midpoint of the bracket
x_array: numpy array
history of midpoint of each bracket
Raises:
-------
ValueError:
If initial bracket is < tol or doesn't appear to have any interior points
that are less than the outer points
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [ bracket[0], None, None, bracket[1] ]
delta_x = x[3] - x[0]
x[1] = x[3] - phi * delta_x
x[2] = x[0] + phi * delta_x
# check for initial bracket
fx = f(numpy.array(x))
bracket_min = min(fx[0], fx[3])
if fx[1] > bracket_min and fx[2] > bracket_min:
raise ValueError("interval does not appear to include a minimum")
elif delta_x < tol:
raise ValueError("interval is already smaller than tol")
x_mid = (x[3] + x[0])/2.
x_array = [ x_mid ]
for k in range(1, MAX_STEPS + 1):
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
delta_x = x_new[3] - x_new[0]
x_new[1] = x_new[3] - phi * delta_x
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
delta_x = x_new[3] - x_new[0]
x_new[2] = x_new[0] + phi * delta_x
x = x_new
x_array.append((x[3] + x[0])/ 2.)
if numpy.abs(x[3] - x[0]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x_array[-1], numpy.array(x_array)
```
```python
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
```
```python
x, x_array = golden_section(f,[0.2, 0.5], 1.e-4)
print('t* = {}, f(t*) = {}, N steps = {}'.format(x, f(x), len(x_array)-1))
```
```python
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.grid()
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x_array, f(x_array),'ko')
axes.plot(x_array[0],f(x_array[0]),'ro')
axes.plot(x_array[-1],f(x_array[-1]),'go')
plt.show()
```
## Scipy Optimization
Scipy contains many routines for optimization. A convenient interface for minimizing functions of a single variable is `scipy.optimize.minimize_scalar`.
For unconstrained or constrained optimization of functions of more than one variable, see
`scipy.optimize.minimize`
```python
from scipy.optimize import minimize_scalar
#minimize_scalar?
```
### Try some different methods
```python
sol = minimize_scalar(f, bracket=(0.2, 0.25, 0.5), method='golden')
print(sol)
```
```python
sol = minimize_scalar(f, method='brent')
print(sol)
```
```python
sol = minimize_scalar(f, bounds=(0.,0.5), method='bounded')
print(sol)
```
Source: 05_root_finding_optimization.ipynb | mspieg/intro-numerical-methods | CC-BY-4.0
# Series
```python
import pandas as pd
from oeis.sequence import OEIS_Sequence
from matplotlib import pyplot as plt
```
```python
Sequence = OEIS_Sequence('A000290')  # example sequence id (assumed; the original defining cell is not shown)
plt.plot(Sequence.terms)
plt.title(Sequence.description)
plt.show()
```
```python
def formula_latex(k, floor=True):
latex = r"$$\left\lfloor\frac{n^2}{" + str(k) + r"}\right\rfloor$$"
if not floor:
latex = latex.replace("floor", "ceil")
return latex
```
```python
OEIS_URL = 'http://oeis.org/'  # base URL for sequence links (value taken from the rendered output below)
def oeis_md_link(id_):
    return f'[{id_}]({OEIS_URL}{id_})'
```
```python
SEQ_LIST = ['A000290', 'A007590', 'A000212',
'A002620', 'A118015', 'A056827',
'A056834', 'A130519', 'A056838',
'A056865']
```
```python
series_table = pd.DataFrame(columns= ['k',
'Secuencia',
'Fórmula',
'Descripción',
'Términos'])
MAX_TERMS = 15
for num, id_ in enumerate(SEQ_LIST):
Seq = OEIS_Sequence(id_)
series_table = series_table.append({'k': num + 1,
'Secuencia': oeis_md_link(id_),
'Fórmula': formula_latex(num + 1),
'Descripción': Seq.description,
'Términos': Seq.terms[:MAX_TERMS]
}, ignore_index=True)
```
```python
series_table
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>k</th>
<th>Secuencia</th>
<th>Fórmula</th>
<th>Descripción</th>
<th>Términos</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1</td>
<td>[A000290](http://oeis.org/A000290)</td>
<td>$$\left\lfloor\frac{n^2}{1}\right\rfloor$$</td>
<td>The squares: a(n) = n^2.</td>
<td>[0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121,...</td>
</tr>
<tr>
<th>1</th>
<td>2</td>
<td>[A007590](http://oeis.org/A007590)</td>
<td>$$\left\lfloor\frac{n^2}{2}\right\rfloor$$</td>
<td>a(n) = floor(n^2/2).</td>
<td>[0, 0, 2, 4, 8, 12, 18, 24, 32, 40, 50, 60, 72...</td>
</tr>
<tr>
<th>2</th>
<td>3</td>
<td>[A000212](http://oeis.org/A000212)</td>
<td>$$\left\lfloor\frac{n^2}{3}\right\rfloor$$</td>
<td>a(n) = floor(n^2/3).</td>
<td>[0, 0, 1, 3, 5, 8, 12, 16, 21, 27, 33, 40, 48,...</td>
</tr>
<tr>
<th>3</th>
<td>4</td>
<td>[A002620](http://oeis.org/A002620)</td>
<td>$$\left\lfloor\frac{n^2}{4}\right\rfloor$$</td>
<td>Quarter-squares: floor(n/2)*ceiling(n/2). Equi...</td>
<td>[0, 0, 1, 2, 4, 6, 9, 12, 16, 20, 25, 30, 36, ...</td>
</tr>
<tr>
<th>4</th>
<td>5</td>
<td>[A118015](http://oeis.org/A118015)</td>
<td>$$\left\lfloor\frac{n^2}{5}\right\rfloor$$</td>
<td>a(n) = floor(n^2/5).</td>
<td>[0, 0, 0, 1, 3, 5, 7, 9, 12, 16, 20, 24, 28, 3...</td>
</tr>
<tr>
<th>5</th>
<td>6</td>
<td>[A056827](http://oeis.org/A056827)</td>
<td>$$\left\lfloor\frac{n^2}{6}\right\rfloor$$</td>
<td>a(n) = floor(n^2/6).</td>
<td>[0, 0, 0, 1, 2, 4, 6, 8, 10, 13, 16, 20, 24, 2...</td>
</tr>
<tr>
<th>6</th>
<td>7</td>
<td>[A056834](http://oeis.org/A056834)</td>
<td>$$\left\lfloor\frac{n^2}{7}\right\rfloor$$</td>
<td>a(n) = floor(n^2/7).</td>
<td>[0, 0, 0, 1, 2, 3, 5, 7, 9, 11, 14, 17, 20, 24...</td>
</tr>
<tr>
<th>7</th>
<td>8</td>
<td>[A130519](http://oeis.org/A130519)</td>
<td>$$\left\lfloor\frac{n^2}{8}\right\rfloor$$</td>
<td>a(n) = Sum_{k=0..n} floor(k/4). (Partial sums ...</td>
<td>[0, 0, 0, 0, 1, 2, 3, 4, 6, 8, 10, 12, 15, 18,...</td>
</tr>
<tr>
<th>8</th>
<td>9</td>
<td>[A056838](http://oeis.org/A056838)</td>
<td>$$\left\lfloor\frac{n^2}{9}\right\rfloor$$</td>
<td>a(n) = floor(n^2/9).</td>
<td>[0, 0, 0, 1, 1, 2, 4, 5, 7, 9, 11, 13, 16, 18,...</td>
</tr>
<tr>
<th>9</th>
<td>10</td>
<td>[A056865](http://oeis.org/A056865)</td>
<td>$$\left\lfloor\frac{n^2}{10}\right\rfloor$$</td>
<td>a(n) = floor(n^2/10).</td>
<td>[0, 0, 0, 0, 1, 2, 3, 4, 6, 8, 10, 12, 14, 16,...</td>
</tr>
</tbody>
</table>
</div>
```python
# Markdown table to include in the chapter
print(series_table.to_markdown(index=False))
```
| k | Secuencia | Fórmula | Descripción | Términos |
|----:|:-----------------------------------|:--------------------------------------------|:----------------------------------------------------------------------|:--------------------------------------------------------------|
| 1 | [A000290](http://oeis.org/A000290) | $$\left\lfloor\frac{n^2}{1}\right\rfloor$$ | The squares: a(n) = n^2. | [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196] |
| 2 | [A007590](http://oeis.org/A007590) | $$\left\lfloor\frac{n^2}{2}\right\rfloor$$ | a(n) = floor(n^2/2). | [0, 0, 2, 4, 8, 12, 18, 24, 32, 40, 50, 60, 72, 84, 98] |
| 3 | [A000212](http://oeis.org/A000212) | $$\left\lfloor\frac{n^2}{3}\right\rfloor$$ | a(n) = floor(n^2/3). | [0, 0, 1, 3, 5, 8, 12, 16, 21, 27, 33, 40, 48, 56, 65] |
| 4 | [A002620](http://oeis.org/A002620) | $$\left\lfloor\frac{n^2}{4}\right\rfloor$$ | Quarter-squares: floor(n/2)*ceiling(n/2). Equivalently, floor(n^2/4). | [0, 0, 1, 2, 4, 6, 9, 12, 16, 20, 25, 30, 36, 42, 49] |
| 5 | [A118015](http://oeis.org/A118015) | $$\left\lfloor\frac{n^2}{5}\right\rfloor$$ | a(n) = floor(n^2/5). | [0, 0, 0, 1, 3, 5, 7, 9, 12, 16, 20, 24, 28, 33, 39] |
| 6 | [A056827](http://oeis.org/A056827) | $$\left\lfloor\frac{n^2}{6}\right\rfloor$$ | a(n) = floor(n^2/6). | [0, 0, 0, 1, 2, 4, 6, 8, 10, 13, 16, 20, 24, 28, 32] |
| 7 | [A056834](http://oeis.org/A056834) | $$\left\lfloor\frac{n^2}{7}\right\rfloor$$ | a(n) = floor(n^2/7). | [0, 0, 0, 1, 2, 3, 5, 7, 9, 11, 14, 17, 20, 24, 28] |
| 8 | [A130519](http://oeis.org/A130519) | $$\left\lfloor\frac{n^2}{8}\right\rfloor$$ | a(n) = Sum_{k=0..n} floor(k/4). (Partial sums of A002265.) | [0, 0, 0, 0, 1, 2, 3, 4, 6, 8, 10, 12, 15, 18, 21] |
| 9 | [A056838](http://oeis.org/A056838) | $$\left\lfloor\frac{n^2}{9}\right\rfloor$$ | a(n) = floor(n^2/9). | [0, 0, 0, 1, 1, 2, 4, 5, 7, 9, 11, 13, 16, 18, 21] |
| 10 | [A056865](http://oeis.org/A056865) | $$\left\lfloor\frac{n^2}{10}\right\rfloor$$ | a(n) = floor(n^2/10). | [0, 0, 0, 0, 1, 2, 3, 4, 6, 8, 10, 12, 14, 16, 19] |
```python
for k, seq in enumerate(SEQ_LIST):
lst = SEQ_LIST.copy()
del lst[k]
print(k, seq, lst)
```
0 A000290 ['A007590', 'A000212', 'A002620', 'A118015', 'A056827', 'A056834', 'A130519', 'A056838', 'A056865']
1 A007590 ['A000290', 'A000212', 'A002620', 'A118015', 'A056827', 'A056834', 'A130519', 'A056838', 'A056865']
2 A000212 ['A000290', 'A007590', 'A002620', 'A118015', 'A056827', 'A056834', 'A130519', 'A056838', 'A056865']
3 A002620 ['A000290', 'A007590', 'A000212', 'A118015', 'A056827', 'A056834', 'A130519', 'A056838', 'A056865']
4 A118015 ['A000290', 'A007590', 'A000212', 'A002620', 'A056827', 'A056834', 'A130519', 'A056838', 'A056865']
5 A056827 ['A000290', 'A007590', 'A000212', 'A002620', 'A118015', 'A056834', 'A130519', 'A056838', 'A056865']
6 A056834 ['A000290', 'A007590', 'A000212', 'A002620', 'A118015', 'A056827', 'A130519', 'A056838', 'A056865']
7 A130519 ['A000290', 'A007590', 'A000212', 'A002620', 'A118015', 'A056827', 'A056834', 'A056838', 'A056865']
8 A056838 ['A000290', 'A007590', 'A000212', 'A002620', 'A118015', 'A056827', 'A056834', 'A130519', 'A056865']
9 A056865 ['A000290', 'A007590', 'A000212', 'A002620', 'A118015', 'A056827', 'A056834', 'A130519', 'A056838']
```python
V = ['A000290', 'A007590', 'A000212', 'A002620', 'A118015', 'A056827', 'A056834', 'A130519', 'A056865']
```
```python
txt = ""
for x in V:
txt = txt + ", " + x
txt
```
', A000290, A007590, A000212, A002620, A118015, A056827, A056834, A130519, A056865'
```python
from math import floor
def f(n, k):
return floor((n ** 2) / k)
```
```python
lst = []
acc = 0
for n in range(20):
acc += f(n, 2)
lst.append(acc)
```
```python
print(lst)
```
[0, 0, 2, 6, 14, 26, 44, 68, 100, 140, 190, 250, 322, 406, 504, 616, 744, 888, 1050, 1230]
```python
lista = []
for n in range(20):
lista.append(floor((2 * (n ** 3) + 3 * (n ** 2) - 2 * n)/12))
```
```python
print(lista)
```
[0, 0, 2, 6, 14, 26, 44, 68, 100, 140, 190, 250, 322, 406, 504, 616, 744, 888, 1050, 1230]
```python
from sympy import Sum, symbols, simplify
```
```python
i, k, n = symbols('i k n', integer=True)
simplify(Sum((i ** 2) / k , (i, 1, n)).doit())
```
$\displaystyle \frac{n \left(2 n^{2} + 3 n + 1\right)}{6 k}$
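As a quick illustrative check (added here), the closed form agrees with a direct sum:
```python
# Spot-check the symbolic result numerically for one choice of n and k
n_val, k_val = 20, 3
direct = sum(i**2 / k_val for i in range(1, n_val + 1))
closed = n_val * (2 * n_val**2 + 3 * n_val + 1) / (6 * k_val)
print(direct, closed, abs(direct - closed) < 1e-9)
```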
```python
```
Source: code/01-Intro/oeis.ipynb | EnriquePH/Libro_Bestiario_Mates | CC0-1.0
---
author: Nathan Carter (ncarter@bentley.edu)
---
This answer assumes you have imported SymPy as follows.
```python
from sympy import * # load all math functions
init_printing( use_latex='mathjax' ) # use pretty math output
```
Sequences are typically written in terms of an independent variable $n$,
so we will tell SymPy to use $n$ as a symbol, then define our sequence
in terms of $n$.
We define a term of an example sequence as $a_n=\frac{1}{n+1}$, then
build a sequence from that term. The code `(n,0,oo)` means that $n$
starts counting at $n=0$ and goes on forever (with `oo` being the SymPy
notation for $\infty$).
```python
var( 'n' ) # use n as a symbol
a_n = 1 / ( n + 1 ) # formula for a term
seq = sequence( a_n, (n,0,oo) ) # build the sequence
seq
```
$\displaystyle \left[1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots\right]$
You can ask for specific terms in the sequence, or many terms in a row, as follows.
```python
seq[20]
```
$\displaystyle \frac{1}{21}$
```python
seq[:10]
```
$\displaystyle \left[ 1, \ \frac{1}{2}, \ \frac{1}{3}, \ \frac{1}{4}, \ \frac{1}{5}, \ \frac{1}{6}, \ \frac{1}{7}, \ \frac{1}{8}, \ \frac{1}{9}, \ \frac{1}{10}\right]$
You can compute the limit of a sequence,
$$ \lim_{n\to\infty} a_n. $$
```python
limit( a_n, n, oo )
```
$\displaystyle 0$
Source: database/tasks/How to define a mathematical sequence/Python, using SymPy.ipynb | nathancarter/how2data | MIT
# The standard deb model
The standard DEB model, in the energy formulation, contains four dynamic state variables: reserve energy $E$, structure volume $V$, maturity energy $E_M$ and reproduction buffer energy $E_R$:
\begin{eqnarray}
\frac{dE}{dt} &=& \dot{p}_A - \dot{p}_C\\
\frac{dV}{dt} &=& \frac{\dot{p}_G}{[E_G]}\\
\frac{dE_H}{dt} &=& \dot{p}_R (1 - H(E_H - E^p_H))\\
\frac{dE_R}{dt} &=& \dot{p}_R H(E_H - E^p_H)
\end{eqnarray}
In this coupled set of ODEs, four fluxes appear:
\begin{eqnarray}
\dot{p}_A &=& f(X)\{\dot{p}_{Am}\}V^{2/3}\\
\dot{p}_C &=& E \left( \frac{[E_G]\dot{v}V^{2/3} + \dot{p}_S}{\kappa E + [E_G]V} \right)\\
\dot{p}_G &=& \kappa\dot{p}_C - \dot{p}_S\\
\dot{p}_R &=& (1-\kappa)\dot{p}_C - \dot{p}_J\\
\end{eqnarray}
Here, two additional loss fluxes appear, somatic maintenance $\dot{p}_S$ and maturity maintenance $\dot{p}_J$:
\begin{eqnarray}
\dot{p}_S &=& [\dot{p}_M]V + \{\dot{p}_T\}V^{2/3}\\
\dot{p}_J &=& \dot{k}_JE_H
\end{eqnarray}
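For orientation, the four ODEs above can also be integrated directly with SciPy. The sketch below is not the `deb` package implementation used in this notebook; it simply transcribes the equations, assuming a constant food level `f` and the illustrative parameter values tabulated below.
```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (taken from the parameter tables in this notebook)
pars = dict(pAm=530.0, v=0.02, kappa=0.8, pM=18.0, pT=0.0,
            kJ=0.002, EG=4184.0, EpH=843.6, f=1.0)

def deb_rhs(t, y, p=pars):
    E, V, EH, ER = y
    pA = p['f'] * p['pAm'] * V**(2/3)                      # assimilation
    pS = p['pM'] * V + p['pT'] * V**(2/3)                  # somatic maintenance
    pC = E * (p['EG'] * p['v'] * V**(2/3) + pS) / (p['kappa'] * E + p['EG'] * V)
    pG = p['kappa'] * pC - pS                              # growth
    pJ = p['kJ'] * EH                                      # maturity maintenance
    pR = (1 - p['kappa']) * pC - pJ
    adult = EH >= p['EpH']                                 # Heaviside switch at puberty
    return [pA - pC, pG / p['EG'], 0.0 if adult else pR, pR if adult else 0.0]

sol = solve_ivp(deb_rhs, (0.0, 20000.0), [1000.0, 1.0, 0.0, 0.0], max_step=50.0)
```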
```python
%matplotlib inline
#Import packages we need
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#Import deb modules
import deb
import deb_compound_pars
import deb_aux
#Configure plotting
sns.set_context('notebook')
sns.set_style('white')
```
## The 12 primary parameters
The temperature parameters are also included here for now.
```python
deb.get_deb_params_pandas()
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Min</th>
<th>Max</th>
<th>Value</th>
<th>Dimension</th>
<th>Units</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th>Fm</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>6.500000</td>
<td>l L**-2 t**-1</td>
<td></td>
<td>Specific searching rate</td>
</tr>
<tr>
<th>kappaX</th>
<td>2.220446e-16</td>
<td>1.0</td>
<td>0.800000</td>
<td>-</td>
<td></td>
<td>Assimilation efficiency</td>
</tr>
<tr>
<th>pAm</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>530.000000</td>
<td>e L**-2 t**-1</td>
<td></td>
<td>max specific assimilation rate</td>
</tr>
<tr>
<th>v</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>0.020000</td>
<td>L t**-1</td>
<td>cm/d</td>
<td>Energy conductance</td>
</tr>
<tr>
<th>kappa</th>
<td>2.220446e-16</td>
<td>1.0</td>
<td>0.800000</td>
<td>-</td>
<td></td>
<td>Allocation fraction to soma</td>
</tr>
<tr>
<th>kappaR</th>
<td>2.220446e-16</td>
<td>1.0</td>
<td>0.950000</td>
<td>-</td>
<td></td>
<td>Reproduction efficiency</td>
</tr>
<tr>
<th>pM</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>18.000000</td>
<td>e L**-3 t**-1</td>
<td>J/d/cm**3</td>
<td>Volume-specific somatic maintenance cost</td>
</tr>
<tr>
<th>pT</th>
<td>0.000000e+00</td>
<td>NaN</td>
<td>0.000000</td>
<td>e L**-1 t**-1</td>
<td></td>
<td>Surface-specific somatic maintenance cost</td>
</tr>
<tr>
<th>kJ</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>0.002000</td>
<td>t**-1</td>
<td></td>
<td>Maturity maintenance rate coefficient</td>
</tr>
<tr>
<th>EG</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>4184.000000</td>
<td>e L**-3</td>
<td></td>
<td>Specific cost for structure</td>
</tr>
<tr>
<th>EbH</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>0.000001</td>
<td>e</td>
<td></td>
<td>Maturity at birth</td>
</tr>
<tr>
<th>EpH</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>843.600000</td>
<td>e</td>
<td></td>
<td>Maturity at puberty</td>
</tr>
<tr>
<th>TA</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>6000.000000</td>
<td>T</td>
<td>K</td>
<td>Arrhenius temperature</td>
</tr>
<tr>
<th>Ts</th>
<td>2.220446e-16</td>
<td>NaN</td>
<td>293.100000</td>
<td>T</td>
<td>K</td>
<td>Reference temperature</td>
</tr>
</tbody>
</table>
</div>
## Forward solve test
Solve the standard DEB model equations (four state variables)
```python
#Instantiate DEB model with constant food level (must be a function though)
f = lambda t: 1.0
dm = deb.DEBStandard({'f': f})
#Integrate the DEB equations, 20000 days
y0 = [1000, 1, 0, 0] # Initial state
t = np.linspace(0, 20000, 100) # Time points
dm.predict(y0, t) # Predict
#Plot the state variable dynamics. The dashed line indicates the implied maximum structural length.
fig = dm.plot_state()
```
## Real-world predictions with auxiliary data and equations
Organism physical length [cm]
\begin{equation}
L_w = \frac{L}{\delta_M} = \frac{\sqrt[3]{V}}{\delta_M}
\end{equation}
Organism total (dry) weight [g] is the sum of reserve, structure (and reproduction buffer) weights:
\begin{equation}
W_W = w_VM_V + w_EM_E
\end{equation}
Organism physical volume
\begin{equation}
V_w = V + (E + E_R)\frac{w_E}{d_E\bar{\mu}_E}
\end{equation}
### Supporting parameter relationships
See pp. 81-83 in DEB3.
Structural mass [mol]:
\begin{equation}
M_V = [M_V]V = \frac{d_V}{w_V}V
\end{equation}
Reserve mass [mol]:
\begin{equation}
M_E = \frac{E}{\bar{\mu}_E}
\end{equation}
Number of C-atoms per unit of structural volume [mol/cm$^3$]
\begin{equation}
[M_V] = \frac{d_V}{w_V}
\end{equation}
#### Food function
Functional (food) response:
\begin{equation}
f(X) = \frac{X}{K+X}
\end{equation}
Half saturation coefficient
\begin{equation}
K = \frac{\{\dot{J}_{XAm}\}}{\{\dot{F}_m \}} = \frac{\{\dot{p}_{Am} \}}{\kappa_X\{\dot{F}_m \}\bar{\mu}_X}
\end{equation}
### Parameter values that must be specified
These equations require the specification of the following scalar parameters:
| Description | Unit | Symbol | Typical value |
|----------------------------------------|:-------:|:-------------:|:-------------------:|
| Specific chemical potential of reserve | J/mol | $\bar{\mu}_E$ | 550 000 (addchem.m) |
| Specific chemical potential of food | J/mol | $\bar{\mu}_X$ | 525 000 (addchem.m) |
| C-molar weight of water-free structure | g/mol | $w_V$ | 24.6 (DEB3 ex) |
| C-molar weight of water-free reserve | g/mol | $w_E$ | 23.0 (get_pars2.m) |
| Specific density of dry structure | g/cm$^3$| $d_V$ | 0.1 |
| Specific density of reserve | g/cm$^3$| $d_E$ | dV (addchem.m) |
| Shape factor | - | $\delta_M$ | 0.9 (made up) |
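As a small illustration (added here), the functional response defined above can be evaluated directly from the tabulated values:
```python
# Half-saturation coefficient and scaled functional response; the numbers are the
# "typical" values from the parameter tables in this notebook
Fm, kappaX, pAm, muX = 6.5, 0.8, 530.0, 525000.0
K = pAm / (kappaX * Fm * muX)      # half-saturation coefficient
f_resp = lambda X: X / (K + X)     # f(X) in [0, 1)
print(K, f_resp(K))                # f equals 0.5 at X = K
```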
```python
aux = deb_aux.AuxPars(dm)
fig = aux.plot_observables()
plt.tight_layout()
```
## Implied compound parameters
```python
deb_compound_pars.calculate_compound_pars(dm.params)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Min</th>
<th>Max</th>
<th>Value</th>
<th>Dimension</th>
<th>Unit</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th>Em</th>
<td>0.0</td>
<td>NaN</td>
<td>26500.000000</td>
<td>e L**-3</td>
<td>J/m</td>
<td>Max reserve density</td>
</tr>
<tr>
<th>g</th>
<td>0.0</td>
<td>NaN</td>
<td>0.197358</td>
<td>-</td>
<td>-</td>
<td>Energy investment ration</td>
</tr>
<tr>
<th>Lm</th>
<td>0.0</td>
<td>NaN</td>
<td>23.555556</td>
<td>l</td>
<td>m</td>
<td>Maximum structural length</td>
</tr>
<tr>
<th>kM</th>
<td>0.0</td>
<td>NaN</td>
<td>0.004302</td>
<td>t**-1</td>
<td>1/d</td>
<td>Somatic maintenance rate</td>
</tr>
<tr>
<th>kappaG</th>
<td>0.0</td>
<td>NaN</td>
<td>0.800000</td>
<td>-</td>
<td>-</td>
<td>Fraction of growth energy fixed in structure</td>
</tr>
</tbody>
</table>
</div>
# Testing food functions
```python
#Periodic food function
flist = [lambda t: 1/2*(1+np.sin(t/2000.)**2),
lambda t: 0*t + 1.0]
fig, ax = plt.subplots(2, 2, figsize=(8, 8))
ax = ax.flatten()
ax2 = ax[0].twinx()
for f in flist:
dm = deb.DEBStandard({'f': f})
y0 = [0, 1, 0, 0]
t = np.linspace(0, 20000, 100)
dm.predict(y0, t)
fig = dm.plot_state(fig=fig)
ax2.plot(t, [f(ti) for ti in t], '--')
fig2, ax = plt.subplots(1, 3, figsize=(12, 4))
aux = deb_aux.AuxPars(dm)
_ = aux.plot_observables(fig=fig2)
fig2.tight_layout()
ax2.set_ylim(-1, 1.5)
fig.tight_layout()
```
Source: my-first-deb.ipynb | nepstad/pydebtest | MIT
```python
import sympy as sym
import numpy as np
```
```python
def rotationGlobalX(alpha):
return np.array([[1,0,0],[0,np.cos(alpha),-np.sin(alpha)],[0,np.sin(alpha),np.cos(alpha)]])
def rotationGlobalY(beta):
return np.array([[np.cos(beta),0,np.sin(beta)], [0,1,0],[-np.sin(beta),0,np.cos(beta)]])
def rotationGlobalZ(gamma):
return np.array([[np.cos(gamma),-np.sin(gamma),0],[np.sin(gamma),np.cos(gamma),0],[0,0,1]])
def rotationLocalX(alpha):
return np.array([[1,0,0],[0,np.cos(alpha),np.sin(alpha)],[0,-np.sin(alpha),np.cos(alpha)]])
def rotationLocalY(beta):
return np.array([[np.cos(beta),0,-np.sin(beta)], [0,1,0],[np.sin(beta),0,np.cos(beta)]])
def rotationLocalZ(gamma):
return np.array([[np.cos(gamma),np.sin(gamma),0],[-np.sin(gamma),np.cos(gamma),0],[0,0,1]])
```
```python
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
plt.rcParams['figure.figsize']=10,10
coefs = (1, 3, 15) # Coefficients in a0/c x**2 + a1/c y**2 + a2/c z**2 = 1
# Radii corresponding to the coefficients:
rx, ry, rz = 1/np.sqrt(coefs)
# Set of all spherical angles:
u = np.linspace(0, 2 * np.pi, 30)
v = np.linspace(0, np.pi, 30)
# Cartesian coordinates that correspond to the spherical angles:
# (this is the equation of an ellipsoid):
x = rx * np.outer(np.cos(u), np.sin(v))
y = ry * np.outer(np.sin(u), np.sin(v))
z = rz * np.outer(np.ones_like(u), np.cos(v))
fig = plt.figure(figsize=plt.figaspect(1)) # Square figure
ax = fig.add_subplot(111, projection='3d')
xr = np.reshape(x, (1,-1))
yr = np.reshape(y, (1,-1))
zr = np.reshape(z, (1,-1))
RX = rotationGlobalX(np.pi/3)
RY = rotationGlobalY(np.pi/3)
RZ = rotationGlobalZ(np.pi/3)
Rx = rotationLocalX(np.pi/3)
Ry = rotationLocalY(np.pi/3)
Rz = rotationLocalZ(np.pi/3)
rRotx = RZ@RY@RX@np.vstack((xr,yr,zr))
print(np.shape(rRotx))
# Plot:
ax.plot_surface(np.reshape(rRotx[0,:],(30,30)), np.reshape(rRotx[1,:],(30,30)),
np.reshape(rRotx[2,:],(30,30)), rstride=4, cstride=4, color='b')
# Adjustment of the axes, so that they all have the same span:
max_radius = max(rx, ry, rz)
for axis in 'xyz':
getattr(ax, 'set_{}lim'.format(axis))((-max_radius, max_radius))
plt.show()
```
<IPython.core.display.Javascript object>
(3, 900)
```python
coefs = (1, 3, 15) # Coefficients in a0/c x**2 + a1/c y**2 + a2/c z**2 = 1
# Radii corresponding to the coefficients:
rx, ry, rz = 1/np.sqrt(coefs)
# Set of all spherical angles:
u = np.linspace(0, 2 * np.pi, 30)
v = np.linspace(0, np.pi, 30)
# Cartesian coordinates that correspond to the spherical angles:
# (this is the equation of an ellipsoid):
x = rx * np.outer(np.cos(u), np.sin(v))
y = ry * np.outer(np.sin(u), np.sin(v))
z = rz * np.outer(np.ones_like(u), np.cos(v))
fig = plt.figure(figsize=plt.figaspect(1)) # Square figure
ax = fig.add_subplot(111, projection='3d')
xr = np.reshape(x, (1,-1))
yr = np.reshape(y, (1,-1))
zr = np.reshape(z, (1,-1))
RX = rotationGlobalX(np.pi/3)
RY = rotationGlobalY(np.pi/3)
RZ = rotationGlobalZ(np.pi/3)
Rx = rotationLocalX(np.pi/3)
Ry = rotationLocalY(np.pi/3)
Rz = rotationLocalZ(np.pi/3)
rRotx = RY@RX@np.vstack((xr,yr,zr))
print(np.shape(rRotx))
# Plot:
ax.plot_surface(np.reshape(rRotx[0,:],(30,30)), np.reshape(rRotx[1,:],(30,30)),
np.reshape(rRotx[2,:],(30,30)), rstride=4, cstride=4, color='b')
# Adjustment of the axes, so that they all have the same span:
max_radius = max(rx, ry, rz)
for axis in 'xyz':
getattr(ax, 'set_{}lim'.format(axis))((-max_radius, max_radius))
plt.show()
```
<IPython.core.display.Javascript object>
(3, 900)
```python
np.sin(np.arccos(0.7))
```
0.71414284285428498
```python
print(RZ@RY@RX)
```
[[ 0.25 -0.0580127 0.96650635]
[ 0.4330127 0.89951905 -0.0580127 ]
[-0.8660254 0.4330127 0.25 ]]
```python
import sympy as sym
sym.init_printing()
```
```python
a,b,g = sym.symbols('alpha, beta, gamma')
```
```python
RX = sym.Matrix([[1,0,0],[0,sym.cos(a),-sym.sin(a)],[0,sym.sin(a),sym.cos(a)]])
RY = sym.Matrix([[sym.cos(b),0,sym.sin(b)],[0,1,0],[-sym.sin(b),0,sym.cos(b)]])
RZ = sym.Matrix([[sym.cos(g),-sym.sin(g),0],[sym.sin(g),sym.cos(g),0],[0,0,1]])
RX,RY,RZ
```
```python
R = RZ@RY@RX
R
```
```python
mm = np.array([2.71, 10.22, 26.52])
lm = np.array([2.92, 10.10, 18.85])
fh = np.array([5.05, 41.90, 15.41])
mc = np.array([8.29, 41.88, 26.52])
ajc = (mm + lm)/2
kjc = (fh + mc)/2
```
```python
i = np.array([1,0,0])
j = np.array([0,1,0])
k = np.array([0,0,1])
# Build an orthonormal basis (local coordinate system) by Gram-Schmidt orthogonalization:
v1 = kjc - ajc                                     # first axis: from ajc to kjc
v1 = v1 / np.sqrt(v1[0]**2+v1[1]**2+v1[2]**2)      # normalize
v2 = (mm-lm) - ((mm-lm)@v1)*v1                     # (mm - lm) with its component along v1 removed
v2 = v2/ np.sqrt(v2[0]**2+v2[1]**2+v2[2]**2)
v3 = k - (k@v1)*v1 - (k@v2)*v2                     # k orthogonalized against v1 and v2
v3 = v3/ np.sqrt(v3[0]**2+v3[1]**2+v3[2]**2)
```
```python
v1
```
array([ 0.12043275, 0.99126617, -0.05373394])
```python
R = np.array([v1,v2,v3])
RGlobal = R.T
RGlobal
```
array([[ 0.12043275, -0.02238689, 0.99246903],
[ 0.99126617, 0.05682604, -0.11900497],
[-0.05373394, 0.99813307, 0.02903508]])
```python
alpha = np.arctan2(RGlobal[2,1],RGlobal[2,2])*180/np.pi
alpha
```
```python
beta = np.arctan2(-RGlobal[2,0],np.sqrt(RGlobal[2,1]**2+RGlobal[2,2]**2))*180/np.pi
beta
```
```python
gamma = np.arctan2(RGlobal[1,0],RGlobal[0,0])*180/np.pi
gamma
```
```python
R2 = np.array([[0, 0.71, 0.7],[0,0.7,-0.71],[-1,0,0]])
R2
```
array([[ 0. , 0.71, 0.7 ],
[ 0. , 0.7 , -0.71],
[-1. , 0. , 0. ]])
```python
alpha = np.arctan2(R[2,1],R[2,2])*180/np.pi
alpha
```
```python
gamma = np.arctan2(R[1,0],R[0,0])*180/np.pi
gamma
```
```python
beta = np.arctan2(-R[2,0],np.sqrt(R[2,1]**2+R[2,2]**2))*180/np.pi
beta
```
```python
R = RY@RZ@RX
R
```
```python
alpha = np.arctan2(-R2[1,2],R2[1,1])*180/np.pi
alpha
```
```python
gamma = 0
```
```python
beta = 90
```
```python
import sympy as sym
```
```python
sym.init_printing()
```
```python
a,b,g = sym.symbols('alpha, beta, gamma')
```
```python
RX = sym.Matrix([[1,0,0],[0,sym.cos(a), -sym.sin(a)],[0,sym.sin(a), sym.cos(a)]])
RY = sym.Matrix([[sym.cos(b),0, sym.sin(b)],[0,1,0],[-sym.sin(b),0, sym.cos(b)]])
RZ = sym.Matrix([[sym.cos(g), -sym.sin(g), 0],[sym.sin(g), sym.cos(g),0],[0,0,1]])
RX,RY,RZ
```
```python
```
```python
RXYZ = RZ*RY*RX
RXYZ
```
```python
RZXY = RZ*RX*RY
RZXY
```
```python
```
Source: notebooks/elipsoid3DRotMatrix1.ipynb | tallesmedeiros/BMC | CC-BY-4.0
This file is part of the pyMOR project (http://www.pymor.org).
Copyright 2013-2020 pyMOR developers and contributors. All rights reserved.
License: BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause)
# Heat equation example
## Analytic problem formulation
We consider the heat equation on the segment $[0, 1]$, with dissipation on both sides, heating (input) $u$ on the left, and measurement (output) $\tilde{y}$ on the right:
$$
\begin{align*}
\partial_t T(z, t) & = \partial_{zz} T(z, t), & 0 < z < 1,\ t > 0, \\
\partial_z T(0, t) & = T(0, t) - u(t), & t > 0, \\
\partial_z T(1, t) & = -T(1, t), & t > 0, \\
\tilde{y}(t) & = T(1, t), & t > 0.
\end{align*}
$$
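For intuition about where the LTI system used below comes from, a simple centered finite-difference semi-discretization of this boundary value problem (not the finite element discretization pyMOR builds below) already yields matrices $A$, $B$, $C$ with $\dot{x} = Ax + Bu$, $\tilde{y} = Cx$; a minimal sketch:
```python
import numpy as np

n = 100                      # number of grid intervals on [0, 1]
h = 1.0 / n
N = n + 1                    # grid points z_0, ..., z_n

A = np.zeros((N, N))
B = np.zeros((N, 1))
C = np.zeros((1, N))

for i in range(1, n):        # interior nodes: standard second difference
    A[i, i - 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2
    A[i, i + 1] = 1.0 / h**2

# Robin boundary at z = 0 (ghost node eliminated using T_z(0) = T(0) - u)
A[0, 0] = (-2.0 - 2.0 * h) / h**2
A[0, 1] = 2.0 / h**2
B[0, 0] = 2.0 / h            # heating enters through the left boundary

# Robin boundary at z = 1 (T_z(1) = -T(1))
A[n, n - 1] = 2.0 / h**2
A[n, n] = (-2.0 - 2.0 * h) / h**2

C[0, n] = 1.0                # measurement: temperature at the right end
```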
## Import modules
```python
import numpy as np
import scipy.linalg as spla
import scipy.integrate as spint
import matplotlib.pyplot as plt
from pymor.basic import *
from pymor.core.config import config
from pymor.reductors.h2 import OneSidedIRKAReductor
from pymor.core.logger import set_log_levels
set_log_levels({'pymor.algorithms.gram_schmidt.gram_schmidt': 'WARNING'})
```
## Assemble LTIModel
### Discretize problem
```python
p = InstationaryProblem(
StationaryProblem(
domain=LineDomain([0.,1.], left='robin', right='robin'),
diffusion=ConstantFunction(1., 1),
robin_data=(ConstantFunction(1., 1), ExpressionFunction('(x[...,0] < 1e-10) * 1.', 1)),
outputs=(('l2_boundary', ExpressionFunction('(x[...,0] > (1 - 1e-10)) * 1.', 1)),)
),
ConstantFunction(0., 1),
T=3.
)
fom, _ = discretize_instationary_cg(p, diameter=1/100, nt=100)
print(fom)
```
### Visualize solution for constant input of 1
```python
fom.visualize(fom.solve())
```
### Convert to LTIModel
```python
lti = fom.to_lti()
print(lti)
```
## System analysis
```python
poles = lti.poles()
fig, ax = plt.subplots()
ax.plot(poles.real, poles.imag, '.')
ax.set_title('System poles')
plt.show()
```
```python
w = np.logspace(-2, 3, 100)
fig, ax = plt.subplots()
lti.mag_plot(w, ax=ax)
ax.set_title('Magnitude plot of the full model')
plt.show()
```
```python
hsv = lti.hsv()
fig, ax = plt.subplots()
ax.semilogy(range(1, len(hsv) + 1), hsv, '.-')
ax.set_title('Hankel singular values')
plt.show()
```
```python
print(f'FOM H_2-norm: {lti.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'FOM H_inf-norm: {lti.hinf_norm():e}')
print(f'FOM Hankel-norm: {lti.hankel_norm():e}')
```
## Balanced Truncation (BT)
```python
r = 5
bt_reductor = BTReductor(lti)
rom_bt = bt_reductor.reduce(r, tol=1e-5)
```
```python
err_bt = lti - rom_bt
print(f'BT relative H_2-error: {err_bt.h2_norm() / lti.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'BT relative H_inf-error: {err_bt.hinf_norm() / lti.hinf_norm():e}')
print(f'BT relative Hankel-error: {err_bt.hankel_norm() / lti.hankel_norm():e}')
```
```python
poles = rom_bt.poles()
fig, ax = plt.subplots()
ax.plot(poles.real, poles.imag, '.')
ax.set_title('Poles of the BT reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
lti.mag_plot(w, ax=ax)
rom_bt.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Magnitude plot of the full and BT reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
err_bt.mag_plot(w, ax=ax)
ax.set_title('Magnitude plot of the BT error system')
plt.show()
```
## LQG Balanced Truncation (LQGBT)
```python
r = 5
lqgbt_reductor = LQGBTReductor(lti)
rom_lqgbt = lqgbt_reductor.reduce(r, tol=1e-5)
```
```python
err_lqgbt = lti - rom_lqgbt
print(f'LQGBT relative H_2-error: {err_lqgbt.h2_norm() / lti.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'LQGBT relative H_inf-error: {err_lqgbt.hinf_norm() / lti.hinf_norm():e}')
print(f'LQGBT relative Hankel-error: {err_lqgbt.hankel_norm() / lti.hankel_norm():e}')
```
```python
poles = rom_lqgbt.poles()
fig, ax = plt.subplots()
ax.plot(poles.real, poles.imag, '.')
ax.set_title('Poles of the LQGBT reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
lti.mag_plot(w, ax=ax)
rom_lqgbt.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Magnitude plot of the full and LQGBT reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
err_lqgbt.mag_plot(w, ax=ax)
ax.set_title('Magnitude plot of the LQGBT error system')
plt.show()
```
## Bounded Real Balanced Truncation (BRBT)
```python
r = 5
brbt_reductor = BRBTReductor(lti, 0.34)
rom_brbt = brbt_reductor.reduce(r, tol=1e-5)
```
```python
err_brbt = lti - rom_brbt
print(f'BRBT relative H_2-error: {err_brbt.h2_norm() / lti.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'BRBT relative H_inf-error: {err_brbt.hinf_norm() / lti.hinf_norm():e}')
print(f'BRBT relative Hankel-error: {err_brbt.hankel_norm() / lti.hankel_norm():e}')
```
```python
poles = rom_brbt.poles()
fig, ax = plt.subplots()
ax.plot(poles.real, poles.imag, '.')
ax.set_title('Poles of the BRBT reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
lti.mag_plot(w, ax=ax)
rom_brbt.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Magnitude plot of the full and BRBT reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
err_brbt.mag_plot(w, ax=ax)
ax.set_title('Magnitude plot of the BRBT error system')
plt.show()
```
## Iterative Rational Krylov Algorithm (IRKA)
```python
r = 5
irka_reductor = IRKAReductor(lti)
rom_irka = irka_reductor.reduce(r)
```
```python
fig, ax = plt.subplots()
ax.semilogy(irka_reductor.conv_crit, '.-')
ax.set_title('Distances between shifts in IRKA iterations')
plt.show()
```
```python
err_irka = lti - rom_irka
print(f'IRKA relative H_2-error: {err_irka.h2_norm() / lti.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'IRKA relative H_inf-error: {err_irka.hinf_norm() / lti.hinf_norm():e}')
print(f'IRKA relative Hankel-error: {err_irka.hankel_norm() / lti.hankel_norm():e}')
```
```python
poles = rom_irka.poles()
fig, ax = plt.subplots()
ax.plot(poles.real, poles.imag, '.')
ax.set_title('Poles of the IRKA reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
lti.mag_plot(w, ax=ax)
rom_irka.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Magnitude plot of the full and IRKA reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
err_irka.mag_plot(w, ax=ax)
ax.set_title('Magnitude plot of the IRKA error system')
plt.show()
```
## Two-Sided Iteration Algorithm (TSIA)
```python
r = 5
tsia_reductor = TSIAReductor(lti)
rom_tsia = tsia_reductor.reduce(r)
```
```python
fig, ax = plt.subplots()
ax.semilogy(tsia_reductor.conv_crit, '.-')
ax.set_title('Distances between shifts in TSIA iterations')
plt.show()
```
```python
err_tsia = lti - rom_tsia
print(f'TSIA relative H_2-error: {err_tsia.h2_norm() / lti.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'TSIA relative H_inf-error: {err_tsia.hinf_norm() / lti.hinf_norm():e}')
print(f'TSIA relative Hankel-error: {err_tsia.hankel_norm() / lti.hankel_norm():e}')
```
```python
poles = rom_tsia.poles()
fig, ax = plt.subplots()
ax.plot(poles.real, poles.imag, '.')
ax.set_title('Poles of the TSIA reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
lti.mag_plot(w, ax=ax)
rom_tsia.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Magnitude plot of the full and TSIA reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
err_tsia.mag_plot(w, ax=ax)
ax.set_title('Magnitude plot of the TSIA error system')
plt.show()
```
## One-Sided IRKA
```python
r = 5
one_sided_irka_reductor = OneSidedIRKAReductor(lti, 'V')
rom_one_sided_irka = one_sided_irka_reductor.reduce(r)
```
```python
fig, ax = plt.subplots()
ax.semilogy(one_sided_irka_reductor.conv_crit, '.-')
ax.set_title('Distances between shifts in one-sided IRKA iterations')
plt.show()
```
```python
fig, ax = plt.subplots()
osirka_poles = rom_one_sided_irka.poles()
ax.plot(osirka_poles.real, osirka_poles.imag, '.')
ax.set_title('Poles of the one-sided IRKA ROM')
plt.show()
```
```python
err_one_sided_irka = lti - rom_one_sided_irka
print(f'One-sided IRKA relative H_2-error: {err_one_sided_irka.h2_norm() / lti.h2_norm():e}')
if config.HAVE_SLYCOT:
print(f'One-sided IRKA relative H_inf-error: {err_one_sided_irka.hinf_norm() / lti.hinf_norm():e}')
print(f'One-sided IRKA relative Hankel-error: {err_one_sided_irka.hankel_norm() / lti.hankel_norm():e}')
```
```python
fig, ax = plt.subplots()
lti.mag_plot(w, ax=ax)
rom_one_sided_irka.mag_plot(w, ax=ax, linestyle='dashed')
ax.set_title('Magnitude plot of the full and one-sided IRKA reduced model')
plt.show()
```
```python
fig, ax = plt.subplots()
err_one_sided_irka.mag_plot(w, ax=ax)
ax.set_title('Magnitude plot of the one-sided IRKA error system')
plt.show()
```
## Transfer Function IRKA (TF-IRKA)
Applying Laplace transformation to the original PDE formulation, we obtain a parametric boundary value problem
$$
\begin{align*}
s \hat{T}(z, s) & = \partial_{zz} \hat{T}(z, s), \\
\partial_z \hat{T}(0, s) & = \hat{T}(0, s) - \hat{u}(s), \\
\partial_z \hat{T}(1, s) & = -\hat{T}(1, s), \\
\hat{\tilde{y}}(s) & = \hat{T}(1, s),
\end{align*}
$$
where $\hat{T}$, $\hat{u}$, and $\hat{\tilde{y}}$ are respectively Laplace transforms of $T$, $u$, and $\tilde{y}$.
We assumed the initial condition to be zero ($T(z, 0) = 0$).
The parameter $s$ is any complex number in the region of convergence of the Laplace transformation.
Inserting $\hat{T}(z, s) = c_1 \exp\left(\sqrt{s} z\right) + c_2 \exp\left(-\sqrt{s} z\right)$, from the boundary conditions we get a system of equations
$$
\begin{align*}
\left(\sqrt{s} - 1\right) c_1 - \left(\sqrt{s} + 1\right) c_2 + \hat{u}(s) & = 0, \\
\left(\sqrt{s} + 1\right) \exp\left(\sqrt{s}\right) c_1 - \left(\sqrt{s} - 1\right) \exp\left(-\sqrt{s}\right) c_2 & = 0.
\end{align*}
$$
We can solve it using `sympy` and then find the transfer function ($\hat{\tilde{y}}(s) / \hat{u}(s)$).
```python
import sympy as sy
sy.init_printing()
sy_s, sy_u, sy_c1, sy_c2 = sy.symbols('s u c1 c2')
sol = sy.solve([(sy.sqrt(sy_s) - 1) * sy_c1 - (sy.sqrt(sy_s) + 1) * sy_c2 + sy_u,
(sy.sqrt(sy_s) + 1) * sy.exp(sy.sqrt(sy_s)) * sy_c1 -
(sy.sqrt(sy_s) - 1) * sy.exp(-sy.sqrt(sy_s)) * sy_c2],
[sy_c1, sy_c2])
y = sol[sy_c1] * sy.exp(sy.sqrt(sy_s)) + sol[sy_c2] * sy.exp(-sy.sqrt(sy_s))
sy_tf = sy.simplify(y / sy_u)
sy_tf
```
Notice that for $s = 0$, the expression is of the form $0 / 0$.
```python
sy.limit(sy_tf, sy_s, 0)
```
```python
sy_dtf = sy_tf.diff(sy_s)
sy_dtf
```
```python
sy.limit(sy_dtf, sy_s, 0)
```
We can now form the transfer function system.
```python
def H(s):
if s == 0:
return np.array([[1 / 3]])
else:
return np.array([[complex(sy_tf.subs(sy_s, s))]])
def dH(s):
if s == 0:
return np.array([[-13 / 54]])
else:
return np.array([[complex(sy_dtf.subs(sy_s, s))]])
tf = TransferFunction(lti.input_space, lti.output_space, H, dH)
print(tf)
```
Here we compare it to the discretized system, by magnitude plot, $\mathcal{H}_2$-norm, and $\mathcal{H}_2$-distance.
```python
tf_lti_diff = tf - lti
fig, ax = plt.subplots()
tf_lti_diff.mag_plot(w, ax=ax)
ax.set_title('Distance between PDE and discretized transfer function')
plt.show()
```
```python
print(f'TF H_2-norm = {tf.h2_norm():e}')
print(f'LTI H_2-norm = {lti.h2_norm():e}')
```
```python
print(f'TF-LTI relative H_2-distance = {tf_lti_diff.h2_norm() / tf.h2_norm():e}')
```
TF-IRKA finds a reduced model from the transfer function.
```python
tf_irka_reductor = TFIRKAReductor(tf)
rom_tf_irka = tf_irka_reductor.reduce(r)
```
```python
fig, ax = plt.subplots()
tfirka_poles = rom_tf_irka.poles()
ax.plot(tfirka_poles.real, tfirka_poles.imag, '.')
ax.set_title('Poles of the TF-IRKA ROM')
plt.show()
```
Here we compute the $\mathcal{H}_2$-distance from the original PDE model to the TF-IRKA's reduced model and to the IRKA's reduced model.
```python
err_tf_irka = tf - rom_tf_irka
print(f'TF-IRKA relative H_2-error = {err_tf_irka.h2_norm() / tf.h2_norm():e}')
```
```python
err_irka_tf = tf - rom_irka
print(f'IRKA relative H_2-error (from TF) = {err_irka_tf.h2_norm() / tf.h2_norm():e}')
```
Source: notebooks/heat.ipynb | weslowrie/pymor | Unlicense
# Composition
A deep neural network is simply a composition of (parametrized) processing nodes. Composing two nodes $g$ and $f$ gives yet another node $h = f \cdot g$, or $h(x) = f(g(x))$. We can also evaluate two nodes in parallel and express the result as the concatenation of the two outputs, $h(x) = (f(x), g(x))$. As such we can also view a deep neural network as a single processing node where we have collected all the inputs together (into a single input) and collected all the outputs together (into a single output). This is how deep learning frameworks, such as PyTorch, process data through neural networks.
This tutorial explores the idea of composition using the `ddn.basic` package. Each processing node in the package is assumed to take a single (vector) input and produce a single (vector) output as presented in the ["Deep Declarative Networks: A New Hope"](https://arxiv.org/abs/1909.04866) paper, so we have to merge and split vectors as we process data through the network.
```python
%matplotlib inline
```
## Example: Matching means
We will develop an example of modifying two vector inputs so that their means match. Our network first computes the mean of each vector, then computes their square difference. Back-propagating to reduce the square difference will modify the vectors such that their means are equal. The network can be visualized as
```
.------.
.------. | |
x_1 ---| mean |--- mu_1 ---| |
'------' | | .---------.
| diff |--- (mu_1 - mu_2) ---| 1/2 sqr |--- y
.------. | | '---------'
x_2 ---| mean |--- mu_2 ---| |
'------' | |
'------'
```
Viewing the network as a single node we have
```
.-------------------------------------------------------------------------.
| .------. |
| .------. | | |
| x_1 ---| mean |--- mu_1 ---| | |
| / '------' | | .---------. |
x ---|-< | diff |--- (mu_1 - mu_2) ---| 1/2 sqr |---|--- y
| \ .------. | | '---------' |
| x_2 ---| mean |--- mu_2 ---| | |
| '------' | | |
| '------' |
.-------------------------------------------------------------------------'
```
Note here each of $x_1$ and $x_2$ is an $n$-dimensional vector. So $x = (x_1, x_2) \in \mathbb{R}^{2n}$.
We now develop the code for this example, starting with the upper and lower branches of the network.
```python
import sys
sys.path.append("../")
from ddn.basic.node import *
from ddn.basic.sample_nodes import *
from ddn.basic.robust_nodes import *
from ddn.basic.composition import *
# construct n-dimensional vector inputs
n = 10
x_1 = np.random.randn(n, 1)
x_2 = np.random.randn(n, 1)
x = np.vstack((x_1, x_2))
# create upper and lower branches
upperBranch = ComposedNode(SelectNode(2*n, 0, n-1), RobustAverage(n, 'quadratic'))
lowerBranch = ComposedNode(SelectNode(2*n, n), RobustAverage(n, 'quadratic'))
```
Here we construct each branch by composing a `SelectNode`, which chooses the appropriate subvector $x_1$ or $x_2$ for the branch, with a `RobustAverage` node, which computes the mean. To make sure things are working so far we evaluate the upper and lower branches, each now expressed as a single composed processing node, and compare their outputs to the mean of $x_1$ and $x_2$, respectively.
```python
mu_1, _ = upperBranch.solve(x)
mu_2, _ = lowerBranch.solve(x)
print("upper branch: {} vs {}".format(mu_1, np.mean(x_1)))
print("lower branch: {} vs {}".format(mu_2, np.mean(x_2)))
```
upper branch: [0.35571504] vs 0.3557150433275785
lower branch: [-0.2411335] vs -0.2411335017597908
Continuing the example, we now run the upper and lower branches in parallel (to produce $(\mu_1, \mu_2) \in \mathbb{R}^2$) and write a node to take the difference between the two elements of the resulting vector.
```python
# combine the upper and lower branches
meansNode = ParallelNode(upperBranch, lowerBranch)
# node for computing mu_1 - mu_2
class DiffNode(AbstractNode):
"""Computes the difference between elements in a 2-dimensional vector."""
def __init__(self):
super().__init__(2, 1)
def solve(self, x):
assert len(x) == 2
return x[0] - x[1], None
def gradient(self, x, y=None, ctx=None):
return np.array([1.0, -1.0])
# now put everything together into a network (super declarative node)
network = ComposedNode(ComposedNode(meansNode, DiffNode()), SquaredErrorNode(1))
# print the initial (half) squared difference between the means
y, _ = network.solve(x)
print(y)
```
0.17811409288645474
Now let's optimize $x_1$ and $x_2$ so as to make their means equal.
```python
import scipy.optimize as opt
import matplotlib.pyplot as plt
x_init = x.copy()
y_init, _ = network.solve(x_init)
history = [y_init]
result = opt.minimize(lambda xk: network.solve(xk)[0], x_init, args=(), method='L-BFGS-B', jac=lambda xk: network.gradient(xk),
options={'maxiter': 1000, 'disp': False},
callback=lambda xk: history.append(network.solve(xk)[0]))
# plot results
plt.figure()
plt.semilogy(history, lw=2)
plt.xlabel("iter."); plt.ylabel("error")
plt.title("Example: Matching means")
plt.show()
# print final vectors and their means
x_final = result.x
print(x_final[0:n])
print(x_final[n:])
print(np.mean(x_final[0:n]))
print(np.mean(x_final[n:]))
```
## Mathematics
To understand composition mathematically, consider the following network,
```
.---.
.---| f |---.
/ '---' \ .---.
x --< >--| h |--- y
\ .---. / '---'
'---| g |---'
'---'
```
We can write the function as $y = h(f(x), g(x))$. Let's assume that $x$ is an $n$-dimensional vector, $f : \mathbb{R}^n \to \mathbb{R}^p$, $g : \mathbb{R}^n \to \mathbb{R}^q$, and $h : \mathbb{R}^{p+q} \to \mathbb{R}^m$. This implies that the output, $y$, is an $m$-dimensional vector.
We can write the derivative as
$$
\begin{align}
\text{D}y(x) &= \text{D}_{F}h(f, g) \text{D}f(x) + \text{D}_{G}h(f, g) \text{D}g(x) \\
&= \begin{bmatrix} \text{D}_F h & \text{D}_G h \end{bmatrix} \begin{bmatrix} \text{D}f \\ \text{D}g \end{bmatrix}
\end{align}
$$
where the first matrix on the right-hand-side has size $m \times (p + q)$ and the second matrix has size $(p + q) \times n$, giving an $m \times n$ matrix for $\text{D}y(x)$ as expected. Moreover, we can treat the parallel branch as a single node in the graph computing $(f(x), g(x)) \in \mathbb{R}^{p+q}$.
Note that none of this is specific to deep declarative nodes---it is a simple consequence of the rules of differentiation and applies to both declarative and imperative nodes. We can, however, also think about composition of the objective function within the optimization problem defining a declarative node.
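As a quick sanity check (added here), the analytic gradient of the composed `network` above can be compared against a central finite-difference approximation, using only the `solve` and `gradient` calls already demonstrated in this notebook:
```python
# Finite-difference check of the composed network's gradient (illustrative)
x0 = x.flatten()
eps = 1.0e-6
g_analytic = np.asarray(network.gradient(x0)).flatten()
g_numeric = np.zeros_like(x0)
for i in range(len(x0)):
    e = np.zeros_like(x0)
    e[i] = eps
    g_numeric[i] = (float(network.solve(x0 + e)[0]) - float(network.solve(x0 - e)[0])) / (2.0 * eps)
print(np.max(np.abs(g_analytic - g_numeric)))   # should be close to zero
```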
## Composed Objectives within Declarative Nodes
Consider the following parametrized optimization problem
$$
\begin{align}
y(x) &= \text{argmin}_{u \in \mathbb{R}} h(f(x, u), g(x, u))
\end{align}
$$
for $x \in \mathbb{R}$. From Proposition 4.3 of the ["Deep Declarative Networks: A New Hope"](https://arxiv.org/abs/1909.04866) paper we have
$$
\begin{align}
\frac{dy}{dx} &= -\left(\frac{\partial^2 h(f(x, y), g(x, y))}{\partial y^2}\right)^{-1} \frac{\partial^2 h(f(x, y), g(x, y))}{\partial x \partial y}
\end{align}
$$
when the various partial derivatives exist. Expanding the derivatives we have
$$
\begin{align}
\frac{\partial h(f(x, y), g(x, y))}{\partial y}
&= \frac{\partial h}{\partial f} \frac{\partial f}{\partial y} + \frac{\partial h}{\partial g} \frac{\partial g}{\partial y}
\\
\frac{\partial^2 h(f(x, y), g(x, y))}{\partial y^2}
&= \frac{\partial^2 h}{\partial y \partial f} \frac{\partial f}{\partial y} +
\frac{\partial h}{\partial f} \frac{\partial^2 f}{\partial y^2} +
\frac{\partial^2 h}{\partial y \partial g} \frac{\partial g}{\partial y} +
\frac{\partial h}{\partial g} \frac{\partial^2 g}{\partial y^2}
\\
&= \begin{bmatrix}
\frac{\partial^2 h}{\partial y \partial f} \\
\frac{\partial^2 f}{\partial y^2} \\
\frac{\partial^2 h}{\partial y \partial g} \\
\frac{\partial^2 g}{\partial y^2}
\end{bmatrix}^T
\begin{bmatrix}
\frac{\partial f}{\partial y} \\
\frac{\partial h}{\partial f} \\
\frac{\partial g}{\partial y} \\
\frac{\partial h}{\partial g}
\end{bmatrix}
\\
\frac{\partial^2 h(f(x, y), g(x, y))}{\partial x \partial y}
&= \begin{bmatrix}
\frac{\partial^2 h}{\partial x \partial f} \\
\frac{\partial^2 f}{\partial x \partial y} \\
\frac{\partial^2 h}{\partial x \partial g} \\
\frac{\partial^2 g}{\partial x \partial y}
\end{bmatrix}^T
\begin{bmatrix}
\frac{\partial f}{\partial y} \\
\frac{\partial h}{\partial f} \\
\frac{\partial g}{\partial y} \\
\frac{\partial h}{\partial g}
\end{bmatrix}
\end{align}
$$
As a special case, when $f: (x, u) \mapsto x$ and $g: (x, u) \mapsto u$ we have
$$
\begin{align}
\frac{\partial^2 h(f(x, y), g(x, y))}{\partial y^2}
&= \begin{bmatrix}
\frac{\partial^2 h}{\partial y \partial f} \\
0 \\
\frac{\partial^2 h}{\partial y \partial g} \\
0
\end{bmatrix}^T
\begin{bmatrix}
0 \\
\frac{\partial h}{\partial f} \\
1 \\
\frac{\partial h}{\partial g}
\end{bmatrix}
&= \frac{\partial^2 h}{\partial y^2}
\\
\frac{\partial^2 h(f(x, y), g(x, y))}{\partial x \partial y}
&= \begin{bmatrix}
\frac{\partial^2 h}{\partial x \partial f} \\
0 \\
\frac{\partial^2 h}{\partial x \partial g} \\
0
\end{bmatrix}^T
\begin{bmatrix}
0 \\
\frac{\partial h}{\partial f} \\
1 \\
\frac{\partial h}{\partial g}
\end{bmatrix}
&= \frac{\partial^2 h}{\partial x \partial y}
\end{align}
$$
which gives the standard result, as it should.
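A tiny concrete check of the declarative gradient formula (added here, not part of the original tutorial): take $h(f, g) = (f - g)^2$ with $f = x$ and $g = u$, so $y(x) = \text{argmin}_u (x - u)^2 = x$ and $dy/dx$ should equal $1$:
```python
import sympy as sp

x_s, u_s = sp.symbols('x u', real=True)
h = (x_s - u_s)**2
# dy/dx = -(d^2 h / du^2)^{-1} * d^2 h / (dx du)
dy_dx = -sp.diff(h, u_s, u_s)**-1 * sp.diff(h, u_s, x_s)
print(sp.simplify(dy_dx))   # -> 1
```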
```python
```
Source: tutorials/06_composition.ipynb | pmorerio/ddn | MIT
# Pg. 332 #29, 41, 53, 71, 73, 83, 87, 89, 93, 101, 102
---
Eric Nguyen
20 Dec 2018
```
import numpy as np
import matplotlib.pyplot as plt
```
### *Find each logarithm*. *Round to six decimal places.*
#### 29. $\ln{5894}$
```
ans29 = round(np.log(5894), 6)
ans29
```
#### Answer 29:
> $8.68169$
### *Solve for $t$*.
#### 41. $e^{-0.02t} = 0.06$
$$
\begin{align}
e^{-0.02t} &= 0.06 \\
-0.02t &= \ln{0.06} \\
t &= \frac{\ln{0.06}}{-0.02}
\end{align}
$$
```
ans41 = np.log(0.06) / (-0.02)
ans41
```
```
# Verify
np.power(np.e, -0.02 * ans41)
```
#### Answer 41:
> $t = 140.670536$
### *Differentiate*.
#### 53. $y = \ln{\frac{x^2}{4}} \left(\text{Hint:} \ln{\frac{A}{B}} = \ln{A} - \ln{B}\right)$
$$
\begin{align}
y &= \ln{x^2} - \ln{4} \\
y' &= \frac{2x}{x^2} \\
&= \frac{2}{x}
\end{align}
$$
```
def pltfn(fn, xlim = (-10, 10), smoothness = 100, color = False):
xp = np.linspace(xlim[0], xlim[1], smoothness)
if (color):
plt.plot(xp, fn(xp), color = color)
else:
plt.plot(xp, fn(xp))
qtn53 = lambda x: np.log(np.power(x, 2) / 4)
ans53 = lambda x: 2 / x
pltfn(qtn53)
pltfn(ans53)
```
#### Answer 53:
> $y' = \dfrac{2}{x}$
#### 59. $g(x) = e^x \ln{x^2}$
$$\begin{align}
g'(x) &= \frac{d}{dx}\left(e^x\right) \cdot \ln{x^2} + e^x \cdot \frac{d}{dx}\left(\ln{x^2}\right) \quad &\text{Product rule.} \\
&= e^x \ln{x^2} + e^x \cdot \frac{2}{x} \\
&= e^x \left(\ln{x^2} + \frac{2}{x}\right)
\end{align}$$
```
qtn59 = lambda x: np.power(np.e, x) * np.log(np.power(x, 2))
ans59 = lambda x: np.power(np.e, x) * (np.log(np.power(x, 2)) + 2 / x)
pltfn(qtn59, (5, 10))
pltfn(ans59, (5, 10))
```
#### Answer 59:
> $g'(x) = e^x\left(\ln{x^2} + \dfrac{2}{x}\right)$
#### 71. Find the equation of the line tangent to the graph of $y = (\ln{x})^2$ at $x = 3$.
$$\begin{align}
y' &= 2 \cdot (\ln{x}) \cdot \frac{1}{x} \\
&= \frac{2\ln{x}}{x} \\
y &= f(x) \\
g(x) &= f'\left(a\right)\left(x-a\right)+f\left(a\right) \quad &\text{Tangent line equation.} \\
f'(3) &= \frac{2\ln{3}}{3} \quad &\text{Find the slope.} \\
&\approx 0.732408 \\
f(3) &\approx 1.206949 \\
g(x) &= 0.732408(x - 3) + 1.206949 \\
\end{align}$$
```
fn71 = lambda x: np.power(np.log(x), 2)
dfn71 = lambda x: (2 * np.log(x)) / x
```
```
slope71 = (2 * np.log(3)) / 3
slope71
```
```
fn71(3)
```
```
ans71 = lambda x: slope71 * (x - 3) + fn71(3)
pltfn(fn71, (0.5, 5))
pltfn(dfn71, (0.5, 5))
pltfn(ans71, (0.5, 5))
```
#### Answer 71:
> $g(x) = 0.732408(x - 3) + 1.206949$
#### 73. <span style="color: #0af">Advertising.</span> A model for consumers' response to advertising is given by
$$ N(a) = 2000 + 500 \ln{a}, \quad a \geq 1,$$
#### where $N(a)$ is the number of units sold and $a$ is the amount spent on advertising, in thousands of dollars.
#### a) How many units were sold after spending $\$1000$ on advertising?
```
fn73 = lambda a: 2000 + 500 * np.log(a)
fn73(1)  # $1000 spent corresponds to a = 1, since a is measured in thousands of dollars
```
#### Answer 73a:
> $N(1) = 2000$ units (since $a$ is in thousands of dollars, spending $\$1000$ corresponds to $a = 1$)
#### b) Find $N'(a)$ and $N'(10)$.
$$\begin{align}
N'(a) &= \frac{500}{a} \\
\end{align}$$
```
dfn73 = lambda a: 500 / a
dfn73(10)
```
```
pltfn(fn73, (0.25, 5))
pltfn(dfn73, (0.25, 5))
```
```
pltfn(dfn73, (8, 12), color="green")
plt.scatter(10, dfn73(10), color="green")
```
#### c) Find the maximum and minimum values, if they exist.
$$\begin{align}
N'(a) &= \frac{500}{a} \\
0 &= \frac{500}{a} & \text{Find the zeroes.} \\
0 &\neq 500
\end{align}$$
#### Answer 73c:
> There are no minimum or maximum values.
#### d) Find $\underset{a\to\infty}{\lim} N'(a)$. Discuss whether it makes sense to continue to spend more and more dollars on advertising.
$$\begin{align}
\underset{a\to\infty}{\lim} N'(a) &= \frac{500}{\infty} \\
\underset{a\to\infty}{\lim} N'(a) &= 0
\end{align}$$
#### Answer 73d:
> It does not make sense to spend more and more dollars on advertising, in thousands of dollars, as the effect of advertising rapidly diminishes for each thousand of dollars spent.
#### 83. <span style="color: #0af">Forgetting.</span> Students in a zoology class took a final exam. They took equivalent forms of the exam at monthly intervals thereafter. After $t$ months, the average score $S(t)$, as a percentage was found to be given by
$$ S(t) = 78 - 15 \ln{\left(t + 1\right)}, \quad t \geq 0. $$
#### a) What was the average score when they initially took the test, $t = 0$?
```
fn83 = lambda t: 78 - 15 * np.log(t + 1)
pltfn(fn83, (0, 200))
ans83a = fn83(0)
ans83a
```
#### Answer 83a:
> $S(0) = 78.0$
#### b) What was the average score after 4 months?
```
ans83b = fn83(4)
ans83b
```
#### Answer 83b:
> $S(4) \approx 53.858$
#### c) What was the average score after 24 months?
```
ans83c = fn83(24)
ans83c
```
#### Answer 83c:
> $S(24) \approx 29.717$
#### d) What percentage of their original answers did the students retain after 2 years (24 months)?
```
ans83c / ans83a
```
0.38098541829457677
#### Answer 83d:
> $38.1\%$
#### e) Find $S'(t)$.
$$\begin{align}
S(t) &= 78 - 15 \ln{\left(t + 1\right)}, & t \geq 0 \\
S'(t) &= -15 \cdot \frac{1}{t + 1}, & t \geq 0 \\
&= \frac{-15}{t + 1}, & t \geq 0
\end{align}$$
```
dfn83 = lambda t: -15 / (t + 1)
pltfn(fn83, (0, 200))
pltfn(dfn83, (0, 200))
```
#### f) Find the maximum and minimum values, if they exist.
$$\begin{align}
S'(t) &= \frac{-15}{t + 1} \\
0 &= \frac{-15}{t + 1} \\
0 &\neq -15
\end{align}$$
#### Answer 83f:
> There are no maximum or minimum values.
### *Solve for $t$*.
#### 87. $P = P_0e^{-kt}$
$$\begin{align}
\frac{P}{P_0} &= e^{-kt} \\
\ln{\left(\frac{P}{P_0}\right)} &= -kt \\
t &= \frac{\ln{\left(\frac{P}{P_0}\right)}}{-k}
\end{align}$$
### *Differentiate*.
#### 93. $f(t) = \ln{\dfrac{1 - t}{1 + t}}$
#### Answer 93:
$$\begin{align}
f(t) &= \ln{\left(1 - t \right)} - \ln{\left(1 + t\right)} \\
f'(t) &= \frac{-1}{1 - t} - \frac{1}{1 + t} \\
f'(t) &= \frac{-\left(1 + t\right)}{\left(1 - t\right)\left(1 + t\right)} -
\frac{\left(1 - t\right)}{\left(1 + t\right)\left(1 - t\right)} \\
f'(t) &= \frac{-1 - t - 1 + t}{\left(1 - t\right)\left(1 + t\right)} \\
f'(t) &= \frac{-2}{\left(1 - t\right)\left(1 + t\right)} \\
\end{align}$$
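A quick numerical spot-check of $f'(t)$ at $t = 0.5$ (sketch, comparing a central difference against $-2/(1-t^2)$):
```
# finite-difference check of f'(0.5); both values should be about -2.6667
h = 1e-5
f93 = lambda t: np.log((1 - t) / (1 + t))
print((f93(0.5 + h) - f93(0.5 - h)) / (2 * h), -2 / (1 - 0.5**2))
```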
#### 101. $f(x) = \ln{\dfrac{1 + \sqrt{x}}{1 - \sqrt{x}}}$
#### Answer 101:
$$\begin{align}
f(x) &= \ln{\left(1 + \sqrt{x}\right)} - \ln{\left(1 - \sqrt{x}\right)} \\
\frac{d}{dx} \sqrt{x} &= \frac{1}{2\sqrt{x}} \\
f'(x) &= \frac{\frac{1}{2\sqrt{x}}}{1 + \sqrt{x}} - \frac{-\frac{1}{2\sqrt{x}}}{1 - \sqrt{x}} \\
f'(x) &= \frac{1}{\left(2\sqrt{x}\right)\left(1 + \sqrt{x}\right)} + \frac{1}{\left(2\sqrt{x}\right)\left(1 - \sqrt{x}\right)} \\
f'(x) &= \frac{\left(1 - \sqrt{x}\right)}{\left(2\sqrt{x}\right)\left(1 + \sqrt{x}\right)\left(1 - \sqrt{x}\right)} + \frac{\left(1 + \sqrt{x}\right)}{\left(2\sqrt{x}\right)\left(1 - \sqrt{x}\right)\left(1 + \sqrt{x}\right)} \\
f'(x) &= \frac{2}{\left(2\sqrt{x}\right)\left(1 + \sqrt{x}\right)\left(1 - \sqrt{x}\right)} \\
f'(x) &= \frac{1}{\sqrt{x}\left(1 - \sqrt{x}\right)\left(1 + \sqrt{x}\right)} \\
f'(x) &= \frac{1}{\sqrt{x}\left(1 - x\right)} \\
\end{align}$$
#### 102. $f(x) = \ln{\left(\ln{x}\right)}^3$
#### Answer 102:
$$\begin{align}
f(x) &= 3 \ln{\left(\ln{x}\right)} \\
f'(x) &= \frac{\frac{3}{x}}{\ln{x}} \\
f'(x) &= \frac{3}{x\ln{x}}
\end{align}$$
| a22cb2f3ff24a984f5196cd771571186513e8167 | 41,106 | ipynb | Jupyter Notebook | 2018-12/2018-12-20.ipynb | airicbear/calculus-homework | a765d3ba35b2b3794b9b2cce038152682eeb2cb8 | [
"MIT"
] | null | null | null | 2018-12/2018-12-20.ipynb | airicbear/calculus-homework | a765d3ba35b2b3794b9b2cce038152682eeb2cb8 | [
"MIT"
] | 1 | 2019-02-04T07:00:05.000Z | 2019-02-09T01:17:25.000Z | 2018-12/2018-12-20.ipynb | airicbear/calculus-homework | a765d3ba35b2b3794b9b2cce038152682eeb2cb8 | [
"MIT"
] | null | null | null | 43.544492 | 17,974 | 0.635868 | true | 2,930 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.843895 | 0.752595 | __label__eng_Latn | 0.356014 | 0.586862 |
# Expectation–maximization algorithm
# Purpose
* Understand how the EM-algorithm works to estimate parameters
# Methodology
* Implement a simple EM-algorithm
# Setup
```python
# %load imports.py
## Local packages:
%matplotlib inline
%load_ext autoreload
%autoreload 2
%config Completer.use_jedi = False ## (To fix autocomplete)
## External packages:
import pandas as pd
pd.options.display.max_rows = 999
pd.options.display.max_columns = 999
pd.set_option("display.max_columns", None)
import numpy as np
import os
import matplotlib.pyplot as plt
#if os.name == 'nt':
# plt.style.use('presentation.mplstyle') # Windows
import plotly.express as px
import plotly.graph_objects as go
import seaborn as sns
import sympy as sp
from sympy.physics.mechanics import (dynamicsymbols, ReferenceFrame,
Particle, Point)
from sympy.physics.vector.printing import vpprint, vlatex
from IPython.display import display, Math, Latex
from src.substitute_dynamic_symbols import run, lambdify
import pyro
import sklearn
import pykalman
from statsmodels.sandbox.regression.predstd import wls_prediction_std
import statsmodels.api as sm
from scipy.integrate import solve_ivp
## Local packages:
from src.data import mdl
from src.symbols import *
from src.parameters import *
import src.symbols as symbols
from src import prime_system
from src.models import regression
from src.visualization.regression import show_pred
from src.visualization.plot import track_plot
## Load models:
# (Uncomment these for faster loading):
import src.models.vmm_linear as vmm
from src.data.case_0 import ship_parameters, df_parameters, ps, ship_parameters_prime
from src.data.transform import transform_to_ship
from scipy.stats import norm
import filterpy.stats as stats
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import KalmanFilter
from numpy import sum
```
## Generate some data
```python
t_ = np.linspace(0,10,11)
k_ = 1
x = k_*t_
z = x + np.random.normal(scale=0.5, size=len(t_))
fig,ax=plt.subplots()
ax.plot(t_,x, label='real')
ax.plot(t_,z, 'o', label='measurement')
ax.legend();
```
```python
F = 2 # Guessing
for i in range(5):
F = sum(x[1:] - F*x[0:-1])/sum(x**2)
print(F)
```
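For comparison, here is a minimal sketch of the closed-form least-squares estimate of $F$ for a model $x_{k+1} \approx F x_k$ (the usual M-step-style update when the state is observed); note that it differs from the update used in the cell above.
```python
# least-squares estimate of F from the noisy measurements z
F_ls = sum(z[1:]*z[0:-1])/sum(z[0:-1]**2)
print(F_ls)
```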
```python
```
```python
len(x[0:-1])
```
```python
len(x[1:])
```
```python
F
```
```python
```
*Source: martinlarsalbert/wPCC, `notebooks/15.20_EM-algorithm.ipynb` (MIT)*
## Classical Mechanics - Week 9
### Last Week:
- We saw how a potential can be used to analyze a system
- Gained experience with plotting and integrating in Python
### This Week:
- We will study harmonic oscillations using packages
- Further develope our analysis skills
- Gain more experience wtih sympy
```python
# Let's import packages, as usual
import numpy as np
import matplotlib.pyplot as plt
import sympy as sym
sym.init_printing(use_unicode=True)
```
Let's analyze a spring using sympy. It will have mass $m$, spring constant $k$, angular frequency $\omega_0$, initial position $x_0$, and initial velocity $v_0$.
The motion of this harmonic oscillator is described by the equation:
eq 1.) $m\ddot{x} = -kx$
This can be solved as
eq 2.) $x(t) = A\cos(\omega_0 t - \delta)$, $\qquad \omega_0 = \sqrt{\dfrac{k}{m}}$
Use SymPy below to plot this function. Set $A=2$, $\omega_0 = \pi/2$ and $\delta = \pi/4$.
(Refer back to ***Notebook 7*** if you need to review plotting with SymPy.)
```python
# Plot for equation 2 here
A, omega0, t, delta = sym.symbols('A, omega_0, t, delta')
x=A*sym.cos(omega0*t-delta)
x
```
```python
x1 = sym.simplify(x.subs({A:2, omega0:sym.pi/2, delta:sym.pi/4}))
x1
```
```python
sym.plot(x1,(t,0,10), title='position vs time',xlabel='t',ylabel='x')
```
## Q1.) Calculate analytically the initial conditions, $x_0$ and $v_0$, and the period of the motion for the given constants. Is your plot consistent with these values?
✅ Double click this cell, erase its content, and put your answer to the above question here.
####### Possible answers #######
Initial position $x_0=2\cos(-\pi/4)=\sqrt{2}$.
Initial velocity $v_0=-2(\pi/2)\sin(-\pi/4)=\pi/\sqrt{2}$.
The period is $2\pi/(\pi/2)=4$.
The initial position and the period are easily seen to agree with these values. The initial velocity is
positive and the initial slope looks like it agrees.
####### Possible answers #######
#### Now let's make plots for underdamped, critically-damped, and overdamped harmonic oscillators.
Below are the general equations for these oscillators:
- Underdamped, $\beta < \omega_0$ :
eq 3.) $x(t) = A e^{-\beta t}cos(\omega ' t) + B e^{-\beta t}sin(\omega ' t)$ , $\omega ' = \sqrt{\omega_0^2 - \beta^2}$
___________________________________
- Critically-damped, $\beta = \omega_0$:
eq 4.) $x(t) = Ae^{-\beta t} + B t e^{-\beta t}$
___________________________________
- Overdamped, $\beta > \omega_0$:
eq 5.) $x(t) = Ae^{-\left(\beta + \sqrt{\beta^2 - \omega_0^2}\right)t} + Be^{-\left(\beta - \sqrt{\beta^2 - \omega_0^2}\right)t}$
_______________________
In the cells below use SymPy to create the Position vs Time plots for these three oscillators.
Use $\omega_0=\pi/2$ as before, and then choose an appropriate value of $\beta$ for the three different damped oscillator solutions. Play around with the variables, $A$, $B$, and $\beta$, to see how different values affect the motion and if this agrees with your intuition.
```python
# Put your code for graphing Underdamped here
A, B, omega0, beta, t = sym.symbols('A, B, omega_0, beta, t')
omegap=sym.sqrt(omega0**2-beta**2)
x=sym.exp(-beta*t)*(A*sym.cos(omegap*t)+B*sym.sin(omegap*t))
x
```
```python
x1 = sym.simplify(x.subs({A:0, B:2, omega0:sym.pi/2, beta:sym.pi/40}))
sym.plot(x1,(t,0,30), title='Underdamped Oscillator',xlabel='t',ylabel='x')
```
```python
# Put your code for graphing Critical here
A, B, omega0, beta, t = sym.symbols('A, B, omega_0, beta, t')
x=sym.exp(-beta*t)*(A+B*t)
x
```
```python
x1 = sym.simplify(x.subs({A:0, B:2, omega0:sym.pi/2, beta:sym.pi/2}))
sym.plot(x1,(t,0,30), title='Critically-damped Oscillator',xlabel='t',ylabel='x')
```
```python
# Put your code for graphing Overdamped here
A, B, omega0, beta, t = sym.symbols('A, B, omega_0, beta, t')
beta1=beta+sym.sqrt(beta**2-omega0**2)
beta2=beta-sym.sqrt(beta**2-omega0**2)
x=A*sym.exp(-beta1*t)+B*sym.exp(-beta2*t)
x
```
```python
x1 = sym.simplify(x.subs({A:-2, B:2, omega0:sym.pi/2, beta:sym.pi}))
sym.plot(x1,(t,0,30), title='Overdamped Oscillator',xlabel='t',ylabel='x')
```
## Q2.) How would you compare the 3 different oscillators?
✅ Double click this cell, erase its content, and put your answer to the above question here.
####### Possible answers #######
Underdamped: Oscillations, with amplititude decreasing over time
Critical Damping: No oscillations. Curve dies the fastest of the three (for fixed $\omega_0$).
Overdamped: No oscillations. Curve dies slower than critically damped case.
####### Possible answers #######
# Here's another simple harmonic system, the pendulum.
The equation of motion for the pendulum is:
eq 6.) $ml\dfrac{d^2\theta}{dt^2} + mg \sin(\theta) = 0$, where $v=l\dfrac{d\theta}{dt}$ and $a=l\dfrac{d^2\theta}{dt^2}$
In the small angle approximation $\sin\theta\approx\theta$, so this can be written:
eq 7.) $\dfrac{d^2\theta}{dt^2} = -\dfrac{g}{l}\theta$
We then find the period of the pendulum to be $T = 2\pi\sqrt{l/g}$ and the angle at any given time
(if released from rest) is given by
$\theta = \theta_0\cos{\left(\sqrt{\dfrac{g}{l}} t\right)}$.
Let's use Euler's Forward method to solve equation (7) for the motion of the pendulum in the small angle approximation, and compare to the analytic solution.
First, let's graph the analytic solution for $\theta$. Go ahead and graph using either sympy, or the other method we have used, utilizing these variables:
- $t:(0s,50s)$
- $\theta_0 = 0.5$ radians
- $l = 40$ meters
```python
# Plot the analytic solution here
l=40
g=9.81
theta0, omega0, t = sym.symbols('theta_0, omega_0, t')
theta = theta0*sym.cos(omega0*t)
theta1 = sym.simplify(theta.subs({omega0:(g/l)**0.5,theta0:0.5}))
sym.plot(theta1,(t,0,50),title='Pendulum Oscillation, Small Angle Approximation',xlabel='time (s)',ylabel='Theta (radians)')
plt.show()
```
```python
# The same analytic plot, but now using matplotlib
# This is easier for comparing with the Euler's method calculation
ti=0
tf=50
dt=0.001
t=np.arange(ti,tf,dt)
theta0=0.5
l=40
g=9.81
omega0=np.sqrt(g/l)
theta=theta0*np.cos(omega0*t)
plt.grid()
plt.xlabel("Time (s)")
plt.ylabel("Theta (radians)")
plt.title("Theta (radians) vs Time (s), small angle approximation")
plt.plot(t,theta)
plt.show()
```
Now, use Euler's Forward method to obtain a plot of $\theta$ as a function of time $t$ (in the small angle approximation). Use eq (7) to calculate $\ddot{\theta}$ at each time step.
Try varying the time step size to see how it affects the Euler's method solution.
```python
# Perform Euler's Method Here
theta1=np.zeros(len(t))
theta1[0]=theta0
dtheta=0
ddtheta=-omega0**2*theta1[0]
for i in range(len(t)-1):
theta1[i+1] = theta1[i] + dtheta*dt
dtheta += ddtheta*dt
ddtheta = -omega0**2*theta1[i+1]
import matplotlib.patches as mpatches
plt.grid()
plt.xlabel("Time (s)")
plt.ylabel("Theta (radians)")
plt.title("Theta (radians) vs Time (s), small angle approximation")
blue_patch = mpatches.Patch(color = 'b', label = 'analytic')
red_patch = mpatches.Patch(color = 'r', label = 'Euler method')
plt.legend(handles=[blue_patch,red_patch],loc='lower left')
plt.plot(t,theta1,color='r')
plt.plot(t,theta,color='b')
plt.show()
```
You should have found that if you chose the time step size small enough, then the Euler's method solution was
indistinguishable from the analytic solution.
We can now trivially modify this, to solve for the pendulum **exactly**, without using the small angle approximation.
The exact equation for the acceleration is
eq 8.) $\dfrac{d^2\theta}{dt^2} = -\dfrac{g}{l}\sin\theta$.
Modify your Euler's Forward method calculation to use eq (8) to calculate $\ddot{\theta}$ at each time step in the cell below.
```python
theta2=np.zeros(len(t))
theta2[0]=theta0
dtheta=0
ddtheta=-omega0**2*np.sin(theta2[0])
for i in range(len(t)-1):
theta2[i+1] = theta2[i] + dtheta*dt
dtheta += ddtheta*dt
ddtheta = -omega0**2*np.sin(theta2[i+1])
plt.grid()
plt.xlabel("Time (s)")
plt.ylabel("Theta (radians)")
plt.title("Theta (radians) vs Time (s)")
blue_patch = mpatches.Patch(color = 'b', label = 'small angle approximation')
red_patch = mpatches.Patch(color = 'r', label = 'EXACT')
plt.legend(handles=[blue_patch,red_patch],loc='lower left')
plt.plot(t,theta2,color='r')
plt.plot(t,theta1,color='b')
plt.show()
```
# Q3.) What time step size did you use to find agreement between Euler's method and the analytic solution (in the small angle approximation)? How did the exact solution differ from the small angle approximation?
✅ Double click this cell, erase its content, and put your answer to the above question here.
####### Possible answers #######
A time step of 0.001 was sufficient so that Euler's method and the analytic formula were indistinguishable in the plots (for small angle approximation).
(Different time steps could be found, depending on how closely one compared the plots.)
The exact pendulum solution has a slightly longer period than the small angle approximation (in agreement with what we learned last week.)
####### Possible answers #######
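One quick way to quantify this (a sketch, reusing `theta0` and `omega0` from above): rerun the small-angle Euler integration for a few step sizes and print the maximum deviation from the analytic curve.
```python
# maximum deviation between Euler's method and the analytic small-angle solution
# for several time step sizes
for dt_test in [0.1, 0.01, 0.001]:
    t_test = np.arange(0, 50, dt_test)
    exact = theta0*np.cos(omega0*t_test)
    approx = np.zeros(len(t_test))
    approx[0] = theta0
    dth = 0.0
    for i in range(len(t_test)-1):
        ddth = -omega0**2*approx[i]
        approx[i+1] = approx[i] + dth*dt_test
        dth += ddth*dt_test
    print(dt_test, np.max(np.abs(approx - exact)))
```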
### Now let's do something fun:
In class we found that the 2-dimensional anisotropic harmonic motion can be solved as
eq 8a.) $x(t) = A_x \cos(\omega_xt)$
eq 8b.) $y(t) = A_y \cos(\omega_yt - \delta)$
If $\dfrac{\omega_x}{\omega_y}$ is a rational number (*i.e,* a ratio of two integers), then the trajectory repeats itself after some amount of time. The plots of $x$ vs $y$ in this case are called Lissajous figures (after the French physicists Jules Lissajous). If $\dfrac{\omega_x}{\omega_y}$ is not a rational number, then the trajectory does not repeat itself, but it still shows some very interesting behavior.
Let's make some x vs y plots below for the 2-d anisotropic oscillator.
First, recreate the plots in Figure 5.9 of Taylor. (Hint: Let $A_x=A_y$. For the left plot of Figure 5.9, let $\delta=\pi/4$ and for the right plot, let $\delta=0$.)
Next, try other rational values of $\dfrac{\omega_x}{\omega_y}$ such as 5/6, 19/15, etc, and using different phase angles $\delta$.
Finally, for non-rational $\dfrac{\omega_x}{\omega_y}$, what does the trajectory plot look like if you let the length of time to be arbitrarily long?
\[For these parametric plots, it is preferable to use our original plotting method, *i.e.* using `plt.plot()`, as introduced in ***Notebook 1***.\]
```python
# Plot the Lissajous curves here
Ax=1
Ay=1
omegay=1
ti=0
tf=10
dt=0.1
t=np.arange(ti,tf,dt)
r=2
delta=np.pi/4
omegax=r*omegay
X=Ax*np.cos(omegax*t)
Y=Ay*np.cos(omegay*t-delta)
plt.plot(X,Y)
```
```python
ti=0
tf=24.1
dt=0.1
t=np.arange(ti,tf,dt)
r=np.sqrt(2)
delta=0
omegax=r*omegay
X=Ax*np.cos(omegax*t)
Y=Ay*np.cos(omegay*t-delta)
plt.plot(X,Y)
```
```python
ti=0
tf=50
dt=0.1
t=np.arange(ti,tf,dt)
r=5/6
delta=np.pi/2
omegax=r*omegay
X=Ax*np.cos(omegax*t)
Y=Ay*np.cos(omegay*t-delta)
plt.plot(X,Y)
```
```python
ti=0
tf=100
dt=0.1
t=np.arange(ti,tf,dt)
r=19/15
delta=np.pi/2
omegax=r*omegay
X=Ax*np.cos(omegax*t)
Y=Ay*np.cos(omegay*t-delta)
plt.plot(X,Y)
```
```python
ti=0
tf=1000
dt=0.1
t=np.arange(ti,tf,dt)
r=np.sqrt(2)
delta=0
omegax=r*omegay
X=Ax*np.cos(omegax*t)
Y=Ay*np.cos(omegay*t-delta)
plt.plot(X,Y)
```
# Q4.) What are some observations you make as you play with the variables? What happens for non-rational $\omega_x/\omega_y$ if you let the oscillator run for a long time?
✅ Double click this cell, erase its content, and put your answer to the above question here.
####### Possible answers #######
For rational $\omega_x/\omega_y = n/m$ the curve closes. In general, the larger $n$ and $m$ (with no common factors) the longer it takes the curve to close.
For non-rational $\omega_x/\omega_y$, the curve essentially fills in the entire rectangle if you let it run to long enough time.
####### Possible answers #######
# Notebook Wrap-up.
Run the cell below and copy-paste your answers into their corresponding cells.
```python
from IPython.display import HTML
HTML(
"""
"""
)
```
# Well that's that, another Notebook! It's now been 10 weeks of class
You've been given lots of computational and programing tools these past few months. These past two weeks have been practicing these tools and hopefully you are understanding how some of these pieces add up. Play around with the code and see how it affects our systems of equations. Solve the Schrodinger Equation for the Helium atom. Figure out the unifying theory. The future is limitless!
| a73a1c51df951f48c17577fb37b8e487ad74b29b | 529,879 | ipynb | Jupyter Notebook | doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook9_Answers.ipynb | Shield94/Physics321 | 9875a3bf840b0fa164b865a3cb13073aff9094ca | [
"CC0-1.0"
] | 20 | 2020-01-09T17:41:16.000Z | 2022-03-09T00:48:58.000Z | doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook9_Answers.ipynb | Shield94/Physics321 | 9875a3bf840b0fa164b865a3cb13073aff9094ca | [
"CC0-1.0"
] | 6 | 2020-01-08T03:47:53.000Z | 2020-12-15T15:02:57.000Z | doc/AdminBackground/PHY321/CM_Jupyter_Notebooks/Answers/CM_Notebook9_Answers.ipynb | Shield94/Physics321 | 9875a3bf840b0fa164b865a3cb13073aff9094ca | [
"CC0-1.0"
] | 33 | 2020-01-10T20:40:55.000Z | 2022-02-11T20:28:41.000Z | 524.632673 | 129,732 | 0.945386 | true | 3,784 | Qwen/Qwen-72B | 1. YES
2. YES | 0.908618 | 0.903294 | 0.820749 | __label__eng_Latn | 0.946832 | 0.745208 |
We are answering questions in <cite data-cite="bibtex_lane2019online">(Lane, 2019)</cite>
The first question we answer is on how to find the smallest absolute difference for the set of numbers $S=\left\{2,3,4,9,16\right\}$
```python
s=[2,3,4,9,16]
result=[]
for i in range(10,1,-1):
sum=0
for j in s:
sum += abs(j-i)
result.append((i,sum))
result.sort(key=lambda x: x[1])
print("The number that gives the smallest absolute difference is %d. The sum of absolute differences is %d."
% (result[0][0],result[0][1]))
```
The number that gives the smallest absolute difference is 4. The sum of absolute differences is 20.
We can generalize the logic in the cell above into a that operates on a list of numbers. Let us make the convention that we enumerate the elements $s_j \in S$ starting with $0$, so if $S$ has three elements then $S=\left\{s_0,s_1,s_2\right\}.$ We know the number we should need to subtract from the elements of $s_j \in S$ should be
\begin{equation}
\underset{i} {\mathrm{argmin}} \sum_{j=0}^{\left|S\right|-1} \left|s_j-i\right|.
\label{equ:argmin-1}
\end{equation}
We do not claim this range to search is optimal, but we show that we can choose a range that is guaranteed to find the value of $i$ that minimizes the expression above. Let $s_{\text{max}}=\text{max}\left(S\right)$ be the largest element in $S$, and let $s_{\text{min}}=\text{min}\left(S\right)$ be the smallest element in $S$. Then the value of $i$ that satisfies equation \ref{equ:argmin-1} is in the closed interval $\left[ s_{\text{min}}, s_{\text{max}} \right].$
To see why this is so, if we use any number $k$ less than the smallest element $s_{\text{min}}$ or greater than the largest element $s_{\text{max}}$ then we are adding at least $\left|s_{\text{min}}-k\right|$ or $\left|s_{\text{max}}-k\right|$ unnecessarily to every term in the sum.
```python
def find_min_abs_diff_val(s):
"""
returns the integer that when subtracted from every element of s
gives the smallest sum of the absolute values of all the differences.
for clarification see the section titled, "Smallest Absolute Deviation,"
in http://onlinestatbook.com/2/summarizing_distributions/what_is_ct.html
"""
s.sort()
s_min=s[0]
s_max=s[len(s)-1]
result=[]
    for i in range(s_min, s_max + 1):
sum=0
for j in s:
sum += abs(j-i)
result.append((i,sum))
result.sort(key=lambda x: x[1])
return result[0][0]
print(find_min_abs_diff_val(s))
```
4
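As a quick usage example (sketch) on a second list, where any value between 2 and 3 minimizes the sum of absolute differences; the function returns 2 because ties are broken toward the smaller candidate.
```python
print(find_min_abs_diff_val([1, 2, 3, 10]))
```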
*Source: jhancock1975/online-status-book-exercises, `ch-3/smallest-absolute-difference.ipynb` (Apache-2.0)*
# Monte Carlo Markov Chain
## Christina Lee
## Category: Numerics
### Monte Carlo Physics Series
* [Monte Carlo: Calculation of Pi](../Numerics_Prog/Monte-Carlo-Pi.ipynb)
* [Monte Carlo Markov Chain](../Numerics_Prog/Monte-Carlo-Markov-Chain.ipynb)
* [Monte Carlo Ferromagnet](../Prerequisites/Monte-Carlo-Ferromagnet.ipynb)
* [Phase Transitions](../Prerequisites/Phase-Transitions.ipynb)
### Intro
If you didn't check it out already, take a look at the post that introduces using random numbers in calculations. Any such simulation is a <i>Monte Carlo</i> simulation. The most used kind of Monte Carlo simulation is a <i>Markov Chain</i>, also known as a random walk, or drunkard's walk. A Markov Chain is a series of steps where
* each new state is chosen probabilistically
* the probabilities only depend on the current state (no memory)
Imagine a drunkard trying to walk. At any one point, they could progress either left or right rather randomly. Also, just because they had been traveling in a straight line so far does not guarantee they will continue to do. They've just had extremely good luck.
We use Markov Chains to <b>approximate probability distributions</b>.
To create a good Markov Chain, we need
* <b> Ergodicity</b>: All states can be reached
* <b> Global Balance</b>: A condition that ensures the proper equilibrium distribution
### The Balances
Let $\pi_i$ be the probability that a particle is at site $i$, and $p_{ij}$ be the probability that a particle moves from $i$ to $j$. Then Global Balance can be written as,
\begin{equation}
\sum\limits_j \pi_i p_{i j} = \sum\limits_j \pi_j p_{j i} \;\;\;\;\; \forall i.
\end{equation}
In non-equation terms, this says the amount of "chain" leaving site $i$ is the same as the amount of "chain" entering site $i$ for every site in equilibrium. There is no flow.
Usually though, we actually want to work with a stricter rule than Global Balance, <b> Detailed Balance </b>, written as
\begin{equation}
\pi_i p_{i j} = \pi_j p_{j i}.
\end{equation}
Detailed Balance further constricts the transition probabilities we can assign and makes it easier to design an algorithm. Almost all MCMC algorithms out there use detailed balance, and only lately have certain applied mathematicians begun looking at breaking detailed balance to increase efficiency in certain classes of problems.
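As a quick numerical sketch of this condition, we can build the full one-step transition matrix for the small torus walk defined in the cells below and check detailed balance against the uniform distribution (the grid size `lc` here is a small local copy, so it does not clash with `l` used later).
```julia
# check detailed balance p_i*P[i,j] == p_j*P[j,i] for the torus walk with uniform p
lc = 3
nc = lc^2
P = zeros(nc, nc)
for i in 1:nc
    row   = div(i - 1, lc)        # 0-based row of site i in the grid
    up    = mod1(i + lc, nc)
    down  = mod1(i - lc, nc)
    left  = mod1(i - 1, lc) + lc*row
    right = mod1(i + 1, lc) + lc*row
    for j in (up, down, left, right)
        P[i, j] += 1/4
    end
end
p_eq = fill(1/nc, nc)
println("detailed balance holds: ",
    all(abs(p_eq[i]*P[i,j] - p_eq[j]*P[j,i]) < 1e-12 for i in 1:nc, j in 1:nc))
```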
### Today's Test Problem
I will let you know now; this might be one of the most powerful numerical methods you could ever learn. I was going to put down a list of applications, but the only limit to such a list is your imagination.
Today though, we will not be trying to predict stock market crashes, calculate the PageRank of a webpage, or calculate the entropy a quantum spin liquid at zero temperature. We just want to calculate an uniform probability distribution, and look at how Monte Carlo Markov Chains behave.
* We will start with an $l\times l$ grid
* Our chain starts somewhere in that grid
* We can then move up, down, left or right equally
* If we hit an edge, we come out the opposite side <i>(toroidal boundary conditions)</i>
<b>First question!</b> Is this ergodic?
Yes! Nothing stops us from reaching any location.
<b>Second question!</b> Does this obey detailed balance?
Yes! In equilibrium, each block has a probability of $\pi_i = \frac{1}{l^2}$, and can travel to any of its 4 neighbors with probability of $p_{ij} = \frac{1}{4}$. For any two neighbors
\begin{equation}
\frac{1}{l^2}\frac{1}{4} = \frac{1}{l^2}\frac{1}{4},
\end{equation}
and if they are not neighbors,
\begin{equation}
0 = 0.
\end{equation}
```julia
using Statistics
using Plots
gr()
```
Plots.GRBackend()
```julia
# This is just the equivalent of `mod`
# for using in an array that indexes from 1.
function armod(i,j)
return (mod(i-1+j,j)+1)
end
```
armod (generic function with 1 method)
```julia
# input the size of the grid
l=5;
n=l^2;
```
```julia
function Transition(i)
#randomly chose up, down, left or right
d=rand(1:4);
if d==1 #if down
return armod(i-l,n);
elseif d==2 #if left
row=convert(Int,floor((i-1)/l));
return armod(i-1,l)+l*row;
elseif d==3 #if right
row=convert(Int,floor((i-1)/l));
return armod(i+1,l)+l*row;
else # otherwise up
return armod(i+l,n);
end
end
```
Transition (generic function with 1 method)
```julia
# The centers of blocks.
# Will be using for pictoral purposes
pos=zeros(Float64,2,n);
pos[1,:]=[floor((i-1)/l) for i in 1:n].+0.5;
pos[2,:]=[mod(i-1,l) for i in 1:n].+0.5;
```
```julia
# How many timesteps
tn=2000;
# Array of timesteps
ti=Array{Int64,1}()
# Array of errors
err=Array{Float64,1}()
# Stores current location, initialized randomly
current=rand(1:n);
# Stores last location, used for pictoral purposes
last=current;
#Keeps track of where chain went
Naccumulated=zeros(Int64,l,l);
# put in our first point
# can index 2d array as 1d
Naccumulated[current]+=1;
```
```julia
for ii in 1:tn
last=current;
# Determine the new point
current=Transition(current);
Naccumulated[current]+=1;
# add new time steps and error points
push!(ti,ii)
push!(err,std(Naccumulated/ii))
end
```
When I was using an old version and pyplot I created this video of the state at each time point. https://www.youtube.com/watch?v=gxX3Fu1uuCs
```julia
heatmap(Naccumulated/tn .- 1/l^2)
```
```julia
scatter(ti,log10.(err), markerstrokewidth=0)
plot!(xlabel="Step"
,ylabel="Log10 std"
,title="Error for a $l x$l")
```
So, after running the above code and trying to figure out how it works (mostly plotting stuff), go back and study some properties of the system.
* How long does it take to forget it's initial position?
* How does the behaviour change with system size?
* How long would you have to go to get a certain accuracy? (especially if you didn't know what distribution you where looking for)
So hopefully you enjoyed this tiny introduction to an incredibly rich subject. Feel free to explore all the nooks and crannies to really understand the basics of this kind of simulation, so you can gain more control over the more complex simulations.
Monte Carlo simulations are as much of an art as a science. You need to live them, love them, and breathe them till you find out exactly why they are behaving like little kittens that can finally jump on top of your countertops, or open your bedroom door at 1am.
For all their misbehaving, you love the kittens anyway.
```julia
```
*Source: albi3ro/M4, `Numerics_Prog/Monte-Carlo-Markov-Chain.ipynb` (MIT)*
```julia
] activate .
```
[Vandermonde matrix:](https://en.wikipedia.org/wiki/Vandermonde_matrix)
\begin{align}V=\begin{bmatrix}1&\alpha _{1}&\alpha _{1}^{2}&\dots &\alpha _{1}^{n-1}\\1&\alpha _{2}&\alpha _{2}^{2}&\dots &\alpha _{2}^{n-1}\\1&\alpha _{3}&\alpha _{3}^{2}&\dots &\alpha _{3}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha _{m}&\alpha _{m}^{2}&\dots &\alpha _{m}^{n-1}\end{bmatrix}\end{align}
```julia
using PyCall
```
```julia
np = pyimport("numpy")
```
PyObject <module 'numpy' from 'C:\\Users\\carsten\\Anaconda3\\lib\\site-packages\\numpy\\__init__.py'>
```julia
np.vander(1:5, increasing=true)
```
5×5 Array{Int32,2}:
1 1 1 1 1
1 2 4 8 16
1 3 9 27 81
1 4 16 64 256
1 5 25 125 625
The source code for this function is [here](https://github.com/numpy/numpy/blob/v1.16.1/numpy/lib/twodim_base.py#L475-L563). It calls `np.multiply.accumulate` which is implemented in C [here](https://github.com/numpy/numpy/blob/deea4983aedfa96905bbaee64e3d1de84144303f/numpy/core/src/umath/ufunc_object.c#L3678). However, this code doesn't actually perform the computation, it basically only checks types and stuff. The actual kernel that gets called is [here](https://github.com/numpy/numpy/blob/deea4983aedfa96905bbaee64e3d1de84144303f/numpy/core/src/umath/loops.c.src#L1742). This isn't even C code but a template for C code which is used to generate type specific kernels.
Overall, this setup only supports a limited set of types, like `Float64`, `Float32`, and so forth.
Here is a simple Julia implementation (taken from [Steve's Julia intro](https://web.mit.edu/18.06/www/Fall17/1806/julia/Julia-intro.pdf))
```julia
function vander(x::AbstractVector{T}, n=length(x)) where T
m = length(x)
V = Matrix{T}(undef, m, n)
for j = 1:m
V[j,1] = one(x[j])
end
for i= 2:n
for j = 1:m
V[j,i] = x[j] * V[j,i-1]
end
end
return V
end
```
vander (generic function with 2 methods)
```julia
vander(1:5)
```
5×5 Array{Int64,2}:
1 1 1 1 1
1 2 4 8 16
1 3 9 27 81
1 4 16 64 256
1 5 25 125 625
# A quick benchmark
```julia
using BenchmarkTools, Plots
```
```julia
ns = exp10.(range(1, 4, length=30));
```
```julia
tnp = Float64[]
tjl = Float64[]
for n in ns
x = 1:n |> collect
push!(tnp, @belapsed np.vander($x) samples=3 evals=1)
push!(tjl, @belapsed vander($x) samples=3 evals=1)
end
```
```julia
plot(ns, tnp./tjl, m=:circle, xscale=:log10, xlab="matrix size", ylab="NumPy time / Julia time", legend=:false)
```
```julia
savefig("vandermonde.pdf")
savefig("vandermonde.svg")
savefig("vandermonde.png")
```
Note that the clean and concise Julia implementation beats the NumPy implementation for small matrix sizes and is on par with it for large ones!
At the same time, the Julia code is generic and works for arbitrary types!
```julia
vander(Int32[4, 8, 16, 32])
```
4×4 Array{Int32,2}:
1 4 16 64
1 8 64 512
1 16 256 4096
1 32 1024 32768
```julia
vander(["this", "is", "a", "test"])
```
4×4 Array{String,2}:
"" "this" "thisthis" "thisthisthis"
"" "is" "isis" "isisis"
"" "a" "aa" "aaa"
"" "test" "testtest" "testtesttest"
```julia
vander([true, false, false, true])
```
4×4 Array{Bool,2}:
true true true true
true false false false
true false false false
true true true true
```julia
```
*Source: crstnbr/JuliaWorkshop19, `playground/vandermonde/vandermonde.ipynb` (MIT)*
# Physics 256
## Physics of Baseball
```python
import style
style._set_css_style('../include/bootstrap.css')
```
## Last Time
### [Notebook Link: 14_ProjectileMotion.ipynb](./14_ProjectileMotion.ipynb)
- projectile motion for a cannon shell with air resistance
- building a simple targetting algorithm
## Today
- 3D motion of a pitched baseball
## Setting up the Notebook
```python
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('../include/notebook.mplstyle');
%config InlineBackend.figure_format = 'svg'
```
## Converting Units
```python
import scipy.constants as constants
def convert(value,unit1,unit2):
'''Convert value in unit1 to unit2'''
# the converter
conv = {'ft->m':constants.foot,'in->m':constants.inch,'mph->m/s':constants.mph}
# make a copy and perform reverse conversions
cpy_conv = dict(conv)
for key, val in cpy_conv.items():
units = key.split('->')
conv[units[-1]+'->'+units[0]] = 1.0/val
# perform the conversion
key = '%s->%s'%(unit1,unit2)
if key in conv:
return value*conv[key]
else:
print('Unit conversion %s not possible.' %key)
```
## Drag Coefficient of a Baseball
The empirical form (from wind-tunnel and simulation measurements) is given by:
\begin{equation}
\frac{C\rho A}{m} \equiv \frac{B_2}{m} = 0.0039 + \frac{0.0058}{1+\exp{[(v-v_d)/\Delta}]}
\end{equation}
where $v_d \simeq 35~{\rm m/s}$ and $\Delta \simeq 5~{\rm m/s}$.
```python
def B2om(v):
'''The drag Coefficient of a baseball.
v in m/s
'''
vd = 35.0 # m/s
Δ = 5.0 # m/s
return 0.0039 + 0.0058/(1.0 + np.exp((v-vd)/Δ))
```
```python
v = np.linspace(0,200,1000)
plt.plot(v,B2om(convert(v,'mph','m/s'))*convert(1.0,'m','ft'))
plt.xlabel('Ball Speed [mph]')
plt.ylabel('Drag Coefficient [1/ft]')
```
## Equation of Motion for a Spinning Baseball
Taking our coordinate system origin at the pitching rubber with $x$ pointing towards home plate, $y$ pointing up and $z$ towards third base, the total force on a spinning ball is given by:
\begin{equation}
\vec{F} = \vec{F}_{\rm g} + \vec{F}_{\rm drag} + \vec{F}_{\rm magnus}.
\end{equation}
where the Magnus force is:
\begin{equation}
\vec{F}_{\rm magnus} = S_0 \vec{\omega} \times \vec{v}
\end{equation}
with the direction of $\vec{\omega}$ determined by the right-hand-rule and $S_0$ is related to the solid angular average of the drage coefficient $C$.
This can be decomposed into the following 3D equation of motion:
\begin{align}
\frac{d v_x}{dt} &= -\frac{B_2}{m} v v_x \\
\frac{d v_y}{dt} &= -g \\
\frac{d v_z}{dt} &= - \frac{S_0}{m} \omega v_x
\end{align}
which can be iterated using the Euler method to find the trajectory of the ball.
```python
from scipy.constants import g,pi
π = pi
# the time step
Δt = 0.001 # s
# the dimensionless angular drag factor
S0om = 4.1E-4
# the angular velocity
ω = 1900 * 2*π / 60 # rad/s
# initial conditions (convert everything to SI)
vx,vy,vz = convert(100,'mph','m/s'),0.0,0.0
r = [[0.0,convert(6.0+10.0/12.0,'ft','m'),0.0]]
print(r[-1][0])
while r[-1][0] <= convert(60.5,'ft','m'):
v = np.sqrt(vx**2 + vy**2 + vz**2)
vx -= B2om(v)*v*vx*Δt
vy -= g*Δt
vz -= S0om*vx*ω*Δt
r.append([r[-1][0]+vx*Δt,r[-1][1]+vy*Δt,r[-1][2]+vz*Δt])
# convert the result to feet
r = np.array(r)
r = convert(r,'m','ft')
# Plot the resulting trajectory
fig, ax = plt.subplots(2, sharex=True)
fig.subplots_adjust(hspace=0.1)
# the x-y plane
ax[0].plot(r[:,0],r[:,1])
ax[0].set_ylabel('y [ft]')
# the y-z plane
ax[1].plot(r[:,0],r[:,2])
ax[1].set_ylabel('z [ft]')
ax[1].set_xlabel('x [ft]')
ax[1].set_xlim(0,60.5);
```
### Looking in 3D
```python
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(15,6))
ax = fig.add_subplot(111, projection='3d')
ax.plot(r[:,0], r[:,2], r[:,1], linewidth=3)
ax.dist = 12
ax.set_xlabel('x [ft]')
ax.set_ylabel('z [ft]')
ax.set_zlabel('y [ft]')
ax.set_zlim3d(0,7)
ax.set_xlim3d(0,60.5)
ax.xaxis.labelpad=30
ax.yaxis.labelpad=30
ax.zaxis.labelpad=15
#ax.view_init(0, -90)
```
## The Knuckleball
```python
def Fknuck(θ):
    '''The lateral acceleration on a knuckleball in m/s^2'''
from scipy.constants import g
Fom = 0.5*g*(np.sin(4.0*θ) - 0.25*np.sin(8.0*θ) + \
0.08*np.sin(12.0*θ) - 0.025*np.sin(16.0*θ))
return Fom
```
```python
θ = np.linspace(0,2*π,1000)
plt.plot(θ,Fknuck(θ))
plt.xlim(0,2*π);
plt.xlabel('θ [rad]')
plt.ylabel(r'$F/m\ [{\rm m/s^2}]$')
```
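As a rough sketch of how this force translates into movement, the cell below integrates the lateral deflection of a knuckleball over the 60.5 ft to the plate. The pitch speed, spin rate, and initial seam angle are assumed values, and drag and gravity are ignored to isolate the knuckleball effect.
```python
# lateral deflection of a knuckleball (assumed 65 mph pitch, 0.2 rev/s spin)
dt = 0.001                              # s
vx0 = convert(65, 'mph', 'm/s')         # forward speed, taken as constant
omega_k = 0.2*2*π                       # rad/s, assumed (very slow) spin rate
x_pos, z_pos, vz_k = 0.0, 0.0, 0.0
while x_pos <= convert(60.5, 'ft', 'm'):
    θ_now = omega_k*(x_pos/vx0)         # seam angle at this time
    vz_k += Fknuck(θ_now)*dt            # lateral acceleration -> lateral velocity
    x_pos += vx0*dt
    z_pos += vz_k*dt
print('lateral deflection at the plate: %.2f ft' % convert(z_pos, 'm', 'ft'))
```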
```python
```
*Source: impastasyndrome/Lambda-Resource-Static-Assets, `4-assets/BOOKS/Jupyter-Notebooks/Overflow/15_Baseball.ipynb` (MIT)*
# Casing design affects frac geometry and economics
_Ohm Devani_
NOTE: This project will focus on relative economic uplift between 2 casing designs in similar geology.
## Contents
1. Model inputs
2. Base case
- define per-frac endpoints (min rate defined by dfit initiation rate, max rate defined by hhp of fleet, water volume/frac)
- economic constants (pump charge, water cost
3. Economic sensitivities
1. Frac surface area (PKN) vs rate; 2 series for different casing sizes
- \$ cost/surface area vs rate vs (y-axis 2) surface area
- Talk about impact of other variables: stress shadow, additional height growth in absence of frac barriers, better proppant/fluid distribution between clusters
2. Frac pump time vs rate vs $ cost/stage (y2)
### Model parameters
0.7 psi/ft frac gradient
1.0 connate water SG
Horizontal well
Midpoint lateral TVD 10,000 ft
Scenario A:
7" 32# T-95 intermediate casing to 9000 ft (6.094" ID)
4 1/2" 11.6# P-110 liner to 20000 ft (4" ID)
Scenario B:
5 1/2" 17# P-110 casing to 20000 ft (4.892" ID)
60 bbl/ft
ignore proppant
7000 psi target surface pressure
1000 psi target NWB friction
### Pipe friction estimates
Hazen-Williams for pipe friction
SPE 146674 correlation and adjustment for friction reduction
```python
def hwfriclosspsi(cfactor,ratebpm,pipeidinch,lengthft):
    hyddiam = 4*(3.14159*(pipeidinch/2)**2)/(2*3.14159*pipeidinch/2)  # hydraulic diameter of a full circular pipe (reduces to the ID)
hwfriclosspsi = 0.2083*((100/cfactor)**1.852)*((ratebpm*42)**1.852)/(hyddiam**4.8655)*.433*lengthft/100
return hwfriclosspsi
def freduction(pipeidinch,ratebpm):
velocftpersec = (5.615*144/60)*ratebpm/(3.14159*(pipeidinch/2)**2)
fr = 6.2126*velocftpersec**0.5858
if fr<25:
fr=25
elif fr>85:
fr=85
return fr
```
```python
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
bpmrange=list(range(0, 150))
adjfac=0.1
#sceAfric = [hwfriclosspsi(120,i,6.094,9000)*(1-freduction(6.094,i)/100)*adjfac+hwfriclosspsi(120,i,4,11000)*(1-freduction(4,i)/100)*adjfac for i in bpmrange]
#sceBfric = [hwfriclosspsi(120,i,4.892,20000)*(1-freduction(4.892,i)/100)*adjfac for i in bpmrange]
sceAfric = [hwfriclosspsi(120,i,6.094,9000)*adjfac+hwfriclosspsi(120,i,4,11000)*adjfac for i in bpmrange]
sceBfric = [hwfriclosspsi(120,i,4.892,20000)*adjfac for i in bpmrange]
```
### BHTP & Net Fracture Pressure
Ozesen, Ahsen. ANALYSIS OF INSTANTANEOUS SHUT-IN PRESSURE IN SHALE OIL AND GAS RESERVOIRS. 2017. Penn State U, Masters Thesis. https://etda.libraries.psu.edu/files/final_submissions/15314.
```python
surfpress=7000
tvd=10000
nwbfriction=1000
fracgradient=0.70
def nfp(bhtp,nwbfric,fracgrad,tvd):
nfp = bhtp-nwbfric-fracgrad*tvd
return nfp
bhtpA = [surfpress-i+0.433*tvd for i in sceAfric]
bhtpB = [surfpress-i+0.433*tvd for i in sceBfric]
nfpA = [nfp(i,nwbfriction,fracgradient,tvd) for i in bhtpA]
nfpB = [nfp(i,nwbfriction,fracgradient,tvd) for i in bhtpB]
plt.plot(bpmrange, bhtpA, label = "4 1/2\" liner", linestyle="-.")
plt.plot(bpmrange, bhtpB, label = "5 1/2\" casing", linestyle=":")
plt.ylim(bottom=0)
plt.grid(color='k', linestyle='-', linewidth=0.25)
plt.xlabel('Pumping Rate [bpm]')
plt.ylabel('BHP [psi]')
plt.legend()
plt.show()
plt.plot(bpmrange, nfpA, label = "4 1/2\" liner", linestyle="-.")
plt.plot(bpmrange, nfpB, label = "5 1/2\" casing", linestyle=":")
plt.ylim(bottom=0)
plt.grid(color='k', linestyle='-', linewidth=0.25)
plt.xlabel('Pumping Rate [bpm]')
plt.ylabel('Pressure available to fracture [psi]')
plt.legend()
plt.show()
#print(nfpA)
#print(nfpB)
#print(nfpA.index(57.58718507850335))
#print(nfpB.index(21.654318992093977))
```
Under these constraints, the 5 1/2" casing design supports a maximum pump rate of 107 bpm, which is 25 bpm higher than the 4 1/2" liner design's maximum of 82 bpm.
Below, I calculate the time and cost savings from this added rate over a range of pumped volumes.
HHP (hp) = Surface Treating Pressure (psi) x Rate (bpm) / 40.8
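For example, at 7,000 psi surface treating pressure and 105 bpm, the requirement is 7,000 x 105 / 40.8 ≈ 18,000 hhp.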
Indicative pricing for a 45,000 hhp frac spread is typically $6,000-8,000/hr.
The plot below shows that a typical frac spread has more HHP than is required even at the highest rate.
https://www.danielep.com/news-observations/3-14-21-dep-industry-observations-frac-engine-supply-frac-newbuilds-sand-thoughts-and-q4-earnings/
https://markets.businessinsider.com/news/stocks/fracking-fluid-end-market-size-to-reach-us-678-million-in-2025-says-stratview-research-1029144444
```python
def hhp(stp,rate):
hhp=stp*rate/40.8
return hhp
plt.plot(bpmrange, [hhp(surfpress,i) for i in bpmrange], label = "Required HHP")
plt.ylim(bottom=0, top=50000)
plt.axhline(y=45000, color='r', linestyle='--', label = "Available HHP")
plt.grid(color='k', linestyle='-', linewidth=0.25)
plt.xlabel('Pumping Rate [bpm]')
plt.ylabel('HHP required')
plt.legend()
plt.show()
```
For simplicity, I'm only considering the hourly pumping cost and neglecting other costs like diesel.
The plot below shows pumping cost vs total volume for the two casing designs pumping at max rates which retain a few hundred psi of net frac pressure (80 vs 105).
```python
volumerange=list(range(400, 800))
pumpcost=8000 #$8000/hr
amaxrate=80
bmaxrate=105
acost=[pumpcost*1000*i/(amaxrate*60) for i in volumerange]
bcost=[pumpcost*1000*i/(bmaxrate*60) for i in volumerange]
fig, ax = plt.subplots()
ax.plot(volumerange,acost, label = "4 1/2\" liner", linestyle="-.")
ax.plot(volumerange,bcost, label = "5 1/2\" casing", linestyle=":")
ax.plot(volumerange, [a_i - b_i for a_i, b_i in zip(acost, bcost)], label = "Cost Savings", linestyle="-.", color='g')
plt.ylim(bottom=0)
ax.grid(color='k', linestyle='-', linewidth=0.25)
plt.xlabel('Total Volume [MBBL]')
plt.ylabel('Pumping Cost [$]')
fmt = '${x:,.0f}'
tick = mtick.StrMethodFormatter(fmt)
ax.yaxis.set_major_formatter(tick)
plt.legend()
plt.show()
```
### Economic sensitivities
5 1/2" max rate is approx. 30% greater than 4 1/2" liner's max rate.
Fracture surface area as a function of rate is a convolved problem requiring a simulator; I suppose a range of 0%-15% oil IP30 uplift for the range of 105-80bpm, respectively.
Economic variables:
- 75% NRI
- $50 flat net oil price
- Harmonic decline
- 800 MBBL water pumped
Production for 4 1/2" scenario:
- 800 bopd IP (24.3 MBO/mo)
- 800 MBO EUR over 30 years
- Ignoring water, gas
Production for 5 1/2" scenario:
- Variable IP
- 800 MBO EUR over 30 years
- Ignoring water, gas
```python
import numpy as np
#from sympy.solvers import solve
#from sympy import Symbol
from sympy import log
#def solvedi(np,qi,t):
# di = Symbol('di',real=True)
# solvedi=solve(np/log(qi/(qi/(1+di*t)))-(qi/di),di)
# return solvedi
#adeclinerate_monthly = solvedi(800000,36500,360)
#print(N(adeclinerate_monthly[0]))
#print(N(solvedi(1000000,30000,600)))
ip=800
eur=800000
months=360
def brutedi(np,qi,t):
calcerror=-1
di=0.01
while calcerror<0:
calcerror=np-((qi/di)*log(qi/(qi/(1+di*t))))
if calcerror>0:
break
else:
di=di+0.0001
#print(di)
return di
#adeclinerate_monthly = brutedi(800000,36500,360)
ipuplift=np.linspace(0,0.15,num=16)
col2=(ipuplift+1)*ip
col3=np.zeros(col2.size)
i=0
while i<col2.size:
col3[i]=brutedi(eur,col2[i]*30.4,months)
i+=1
#print(col2)
#print(col3)
```
```python
def harmonictotal(qi,di,t):
harmonictotal=((qi/di)*log(qi/(qi/(1+di*t))))
return harmonictotal
def pv30yr(netoilprice,nri,ip,initd,discrate):
cumarray=np.zeros(360)
casharray=np.zeros(360)
ipmonth=ip*30.4
discratemonth=discrate/12
j=0
while j<casharray.size:
if j==0:
cumarray[j]=harmonictotal(ipmonth,initd,1)
casharray[j]=cumarray[j]*netoilprice*nri
else:
cumarray[j]=harmonictotal(ipmonth,initd,j+1)
casharray[j]=(cumarray[j]-cumarray[j-1])*netoilprice*nri
j+=1
#print(cumarray)
#print(casharray)
#print(cumarray[cumarray.size-1])
pv30yr=np.npv(discratemonth,casharray)
return pv30yr
```
```python
netprice=50
netri=0.75
discountrate=0.10
totalpumpedvolume=800000
#print(pv30yr(netprice,netri,920,.137,0))
col4=np.zeros(col2.size)
colincpv=np.zeros(col2.size)
i=0
while i<col4.size:
col4[i]=pv30yr(netprice,netri,col2[i],col3[i],discountrate)
i+=1
i=0
while i<colincpv.size:
colincpv[i]=col4[i]-col4[0]
i+=1
#print(colincpv)
```
C:\Users\User\Anaconda3\lib\site-packages\ipykernel_launcher.py:22: DeprecationWarning: numpy.npv is deprecated and will be removed from NumPy 1.20. Use numpy_financial.npv instead (https://pypi.org/project/numpy-financial/).
```python
pumpratearray=np.linspace(105,80,num=16)
pumpsavings=np.zeros(pumpratearray.size)
i=0
while i<pumpsavings.size:
if pumpratearray[i] == amaxrate:
pumpsavings[i]=0
else:
pumpsavings[i]=pumpcost*(-(totalpumpedvolume/(pumpratearray[i]*60))+(totalpumpedvolume/(amaxrate*60)))
i+=1
#print(pumpsavings)
```
[317460.31746032 301075.2688172 284153.00546448 266666.66666667
248587.57062147 229885.05747126 210526.31578947 190476.19047619
169696.96969697 148148.14814815 125786.16352201 102564.1025641
78431.37254902 53333.33333333 27210.88435374 0. ]
```python
farray = np.column_stack((ipuplift, col2)) #ip bopd
farray = np.column_stack((farray, col3)) #di
farray = np.column_stack((farray, colincpv))
farray = np.column_stack((farray, pumpratearray))
finalarray = np.column_stack((farray, pumpsavings))
#print(finalarray[:,5])
finaldf = pd.DataFrame(finalarray, columns = ['IP_uplift','IP_BOPD','Di_monthly','Acceration_PV','Pump_Rate','Pump_Savings'])
finaldf['Total_PV'] = finaldf['Acceration_PV'] + finaldf['Pump_Savings']
#print(finaldf)
```
```python
fig, ax = plt.subplots()
ax.plot(finaldf['Pump_Rate'],finaldf['Total_PV'], label = "5 1/2\" casing", linestyle=":", color='g')
plt.ylim(bottom=0, top=500000)
ax.grid(color='k', linestyle='-', linewidth=0.25)
plt.xlabel('Pump Rate [bpm]')
plt.ylabel('Incremental PV [$]')
fmt = '${x:,.0f}'
tick = mtick.StrMethodFormatter(fmt)
ax.yaxis.set_major_formatter(tick)
plt.legend()
plt.show()
```
```python
```
*Source: energydevohm/completion-cost-study, `comp-cost.ipynb` (MIT)*
# Exponential signals
In this notebook we will examine exponential signals of the form
\begin{equation}
x(t) = A \ \mathrm{e}^{a \ t}
\end{equation}
We are interested in 3 cases:
1. $A \ \in \ \mathbb{R}$ and $a \ \in \ \mathbb{R}$ - the real exponentials.
2. $A \ \in \ \mathbb{C}$ and $a \ \in \ \mathbb{C}, \ \mathrm{Re}\left\{a\right\} = 0$ - the complex exponentials.
3. $A \ \in \ \mathbb{C}$ and $a \ \in \ \mathbb{C}, \ \mathrm{Re}\left\{a\right\} < 0$
```python
# import the required libraries
import numpy as np
import matplotlib.pyplot as plt
```
## 1. $A \ \in \ \mathbb{R}$ and $a \ \in \ \mathbb{R}$ - the real exponentials.
```python
# Signal
t = np.linspace(-1, 5, 1000) # time vector
A = 1
a = -5
xt = A*np.exp(a*t)
# Figure
plt.figure()
plt.title('Exponencial real')
plt.plot(t, xt, '-b', linewidth = 2, label = 'Exp. Real')
plt.legend(loc = 'best')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.tight_layout()
plt.show()
```
## 2. $A \ \in \ \mathbb{C}$ and $a \ \in \ \mathbb{C}, \ \mathrm{Re}\left\{a\right\} = 0$ - the complex exponentials.
This case is of particular interest to us. It is related to the signal
\begin{equation}
x(t) = A \ \mathrm{cos}(2 \pi f t - \phi) = A \ \mathrm{cos}(\omega t - \phi).
\end{equation}
Euler's relation tells us that:
\begin{equation}
\mathrm{cos}(\omega t - \phi) = \mathrm{Re}\left\{\mathrm{e}^{\mathrm{j}(\omega t-\phi)} \right\} .
\end{equation}
Thus, the signal $x(t)$ becomes:
\begin{equation}
x(t) = A \ \mathrm{cos}(\omega t - \phi) = A\mathrm{Re}\left\{\mathrm{e}^{\mathrm{j}(\omega t-\phi)} \right\} = \mathrm{Re}\left\{A \mathrm{e}^{\mathrm{j}(\omega t-\phi)} \right\}
\end{equation}
\begin{equation}
x(t) = \mathrm{Re}\left\{A\mathrm{e}^{-\mathrm{j}\phi} \ \mathrm{e}^{\mathrm{j}\omega t} \right\}
\end{equation}
where $\tilde{A} = A\mathrm{e}^{-\mathrm{j}\phi}$ is the complex amplitude of the cosine and carries the magnitude, $A$, and phase, $\phi$, information.
```python
# Signal
t = np.linspace(-2, 2, 1000) # time vector
A = 1.5
phi = 0
A = A*np.exp(-1j*phi)
f=1
w = 2*np.pi*f
a = 1j*w
xt = np.real(A*np.exp(a*t))
# Figure
plt.figure()
plt.title('Exponencial complexa')
plt.plot(t, xt, '-b', linewidth = 2, label = 'Exp. complexa')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.ylim((-2, 2))
plt.xlim((t[0], t[-1]))
plt.tight_layout()
plt.show()
```
## 3. $A \ \in \ \mathbb{C}$ and $a \ \in \ \mathbb{C}, \ \mathrm{Re}\left\{a\right\} < 0$
In this case, with $a \ \in \ \mathbb{C}$ and $\mathrm{Re}\left\{a\right\} < 0$, we get an oscillatory signal whose amplitude decays over time. This is typical of a mass-spring-damper system.
```python
# Signal
t = np.linspace(0, 5, 1000) # time vector
A = 1.5
phi = 0.3
A = A*np.exp(-1j*phi)
f=1
w = 2*np.pi*f
a = -0.5+1j*w
xt = np.real(A*np.exp(a*t))
# Figure
plt.figure()
plt.title('Exponencial complexa vom decaimento')
plt.plot(t, xt, '-b', linewidth = 2, label = 'Exp. complexa com decaimento')
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Tempo [s]')
plt.ylabel('Amplitude [-]')
plt.ylim((-2, 2))
plt.xlim((0, t[-1]))
plt.tight_layout()
plt.show()
```
## Complex exponentials in discrete-time signals
We now return to discrete complex exponentials. Recall that for a continuous-time signal, $x(t)=\mathrm{e}^{\mathrm{j} \omega t}$, the rate of oscillation increases as $\omega$ increases. For discrete signals of the form
\begin{equation}
x[n] = \mathrm{e}^{\mathrm{j} \omega n}
\end{equation}
we will see the rate of oscillation increase for $0 \leq \omega < \pi$ and decrease for $\pi \leq \omega \leq 2 \pi$. This will relate later to the sampling of signals. We will see that to sample a signal correctly (i.e., to represent its frequency content well), we need a sampling rate of at least twice the highest frequency contained in the signal being sampled.
```python
omega = [0, np.pi/8, np.pi/4, np.pi/2, np.pi,
         3*np.pi/2, 7*np.pi/4, 15*np.pi/8, 2*np.pi]#np.linspace(0, 2*np.pi, 9) # angular frequencies
n = np.arange(50) # samples
plt.figure(figsize=(15,10))
for jw,w in enumerate(omega):
xn = np.real(np.exp(1j*w*n))
plt.subplot(3,3,jw+1)
plt.stem(n, xn, '-b', label = r'$\omega$ = {:.3} [rad/s]'.format(float(w)), basefmt=" ", use_line_collection= True)
plt.legend(loc = 'upper right')
plt.grid(linestyle = '--', which='both')
plt.xlabel('Amostra [-]')
plt.ylabel('Amplitude [-]')
plt.ylim((-1.5, 1.5))
plt.tight_layout()
plt.show()
```
*Source: RicardoGMSilveira/codes_proc_de_sinais, `Aula 6 - Sinais exponenciais/sinais exponenciais.ipynb` (CC0-1.0)*
# Implementation details: deriving expected moment dynamics
$$
\def\n{\mathbf{n}}
\def\x{\mathbf{x}}
\def\N{\mathbb{\mathbb{N}}}
\def\X{\mathbb{X}}
\def\NX{\mathbb{\N_0^\X}}
\def\C{\mathcal{C}}
\def\Jc{\mathcal{J}_c}
\def\DM{\Delta M_{c,j}}
\newcommand\diff{\mathop{}\!\mathrm{d}}
\def\Xc{\mathbf{X}_c}
\newcommand{\muset}[1]{\dot{\{}#1\dot{\}}}
$$
This notebook walks through what happens inside `compute_moment_equations()`.
We restate the algorithm outline, adding code snippets for each step.
This should help to track down issues when, unavoidably, something fails inside `compute_moment_equations()`.
```python
# initialize sympy printing (for latex output)
from sympy import init_printing
init_printing()
# import functions and classes for compartment models
from compartor import *
from compartor.compartments import ito, decomposeMomentsPolynomial, getCompartments, getDeltaM, subsDeltaM, get_dfMdt_contrib
```
We only need one transition class.
We use "coagulation" from the coagulation-fragmentation example
```python
D = 1 # number of species
x = Content('x')
y = Content('y')
transition_C = Transition(Compartment(x) + Compartment(y), Compartment(x + y), name = 'C')
k_C = Constant('k_C')
g_C = 1
Coagulation = TransitionClass(transition_C, k_C, g_C)
transition_classes = [Coagulation]
display_transition_classes(transition_classes)
```
$\displaystyle \begin{align} \left[x\right] + \left[y\right]&\overset{h_{C}}{\longrightarrow}\left[x + y\right] && h_{C} = \frac{k_{C} \left(n{\left(y \right)} - \delta_{x y}\right) n{\left(x \right)}}{\delta_{x y} + 1} \end{align}$
$$
\def\n{\mathbf{n}}
\def\x{\mathbf{x}}
\def\N{\mathbb{\mathbb{N}}}
\def\X{\mathbb{X}}
\def\NX{\mathbb{\N_0^\X}}
\def\C{\mathcal{C}}
\def\Jc{\mathcal{J}_c}
\def\DM{\Delta M_{c,j}}
\newcommand\diff{\mathop{}\!\mathrm{d}}
\def\Xc{\mathbf{X}_c}
\newcommand{\muset}[1]{\dot{\{}#1\dot{\}}}
$$
For a compartment population $\n \in \NX$ evolving stochastically according to stoichiometric equations from transition classes $\C$, we want to find an expression for
$$
\frac{\diff}{\diff t}\left< f(M^\gamma, M^{\gamma'}, \ldots) \right>
$$
in terms of expectations of population moments $M^\alpha, M^{\beta}, \ldots$
```python
fM = Moment(0)**2
display(fM)
```
### (1)
From the definition of the compartment dynamics, we have
$$
\diff M^\gamma = \sum_{c \in \C} \sum_{j \in \Jc} \DM^\gamma \diff R_{c,j}
$$
We apply Ito's rule to derive
$$
\diff f(M^\gamma, M^{\gamma'}, \ldots) = \sum_{c \in \C} \sum_{j \in \Jc}
\left(
f(M^\gamma + \DM^\gamma, M^{\gamma'} + \DM^{\gamma'}, \ldots)
- f(M^\gamma, M^{\gamma'}, \ldots)
\right) \diff R_{c,j}
$$
Assume, that $f(M^\gamma, M^{\gamma'}, \ldots)$ is a polynomial in $M^{\gamma^i}$ with $\gamma^i \in \N_0^D$.
Then $\diff f(M^\gamma, M^{\gamma'}, \ldots)$ is a polynomial in $M^{\gamma^k}, \DM^{\gamma^l}$ with $\gamma^k, \gamma^l \in \N_0^D$, that is,
$$
\diff f(M^\gamma, M^{\gamma'}, \ldots) = \sum_{c \in \C} \sum_{j \in \Jc}
\sum_{q=1}^{n_q} Q_q(M^{\gamma^k}, \DM^{\gamma^l})
\diff R_{c,j}
$$
where $Q_q(M^{\gamma^k}, \DM^{\gamma^l})$ are monomials in $M^{\gamma^k}, \DM^{\gamma^l}$.
```python
dfM = ito(fM)
dfM
```
### (2)
Let's write $Q_q(M^{\gamma^k}, \DM^{\gamma^l})$ as
$$
Q_q(M^{\gamma^k}, \DM^{\gamma^l}) = k_q \cdot \Pi M^{\gamma^k} \cdot \Pi \DM^{\gamma^l}
$$
where $k_q$ is a constant,
$\Pi M^{\gamma^k}$ is a product of powers of $M^{\gamma^k}$, and
$\Pi \DM^{\gamma^l}$ is a product of powers of $\DM^{\gamma^l}$.
Analogous to the derivation in SI Appendix S.3, we arrive at the expected moment dynamics
$$
\frac{\diff\left< f(M^\gamma, M^{\gamma'}, \ldots) \right>}{\diff t} =
\sum_{c \in \C} \sum_{q=1}^{n_q} \left<
\sum_{j \in \Jc} k_q \cdot \Pi M^{\gamma^k} \cdot \Pi \DM^{\gamma^l} \cdot h_{c,j}(\n)
\right>
$$
```python
monomials = decomposeMomentsPolynomial(dfM)
monomials
```
### (3)
Analogous to SI Appendix S.4, the contribution of class $c$, monomial $q$ to the expected dynamics of $f(M^\gamma, M^{\gamma'}, \ldots)$ is
$$
\begin{align}
\frac{\diff\left< f(M^\gamma, M^{\gamma'}, \ldots) \right>}{\diff t}
&= \left<
{\large\sum_{j \in \Jc}} k_q \cdot \Pi M^{\gamma^k} \cdot \Pi \DM^{\gamma^l} \cdot h_{c,j}(\n)
\right>
\\
&= \left<
{\large\sum_{\Xc}} w(\n; \Xc) \cdot k_c \cdot k_q \cdot \Pi M^{\gamma^k} \cdot g_c(\Xc) \cdot
\left<
\Pi \DM^{\gamma^l} \;\big|\; \Xc
\right>
\right>
\end{align}
$$
```python
c = 0 # take the first transition class
q = 1 # ... and the second monomial
tc = transition_classes[c]
transition, k_c, g_c, pi_c = tc.transition, tc.k, tc.g, tc.pi
(k_q, pM, pDM) = monomials[q]
```
First we compute the expression
$$
l(\n; \Xc) = k_c \cdot k_q \cdot \Pi(M^{\gamma^k}) \cdot g_c(\Xc) \cdot
\left<
\Pi \DM^{\gamma^l} \;\big|\; \Xc
\right>
$$
We start by computing the $\DM^{\gamma^l}$ from reactants and products of the transition ...
```python
reactants = getCompartments(transition.lhs)
products = getCompartments(transition.rhs)
DM_cj = getDeltaM(reactants, products, D)
DM_cj
```
... and then substituting this expression into every occurrence of $\DM^\gamma$ in `pDM` (with the $\gamma$ in `DM_cj` set appropriately).
```python
pDMcj = subsDeltaM(pDM, DM_cj)
print('pDM = ')
display(pDM)
print('pDMcj = ')
display(pDMcj)
```
Then we compute the conditional expectation of the result.
```python
cexp = pi_c.conditional_expectation(pDMcj)
cexp
```
Finally we multiply the conditional expectation with the rest of the terms:
* $k_c$, and $g_c(\Xc)$ from the specification of `transition[c]`, and
* $k_q$, and $\Pi(M^{\gamma^k})$ from `monomials[q]`.
```python
l_n_Xc = k_c * k_q * pM * g_c * cexp
l_n_Xc
```
### (4)
Let's consider the expression $A = \sum_{\Xc} w(\n; \Xc) \cdot l(\n; \Xc)$ for the following cases of reactant compartments:
$\Xc = \emptyset$,
$\Xc = \muset{\x}$, and
$\Xc = \muset{\x, \x'}$.
(1) $\Xc = \emptyset$:
Then $w(\n; \Xc) = 1$, and
$$
A = l(\n)
$$
(2) $\Xc = \muset{\x}$:
Then $w(\n; \Xc) = \n(\x)$, and
$$
A = \sum_{\x \in \X} \n(\x) \cdot l(\n; \muset{\x})
$$
(3) $\Xc = \muset{\x, \x'}$:
Then
$$
w(\n; \Xc) = \frac{\n(\x)\cdot(\n(\x')-\delta_{\x,\x'})}
{1+\delta_{\x,\x'}},
$$
and
$$
\begin{align}
A &= \sum_{\x \in \X} \sum_{\x' \in \X}
\frac{1}{2-\delta_{\x,\x'}}
\cdot w(\n; \Xc) \cdot l(\n; \muset{\x, \x'}) \\
&= \sum_{\x \in \X} \sum_{\x' \in \X}
\frac{\n(\x)\cdot(\n(\x')-\delta_{\x,\x'})}{2}
\cdot l(\n; \muset{\x, \x'}) \\
&= \sum_{\x \in \X} \sum_{\x' \in \X}
\n(\x)\cdot\n(\x') \cdot \frac{1}{2}l(\n; \muset{\x, \x'})
\: - \:
\sum_{\x \in \X}
\n(\x) \cdot \frac{1}{2}l(\n; \muset{\x, \x})
\end{align}
$$
### (5)
Now let
$$
l(\n; \Xc) = k_c \cdot k_q \cdot \Pi(M^{\gamma^k}) \cdot g_c(\Xc) \cdot
\left<
\Pi \DM^{\gamma^l} \;\big|\; \Xc
\right>
$$
Plugging in the concrete $\gamma^l$ and expanding, $l(\n; \Xc)$ is a polynomial in $\Xc$.
Monomials are of the form $k \x^\alpha$ or $k \x^\alpha \x'^\beta$ with $\alpha, \beta \in \N_0^D$.
(Note that occurrences of $\Pi M^{\gamma^k}$ are part of the constants $k$.)
Consider again the different cases of reactant compartments $\Xc$:
(1) $\Xc = \emptyset$:
$$
\frac{\diff\left< f(M^\gamma, M^{\gamma'}, \ldots) \right>}{\diff t}
= \left<l(\n)\right>
$$
(2) $\Xc = \muset{\x}$:
$$
\frac{\diff\left< f(M^\gamma, M^{\gamma'}, \ldots) \right>}{\diff t}
= \left<R(l(\n; \muset{\x}))\right>
$$
where $R$ replaces all $k \x^\alpha$ by $k M^\alpha$.
(3) $\Xc = \muset{\x, \x'}$:
$$
\frac{\diff\left< f(M^\gamma, M^{\gamma'}, \ldots) \right>}{\diff t}
= \frac{1}{2}\left<R'(l(\n; \muset{\x, \x'}))\right>
\: - \:
\frac{1}{2}\left<R(l(\n; \muset{\x, \x}))\right>
$$
where $R'$ replaces all $k \x^\alpha \x'^\beta$ by $k M^\alpha M^\beta$,
and again $R$ replaces all $k \x^\alpha$ by $k M^\alpha$.
All this (the case distinction and the replacements) is done in the function `get_dfMdt_contrib()`.
```python
dfMdt = get_dfMdt_contrib(reactants, l_n_Xc, D)
dfMdt
```
### (6)
Finally, sum over contributions from all $c$, $q$ for the total
$$
\frac{\diff\left< f(M^\gamma, M^{\gamma'}, \ldots) \right>}{\diff t}
$$
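A minimal sketch of this final accumulation, assuming the helper functions behave exactly as in the single-contribution example above (this is an illustration, not the `compartor` implementation itself):
```python
# Sum the contributions of every transition class c and every monomial q.
total_dfMdt = 0
for tc in transition_classes:
    transition, k_c, g_c, pi_c = tc.transition, tc.k, tc.g, tc.pi
    reactants = getCompartments(transition.lhs)
    products = getCompartments(transition.rhs)
    DM_cj = getDeltaM(reactants, products, D)
    for (k_q, pM, pDM) in monomials:
        pDMcj = subsDeltaM(pDM, DM_cj)                 # substitute the Delta-M expressions
        cexp = pi_c.conditional_expectation(pDMcj)     # conditional expectation given Xc
        l_n_Xc = k_c * k_q * pM * g_c * cexp
        total_dfMdt = total_dfMdt + get_dfMdt_contrib(reactants, l_n_Xc, D)
total_dfMdt
```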
| 81bc79131d91c87554f1d99056a9a66de96d9141 | 33,386 | ipynb | Jupyter Notebook | (EXTRA) Implementation details.ipynb | zechnerlab/Compartor | 93c1b0752b6fdfffddd4f1ac6b9631729eae9a95 | [
"BSD-2-Clause"
] | 1 | 2021-02-10T15:56:02.000Z | 2021-02-10T15:56:02.000Z | (EXTRA) Implementation details.ipynb | zechnerlab/Compartor | 93c1b0752b6fdfffddd4f1ac6b9631729eae9a95 | [
"BSD-2-Clause"
] | null | null | null | (EXTRA) Implementation details.ipynb | zechnerlab/Compartor | 93c1b0752b6fdfffddd4f1ac6b9631729eae9a95 | [
"BSD-2-Clause"
] | 1 | 2021-12-05T11:24:22.000Z | 2021-12-05T11:24:22.000Z | 55.092409 | 4,224 | 0.70236 | true | 3,178 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.740174 | 0.587037 | __label__eng_Latn | 0.683596 | 0.202213 |
# Linear regression
Linear regression is the simplest linear method for modelling the relationship between the independent variables and the dependent one. It estimates this relationship by finding a line that is as close as possible to all the data points.
\begin{equation}
y=ax+b
\end{equation}
#### Boston housing example
[Boston housing](https://www.kaggle.com/c/boston-housing) is a very simple dataset built from some statistical data of the houses of Boston suburbs and the median prices (in $1000s) of owner-occupied homes for each zone.
```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
# Loading the dataset with pandas
boston_data = load_boston()
boston_housing_df = pd.DataFrame(boston_data.data,columns=boston_data.feature_names)
boston_housing_df["MEDV"] = boston_data.target
boston_housing_df.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>CRIM</th>
<th>ZN</th>
<th>INDUS</th>
<th>CHAS</th>
<th>NOX</th>
<th>RM</th>
<th>AGE</th>
<th>DIS</th>
<th>RAD</th>
<th>TAX</th>
<th>PTRATIO</th>
<th>B</th>
<th>LSTAT</th>
<th>MEDV</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>0.00632</td>
<td>18.0</td>
<td>2.31</td>
<td>0.0</td>
<td>0.538</td>
<td>6.575</td>
<td>65.2</td>
<td>4.0900</td>
<td>1.0</td>
<td>296.0</td>
<td>15.3</td>
<td>396.90</td>
<td>4.98</td>
<td>24.0</td>
</tr>
<tr>
<th>1</th>
<td>0.02731</td>
<td>0.0</td>
<td>7.07</td>
<td>0.0</td>
<td>0.469</td>
<td>6.421</td>
<td>78.9</td>
<td>4.9671</td>
<td>2.0</td>
<td>242.0</td>
<td>17.8</td>
<td>396.90</td>
<td>9.14</td>
<td>21.6</td>
</tr>
<tr>
<th>2</th>
<td>0.02729</td>
<td>0.0</td>
<td>7.07</td>
<td>0.0</td>
<td>0.469</td>
<td>7.185</td>
<td>61.1</td>
<td>4.9671</td>
<td>2.0</td>
<td>242.0</td>
<td>17.8</td>
<td>392.83</td>
<td>4.03</td>
<td>34.7</td>
</tr>
<tr>
<th>3</th>
<td>0.03237</td>
<td>0.0</td>
<td>2.18</td>
<td>0.0</td>
<td>0.458</td>
<td>6.998</td>
<td>45.8</td>
<td>6.0622</td>
<td>3.0</td>
<td>222.0</td>
<td>18.7</td>
<td>394.63</td>
<td>2.94</td>
<td>33.4</td>
</tr>
<tr>
<th>4</th>
<td>0.06905</td>
<td>0.0</td>
<td>2.18</td>
<td>0.0</td>
<td>0.458</td>
<td>7.147</td>
<td>54.2</td>
<td>6.0622</td>
<td>3.0</td>
<td>222.0</td>
<td>18.7</td>
<td>396.90</td>
<td>5.33</td>
<td>36.2</td>
</tr>
</tbody>
</table>
</div>
There are several features in the dataset:
* **crim** per capita crime rate by town.
* **zn** proportion of residential land zoned for lots over 25,000 sq.ft.
* **indus** proportion of non-retail business acres per town.
* **chas** Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
* **nox** nitrogen oxides concentration (parts per 10 million).
* **rm** average number of rooms per dwelling.
* **age** proportion of owner-occupied units built prior to 1940.
* **dis** weighted mean of distances to five Boston employment centres.
* **rad** index of accessibility to radial highways.
* **tax** full-value property-tax rate per \$10000
* **ptratio** pupil-teacher ratio by town.
* **black** 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
* **lstat** lower status of the population (percent).
* **medv** median value of owner-occupied homes in \$1000s.
The target variable $y$ is called *medv*.
#### 2D linear regression
For simplicity, we'll consider a 2D example and try to predict *medv* (median value) given *crim* (per capita crime rate). Let's take a look at the data first.
```python
boston_housing_df.plot(x="CRIM", y="MEDV", kind="scatter")
from sklearn.linear_model import LinearRegression
# Create an instance of LinearRegression and find the coeffs
linear_regression = LinearRegression()
linear_regression.fit(X=boston_housing_df[["CRIM"]],
y=boston_housing_df["MEDV"])
linear_regression.coef_, linear_regression.intercept_
```
We can try to draw the linear regression $y$ using matplotlib. In the code below we take the MEDV features as $y$ and CRIM as $x$.
```python
# Create a polynomial to be drawn on the plot
coefficients = np.append(linear_regression.coef_,
linear_regression.intercept_)
polynomial = np.poly1d(coefficients)
# Calculate the values for a selected range
x_values = np.linspace(0, boston_housing_df["CRIM"].max())
y_values = polynomial(x_values)
# Display a scatter plot: crim vs medv and regressed line
boston_housing_df.plot(x="CRIM", y="MEDV", kind="scatter")
plt.plot(x_values, y_values, color="red", linestyle="dashed")
from sklearn.metrics import mean_squared_error
y_pred = linear_regression.predict(boston_housing_df[["CRIM"]])
y_true = boston_housing_df["MEDV"]
mean_squared_error(y_true, y_pred)
```
### Multidimensional linear regression
An intuitive selection of a possible predictor did not allow us to regress the median value properly. For low crime rates the fit looks reasonable, but for really high crime rates the predicted value becomes negative.
For the purposes of selecting predictors, we may consider the variables which have the highest correlation with the target variable.
```python
# Calculate the Pearson correlation coefficients
boston_housing_df.corr()["MEDV"]
```
CRIM -0.388305
ZN 0.360445
INDUS -0.483725
CHAS 0.175260
NOX -0.427321
RM 0.695360
AGE -0.376955
DIS 0.249929
RAD -0.381626
TAX -0.468536
PTRATIO -0.507787
B 0.333461
LSTAT -0.737663
MEDV 1.000000
Name: MEDV, dtype: float64
The absolute value of the correlation coefficient is highest for *rm* (0.695360) and *lstat* (-0.737663). That means these two variables are possibly the best predictors for the target variable, and we can consider them in a 3D regression.
```python
from mpl_toolkits.mplot3d import Axes3D
# Display 3D scatter: rm, lstat vs medv
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(boston_housing_df["RM"], boston_housing_df["LSTAT"],
boston_housing_df["MEDV"], c="blue")
plt.show()
```
Let's find the coefficiencies and build a linear regression instance.
```python
linear_regression = LinearRegression()
linear_regression.fit(X=boston_housing_df[["RM", "LSTAT"]],
y=boston_housing_df["MEDV"])
linear_regression.coef_, linear_regression.intercept_
```
(array([ 5.09478798, -0.64235833]), -1.358272811874489)
For the three-dimensional case we need three coefficients (two slopes and an intercept), which are used to compute the $z$ values.
```python
# Calculate coefficients of 2d polynomial
coefficients = np.append(linear_regression.coef_,
linear_regression.intercept_)
# Calculate the values for a selected range
x = np.linspace(0, boston_housing_df["RM"].max())
y = np.linspace(0, boston_housing_df["LSTAT"].max())
x_values, y_values = np.meshgrid(x, y)
z_values = coefficients[0] * x_values + coefficients[1] * y_values + coefficients[2]
# Display 3D scatter: rm, lstat vs medv and regressed line
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(boston_housing_df["RM"], boston_housing_df["LSTAT"],
boston_housing_df["MEDV"], c="blue")
ax.plot_surface(x_values, y_values, z_values, linewidth=0.2,
color="red", alpha=0.5)
angle=30
ax.view_init(30, angle)
plt.show()
```
The prediction of linear regression can be done with the ``predict`` method. The cost function of a linear regression is the mean squared error.
```python
y_pred = linear_regression.predict(boston_housing_df[["RM", "LSTAT"]])
y_true = boston_housing_df["MEDV"]
mean_squared_error(y_true, y_pred)
```
30.51246877729947
# Linear regression under the hood
To understand the method in more detail, we use a simple example with human heights and weights.
```python
heights = np.array([188, 181, 197, 168, 167, 187, 178, 194, 140, 176, 168, 192, 173, 142, 176]).reshape(-1, 1)
weights = np.array([141, 106, 149, 59, 79, 136, 65, 136, 52, 87, 115, 140, 82, 69, 121]).reshape(-1, 1)
```
For comparison, we use the linear regression available in sklearn library.
```python
from sklearn import linear_model
import numpy as np
regr = linear_model.LinearRegression()
regr.fit(heights, weights)
```
LinearRegression()
As in the previous example, we can plot the slope.
```python
plt.scatter(heights, weights,color='g')
plt.plot(heights, regr.predict(heights),color='k')
plt.show()
```
The coefficients are calculated as:
```python
print(regr.coef_)
print(regr.intercept_)
```
[[1.61814247]]
[-180.92401772]
If we calculate the $y$ value for $x=150$:
```python
plt.scatter(heights, weights,color='g')
plt.plot(heights, regr.predict(heights),color='k')
x= 150
y = 61.797
plt.scatter(x, y,color='r')
plt.show()
```
## Linear regression from scratch
And now we can use the data set to implement the linear regression from scratch.
```python
x = heights.reshape(15,1)
y = weights.reshape(15,1)
```
We should add the bias column before doing the calculation.
```python
x = np.append(x, np.ones((15,1)), axis = 1)
```
The equation for calculating the weights is
\begin{equation}
(X^{T}X)^{-1}X^{T}y
\end{equation}
```python
w = np.dot(np.linalg.inv(np.dot(np.transpose(x),x)),np.dot(np.transpose(x),y))
```
The weights we obtain as output are exactly the same as those from the sklearn implementation of linear regression.
```python
w
```
array([[ 1.61814247],
[-180.92401772]])
To make the prediction a bit easier, we can implement a short function where the arguments are:
- inputs - feature $x$ of objects,
- w - weights,
- b - bias.
The calculation is simple and uses the linear regression equation $y=wx+b$.
```python
def reg_predict(inputs, w, b):
results = []
for inp in inputs:
results.append(inp*w+b)
return results
```
Finally, we can plot the predicted values using the function above.
```python
plt.scatter(heights.flatten(), weights.flatten(),color='g')
plt.plot(heights.flatten(), reg_predict(heights.flatten(), w[0], w[1]) ,color='k')
x1 = 150
y = reg_predict([x1], w[0], w[1])[0]
plt.scatter(x1, y,color='r')
plt.show()
```
```python
reg_predict(x.flatten(), w[1], w[0])
```
[array([-34012.0971882]),
array([-179.30587525]),
array([-32745.62906419]),
array([-179.30587525]),
array([-35640.41334765]),
array([-179.30587525]),
array([-30393.61683388]),
array([-179.30587525]),
array([-30212.69281616]),
array([-179.30587525]),
array([-33831.17317049]),
array([-179.30587525]),
array([-32202.85701104]),
array([-179.30587525]),
array([-35097.6412945]),
array([-179.30587525]),
array([-25327.74433782]),
array([-179.30587525]),
array([-31841.00897561]),
array([-179.30587525]),
array([-30393.61683388]),
array([-179.30587525]),
array([-34735.79325907]),
array([-179.30587525]),
array([-31298.23692246]),
array([-179.30587525]),
array([-25689.59237325]),
array([-179.30587525]),
array([-31841.00897561]),
array([-179.30587525])]
# Ridge regression
Ridge regression use a regularizer and the equation is a bit more complex compare to the regular linear regression:
\begin{equation}
\sum_{i=1}^{M}\left(y_{i}-\sum_{j=0}^{p}w_{j} x_{ij}\right)^{2} + \lambda\sum_{j=0}^{p}w^{2}_{j}.
\end{equation}
We have an additional parameter $\lambda$, which is called $\alpha$ in sklearn. It is the regularization strength; together with the $w^{2}_{j}$ terms it forms the L2 (ridge) penalty.
```python
from sklearn.linear_model import Ridge
alpha = 0.1
heights1 = np.asmatrix(np.c_[np.ones((15,1)), heights])
ridge_regression = Ridge(alpha=alpha, fit_intercept=False)
ridge_regression.fit(X=heights1,
y=weights)
ridge_regression.coef_, ridge_regression.intercept_
```
(array([[-101.72397081, 1.16978757]]), 0.0)
Similar to the regular linear regression, the slope can be drawn as below.
```python
plt.scatter(heights, weights,color='g')
plt.plot(heights, ridge_regression.predict(heights1),color='k')
x = 150
y = reg_predict([150], ridge_regression.coef_[0][1], ridge_regression.coef_[0][0])[0]
plt.scatter(x, y,color='r')
y = ridge_regression.coef_[0][1] * 150 + ridge_regression.coef_[0][0]
plt.show()
```
For $x_{1}=150$ the result is a bit different compared to the regression without the regularization.
```python
y = ridge_regression.coef_[0][1] * 150 + ridge_regression.coef_[0][0]
print(y)
```
73.74416542365165
We can write the closed-form solution in matrix form (with $I$ the identity matrix) as:
\begin{equation}
(X^{T}X+\alpha I)^{-1}X^{T}y
\end{equation}
```python
y = weights
x = np.asmatrix(np.c_[np.ones((15,1)),heights])
I = np.identity(2)
alpha = 0.1
w = np.linalg.inv(x.T*x + alpha * I)*x.T*y
```
The weights are the same as those calculated by sklearn.
```python
w=np.array(w).ravel()
print(w)
```
[-101.72397081 1.16978757]
The plot looks as below. We see that the line starts at higher $y$ values compared to the regular linear regression.
```python
plt.scatter(heights, weights, color='g')
plt.plot(heights, reg_predict(heights.flatten(), w[1], w[0]),color='k')
x1= 150
y = x1*w[1]+w[0]
plt.scatter(x1, y,color='r')
plt.show()
```
# Lasso regression
Lasso regression uses the L1 regularization. The equation is very similar to the Ridge one, but instead of $w^{2}$ we use the magnitude of $w$.
\begin{equation}
\sum_{i=1}^{M}\left(y_{i}-\sum_{j=0}^{p}w_{j} x_{ij}\right)^{2} + \lambda\sum_{j=0}^{p}|w_{j}|.
\end{equation}
```python
from sklearn.linear_model import Lasso
alpha = 0.1
lasso_regression = Lasso(alpha=alpha)
lasso_regression.fit(X=heights,
y=weights)
lasso_regression.coef_, lasso_regression.intercept_
```
(array([1.61776499]), array([-180.8579086]))
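For a quick side-by-side view, we can print the slope and intercept found by each of the three approaches above (`regr`, `ridge_regression` and `lasso_regression` are the objects fitted in the previous cells; note the ridge model was fit without an intercept on `[1, x]`, so its first coefficient plays the role of the intercept):
```python
print("OLS:   slope = %.3f, intercept = %.3f" % (regr.coef_[0][0], regr.intercept_[0]))
print("Ridge: slope = %.3f, intercept = %.3f" % (ridge_regression.coef_[0][1], ridge_regression.coef_[0][0]))
print("Lasso: slope = %.3f, intercept = %.3f" % (lasso_regression.coef_[0], lasso_regression.intercept_[0]))
```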
```python
plt.scatter(heights, weights,color='g')
plt.plot(heights, lasso_regression.predict(heights),color='k')
plt.show()
```
```python
```
| 8370582d7a523ae4b2d22edcf7bd4110bc2d6ff0 | 196,995 | ipynb | Jupyter Notebook | ML1/linear/021_Linear_regression.ipynb | DevilWillReign/ML2022 | cb4cc692e9f0e178977fb5e1d272e581b30f998d | [
"MIT"
] | null | null | null | ML1/linear/021_Linear_regression.ipynb | DevilWillReign/ML2022 | cb4cc692e9f0e178977fb5e1d272e581b30f998d | [
"MIT"
] | null | null | null | ML1/linear/021_Linear_regression.ipynb | DevilWillReign/ML2022 | cb4cc692e9f0e178977fb5e1d272e581b30f998d | [
"MIT"
] | null | null | null | 175.73149 | 40,060 | 0.895378 | true | 4,567 | Qwen/Qwen-72B | 1. YES
2. YES | 0.79053 | 0.863392 | 0.682537 | __label__eng_Latn | 0.819026 | 0.424094 |
# Find worst cases
\begin{equation}
\begin{array}{rl}
\mathcal{F}_L =& \dfrac{4 K I H r}{Q_{in}(1+f)}\\
u_{c} =& \dfrac{-KI\mathcal{F}_L}{\theta \left(\mathcal{F}_L+1\right)}\\
\tau =& -\dfrac{r}{|u_{c}|}\\
C_{\tau,{\rm decay}}=& C_0 \exp{\left(-\lambda \tau \right)}\\
C_{\tau,{\rm filtr}}=& C_0 \exp{\left(-k_{\rm att} \tau \right)}\\
C_{\tau,{\rm dilut}} =& C_{in} \left( \dfrac{Q_{in}}{u_c \Delta y \Delta z} \right)\\
C_{\tau,{\rm both}} =& \dfrac{C_{\rm in}Q_{\rm in}}{u_c \Delta y H \theta} \exp{\left(-\lambda\dfrac{r}{|u_c|}\right)}
\end{array}
\end{equation}
```python
%reset -f
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from os import system
import os
from matplotlib.gridspec import GridSpec
from drawStuff import *
import jupypft.attachmentRateCFT as CFT
import jupypft.plotBTC as BTC
''' GLOBAL CONSTANTS '''
PI = 3.141592
THETA = 0.35
```
```python
def flowNumber():
return (4.0*K*I*H*r) / (Qin*(1+f))
def uChar():
'''Interstitial water velocity'''
return -(K*I*flowNumber())/(THETA*(flowNumber() + 1))
def tChar():
return -r/uChar()
def cDecay():
return C0 * np.exp(-decayRate * tChar())
def cAttach():
return C0 * np.exp(-attchRate * tChar())
def cDilut():
return (C0 * Qin) / (-uChar() * delY * delZ * THETA)
def cBoth():
return (C0 * Qin) / (-uChar() * delY * delZ * THETA) * np.exp(-decayRate * tChar())
def cTrice():
return (C0 * Qin) / (-uChar() * delY * delZ * THETA) * np.exp(-(decayRate+attchRate) * tChar())
def findSweet():
deltaConc = np.abs(cBoth() - np.max(cBoth()))
return np.argmin(deltaConc)
```
```python
K = 10**-2
Qin = 0.24/86400
f = 10
H = 20
r = 40
I = 0.001
C0 = 1.0
qabs = np.abs(uChar()*THETA)
kattDict = dict(
dp = 1.0E-7,
dc = 2.0E-3,
q = qabs,
theta = THETA,
visco = 0.0008891,
rho_f = 999.79,
rho_p = 1050.0,
A = 5.0E-21,
T = 10. + 273.15,
alpha = 0.01)
decayRate = 3.5353E-06
attchRate,_ = CFT.attachmentRate(**kattDict)
delY,delZ = 1.35,H
```
```python
print("Nondim Flow = {:.2E}".format(flowNumber()))
print("Charac. Vel = {:.2E} m/s".format(uChar()))
print("Charac. time = {:.2E} s".format(tChar()))
```
Nondim Flow = 1.05E+03
Charac. Vel = -2.85E-05 m/s
Charac. time = 1.40E+06 s
```python
print("Rel concenc. due decay = {:.2E}".format(cDecay()))
print("Rel conc. due dilution = {:.2E}".format(cDilut()))
print("Rel conc. due attachmt = {:.2E}".format(cAttach()))
print("Rel conc. due both eff = {:.2E}".format(cBoth()))
print("Rel conc. due three ef = {:.2E}".format(cTrice()))
```
Rel concenc. due decay = 7.05E-03
Rel conc. due dilution = 1.03E-02
Rel conc. due attachmt = 4.42E-05
Rel conc. due both eff = 7.26E-05
Rel conc. due three ef = 3.21E-09
# Plot v. 1
```python
I = 10**np.linspace(-5,0,num=100)
cDec = cDecay()
cDil = cDilut()
cAtt = cAttach()
cBot = cBoth()
cAll = cTrice()
i = findSweet()
worstC = cBot[i]
worstI = I[i]
fig, axs = plt.subplots(2,2,sharex=True, sharey=False,\
figsize=(12,8),gridspec_kw={"height_ratios":[1,4],"hspace":0.04,"wspace":0.35})
bbox = dict(boxstyle='round', facecolor='mintcream', alpha=0.90)
arrowprops = dict(
arrowstyle="->",
connectionstyle="angle,angleA=90,angleB=40,rad=5")
fontdict = dict(size=12)
annotation = \
r"$\bf{-\log(C/C_0)} = $" + "{:.1f}".format(-np.log10(worstC)) + \
"\n@" + r" $\bf{I} = $" + "{:.1E}".format(worstI)
information = \
r"$\bf{K}$" + " = {:.1E} m/s".format(K) + "\n"\
r"$\bf{H}$" + " = {:.1f} m".format(H) + "\n"\
r"$\bf{r}$" + " = {:.1f} m".format(r) + "\n"\
r"$\bf{Q_{in}}$" + " = {:.2f} m³/d".format(Qin*86400) + "\n"\
r"$\bf{f}$" + " = {:.1f}".format(f) + "\n"\
r"$\bf{\lambda}$" + " = {:.2E} 1/s".format(decayRate)
#########################################
# Ax1 - Relative concentration
ax = axs[1,0]
ax.plot(I,cDec,label="Due decay",lw=3,ls="dashed",alpha=0.8)
ax.plot(I,cDil,label="Due dilution",lw=3,ls="dashed",alpha=0.8)
ax.plot(I,cAtt,label="Due attachment",lw=2,ls="dashed",alpha=0.6)
ax.plot(I,cBot,label="Decay + dilution",lw=3,c='k',alpha=0.9)
ax.plot(I,cAll,label="Overall effect",lw=3,c='gray',alpha=0.9)
ax.set(xscale="log",yscale="log")
ax.set(xlim=(1.0E-4,1.0E-1),ylim=(1.0E-10,1))
ax.legend(loc="lower left",shadow=True)
ax.annotate(annotation,(worstI,worstC),
xytext=(0.05,0.85), textcoords='axes fraction',
bbox=bbox, arrowprops=arrowprops)
ax.text(0.65,0.05,information,bbox=bbox,transform=ax.transAxes)
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.set_ylabel("Relative Concentration\n$C/C_0$ [-]",fontdict=fontdict)
####################################
# Ax2 - log-removals
ax = axs[1,1]
ax.plot(I,-np.log10(cDec),label="Due decay",lw=3,ls="dashed",alpha=0.8)
ax.plot(I,-np.log10(cDil),label="Due dilution",lw=3,ls="dashed",alpha=0.8)
ax.plot(I,-np.log10(cAtt),label="Due attachment",lw=2,ls="dashed",alpha=0.6)
ax.plot(I,-np.log10(cBot),label="Decay + dilution",lw=3,c='k',alpha=0.9)
ax.plot(I,-np.log10(cAll),label="Overall effect",lw=3,c='gray',alpha=0.9)
ax.set(xscale="log")
ax.set(xlim=(1.0E-4,1.0E-1),ylim=(0,10))
ax.legend(loc="upper left",shadow=True)
ax.annotate(annotation,(worstI,-np.log10(worstC)),
xytext=(0.65,0.55), textcoords='axes fraction',
bbox=bbox, arrowprops=arrowprops)
ax.text(0.65,0.70,information,bbox=bbox,transform=ax.transAxes)
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.set_ylabel("log-reductions\n$-\log(C/C_0)$ [-]",fontdict=fontdict)
####################################
#Flow number
for ax in axs[0,:]:
ax.plot(I,flowNumber(),label="flowNumber",lw=3,c="gray")
ax.axhline(y=1.0)
ax.set_xscale("log")
ax.set_yscale("log")
ax.xaxis.set_tick_params(which="both",labeltop='on',top=True,bottom=False)
ax.set_ylabel("Nondim. flow number\n$\mathcal{F}_L$ [-]")
####################################<
#Line worst case scenario
for axc in axs:
for ax in axc:
ax.axvline(x=I[i], lw=1, ls="dashed", c="red",alpha=0.5)
plt.show()
```
# v.2 With PFLOTRAN result
```python
listOfFiles = os.listdir("LittleValidation_MASSBALANCES")
listOfFiles.sort()
IPFLO = [float(s[9:16]) for s in listOfFiles]
CPFLO = BTC.get_endConcentrations(
"LittleValidation_MASSBALANCES",
indices={'t':"Time [d]",\
'q':"ExtractWell Water Mass [kg/d]",\
'm':"ExtractWell Vaq [mol/d]"
},
normalizeWith=dict(t=1.0,q=kattDict['rho_f']/1000.,m=1.0))
```
NumExpr defaulting to 8 threads.
```python
listOfFiles = os.listdir("LittleValidation_MASSBALANCES_Att")
listOfFiles.sort()
IPFLO2 = [float(s[8:15]) for s in listOfFiles]
CPFLO2 = BTC.get_endConcentrations(
"LittleValidation_MASSBALANCES_Att",
indices={'t':"Time [d]",\
'q':"ExtractWell Water Mass [kg/d]",\
'm':"ExtractWell Vaq [mol/d]"
},
normalizeWith=dict(t=1.0,q=kattDict['rho_f']/1000.,m=1.0))
```
```python
# Theoretical stuff
I = 10**np.linspace(-5,0,num=100)
cDec = cDecay()
cDil = cDilut()
cAtt = cAttach()
cBot = cBoth()
cAll = cTrice()
i = findSweet()
worstC = cBot[i]
worstI = I[i]
```
```python
fig, axs = plt.subplots(2,2,sharex=True, sharey=False,\
figsize=(10,8),gridspec_kw={"height_ratios":[1,10],"hspace":0.04,"wspace":0.02})
bbox = dict(boxstyle='round', facecolor='mintcream', alpha=0.90)
arrowprops = dict(
arrowstyle="->",
connectionstyle="angle,angleA=90,angleB=40,rad=5")
fontdict = dict(size=12)
annotation = \
r"$\bf{-\log(C/C_0)} = $" + "{:.1f}".format(-np.log10(worstC)) + \
"\n@" + r" $\bf{I} = $" + BTC.sci_notation(worstI)
information = \
r"$\bf{K} = $" + BTC.sci_notation(K) + " m/s\n"\
r"$\bf{H}$" + " = {:.1f} m".format(H) + "\n"\
r"$\bf{r}$" + " = {:.1f} m".format(r) + "\n"\
r"$\bf{Q_{in}}$" + " = {:.2f} m³/d".format(Qin*86400) + "\n"\
r"$\bf{f}$" + " = {:.1f}".format(f) + "\n"\
r"$\bf{\lambda_{\rm aq}} = $" + BTC.sci_notation(decayRate) + " s⁻¹\n"\
r"$\bf{k_{\rm att}} = $" + BTC.sci_notation(attchRate) + " s⁻¹"
####################################
# Ax2 - log-removals
ax = axs[1,0]
symbols=dict(dil="\u25A2",dec="\u25B3",att="\u25CE")
ax.plot(I,-np.log10(cDil),\
label="Due dilution " + symbols['dil'],\
lw=3,ls="dashed",alpha=0.6,c='crimson')
ax.plot(I,-np.log10(cDec),\
label="Due decay " + symbols['dec'],\
lw=3,ls="dashed",alpha=0.6,c='indigo')
ax.plot(I,-np.log10(cAtt),\
label="Due attachment " + symbols['att'],\
lw=2,ls="dashed",alpha=0.5,c='olive')
ax.plot(I,-np.log10(cBot),\
label=symbols['dil'] + " + " + symbols['dec'],\
lw=3,c='k',alpha=0.9,zorder=2)
ax.plot(I,-np.log10(cAll),\
label=symbols['dil'] + " + " + symbols['dec'] + " + " + symbols['att'],\
lw=2,c='gray',alpha=0.9,zorder=2)
ax.set(xscale="log")
ax.set(xlim=(1.0E-4,1.0E-1),ylim=(0,9.9))
ax.legend(loc="upper right",shadow=True,ncol=1,\
title="Potential flow prediction",title_fontsize=11)
ax.annotate(annotation,(worstI,-np.log10(worstC)),
xytext=(0.65,0.55), textcoords='axes fraction',
bbox=bbox, arrowprops=arrowprops)
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.set_ylabel("log-reductions\n$-\log(C/C_0)$ [-]",fontdict=fontdict)
####################################
# Ax2 - log-removals
ax = axs[1,1]
ax.plot(I,-np.log10(cBot),
lw=3,c='k',alpha=0.9)
ax.plot(IPFLO,-np.log10(CPFLO),zorder=2,\
label= symbols['dil'] + " + " + symbols['dec'],\
lw=0,ls='dotted',c='k',alpha=0.9,\
marker="$\u25CA$",mec='k',mfc='k',ms=10)
ax.plot(I,-np.log10(cAll),\
lw=2,c='gray',alpha=0.9)
ax.plot(IPFLO2,-np.log10(CPFLO2),zorder=2,\
label= symbols['dil'] + " + " + symbols['dec'] + " + " + symbols['att'],\
lw=0,ls='dotted',c='gray',alpha=0.9,\
marker="$\u2217$",mec='gray',mfc='gray',ms=10)
#####
ax.set(xscale="log")
ax.set(xlim=(1.0E-4,9.0E-2),ylim=(0,9.9))
ax.legend(loc="lower left",shadow=True,ncol=1,\
labelspacing=0.4,mode=None,\
title="PFLOTRAN run",title_fontsize=11)
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.yaxis.tick_right()
ax.text(0.60,0.70,information,bbox=bbox,transform=ax.transAxes)
ax.text(I[i],0.5,"Worst\ncase",\
ha='center',va='center',weight='semibold',\
bbox=dict(boxstyle='square', fc='white', ec='red'))
####################################
#Flow number
ax = axs[0,0]
ax.plot(I,flowNumber(),label="flowNumber",lw=3,c="gray")
#ax.axhline(y=1.0)
ax.set_xscale("log")
ax.set_yscale("log")
ax.xaxis.set_tick_params(which="both",labeltop='on',top=False,bottom=False)
#ax.set_ylabel("Nondim. flow number\n$\mathcal{F}_L$ [-]")
####################################<
#Line worst case scenario
axs[0,0].axvline(x=I[i], lw=2, ls="dotted", c="red",alpha=0.5)
axs[1,0].axvline(x=I[i], lw=2, ls="dotted", c="red",alpha=0.5)
axs[1,1].axvline(x=I[i], lw=2, ls="dotted", c="red",alpha=0.5)
###############################
# Information box
ax = axs[0,1]
ax.axis('off')
plt.show()
```
```python
fig, axs = plt.subplots(1,1,figsize=(5,5))
fontdict = dict(size=12)
lines = {}
information = \
r"$\bf{K} = $" + BTC.sci_notation(K) + " m/s\n"\
r"$\bf{H}$" + " = {:.1f} m".format(H) + "\n"\
r"$\bf{r}$" + " = {:.1f} m".format(r) + "\n"\
r"$\bf{Q_{in}}$" + " = {:.2f} m³/d".format(Qin*86400) + "\n"\
r"$\bf{f}$" + " = {:.1f}".format(f) + "\n"\
r"$\bf{\lambda_{\rm aq}} = $" + BTC.sci_notation(decayRate) + " s⁻¹"\
####################################
# Ax2 - log-removals
ax = axs
lines['Dilution'] = ax.plot(I,-np.log10(cDil),\
label="Due dilution",\
lw=3,ls="dashed",alpha=0.99,c='#f1a340')
lines['Decay'] = ax.plot(I,-np.log10(cDec),\
label="Due decay",\
lw=3,ls="dashdot",alpha=0.99,c='#998ec3')
#ax.plot(I,-np.log10(cAtt),\
# label="Due attachment " + symbols['att'],\
# lw=2,ls="dashed",alpha=0.5,c='olive')
lines['Both'] = ax.plot(I,-np.log10(cBot),\
label="Combined effect",\
lw=3,c='k',alpha=0.9,zorder=2)
#ax.plot(I,-np.log10(cAll),\
# label=symbols['dil'] + " + " + symbols['dec'] + " + " + symbols['att'],\
# lw=2,c='gray',alpha=0.9,zorder=2)
lines['PFLOT'] = ax.plot(IPFLO,-np.log10(CPFLO),zorder=2,\
label= "BIOPARTICLE model",\
lw=0,ls='dotted',c='k',alpha=0.9,\
marker="$\u25CA$",mec='k',mfc='k',ms=10)
ax.set(xscale="log")
ax.set(xlim=(5.0E-5,5.0E-1),ylim=(-0.5,9.9))
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.set_ylabel("log-reductions\n$-\log(C/C_0)$ [-]",fontdict=fontdict)
## Legend
whitebox = ax.scatter([1],[1],c="white",marker="o",s=1,alpha=0)
handles = [lines['PFLOT'][0],
lines['Both'][0]]
labels = [lines['PFLOT'][0].get_label(),
"PF model"]
plt.legend(handles, labels,
loc="upper right",ncol=1,\
edgecolor='gray',facecolor='mintcream',labelspacing=1)
##aNNOTATIONS
rotang = -1.0
bbox = dict(boxstyle='square', fc='w', ec='#998ec3',lw=1.5)
ax.text(0.85,0.10,'Inactivation',ha='center',va='center',
fontsize=11,rotation=rotang,bbox=bbox,transform=ax.transAxes)
rotang = 21.0
bbox = dict(boxstyle='square', fc='w', ec='#f1a340',lw=1.5)
ax.text(0.12,0.21,'Dilution',ha='center',va='center',
fontsize=11,rotation=rotang,bbox=bbox,transform=ax.transAxes)
rotang = -0.0
bbox = dict(boxstyle='square', fc='k', ec=None,lw=0.0)
ax.text(0.55,0.45,'Combined',c='w',ha='center',va='center',weight='bold',
fontsize=11,rotation=rotang,bbox=bbox,transform=ax.transAxes)
bbox = dict(boxstyle='round,pad=0.5,rounding_size=0.3', fc='whitesmoke', alpha=0.90,ec='whitesmoke')
#ax.text(0.60,0.64,information,bbox=bbox,transform=ax.transAxes,fontsize=10)
fig.savefig(fname="SweetpointOnlyDecay.png",transparent=False,dpi=300,bbox_inches="tight")
#plt.show()
```
```python
fig, axs = plt.subplots(1,1,figsize=(5,5))
fontdict = dict(size=12)
lines = {}
information = \
r"$\bf{K} = $" + BTC.sci_notation(K) + " m/s\n"\
r"$\bf{H}$" + " = {:.1f} m".format(H) + "\n"\
r"$\bf{r}$" + " = {:.1f} m".format(r) + "\n"\
r"$\bf{Q_{in}}$" + " = {:.2f} m³/d".format(Qin*86400) + "\n"\
r"$\bf{f}$" + " = {:.1f}".format(f) + "\n"\
r"$\bf{\lambda_{\rm aq}} = $" + BTC.sci_notation(decayRate) + " s⁻¹\n"\
r"$\bf{k_{\rm att}} = $" + BTC.sci_notation(attchRate) + " s⁻¹"
####################################
# Ax2 - log-removals
ax = axs
lines['Dilution'] = ax.plot(I,-np.log10(cDil),\
label="Due dilution",\
lw=3,ls="dashed",alpha=0.99,c='#f1a340')
lines['Decay'] = ax.plot(I,-np.log10(cDec),\
label="Due decay",\
lw=3,ls="dashdot",alpha=0.99,c='#998ec3')
lines['Attach'] = ax.plot(I,-np.log10(cAtt),\
label="Due attachment",\
lw=3,ls="dotted",alpha=0.95,c='#4dac26')
#lines['Both'] = ax.plot(I,-np.log10(cBot),\
# label="Combined effect",\
# lw=3,c='k',alpha=0.9,zorder=2)
lines['Both'] = ax.plot(I,-np.log10(cAll),\
label="Combined\neffect",\
lw=3,c='#101613',alpha=0.99,zorder=2)
lines['PFLOT'] = ax.plot(IPFLO2,-np.log10(CPFLO2),zorder=2,\
label= "BIOPARTICLE model",\
lw=0,ls='dotted',c='k',alpha=0.9,\
marker="$\u2217$",mec='gray',mfc='k',ms=10)
ax.set(xscale="log")
ax.set(xlim=(5.0E-5,5.0E-1),ylim=(-0.5,9.9))
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.set_ylabel("log-reductions\n$-\log(C/C_0)$ [-]",fontdict=fontdict)
## Legend
whitebox = ax.scatter([1],[1],c="white",marker="o",s=1,alpha=0)
handles = [lines['PFLOT'][0],
lines['Both'][0]]
labels = [lines['PFLOT'][0].get_label(),
"PF model"]
plt.legend(handles, labels,
loc="upper right",ncol=1,\
edgecolor='gray',facecolor='mintcream',labelspacing=1)
##aNNOTATIONS
rotang = -82.0
bbox = dict(boxstyle='square', fc='w', ec='#998ec3',lw=1.5)
ax.text(0.12,0.85,'Inactivation',c='k',fontweight='normal',ha='center',va='center',
fontsize=11,rotation=rotang,bbox=bbox,transform=ax.transAxes)
rotang = 21.0
bbox = dict(boxstyle='square', fc='w', ec='#f1a340',lw=1.5)
ax.text(0.12,0.21,'Dilution',ha='center',va='center',
fontsize=11,rotation=rotang,bbox=bbox,transform=ax.transAxes)
rotang = -1.5
bbox = dict(boxstyle='square', fc='w', ec='#4dac26',ls='-',lw=1.5)
ax.text(0.85,0.10,'Filtration',ha='center',va='center',
fontsize=11,rotation=rotang,bbox=bbox,transform=ax.transAxes)
rotang = -0.0
bbox = dict(boxstyle='square', fc='#101613', ec='#101613')
ax.text(0.65,0.52,'Combined',c='w',ha='center',va='center',weight='bold',
fontsize=11,rotation=rotang,bbox=bbox,transform=ax.transAxes)
bbox = dict(boxstyle='round,pad=0.5,rounding_size=0.3', fc='whitesmoke', alpha=0.90,ec='whitesmoke')
#ax.text(0.60,0.58,information,bbox=bbox,transform=ax.transAxes,fontsize=10)
fig.savefig(fname="SweetpointAllConsidered.png",transparent=False,dpi=300,bbox_inches="tight")
plt.show()
```
# v.3 Together as filtration is assumed
```python
fig, axs = plt.subplots(1,2,figsize=(10,5))
bbox = dict(boxstyle='round,pad=0.5,rounding_size=0.3', facecolor='mintcream', alpha=0.90)
fontdict = dict(size=12)
lines = {}
information = \
"Parameters:\n" + \
r"$\bf{K} = $" + BTC.sci_notation(K) + " m/s\n"\
r"$\bf{H}$" + " = {:.1f} m".format(H) + "\n"\
r"$\bf{r}$" + " = {:.1f} m".format(r) + "\n"\
r"$\bf{Q_{in}}$" + " = {:.2f} m³/d".format(Qin*86400) + "\n"\
r"$\bf{f}$" + " = {:.1f}".format(f) + "\n"\
r"$\bf{\lambda_{\rm aq}} = $" + BTC.sci_notation(decayRate) + " s⁻¹"\
####################################
# Ax2 - log-removals
ax = axs[0]
lines['Dilution'] = ax.plot(I,-np.log10(cDil),\
label="Due dilution",\
lw=3,ls="dashed",alpha=0.99,c='#f1a340')
lines['Decay'] = ax.plot(I,-np.log10(cDec),\
label="Due decay",\
lw=3,ls="dashdot",alpha=0.99,c='#998ec3')
#ax.plot(I,-np.log10(cAtt),\
# label="Due attachment " + symbols['att'],\
# lw=2,ls="dashed",alpha=0.5,c='olive')
lines['Both'] = ax.plot(I,-np.log10(cBot),\
label="Combined effect",\
lw=3,c='k',alpha=0.9,zorder=2)
#ax.plot(I,-np.log10(cAll),\
# label=symbols['dil'] + " + " + symbols['dec'] + " + " + symbols['att'],\
# lw=2,c='gray',alpha=0.9,zorder=2)
lines['PFLOT'] = ax.plot(IPFLO,-np.log10(CPFLO),zorder=2,\
label= "PFLOTRAN results",\
lw=0,ls='dotted',c='k',alpha=0.9,\
marker="$\u25CA$",mec='k',mfc='k',ms=10)
ax.set(xscale="log")
ax.set(xlim=(5.0E-5,5.0E-1),ylim=(0,9.9))
ax.text(1.04,0.55,information,bbox=bbox,transform=ax.transAxes)
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.set_ylabel("log-reductions\n$-\log(C/C_0)$ [-]",fontdict=fontdict)
## Legend
whitebox = ax.scatter([1],[1],c="white",marker="o",s=1,alpha=0)
handles = [whitebox,
lines['Dilution'][0],
lines['Decay'][0],
lines['Both'][0],
whitebox,
lines['PFLOT'][0]]
labels = ['PF model',
lines['Dilution'][0].get_label(),
lines['Decay'][0].get_label(),
lines['Both'][0].get_label(),
'',
lines['PFLOT'][0].get_label(),]
plt.legend(handles, labels,
loc="center left",ncol=1,bbox_to_anchor=(1.01,0.25),\
edgecolor='k',facecolor='mintcream',labelspacing=1)
bbox = dict(boxstyle='round,pad=0.5,rounding_size=0.3', facecolor='mintcream', alpha=0.90)
fontdict = dict(size=12)
lines = {}
information = \
"Parameters:\n" + \
r"$\bf{K} = $" + BTC.sci_notation(K) + " m/s\n"\
r"$\bf{H}$" + " = {:.1f} m".format(H) + "\n"\
r"$\bf{r}$" + " = {:.1f} m".format(r) + "\n"\
r"$\bf{Q_{in}}$" + " = {:.2f} m³/d".format(Qin*86400) + "\n"\
r"$\bf{f}$" + " = {:.1f}".format(f) + "\n"\
r"$\bf{\lambda_{\rm aq}} = $" + BTC.sci_notation(decayRate) + " s⁻¹\n"\
r"$\bf{k_{\rm att}} = $" + BTC.sci_notation(attchRate) + " s⁻¹"
####################################
# Ax2 - log-removals
ax = axs[1]
lines['Dilution'] = ax.plot(I,-np.log10(cDil),\
label="Due dilution",\
lw=3,ls="dashed",alpha=0.99,c='#f1a340')
lines['Decay'] = ax.plot(I,-np.log10(cDec),\
label="Due decay",\
lw=3,ls="dashdot",alpha=0.99,c='#998ec3')
lines['Attach'] = ax.plot(I,-np.log10(cAtt),\
label="Due attachment",\
lw=3,ls="dotted",alpha=0.95,c='#4dac26')
#lines['Both'] = ax.plot(I,-np.log10(cBot),\
# label="Combined effect",\
# lw=3,c='k',alpha=0.9,zorder=2)
lines['Both'] = ax.plot(I,-np.log10(cAll),\
label="Combined effect",\
lw=3,c='k',alpha=0.99,zorder=2)
lines['PFLOT'] = ax.plot(IPFLO2,-np.log10(CPFLO2),zorder=2,\
label= "PFLOTRAN results",\
lw=0,ls='dotted',c='k',alpha=0.9,\
marker="$\u2217$",mec='gray',mfc='k',ms=10)
ax.set(xscale="log")
ax.set(xlim=(5.0E-5,5.0E-1),ylim=(0,9.9))
ax.text(1.04,0.55,information,bbox=bbox,transform=ax.transAxes)
ax.set_xlabel("Water table gradient\n$I$ [m/m]",fontdict=fontdict)
ax.set_ylabel("log-reductions\n$-\log(C/C_0)$ [-]",fontdict=fontdict)
## Legend
whitebox = ax.scatter([1],[1],c="white",marker="o",s=1,alpha=0)
handles = [whitebox,
lines['Dilution'][0],
lines['Decay'][0],
lines['Attach'][0],
lines['Both'][0],
whitebox,
lines['PFLOT'][0]]
labels = ['PF model',
lines['Dilution'][0].get_label(),
lines['Decay'][0].get_label(),
lines['Attach'][0].get_label(),
lines['Both'][0].get_label(),
'',
lines['PFLOT'][0].get_label(),]
plt.legend(handles, labels,
loc="center left",ncol=1,bbox_to_anchor=(1.01,0.25),\
edgecolor='k',facecolor='mintcream',labelspacing=1)
plt.show()
```
____
# Find the worst case
## >> Geometric parameters $H$ and $r$
```python
K = 10**-2
Qin = 0.24/86400
f = 10
C0 = 1.0
decayRate = 3.5353E-06
Harray = np.array([2.,5.,10.,20.,50.])
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
Ci = np.zeros([len(rarray),len(Harray)])
Ii = np.zeros([len(rarray),len(Harray)])
FLi = np.zeros([len(rarray),len(Harray)])
for hi,H in enumerate(Harray):
for ri,r in enumerate(rarray):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[ri,hi] = worstC
Ii[ri,hi] = worstGradient
        FLi[ri,hi] = worstFlowNumber

myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Aquifer thickness\n$\\bf{H}$ (m)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
xlabel=Harray,ylabel=rarray,myLabels=myLabels);
```
## >> Well parameters
```python
K = 10**-2
H = 20
r = 40
C0 = 1.0
decayRate = 3.5353E-06
Qin_array = np.array([0.24,1.0,10.0,100.])/86400.
f_array = np.array([1,10.,100.,1000.,10000.])
Iarray = 10**np.linspace(-5,0,num=100)
Ci = np.zeros([len(Qin_array),len(f_array)])
Ii = np.zeros([len(Qin_array),len(f_array)])
FLi = np.zeros([len(Qin_array),len(f_array)])
for fi,f in enumerate(f_array):
for qi,Qin in enumerate(Qin_array):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[qi,fi] = worstC
Ii[qi,fi] = worstGradient
FLi[qi,fi] = worstFlowNumber
```
### Plot heatmap
```python
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Extraction to injection ratio\n$\\bf{f}$ (-)",
"X": "Injection flow rate\n$\\bf{Q_{in}}$ (m³/d)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
xlabel=f_array,ylabel=np.round(Qin_array*86400,decimals=2),myLabels=myLabels);
```
## Hydraulic conductivity
```python
Qin = 0.24/86400
f = 10
C0 = 1.0
decayRate = 3.5353E-06
Karray = 10.**np.array([-1.,-2.,-3.,-4.,-5.])
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
Ci = np.zeros([len(rarray),len(Karray)])
Ii = np.zeros([len(rarray),len(Karray)])
FLi = np.zeros([len(rarray),len(Karray)])
for ki,K in enumerate(Karray):
for ri,r in enumerate(rarray):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[ri,ki] = worstC
Ii[ri,ki] = worstGradient
FLi[ri,ki] = worstFlowNumber
```
### Plot heatmap
```python
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Hydraulic conductivity\n$\\bf{K}$ (m/s)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
               xlabel=Karray,ylabel=rarray,myLabels=myLabels);

K = 10**-2
H = 20
f = 10
C0 = 1.0
decayRate = 3.5353E-06
Qin_array = np.array([0.24,1.0,10.0,100.])/86400.
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
Ci = np.zeros([len(rarray),len(Qin_array)])
Ii = np.zeros([len(rarray),len(Qin_array)])
FLi = np.zeros([len(rarray),len(Qin_array)])
for qi,Qin in enumerate(Qin_array):
for ri,r in enumerate(rarray):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[ri,qi] = worstC
Ii[ri,qi] = worstGradient
FLi[ri,qi] = worstFlowNumber
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Injection flow rate\n$\\bf{Q_{in}}$ (m³/d)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
               ylabel=rarray,xlabel=np.round(Qin_array*86400,decimals=2),myLabels=myLabels);

K = 10**-2
H = 20
f = 10
C0 = 1.0
decayRate = 1.119E-5
Qin_array = np.array([0.24,1.0,10.0,100.])/86400.
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
Ci = np.zeros([len(rarray),len(Qin_array)])
Ii = np.zeros([len(rarray),len(Qin_array)])
FLi = np.zeros([len(rarray),len(Qin_array)])
for qi,Qin in enumerate(Qin_array):
for ri,r in enumerate(rarray):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[ri,qi] = worstC
Ii[ri,qi] = worstGradient
FLi[ri,qi] = worstFlowNumber
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Injection flow rate\n$\\bf{Q_{in}}$ (m³/d)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
ylabel=rarray,xlabel=np.round(Qin_array*86400,decimals=2),myLabels=myLabels);
```
## PFLOTRAN SIMULATION RESULTS
```python
minI = np.array([0.00046667, 0.0013 , 0.0048 , 0.012 , 0.00046667,
0.0013 , 0.0048 , 0.012 , 0.00053333, 0.0013 ,
0.0048 , 0.012 , 0.00056667, 0.0021 , 0.0053 ,
0.012 ])
minC = np.array([2.2514572 , 2.62298917, 3.14213329, 3.51421485, 1.64182175,
2.00913676, 2.52461269, 2.89537637, 0.74130696, 1.0754177 ,
1.55071976, 1.90646243, 0.18705258, 0.39222131, 0.73428991,
       1.00387133])

Qin_array = np.array([0.24,1.0,10.0,100.])/86400.
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
Ci = np.zeros([len(rarray),len(Qin_array)])
Ii = np.zeros([len(rarray),len(Qin_array)])
FLi = np.zeros([len(rarray),len(Qin_array)])
i = 0
for qi,Qin in enumerate(Qin_array):
for ri,r in enumerate(rarray):
worstC = minC[i]
worstGradient = minI[i]
Ci[ri,qi] = worstC
Ii[ri,qi] = worstGradient
i += 1
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Injection flow rate\n$\\bf{Q_{in}}$ (m³/d)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":Ii.T},\
ylabel=rarray,xlabel=np.round(Qin_array*86400,decimals=2),myLabels=myLabels);
```
```python
```
| a177476a37fedabc31c1305b99307bc4db604baf | 125,060 | ipynb | Jupyter Notebook | notebooks/Concepts/Find worst case (1).ipynb | edsaac/bioparticle | 67e191329ef191fc539b290069524b42fbaf7e21 | [
"MIT"
] | null | null | null | notebooks/Concepts/Find worst case (1).ipynb | edsaac/bioparticle | 67e191329ef191fc539b290069524b42fbaf7e21 | [
"MIT"
] | 1 | 2020-09-25T23:31:21.000Z | 2020-09-25T23:31:21.000Z | notebooks/Concepts/Find worst case (1).ipynb | edsaac/VirusTransport_RxSandbox | 67e191329ef191fc539b290069524b42fbaf7e21 | [
"MIT"
] | 1 | 2021-09-30T05:00:58.000Z | 2021-09-30T05:00:58.000Z | 108.842472 | 46,524 | 0.795274 | true | 10,424 | Qwen/Qwen-72B | 1. YES
2. YES | 0.859664 | 0.699254 | 0.601124 | __label__yue_Hant | 0.120933 | 0.234942 |
# Population coding (Pouget et al., 2010)
A response of a cell can be characterized by an "encoding model" of the stimulus ($s$):
\begin{align}
r_{i} = f_{i}(s) + n_{i}
\end{align}
in which $n$ represents a noise term assumed to follow a normal distribution with a variance proportional to the mean value, $f_{i}(s)$. When the model ("tuning function") is assumed to be gaussian (note to self: is this only for "circular properties" like orientation/motion direction?), it can be written as:
\begin{align}
f_{i}(s) = ke^{-(s - s_{i})^{2}/2\sigma^{2}}
\end{align}
```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
def gaussian_model(s, si=100, k=1, sigma=5):
""" Computes average activity of cell in response to stimulus `s`
given the preferred value `si` and scaling faction `k` and width `sigma`.
"""
return k * np.exp(-(s - si) ** 2 / (2 * sigma ** 2))
vals = gaussian_model(np.arange(200))
plt.figure(figsize=(8, 3))
plt.plot(vals)
plt.xlabel('Stimulus/feature value %s' % '$s_{i}$')
plt.ylabel('Activity (a.u.)')
sns.despine()
plt.show()
```
Now, suppose we have 64 cells that fire according to differently tuned tuning functions:
```python
true_si = np.linspace(-180, 180, 64)
true_activity = gaussian_model(s=40, si=true_si, sigma=20) + np.random.normal(0, 0.1, size=true_si.size)
plt.scatter(true_si, true_activity)
plt.title('Activity of 64 neurons')
sns.despine()
plt.show()
```
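The noise added above has a fixed variance. To stay closer to the encoding model stated earlier (noise variance proportional to the mean $f_{i}(s)$), one could instead scale the noise by the mean response; the proportionality constant `0.1` below is an arbitrary choice, purely for illustration:
```python
mean_activity = gaussian_model(s=40, si=true_si, sigma=20)
# standard deviation grows with the mean, so the variance is proportional to f_i(s)
noisy_activity = mean_activity + np.random.normal(0, np.sqrt(0.1 * mean_activity + 1e-9))

plt.scatter(true_si, noisy_activity)
plt.title('Activity of 64 neurons, mean-proportional noise')
sns.despine()
plt.show()
```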
```python
np.exp(5)
```
148.4131591025766
| 0b48949c9c68e41163bb566f7d458b7915b27938 | 26,873 | ipynb | Jupyter Notebook | population_coding.ipynb | lukassnoek/random_notebooks | d7df507ce2b6949726c29de0022aae2d0dc583ac | [
"MIT"
] | 3 | 2018-05-28T13:45:11.000Z | 2021-08-31T11:41:34.000Z | population_coding.ipynb | lukassnoek/random_notebooks | d7df507ce2b6949726c29de0022aae2d0dc583ac | [
"MIT"
] | null | null | null | population_coding.ipynb | lukassnoek/random_notebooks | d7df507ce2b6949726c29de0022aae2d0dc583ac | [
"MIT"
] | 2 | 2018-05-28T13:46:05.000Z | 2018-06-11T15:25:59.000Z | 193.330935 | 12,964 | 0.910058 | true | 439 | Qwen/Qwen-72B | 1. YES
2. YES | 0.932453 | 0.798187 | 0.744272 | __label__eng_Latn | 0.937467 | 0.567525 |
# Chapter 3: Linear Regression
- **Chapter 3 from the book [An Introduction to Statistical Learning](https://www.statlearning.com/).**
- **By Gareth James, Daniela Witten, Trevor Hastie and Rob Tibshirani.**
- **Pages from $120$ to $121$**
- **By [Mosta Ashour](https://www.linkedin.com/in/mosta-ashour/)**
**Exercises:**
- **[1.](#1)**
- **[2.](#2)**
- **[3.](#3)**
- **[4.](#4)**
- **[5.](#5)**
- **[6.](#6)**
- **[7.](#7)**
# <span style="font-family:cursive;color:#0071bb;"> 3.7 Exercises </span>
## <span style="font-family:cursive;color:#0071bb;"> Conceptual </span>
<a id='1'></a>
### $1.$ Describe the null hypotheses to which the $\text{p-values}$ given in Table 3.4 correspond. Explain what conclusions you can draw based on these p-values. Your explanation should be phrased in terms of <span style="font-family:cursive;color:red;"> $sales, TV, radio,$ </span> and <span style="font-family:cursive;color:red;"> $newspaper,$ </span> rather than in terms of the coefficients of the linear model.
- **The null hypotheses in this case are:**
- That there is no relationship between amount spent on $TV, radio, newspaper$ advertising and $Sales$
$$H_{0}^{(TV)}: \beta_1 = 0$$
$$H_{0}^{(radio)}: \beta_2 = 0$$
$$H_{0}^{(newspaper)}: \beta_3 = 0$$
- From the **p-values** above, it does appear that $TV$ and $radio$ have a significant impact on sales and not $newspaper$.
- The **p-values** given in Table 3.4 suggest that we **can reject** the null hypotheses for $TV$ and $radio$, and that we **can't reject** the null hypothesis for $newspaper$.
- It seems likely that there is a relationship between TV ads and Sales, and radio ads and sales and not $newspaper$.
<a id='2'></a>
### $2.$ Carefully explain the differences between the $\text{KNN}$ classifier and $\text{KNN}$ regression methods.
- **$\text{KNN}$ classifier methods**
    - Attempts to predict the **class** to which the output variable belongs by computing the local class probabilities and determining a decision boundary `"typically used for qualitative response, classification problems"`.
- **$\text{KNN}$ regression methods**
    - Tries to predict the **value** of the output variable by using a local average `"typically used for quantitative response, regression problems"`. A short toy example of this difference is sketched below.
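Using scikit-learn on made-up toy data (purely for illustration):
```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

X = np.array([[1], [2], [3], [10], [11], [12]])
y_class = np.array([0, 0, 0, 1, 1, 1])                 # qualitative response
y_value = np.array([1.0, 1.2, 0.9, 5.1, 5.0, 5.3])     # quantitative response

knn_clf = KNeighborsClassifier(n_neighbors=3).fit(X, y_class)
knn_reg = KNeighborsRegressor(n_neighbors=3).fit(X, y_value)

print(knn_clf.predict([[2.5]]))   # majority class of the 3 nearest neighbours -> 0
print(knn_reg.predict([[2.5]]))   # average value of the 3 nearest neighbours -> ~1.03
```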
<a id='3'></a>
### $3.$ Suppose we have a data set with five predictors, $X_1 = GPA$, $X_2 = IQ$, $X_3 = Gender$ (1 for Female and 0 for Male), $X_4 = \text{Interaction between GPA and IQ}$, and $X_5 = \text{Interaction between GPA and Gender}$. The response is starting salary after graduation (in thousands of dollars). Suppose we use least squares to fit the model, and get $\hat{β_0} = 50, \hat{β_1} = 20 , \hat{β_2} = 0.07 , \hat{β_3} = 35 , \hat{β_4} = 0.01 , \hat{β_5} = −10$ .
**$(a)$** Which answer is correct, and why?
- $i.$ For a fixed value of IQ and GPA, males earn more on average than females.
- $ii.$ For a fixed value of IQ and GPA, females earn more on average than males.
- **$iii.$ For a fixed value of IQ and GPA, males earn more on average than females provided that the GPA is high enough.**
- $iv.$ For a fixed value of IQ and GPA, females earn more on average than males provided that the GPA is high enough.
### Answer:
- The least square line is given by:
$$\hat{y}=50+20GPA+0.07IQ+35Gender+0.01GPA×IQ−10GPA×Gender$$
- For males:
$$\hat{y}=50+20GPA+0.07IQ+0.01GPA×IQ$$
- For females:
$$\hat{y}=85+10GPA+0.07IQ+0.01GPA×IQ$$
- So females have a fixed (intercept) advantage of 35 (i.e. \$35,000) in starting salary, but on average males earn more than females once the GPA exceeds 3.5:
$$50 + 20GPA \geq 85 + 10GPA$$
$$10GPA \geq 35$$
$$GPA \geq 3.5$$
- **Answer iii. is the correct one** (a quick numeric check of the crossover follows below).
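A quick numeric check (IQ held fixed at 110):
```python
def salary(gpa, iq, gender):
    return 50 + 20*gpa + 0.07*iq + 35*gender + 0.01*gpa*iq - 10*gpa*gender

for gpa in [3.0, 3.5, 4.0]:
    print(f"GPA={gpa}: male={salary(gpa, 110, 0):.2f}, female={salary(gpa, 110, 1):.2f}")
```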
**(b)** Predict the salary of a **female** with **IQ of 110** and a **GPA of 4.0**
```python
gpa, iq, gender = 4, 110, 1
ls = 50 + 20*gpa + 0.07*iq + 35*gender + 0.01*gpa*iq + (-10*gpa*gender)
print('$', ls * 1000)
```
$ 137100.0
**$(c)$** **True or false:** Since the coefficient for the $GPA/IQ$ interaction term is very small, there is very little evidence of an interaction effect. Justify your answer.
- **False**. the interaction effect might be small but to verify if the $GPA/IQ$ has an impact on the quality of the model we need to test the null hypothesis $H_0:\hat{\beta_4}=0$ and look at the **p-value** associated with the $\text{t-statistic}$ or the $\text{F-statistic}$ to reject or not reject the null hypothesis.
<a id='4'></a>
### $4.$ I collect a set of data (n = 100 observations) containing a single predictor and a quantitative response. I then fit a linear regression model to the data, as well as a separate cubic regression, i.e. $Y = β_0 + β_1X + β_2X^2 + β_3X^3 + \epsilon$
**$(a)$** Suppose that the true relationship between $X$ and $Y$ is linear, i.e. $Y = β_0 + β_1X + ε$. Consider the training residual sum of squares ($RSS$) for the linear regression, and also the training $RSS$ for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
- Without knowing more details about the training data, it is difficult to know which training $RSS$ is lower between linear or cubic.
- However, we would expect the training $RSS$ for the **cubic model to be lower than the linear model**, because it is more flexible, which allows it to fit the variance in the training data more closely even though the true relationship between $X$ and $Y$ is linear.
**$(b)$** Answer (a) using test rather than training $RSS$.
- We would expect the test $RSS$ for the **linear model to be lower than the cubic model**, because the cubic model is more flexible and is therefore likely to overfit the training data, which translates into more test error than the linear regression (see the small simulation below).
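A minimal simulation of this under an assumed linear truth (the numbers are arbitrary and only for illustration): the cubic fit never has larger training RSS, but it typically generalizes slightly worse.
```python
import numpy as np

rng = np.random.default_rng(0)
x_train, x_test = rng.uniform(-3, 3, 100), rng.uniform(-3, 3, 100)
f = lambda x: 2 + 3 * x                                   # true (linear) relationship
y_train = f(x_train) + rng.normal(0, 1, 100)
y_test = f(x_test) + rng.normal(0, 1, 100)

for deg in (1, 3):
    coefs = np.polyfit(x_train, y_train, deg)             # least squares polynomial fit
    rss_train = np.sum((y_train - np.polyval(coefs, x_train)) ** 2)
    rss_test = np.sum((y_test - np.polyval(coefs, x_test)) ** 2)
    print(f"degree {deg}: train RSS = {rss_train:.1f}, test RSS = {rss_test:.1f}")
```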
**$(c)$** Suppose that the true relationship between $X$ and $Y$ is not linear, but we don't know how far it is from linear. Consider the training $RSS$ for the linear regression, and also the training $RSS$ for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.
- We would expect the training $RSS$ for the **cubic model to be lower than the linear model** because of the cubic model's flexibility.
**$(d)$** Answer (c) using test rather than training RSS.
- **There is not enough information to tell.**
- **Cubic would be lower if:**
- The true relationship between $X$ and $Y$ is not linear and there is low noise in our training data.
- **Linear would be lower if:**
- The relationship is only slightly non-linear or the noise in our training data is high.
<a id='5'></a>
### $5.$ Consider the fitted values that result from performing linear regression without an intercept. In this setting, the i-th fitted value takes the form:
$$\hat{y_i} = x_i \hat{\beta},$$
$where$
$$\hat{\beta} = \bigg(\sum_{i=1}^{n}x_i y_i\bigg) \bigg/ \bigg(\sum_{i'=1}^{n}x_{i'}^2\bigg)$$
Show that we can write
$$\hat{y_i} = \sum_{i'=1}^n a_{i'} y_{i'}$$
What is $a_{i'}$?
*Note: We interpret this result by saying that the fitted values from linear regression are linear combinations of the response values.*
$$\hat{y_i} = x_i \frac{\sum_{i=1}^{n}x_i y_i} {\sum_{i'=1}^{n} x_{i'}^2}$$
$$\hat{y_i} = \frac{\sum_{i'=1}^{n}x_i x_i' } {\sum_{i''=1}^{n} x_{i''}^2}y_i$$
- $Where$ $$\hat{y_i} = \sum_{i'=1}^n a_{i'} y_{i'}$$
- $So$ $$a_{i'} = \frac{x_i x_i' } {\sum_{i''=1}^{n} x_{i''}^2}$$
<a id='6'></a>
### $6.$ Using $(3.4)$, argue that in the case of simple linear regression, the least squares line always passes through the point $(\bar{x}, \bar{y})$.
The least square line equation is $\hat{y}=\hat{\beta}_0+\hat{\beta}_1 x$, prove that when $x=\bar{x}$, $\hat{y} = \bar{y}$
$\text{When } x=\bar{x}$
$$\hat{y}=\hat{\beta}_0+\hat{\beta}_1 \bar{x}$$
$Where$ $$\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$
$So$ $$\hat{y}=\bar{y} - \hat{\beta}_1 \bar{x}+\hat{\beta}_1 x$$
$$\hat{y}=\bar{y}$$
<a id='7'></a>
### $7.$ It is claimed in the text that in the case of simple linear regression of $Y$ onto $X$, the $R^2$ statistic $(3.17)$ is equal to the square of the correlation between $X$ and $Y$ (3.18). Prove that this is the case. For simplicity, you may assume that $\bar{x} = \bar{y}= 0$.
**Proposition**: Prove that in case of simple linear regression:
$$ y = \beta_0 + \beta_1 x + \varepsilon $$
the $R^2$ is equal to correlation between $X$ and $Y$ squared, e.g.:
$$ R^2 = corr^2(x, y) $$
We'll be using the following definitions to prove the above proposition.
**Def**:
$$ R^2 = 1- \frac{RSS}{TSS} $$
**Def**:
$$ RSS = \sum (y_i - \hat{y}_i)^2 \label{RSS} $$
**Def**:
$$ TSS = \sum (y_i - \bar{y})^2 \label{TSS} $$
**Def**:
$$
\begin{align}
corr(x, y) &= \frac{\sum (x_i - \bar{x}) (y_i - \bar{y})}
{\sigma_x \sigma_y} \\
\sigma_x^2 &= \sum (x_i - \bar{x})^2 \\
\sigma_y^2 &= \sum (y_i - \bar{y})^2
\end{align}
$$
**Proof**:
Substitute defintions of TSS and RSS into $R^2$:
$$
R^2 = 1-\frac{\sum (y_i - \hat{y}_i)^2}
{\sum y_i^2}
$$
Recall that:
$$
\begin{align}
\hat{\beta}_0 &= \bar{y} - \hat{\beta}_1 \bar{x} \label{beta0} \\
\hat{\beta}_1 &= \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}
{\sum (x_i - \bar{x})^2}
\end{align}
$$
Substitute the expression for $\hat{\beta}_0$ into $\hat{y}_i$:
And with $\bar{x} = \bar{y} = 0$
$$
\begin{align}
\hat{y}_i &= \hat{\beta}_1 x_i \\
\hat{y}_i &= \frac{\sum x_i y_i}
{\sum x_i^2}
\end{align}
$$
$Then$
$$
\begin{align}
R^2 &= 1-\frac{\sum (y_i - \frac{\sum x_i y_i}
{\sum x_i^2})^2}
{\sum y_i^2}\\
&= \frac{\sum{y_i^2} -2\sum y_i (\frac{\sum x_i y_i}
{\sum x_i^2})x_i+\sum(\frac{\sum x_i y_i}
{\sum x_i^2})^2 x_i^2)}
{\sum y_i^2}\\
&= \frac{\frac{2(\sum x_i y_i)^2}{\sum x_i^2} - \frac{(\sum x_i y_i)^2}{\sum x_i^2}}{\sum y_i^2}\\
&= \frac{(\sum x_i y_i)^2}{\sum x_i^2 \sum y_i^2}
\end{align}
$$
$ \text{with } \bar{x} = \bar{y} = 0$
$
\begin{align}
corr(x, y) &= \frac{\sum x_i y_i}
{\sum x_i^2 \sum y_i^2} = R^2
\end{align}
$
## Done!
| c6ebbb18afb794aaf3ca5d40e74dd5810524fe50 | 14,237 | ipynb | Jupyter Notebook | Notebooks/3_7_0_Linear_Regression_Conceptual.ipynb | MostaAshour/ISL-in-python | 87255625066f88d5d4625d045bdc6427a4ad9193 | [
"MIT"
] | null | null | null | Notebooks/3_7_0_Linear_Regression_Conceptual.ipynb | MostaAshour/ISL-in-python | 87255625066f88d5d4625d045bdc6427a4ad9193 | [
"MIT"
] | null | null | null | Notebooks/3_7_0_Linear_Regression_Conceptual.ipynb | MostaAshour/ISL-in-python | 87255625066f88d5d4625d045bdc6427a4ad9193 | [
"MIT"
] | null | null | null | 40.793696 | 486 | 0.531081 | true | 3,351 | Qwen/Qwen-72B | 1. YES
2. YES | 0.611382 | 0.847968 | 0.518432 | __label__eng_Latn | 0.985755 | 0.042821 |
# Fractal drum
Nori Parelius
This project was a part of an exam in Computational Physics that I took in May 2015. At the time I used Fortran to solve it, but I have since rewritten it in Matlab and now Python.
The project is about finding the eigenvalues and eigenvectors of a "fractal drum" - a thin membrane stretched on a frame with a fractal shaped border. The fractal is a square Koch fractal.
Oscillations of this drum's membrane follow the wave equation
\begin{equation}
\nabla^2 u=\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2},
\end{equation}
with $u$ being the displacement, $v$ velocity and $t$ time. The value of $u$ at the boundary (where the membrane is attached to the frame) is always 0. Fourier transforming the wave equation over time gives the Helmholtz equation:
\begin{equation}
-\nabla^2U(\vec{x},\omega)=\frac{\omega^2}{v^2}U(\vec{x},\omega), \quad \rm{in}\ \Omega,
\end{equation}
where $U$ is the Fourier transform of $u$ and $\omega$ is angular frequency. And for the Dirichlet boundary condition $u=0$
\begin{equation}
U(\vec{x},\omega)=0, \quad \rm{on}\ \partial \Omega
\end{equation}
Using a finite difference approximation for the Laplacian operator, the Helmholtz equation is transformed into a form
\begin{equation}
LU=\frac{\omega^2}{v^2}U
\end{equation}
where $L$ is a matrix generated from the Laplacian operator and taking into account the boundary conditions and $U$ is a vector containing the displacements for each point within the boundary.
This equation is an eigenvalue problem which means that matrix $L$ can be used to find eigenvalues $\omega^2/v^2$ and corresponding eigenvectors $U$ for which the equation is true.
To find the eigenvalues, several steps have to be taken. First the shape of the fractal boundary has to be generated and positioned onto a spatial grid so that all the corners in the boundary fall onto grid points. Then it is necessary to find out which grid points are within the boundary and which are outside, as only the points inside will be a part of the vector $U$ from the eigenvalue problem. The matrix $L$ is then generated from the finite difference scheme for the Laplace operator and using the boundary conditions. The last step is to solve the eigenvalue problem.
## 1. Creating the fractal border on a grid
### Importing libraries
```python
import numpy as np
import scipy as sc
import matplotlib.pyplot as plt
%matplotlib inline
```
### Making the fractal shape
The fractal border is created by first defining a square, given by the coordinates of its corners. Function FRACTAL takes the coordinates of two points and creates a Koch square fractal from the line. It returnes the coordinates of the corners of this fractal. Function FRACTALIZE then uses function FRACTAL to do this for several connected lines, such as the starting square. The function FRACTALIZE applies the "fractalization" f_level times.
```python
# length of the side
L_side=4.0
# position of top left corner
LT=[5.0,9.0]
# number of times the initial square will be "fractalized"
f_level=2
```
```python
# Initializing the square
corners=np.zeros((5,2))
corners[0,:]=LT
corners[1,:]=[corners[0,0]+L_side,corners[0,1]]
corners[2,:]=[corners[0,0]+L_side,corners[0,1]-L_side]
corners[3,:]=[corners[0,0],corners[0,1]-L_side]
corners[4,:]=corners[0,:]
plt.plot(corners[:,0],corners[:,1])
```
```python
def fractal(A,B):
'''
FUNCTION FRACTAL takes as input the coordinates of two points A and B, 2D
only, and they should either have the same x or y coordinate. And returns
an array of points that give the Koch fractal shape
'''
f=np.zeros((9,2));
f[0,:]=A
f[8,:]=B
if abs(A[0]-B[0]) > abs(A[1]-B[1]): # horizontal
c=1.0
else: # vertical
c=-1.0
# Points on the line connecting A and B
step=1; # how far from A we are stepping, for first point 1 step, then 2
for i in [1,4,7]: # which point in f it is
for coor in range(2): # which coordinate, x or y
f[i,coor]=A[coor]+step*(B[coor]-A[coor])/4.0
step=step+1
# Points on the line to one direction
step=1;
for i in range(2,4,1):
f[i,0]=A[0]+step*(B[0]-A[0])/4.0+c*(B[1]-A[1])/4.0
f[i,1]=A[1]+step*(B[1]-A[1])/4.0+c*(B[0]-A[0])/4.0
step=step+1
# Points on the line to the other direction
step=2;
for i in range(5,7,1):
f[i,0]=A[0]+step*(B[0]-A[0])/4.0-c*(B[1]-A[1])/4.0
f[i,1]=A[1]+step*(B[1]-A[1])/4.0-c*(B[0]-A[0])/4.0
step=step+1
return f
```
#### Example of a Koch fractal
```python
new_fractal=fractal(corners[0,:],corners[1,:])
plt.plot(new_fractal[:,0],new_fractal[:,1])
```
```python
def fractalize(corners,f_level):
'''
Function that takes an array containing coordinates of corners
of a closed shape (first and last corner the same) and fractalizes each
segment f_level number of times
'''
for k in range(f_level):
n_sides=corners.shape[0]-1;
corn=np.zeros((8*(n_sides-1)+9,2))
corn[0,:]=corners[0,:]
for i in range(1,n_sides+1):
corn_new=fractal(corners[i-1,:],corners[i,:])
corn[(8*(i-1)):(8*i+1),:]=corn_new
corners=corn
return corners
```
#### Fractalizing
```python
f=fractalize(corners,f_level)
fig, axes=plt.subplots(nrows=1,ncols=3,figsize=(12,3))
axes[0].plot(corners[:,0],corners[:,1])
axes[1].plot(fractalize(corners,1)[:,0],fractalize(corners,1)[:,1])
axes[2].plot(f[:,0],f[:,1])
for ax in axes:
ax.set_aspect('equal')
```
### Grid to put the drum on
I created a square lattice that the border will be positioned on. This lattice is the minimum size necessary to cover the whole border structure. This means that the edge of the lattice is further out from the initial square box. This grid offset depends on $l$. For each $l$ the fractal border is moved outside by $L\_side/4$ where length is the length of the side at previous $l$. This adds up to $L\_side/4+L\_side/4^2+L\_side/4^3+...L\_side/4^f\_level$, where $L\_side$ is the length of the side of the initial box. One grid constant $\delta$ was added to the offset.
Grid constant $\delta$ was chosen depending on the desired $f\_level$. If each corner of the fractal border is supposed to fall onto a grid point for any $f\_level<f\_level_{max}$ then
\begin{equation}
\delta=\frac{L\_side}{4^{f\_level_{max}}}.
\end{equation}
In addition, for $f\_level=1$ with this grid spacing (meaning corners on the grid but no additional grid points between them), there would be only one point within the boundaries. For $f\_level>1$ this is not the case any more, but I have anyway defined a variable which allows to add grid points so that the grid is more dense than the corners of fractal shape.
\begin{equation}
\delta=\frac{L\_side}{4^{f\_level_{max}}(points\_between+1)},
\end{equation}
where $n_{points\_between}$ is the number of added points between, not including, the corners.
As a result, I had to make a routine that would add these points to my array of corners and create an array containing all the edge points.
```python
# Grid parameters
# number of grid points between corner points
points_between=1
# With each iteration, the border grows by 1/4 of L_side
grid_offset=np.sum(L_side/4**(np.arange(1,f_level+1)))
# Step of the grid, so that points between fall on it too
delta= L_side/(4**f_level)/(points_between+1)
# Origin of the grid, with the offset and with adding one point at the edge
grid_origin=[LT[0]-grid_offset-delta,LT[1]+grid_offset+delta]
# Number of grid points in both directions
n_grid=int((L_side+2*grid_offset+2*delta)/delta+1)
```
#### Adding more points between the corners, on the grid
```python
def fill_edges(corners,MP):
'''
Function that takes the corners and adds a given number (MP) of points
between them, to fill out all the grid points that have an edge
'''
n_points=corners.shape[0]
n_sides=n_points-1
edges=np.zeros((n_points+n_sides*MP,2))
for i in range(n_sides):
for coor in range(2): # x y coordinates
edges[((MP+1)*i):((MP+1)*(i+1)+1),coor]=corners[i,coor]+(corners[i+1,coor]-corners[i,coor])/(MP+1)*range(MP+2)
return edges
```
```python
# Filling in the edges
edges=fill_edges(f,points_between)
plt.plot(edges[:,0],edges[:,1],marker='o')
```
## Identifying border, inside and outside grid points
To know which points are inside and which points are outside, I have created a mask, a two dimensional array of the same size as the grid, that for each point given its position in the array $mask(i,j)$ holds a value defining whether the point is in, out or on the border of the drum. The $mask$ for boundary points is -1, for points outside it is -2 and for points inside it holds the index of the point - from 0 to # points inside -1.
```python
# Grid
x=grid_origin[0]+delta*np.arange(n_grid)
y=grid_origin[1]-delta*np.arange(n_grid)
```
First function MASK_EDGES sets in the first step all the mask to -2. Next, using the coordinates(x, y) of the edge points the $mask$ is set to -1 for these edge points. The positions (i, j) of these points in the $mas$k were found by rearranging the equation that generated the physical coordinates x and y,
\begin{equation}
x=A_x+(i-1)\delta \quad \rm{and}\ y=A_y-(j-1)\delta.
\end{equation}
Where $A_x$ and $A_y$ are the x and y coordinates of the top left corner of the grid.
```python
def mask_edges(n_grid,delta,LeftTop,edges):
'''
FUNCTION MASK_EDGES creates an array the size of the grid (n_grid) and
puts its value to -1 everywhere, except for the points that belong to the
edges, which are 0, this is calculated by using the (x,y) coordinates of the edges
give in edges
x=LTx+(i-1)delta --> i=(x-LTx)/delta +1
y=LTy-(j-1)delta --> j=-(y-LTy)/delta +1
'''
me=-2*np.ones((n_grid,n_grid))
for point in range(edges.shape[0]-1):
me[int((edges[point,0]-LeftTop[0])/delta),
int((LeftTop[1]-edges[point,1])/delta)]=0
return me
```
Function INSIDE cycles through each of the points of the grid checking whether the point is inside the polygon.The algorithm consists of deciding how many times a horizontal line passing through the point in question crosses a side of the polygon on the left and on the right of the point. If the number of intersections to the left and to the right is odd, than the point is inside the polygon.
This procedure does not give clear results for the points that are right on the boundary, but these points are already known from the initial mask returned by MASK_EDGES.
```python
def point_in_polygon(e,P):
'''
FUNCTION POINT_IN_POLYGON takes as input the coordinates of the edge
points and the coordinates of the point in question P and returns a
boolean p depending on whether the point is inside the polygon or not.
Using the number of times a horizontal line passing through the point
crosses some segment
'''
x=P[0]
y=P[1]
p=False
n_corners=e.shape[0]-1
j=n_corners-1
for i in range(n_corners):
#it is horizontally between the end points
if e[i,1]<y and e[j,1] >= y or e[j,1]<y and e[i,1]>=y:
#whether x is right of the segment
if e[i,0]+(y-e[i,1])/(e[j,1]-e[i,1])*(e[j,0]-e[i,0])<x:
p=not p
j=i
return p
```
```python
def inside(edges,x,y,mask):
'''
FUNCTION INSIDE takes the edge points, the x and y points of the grid and
the mask with the edges being 0, rest being -1. It uses point_in_polygon
to find out if the point is in or out and then sets insides to numbers 1
and upwards (counting them), edge to 0 and outside to -1
'''
n_grid=x.shape[0]
me=np.zeros((n_grid,n_grid))
n_inside=0
for i in range(n_grid):
for j in range(n_grid):
# Find if it's in (1) or out (0)
me[i,j]=point_in_polygon(edges,[x[i],y[j]])
# If it's out, set it to -2,
# if it's in, number it and count it
if me[i,j]==0: # out
me[i,j]=-2
# If it corresponds to an edge point, set it to -1
if mask[i,j]==-1:
me[i,j]=-1
else: # in
# If it corresponds to an edge point, set it to -1
if mask[i,j]==-1:
me[i,j]=-1
else:
me[i,j]=n_inside;
n_inside=n_inside+1;
return me,n_inside
```
```python
from matplotlib import cm
# First all -1, edges 0
mask1=mask_edges(n_grid,delta,grid_origin,edges)
X, Y = np.meshgrid(x, y)
fig, ax = plt.subplots()
ax.pcolormesh(X,Y,mask1,cmap=cm.coolwarm)
```
```python
mask,n_in=inside(edges,x,y,mask1)
fig, ax = plt.subplots()
ax.contourf(X,Y,mask)
```
```python
```
### The Laplacian matrix
By using a Taylor expansion on the Laplace operator, we get
\begin{equation}
-\nabla^2u(x,y) \approx \frac{1}{\delta^2}(4u(x,y)-u(x-\delta,y)-u(x+\delta,y)-u(x,y-\delta)-u(x,y+\delta)).
\end{equation}
Applying this to the discretized x and y, with step $\delta$ leads to the 5-point central finite difference approximation of the Laplace, and this gives for our eigenvalue matrix equation ($LU=\frac{\omega^2}{v^2}U$):
\begin{equation}
4u_{i,j}-u_{i-1,j}-u_{i+1,j}-u_{i,j-1}-u_{i,j+1}=\omega^2/v^2\ \delta^2 u_{i,j}.
\end{equation}
This equation can be turned into a matrix equation
\begin{equation}
\frac{1}{\delta^2}L\ V=\frac{\omega^2}{v^2}V,
\end{equation}
where $V=[V_k]$ is a vector that holds values $u_{i,j}$ for i,j such that the corresponding point is within the fractal boundary, and $L$ is a matrix that has the value 4 on its diagonal and up to four times the value -1 in each row i, depending on whether the neigbouring points of $u_{i,i}$ are inside the boundary, or not.
This means that the matrix $L$ has to be constructed with the shape of the boundary taken into account.
To construct the matrix, I have created an array $n2grid$ that allows to turn the index of the drum point (the number held by the $mask$) into its position on the grid.
```python
# Vector containing the grid coordinates of each point on the inside,
n2grid=np.zeros((n_in,2))
for i in range(n_grid):
for j in range(n_grid):
if mask[i,j]>=0:
n2grid[int(mask[i,j]),:]=[i,j]
```
```python
def matrix_laplace(mask,n2grid,n_in,delta):
'''
FUNCTION LAP_MAT returns the Laplacian matrix for the points inside the
drum
'''
mat_lap=np.zeros((n_in,n_in))
for i in range(n_in): # one row of the laplacian at a time
mat_lap[i,i]=4.0/delta**2 # the diagonal is 4
for ii in [-1,1]:
if mask[int(n2grid[i,0]+ii),int(n2grid[i,1])]>=0: # if the neighbour at x+-1 is inside the drum
mat_lap[i,int(mask[int(n2grid[i,0]+ii),int(n2grid[i,1])])]=-1.0/delta**2
if mask[int(n2grid[i,0]),int(n2grid[i,1]+ii)]>=0: # same for y dir
mat_lap[i,int(mask[int(n2grid[i,0]),int(n2grid[i,1]+ii)])]=-1.0/delta**2
return mat_lap
```
```python
# Creating the Laplacian
laplace=matrix_laplace(mask,n2grid,n_in,delta)
```
### Solving the eigenvalue problem
```python
# Getting the eigenvalues and eigenvectors
eigenval,evec=np.linalg.eig(laplace)
# Sorting them from lowest to largest
idx = eigenval.argsort()
eigenval = eigenval[idx]
evec = evec[:,idx]
```
```python
# First five eigenvalues
print(eigenval[0:4])
```
[4.27736565 8.42218947 8.42218947 8.74559167]
```python
U=np.zeros((n_grid,n_grid,5))
for n in range(5):
for i in range(n_in):
U[int(n2grid[i,0]),int(n2grid[i,1]),n]=evec[i,n]
```
### The plots of the first 5 vibrational modes
The following plots show the first five eigenvectors plotted onto the grid. These represent the displacement of each drum points, so we are basically looking at how the drum membrane would physically move. The real life vibration would be a combination of these and other modes.
```python
from mpl_toolkits.mplot3d import Axes3D
fig=plt.figure(figsize=(5,25))
X, Y = np.meshgrid(x, y)
# Plot the surface.
for i in range(5):
ax[i]= fig.add_subplot(5,1,i+1, projection='3d')
ax[i].plot_surface(X, Y, U[:,:,i],rstride=1,cstride=1, cmap=cm.YlOrRd,
linewidth=0, antialiased=False,alpha=0.5)
ax[i].plot_surface(X,Y,0.001*mask1,rstride=1,cstride=1,cmap=cm.Blues,linewidth=0,
antialiased=False,alpha=0.5)
ax[i].set_xlabel("X")
ax[i].set_ylabel("Y")
ax[i].set_zlabel("U")
```
```python
```
```python
```
| 1898c47ea4930716f400fc8a643fd3be6227d52a | 377,754 | ipynb | Jupyter Notebook | fractal drum.ipynb | nori-parelius/fractal-drum | 92c3c816a0d7a4dc6634e4bc82621e05052cce9b | [
"MIT"
] | null | null | null | fractal drum.ipynb | nori-parelius/fractal-drum | 92c3c816a0d7a4dc6634e4bc82621e05052cce9b | [
"MIT"
] | null | null | null | fractal drum.ipynb | nori-parelius/fractal-drum | 92c3c816a0d7a4dc6634e4bc82621e05052cce9b | [
"MIT"
] | null | null | null | 460.67561 | 299,340 | 0.937766 | true | 4,891 | Qwen/Qwen-72B | 1. YES
2. YES | 0.934395 | 0.845942 | 0.790445 | __label__eng_Latn | 0.993846 | 0.6748 |
# Нотация Денавита-Хартенберга
```python
from sympy import *
def rz(a):
return Matrix([
[cos(a), -sin(a), 0, 0],
[sin(a), cos(a), 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
])
def ry(a):
return Matrix([
[cos(a), 0, sin(a), 0],
[0, 1, 0, 0],
[-sin(a), 0, cos(a), 0],
[0, 0, 0, 1]
])
def rx(a):
return Matrix([
[1, 0, 0, 0],
[0, cos(a), -sin(a), 0],
[0, sin(a), cos(a), 0],
[0, 0, 0, 1]
])
def trs(x, y, z):
return Matrix([
[1, 0, 0, x],
[0, 1, 0, y],
[0, 0, 1, z],
[0, 0, 0, 1]
])
def vec(x, y, z):
return Matrix([
[x],
[y],
[z],
[1]
])
```
Если соединить матрицы и винтовое исчисление, одним из результатов будет нотация Денавита-Хартенберга.
Она подразумевает четыре последовательных преобразовния:
$$
DH(\theta, d, \alpha, a) =
R_z(\theta) T_z(d) R_x(\alpha) T_z(a)
$$
```python
def dh(theta, d, alpha, a):
return rz(theta) * trs(0, 0, d) * rx(alpha) * trs(a, 0, 0)
```
```python
theta, d, alpha, a = symbols("theta_i, d_i, alpha_i, a_i")
simplify(dh(theta, d, alpha, a))
```
DH-параметры описывают последовательные
- поворот вокруг оси $Z$ - $\theta$
- смещение вдоль оси $Z$ - $d$
- поворот вокруг новой оси $X$ - $\alpha$
- смещение вдоль новой оси $X$ - $r$
Обобщенные координаты выбираются так, чтобы попадать на вращение вокруг / смещение вдоль оси $Z$.
| 35dff3d0c15a3fd12a87713c721c51843dc68a5b | 3,242 | ipynb | Jupyter Notebook | 3 - DH notation.ipynb | red-hara/jupyter-dh-notation | 0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58 | [
"MIT"
] | null | null | null | 3 - DH notation.ipynb | red-hara/jupyter-dh-notation | 0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58 | [
"MIT"
] | null | null | null | 3 - DH notation.ipynb | red-hara/jupyter-dh-notation | 0ffd305b3e67ce7dd3c20f2d1c719b53251dbf58 | [
"MIT"
] | null | null | null | 23.492754 | 111 | 0.442011 | true | 631 | Qwen/Qwen-72B | 1. YES
2. YES | 0.96378 | 0.859664 | 0.828527 | __label__krc_Cyrl | 0.820881 | 0.763278 |
# Improving predictive models using non-spherical Gaussian priors
Based on the CNN abstract of [Nunez-Elizalde, Huth, & Gallant](https://www2.securecms.com/CCNeuro/docs-0/5928d71e68ed3f844e8a256f.pdf).
```
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import pearsonr
from scipy.linalg import toeplitz
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge, RidgeCV, LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from tqdm import tqdm_notebook
%matplotlib inline
```
First, let's generate some data with 100 ($N$) samples and 500 ($K$) features. We'll generate the true parameters ($\beta$), for which we'll calculate the covariance matrix ($\Sigma$).
```
N, K = 100, 500
X = np.random.normal(0, 1, (N, K))
X = np.c_[np.ones(N), X]
mu_betas = np.zeros(K+1)
cov_betas = 0.99**toeplitz(np.arange(0, K+1))
betas = np.random.multivariate_normal(mu_betas, cov_betas).T
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.imshow(cov_betas)
plt.title(r'$\mathrm{true\ cov}[\beta]$')
plt.subplot(1, 2, 2)
plt.plot(betas)
plt.title(r'$\mathrm{true}\ \beta$')
sns.despine()
plt.show()
noise = np.random.normal(0, 1, size=N)
y = X.dot(betas) + noise
```
### Standard linear regression (no regularization)
```
folds = KFold(n_splits=10)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', LinearRegression())
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in enumerate(folds.split(X, y)):
pipe.fit(X[train_idx], y[train_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
```
[-0.17506196 -0.20438521 0.69783125 -0.34696657 0.36056441 0.34926236
-0.50402675 0.05336731 0.13129403 0.30056921]
R2: 0.066245. (0.354)
### Ridge
```
folds = KFold(n_splits=10)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', RidgeCV(alphas=[0.001, 0.01, 0.1, 1, 10, 100, 1000]))
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in enumerate(folds.split(X, y)):
pipe.fit(X[train_idx], y[train_idx])
preds = pipe.predict(X[test_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
```
[-0.05012806 -0.29239414 0.50244766 -0.15503275 0.31898809 0.29825579
-0.37950182 -0.04080342 0.13129442 0.25450183]
R2: 0.058763. (0.274)
### Tikhonov
The traditional solution for Tikhonov regression, for any design matrix $X$ and reponse vector $y$, is usually written as follows:
\begin{align}
\hat{\beta} = (X^{T}X + \lambda C^{T}C)^{-1}X^{T}y
\end{align}
in which $C$ represents the penalty matrix (i.e., $\Sigma^{-\frac{1}{2}}$)
and $\lambda$ the regularization parameter. In this formulation, the prior on the model is that $\beta$ is distributed with zero mean and $(\lambda C^{T}C)^{-1}$ covariance. Alternatively, the estimation of the parameters ($\beta$) can be written as:
\begin{align}
\hat{\beta} = C^{-1}(A^{T}A + \lambda I)^{-1}X^{T}y
\end{align}
with $A$ defined as:
\begin{align}
A = XC^{-1}
\end{align}
Let's define a scikit-learn style class for generic Tikhonov regression:
```
from scipy.linalg import sqrtm
from sklearn.base import BaseEstimator, RegressorMixin
from scipy.linalg import svd
class Tikhonov(BaseEstimator, RegressorMixin):
def __init__(self, sigma, lambd=1.):
self.sigma = sigma
self.lambd = lambd
def fit(self, X, y, sample_weight=None):
sig = self.sigma
self.coef_ = np.linalg.inv(X.T @ X + self.lambd * np.linalg.inv(sig)) @ X.T @ y
return self
def predict(self, X, y=None):
return X.dot(self.coef_)
class Tikhonov2(BaseEstimator, RegressorMixin):
def __init__(self, sigma, lambd=1.):
self.sigma = sigma
self.lambd = lambd
self.C = sqrtm(self.sigma)
def fit(self, X, y, sample_weight=None):
A = X @ self.C
I = np.eye(X.shape[1])
self.coef_ = np.linalg.inv(A.T @ A + self.lambd * I) @ A.T @ y
return self
def predict(self, X, y=None):
A = X.dot(self.C)
return A.dot(self.coef_)
```
Now, let's run our (cross-validated) Tikhonov regression:
```
from sklearn.model_selection import GridSearchCV
folds = KFold(n_splits=10)
tik2 = Tikhonov(sigma=cov_betas)
gs = GridSearchCV(estimator=tik2,
param_grid=dict(lambd=[0.01, 0.1, 1, 10, 100]),
cv=3)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', gs)
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in tqdm_notebook(enumerate(folds.split(X, y))):
pipe.fit(X[train_idx], y[train_idx])
preds = pipe.predict(X[test_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
plt.plot(betas)
plt.plot(gs.best_estimator_.coef_)
```
```
from sklearn.model_selection import GridSearchCV
folds = KFold(n_splits=10)
tik2 = Tikhonov2(sigma=cov_betas)
gs = GridSearchCV(estimator=tik2,
param_grid=dict(lambd=[0.01, 0.1, 1, 10, 100]),
cv=3)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', gs)
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in tqdm_notebook(enumerate(folds.split(X, y))):
pipe.fit(X[train_idx], y[train_idx])
preds = pipe.predict(X[test_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
plt.plot(betas)
plt.plot(tik2.C @ gs.best_estimator_.coef_)
```
## Banded ridge
```
N, K = 200, 500
X = np.random.normal(0, 1, (N, K))
X = np.c_[np.ones(N), X]
mu_betas = np.zeros(K+1)
I = np.eye(K+1)
lambdas = np.r_[1, np.repeat([1, 2], repeats=K//2)].astype(float)
cov_betas = lambdas ** -2 * I
betas = np.random.multivariate_normal(mu_betas, cov_betas).T
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.imshow(cov_betas)
plt.title(r'$\mathrm{true\ cov}[\beta]$')
plt.subplot(1, 2, 2)
plt.plot(betas)
plt.title(r'$\mathrm{true}\ \beta$')
sns.despine()
plt.show()
noise = np.random.normal(0, 1, size=N)
y = X.dot(betas) + noise
```
```
class BandedRidge(BaseEstimator, RegressorMixin):
def __init__(self, lambd, lambdas):
self.lambd = lambd
self.lambdas = lambdas
def fit(self, X, y, sample_weight=None):
I = np.eye(X.shape[1])
I[np.diag_indices_from(I)] = self.lambdas ** 2
self.coef_ = np.linalg.inv(X.T @ X + self.lambd * I) @ X.T @ y
return self
def predict(self, X, y=None):
return X.dot(self.coef_)
```
```
folds = KFold(n_splits=10)
br = BandedRidge(lambd=1, lambdas=lambdas)
gs = GridSearchCV(estimator=br,
param_grid=dict(lambd=[0.001, 0.01, 0.1, 1, 10, 100, 1000]),
cv=3)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', gs)
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in tqdm_notebook(enumerate(folds.split(X, y))):
pipe.fit(X[train_idx], y[train_idx])
preds = pipe.predict(X[test_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
plt.plot(betas)
plt.plot(gs.best_estimator_.coef_)
```
vs. Ridge:
```
folds = KFold(n_splits=10)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', RidgeCV(alphas=[0.01, 0.1, 1, 10, 100, 1000]))
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in enumerate(folds.split(X, y)):
pipe.fit(X[train_idx], y[train_idx])
preds = pipe.predict(X[test_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
plt.plot(betas)
plt.plot(pipe.named_steps['model'].coef_)
```
and standard LR:
```
folds = KFold(n_splits=10)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', LinearRegression())
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in enumerate(folds.split(X, y)):
pipe.fit(X[train_idx], y[train_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
```
[0.53290474 0.39455398 0.32529149 0.61533419 0.42162928 0.5572809
0.13712307 0.07211125 0.53426102 0.39016258]
R2: 0.398065. (0.170)
Equivalence with scaling:
```
class BandedRidge2(BaseEstimator, RegressorMixin):
def __init__(self, lambd, lambdas):
self.lambd = lambd
self.lambdas = lambdas
def fit(self, X, y, sample_weight=None):
I = np.eye(X.shape[1])
X /= self.lambdas
self.coef_ = np.linalg.inv(X.T @ X + self.lambd * I) @ X.T @ y
return self
def predict(self, X, y=None):
return X.dot(self.coef_)
folds = KFold(n_splits=10)
br = BandedRidge2(lambd=1, lambdas=lambdas)
gs = GridSearchCV(estimator=br,
param_grid=dict(lambd=[0.001, 0.01, 0.1, 1, 10, 100, 1000]),
cv=3)
pipe = Pipeline([
('scaler', StandardScaler()),
('model', gs)
])
scores = np.zeros(10)
for i, (train_idx, test_idx) in tqdm_notebook(enumerate(folds.split(X, y))):
pipe.fit(X[train_idx], y[train_idx])
preds = pipe.predict(X[test_idx])
scores[i] = pipe.score(X[test_idx], y[test_idx])
print(scores, end='\n\n')
print("R2: %3f. (%.3f)" % (scores.mean(), scores.std()))
plt.plot(betas)
plt.plot(gs.best_estimator_.coef_)
```
| c3b7b450bd447680958cad0684a5c64baeeab36f | 289,260 | ipynb | Jupyter Notebook | tikhonov_regression_with_non_sphrerical_prior.ipynb | lukassnoek/random_notebooks | d7df507ce2b6949726c29de0022aae2d0dc583ac | [
"MIT"
] | 3 | 2018-05-28T13:45:11.000Z | 2021-08-31T11:41:34.000Z | tikhonov_regression_with_non_sphrerical_prior.ipynb | lukassnoek/random_notebooks | d7df507ce2b6949726c29de0022aae2d0dc583ac | [
"MIT"
] | null | null | null | tikhonov_regression_with_non_sphrerical_prior.ipynb | lukassnoek/random_notebooks | d7df507ce2b6949726c29de0022aae2d0dc583ac | [
"MIT"
] | 2 | 2018-05-28T13:46:05.000Z | 2018-06-11T15:25:59.000Z | 375.662338 | 47,584 | 0.933143 | true | 3,100 | Qwen/Qwen-72B | 1. YES
2. YES | 0.907312 | 0.795658 | 0.72191 | __label__eng_Latn | 0.277933 | 0.515571 |
# Direct Inversion of the Iterative Subspace
When solving systems of linear (or nonlinear) equations, iterative methods are often employed. Unfortunately, such methods often suffer from convergence issues such as numerical instability, slow convergence, and significant computational expense when applied to difficult problems. In these cases, convergence accelleration methods may be applied to both speed up, stabilize and/or reduce the cost for the convergence patterns of these methods, so that solving such problems become computationally tractable. One such method is known as the direct inversion of the iterative subspace (DIIS) method, which is commonly applied to address convergence issues within self consistent field computations in Hartree-Fock theory (and other iterative electronic structure methods). In this tutorial, we'll introduce the theory of DIIS for a general iterative procedure, before integrating DIIS into our previous implementation of RHF.
## I. Theory
DIIS is a widely applicable convergence acceleration method, which is applicable to numerous problems in linear algebra and the computational sciences, as well as quantum chemistry in particular. Therefore, we will introduce the theory of this method in the general sense, before seeking to apply it to SCF.
Suppose that for a given problem, there exist a set of trial vectors $\{\mid{\bf p}_i\,\rangle\}$ which have been generated iteratively, converging toward the true solution, $\mid{\bf p}^f\,\rangle$. Then the true solution can be approximately constructed as a linear combination of the trial vectors,
$$\mid{\bf p}\,\rangle = \sum_ic_i\mid{\bf p}_i\,\rangle,$$
where we require that the residual vector
$$\mid{\bf r}\,\rangle = \sum_ic_i\mid{\bf r}_i\,\rangle\,;\;\;\; \mid{\bf r}_i\,\rangle
=\, \mid{\bf p}_{i+1}\,\rangle - \mid{\bf p}_i\,\rangle$$
is a least-squares approximate to the zero vector, according to the constraint
$$\sum_i c_i = 1.$$
This constraint on the expansion coefficients can be seen by noting that each trial function ${\bf p}_i$ may be represented as an error vector applied to the true solution, $\mid{\bf p}^f\,\rangle + \mid{\bf e}_i\,\rangle$. Then
\begin{align}
\mid{\bf p}\,\rangle &= \sum_ic_i\mid{\bf p}_i\,\rangle\\
&= \sum_i c_i(\mid{\bf p}^f\,\rangle + \mid{\bf e}_i\,\rangle)\\
&= \mid{\bf p}^f\,\rangle\sum_i c_i + \sum_i c_i\mid{\bf e}_i\,\rangle
\end{align}
Convergence results in a minimization of the error (causing the second term to vanish); for the DIIS solution vector $\mid{\bf p}\,\rangle$ and the true solution vector $\mid{\bf p}^f\,\rangle$ to be equal, it must be that $\sum_i c_i = 1$. We satisfy our condition for the residual vector by minimizing its norm,
$$\langle\,{\bf r}\mid{\bf r}\,\rangle = \sum_{ij} c_i^* c_j \langle\,{\bf r}_i\mid{\bf r}_j\,\rangle,$$
using Lagrange's method of undetermined coefficients subject to the constraint on $\{c_i\}$:
$${\cal L} = {\bf c}^{\dagger}{\bf Bc} - \lambda\left(1 - \sum_i c_i\right)$$
where $B_{ij} = \langle {\bf r}_i\mid {\bf r}_j\rangle$ is the matrix of residual vector overlaps. Minimization of the Lagrangian with respect to the coefficient $c_k$ yields (for real values)
\begin{align}
\frac{\partial{\cal L}}{\partial c_k} = 0 &= \sum_j c_jB_{jk} + \sum_i c_iB_{ik} - \lambda\\
&= 2\sum_ic_iB_{ik} - \lambda
\end{align}
which has matrix representation
\begin{equation}
\begin{pmatrix}
B_{11} & B_{12} & \cdots & B_{1m} & -1 \\
B_{21} & B_{22} & \cdots & B_{2m} & -1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
B_{n1} & B_{n2} & \cdots & B_{nm} & -1 \\
-1 & -1 & \cdots & -1 & 0
\end{pmatrix}
\begin{pmatrix}
c_1\\
c_2\\
\vdots \\
c_n\\
\lambda
\end{pmatrix}
=
\begin{pmatrix}
0\\
0\\
\vdots\\
0\\
-1
\end{pmatrix},
\end{equation}
which we will refer to as the Pulay equation, named after the inventor of DIIS. It is worth noting at this point that our trial vectors, residual vectors, and solution vector may in fact be tensors of arbitrary rank; it is for this reason that we have used the generic notation of Dirac in the above discussion to denote the inner product between such objects.
## II. Algorithms for DIIS
The general DIIS procedure, as described above, has the following structure during each iteration:
#### Algorithm 1: Generic DIIS procedure
1. Compute new trial vector, $\mid{\bf p}_{i+1}\,\rangle$, append to list of trial vectors
2. Compute new residual vector, $\mid{\bf r}_{i+1}\,\rangle$, append to list of trial vectors
3. Check convergence criteria
- If RMSD of $\mid{\bf r}_{i+1}\,\rangle$ sufficiently small, and
- If change in DIIS solution vector $\mid{\bf p}\,\rangle$ sufficiently small, break
4. Build **B** matrix from previous residual vectors
5. Solve Pulay equation for coefficients $\{c_i\}$
6. Compute DIIS solution vector $\mid{\bf p}\,\rangle$
For SCF iteration, the most common choice of trial vector is the Fock matrix **F**; this choice has the advantage over other potential choices (e.g., the density matrix **D**) of **F** not being idempotent, so that it may benefit from extrapolation. The residual vector is commonly chosen to be the orbital gradient in the AO basis,
$$g_{\mu\nu} = ({\bf FDS} - {\bf SDF})_{\mu\nu},$$
however the better choice (which we will make in our implementation!) is to orthogonormalize the basis of the gradient with the inverse overlap metric ${\bf A} = {\bf S}^{-1/2}$:
$$r_{\mu\nu} = ({\bf A}^{\rm T}({\bf FDS} - {\bf SDF}){\bf A})_{\mu\nu}.$$
Therefore, the SCF-specific DIIS procedure (integrated into the SCF iteration algorithm) will be:
#### Algorithm 2: DIIS within an SCF Iteration
1. Compute **F**, append to list of previous trial vectors
2. Compute AO orbital gradient **r**, append to list of previous residual vectors
3. Compute RHF energy
3. Check convergence criteria
- If RMSD of **r** sufficiently small, and
- If change in SCF energy sufficiently small, break
4. Build **B** matrix from previous AO gradient vectors
5. Solve Pulay equation for coefficients $\{c_i\}$
6. Compute DIIS solution vector **F_DIIS** from $\{c_i\}$ and previous trial vectors
7. Compute new orbital guess with **F_DIIS**
## III. Implementation
In order to implement DIIS, we're going to integrate it into an existing RHF program. Since we just-so-happened to write such a program in the last tutorial, let's re-use the part of the code before the SCF integration which won't change when we include DIIS:
```julia
# ==> Basic Setup <==
# Import statements
using PyCall: pyimport
psi4 = pyimport("psi4")
np = pyimport("numpy") # used only to cast to Psi4 arrays
using TensorOperations: @tensor
using LinearAlgebra: Diagonal, Hermitian, eigen, tr, norm, dot
using Printf: @printf
# Memory specification
psi4.set_memory(Int(5e8))
numpy_memory = 2
# Set output file
psi4.core.set_output_file("output.dat", false)
# Define Physicist's water -- don't forget C1 symmetry!
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
# Set computation options
psi4.set_options(Dict("basis" => "cc-pvdz",
"scf_type" => "pk",
"e_convergence" => 1e-8))
# Maximum SCF iterations
MAXITER = 40
# Energy convergence criterion
E_conv = 1.0e-6
D_conv = 1.0e-3
```
Memory set to 476.837 MiB by Python driver.
0.001
```julia
# ==> Static 1e- & 2e- Properties <==
# Class instantiation
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option("basis"))
mints = psi4.core.MintsHelper(wfn.basisset())
# Overlap matrix
S = np.asarray(mints.ao_overlap()) # we only need a copy
# Number of basis Functions & doubly occupied orbitals
nbf = size(S)[1]
ndocc = wfn.nalpha()
println("Number of occupied orbitals: ", ndocc)
println("Number of basis functions: ", nbf)
# Memory check for ERI tensor
I_size = nbf^4 * 8.e-9
println("\nSize of the ERI tensor will be $I_size GB.")
memory_footprint = I_size * 1.5
if I_size > numpy_memory
psi4.core.clean()
throw(OutOfMemoryError("Estimated memory utilization ($memory_footprint GB) exceeds " *
"allotted memory limit of $numpy_memory GB."))
end
# Build ERI Tensor
I = np.asarray(mints.ao_eri()) # we only need a copy
# Build core Hamiltonian
T = np.asarray(mints.ao_kinetic()) # we only need a copy
V = np.asarray(mints.ao_potential()) # we only need a copy
H = T + V;
```
Number of occupied orbitals: 5
Number of basis functions: 24
Size of the ERI tensor will be 0.0026542080000000003 GB.
```julia
# ==> CORE Guess <==
# AO Orthogonalization Matrix
A = mints.ao_overlap()
A.power(-0.5, 1.e-16) # ≈ Julia's A^(-0.5) after psi4view()
A = np.asarray(A)
# Transformed Fock matrix
F_p = A * H * A
# Diagonalize F_p for eigenvalues & eigenvectors with Julia
e, C_p = eigen(Hermitian(F_p))
# Transform C_p back into AO basis
C = A * C_p
# Grab occupied orbitals
C_occ = C[:, 1:ndocc]
# Build density matrix from occupied orbitals
D = C_occ * C_occ'
# Nuclear Repulsion Energy
E_nuc = mol.nuclear_repulsion_energy()
```
8.002366482173422
Now let's put DIIS into action. Before our iterations begin, we'll need to create empty lists to hold our previous residual vectors (AO orbital gradients) and trial vectors (previous Fock matrices), along with setting starting values for our SCF energy and previous energy:
```julia
# ==> Pre-Iteration Setup <==
# SCF & Previous Energy
SCF_E = 0.0
E_old = 0.0;
```
Now we're ready to write our SCF iterations according to Algorithm 2. Here are some hints which may help you along the way:
#### Starting DIIS
Since DIIS builds the approximate solution vector $\mid{\bf p}\,\rangle$ as a linear combination of the previous trial vectors $\{\mid{\bf p}_i\,\rangle\}$, there's no need to perform DIIS on the first SCF iteration, since there's only one trial vector for DIIS to use!
#### Building **B**
1. The **B** matrix in the Lagrange equation is really $\tilde{\bf B} = \begin{pmatrix} {\bf B} & -1\\ -1 & 0\end{pmatrix}$.
2. Since **B** is the matrix of residual overlaps, it will be a square matrix of dimension equal to the number of residual vectors. If **B** is an $N\times N$ matrix, how big is $\tilde{\bf B}$?
3. Since our residuals are real, **B** will be a symmetric matrix.
4. To build $\tilde{\bf B}$, make an empty array of the appropriate dimension, then use array indexing to set the values of the elements.
#### Solving the Pulay equation
1. Use built-in Julia functionality to make your life easier.
2. The solution vector for the Pulay equation is $\tilde{\bf c} = \begin{pmatrix} {\bf c}\\ \lambda\end{pmatrix}$, where $\lambda$ is the Lagrange multiplier, and the right hand side is $\begin{pmatrix} {\bf 0}\\ -1\end{pmatrix}$.
```julia
# Start from fresh orbitals
F_p = A * H * A
e, C_p = eigen(Hermitian(F_p))
C = A * C_p
C_occ = C[:, 1:ndocc]
D = C_occ * C_occ' ;
# Trial & Residual Vector Lists
F_list = []
DIIS_RESID = []
# ==> SCF Iterations w/ DIIS <==
println("==> Starting SCF Iterations <==")
SCF_E = let SCF_E = SCF_E, E_old = E_old, D = D
# Begin Iterations
for scf_iter in 1:MAXITER
# Build Fock matrix
@tensor G[p,q] := (2I[p,q,r,s] - I[p,r,q,s]) * D[r,s]
F = H + G
# Build DIIS Residual
diis_r = A * (F * D * S - S * D * F) * A
# Append trial & residual vectors to lists
push!(F_list, F)
push!(DIIS_RESID, diis_r)
# Compute RHF energy
SCF_E = tr((H + F) * D) + E_nuc
dE = SCF_E - E_old
dRMS = norm(diis_r)
@printf("SCF Iteration %3d: Energy = %4.16f dE = %1.5e dRMS = %1.5e \n",
scf_iter, SCF_E, SCF_E - E_old, dRMS)
# SCF Converged?
if abs(SCF_E - E_old) < E_conv && dRMS < D_conv
break
end
E_old = SCF_E
if scf_iter >= 2
# Build B matrix
B_dim = length(F_list) + 1
B = zeros(B_dim, B_dim)
B[end, :] .= -1
B[: , end] .= -1
B[end, end] = 0
for i in eachindex(F_list), j in eachindex(F_list)
B[i, j] = dot(DIIS_RESID[i], DIIS_RESID[j])
end
# Build RHS of Pulay equation
rhs = zeros(B_dim)
rhs[end] = -1
# Solve Pulay equation for c_i's with Julia
coeff = B \ rhs
# Build DIIS Fock matrix
F = zeros(size(F))
for i in 1:length(coeff) - 1
F += coeff[i] * F_list[i]
end
end
# Compute new orbital guess with DIIS Fock matrix
F_p = A * F * A
e, C_p = eigen(Hermitian(F_p))
C = A * C_p
C_occ = C[:, 1:ndocc]
D = C_occ * C_occ'
# MAXITER exceeded?
if scf_iter == MAXITER
psi4.core.clean()
throw(MethodError("Maximum number of SCF iterations exceeded."))
end
end
SCF_E
end
# Post iterations
println("\nSCF converged.")
println("Final RHF Energy: $SCF_E [Eh]")
```
==> Starting SCF Iterations <==
SCF Iteration 1: Energy = -68.9800327333871053 dE = -6.89800e+01 dRMS = 2.79722e+00
SCF Iteration 2: Energy = -69.6472544393141675 dE = -6.67222e-01 dRMS = 2.57832e+00
SCF Iteration 3: Energy = -75.7919291462249021 dE = -6.14467e+00 dRMS = 6.94257e-01
SCF Iteration 4: Energy = -75.9721892296710735 dE = -1.80260e-01 dRMS = 1.81547e-01
SCF Iteration 5: Energy = -75.9893690602362710 dE = -1.71798e-02 dRMS = 2.09996e-02
SCF Iteration 6: Energy = -75.9897163367029123 dE = -3.47276e-04 dRMS = 1.28546e-02
SCF Iteration 7: Energy = -75.9897932415930768 dE = -7.69049e-05 dRMS = 1.49088e-03
SCF Iteration 8: Energy = -75.9897956274068349 dE = -2.38581e-06 dRMS = 6.18909e-04
SCF Iteration 9: Energy = -75.9897957845313954 dE = -1.57125e-07 dRMS = 4.14761e-05
SCF converged.
Final RHF Energy: -75.9897957845314 [Eh]
Congratulations! You've written your very own Restricted Hartree-Fock program with DIIS convergence accelleration! Finally, let's check your final RHF energy against <span style='font-variant: small-caps'> Psi4</span>:
```julia
# Compare to Psi4
SCF_E_psi = psi4.energy("SCF")
psi4.compare_values(SCF_E_psi, SCF_E, 6, "SCF Energy")
```
SCF Energy........................................................PASSED
true
## References
1. P. Pulay. *Chem. Phys. Lett.* **73**, 393-398 (1980)
2. C. David Sherrill. *"Some comments on accellerating convergence of iterative sequences using direct inversion of the iterative subspace (DIIS)".* Available at: vergil.chemistry.gatech.edu/notes/diis/diis.pdf. (1998)
| 59715c8a9bedc5c87e8692a20987713397774e4e | 19,896 | ipynb | Jupyter Notebook | Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb | zyth0s/psi4julia | beb0384028f1a3654b8a2f8690b7db5bd9c24b86 | [
"BSD-3-Clause"
] | 4 | 2021-02-13T22:14:21.000Z | 2021-04-17T07:34:10.000Z | Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb | zyth0s/psi4julia | beb0384028f1a3654b8a2f8690b7db5bd9c24b86 | [
"BSD-3-Clause"
] | null | null | null | Tutorials/03_Hartree-Fock/3b_rhf-diis.ipynb | zyth0s/psi4julia | beb0384028f1a3654b8a2f8690b7db5bd9c24b86 | [
"BSD-3-Clause"
] | null | null | null | 41.798319 | 938 | 0.559208 | true | 4,443 | Qwen/Qwen-72B | 1. YES
2. YES | 0.843895 | 0.841826 | 0.710413 | __label__eng_Latn | 0.932634 | 0.488858 |
```python
# import Python libraries
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import sympy as sym
from sympy.plotting import plot
import pandas as pd
from IPython.display import display
from IPython.core.display import Math
```
```python
# time elbow_flexion BIClong BICshort BRA
r_ef = np.loadtxt('./../data/r_elbowflexors.mot', skiprows=7)
f_ef = np.loadtxt('./../data/f_elbowflexors.mot', skiprows=7)
```
```python
m_ef = r_ef*1
m_ef[:, 2:] = r_ef[:, 2:]*f_ef[:, 2:]
```
```python
labels = ['Biceps long head', 'Biceps short head', 'Brachialis']
fig, ax = plt.subplots(nrows=1, ncols=3, sharex=True, figsize=(10, 4))
ax[0].plot(r_ef[:, 1], r_ef[:, 2:])
#ax[0].set_xlabel('Elbow angle $(\,^o)$')
ax[0].set_title('Moment arm (m)')
ax[1].plot(f_ef[:, 1], f_ef[:, 2:])
ax[1].set_xlabel('Elbow angle $(\,^o)$', fontsize=16)
ax[1].set_title('Maximum force (N)')
ax[2].plot(m_ef[:, 1], m_ef[:, 2:])
#ax[2].set_xlabel('Elbow angle $(\,^o)$')
ax[2].set_title('Maximum torque (Nm)')
ax[2].legend(labels, loc='best', framealpha=.5)
ax[2].set_xlim(np.min(r_ef[:, 1]), np.max(r_ef[:, 1]))
plt.tight_layout()
plt.show()
```
```python
a_ef = np.array([624.3, 435.56, 987.26])/50 # 50 N/cm2
print(a_ef)
```
[ 12.486 8.7112 19.7452]
```python
from scipy.optimize import minimize
```
```python
def cf_f1(x):
"""Cost function: sum of forces."""
return x[0] + x[1] + x[2]
def cf_f2(x):
"""Cost function: sum of forces squared."""
return x[0]**2 + x[1]**2 + x[2]**2
def cf_fpcsa2(x, a):
"""Cost function: sum of squared muscle stresses."""
return (x[0]/a[0])**2 + (x[1]/a[1])**2 + (x[2]/a[2])**2
def cf_fmmax3(x, m):
"""Cost function: sum of cubic forces normalized by moments."""
return (x[0]/m[0])**3 + (x[1]/m[1])**3 + (x[2]/m[2])**3
```
```python
def cf_f1d(x):
"""Derivative of cost function: sum of forces."""
dfdx0 = 1
dfdx1 = 1
dfdx2 = 1
return np.array([dfdx0, dfdx1, dfdx2])
def cf_f2d(x):
"""Derivative of cost function: sum of forces squared."""
dfdx0 = 2*x[0]
dfdx1 = 2*x[1]
dfdx2 = 2*x[2]
return np.array([dfdx0, dfdx1, dfdx2])
def cf_fpcsa2d(x, a):
"""Derivative of cost function: sum of squared muscle stresses."""
dfdx0 = 2*x[0]/a[0]**2
dfdx1 = 2*x[1]/a[1]**2
dfdx2 = 2*x[2]/a[2]**2
return np.array([dfdx0, dfdx1, dfdx2])
def cf_fmmax3d(x, m):
"""Derivative of cost function: sum of cubic forces normalized by moments."""
dfdx0 = 3*x[0]**2/m[0]**3
dfdx1 = 3*x[1]**2/m[1]**3
dfdx2 = 3*x[2]**2/m[2]**3
return np.array([dfdx0, dfdx1, dfdx2])
```
```python
M = 20 # desired torque at the elbow
iang = 69 # which will give the closest value to 90 degrees
r = r_ef[iang, 2:]
f0 = f_ef[iang, 2:]
a = a_ef
m = m_ef[iang, 2:]
x0 = f_ef[iang, 2:]/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 20
x0 = [ 57.51311369 36.29974032 89.6470056 ]
r * x0 = 6.62200444607
```python
bnds = ((0, f0[0]), (0, f0[1]), (0, f0[2]))
```
```python
# use this in combination with the parameter bounds:
cons = ({'type': 'eq',
'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]),
'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)})
```
```python
# to enter everything as constraints:
cons = ({'type': 'eq',
'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]),
'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[0]-x[0],
'jac' : lambda x, r, f0, M: np.array([-1, 0, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[1]-x[1],
'jac' : lambda x, r, f0, M: np.array([0, -1, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[2]-x[2],
'jac' : lambda x, r, f0, M: np.array([0, 0, -1]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[0],
'jac' : lambda x, r, f0, M: np.array([1, 0, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[1],
'jac' : lambda x, r, f0, M: np.array([0, 1, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[2],
'jac' : lambda x, r, f0, M: np.array([0, 0, 1]), 'args': (r, f0, M)})
```
```python
f1r = minimize(fun=cf_f1, x0=x0, args=(), jac=cf_f1d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 409.5926601
Iterations: 7
Function evaluations: 7
Gradient evaluations: 7
```python
f2r = minimize(fun=cf_f2, x0=x0, args=(), jac=cf_f2d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 75657.3847913
Iterations: 5
Function evaluations: 6
Gradient evaluations: 5
```python
fpcsa2r = minimize(fun=cf_fpcsa2, x0=x0, args=(a,), jac=cf_fpcsa2d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 529.96397777
Iterations: 11
Function evaluations: 11
Gradient evaluations: 11
```python
fmmax3r = minimize(fun=cf_fmmax3, x0=x0, args=(m,), jac=cf_fmmax3d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 1075.13889317
Iterations: 12
Function evaluations: 12
Gradient evaluations: 12
```python
dat = np.vstack((np.around(r*100,1), np.around(a,1), np.around(f0,0), np.around(m,1)))
opt = np.around(np.vstack((f1r.x, f2r.x, fpcsa2r.x, fmmax3r.x)), 1)
er = ['-', '-', '-', '-',
np.sum(r*f1r.x)-M, np.sum(r*f2r.x)-M, np.sum(r*fpcsa2r.x)-M, np.sum(r*fmmax3r.x)-M]
data = np.vstack((np.vstack((dat, opt)).T, er)).T
rows = ['$\text{Moment arm}\;[cm]$', '$pcsa\;[cm^2]$', '$F_{max}\;[N]$', '$M_{max}\;[Nm]$',
'$\sum F_i$', '$\sum F_i^2$', '$\sum(F_i/pcsa_i)^2$', '$\sum(F_i/M_{max,i})^3$']
cols = ['Biceps long head', 'Biceps short head', 'Brachialis', 'Error in M']
df = pd.DataFrame(data, index=rows, columns=cols)
print('\nComparison of different cost functions for solving the distribution problem')
df
```
Comparison of different cost functions for solving the distribution problem
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Biceps long head</th>
<th>Biceps short head</th>
<th>Brachialis</th>
<th>Error in M</th>
</tr>
</thead>
<tbody>
<tr>
<th>$\text{Moment arm}\;[cm]$</th>
<td>4.9</td>
<td>4.9</td>
<td>2.3</td>
<td>-</td>
</tr>
<tr>
<th>$pcsa\;[cm^2]$</th>
<td>12.5</td>
<td>8.7</td>
<td>19.7</td>
<td>-</td>
</tr>
<tr>
<th>$F_{max}\;[N]$</th>
<td>575.0</td>
<td>363.0</td>
<td>896.0</td>
<td>-</td>
</tr>
<tr>
<th>$M_{max}\;[Nm]$</th>
<td>28.1</td>
<td>17.7</td>
<td>20.4</td>
<td>-</td>
</tr>
<tr>
<th>$\sum F_i$</th>
<td>215.4</td>
<td>194.2</td>
<td>0.0</td>
<td>0.0</td>
</tr>
<tr>
<th>$\sum F_i^2$</th>
<td>184.7</td>
<td>184.7</td>
<td>86.1</td>
<td>0.0</td>
</tr>
<tr>
<th>$\sum(F_i/pcsa_i)^2$</th>
<td>201.7</td>
<td>98.2</td>
<td>235.2</td>
<td>0.0</td>
</tr>
<tr>
<th>$\sum(F_i/M_{max,i})^3$</th>
<td>241.1</td>
<td>120.9</td>
<td>102.0</td>
<td>-3.5527136788e-15</td>
</tr>
</tbody>
</table>
</div>
| 6203a281dc4f596ec8ae6d15534f7f5271f4320c | 68,935 | ipynb | Jupyter Notebook | courses/modsim2018/ahmadhassan/Ahmad_Task20.ipynb | ahmadhassan01/bmc | 3114b7d3ecd1f7c678fac0c04e8e139ac2898992 | [
"MIT"
] | null | null | null | courses/modsim2018/ahmadhassan/Ahmad_Task20.ipynb | ahmadhassan01/bmc | 3114b7d3ecd1f7c678fac0c04e8e139ac2898992 | [
"MIT"
] | null | null | null | courses/modsim2018/ahmadhassan/Ahmad_Task20.ipynb | ahmadhassan01/bmc | 3114b7d3ecd1f7c678fac0c04e8e139ac2898992 | [
"MIT"
] | null | null | null | 134.376218 | 53,584 | 0.835149 | true | 3,085 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.757794 | 0.659734 | __label__eng_Latn | 0.319614 | 0.371113 |
```python
import sys
import numpy as np
print(sys.version)
np.__version__
```
3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)]
'1.20.3'
```python
#Criação de matriz com numpy matrix
matriz = np.matrix("1, 2, 3;4, 5, 6")
print(matriz)
```
[[1 2 3]
[4 5 6]]
```python
matriz2 = np.matrix([[1, 2, 3], [4, 5, 6]])
print(matriz2)
```
[[1 2 3]
[4 5 6]]
```python
matriz3 = np.matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(matriz3)
```
[[1 2 3]
[4 5 6]
[7 8 9]]
```python
#traz a dimensão da matriz linha 1 coluna 1
matriz3[1, 1]
```
5
### Matriz Esparsa
Uma matriz esparsa possui uma grande quantidade de elementos que valem zero (ou não presentes, ou não necessários). Matrizes esparsas têm aplicações em problemas de engenharia, física (por exemplo, o método das malhas para resolução de circuitos elétricos ou sistemas de equações lineares). Também têm aplicação em computação, como por exemplo em tecnologias de armazenamento de dados.
A matriz esparsa é implementada através de um conjunto de listas ligadas que apontam para elementos diferentes de zero. De forma que os elementos que possuem valor zero não são armazenados
```python
#importação de matriz esparsa do scipy
import scipy.sparse
```
```python
linhas = np.array([0,1,2,3])
colunas = np.array([1,2,3,4])
valores = np.array([10,20,30,40])
```
```python
#criação de matriz esparsa
mat = scipy.sparse.coo_matrix((valores, (linhas, colunas))) ; print(mat)
```
(0, 1) 10
(1, 2) 20
(2, 3) 30
(3, 4) 40
```python
#criação de matriz esparsa densa
print(mat.todense())
```
[[ 0 10 0 0 0]
[ 0 0 20 0 0]
[ 0 0 0 30 0]
[ 0 0 0 0 40]]
```python
#verifica se é uma matriz esparsa
scipy.sparse.isspmatrix_coo(mat)
```
True
### Operações
```python
a = np.array([[1, 2], [3, 4]])
print(a)
```
[[1 2]
[3 4]]
```python
a * a
```
array([[ 1, 4],
[ 9, 16]])
```python
A = np.mat(a)
A
```
matrix([[1, 2],
[3, 4]])
```python
A * A
```
matrix([[ 7, 10],
[15, 22]])
## $$ \boxed{ \begin{align} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} & \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix} \end{align} }$$
```python
from IPython.display import Image
Image('cap3/imagens/Matriz.png')
```
```python
#Fazendo um array ter o mesmo comportamento de uma matriz
np.dot(a , a)
```
array([[ 7, 10],
[15, 22]])
```python
#Converter array para matriz
matrizA = np.asmatrix(a) ;matrizA
```
matrix([[1, 2],
[3, 4]])
```python
matrizA * matrizA
```
matrix([[ 7, 10],
[15, 22]])
```python
#converter matriz para array
arrayA = np.asarray(matrizA) ; arrayA
```
array([[1, 2],
[3, 4]])
```python
arrayA * arrayA
```
array([[ 1, 4],
[ 9, 16]])
| 49555fb7debfe6bbd49bdfc39b0ef947d8baa0e8 | 28,079 | ipynb | Jupyter Notebook | NumPy/ArrayEmatrizNumpy.ipynb | DjCod3r/Jupyter | bbf8e0eb5ae766ef509968be79541eba9389c544 | [
"MIT"
] | 1 | 2022-03-03T14:40:51.000Z | 2022-03-03T14:40:51.000Z | NumPy/ArrayEmatrizNumpy.ipynb | DjCod3r/Jupyter | bbf8e0eb5ae766ef509968be79541eba9389c544 | [
"MIT"
] | null | null | null | NumPy/ArrayEmatrizNumpy.ipynb | DjCod3r/Jupyter | bbf8e0eb5ae766ef509968be79541eba9389c544 | [
"MIT"
] | null | null | null | 60.126338 | 19,138 | 0.803412 | true | 1,116 | Qwen/Qwen-72B | 1. YES
2. YES | 0.812867 | 0.865224 | 0.703312 | __label__por_Latn | 0.944887 | 0.472362 |
# Laplace transform
This notebook is a short tutorial of Laplace transform using SymPy.
The main functions to use are ``laplace_transform`` and ``inverse_laplace_transform``.
```python
from sympy import *
```
```python
init_session()
```
IPython console for SymPy 1.0 (Python 2.7.13-64-bit) (ground types: python)
These commands were executed:
>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
>>> init_printing()
Documentation can be found at http://docs.sympy.org/1.0/
Let us compute the Laplace transform from variable $t$ to $s$; we then impose the condition that $t$ is real and positive.
```python
t = symbols("t", real=True, positive=True)
s = symbols("s")
```
To calculate the Laplace transform of the expression $t^4$, we enter
```python
laplace_transform(t**4, t, s)
```
This function returns ``(F, a, cond)`` where ``F`` is the Laplace transform of ``f``, $\mathcal{R}(s)>a$ is the half-plane of convergence, and ``cond`` are auxiliary convergence conditions.
If we are not interested in the conditions for the convergence of this transform, we can use ``noconds=True``
```python
laplace_transform(t**4, t, s, noconds=True)
```
```python
fun = 1/((s-2)*(s-1)**2)
fun
```
```python
inverse_laplace_transform(fun, s, t)
```
Right now, SymPy does not support the transformation of derivatives.
If we do
```python
laplace_transform(f(t).diff(t), t, s, noconds=True)
```
we do not obtain the expected
```python
s*LaplaceTransform(f(t), t, s) - f(0)
```
or,
$$\mathcal{L}\lbrace f^{(n)}(t)\rbrace = s^n F(s) - \sum_{k=1}^{n} s^{n - k} f^{(k - 1)}(0)\, ,$$
in general.
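As a quick sanity check of this property, we can verify it for a concrete function such as $f(t)=t^4$ (for which $f(0)=0$), using the symbols already defined above:

```python
# Check L{f'(t)} = s*F(s) - f(0) for f(t) = t**4
f_t = t**4
F = laplace_transform(f_t, t, s, noconds=True)
lhs = laplace_transform(f_t.diff(t), t, s, noconds=True)
simplify(lhs - (s*F - f_t.subs(t, 0)))  # should evaluate to 0
```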
We can still operate with the transformation of a differential equation.
For example, let us consider the equation
$$\frac{d f(t)}{dt} = 3f(t) + e^{-t}\, ,$$
that has as Laplace transform
$$sF(s) - f(0) = 3F(s) + \frac{1}{s+1}\, .$$
```python
eq = Eq(s*LaplaceTransform(f(t), t, s) - f(0),
3*LaplaceTransform(f(t), t, s) + 1/(s +1))
eq
```
We then solve for $F(s)$
```python
sol = solve(eq, LaplaceTransform(f(t), t, s))
sol
```
and compute the inverse Laplace transform
```python
inverse_laplace_transform(sol[0], s, t)
```
and we verify this using ``dsolve``
```python
factor(dsolve(f(t).diff(t) - 3*f(t) - exp(-t)))
```
that is equal if $4C_1 = 4f(0) + 1$.
It is common to use partial fraction decomposition when computing inverse
Laplace transforms. We can do this using ``apart``, as follows
```python
frac = 1/(x**2*(x**2 + 1))
frac
```
```python
apart(frac)
```
We can also compute the Laplace transform of Heaviside
and Dirac's Delta "functions"
```python
laplace_transform(Heaviside(t - 3), t, s, noconds=True)
```
```python
laplace_transform(DiracDelta(t - 2), t, s, noconds=True)
```
```python
```
The next cell changes the format of the notebook.
```python
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
```
<link href='http://fonts.googleapis.com/css?family=Fenix' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Alegreya+Sans:100,300,400,500,700,800,900,100italic,300italic,400italic,500italic,700italic,800italic,900italic' rel='stylesheet' type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Source+Code+Pro:300,400' rel='stylesheet' type='text/css'>
<style>
/* Based on Lorena Barba template available at: https://github.com/barbagroup/AeroPython/blob/master/styles/custom.css*/
@font-face {
font-family: "Computer Modern";
src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');
}
div.cell{
width:800px;
margin-left:16% !important;
margin-right:auto;
}
h1 {
font-family: 'Alegreya Sans', sans-serif;
}
h2 {
font-family: 'Fenix', serif;
}
h3{
font-family: 'Fenix', serif;
margin-top:12px;
margin-bottom: 3px;
}
h4{
font-family: 'Fenix', serif;
}
h5 {
font-family: 'Alegreya Sans', sans-serif;
}
div.text_cell_render{
font-family: 'Alegreya Sans',Computer Modern, "Helvetica Neue", Arial, Helvetica, Geneva, sans-serif;
line-height: 135%;
font-size: 120%;
width:600px;
margin-left:auto;
margin-right:auto;
}
.CodeMirror{
font-family: "Source Code Pro";
font-size: 90%;
}
/* .prompt{
display: None;
}*/
.text_cell_render h1 {
font-weight: 200;
font-size: 50pt;
line-height: 100%;
color:#CD2305;
margin-bottom: 0.5em;
margin-top: 0.5em;
display: block;
}
.text_cell_render h5 {
font-weight: 300;
font-size: 16pt;
color: #CD2305;
font-style: italic;
margin-bottom: .5em;
margin-top: 0.5em;
display: block;
}
.warning{
color: rgb( 240, 20, 20 )
}
</style>
| 4e471cf00c8cd5f6dab3069a42dd4d659d08098f | 40,477 | ipynb | Jupyter Notebook | notebooks/sympy/laplace_transform.ipynb | nicoguaro/AdvancedMath | 2749068de442f67b89d3f57827367193ce61a09c | [
"MIT"
] | 26 | 2017-06-29T17:45:20.000Z | 2022-02-06T20:14:29.000Z | notebooks/sympy/laplace_transform.ipynb | nicoguaro/AdvancedMath | 2749068de442f67b89d3f57827367193ce61a09c | [
"MIT"
] | null | null | null | notebooks/sympy/laplace_transform.ipynb | nicoguaro/AdvancedMath | 2749068de442f67b89d3f57827367193ce61a09c | [
"MIT"
] | 13 | 2019-04-22T08:08:56.000Z | 2022-01-27T08:15:53.000Z | 58.073171 | 3,414 | 0.748499 | true | 1,479 | Qwen/Qwen-72B | 1. YES
2. YES | 0.718594 | 0.822189 | 0.590821 | __label__eng_Latn | 0.635664 | 0.211004 |
# 13 - Panel Data and Fixed Effects
## Controlling What you Cannot See
Methods like propensity score, linear regression and matching are very good at controlling for confounding in non-random data, but they rely on a key assumption: conditional unconfoundedness
$
(Y_0, Y_1) \perp T | X
$
To put it in words, they require that all the confounders are known and measured, so that we can condition on them and make the treatment as good as random. One major issue with this is that sometimes we simply can't measure a confounder. For instance, take a classical labor economics problem of figuring out the impact of marriage on men's earnings. It's a well known fact in economics that married men earn more than single men. However, it is not clear if this relationship is causal or not. It could be that more educated men are both more likely to marry and more likely to have a high earnings job, which would mean that education is a confounder of the effect of marriage on earnings. For this confounder, we could measure the education of the person in the study and run a regression controlling for that. But another confounder could be beauty. It could be that more handsome men are both more likely to get married and more likely to have a high paying job. Unfortunately, beauty is one of those characteristics like intelligence. It's something we can't measure very well.
This puts us in a difficult situation, because if we have unmeasured confounders, we have bias. One way to deal with this is with instrumental variables, as we've seen before. But coming up with good instruments it's no easy task and requires a lot of creativity. Here, let's look at an alternative that takes advantage of time or the temporal structure of data.
The idea is to use **panel data**. Panel data is when we have **observations on the same individual for multiple periods of time**. Panel data formats are very common in the industry, where they keep records of customer behavior for the same customer and for multiple time periods. The reason we can leverage panel data is because we can compare the same unit before and after the treatment and see how they behave with it. Before we dive in the math, let's see how this makes intuitive sense.
```python
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import statsmodels.formula.api as smf
import graphviz as gr
from linearmodels.panel import PanelOLS
%matplotlib inline
pd.set_option("display.max_columns", 6)
style.use("fivethirtyeight")
```
First, let's take a look at the causal graph that we have once we include multiple observations of the same unit across time. Suppose we have a situation where marriage at the first time causes income at the same time and subsequent marital status. This is also true for times 2 and 3. Also, suppose that beauty is the same across all time periods (a bold statement, but reasonable if time is just a few years) and it causes both marriage and income.
```python
g = gr.Digraph()
g.edge("Marriage_1", "Income_1")
g.edge("Marriage_1", "Marriage_2")
g.edge("Marriage_2", "Income_2")
g.edge("Marriage_2", "Marriage_3")
g.edge("Marriage_3", "Income_3")
g.edge("Beauty", "Marriage_1")
g.edge("Beauty", "Marriage_2")
g.edge("Beauty", "Marriage_3")
g.edge("Beauty", "Income_1")
g.edge("Beauty", "Income_2")
g.edge("Beauty", "Income_3")
g
```
Remember that we cannot control beauty, since we can't measure it. But we can still use the panel structure so it is not a problem anymore. The idea is that we can see beauty - and any other attribute that is constant across time - as the defining aspects of a person. And although we can't control them directly, we can control for the individual itself.
```python
g = gr.Digraph()
g.edge("Marriage_1", "Income_1")
g.edge("Marriage_1", "Marriage_2")
g.edge("Marriage_2", "Income_2")
g.edge("Marriage_2", "Marriage_3")
g.edge("Marriage_3", "Income_3")
g.edge("Person (Beauty, Intelligence...)", "Marriage_1")
g.edge("Person (Beauty, Intelligence...)", "Marriage_2")
g.edge("Person (Beauty, Intelligence...)", "Marriage_3")
g.edge("Person (Beauty, Intelligence...)", "Income_1")
g.edge("Person (Beauty, Intelligence...)", "Income_2")
g.edge("Person (Beauty, Intelligence...)", "Income_3")
g
```
Think about it. We can't measure attributes like beauty and intelligence, but we know that the person who has them is the same individual across time. So, we can create a dummy variable indicating that person and add that to a linear model. This is what we mean when we say we can control for the person itself: we are adding a variable (dummy in this case) that denotes that particular person. When estimating the effect of marriage on income with this person dummy in our model, regression finds the effect of marriage **while keeping the person variable fixed**. Adding this entity dummy is what we call a fixed effect model.
## Fixed Effects
To make matters more formal, let's first take a look at the data that we have. Following our example, we will try to estimate the effect of marriage on income. Our data contains those 2 variables, `married` and `lwage`, on multiple individuals (`nr`) for multiple years. Note that wage is in log form. In addition to this, we have other controls, like number of hours worked that year, years of education and so on.
```python
from linearmodels.datasets import wage_panel
data = wage_panel.load()
data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>nr</th>
<th>year</th>
<th>black</th>
<th>...</th>
<th>lwage</th>
<th>expersq</th>
<th>occupation</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>13</td>
<td>1980</td>
<td>0</td>
<td>...</td>
<td>1.197540</td>
<td>1</td>
<td>9</td>
</tr>
<tr>
<th>1</th>
<td>13</td>
<td>1981</td>
<td>0</td>
<td>...</td>
<td>1.853060</td>
<td>4</td>
<td>9</td>
</tr>
<tr>
<th>2</th>
<td>13</td>
<td>1982</td>
<td>0</td>
<td>...</td>
<td>1.344462</td>
<td>9</td>
<td>9</td>
</tr>
<tr>
<th>3</th>
<td>13</td>
<td>1983</td>
<td>0</td>
<td>...</td>
<td>1.433213</td>
<td>16</td>
<td>9</td>
</tr>
<tr>
<th>4</th>
<td>13</td>
<td>1984</td>
<td>0</td>
<td>...</td>
<td>1.568125</td>
<td>25</td>
<td>5</td>
</tr>
</tbody>
</table>
<p>5 rows × 12 columns</p>
</div>
Generally, the fixed effect model is defined as
$
y_{it} = \beta X_{it} + \gamma U_i + e_{it}
$
where \\(y_{it}\\) is the outcome of individual \\(i\\) at time \\(t\\), \\(X_{it}\\) is the vector of variables for individual \\(i\\) at time \\(t\\). \\(U_i\\) is a set of unobservables for individual \\(i\\). Notice that those unobservables are unchanging through time, hence the lack of the time subscript. Finally, \\(e_{it}\\) is the error term. For our marriage example, \\(y_{it}\\) is log wages, \\(X_{it}\\) are the observable variables that change in time, like marriage and experience, and \\(U_i\\) are the variables that are not observed but constant for each individual, like beauty and intelligence.
Now, remember how I've said that using panel data with a fixed effect model is as simple as adding a dummy for the entities. It's true, but in practice, we don't actually do it. Imagine a dataset where we have 1 million customers. If we add one dummy for each of them, we would end up with 1 million columns, which is probably not a good idea. Instead, we use the trick of partitioning the linear regression into 2 separate models. We've seen this before, but now is a good time to recap it. Suppose you have a linear regression model with a set of features \\(X_1\\) and another set of features \\(X_2\\).
$
\hat{Y} = \hat{\beta_1} X_1 + \hat{\beta_2} X_2
$
where \\(X_1\\) and \\(X_2\\) are feature matrices (one row per feature and one column per observation) and \\(\hat{\beta_1}\\) and \\(\hat{\beta_2}\\) are row vectors. You can get the exact same \\(\hat{\beta_1}\\) parameter by doing
1. regress the outcome \\(y\\) on the second set of features \\(\hat{y^*} = \hat{\gamma_1} X_2\\)
2. regress the first set of features on the second \\(\hat{X_1} = \hat{\gamma_2} X_2\\)
3. obtain the residuals \\(\tilde{X}_1 = X_1 - \hat{X_1}\\) and \\(\tilde{y}_1 = y_1 - \hat{y^*}\\)
4. regress the residuals of the outcome on the residuals of the features \\(\hat{y} = \hat{\beta_1} \tilde{X}_1\\)
The parameter from this last regression will be exactly the same as running the regression with all the features. But how exactly does this help us? Well, we can break the estimation of the model with the entity dummies into 2. First, we use the dummies to predict the outcome and the feature. These are steps 1 and 2 above.
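To see this result in action, here is a small sketch on the wage data loaded above, using `married` as the first set of features and the year dummies as the second (the choice of variables here is purely illustrative):

```python
# Partitioned-regression check: the coefficient on married with year dummies included
# equals the coefficient from regressing residualized wages on residualized married.
full = smf.ols("lwage ~ married + C(year)", data=data).fit()

y_resid = smf.ols("lwage ~ C(year)", data=data).fit().resid
t_resid = smf.ols("married ~ C(year)", data=data).fit().resid
partial = smf.ols("y_resid ~ t_resid",
                  data=data.assign(y_resid=y_resid, t_resid=t_resid)).fit()

print(full.params["married"], partial.params["t_resid"])  # the two estimates match
```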
Now, remember how running a regression on a dummy variable is as simple as estimating the mean for that dummy? If you don't, let's use our data to show how this is true. Let's run a model where we predict wages as a function of the year dummy.
```python
mod = smf.ols("lwage ~ C(year)", data=data).fit()
mod.summary().tables[1]
```
<table class="simpletable">
<tr>
<td></td> <th>coef</th> <th>std err</th> <th>t</th> <th>P>|t|</th> <th>[0.025</th> <th>0.975]</th>
</tr>
<tr>
<th>Intercept</th> <td> 1.3935</td> <td> 0.022</td> <td> 63.462</td> <td> 0.000</td> <td> 1.350</td> <td> 1.437</td>
</tr>
<tr>
<th>C(year)[T.1981]</th> <td> 0.1194</td> <td> 0.031</td> <td> 3.845</td> <td> 0.000</td> <td> 0.059</td> <td> 0.180</td>
</tr>
<tr>
<th>C(year)[T.1982]</th> <td> 0.1782</td> <td> 0.031</td> <td> 5.738</td> <td> 0.000</td> <td> 0.117</td> <td> 0.239</td>
</tr>
<tr>
<th>C(year)[T.1983]</th> <td> 0.2258</td> <td> 0.031</td> <td> 7.271</td> <td> 0.000</td> <td> 0.165</td> <td> 0.287</td>
</tr>
<tr>
<th>C(year)[T.1984]</th> <td> 0.2968</td> <td> 0.031</td> <td> 9.558</td> <td> 0.000</td> <td> 0.236</td> <td> 0.358</td>
</tr>
<tr>
<th>C(year)[T.1985]</th> <td> 0.3459</td> <td> 0.031</td> <td> 11.140</td> <td> 0.000</td> <td> 0.285</td> <td> 0.407</td>
</tr>
<tr>
<th>C(year)[T.1986]</th> <td> 0.4062</td> <td> 0.031</td> <td> 13.082</td> <td> 0.000</td> <td> 0.345</td> <td> 0.467</td>
</tr>
<tr>
<th>C(year)[T.1987]</th> <td> 0.4730</td> <td> 0.031</td> <td> 15.232</td> <td> 0.000</td> <td> 0.412</td> <td> 0.534</td>
</tr>
</table>
Notice how this model is predicting the average income in 1980 to be 1.3935, in 1981 to be 1.5129 (1.3935+0.1194) and so on. Now, if we compute the average by year, we get the exact same result. (Remember that the base year, 1980, is the intercept. So you have to add the intercept to the parameters of the other years to get the mean `lwage` for the year).
```python
data.groupby("year")["lwage"].mean()
```
year
1980 1.393477
1981 1.512867
1982 1.571667
1983 1.619263
1984 1.690295
1985 1.739410
1986 1.799719
1987 1.866479
Name: lwage, dtype: float64
This means that if we get the average for every person in our panel, we are essentially regressing the individual dummy on the other variables. This motivates the following estimation procedure:
1. Create time-demeaned variables by subtracting the mean for the individual:
$\ddot{Y}_{it} = Y_{it} - \bar{Y}_i$
$\ddot{X}_{it} = X_{it} - \bar{X}_i$
2. Regress $\ddot{Y}_{it}$ on $\ddot{X}_{it}$
Notice that when we do so, the unobserved \\(U_i\\) vanishes. Since \\(U_i\\) is constant across time, we have that \\(\bar{U_i}=U_i\\). If we have the following system of two equations
$$
\begin{align}
Y_{it} & = \beta X_{it} + \gamma U_i + e_{it} \\
\bar{Y}_{i} & = \beta \bar{X}_{it} + \gamma \bar{U}_i + \bar{e}_{it} \\
\end{align}
$$
And we subtract one from the other, we get
$$
\begin{align}
(Y_{it} - \bar{Y}_{i}) & = (\beta X_{it} - \beta \bar{X}_{it}) + (\gamma U_i - \gamma U_i) + (e_{it}-\bar{e}_{it}) \\
(Y_{it} - \bar{Y}_{i}) & = \beta(X_{it} - \bar{X}_{it}) + (e_{it}-\bar{e}_{it}) \\
\ddot{Y}_{it} & = \beta \ddot{X}_{it} + \ddot{e}_{it} \\
\end{align}
$$
which wipes out all unobserved variables that are constant across time. In fact, it is not only the unobserved variables that vanish: this happens to every variable that is constant in time. For this reason, you can't include any variables that are constant across time, as they would be a linear combination of the dummy variables and the model wouldn't run.
To check which variables are those, we can group our data by individual and get the sum of the standard deviations. If it is zero, it means the variable isn't changing across time for any of the individuals.
```python
data.groupby("nr").std().sum()
```
year 1334.971910
black 0.000000
exper 1334.971910
hisp 0.000000
hours 203098.215649
married 140.372801
educ 0.000000
union 106.512445
lwage 173.929670
expersq 17608.242825
occupation 739.222281
dtype: float64
For our data, we need to remove the ethnicity dummies, `black` and `hisp`, since they are constant for the individual. Also, we need to remove education. We will also not use occupation, since this is probably mediating the effect of marriage on wage (it could be that single men are able to take more time-demanding positions). Having selected the features we will use, it's time to estimate this model.
To run our fixed effect model, first, let's get our mean data. We can achieve this by grouping everything by individuals and taking the mean.
```python
Y = "lwage"
T = "married"
X = [T, "expersq", "union", "hours"]
mean_data = data.groupby("nr")[X+[Y]].mean()
mean_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>married</th>
<th>expersq</th>
<th>union</th>
<th>hours</th>
<th>lwage</th>
</tr>
<tr>
<th>nr</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>13</th>
<td>0.000</td>
<td>25.5</td>
<td>0.125</td>
<td>2807.625</td>
<td>1.255652</td>
</tr>
<tr>
<th>17</th>
<td>0.000</td>
<td>61.5</td>
<td>0.000</td>
<td>2504.125</td>
<td>1.637786</td>
</tr>
<tr>
<th>18</th>
<td>1.000</td>
<td>61.5</td>
<td>0.000</td>
<td>2350.500</td>
<td>2.034387</td>
</tr>
<tr>
<th>45</th>
<td>0.125</td>
<td>35.5</td>
<td>0.250</td>
<td>2225.875</td>
<td>1.773664</td>
</tr>
<tr>
<th>110</th>
<td>0.500</td>
<td>77.5</td>
<td>0.125</td>
<td>2108.000</td>
<td>2.055129</td>
</tr>
</tbody>
</table>
</div>
To demean the data, we need to set the index of the original data to be the individual identifier, `nr`. Then, we can simply subtract one data frame from the mean data frame.
```python
demeaned_data = (data
.set_index("nr") # set the index as the person indicator
[X+[Y]]
- mean_data) # subtract the mean data
demeaned_data.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>married</th>
<th>expersq</th>
<th>union</th>
<th>hours</th>
<th>lwage</th>
</tr>
<tr>
<th>nr</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>13</th>
<td>0.0</td>
<td>-24.5</td>
<td>-0.125</td>
<td>-135.625</td>
<td>-0.058112</td>
</tr>
<tr>
<th>13</th>
<td>0.0</td>
<td>-21.5</td>
<td>0.875</td>
<td>-487.625</td>
<td>0.597408</td>
</tr>
<tr>
<th>13</th>
<td>0.0</td>
<td>-16.5</td>
<td>-0.125</td>
<td>132.375</td>
<td>0.088810</td>
</tr>
<tr>
<th>13</th>
<td>0.0</td>
<td>-9.5</td>
<td>-0.125</td>
<td>152.375</td>
<td>0.177561</td>
</tr>
<tr>
<th>13</th>
<td>0.0</td>
<td>-0.5</td>
<td>-0.125</td>
<td>263.375</td>
<td>0.312473</td>
</tr>
</tbody>
</table>
</div>
Finally, we can run our fixed effect model on the time-demeaned data.
```python
mod = smf.ols(f"{Y} ~ {'+'.join(X)}", data=demeaned_data).fit()
mod.summary().tables[1]
```
<table class="simpletable">
<tr>
<td></td> <th>coef</th> <th>std err</th> <th>t</th> <th>P>|t|</th> <th>[0.025</th> <th>0.975]</th>
</tr>
<tr>
<th>Intercept</th> <td>-8.327e-17</td> <td> 0.005</td> <td>-1.64e-14</td> <td> 1.000</td> <td> -0.010</td> <td> 0.010</td>
</tr>
<tr>
<th>married</th> <td> 0.1147</td> <td> 0.017</td> <td> 6.756</td> <td> 0.000</td> <td> 0.081</td> <td> 0.148</td>
</tr>
<tr>
<th>expersq</th> <td> 0.0040</td> <td> 0.000</td> <td> 21.958</td> <td> 0.000</td> <td> 0.004</td> <td> 0.004</td>
</tr>
<tr>
<th>union</th> <td> 0.0784</td> <td> 0.018</td> <td> 4.261</td> <td> 0.000</td> <td> 0.042</td> <td> 0.115</td>
</tr>
<tr>
<th>hours</th> <td> -8.46e-05</td> <td> 1.25e-05</td> <td> -6.744</td> <td> 0.000</td> <td> -0.000</td> <td> -6e-05</td>
</tr>
</table>
If we believe that fixed effect eliminates all the omitted variable bias, this model is telling us that marriage increases a man's wage by 11%. This result is very significant. One detail here is that for fixed effect models, the standard errors need to be clustered. So, instead of doing all our estimation by hand (which is only nice for pedagogical reasons), we can use the library `linearmodels` and set the argument `cluster_entity` to True.
```python
from linearmodels.panel import PanelOLS
mod = PanelOLS.from_formula("lwage ~ expersq+union+married+hours+EntityEffects",
data=data.set_index(["nr", "year"]))
result = mod.fit(cov_type='clustered', cluster_entity=True)
result.summary.tables[1]
```
<table class="simpletable">
<caption>Parameter Estimates</caption>
<tr>
<td></td> <th>Parameter</th> <th>Std. Err.</th> <th>T-stat</th> <th>P-value</th> <th>Lower CI</th> <th>Upper CI</th>
</tr>
<tr>
<th>expersq</th> <td>0.0040</td> <td>0.0002</td> <td>16.552</td> <td>0.0000</td> <td>0.0035</td> <td>0.0044</td>
</tr>
<tr>
<th>union</th> <td>0.0784</td> <td>0.0236</td> <td>3.3225</td> <td>0.0009</td> <td>0.0322</td> <td>0.1247</td>
</tr>
<tr>
<th>married</th> <td>0.1147</td> <td>0.0220</td> <td>5.2213</td> <td>0.0000</td> <td>0.0716</td> <td>0.1577</td>
</tr>
<tr>
<th>hours</th> <td>-8.46e-05</td> <td>2.22e-05</td> <td>-3.8105</td> <td>0.0001</td> <td>-0.0001</td> <td>-4.107e-05</td>
</tr>
</table>
Notice how the parameter estimates are identical to the ones we've got with time-demeaned data. The only difference is that the standard errors are a bit larger. Now, compare this to the simple OLS model that doesn't take the time structure of the data into account. For this model, we add back the variables that are constant in time.
```python
mod = smf.ols("lwage ~ expersq+union+married+hours+black+hisp+educ", data=data).fit()
mod.summary().tables[1]
```
<table class="simpletable">
<tr>
<td></td> <th>coef</th> <th>std err</th> <th>t</th> <th>P>|t|</th> <th>[0.025</th> <th>0.975]</th>
</tr>
<tr>
<th>Intercept</th> <td> 0.2654</td> <td> 0.065</td> <td> 4.103</td> <td> 0.000</td> <td> 0.139</td> <td> 0.392</td>
</tr>
<tr>
<th>expersq</th> <td> 0.0032</td> <td> 0.000</td> <td> 15.750</td> <td> 0.000</td> <td> 0.003</td> <td> 0.004</td>
</tr>
<tr>
<th>union</th> <td> 0.1829</td> <td> 0.017</td> <td> 10.598</td> <td> 0.000</td> <td> 0.149</td> <td> 0.217</td>
</tr>
<tr>
<th>married</th> <td> 0.1410</td> <td> 0.016</td> <td> 8.931</td> <td> 0.000</td> <td> 0.110</td> <td> 0.172</td>
</tr>
<tr>
<th>hours</th> <td> -5.32e-05</td> <td> 1.34e-05</td> <td> -3.978</td> <td> 0.000</td> <td>-7.94e-05</td> <td> -2.7e-05</td>
</tr>
<tr>
<th>black</th> <td> -0.1347</td> <td> 0.024</td> <td> -5.679</td> <td> 0.000</td> <td> -0.181</td> <td> -0.088</td>
</tr>
<tr>
<th>hisp</th> <td> 0.0132</td> <td> 0.021</td> <td> 0.632</td> <td> 0.528</td> <td> -0.028</td> <td> 0.054</td>
</tr>
<tr>
<th>educ</th> <td> 0.1057</td> <td> 0.005</td> <td> 22.550</td> <td> 0.000</td> <td> 0.097</td> <td> 0.115</td>
</tr>
</table>
This model is saying that marriage increases the man's wage by 14%. A somewhat larger effect than the one we found with the fixed effect model. This suggests some omitted variable bias due to fixed individual factors, like intelligence and beauty, not being added to the model.
## Visualizing Fixed Effects
To expand our intuition about how fixed effect models work, let's diverge a little to another example. Suppose you work for a big tech company and you want to estimate the impact of a billboard marketing campaign on in-app purchase. When you look at data from the past, you see that the marketing department tends to spend more to place billboards in cities where the purchase level is lower. This makes sense, right? They wouldn't need to do lots of advertising if sales were skyrocketing. If you run a regression model on this data, it looks like higher marketing costs lead to fewer in-app purchases, but only because marketing investment is biased towards low-spending regions.
```python
toy_panel = pd.DataFrame({
"mkt_costs":[5,4,3.5,3, 10,9.5,9,8, 4,3,2,1, 8,7,6,4],
"purchase":[12,9,7.5,7, 9,7,6.5,5, 15,14.5,14,13, 11,9.5,8,5],
"city":["C0","C0","C0","C0", "C2","C2","C2","C2", "C1","C1","C1","C1", "C3","C3","C3","C3"]
})
m = smf.ols("purchase ~ mkt_costs", data=toy_panel).fit()
plt.scatter(toy_panel.mkt_costs, toy_panel.purchase)
plt.plot(toy_panel.mkt_costs, m.fittedvalues, c="C5", label="Regression Line")
plt.xlabel("Marketing Costs (in 1000)")
plt.ylabel("In-app Purchase (in 1000)")
plt.title("Simple OLS Model")
plt.legend();
```
Knowing a lot about causal inference, you decide to run a fixed effect model, adding the city's indicator as a dummy variable to your model. The fixed effect model controls for city specific characteristics that are constant in time, so if a city is less open to your product, it will capture that. When you run that model, you can finally see that more marketing costs leads to higher in-app purchase.
```python
fe = smf.ols("purchase ~ mkt_costs + C(city)", data=toy_panel).fit()
fe_toy = toy_panel.assign(y_hat = fe.fittedvalues)
plt.scatter(toy_panel.mkt_costs, toy_panel.purchase, c=toy_panel.city)
for city in fe_toy["city"].unique():
plot_df = fe_toy.query(f"city=='{city}'")
plt.plot(plot_df.mkt_costs, plot_df.y_hat, c="C5")
plt.title("Fixed Effect Model")
plt.xlabel("Marketing Costs (in 1000)")
plt.ylabel("In-app Purchase (in 1000)");
```
Take a minute to appreciate what the image above is telling you about what fixed effect is doing. Notice that fixed effect is fitting **one regression line per city**. Also notice that the lines are parallel. The slope of the line is the effect of marketing costs on in-app purchase. So the **fixed effect is assuming that the causal effect is constant across all entities**, which are cities in this case. This can be a weakness or an advantage, depending on how you see it. It is a weakness if you are interested in finding the causal effect per city. Since the FE model assumes this effect is constant across entities, you won't find any difference in the causal effect. However, if you want to find the overall impact of marketing on in-app purchase, the panel structure of the data is a very useful source of leverage that fixed effects can exploit.
## Time Effects
Just like we did a fixed effect for the individual level, we could design a fixed effect for the time level. If adding a dummy for each individual controls for fixed individual characteristics, adding a time dummy would control for variables that are fixed for each time period, but that might change across time. One example of such a variable is inflation. Prices and salary tend to go up with time, but the inflation in each time period is the same for all entities. To give a more concrete example, suppose that marriage is increasing with time. If the wage and the marriage proportion also change with time, we would have time as a confounder. Since inflation also makes salary increase with time, some of the positive association we see between marriage and wage would be simply because both are increasing with time. To correct for that, we can add a dummy variable for each time period. In `linearmodels`, this is as simple as adding `TimeEffects` to our formula and setting `cluster_time` to true.
```python
mod = PanelOLS.from_formula("lwage ~ expersq+union+married+hours+EntityEffects+TimeEffects",
data=data.set_index(["nr", "year"]))
result = mod.fit(cov_type='clustered', cluster_entity=True, cluster_time=True)
result.summary.tables[1]
```
<table class="simpletable">
<caption>Parameter Estimates</caption>
<tr>
<td></td> <th>Parameter</th> <th>Std. Err.</th> <th>T-stat</th> <th>P-value</th> <th>Lower CI</th> <th>Upper CI</th>
</tr>
<tr>
<th>expersq</th> <td>-0.0062</td> <td>0.0008</td> <td>-8.1479</td> <td>0.0000</td> <td>-0.0077</td> <td>-0.0047</td>
</tr>
<tr>
<th>union</th> <td>0.0727</td> <td>0.0228</td> <td>3.1858</td> <td>0.0015</td> <td>0.0279</td> <td>0.1174</td>
</tr>
<tr>
<th>married</th> <td>0.0476</td> <td>0.0177</td> <td>2.6906</td> <td>0.0072</td> <td>0.0129</td> <td>0.0823</td>
</tr>
<tr>
<th>hours</th> <td>-0.0001</td> <td>3.546e-05</td> <td>-3.8258</td> <td>0.0001</td> <td>-0.0002</td> <td>-6.614e-05</td>
</tr>
</table>
In this new model, the effect of marriage on wage decreased significantly from `0.1147` to `0.0476`. Still, this result is significant at a 99% level, so a man could still expect an increase in earnings from marriage.
## When Panel Data Won't Help You
Using panel data and fixed effects models is an extremely powerful tool for causal inference. When you don't have random data nor good instruments, the fixed effect is as convincing as it gets for causal inference with non experimental data. Still, it is worth mentioning that it is not a panacea. There are situations where even panel data won't help you.
The most obvious one is when you have confounders that are changing in time. Fixed effects can only eliminate bias from attributes that are constant for each individual. For instance, suppose that you can increase your intelligence level by reading books and eating lots of good fats. This causes you to get a higher paying job and a wife. Fixed effect won't be able to remove this bias due to unmeasured intelligence confounding because, in this example, intelligence is changing in time.
Another less obvious case when fixed effect fails is when you have **reversed causality**. For instance, let's say that it isn't marriage that causes you to earn more. Instead, it is earning more that increases your chances of getting married. In this case, it will appear that they have a positive correlation but earnings come first. They would change in time and in the same direction, so fixed effects wouldn't be able to control for that.
## Key Ideas
Here, we saw how to use panel data, data where we have multiple measurements of the same individuals across multiple time periods. When that is the case, we can use a fixed effect model that controls for the entity, holding all individual, time constant attributes, fixed. This is a powerful and very convincing way of controlling for confounding and it is as good as it gets with non random data.
Finally, we saw that FE is not a panacea. We saw two situations where it doesn't work: when we have reverse causality and when the unmeasured confounding is changing in time.
## References
I like to think of this entire book as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics class. Most of the ideas here are taken from their classes at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'll also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
Another important reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
Finally, I'd also like to compliment Scott Cunningham and his brilliant work mingling Causal Inference and Rap quotes:
* [Causal Inference: The Mixtape](https://www.scunning.com/mixtape.html)
## Contribute
Causal Inference for the Brave and True is an open-source material on causal inference, the statistics of science. It uses only free software, based in Python. Its goal is to be accessible monetarily and intellectually.
If you found this book valuable and you want to support it, please go to [Patreon](https://www.patreon.com/causal_inference_for_the_brave_and_true). If you are not ready to contribute financially, you can also help by fixing typos, suggesting edits or giving feedback on passages you didn't understand. Just go to the book's repository and [open an issue](https://github.com/matheusfacure/python-causality-handbook/issues). Finally, if you liked this content, please share it with others who might find it useful and give it a [star on GitHub](https://github.com/matheusfacure/python-causality-handbook/stargazers).
```python
```
| b102577c086703ffadf6e2868dc2a679b3caec7a | 108,086 | ipynb | Jupyter Notebook | causal-inference-for-the-brave-and-true/13-Panel-Data-and-Fixed-Effects.ipynb | qiringji/python-causality-handbook | add5ab57a8e755242bdbc3d4d0ee00867f6a1e55 | [
"MIT"
] | 1 | 2021-07-07T03:57:54.000Z | 2021-07-07T03:57:54.000Z | causal-inference-for-the-brave-and-true/13-Panel-Data-and-Fixed-Effects.ipynb | qiringji/python-causality-handbook | add5ab57a8e755242bdbc3d4d0ee00867f6a1e55 | [
"MIT"
] | null | null | null | causal-inference-for-the-brave-and-true/13-Panel-Data-and-Fixed-Effects.ipynb | qiringji/python-causality-handbook | add5ab57a8e755242bdbc3d4d0ee00867f6a1e55 | [
"MIT"
] | null | null | null | 82.382622 | 23,676 | 0.710046 | true | 9,971 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.870597 | 0.741708 | __label__eng_Latn | 0.986828 | 0.561568 |
# astGl - Exercise 4
## Problem 1
```python
%matplotlib notebook
from sympy import *
import matplotlib.pyplot as plt
from IPython.display import display, Math, Latex
def disp(str):
display(Latex(str))
```
```python
G1,G2,G = symbols('G1,G2,G')
eqg = [
(G1, G),
(G2, G)
]
eqg
```
[(G1, G), (G2, G)]
```python
wu,wp2,s,A = symbols('wu,wp2,s,A')
eqop = [
(wp2, wu/4),
(A, wu/s*wp2/(s+wp2))
]
eqop
```
[(wp2, wu/4), (A, wp2*wu/(s*(s + wp2)))]
```python
eqT = -G1*A*(G1+G2)/(1+G2*A*(G1+G2))
eqT
```
$\displaystyle - \frac{A G_{1} \left(G_{1} + G_{2}\right)}{A G_{2} \left(G_{1} + G_{2}\right) + 1}$
```python
eqT.subs(eqg)
```
$\displaystyle - \frac{2 A G^{2}}{2 A G^{2} + 1}$
```python
eqT.subs(eqop).simplify()
```
$\displaystyle - \frac{G_{1} \wp_{2} wu \left(G_{1} + G_{2}\right)}{G_{2} \wp_{2} wu \left(G_{1} + G_{2}\right) + s \left(s + \wp_{2}\right)}$
```python
```
| 1169d571cc14161bb6416badf02915ef171e096f | 3,455 | ipynb | Jupyter Notebook | astGl/astGl_Uebung4_.ipynb | mnemocron/FHNW | e43c298cb9c8f617fa19b77dd6630a342c78bda7 | [
"Unlicense"
] | 1 | 2020-10-07T07:28:33.000Z | 2020-10-07T07:28:33.000Z | astGl/astGl_Uebung4_.ipynb | mnemocron/FHNW | e43c298cb9c8f617fa19b77dd6630a342c78bda7 | [
"Unlicense"
] | null | null | null | astGl/astGl_Uebung4_.ipynb | mnemocron/FHNW | e43c298cb9c8f617fa19b77dd6630a342c78bda7 | [
"Unlicense"
] | 1 | 2021-01-17T16:38:58.000Z | 2021-01-17T16:38:58.000Z | 19.088398 | 162 | 0.438205 | true | 407 | Qwen/Qwen-72B | 1. YES
2. YES | 0.914901 | 0.766294 | 0.701083 | __label__yue_Hant | 0.386654 | 0.467182 |
# Quadcopter
## Summary
This notebook outlines a the design of a motion controller for a quadcopter.
## Goals
The ultimate goal is to apply the designed control system to a simulated environment - for this I have chosen Python and specifially [pybullet](https://pybullet.org/) as the 3D physics simulator and [pyglet](http://www.pyglet.org) as the game engine to render the results.
### Control System Requirements
The design criteria for the quadrotor, given a step input in 3D space of (1, 1, 1), are:
* Settling time for x, y, z and yaw (ψ) of less than 5 seconds
* Rise time for x, y, z and yaw (ψ) of less than 2 seconds
* Overshoot of x, y, z and yaw (ψ) less than 5%
## System Description
We will use the following diagram to derive the equations of motion:
TODO - add diagram
* coordinate system and inertial frame is defined with the positive z axis in the opposite direction of gravity
* x, y, and z are the coordinates of the quadcopter centre of mass (CoM) in the inertial frame
* φ, θ, and ψ are the roll, pitch and yaw about the axes x, y and z respectively, with respect to the inertial frame - angles are 0 radians when the quadcopter is hovering, angles are measured CCW when 'looking down' the axis of rotation in the inertial frame
* rotors directly across from eachother rotate in the same direction (this allows the quadcopter to change its yaw angle while keeping its position constant)
* $F_1, F_2, F_3, F_4$ are the forces from the rotors, we assume they can only thrust in the positive z direction, with respect to the quadcopter frame
* $M_1, M_2, M_3, M_4$ are the moments of inertia for the rotors, about the z axis
* $u_1$ and $u_2$ are the motion control system inputs, where $u_1$ controls the throttle, and $u_2$ controls the rotation
* $I$ is the quadcopter moment of inertia (with x, y, and z components)
* $m$ is the mass of the quadcopter, in kg
* $l$ is the distance from the centre of mass to the rotors
* $g$ is gravity
* $R$ is the rotation matrix from the quadcopter frame to the static base frame using the [ZXY Euler angles (Wikipedia)](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix) convention
* $\dot{x}, \dot{y}, \dot{z}$ are translational velocities
* $\ddot{x}, \ddot{y}, \ddot{z}$ are translational accelerations
* $\dot{\phi}, \dot{\theta}, \dot{\psi}$ are rotational velocities
* $\ddot{\phi}, \ddot{\theta}, \ddot{\psi}$ are rotational accelerations
## Equations of Motion
We will use the [Newton-Euler equations (Wikipedia)](https://en.wikipedia.org/wiki/Newton%E2%80%93Euler_equations) to define the equations of motion.
Using the [ZXY Euler angles (Wikipedia)](https://en.wikipedia.org/wiki/Euler_angles#Rotation_matrix) convention, we will define the rotation matrix from the quadcopter body frame into the inertial frame as $R$
```python
import sympy as sp
import matplotlib.pyplot as plt
import numpy as np
import math
sp.init_printing()
```
```python
# create our symbols and functions
t, g = sp.symbols('t g')
x = sp.Function('x')(t)
y = sp.Function('y')(t)
z = sp.Function('z')(t)
φ = sp.Function('φ')(t)
θ = sp.Function('θ')(t)
ψ = sp.Function('ψ')(t)
```
```python
# define the rotation matrices about the axes x, y, and z
Rx = sp.Matrix([[1, 0, 0], [0, sp.cos(φ), -sp.sin(φ)], [0, sp.sin(φ), sp.cos(φ)]])
Ry = sp.Matrix([[sp.cos(θ), 0, sp.sin(θ)], [0, 1, 0], [-sp.sin(θ), 0, sp.cos(θ)]])
Rz = sp.Matrix([[sp.cos(ψ), -sp.sin(ψ), 0], [sp.sin(ψ), sp.cos(ψ), 0], [0, 0, 1]])
# create the euler angle Z-X-Y rotation matrix and print it out
R = Rz*Rx*Ry
R
```
$\displaystyle \left[\begin{matrix}- \sin{\left(θ{\left(t \right)} \right)} \sin{\left(φ{\left(t \right)} \right)} \sin{\left(ψ{\left(t \right)} \right)} + \cos{\left(θ{\left(t \right)} \right)} \cos{\left(ψ{\left(t \right)} \right)} & - \sin{\left(ψ{\left(t \right)} \right)} \cos{\left(φ{\left(t \right)} \right)} & \sin{\left(θ{\left(t \right)} \right)} \cos{\left(ψ{\left(t \right)} \right)} + \sin{\left(φ{\left(t \right)} \right)} \sin{\left(ψ{\left(t \right)} \right)} \cos{\left(θ{\left(t \right)} \right)}\\\sin{\left(θ{\left(t \right)} \right)} \sin{\left(φ{\left(t \right)} \right)} \cos{\left(ψ{\left(t \right)} \right)} + \sin{\left(ψ{\left(t \right)} \right)} \cos{\left(θ{\left(t \right)} \right)} & \cos{\left(φ{\left(t \right)} \right)} \cos{\left(ψ{\left(t \right)} \right)} & \sin{\left(θ{\left(t \right)} \right)} \sin{\left(ψ{\left(t \right)} \right)} - \sin{\left(φ{\left(t \right)} \right)} \cos{\left(θ{\left(t \right)} \right)} \cos{\left(ψ{\left(t \right)} \right)}\\- \sin{\left(θ{\left(t \right)} \right)} \cos{\left(φ{\left(t \right)} \right)} & \sin{\left(φ{\left(t \right)} \right)} & \cos{\left(θ{\left(t \right)} \right)} \cos{\left(φ{\left(t \right)} \right)}\end{matrix}\right]$
### Position
Total force $F$ acting on the centre of mass, with respect to the inertial frame.
$ma = F$ where a is acceleration
$m\begin{bmatrix} \ddot{x} \\
\ddot{y} \\
\ddot{z} \end{bmatrix} = \begin{bmatrix} 0 \\
0 \\
-mg \end{bmatrix} + R\begin{bmatrix} 0 \\
0 \\
F_1+F_2+F_3+F_4\end{bmatrix}$
### Rotation
Total torque $\tau$ acting on the centre of mass with respect to the body frame.
$I\alpha = \tau - \omega\times I\omega$ where $\alpha$ is angular acceleration, $\omega$ is angular velocity, and $I$ is the [inertia tensor (Wikipedia)](https://en.wikipedia.org/wiki/Moment_of_inertia#Inertia_tensor) of the quadcopter:
$I = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\
I_{yx} & I_{yy} & I_{yz} \\
I_{zx} & I_{zy} & I_{zz} \end{bmatrix}$
The angular velocity with respect to the body fixed frame is required to fill out this equation.
We can define a time based function that transforms a fixed point $p_b$ in the body frame to the inertial frame as $p_i(t)$:
$p_i(t) = R(t)p_b$
Taking the time derivative gives us the velocity in the inertial frame:
$\dot{p_i(t)} = \dot{R(t)}p_b$
To get the velocity in a body fixed frame, we can multiply by the transpose of the rotation matrix (since the original matrix is converting from body to fixed):
$R^T\dot{p_i(t)} = R^T\dot{R(t)}p_b$
The term $R^T\dot{R(t)}$ can then be used to convert angular velocities in an inertial frame into a body fixed frame.
#### Alternative Calculation
**NOTE:** help needed here...to use this calculation, I also had to negate the result, but not sure why - something to do with skew symmetry?
The angular velocity with respect to the body fixed frame is required to fill out this equation. We can use the rotation matrix from the body frame to inertial frame to calculate the [angular velocity vector (Wikipedia)]((https://en.wikipedia.org/wiki/Rotation_formalisms_in_three_dimensions#Rotation_matrix_%E2%86%94_angular_velocities)) in the body frame:
$\begin{bmatrix} 0 & -\omega_z & \omega_y \\
\omega_z & 0 & -\omega_x \\
-\omega_y & \omega_x & 0 \end{bmatrix} = \dot{A}A^T$
We must first invert/transpose the rotation matrix so we can get the transformation from inertial frame into body frame.
In python we could do the following:
```python
# hack alert! had to negate the value to get it to work...
A = R.transpose()
ω = -sp.simplify(A.diff(t)*A.transpose())
```
```python
# calculate the angular velocity in the body fixed frame
ω = sp.simplify(R.transpose()*R.diff(t))
ω_x = ω[2, 1]
ω_y = ω[0, 2]
ω_z = ω[1, 0]
# print out the equations for the components of omega
ω_x, ω_y, ω_z
```
Rearranging for the angular velocities above:
$\omega = \begin{bmatrix} \omega_x \\
\omega_y \\
\omega_z \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 & -\cos\phi\sin\theta \\
0 & 1 & \sin\phi \\
\sin\theta & 0 & \cos\phi\cos\theta \end{bmatrix} \begin{bmatrix} \dot{\phi} \\
\dot{\theta} \\
\dot{\psi} \end{bmatrix}$
and putting it all together:
$I\begin{bmatrix} \dot{\omega_x} \\
\dot{\omega_y} \\
\dot{\omega_z} \end{bmatrix} = \begin{bmatrix} l(F_2 - F_4) \\
l(F_3 - F_1) \\
M_1-M_2+M_3-M_4 \end{bmatrix} - \begin{bmatrix} \omega_x \\
\omega_y \\
\omega_z \end{bmatrix} \times I \begin{bmatrix} \omega_x \\
\omega_y \\
\omega_z \end{bmatrix}$
## Approximations for Hover
We will linearize our equations at a stable hover, where we make the following approximations:
* position will be constant and equal to its initial value
* time derivatives of position (e.g. velocity and acceleration) will then be 0
* roll and pitch will be constant and approximately 0
* yaw will be constant and equal to its initial value
* time derivatives of rotation (e.g. angular velocity and acceleration) will also be 0
* the sine of a small value can then be assigned to the small value
* the cosine of a small value can then be assigned to 1
* the non-principal moments of inertia are 0
Or:
$\cos\phi \approx 1 \\
\sin\phi \approx \phi \\
\cos\theta \approx 1 \\
\sin\theta \approx \theta$
The rotation matrix from body frame into inertial frame becomes:
$R = \begin{bmatrix} r11 & r12 & r13 \\
r21 & r22 & r23 \\
r31 & r32 & r33 \end{bmatrix} = \begin{bmatrix} \cos\psi - \sin\psi\phi\theta & -\sin\psi & \cos\psi\theta + \sin\psi\phi \\
\sin\psi + \cos\psi\phi\theta & \cos\psi & \sin\psi\theta - \cos\psi\phi \\
-\theta & \phi & 1 \end{bmatrix}$
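We can ask sympy to confirm this by substituting the small-angle identities into the full rotation matrix `R` computed earlier (a sketch; it keeps the $\phi\theta$ products, which we drop in the rotational equations later):

```python
# Apply the small-angle approximations sin(a) -> a, cos(a) -> 1 for roll and pitch
small_angle = {sp.sin(φ): φ, sp.cos(φ): 1, sp.sin(θ): θ, sp.cos(θ): 1}
R_hover = R.subs(small_angle)
R_hover
```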
### Position
We will define thrust input $u_1 = F_1+F_2+F_3+F_4$ and our force equation becomes:
$m\begin{bmatrix} \ddot{x} \\
\ddot{y} \\
\ddot{z} \end{bmatrix} = \begin{bmatrix} 0 \\
0 \\
-mg \end{bmatrix} + \begin{bmatrix} \cos\psi\theta + \sin\psi\phi \\
\sin\psi\theta - \cos\psi\phi \\
1\end{bmatrix}u_1$
The second time derivative of position (acceleration) is proportional to $u_1$.
### Rotation
Assuming the non-principal components of inertia are 0 gives us a revised inertia tensor:
$I = \begin{bmatrix} I_{xx} & 0 & 0 \\
0 & I_{yy} & 0 \\
0 & 0 & I_{zz} \end{bmatrix}$
Linearizing our rotational equations gives us:
$\omega = \begin{bmatrix} \omega_x \\
\omega_y \\
\omega_z \end{bmatrix} = \begin{bmatrix} 1 & 0 & -\theta \\
0 & 1 & \phi \\
\theta & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{\phi} \\
\dot{\theta} \\
\dot{\psi} \end{bmatrix}$
and also considering that the product of two small numbers is negligibly small (the angular velocities are all small), we get:
$\omega = \begin{bmatrix} \omega_x \\
\omega_y \\
\omega_z \end{bmatrix} = \begin{bmatrix} \dot{\phi} \\
\dot{\theta} \\
\dot{\psi} \end{bmatrix}$
Let $u_2$ be:
$u_2 = \begin{bmatrix} u_{2x} \\
u_{2y} \\
u_{2z} \\
\end{bmatrix} = \begin{bmatrix} l(F_2 - F_4) \\
l(F_3 - F_1) \\
M_1-M_2+M_3-M_4 \end{bmatrix}$
Filling in our torque equation:
$\begin{bmatrix} I_{xx} & 0 & 0 \\
0 & I_{yy} & 0 \\
0 & 0 & I_{zz} \end{bmatrix} \begin{bmatrix} \dot{\omega_x} \\
\dot{\omega_y} \\
\dot{\omega_z} \end{bmatrix} = \begin{bmatrix} u_{2x} \\
u_{2y} \\
u_{2z} \end{bmatrix} - \begin{bmatrix} 0 & -\omega_z & \omega_y \\
\omega_z & 0 & -\omega_x \\
-\omega_y & \omega_x & 0 \end{bmatrix} \begin{bmatrix} I_{xx} & 0 & 0 \\
0 & I_{yy} & 0 \\
0 & 0 & I_{zz} \end{bmatrix} \begin{bmatrix} \omega_x \\
\omega_y \\
\omega_z \end{bmatrix}$
$\begin{bmatrix} I_{xx}\dot{\omega_x} \\
I_{yy}\dot{\omega_y} \\
I_{zz}\dot{\omega_z} \end{bmatrix} = \begin{bmatrix} u_{2x} + I_{yy}\omega_y\omega_z - I_{zz}\omega_z\omega_y \\
u_{2y} - I_{xx}\omega_x\omega_z + I_{zz}\omega_z\omega_x \\
u_{2z} + I_{xx}\omega_x\omega_y - I_{zz}\omega_z\omega_x \end{bmatrix}$
and also approximating that the $\omega_i$ terms multiplied together are approximately zero, we can rearrange and define the following equations:
$I_{xx}\ddot{\phi} = u_{2x} \\
I_{yy}\ddot{\theta} = u_{2y} \\
I_{zz}\ddot{\psi} = u_{2z}$
and finally get equations for the angular accelerations:
$\ddot{\phi} = \dfrac{u_{2x}}{I_{xx}} \\
\ddot{\theta} = \dfrac{u_{2y}}{I_{yy}} \\
\ddot{\psi} = \dfrac{u_{2z}}{I_{zz}}$
by taking one of the linearized equations for position, e.g. $m\ddot{x}$, and differentiating it with respect to time twice, we can substitute in the above equations and observe that the input $u_2$ is proportional to the fourth time derivative of position (snap). This means we will desire a minimum-snap trajectory to smoothly control a quadcopter (citation needed).
## State Space Representation
This system has 12 states - position, rotation and their associated velocities.
*NOTE: linearized equations for hover*
$x(t) = \begin{bmatrix} x_0 \\
x_1 \\
x_2 \\
x_3 \\
x_4 \\
x_5 \\
x_6 \\
x_7 \\
x_8 \\
x_9 \\
x_{10} \\
x_{11} \end{bmatrix} = \begin{bmatrix} x \\
y \\
z \\
\phi \\
\theta \\
\psi \\
\dot{x} \\
\dot{y} \\
\dot{z} \\
\dot{\phi} \\
\dot{\theta} \\
\dot{\psi} \end{bmatrix}, \dot{x(t)} = \begin{bmatrix} \dot{x_0} \\
\dot{x_1} \\
\dot{x_2} \\
\dot{x_3} \\
\dot{x_4} \\
\dot{x_5} \\
\dot{x_6} \\
\dot{x_7} \\
\dot{x_8} \\
\dot{x_9} \\
\dot{x_{10}} \\
\dot{x_{11}} \end{bmatrix} = \begin{bmatrix} x_6 \\
x_7 \\
x_8 \\
x_9 \\
x_{10} \\
x_{11} \\
\ddot{x} \\
\ddot{y} \\
\ddot{z} \\
\ddot{\phi} \\
\ddot{\theta} \\
\ddot{\psi} \end{bmatrix} = \begin{bmatrix} x_6 \\
x_7 \\
x_8 \\
x_9 \\
x_{10} \\
x_{11} \\
\dfrac{(\cos\psi\theta + \sin\psi\phi)u_1}{m} \\
\dfrac{(\sin\psi\theta - \cos\psi\phi)u_1}{m} \\
\dfrac{u_1}{m} - g \\
\dfrac{u_{2x}}{I_{xx}} \\
\dfrac{u_{2y}}{I_{yy}} \\
\dfrac{u_{2z}}{I_{zz}} \end{bmatrix}$
where:
$u(t) = \begin{bmatrix} u_1 \\
u_{2x} \\
u_{2y} \\
u_{2z} \end{bmatrix} = \begin{bmatrix} F_1+F_2+F_3+F_4 \\
l(F_2 - F_4) \\
l(F_3 - F_1) \\
M_1-M_2+M_3-M_4 \end{bmatrix}$
## Control
The following diagram represents the control system:
1. the trajectory generator provides the desired position and yaw to the position controller
1. the position controller reads the current quadcopter position state, determines commanded pitch, roll, and yaw values and sends that to the attitude controller, and simultaneously determines the thrust input ($u_1$) and sends that to the motion controller
1. the attitude controller reads the current quadcopter rotation state, and determines the rotation input ($u_2$) and sends that to the motion controller
1. the motion controller solves for the 4 rotor forces based on the given inputs ($u_1 and u_2$) and updates the quadcopter with the current required forces
UML code provided here in case of edits - image generated on [PlantUML](https://plantuml.com/):
```
@startuml
[Trajectory\nGenerator] -right-> [Position\nController] : desired\nx,y,z,𝜓
[Position\nController] -right-> [Motion\nController] : u1
[Position\nController] -down-> [Attitude\nController] : commanded\n𝜙,𝜃,𝜓
[Attitude\nController] -> [Motion\nController] : u2
[Motion\nController] -right-> [Quadcopter]
[Quadcopter] -down-> [Attitude\nController] : 𝜙,𝜃,𝜓,\nd/dt(𝜙,𝜃,𝜓)
[Quadcopter] -> [Position\nController] : x,y,z,\nd/dt(x,y,z)
@enduml
```
### PID Controllers
We can use [proportional–integral–derivative (PID) controllers (Wikipedia)](https://en.wikipedia.org/wiki/PID_controller) to reach our desired states. In general, to calculate an time based input $u(t)$ based on the time based error $e(t)$ the equation is as follows:
$u(t) = K_p e(t) + K_i\int_0^t \! e(\tau)\,d\tau + K_d\,\dot{e}(t)$
For simplicity, we will ignore the integral term and implement a PD controller to focus on the current error in position and velocity, and ignore the accumulated error (we would want to include this for a real world application, but for an ideal simulation we can ignore it).
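Since every controller below applies the same PD structure, we can factor it into a small helper (a sketch; the feedforward term carries the desired acceleration when it is available):

```python
def pd_control(kp, kd, desired, current, desired_rate, current_rate, feedforward=0.0):
    """Generic PD law: feedforward + Kp * position error + Kd * velocity error."""
    return feedforward + kp * (desired - current) + kd * (desired_rate - current_rate)
```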
#### Position
For position, we will have the following control equations (c = commanded, d = desired):
$\begin{bmatrix} \ddot{x_c} \\
\ddot{y_c} \\
\ddot{z_c}
\end{bmatrix} = \begin{bmatrix} \ddot{x_d} + K_{p,x}(x_d - x) + K_{d,x}(\dot{x_d} - \dot{x}) \\
\ddot{y_d} + K_{p,y}(y_d - y) + K_{d,y}(\dot{y_d} - \dot{y}) \\
\ddot{z_d} + K_{p,z}(z_d - z) + K_{d,z}(\dot{z_d} - \dot{z})
\end{bmatrix}$
We can then calculate $u_1$, which is simply the combined acceleration of commanded acceleration and gravity, multiplied by the quadcopter mass:
$u_1 = m(g+\ddot{z_c}) = m(g + \ddot{z_d} + K_{p,z}(z_d - z) + K_{d,z}(\dot{z_d} - \dot{z}))$
#### Rotation
For rotation, once we have the commanded position accelerations, we can calculate the commanded rotations using trigonometry (c = commanded, d = desired):
$\begin{bmatrix} \phi_c \\
\theta_c \\
\psi_c
\end{bmatrix} = \begin{bmatrix} \dfrac{\ddot{x_c}\sin{\psi_d} - \ddot{y_c}\cos{\psi_d}}{g} \\
\dfrac{\ddot{x_c}\cos{\psi_d} + \ddot{y_c}\sin{\psi_d}}{g} \\
\psi_d
\end{bmatrix}$
*NOTE: We can use the above to calculate the commanded angular velocities and accelerations by taking 2 time derivatives. This will result in a requirement to calculate the commanded jerk and snap for x and y, which we can get from taking 2 time derivatives of our x and y accelerations above. This is why $u_2$ is dependent on the fourth time derivative of position. Commanded jerk and snap for x and y also assume we can calculate the current jerk and acceleration of the quadcopter.*
and then calculate $u_2$:
$u_2 = \begin{bmatrix} u_{2x} \\
u_{2y} \\
u_{2z}
\end{bmatrix} = \begin{bmatrix} \ddot\phi_c + K_{p,\phi}(\phi_c - \phi) + K_{d,\phi}(\dot{\phi_c} - \dot{\phi}) \\
\ddot\theta_c + K_{p,\theta}(\theta_c - \theta) + K_{d,\theta}(\dot{\theta_c} - \dot{\theta}) \\
\ddot\psi_c + K_{p,\psi}(\psi_c - \psi) + K_{d,\psi}(\dot{\psi_c} - \dot{\psi})
\end{bmatrix}$
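Putting the last two equations together, a minimal attitude-controller sketch could look like the following. It reuses the `pd_control` helper sketched earlier and, as a simplifying assumption, treats the commanded angular velocities and accelerations as zero (reasonable for slow trajectories, but an assumption, not part of the derivation). The gain dictionary keys are placeholders to be tuned later.

```python
def attitude_controller(x_acc_c, y_acc_c, psi_d, state, gains, g=9.81):
    """Compute u2 = (u2x, u2y, u2z) from commanded accelerations and desired yaw.
    `state` is the 12-element state vector defined in the state space section."""
    phi, theta, psi = state[3], state[4], state[5]
    phi_dot, theta_dot, psi_dot = state[9], state[10], state[11]

    # commanded roll and pitch from the commanded x/y accelerations
    phi_c = (x_acc_c * math.sin(psi_d) - y_acc_c * math.cos(psi_d)) / g
    theta_c = (x_acc_c * math.cos(psi_d) + y_acc_c * math.sin(psi_d)) / g

    # commanded angular velocities and accelerations assumed zero (see note above)
    u2x = pd_control(gains['kp_phi'], gains['kd_phi'], phi_c, phi, 0.0, phi_dot)
    u2y = pd_control(gains['kp_theta'], gains['kd_theta'], theta_c, theta, 0.0, theta_dot)
    u2z = pd_control(gains['kp_psi'], gains['kd_psi'], psi_d, psi, 0.0, psi_dot)
    return u2x, u2y, u2z
```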
#### Motion Control
Now that we have our input $u$ we can then solve for the required rotor forces, and send those to the quadcopter.
*NOTE: in reality we would want to calculate the voltage required to reach a desired rotor speed in order to create the force, but for the purposes of our ideal simulation, we will just use the forces and moments.*
We now have a system of 4 equations and 4 unknowns and can use linear algebra to solve the system:
$\begin{bmatrix} u_1 \\
u_{2x} \\
u_{2y} \\
u_{2z} \end{bmatrix} = \begin{bmatrix} F_1+F_2+F_3+F_4 \\
l(F_2 - F_4) \\
l(F_3 - F_1) \\
M_1-M_2+M_3-M_4 \end{bmatrix}$
We will also need to solve for the moments $M_i$, but these can be linearly related to the force produced. For a propeller, the force and moment can each be represented as the product of some constant and the square of the propeller speed (citation needed):
$F = k_f\omega^2 \\
M = k_m\omega^2$
which means we can represent the moments as:
$M_i = \dfrac{k_m}{k_f}F_i$
and substitute appropriately:
$\begin{bmatrix} u_1 \\
u_{2x} \\
u_{2y} \\
u_{2z} \end{bmatrix} = \begin{bmatrix} F_1+F_2+F_3+F_4 \\
l(F_2 - F_4) \\
l(F_3 - F_1) \\
\dfrac{k_m}{k_f}(F_1-F_2+F_3-F_4) \end{bmatrix}$
we can now represent this system as $Af = u$ where the $f$ vector is the unknown forces, and the $A$ matrix is:
$A = \begin{bmatrix} 1 & 1 & 1 & 1 \\
0 & l & 0 & -l \\
-l & 0 & l & 0 \\
\dfrac{k_m}{k_f} & -\dfrac{k_m}{k_f} & \dfrac{k_m}{k_f} & -\dfrac{k_m}{k_f}
\end{bmatrix}$
We can then solve for $f$:
$f = A^{-1}u$
Since A is all constants, we can calculate the inverse of $A$ once and re-use for each iteration of our control loop.
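Because $A$ contains only constants, its inverse can be computed once up front. A sketch of this force allocation step is shown below; the numeric values anticipate the `constants` dictionary defined in the simulation section.

```python
def make_mixer(l, kmkf):
    """Precompute A^-1 so rotor forces f = A^-1 u can be recovered each control step."""
    A = np.array([
        [1.0,    1.0,   1.0,   1.0],
        [0.0,      l,   0.0,    -l],
        [ -l,    0.0,     l,   0.0],
        [kmkf, -kmkf,  kmkf, -kmkf],
    ])
    return np.linalg.inv(A)

A_inv = make_mixer(l=235e-3, kmkf=50e-6)
u_hover = np.array([250e-3 * 9.81, 0.0, 0.0, 0.0])  # u1 = m*g, zero torques
print(A_inv @ u_hover)  # each rotor carries a quarter of the weight
```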
## Simulation
We now have enough data to simulate the system.
### Constants
I will use a master's thesis called [*Modelling, Identification and Control
of a Quadrotor Helicopter*](http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8847641&fileOId=8859343) by Tommaso Bresciani as a reference to fill in the constants.
### Constant Control
To ensure our model is doing what we expect, we will first simulate conditions where constant inputs are applied.
```python
# define our constants
constants = dict(
l = 235e-3, # 23.5 cm from CoM to rotor
m = 250e-3, # 250 g
Ixx = 8.1e-3, # Nms^2
Iyy = 8.1e-3, # Nms^2
Izz = 14.2e-3, # Nms^2
g = 9.81, # gravity, m/s^2,
kmkf = 50e-6 # km/kf - the ratio of the moment constant over the force constant
)
```
```python
# define our ivp function using our state space representation
def quadcopter(t, state, forces, constants):
u1 = sum(forces)
u2x = constants['l'] * (forces[1] - forces[3])
u2y = constants['l'] * (forces[2] - forces[0])
u2z = constants['kmkf'] * (forces[0] - forces[1] + forces[2] - forces[3])
phi = state[3]
theta = state[4]
psi = state[5]
u1_m = u1 / constants['m']
x_accel = (math.cos(psi)*theta + math.sin(psi)*phi) * u1_m
y_accel = (math.sin(psi)*theta - math.cos(psi)*phi) * u1_m
z_accel = u1_m - constants['g']
phi_accel = u2x / constants['Ixx']
theta_accel = u2y / constants['Iyy']
psi_accel = u2z / constants['Izz']
updated = np.zeros(12)
updated[0:6] = state[6:]
updated[6:] = [x_accel, y_accel, z_accel, phi_accel, theta_accel, psi_accel]
return updated
```
```python
# define a plotting function to show all the states
def plot_all(title, t, z):
plt.figure(figsize=(20, 24))
plt.subplot(4, 3, 1)
plt.plot(t, z.T[:,0], 'r')
plt.xlabel('time (s)')
plt.ylabel('X position (m)')
plt.title(title)
plt.grid(True)
plt.subplot(4, 3, 2)
plt.plot(t, z.T[:,1], 'b')
plt.xlabel('time (s)')
plt.ylabel('Y position (m)')
plt.grid(True)
plt.subplot(4, 3, 3)
plt.plot(t, z.T[:,2], 'g')
plt.xlabel('time (s)')
plt.ylabel('Z position (m)')
plt.grid(True)
plt.subplot(4, 3, 4)
plt.plot(t, z.T[:,3], 'y')
plt.xlabel('time (s)')
plt.ylabel('Pitch (radians)')
plt.grid(True)
plt.subplot(4, 3, 5)
plt.plot(t, z.T[:,4], 'm')
plt.xlabel('time (s)')
plt.ylabel('Roll (radians)')
plt.grid(True)
plt.subplot(4, 3, 6)
plt.plot(t, z.T[:,5], 'c')
plt.xlabel('time (s)')
plt.ylabel('Yaw (radians)')
plt.grid(True)
plt.subplot(4, 3, 7)
plt.plot(t, z.T[:,6], 'r')
plt.xlabel('time (s)')
plt.ylabel('X velocity (m/s)')
plt.grid(True)
plt.subplot(4, 3, 8)
plt.plot(t, z.T[:,7], 'b')
plt.xlabel('time (s)')
plt.ylabel('Y velocity (m/s)')
plt.grid(True)
plt.subplot(4, 3, 9)
plt.plot(t, z.T[:,8], 'g')
plt.xlabel('time (s)')
plt.ylabel('Z velocity (m/s)')
plt.grid(True)
plt.subplot(4, 3, 10)
plt.plot(t, z.T[:,9], 'y')
plt.xlabel('time (s)')
plt.ylabel('Pitch velocity (radians/s)')
plt.grid(True)
plt.subplot(4, 3, 11)
plt.plot(t, z.T[:,10], 'm')
plt.xlabel('time (s)')
plt.ylabel('Roll velocity (radians/s)')
plt.grid(True)
plt.subplot(4, 3, 12)
plt.plot(t, z.T[:,11], 'c')
plt.xlabel('time (s)')
plt.ylabel('Yaw velocity (radians/s)')
plt.grid(True)
plt.xlabel('time (s)')
plt.show()
```
```python
from scipy.integrate import solve_ivp
# let the quadcopter fall
forces = [0.0] * 4
# initial values for state vector
y0 = [0.0] * 12
args = (forces, constants)
sol = solve_ivp(quadcopter, [0, 10], y0, args=args, dense_output=True)
```
```python
t = np.linspace(0, 10, 100)
z = sol.sol(t)
# we expect the quadcopter to fall from the sky: z decreasing quadratically, zdot (velocity) growing linearly in the negative direction, and all other states staying 0
plot_all('Quadcopter Falling', t, z)
```
```python
# determine the force required to hover
force = constants['m'] * constants['g'] / 4
forces = [force] * 4
args = (forces, constants)
sol = solve_ivp(quadcopter, [0, 10], y0, args=args, dense_output=True)
t = np.linspace(0, 10, 100)
z = sol.sol(t)
# we expect the quadcopter to hover so all states should be 0
plot_all('Quadcopter Hovering', t, z)
```
```python
# apply forces such that the quadcopter will change its yaw
force = constants['m'] * constants['g'] / 4
delta_f = force * 0.1
forces = [force] * 4
forces[0] -= delta_f
forces[2] -= delta_f
forces[1] += delta_f
forces[3] += delta_f
args = (forces, constants)
sol = solve_ivp(quadcopter, [0, 10], y0, args=args, dense_output=True)
t = np.linspace(0, 10, 100)
z = sol.sol(t)
# we expect the quadcopter to hover but the yaw should be changing
plot_all('Quadcopter Changing Yaw', t, z)
```
```python
# apply forces such that the quadcopter will start translating and rotating
force = constants['m'] * constants['g'] / 4
delta_f = force * 0.0001
forces = [force] * 4
forces[0] += delta_f
forces[1] += delta_f
args = (forces, constants)
sol = solve_ivp(quadcopter, [0, 10], y0, args=args, dense_output=True)
t = np.linspace(0, 10, 100)
z = sol.sol(t)
# we expect the quadcopter to start moving and rotating, but yaw should stay 0
plot_all('Quadcopter Movement', t, z)
```
### PD Control
We can now redefine our function to include PD control.
Our trajectory generator will give us the desired x, y, z and yaw positions, velocities, and accelerations, allowing us to calculate the commanded values - which we derived earlier:
$\begin{bmatrix} \ddot{x_c} \\
\ddot{y_c} \\
\ddot{z_c}
\end{bmatrix} = \begin{bmatrix} \ddot{x_d} + K_{p,x}(x_d - x) + K_{d,x}(\dot{x_d} - \dot{x}) \\
\ddot{y_d} + K_{p,y}(y_d - y) + K_{d,y}(\dot{y_d} - \dot{y}) \\
\ddot{z_d} + K_{p,z}(z_d - z) + K_{d,z}(\dot{z_d} - \dot{z})
\end{bmatrix}$
and commanded rotation:
$\begin{bmatrix} \phi_c \\
\theta_c \\
\psi_c
\end{bmatrix} = \begin{bmatrix} \dfrac{\ddot{x_c}\sin{\psi_d} - \ddot{y_c}\cos{\psi_d}}{g} \\
\dfrac{\ddot{x_c}\cos{\psi_d} + \ddot{y_c}\sin{\psi_d}}{g} \\
\psi_d
\end{bmatrix}$
We can then calculate the following input $u$:
$\begin{bmatrix} u_1 \\
u_{2x} \\
u_{2y} \\
u_{2z}
\end{bmatrix} = \begin{bmatrix} m(g+\ddot{z_c}) \\
\ddot\phi_c + K_{p,\phi}(\phi_c - \phi) + K_{d,\phi}(\dot{\phi_c} - \dot{\phi}) \\
\ddot\theta_c + K_{p,\theta}(\theta_c - \theta) + K_{d,\theta}(\dot{\theta_c} - \dot{\theta}) \\
\ddot\psi_c + K_{p,\psi}(\psi_c - \psi) + K_{d,\psi}(\dot{\psi_c} - \dot{\psi})
\end{bmatrix}$
Once we have $u$ we can then solve for the individual rotor forces required.
#### Step Input and Hover
We will simplify the control loop such that we will give the quadcopter a step input - i.e. we will command the quadcopter to go to some point in 3D space with a particular yaw, with the assumption that the quadcopter should hover at this steady state. This is also ideal for tuning our PD controllers.
With a desired step input and hover steady state, we can zero out the commanded velocity and all further time derivatives:
$\begin{bmatrix} \ddot{x_c} \\
\ddot{y_c} \\
\ddot{z_c}
\end{bmatrix} = \begin{bmatrix} K_{p,x}(x_d - x) - K_{d,x}\dot{x} \\
K_{p,y}(y_d - y) - K_{d,y}\dot{y} \\
K_{p,z}(z_d - z) - K_{d,z}\dot{z} \end{bmatrix}$
$\begin{bmatrix} u_1 \\
u_{2x} \\
u_{2y} \\
u_{2z}
\end{bmatrix} = \begin{bmatrix} m(g+\ddot{z_c}) \\
K_{p,\phi}(\phi_c - \phi) - K_{d,\phi}\dot{\phi} \\
K_{p,\theta}(\theta_c-\theta) - K_{d,\theta}\dot{\theta} \\
K_{p,\psi}(\psi_c - \psi) - K_{d,\psi}\dot{\psi}
\end{bmatrix}$
```python
# define our ivp function using our state space representation, and calculating the forces dynamically using PD control
# we will assume a simple trajectory planner that will send a desired hover position - so commanded velocities and accelerations will be 0
def quadcopter_pd(t, state, constants, desired, pd):
# pull out desired values
xd, yd, zd, psid = desired[0], desired[1], desired[2], desired[3]
# pull out the pd tuples
pdx, pdy, pdz, pdphi, pdtheta, pdpsi = pd[0], pd[1], pd[2], pd[3], pd[4], pd[5]
# pull out the current state positions...
x, y, z, phi, theta, psi = state[0], state[1], state[2], state[3], state[4], state[5]
# ...and state velocities
xv, yv, zv, phiv, thetav, psiv = state[6], state[7], state[8], state[9], state[10], state[11]
# calculate commanded values
xac = pdx[0]*(xd-x) - pdx[1]*xv
yac = pdy[0]*(yd-y) - pdy[1]*yv
zac = pdz[0]*(zd-z) - pdz[1]*zv
phic = (xac*math.sin(psid) - yac*math.cos(psid)) / constants['g']
thetac = (xac*math.cos(psid) + yac*math.sin(psid)) / constants['g']
psic = psid
    # calculate u
u1 = constants['m'] * (constants['g'] + zac)
u2x = pdphi[0]*(phic - phi) - pdphi[1]*phiv
u2y = pdtheta[0]*(thetac - theta) - pdtheta[1]*thetav
u2z = pdpsi[0]*(psic - psi) - pdpsi[1]*psiv
# calculate updated accelerations
u1_m = u1 / constants['m']
x_accel = (math.cos(psi)*theta + math.sin(psi)*phi) * u1_m
y_accel = (math.sin(psi)*theta - math.cos(psi)*phi) * u1_m
z_accel = u1_m - constants['g']
phi_accel = u2x / constants['Ixx']
theta_accel = u2y / constants['Iyy']
psi_accel = u2z / constants['Izz']
updated = np.zeros(12)
updated[0:6] = state[6:]
updated[6:] = [x_accel, y_accel, z_accel, phi_accel, theta_accel, psi_accel]
return updated
```
```python
# x, y, z, yaw
desired = (1, 1, 1, 0.78)
# each tuple is a PD controller with the values as (P, D)
# we need PD controllers for x, y, z, pitch, roll and yaw
pd = [(3.5, 3.2), # x
(4.7, 4), # y
(7, 5), # z
(10, 0.5), # phi
(10, 0.5), # theta
(0.3, 0.25)] # psi
args = (constants, desired, pd)
sol = solve_ivp(quadcopter_pd, [0, 10], y0, args=args, dense_output=True)
t = np.linspace(0, 10, 100)
z = sol.sol(t)
# we expect the quadcopter to converge to the desired position and yaw, then hover there
plot_all(f'Quadcopter step input from (0, 0, 0, 0) to {desired}', t, z)
```
#### Putting into code
```python
# define our A matrix once and invert it to save processing time
A = np.array([[1, 1, 1, 1],
[0, constants['l'], 0, -constants['l']],
[-constants['l'], 0, constants['l'], 0],
[constants['kmkf'], -constants['kmkf'], constants['kmkf'], -constants['kmkf']]])
Ainv = np.linalg.inv(A)
def control_loop(state):
# 1. explode state into values
# 2. get desired values from trajectory generator
# 3. calculate commanded values using PID control values
# 4. calculate u
# solve for the rotor forces required
    u = np.array([[u1, u2x, u2y, u2z]]).transpose()
    f = Ainv @ u  # matrix-vector product; Ainv * u would multiply element-wise
    # lastly, apply the forces to a simulation or convert into a voltage to apply to a motor
    return f
```
# Discriminative Classification
G. Richards (2016,2018), based on materials from Connolly, VanderPlas, and Ivezic.
Last time we talked about how to do classification by mapping the full pdf of your parameter space. This time we will concentrate on methods that seek only to determine the **decision boundary**, so called [**discriminative classification**](https://en.wikipedia.org/wiki/Discriminative_model) methods.
As before, let's say that you have 2 blobs of data as shown below. In many cases, you might say "just draw a line between those two blobs that are well separated". So let's do exactly that in the example below. There are clearly lots of different lines that you could draw that would work. So, how do you do this *optimally*? And what if the blobs are not perfectly well separated?
```python
# Source: https://github.com/jakevdp/ESAC-stats-2014/blob/master/notebooks/04.1-Classification-SVMs.ipynb
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.datasets import make_blobs  # samples_generator was removed in newer scikit-learn versions
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring' ,edgecolor='k');
Xgrid = np.linspace(-1, 3.5)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(Xgrid, m * Xgrid + b, '-k')
plt.xlim(-1, 3.5)
```
## Support Vector Machines
This is where [Support Vector Machines (SVM)](https://en.wikipedia.org/wiki/Support_vector_machine) come in. We are going to define a hyperplane (a plane in $N-1$ dimensions) that maximizes the distance of the closest point from each class. This distance is the "margin". It is the width of the "cylinder" that you can put between the closest points that just barely touches the points in each class. The points that touch the margin are called **support vectors**. Obvious, right?
Once again, we have an algorithm that seems purposely named to frighten people away. Though I don't know that "Data-Supported Hyperplane" classification would be any better...
```python
Xgrid = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring', edgecolor='k')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * Xgrid+ b
plt.plot(Xgrid, yfit, '-k')
plt.fill_between(Xgrid, yfit - d, yfit + d, edgecolor='None', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5)
```
To make life easier, we'll assume that the classes are separable by a straight line and that the decision boundary is at 0, with the two edges at $-1$ and $1$, and define $y \in \{-1,1\}$.
The hyperplane which maximizes the margin is given by finding
> \begin{equation}
\max_{\beta_0,\beta}(m) \;\;\;
\mbox{subject to} \;\;\; \frac{1}{||\beta||} y_i ( \beta_0 + \beta^T x_i )
\geq m \,\,\, \forall \, i.
\end{equation}
The constraints can be written as $y_i ( \beta_0 + \beta^T x_i ) \geq m ||\beta|| $.
Thus the optimization problem is equivalent to minimizing
> \begin{equation}
\frac{1}{2} ||\beta|| \;\;\; \mbox{subject to} \;\;\; y_i
( \beta_0 + \beta^T x_i ) \geq 1 \,\,\, \forall \, i.
\end{equation}
This optimization is a _quadratic programming_ problem (quadratic objective function with linear constraints).
To make sure that we get through all the remaining classification algorithms, we'll skip over the mathematical details. You can read about them in Ivezic $\S$ 9.6 or in Karen Leighly's [classification lecture notes](http://seminar.ouml.org/lectures/classification/).
For realistic data sets where the decision boundary is not obvious we relax the assumption that the classes are linearly separable. This changes the minimization condition and puts bounds on the number of misclassifications (which we would obviously like to minimize).
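As a rough sketch of how that trade-off is exposed in scikit-learn (a toy example, separate from the blob data above): the regularization parameter `C` sets how heavily margin violations are penalized, so a small `C` gives a wide, tolerant margin with many support vectors, while a large `C` tries much harder to classify every training point correctly:
```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X_toy, y_toy = make_blobs(n_samples=100, centers=2, random_state=0, cluster_std=1.2)

for C in [0.01, 1.0, 100.0]:
    clf = SVC(kernel='linear', C=C).fit(X_toy, y_toy)
    # softer margins (small C) keep more points as support vectors
    print(C, len(clf.support_vectors_))
```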
Treating Scikit-Learn's algorithm as a black box, let's fit a Support Vector Machine Classifier to these points.
The Scikit-Learn implementation of SVM classification is [`SVC`](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) which looks like:
```python
from sklearn.svm import SVC
svm = SVC(kernel='linear')
svm.fit(X,y)
```
In order to better visualize what SVM is doing, let's create a convenience function that will plot the decision boundaries:
```python
def plot_svc_decision_function(clf, ax=None):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
u = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
v = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
yy, xx = np.meshgrid(v, u)
P = np.zeros_like(xx)
for i, ui in enumerate(u):
for j, vj in enumerate(v):
Xgrid = np.array([ui, vj])
P[i, j] = clf.decision_function(Xgrid.reshape(1,-1))
return ax.contour(xx, yy, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
#GTR: Not clear why we need the loops and can't just
# make the Xgrid array like we normally do.
```
Now let's plot the decision boundary and the support vectors, which are stored in the `support_vectors_` attribute of the classifier.
```python
plt.scatter(svm.support_vectors_[:, 0], svm.support_vectors_[:, 1], s=200, edgecolor='k', facecolor='w');
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring', edgecolor='k')
plot_svc_decision_function(svm)
```
Below is an example using the same data set from last time.
```python
%matplotlib inline
# Ivezic, Figure 9.10
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.svm import SVC
from astroML.decorators import pickle_results
from astroML.datasets import fetch_rrlyrae_combined
from astroML.utils import split_samples
from astroML.utils import completeness_contamination
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
X = X[:, [1, 0, 2, 3]] # rearrange columns for better 1-color results
# SVM takes several minutes to run, and is order[N^2]
# truncating the dataset can be useful for experimentation.
#X = X[::5]
#y = y[::5]
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25], random_state=0)
N_tot = len(y)
N_stars = np.sum(y == 0)
N_rrlyrae = N_tot - N_stars
N_train = len(y_train)
N_test = len(y_test)
N_plot = 5000 + N_rrlyrae
#----------------------------------------------------------------------
# Fit SVM
Ncolors = np.arange(1, X.shape[1] + 1)
#@pickle_results('SVM_rrlyrae.pkl')
def compute_SVM(Ncolors):
y_class = []
y_pred = []
for nc in Ncolors:
# perform support vector classification
svm = SVC(kernel='linear', C=1, class_weight='balanced')
svm.fit(X_train[:, :nc], y_train)
y_pred.append(svm.predict(X_test[:, :nc]))
y_class.append(svm)
return y_class, y_pred
y_class, y_pred = compute_SVM(Ncolors)
completeness, contamination = completeness_contamination(y_pred, y_test)
print "completeness", completeness
print "contamination", contamination
#------------------------------------------------------------
# compute the decision boundary
svm = y_class[1]
w = svm.coef_[0]
a = -w[0] / w[1]
yy = np.linspace(-0.1, 0.4)
xx = a * yy - svm.intercept_[0] / w[1]
#----------------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(15, 7))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0,
left=0.1, right=0.95, wspace=0.2)
# left plot: data and decision boundary
ax = fig.add_subplot(121)
ax.plot(xx, yy, '-k')
#Too many RR Lyrae to plot, so just show the last N_plot
im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:],
s=4, lw=0, cmap=plt.cm.binary, zorder=2)
im.set_clim(-0.5, 1)
ax.set_xlim(0.7, 1.35)
ax.set_ylim(-0.15, 0.4)
ax.set_xlabel('$u-g$')
ax.set_ylabel('$g-r$')
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
```
Some comments on these results:
- The median of a distribution is unaffected by large perturbations of outlying points, as long as those perturbations do not cross the boundary.
- In the same way, once the support vectors are determined, changes to the positions or numbers of points beyond the margin will not change the decision boundary. For this reason, SVM can be a very powerful tool for discriminative classification.
- This is why there is a high completeness compared to the other methods: it does not matter that the background sources outnumber the RR Lyrae stars by a factor of $\sim$200 to 1. It simply determines the best boundary between the small RR Lyrae clump and the large background clump.
- This completeness, however, comes at the cost of a relatively large contamination level.
Note that:
- SVM is not scale invariant, so it is often worth rescaling the data to [0,1] or whitening it to have a mean of 0 and variance 1 (remember to apply the same transformation to the test data as well!); a short sketch of this follows below
- The data don't need to be separable (we can put a constraint in minimizing the number of "failures")
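A minimal sketch of that rescaling step, using a pipeline so that the transformation learned from the training set is automatically reused on the test set (the kernel and parameter values are placeholders, not tuned for the RR Lyrae data):
```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# the scaler is fit on the training data only; predict() reuses the same transform
model = make_pipeline(StandardScaler(), SVC(kernel='rbf', gamma=1.0, C=1.0))
# model.fit(X_train, y_train)
# y_pred = model.predict(X_test)
```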
### Kernel Methods
If the contamination is driven by non-linear effects (which isn't the case here), it may be worth implementing a non-linear decision boundary. As before, we do that by *kernelization*.
Go to [Scikit-Learn SVM](http://scikit-learn.org/stable/modules/svm.html) (or better yet [this example](https://scikit-learn.org/stable/auto_examples/svm/plot_iris.html)) and see if you can figure out how to implement SVC with `kernel='rbf'` using the RR-Lyrae example above. (As in Figure 9.11) Also see what happens if you don't use `class_weight='balanced'`.
```python
%matplotlib inline
# Ivezic, Figure 9.10
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.svm import SVC
from astroML.decorators import pickle_results
from astroML.datasets import fetch_rrlyrae_combined
from astroML.utils import split_samples
from astroML.utils import completeness_contamination
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
X = X[:, [1, 0, 2, 3]] # rearrange columns for better 1-color results
# SVM takes several minutes to run, and is order[N^2]
# truncating the dataset can be useful for experimentation.
X = X[::5]
y = y[::5]
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25], random_state=0)
N_tot = len(y)
N_stars = np.sum(y == 0)
N_rrlyrae = N_tot - N_stars
N_train = len(y_train)
N_test = len(y_test)
N_plot = 5000 + N_rrlyrae
#----------------------------------------------------------------------
# Fit SVM
Ncolors = np.arange(1, X.shape[1] + 1)
#@pickle_results('SVM_rrlyrae.pkl')
def compute_SVM(Ncolors):
y_class = []
y_pred = []
for nc in Ncolors:
# perform support vector classification
#svm = SVC(kernel='linear', C=1, class_weight='balanced')
svm = SVC(___,gamma=___,C=___,class_weight=___) # Complete
svm.fit(X_train[:, :nc], y_train)
y_pred.append(svm.predict(X_test[:, :nc]))
y_class.append(svm)
return y_class, y_pred
y_class, y_pred = compute_SVM(Ncolors)
completeness, contamination = completeness_contamination(y_pred, y_test)
print "completeness", completeness
print "contamination", contamination
#------------------------------------------------------------
# compute the decision boundary
svm = y_class[1]
#w = svm.coef_[0]
#a = -w[0] / w[1]
#yy = np.linspace(-0.1, 0.4)
#xx = a * yy - svm.intercept_[0] / w[1]
#----------------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(15, 7))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0,
left=0.1, right=0.95, wspace=0.2)
# left plot: data and decision boundary
ax = fig.add_subplot(121)
#ax.plot(xx, yy, '-k')
im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:],
s=4, lw=0, cmap=plt.cm.binary, zorder=2)
plot_svc_decision_function(svm)
im.set_clim(-0.5, 1)
ax.set_xlim(0.7, 1.35)
ax.set_ylim(-0.15, 0.4)
ax.set_xlabel('$u-g$')
ax.set_ylabel('$g-r$')
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
```
Let's take a quick look at an example where the data are not linearly separable and where kernelization really makes a difference.
```python
from sklearn.datasets import make_circles  # samples_generator was removed in newer scikit-learn versions
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring', edgecolor='k')
plot_svc_decision_function(clf);
```
But we can make a transform of the data to *make* it linearly separable, for example with a simple **radial basis function** as shown below.
```python
# Transform X using a radial basis function
z = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
from ipywidgets import interact  # IPython.html.widgets is the old, deprecated location
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], z, c=y, s=50, cmap='spring', edgecolor='k')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plot_3D()
# GTR: Or even fancier with
# interact(plot_3D, elev=[-90, 90], azip=(-180, 180));
```
Now we can trivially separate these populations as shown below!
```python
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring', edgecolor='k')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
```
## Decision Trees
A [**decision tree**](https://en.wikipedia.org/wiki/Decision_tree) is similar to the process of classification that you might do by hand: define some criteria to separate the sample into 2 groups (not necessarily equal). Then take those sub-groups and do it again. Keep going until you reach a stopping point such as not having a minimum number of objects to split again. In short, we have done a hierarchical application of decision boundaries.
The tree structure is as follows:
- top node contains the entire data set
- at each branch the data are subdivided into two child nodes
- split is based on a predefined decision boundary (usually axis aligned)
- splitting repeats, recursively, until we reach a predefined stopping criteria
Below is a simple example of a decision tree.
```python
# Source: Jake VanderPlas, https://github.com/LocalGroupAstrostatistics2015/MachineLearning/blob/master/fig_code/figures.py
def plot_example_decision_tree():
fig = plt.figure(figsize=(10, 4))
ax = fig.add_axes([0, 0, 0.8, 1], frameon=False, xticks=[], yticks=[])
ax.set_title('Example Decision Tree: Animal Classification', size=24)
def text(ax, x, y, t, size=20, **kwargs):
ax.text(x, y, t,
ha='center', va='center', size=size,
bbox=dict(boxstyle='round', ec='k', fc='w'), **kwargs)
text(ax, 0.5, 0.9, "How big is\nthe animal?", 20)
text(ax, 0.3, 0.6, "Does the animal\nhave horns?", 18)
text(ax, 0.7, 0.6, "Does the animal\nhave two legs?", 18)
text(ax, 0.12, 0.3, "Are the horns\nlonger than 10cm?", 14)
text(ax, 0.38, 0.3, "Is the animal\nwearing a collar?", 14)
text(ax, 0.62, 0.3, "Does the animal\nhave wings?", 14)
text(ax, 0.88, 0.3, "Does the animal\nhave a tail?", 14)
text(ax, 0.4, 0.75, "> 1m", 12, alpha=0.4)
text(ax, 0.6, 0.75, "< 1m", 12, alpha=0.4)
text(ax, 0.21, 0.45, "yes", 12, alpha=0.4)
text(ax, 0.34, 0.45, "no", 12, alpha=0.4)
text(ax, 0.66, 0.45, "yes", 12, alpha=0.4)
text(ax, 0.79, 0.45, "no", 12, alpha=0.4)
ax.plot([0.3, 0.5, 0.7], [0.6, 0.9, 0.6], '-k')
ax.plot([0.12, 0.3, 0.38], [0.3, 0.6, 0.3], '-k')
ax.plot([0.62, 0.7, 0.88], [0.3, 0.6, 0.3], '-k')
ax.plot([0.0, 0.12, 0.20], [0.0, 0.3, 0.0], '--k')
ax.plot([0.28, 0.38, 0.48], [0.0, 0.3, 0.0], '--k')
ax.plot([0.52, 0.62, 0.72], [0.0, 0.3, 0.0], '--k')
ax.plot([0.8, 0.88, 1.0], [0.0, 0.3, 0.0], '--k')
ax.axis([0, 1, 0, 1])
plot_example_decision_tree()
```
<!-- For our RR Lyrae stars -->
The "leaf (terminal) nodes" record the fraction of points that have one classification or the other
Application of the tree to classification is simple (a series of binary decisions). The fraction of points from the training set classified as one class or the other (in the leaf node) defines the class associated with that leaf node.
The binary splitting makes this extremely efficient. The trick is to ask the *right* questions.
So, decision trees are simple to interpret (just a set of questions).
Scikit-learn implements the [`DecisionTreeClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) as follows:
```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
X = np.random.random((100,2))
y = (X[:,0] + X[:,1] > 1).astype(int)
dtree = DecisionTreeClassifier(max_depth=6)
dtree.fit(X,y)
y_pred = dtree.predict(X)
```
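Because the fitted tree really is just a set of questions, those questions can be printed and read directly; a small sketch (this assumes a reasonably recent scikit-learn, which provides `export_text`):
```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

X_demo = np.random.random((100, 2))
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 1).astype(int)

tree_demo = DecisionTreeClassifier(max_depth=3).fit(X_demo, y_demo)
print(export_text(tree_demo, feature_names=['x0', 'x1']))  # the if/else questions, one per line
```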
An example with our data set of RR Lyrae stars shows that it has moderately good completenees and contamination, but that, for this data set, it is not the optimal choice.
```python
# Ivezic, Figure 9.13
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from astroML.datasets import fetch_rrlyrae_combined
from astroML.utils import split_samples
from astroML.utils import completeness_contamination
#----------------------------------------------------------------------
# get data and split into training & testing sets
X, y = fetch_rrlyrae_combined()
X = X[:, [1, 0, 2, 3]] # rearrange columns for better 1-color results
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25], random_state=0)
N_tot = len(y)
N_stars = np.sum(y == 0)
N_rrlyrae = N_tot - N_stars
N_train = len(y_train)
N_test = len(y_test)
N_plot = 5000 + N_rrlyrae
#----------------------------------------------------------------------
# Fit Decision tree
Ncolors = np.arange(1, X.shape[1] + 1)
y_class = []
y_pred = []
Ncolors = np.arange(1, X.shape[1] + 1)
depths = [7, 12]
for depth in depths:
y_class.append([])
y_pred.append([])
for nc in Ncolors:
dt = DecisionTreeClassifier(random_state=0, max_depth=depth,
criterion='entropy')
dt.fit(X_train[:, :nc], y_train)
y_pred[-1].append(dt.predict(X_test[:, :nc]))
y_class[-1].append(dt)
completeness, contamination = completeness_contamination(y_pred, y_test)
print "completeness", completeness
print "contamination", contamination
#------------------------------------------------------------
# compute the decision boundary
dt = y_class[1][1]
xlim = (0.7, 1.35)
ylim = (-0.15, 0.4)
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 101),
np.linspace(ylim[0], ylim[1], 101))
#GTR Whoa!? Why is this yy, xx and not xx, yy????
#Ah, because the plot reverses the usual python order
xystack = np.vstack([yy.ravel(),xx.ravel()])
Xgrid = xystack.T
Z = dt.predict(Xgrid)
Z = Z.reshape(xx.shape)
#----------------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(13, 6))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0,
left=0.1, right=0.95, wspace=0.2)
# left plot: data and decision boundary
ax = fig.add_subplot(121)
im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:],
s=4, lw=0, cmap=plt.cm.binary, zorder=2)
im.set_clim(-0.5, 1)
ax.contour(xx, yy, Z, [0.5], colors='k')
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_xlabel('$u-g$')
ax.set_ylabel('$g-r$')
ax.text(0.02, 0.02, "depth = %i" % depths[1],
transform=ax.transAxes)
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness[0], 'o-k', ms=6, label="depth=%i" % depths[0])
ax.plot(Ncolors, completeness[1], '^--k', ms=6, label="depth=%i" % depths[1])
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination[0], 'o-k', ms=6, label="depth=%i" % depths[0])
ax.plot(Ncolors, contamination[1], '^--k', ms=6, label="depth=%i" % depths[1])
ax.legend(loc='lower right', bbox_to_anchor=(1.0, 0.79))
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
```
### Splitting Criteria
Now let's talk about the best ways to split the data. This is actually a really hard problem, which you can read about more in Ivezic $\S$ 9.7.1.
One way is to use the information content or entropy, $E(x)$, of the data
$$ E(x) = -\sum_i p_i(x) \ln (p_i(x)),$$
where $i$ is the class and $p_i(x)$ is the probability of that class
given the training data.
Another commonly used "loss function" (especially for categorical classification) is the Gini coefficient:
$$ G = \sum_i^k p_i(1-p_i).$$
It essentially estimates the probability of incorrect classification by choosing both a point and (separately) a class randomly from the data.
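A quick sketch of both quantities for a single node, given the fraction of points in each class (the numbers are purely illustrative):
```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # treat 0*log(0) as 0
    return -np.sum(p * np.log(p))

def gini(p):
    p = np.asarray(p, dtype=float)
    return np.sum(p * (1 - p))

print(entropy([0.9, 0.1]), gini([0.9, 0.1]))  # a fairly pure node: both are small
print(entropy([0.5, 0.5]), gini([0.5, 0.5]))  # a 50/50 node maximizes both
```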
Try changing the example above to use `criterion='gini'` and `min_samples_leaf=3`, see [here](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier) for all of the parameters.
Obviously, in constructing a decision tree, if your choice of stopping criteria is too loose, further splitting just ends up adding noise. So here is an example using cross-validation in order to optimize the depth of the tree (and to avoid overfitting).
Note that here we aren't classifying the objects into discrete categories; rather, we are predicting a continuous quantity, i.e., we are doing regression. In this particular case, we are using the colors of galaxies to predict their redshifts (distances).
```python
# Ivezic, Figure 9.14
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from astroML.datasets import fetch_sdss_specgals
#------------------------------------------------------------
# Fetch data and prepare it for the computation
data = fetch_sdss_specgals()
# put magnitudes in a matrix
mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T
z = data['z']
# train on ~60,000 points
mag_train = mag[::10]
z_train = z[::10]
# test on ~6,000 separate points
mag_test = mag[1::100]
z_test = z[1::100]
#------------------------------------------------------------
# Compute the cross-validation scores for several tree depths
depth = np.arange(1, 21)
rms_test = np.zeros(len(depth))
rms_train = np.zeros(len(depth))
i_best = 0
z_fit_best = None
for i, d in enumerate(depth):
clf = DecisionTreeRegressor(max_depth=d, random_state=0)
clf.fit(mag_train, z_train)
z_fit_train = clf.predict(mag_train)
z_fit_test = clf.predict(mag_test)
rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2))
rms_test[i] = np.mean(np.sqrt((z_fit_test - z_test) ** 2))
if rms_test[i] <= rms_test[i_best]:
i_best = i
z_fit_best = z_fit_test
best_depth = depth[i_best]
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(wspace=0.25,
left=0.1, right=0.95,
bottom=0.15, top=0.9)
# first panel: cross-validation
ax = fig.add_subplot(121)
ax.plot(depth, rms_test, '-k', label='cross-validation')
ax.plot(depth, rms_train, '--k', label='training set')
ax.set_xlabel('depth of tree')
ax.set_ylabel('rms error')
ax.yaxis.set_major_locator(plt.MultipleLocator(0.01))
ax.set_xlim(0, 21)
ax.set_ylim(0.009, 0.04)
ax.legend(loc=1)
# second panel: best-fit results
ax = fig.add_subplot(122)
ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')
ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')
ax.text(0.04, 0.96, "depth = %i\nrms = %.3f" % (best_depth, rms_test[i_best]),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlabel(r'$z_{\rm true}$')
ax.set_ylabel(r'$z_{\rm fit}$')
ax.set_xlim(-0.02, 0.4001)
ax.set_ylim(-0.02, 0.4001)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.1))
plt.show()
```
That's doing the cross-validation by hand; let's try it automatically, using data like the first example that we started with today.
```python
X, y = make_blobs(n_samples=500, centers=3,
random_state=0, cluster_std=1.50)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='spring',edgecolor='k')
```
```python
#from sklearn.grid_search import GridSearchCV
from sklearn.model_selection import GridSearchCV
clf = DecisionTreeClassifier()
drange = np.arange(1,21)
grid = GridSearchCV(clf, param_grid={'max_depth': drange}, cv=5)
grid.fit(X, y)
best = grid.best_params_['max_depth']
print("best parameter choice:", best)
```
Now plot the decision boundary
```python
dt = DecisionTreeClassifier(random_state=0, max_depth=best, criterion='entropy')
dt.fit(X, y)
xlim = (-4, 8)
ylim = (-6, 10)
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 51),
np.linspace(ylim[0], ylim[1], 51))
xystack = np.vstack([xx.ravel(),yy.ravel()])
Xgrid = xystack.T
Z = dt.predict(Xgrid)
Z = Z.reshape(xx.shape)
print(Z)
#----------------------------------------------------------------------
# plot the results
fig = plt.figure(figsize=(8, 8))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0,
left=0.1, right=0.95, wspace=0.2)
# left plot: data and decision boundary
ax = fig.add_subplot(111)
im = ax.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap=plt.cm.spring, zorder=2, edgecolor='k')
ax.contour(xx, yy, Z, [0.5], colors='k')
```
Note that Decision Trees are the basis for the rest of the material today. So it is useful to consider some of the [advantages and disadvantages](https://scikit-learn.org/stable/modules/tree.html).
## Ensemble Learning
You may have noticed that each of the classification methods that we have described so far has its strengths and weaknesses. You might wonder if we could gain something by some sort of averaging of weighted "voting". Such a process is what we call *ensemble learning*. We'll discuss two such processes: [**bagging**](https://en.wikipedia.org/wiki/Bootstrap_aggregating) and [**random forests**](https://en.wikipedia.org/wiki/Random_forest).
### Bagging
Bagging (short for *bootstrap aggregation*--a name, unlike SVM, which actually makes some sense) can significantly improve the performance of decision trees. In short, bagging averages the predictive results of a series of *bootstrap* samples.
Remember that instead of splitting the sample into training and test sets that do not overlap, bootstrap says to draw from the observed data set with replacement. So we select indices $j$ from the range $i=1,\ldots,N$ and this is our new sample. Some indices, $i$, will be repeated and we do this $B$ times.
For a sample of $N$ points in a training set, bagging generates $B$ equally sized bootstrap samples from which to estimate the function $f_i(x)$. The final estimator for $\hat{y}$, defined by bagging, is then
$$\hat{y} = f(x) = \frac{1}{B} \sum_i^B f_i(x).$$
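Scikit-learn packages this procedure as `BaggingClassifier`; a minimal sketch that bags $B = 50$ depth-5 decision trees on toy data (the data and settings are purely illustrative):
```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X_demo = np.random.random((200, 2))
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 1).astype(int)

# each of the 50 trees is trained on a bootstrap resample of the training set
bag = BaggingClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50,
                        bootstrap=True, random_state=0).fit(X_demo, y_demo)
y_pred = bag.predict(X_demo)
```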
### Random Forests
Random forests extend bagging by generating decision trees from the bootstrap samples. A interesting aspect of random forests is that the features on which to generate the tree are selected at random from the full set of features in the data (the number of features selected per split level is typically the square root of the total number of attributes, $\sqrt{D}$). The final classification from the random forest is based on the averaging of the classifications of each of the individual decision trees. So, you can literally give it the kitchen sink (including attributes that you might not otherwise think would be useful for classification).
Random forests help to overcome some of the limitations of decision trees.
As before, cross-validation can be used to determine the optimal depth. Generally the number of trees, $n$, that are chosen is the number at which the cross-validation error plateaus.
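For instance, a quick scan over the number of trees with cross-validation could look like the sketch below (toy data, illustrative only); on real data the score typically flattens out once enough trees have been added:
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X_demo = np.random.random((500, 4))
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 1).astype(int)

for n in [5, 10, 50, 100, 200]:
    scores = cross_val_score(RandomForestClassifier(n_estimators=n, random_state=0),
                             X_demo, y_demo, cv=5)
    print(n, scores.mean())
```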
Below we give the same example as above for estimation of galaxy redshifts, where Scikit-Learn's [`RandomForestClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) call looks as follows:
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
X = np.random.random((100,2))
y = (X[:,0] + X[:,1] > 1).astype(int)
ranfor = RandomForestClassifier(10)
ranfor.fit(X,y)
y_pred = ranfor.predict(X)
```
```python
# Ivezic, Figure 9.15
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from astroML.datasets import fetch_sdss_specgals
#------------------------------------------------------------
# Fetch and prepare the data
data = fetch_sdss_specgals()
# put magnitudes in a matrix
mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T
z = data['z']
# train on ~60,000 points
mag_train = mag[::10]
z_train = z[::10]
# test on ~6,000 distinct points
mag_test = mag[1::100]
z_test = z[1::100]
#------------------------------------------------------------
# Compute the results
def compute_photoz_forest(depth):
rms_test = np.zeros(len(depth))
rms_train = np.zeros(len(depth))
i_best = 0
z_fit_best = None
for i, d in enumerate(depth):
clf = RandomForestRegressor(n_estimators=10,
max_depth=d, random_state=0)
clf.fit(mag_train, z_train)
z_fit_train = clf.predict(mag_train)
z_fit = clf.predict(mag_test)
rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2))
rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2))
if rms_test[i] <= rms_test[i_best]:
i_best = i
z_fit_best = z_fit
return rms_test, rms_train, i_best, z_fit_best
depth = np.arange(1, 21)
rms_test, rms_train, i_best, z_fit_best = compute_photoz_forest(depth)
best_depth = depth[i_best]
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(13, 6))
fig.subplots_adjust(wspace=0.25,
left=0.1, right=0.95,
bottom=0.15, top=0.9)
# left panel: plot cross-validation results
ax = fig.add_subplot(121)
ax.plot(depth, rms_test, '-k', label='cross-validation')
ax.plot(depth, rms_train, '--k', label='training set')
ax.legend(loc=1)
ax.set_xlabel('depth of tree')
ax.set_ylabel('rms error')
ax.set_xlim(0, 21)
ax.set_ylim(0.009, 0.04)
ax.yaxis.set_major_locator(plt.MultipleLocator(0.01))
# right panel: plot best fit
ax = fig.add_subplot(122)
ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')
ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')
ax.text(0.03, 0.97, "depth = %i\nrms = %.3f" % (best_depth, rms_test[i_best]),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlabel(r'$z_{\rm true}$')
ax.set_ylabel(r'$z_{\rm fit}$')
ax.set_xlim(-0.02, 0.4001)
ax.set_ylim(-0.02, 0.4001)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.1))
plt.show()
```
How many attributes/features is the code currently using? Looking at [`RandomForestClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html), how might you use `max_features` to change this?
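One possibility (sketched below with untuned, illustrative values) is to set `max_features` explicitly, e.g. the classic $\sqrt{D}$ recipe versus considering every feature at each split:
```python
from sklearn.ensemble import RandomForestRegressor

# consider only a random subset of ~sqrt(D) features at each split (the classic random forest recipe)...
clf_sqrt = RandomForestRegressor(n_estimators=10, max_depth=10, max_features='sqrt', random_state=0)
# ...or consider every feature at each split
clf_all = RandomForestRegressor(n_estimators=10, max_depth=10, max_features=None, random_state=0)
```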
## Boosting
Boosting is an ensemble approach motivated by the idea that combining many weak classifiers can result in an improved classification. Boosting creates models that attempt to correct the errors of the ensemble so far. At the heart of boosting is the idea that we reweight the data based on how incorrectly the data were classified in the previous iteration.
We run the classification multiple times and each time reweight the data based on the previous performance of the classifier. At the end of this procedure we allow the classifiers to vote on the final classification. The most popular form of boosting is that of adaptive boosting. In this case we take a weak classifier, $h(x)$, and create a strong classifier, $f(x)$, as
$$ f(x) = \sum_m^B\theta_m h_m(x),$$
where $m$ is the number of iterations and $\theta_m$ is the weight of the classifier in each iteration.
If we chose $\theta_m=1/B$, then we'd essentially have bagging. For boosting the idea is to increase the weight of the misclassified data in each step.
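Scikit-learn provides adaptive boosting directly as `AdaBoostClassifier`; a minimal sketch on toy data (by default the weak learner $h_m$ is a depth-1 decision tree):
```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

X_demo = np.random.random((100, 2))
y_demo = (X_demo[:, 0] + X_demo[:, 1] > 1).astype(int)

adaboost = AdaBoostClassifier(n_estimators=50).fit(X_demo, y_demo)
y_pred = adaboost.predict(X_demo)
```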
A fundamental limitation of the boosted decision tree is the computation time for large data sets (they rely on a chain of classifiers which are each dependent on the last), whereas random forests can be easily parallelized.
The example given below actually uses Scikit-Learn's [`GradientBoostingClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html), in which each new tree is fit to approximate the steepest-descent (negative gradient) step of the loss for the ensemble built so far.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
X = np.random.random((100,2))
y = (X[:,0] + X[:,1] > 1).astype(int)
gradboost = GradientBoostingClassifier()
gradboost.fit(X,y)
y_pred = gradboost.predict(X)
```
```python
# Ivezic, Figure 9.16
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
from astroML.datasets import fetch_sdss_specgals
#------------------------------------------------------------
# Fetch and prepare the data
data = fetch_sdss_specgals()
# put magnitudes in a matrix
mag = np.vstack([data['modelMag_%s' % f] for f in 'ugriz']).T
z = data['z']
# train on ~60,000 points
mag_train = mag[::10]
z_train = z[::10]
# test on ~6,000 distinct points
mag_test = mag[1::100]
z_test = z[1::100]
#------------------------------------------------------------
# Compute the results
def compute_photoz_forest(N_boosts):
rms_test = np.zeros(len(N_boosts))
rms_train = np.zeros(len(N_boosts))
i_best = 0
z_fit_best = None
for i, Nb in enumerate(N_boosts):
try:
# older versions of scikit-learn
clf = GradientBoostingRegressor(n_estimators=Nb, learn_rate=0.1,
max_depth=3, max_features='sqrt', random_state=0)
except TypeError:
clf = GradientBoostingRegressor(n_estimators=Nb, learning_rate=0.1,
max_depth=3, max_features='sqrt', random_state=0)
clf.fit(mag_train, z_train)
z_fit_train = clf.predict(mag_train)
z_fit = clf.predict(mag_test)
rms_train[i] = np.mean(np.sqrt((z_fit_train - z_train) ** 2))
rms_test[i] = np.mean(np.sqrt((z_fit - z_test) ** 2))
if rms_test[i] <= rms_test[i_best]:
i_best = i
z_fit_best = z_fit
return rms_test, rms_train, i_best, z_fit_best
N_boosts = (10, 100, 200, 300, 400, 500)
rms_test, rms_train, i_best, z_fit_best = compute_photoz_forest(N_boosts)
best_N = N_boosts[i_best]
#------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(wspace=0.25,
left=0.1, right=0.95,
bottom=0.15, top=0.9)
# left panel: plot cross-validation results
ax = fig.add_subplot(121)
ax.plot(N_boosts, rms_test, '-k', label='cross-validation')
ax.plot(N_boosts, rms_train, '--k', label='training set')
ax.legend(loc=1)
ax.set_xlabel('number of boosts')
ax.set_ylabel('rms error')
ax.set_xlim(0, 510)
ax.set_ylim(0.009, 0.032)
ax.yaxis.set_major_locator(plt.MultipleLocator(0.01))
ax.text(0.03, 0.03, "Tree depth: 3",
ha='left', va='bottom', transform=ax.transAxes)
# right panel: plot best fit
ax = fig.add_subplot(122)
ax.scatter(z_test, z_fit_best, s=1, lw=0, c='k')
ax.plot([-0.1, 0.4], [-0.1, 0.4], ':k')
ax.text(0.04, 0.96, "N = %i\nrms = %.3f" % (best_N, rms_test[i_best]),
ha='left', va='top', transform=ax.transAxes)
ax.set_xlabel(r'$z_{\rm true}$')
ax.set_ylabel(r'$z_{\rm fit}$')
ax.set_xlim(-0.02, 0.4001)
ax.set_ylim(-0.02, 0.4001)
ax.xaxis.set_major_locator(plt.MultipleLocator(0.1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.1))
plt.show()
```
## Alright, alright but what the @#%! should I use?
A convenient cop-out: no single model can be known in advance to be the best classifier!
In general the level of accuracy increases for parametric models as:
- <b>naive Bayes</b>,
- linear discriminant analysis (LDA),
- logistic regression,
- linear support vector machines,
- quadratic discriminant analysis (QDA),
- linear ensembles of linear models.
For non-parametric models accuracy increases as:
- decision trees
- $K$-nearest-neighbor,
- neural networks
- kernel discriminant analysis,
- <b> kernelized support vector machines</b>
- <b> random forests</b>
- boosting
See also Ivezic, Table 9.1.
Naive Bayes and its variants are by far the easiest to compute. Linear support vector machines are more expensive, though several fast algorithms exist. Random forests can be easily parallelized.
We saw before that Scikit-learn has tools for computing ROC curves, which is implemented as follows.
```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn import metrics
X = np.random.random((100,2))
y = (X[:,0] + X[:,1] > 1).astype(int)
gnb = GaussianNB().fit(X,y)
y_prob = gnb.predict_proba(X)
# Compute precision/recall curve (score = probability of the positive class)
pr, re, thresh = metrics.precision_recall_curve(y, y_prob[:,1])
# Compute ROC curve (roc_curve returns fpr, tpr, thresholds, in that order)
fpr, tpr, thresh = metrics.roc_curve(y, y_prob[:,1])
```
Let's remember what they had to say:
Here's an example with a different data set. Here we are trying to distinguish quasars (in black) from stars (in grey)
```python
# Ivezic, 9.18
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from astroML.utils import split_samples
from sklearn.metrics import roc_curve
from sklearn.naive_bayes import GaussianNB
#from sklearn.lda import LDA
#from sklearn.qda import QDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
#from astroML.classification import GMMBayes
from sklearn.mixture import GaussianMixture
#------------------------------------------------------------
# Fetch data and split into training and test samples
from astroML.datasets import fetch_dr7_quasar
from astroML.datasets import fetch_sdss_sspp
quasars = fetch_dr7_quasar()
stars = fetch_sdss_sspp()
# Truncate data for speed
quasars = quasars[::5]
stars = stars[::5]
# stack colors into matrix X
Nqso = len(quasars)
Nstars = len(stars)
X = np.empty((Nqso + Nstars, 4), dtype=float)
X[:Nqso, 0] = quasars['mag_u'] - quasars['mag_g']
X[:Nqso, 1] = quasars['mag_g'] - quasars['mag_r']
X[:Nqso, 2] = quasars['mag_r'] - quasars['mag_i']
X[:Nqso, 3] = quasars['mag_i'] - quasars['mag_z']
X[Nqso:, 0] = stars['upsf'] - stars['gpsf']
X[Nqso:, 1] = stars['gpsf'] - stars['rpsf']
X[Nqso:, 2] = stars['rpsf'] - stars['ipsf']
X[Nqso:, 3] = stars['ipsf'] - stars['zpsf']
y = np.zeros(Nqso + Nstars, dtype=int)
y[:Nqso] = 1
# split into training and test sets
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.9, 0.1],
random_state=0)
#------------------------------------------------------------
# Compute fits for all the classifiers
def compute_results(*args):
names = []
probs = []
for classifier, kwargs in args:
        print(classifier.__name__)
        model = classifier(**kwargs)
        model.fit(X_train, y_train)  # fit on the training set only
y_prob = model.predict_proba(X_test)
names.append(classifier.__name__)
probs.append(y_prob[:, 1])
return names, probs
LRclass_weight = dict([(i, np.sum(y_train == i)) for i in (0, 1)])
names, probs = compute_results((GaussianNB, {}),
(LDA, {}),
(QDA, {}),
(LogisticRegression,
dict(class_weight=LRclass_weight)),
(KNeighborsClassifier,
dict(n_neighbors=10)),
(DecisionTreeClassifier,
dict(random_state=0, max_depth=12,
criterion='entropy')),
(GaussianMixture, dict(n_components=3, tol=1E-5,
covariance_type='full')))
#------------------------------------------------------------
# Plot results
fig = plt.figure(figsize=(13, 7))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.15, top=0.9, wspace=0.25)
# First axis shows the data
ax1 = fig.add_subplot(121)
im = ax1.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=4,
linewidths=0, edgecolors='none',
cmap=plt.cm.binary)
im.set_clim(-0.5, 1)
ax1.set_xlim(-0.5, 3.0)
ax1.set_ylim(-0.3, 1.4)
ax1.set_xlabel('$u - g$')
ax1.set_ylabel('$g - r$')
labels = dict(GaussianNB='GNB',
LinearDiscriminantAnalysis='LDA',
QuadraticDiscriminantAnalysis='QDA',
KNeighborsClassifier='KNN',
DecisionTreeClassifier='DT',
GaussianMixture='GMMB',
LogisticRegression='LR')
# Second axis shows the ROC curves
ax2 = fig.add_subplot(122)
for name, y_prob in zip(names, probs):
fpr, tpr, thresholds = roc_curve(y_test, y_prob)
fpr = np.concatenate([[0], fpr])
tpr = np.concatenate([[0], tpr])
ax2.plot(fpr, tpr, label=labels[name])
ax2.legend(loc=4)
ax2.set_xlabel('false positive rate')
ax2.set_ylabel('true positive rate')
ax2.set_xlim(0, 0.15)
ax2.set_ylim(0.6, 1.01)
ax2.xaxis.set_major_locator(plt.MaxNLocator(5))
plt.show()
```
Curiously GMMBayes went from being one of the best to one of the worst after I changed from using the deprecated GMMBayes to GaussianMixture. So, it is likely that the current input parameters are not optimal for that.
Add a `precision_recall` plot.
```python
# Ivezic, 9.18
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the following forum:
# https://groups.google.com/forum/#!forum/astroml-general
import numpy as np
from matplotlib import pyplot as plt
from astroML.utils import split_samples
from sklearn.metrics import roc_curve
from sklearn.metrics import ____ #Complete
from sklearn.naive_bayes import GaussianNB
#from sklearn.lda import LDA
#from sklearn.qda import QDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
#from astroML.classification import GMMBayes
from sklearn.mixture import GaussianMixture
#------------------------------------------------------------
# Fetch data and split into training and test samples
from astroML.datasets import fetch_dr7_quasar
from astroML.datasets import fetch_sdss_sspp
quasars = fetch_dr7_quasar()
stars = fetch_sdss_sspp()
# Truncate data for speed
quasars = quasars[::5]
stars = stars[::5]
# stack colors into matrix X
Nqso = len(quasars)
Nstars = len(stars)
X = np.empty((Nqso + Nstars, 4), dtype=float)
X[:Nqso, 0] = quasars['mag_u'] - quasars['mag_g']
X[:Nqso, 1] = quasars['mag_g'] - quasars['mag_r']
X[:Nqso, 2] = quasars['mag_r'] - quasars['mag_i']
X[:Nqso, 3] = quasars['mag_i'] - quasars['mag_z']
X[Nqso:, 0] = stars['upsf'] - stars['gpsf']
X[Nqso:, 1] = stars['gpsf'] - stars['rpsf']
X[Nqso:, 2] = stars['rpsf'] - stars['ipsf']
X[Nqso:, 3] = stars['ipsf'] - stars['zpsf']
y = np.zeros(Nqso + Nstars, dtype=int)
y[:Nqso] = 1
# split into training and test sets
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.9, 0.1],
random_state=0)
#------------------------------------------------------------
# Compute fits for all the classifiers
def compute_results(*args):
names = []
probs = []
for classifier, kwargs in args:
        print(classifier.__name__)
        model = classifier(**kwargs)
        model.fit(X_train, y_train)  # fit on the training set only
y_prob = model.predict_proba(X_test)
names.append(classifier.__name__)
probs.append(y_prob[:, 1])
return names, probs
LRclass_weight = dict([(i, np.sum(y_train == i)) for i in (0, 1)])
names, probs = compute_results((GaussianNB, {}),
(LDA, {}),
(QDA, {}),
(LogisticRegression,
dict(class_weight=LRclass_weight)),
(KNeighborsClassifier,
dict(n_neighbors=10)),
(DecisionTreeClassifier,
dict(random_state=0, max_depth=12,
criterion='entropy')),
(GaussianMixture, dict(n_components=3, tol=1E-5,
covariance_type='full')))
#------------------------------------------------------------
# Plot results
fig = plt.figure(figsize=(18, 7))
fig.subplots_adjust(left=0.1, right=0.95, bottom=0.15, top=0.9, wspace=0.25)
# First axis shows the data
ax1 = fig.add_subplot(131)
im = ax1.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=4,
linewidths=0, edgecolors='none',
cmap=plt.cm.binary)
im.set_clim(-0.5, 1)
ax1.set_xlim(-0.5, 3.0)
ax1.set_ylim(-0.3, 1.4)
ax1.set_xlabel('$u - g$')
ax1.set_ylabel('$g - r$')
labels = dict(GaussianNB='GNB',
LinearDiscriminantAnalysis='LDA',
QuadraticDiscriminantAnalysis='QDA',
KNeighborsClassifier='KNN',
DecisionTreeClassifier='DT',
GaussianMixture='GMMB',
LogisticRegression='LR')
# Second axis shows the ROC curves
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
for name, y_prob in zip(names, probs):
fpr, tpr, thresholds = roc_curve(y_test, y_prob)
precision, recall, thresholds2 = ____(____, ____) # Complete
fpr = np.concatenate([[0], fpr])
tpr = np.concatenate([[0], tpr])
precision = ___.___(___,___) # Complete
recall = ___.___(___,___) # Complete
ax2.plot(fpr, tpr, label=labels[name])
ax3.plot(____, ____, label=labels[name]) # Complete
ax2.legend(loc=4)
ax2.set_xlabel('false positive rate')
ax2.set_ylabel('true positive rate')
ax2.set_xlim(0, 0.15)
ax2.set_ylim(0.6, 1.01)
ax2.xaxis.set_major_locator(plt.MaxNLocator(5))
ax3.set_xlim(0.5, 1.01)
ax3.set_ylim(0.5, 1.01)
ax3.set_xlabel('recall')
ax3.set_ylabel('precision')
plt.show()
```
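One plausible way to fill in the blanks above, assuming the variables defined in that cell and `precision_recall_curve` from `sklearn.metrics` (scikit-learn already appends the endpoint with recall 0 and precision 1, so no extra padding is strictly needed):
```python
# Possible completion of the blanks (uses y_test, probs, names, labels, ax2, ax3 from the cell above)
from sklearn.metrics import precision_recall_curve

for name, y_prob in zip(names, probs):
    fpr, tpr, thresholds = roc_curve(y_test, y_prob)
    precision, recall, thresholds2 = precision_recall_curve(y_test, y_prob)

    fpr = np.concatenate([[0], fpr])
    tpr = np.concatenate([[0], tpr])

    ax2.plot(fpr, tpr, label=labels[name])
    ax3.plot(recall, precision, label=labels[name])
```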
| 7cda1f52207e774687eeebfaccd06942f37651a9 | 72,034 | ipynb | Jupyter Notebook | notebooks/Classification2.ipynb | gtrichards/PHYS_T480_F18 | b3ffcd9effb427a67ac0ed50695328f6a91b3f64 | [
"MIT"
] | 12 | 2018-12-26T20:19:42.000Z | 2022-02-10T04:10:00.000Z | notebooks/Classification2.ipynb | gtrichards/PHYS_T480_F18 | b3ffcd9effb427a67ac0ed50695328f6a91b3f64 | [
"MIT"
] | 1 | 2019-07-17T11:46:25.000Z | 2019-07-19T11:41:45.000Z | notebooks/Classification2.ipynb | gtrichards/PHYS_T480_F18 | b3ffcd9effb427a67ac0ed50695328f6a91b3f64 | [
"MIT"
] | 6 | 2018-09-24T00:44:04.000Z | 2020-05-24T02:07:01.000Z | 37.054527 | 660 | 0.548755 | true | 14,230 | Qwen/Qwen-72B | 1. YES
2. YES | 0.863392 | 0.901921 | 0.778711 | __label__eng_Latn | 0.868131 | 0.647538 |
## First Assignment
#### 1) Apply the appropriate string methods to the **x** variable (as '.upper') to change it exactly to: "$Dichlorodiphenyltrichloroethane$".
```python
x = "DiClOrod IFeNi lTRicLOr oETaNo DiChlorod iPHeny lTrichL oroEThaNe"
```
```python
y = x.replace(' ','')
print(y[27:].capitalize())
```
Dichlorodiphenyltrichloroethane
#### 2) Assign respectively the values: 'word', 15, 3.14 and 'list' to variables A, B, C and D in a single line of code. Then, print them in that same order on a single line separated by a space, using only one print statement.
```python
A, B, C, D = 'word', 15, 3.14, 'list'
print(f'{A} {B} {C} {D}')
```
word 15 3.14 list
#### 3) Use the **input()** function to receive an input in the form **'68.4 1.71'**, that is, two floating point numbers in a line separated by space. Then, assign these numbers to the variables **w** and **h** respectively, which represent an individual's weight and height (hint: take a look at the '.split()' method). With this data, calculate the individual's Body Mass Index (BMI) from the following relationship:
\begin{equation}
BMI = \dfrac{weight}{height^2}
\end{equation}
```python
weight, height = input("Enter weight and height, separated by space: ").split()
BMI = float(weight)/float(height)**2
print(BMI)
```
#### This value can also be classified according to the ranges in the BMI table below. Use conditional structures to classify and print the classification assigned to the individual.
(BMI classification table; source: https://healthtravelguide.com/bmi-calculator/)
```python
if BMI < 18.5:
print("Underweight")
elif BMI <= 24.9:
print("Normal weight")
elif BMI <= 29.9:
print("Pre-obesity")
elif BMI <= 34.9:
print("Obesity class I")
elif BMI <= 39.9:
print("Obesity class II")
else:
print("Obesity class III")
```
Obesity class III
#### 4) Receive an integer as an input and, using a loop, calculate the factorial of this number, that is, the product of all the integers from one to the number provided.
```python
number = int(input("Insert an integer:"))
fact = 1
for i in range(1, number+1, 1):
fact = fact * i
print(fact)
```
Insert an integer:14
87178291200
#### 5) Using a while loop and the input function, read an indefinite number of integers until the number read is -1. Present the sum of all these numbers in the form of a print, excluding the -1 read at the end.
```python
sum_input, number_input = 0, 0
while number_input != -1:
sum_input += number_input
number_input = int(input("Enter an integer: "))
print(f"Sum of input = {sum_input}")
```
Enter an integer: 4
Enter an integer: 6
Enter an integer: -1
Sum of input = 10
#### 6) Read the **first name** of an employee, his **amount of hours worked** and the **salary per hour** in a single line separated by commas. Next, calculate the **total salary** for this employee and show it to two decimal places.
```python
name, hours, salary = input("Insert name of the employee, amount of hours worked and salary per hour, separated by commas:").split(",")
totalSalary = float(hours)*float(salary)
print(f"{name} earns {round(totalSalary,2)}")
```
Insert name of the employee, amount of hours worked and salary per hour, separated by commas:Heinz, 23, 193.440021
Heinz earns 4449.12
#### 7) Read three floating point values **A**, **B** and **C** respectively. Then calculate items a, b, c, d and e:
```python
x, y, z = input("Enter the values for A, B, C separated by comma").split(",")
A, B, C = float(x), float(y), float(z)
```
a) the area of the right triangle with A as the base and C as the height.
```python
print("Area of the triangle:", A*C/2)
```
Area of the triangle: 5.0
b) the area of the circle of radius C. (pi = 3.14159)
```python
print("Area of the circle:", 3.14159*C**2)
```
Area of the circle: 78.53975
c) the area of the trapezoid that has A and B for bases and C for height.
```python
print("Area of the trapezoid:", (A+B)*C/2)
```
d) the area of the square that has side B.
```python
print("Area of the square:", B**2)
```
Area of the square: 9.0
e) the area of the rectangle that has sides A and B.
```python
print("Area of the rectangle:", A*B)
```
Area of the rectangle: 6.0
#### 8) Read **the values a, b and c** and calculate the **roots of the second degree equation** $ax^{2}+bx+c=0$ using [this formula](https://en.wikipedia.org/wiki/Quadratic_equation). If it is not possible to calculate the roots, display the message **“There are no real roots”**.
```python
a, b, c = [float(x) for x in input("Enter the values of a, b and c, separated by commas: ").split(",")]
root = (b**2) - (4*a*c)
if root < 0:
print("There are no real roots")
elif root == 0:
x = -b/(2*a)
print(f"Root: {x}")
else:
    x1 = (-b + root**0.5)/(2*a)
    x2 = (-b - root**0.5)/(2*a)
print(f"Roots: {x1}, {x2}")
```
Enter the values of a, b and c, separated by commas: 2, 6666, 1
Roots: 493629481513409.5, -493629481516742.5
#### 9) Read four floating point values corresponding to the coordinates of two points in the cartesian plane. Each point will come in a line with its coordinates separated by space. Then calculate and show the distance between these two points.
(obs: $d=\sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}$)
```python
x1, x2 = [float(a) for a in input("Enter coordinates x1 and x2, separated by comma ").split(",")]
y1, y2 = [float(a) for a in input("Enter coordinates y1 and y2: separated by comma ").split(",")]
d = ((x1-x2)**2+(y1-y2)**2)**0.5
print(f"d = {d}")
```
Enter coordinates x1 and x2, separated by comma 2, 6
Enter coordinates y1 and y2: separated by comma 6, 2
d = 5.656854249492381
#### 10) Read **two floating point numbers** on a line that represent **coordinates of a cartesian point**. With this, use **conditional structures** to determine if you are at the origin, printing the message **'origin'**; in one of the axes, printing **'x axis'** or **'y axis'**; or in one of the four quadrants, printing **'q1'**, **'q2**', **'q3'** or **'q4'**.
```python
x, y = [float(x) for x in input("Insert a float for x, y: ").split(",")]
if x == 0 and y == 0:
print('Origin')
elif x == 0:
print('y axis')
elif y == 0:
print('x axis')
elif x > 0 and y > 0:
print('q1')
elif x > 0:
print('q4')
elif y > 0:
print('q2')
else:
print('q3')
```
#### 11) Read an integer that represents a phone code for international dialing.
#### Then, state which country the code belongs to, considering the table generated below:
(You just need to consider the first 10 entries)
```python
import pandas as pd
df = pd.read_html('https://en.wikipedia.org/wiki/Telephone_numbers_in_Europe')[1]
df = df.iloc[:,:2]
df.head(20)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Country</th>
<th>Country calling code</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>Austria</td>
<td>43</td>
</tr>
<tr>
<th>1</th>
<td>Belgium</td>
<td>32</td>
</tr>
<tr>
<th>2</th>
<td>Bulgaria</td>
<td>359</td>
</tr>
<tr>
<th>3</th>
<td>Croatia</td>
<td>385</td>
</tr>
<tr>
<th>4</th>
<td>Cyprus</td>
<td>357</td>
</tr>
<tr>
<th>5</th>
<td>Czech Republic</td>
<td>420</td>
</tr>
<tr>
<th>6</th>
<td>Denmark</td>
<td>45</td>
</tr>
<tr>
<th>7</th>
<td>Estonia</td>
<td>372</td>
</tr>
<tr>
<th>8</th>
<td>Finland</td>
<td>358</td>
</tr>
<tr>
<th>9</th>
<td>France</td>
<td>33</td>
</tr>
<tr>
<th>10</th>
<td>Germany</td>
<td>49</td>
</tr>
<tr>
<th>11</th>
<td>Greece</td>
<td>30</td>
</tr>
<tr>
<th>12</th>
<td>Hungary</td>
<td>36</td>
</tr>
<tr>
<th>13</th>
<td>Iceland</td>
<td>354</td>
</tr>
<tr>
<th>14</th>
<td>Ireland</td>
<td>353</td>
</tr>
<tr>
<th>15</th>
<td>Italy</td>
<td>39</td>
</tr>
<tr>
<th>16</th>
<td>Latvia</td>
<td>371</td>
</tr>
<tr>
<th>17</th>
<td>Liechtenstein</td>
<td>423</td>
</tr>
<tr>
<th>18</th>
<td>Lithuania</td>
<td>370</td>
</tr>
<tr>
<th>19</th>
<td>Luxembourg</td>
<td>352</td>
</tr>
</tbody>
</table>
</div>
```python
country_code = int(input("Enter the country calling code:"))
code_dict = {43:'Austria',
32:'Belgium',
359:'Bulgaria',
385:'Croatia',
357:'Cyprus',
420:'Czech Republic',
45:'Denmark',
372:'Estonia',
             358:'Finland',
33:'France',
}
if country_code in code_dict.keys():
print(code_dict[country_code])
else:
print('Cannot find country.')
```
Enter the country calling code:33
France
#### 12) Write a piece of code that reads 6 numbers in a row. Next, show the number of positive values entered. On the next line, print the average of the values to one decimal place.
```python
numbers = [float(x) for x in input("Enter six values separated by commas: ").split(",")]
pos, total = 0, 0
for i in numbers:
total += i
if i > 0:
pos += 1
print(f"{pos} positive numbers")
print(f"Average = {round(total/6 , 1)}")
```
Enter six values separated by commas: 5, 2, 5, 7, 2, 0
5 positive numbers
Average = 3.5
#### 13) Read an integer **N**. Then print the **square of each of the even values**, from 1 to N, including N, if applicable, arranged one per line.
```python
N = int(input("Enter integer: "))
for x in range(1,N+1,1):
if x%2==0:
print(x**2)
```
Enter integer: 10
4
16
36
64
100
#### 14) Using **input()**, read an integer and print its classification as **'even / odd'** and **'positive / negative'** . The two classes for the number must be printed on the same line separated by a space. In the case of zero, print only **'null'**.
```python
number = int(input("Enter an integer:"))
class_even = "even" if number%2 == 0 else "odd"
class_pos = "positive" if number == abs(number) else "negative"
if number == 0:
print("null")
else:
print(class_even, class_pos)
```
Enter an integer:-15
odd negative
## Challenge
#### 15) Ordering problems are recurrent in the history of programming. Over time, several algorithms have been developed to fulfill this function. The simplest of these algorithms is the [**Bubble Sort**](https://en.wikipedia.org/wiki/Bubble_sort), which is based on pairwise comparisons of elements in a loop of passes through the list. Your mission, if you decide to accept it, will be to input six whole numbers in random order. Then implement the **Bubble Sort** principle to order these six numbers **using only loops and conditionals** (one possible solution sketch is given after the empty cell below).
#### At the end, print the six numbers in ascending order on a single line separated by spaces.
```python
```
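A minimal sketch of one possible solution (reading the six integers from a single space-separated line):
```python
# Bubble sort sketch: repeatedly sweep the list, swapping adjacent out-of-order pairs
numbers = [int(x) for x in input("Enter six integers separated by spaces: ").split()]

for i in range(len(numbers) - 1):
    for j in range(len(numbers) - 1 - i):
        if numbers[j] > numbers[j + 1]:
            numbers[j], numbers[j + 1] = numbers[j + 1], numbers[j]

print(' '.join(str(n) for n in numbers))
```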
| 19d84d6fbd2ad03cb3a9fdda6cc7f921dd902567 | 25,673 | ipynb | Jupyter Notebook | Assigments/Assignment_1.ipynb | stkiesling/Python_Course | 57e953677c9d5913da6a7744ca82e2eaf66c2638 | [
"Apache-2.0"
] | null | null | null | Assigments/Assignment_1.ipynb | stkiesling/Python_Course | 57e953677c9d5913da6a7744ca82e2eaf66c2638 | [
"Apache-2.0"
] | null | null | null | Assigments/Assignment_1.ipynb | stkiesling/Python_Course | 57e953677c9d5913da6a7744ca82e2eaf66c2638 | [
"Apache-2.0"
] | null | null | null | 31.773515 | 1,482 | 0.500682 | true | 3,528 | Qwen/Qwen-72B | 1. YES
2. YES | 0.857768 | 0.888759 | 0.762349 | __label__eng_Latn | 0.970853 | 0.609524 |
```python
import sympy as sp
```
```python
v0,l,x,y = sp.symbols('V_0 L x y')
```
```python
eq1 = sp.Eq(x**2/(l/(2*v0))**2+(y-v0)**2,v0**2)
eq1
```
$\displaystyle \left(- V_{0} + y\right)^{2} + \frac{4 V_{0}^{2} x^{2}}{L^{2}} = V_{0}^{2}$
```python
eq = sp.solve(eq1,y)[0]
sp.simplify(eq)
```
$\displaystyle \frac{V_{0} \left(L - \sqrt{L^{2} - 4 x^{2}}\right)}{L}$
```python
```
| 0db3e54a757fb9eb8dcf4975998083cb2009b2a0 | 1,845 | ipynb | Jupyter Notebook | Potentials/semiellipticalpotential.ipynb | ethank5149/Quantum-Mechanics | 71e1c2a47b8a399bf0ba7e07bb0dcbaa4a2068bd | [
"MIT"
] | null | null | null | Potentials/semiellipticalpotential.ipynb | ethank5149/Quantum-Mechanics | 71e1c2a47b8a399bf0ba7e07bb0dcbaa4a2068bd | [
"MIT"
] | null | null | null | Potentials/semiellipticalpotential.ipynb | ethank5149/Quantum-Mechanics | 71e1c2a47b8a399bf0ba7e07bb0dcbaa4a2068bd | [
"MIT"
] | null | null | null | 20.5 | 116 | 0.478049 | true | 178 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92523 | 0.757794 | 0.701134 | __label__yue_Hant | 0.428198 | 0.467301 |
# <center> Neutrino-Driven Wind Transsonic Velocity Solver </center>
<center>By Brian Nevins</center>
Image from: <a href="https://www.newsweek.com/weird-neutron-star-shouldnt-exist-discovered-scientists-1140445"> Newsweek </a>
---
# Authors
Brian Nevins<br>
Dr. Luke Roberts, Michigan State University
---
# Abstract
When a massive star, between 10 and ~29 times the mass of our sun, dies, most of its outer layers are blown out to space in a massive explosion called a supernova. The inner core, about 1.4 solar masses, forms a neutron star - essentially a giant, super-dense atomic nucleus with a radius of roughly 10km. In a short period after the supernova explosion, some of the outer material from the dead star falls back onto the surface of the neutron star. This material is then blown back into space by the huge numbers of neutrinos being released from the newly formed neutron star, as what is known as the neutrino-driven wind.
The neutrino-driven wind can be well approximated as a steady-state, time independent system, with a constant mass loss rate. The hydrodynamic equations governing the wind are:
\begin{equation}
\dot{M}=4\pi r^2 v \rho
\end{equation}
\begin{equation}
v \frac{\partial v}{\partial r}=-\frac{1}{\rho}\frac{\partial P}{\partial r}-\frac{G M}{r^2}
\end{equation}
\begin{equation}
v \frac{\partial s}{\partial r}=\frac{S_\epsilon}{n_B T}
\end{equation}
and the equation of state for the wind, which gives the specific pressure ($p$), speed of sound ($c_s$), and specific entropy ($s$). For this code we use a simple gamma law equation of state, where
\begin{equation}
p=\rho T
\end{equation}
\begin{equation}
c_s^2=\gamma T
\end{equation}
\begin{equation}
s=\frac{1}{\gamma-1}\ln(T \rho^{1-\gamma})
\end{equation}
The governing equations can then be rewritten in terms of a timelike variable $\psi$, with respect to which we can computationally integrate using a Runge-Kutta method. The governing equations, in terms of dimensionless variables $x=\ln(\frac{r}{r_0}), u=\ln(\frac{v}{c_s}), w=\ln(\frac{T}{T_0})$, become:
\begin{equation}
\frac{dx}{d\psi}=f_1
\end{equation}
\begin{equation}
\frac{du}{d\psi}=f_2
\end{equation}
\begin{equation}
\frac{dw}{d\psi}=-(\gamma-1)(2f_1+f_2)
\end{equation}
where $f_1=1-e^{2u-w},$ $f_2=a e^{-w-x}-2$, and $a=\frac{GM}{c_s^2 r_0}$. Numerically, $a\approx.1$ from an order-of-magnitude calculation. We integrate this with an adaptive RK4 method, specifying starting values of $r=r_0$ and $T=T_0$. Our independent variable is the starting value of $v=v_0$, which we vary in order to find a realistic solution curve when we plot $v$ vs $r$. We can then use $v_0$ to find the mass loss rate $\dot{M}$.
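For readers who want to see the structure of this system in code, here is a minimal sketch of the dimensionless right-hand side together with a single fixed-step classical RK4 update. This is not the package's actual implementation (which uses an adaptive step; see `Adiabatic_wind_solver.py`), and the values of `a` and the gamma-law index below are assumptions for illustration:
```python
import numpy as np

a = 0.1        # GM/(c_s^2 r_0), order-of-magnitude estimate quoted above
gamma = 5/3    # assumed gamma-law index

def rhs(state):
    """Right-hand side of the dimensionless wind ODEs for state = [x, u, w]."""
    x, u, w = state
    f1 = 1.0 - np.exp(2.0*u - w)
    f2 = a*np.exp(-w - x) - 2.0
    return np.array([f1, f2, -(gamma - 1.0)*(2.0*f1 + f2)])

def rk4_step(state, dpsi):
    """Advance the state vector by one classical RK4 step of size dpsi."""
    k1 = rhs(state)
    k2 = rhs(state + 0.5*dpsi*k1)
    k3 = rhs(state + 0.5*dpsi*k2)
    k4 = rhs(state + dpsi*k3)
    return state + dpsi*(k1 + 2.0*k2 + 2.0*k3 + k4)/6.0
```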
----
# Statement of Need
The neutrino-driven wind is a site for interesting nucleosynthesis reactions. The nuclei that can be formed depend on a number of parameters, especially the mass loss rate. This code will be useful in determining mass loss rates for different equations of state, and can be adapted to incorporate other parameters such as secondary heating to determine their effect on nucleosynthesis.
----
# Installation instructions
This code can be run as-is if the git repository is properly cloned. The Adiabatic_wind_solver.py file, which contains the main code, is self-contained and can be moved as needed. It imports the following modules:<br>
numpy<br>
matplotlib.pyplot<br>
os (standard library)<br>
contextlib (standard library)<br>
IPython.display (ships with Jupyter/IPython)<br>
Of these, only numpy and matplotlib need to be installed via pip.
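A typical install command, run from a notebook cell (an illustration, not taken from the repository):
```python
!pip install numpy matplotlib
```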
----
# Tests
Several testing methods are included in the test_Adiabatic_wind_solver.py file. These tests check several of the methods in the main python file. The most important tests are the last two, which check the RK method and the bisection search for the critical velocity for simple cases and compare the results to known values. If these do not succeed, something has been changed that significantly alters the critical velocity for multiple gamma values. These tests can be run by entering the pytest command in the main folder of the project.
```python
!pytest
```
============================= test session starts =============================
platform win32 -- Python 3.7.3, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: D:\Documents\Brian\Important\MSU\Neutrino-driven-winds-research\neutrino-winds
plugins: arraydiff-0.3, doctestplus-0.3.0, openfiles-0.3.2, remotedata-0.3.2
collected 4 items
tests\test_Adiabatic_wind_solver.py .... [100%]
========================== 4 passed in 66.44 seconds ==========================
---
# Example Usage
This code is designed to approximate the critical transsonic velocity of the neutrino-driven wind from the surface of a proto-neutron star, where the wind follows a gamma law equation of state. This is accomplished by evolving a set of ODEs until a sign change occurs in one of two characteristic functions of the system $f_1$ and $f_2$. The critical velocity is found by adjusting the starting velocity of the wind so that the two functions cross zero at the same time using a bisection method.
The difference in the zeros of these functions is easy to see in the isothermal case $(\gamma = 1)$. The curves that approach zero at large r represent "breeze" solutions, where material is lifted off the surface of the star but does not have sufficient energy to escape, and falls back to the surface. The curves that bend upward are nonphysical. The behavior of the curves is determined by which function crosses zero first.
```python
%matplotlib inline
import Adiabatic_wind_solver as aws
s=aws.solver(1,10)
s.makePlots(.001,.01,.0005,False,20,5);
```
There exists a solution between these two curve sets for which the two functions cross zero at the same time. We use a bisection method to determine the bounds on the critical velocity for this to take place, as shown below.
```python
v0avg=s.findV0(.001,.006,.0001)
s.makePlot(v0avg)
```
This solution does not return to zero, and represents a true wind solution where the material blown off continues out into space.
We can also determine the critical velocity for an isentropic wind that follows an ideal gas equation of state $(\gamma=5/3)$. It is harder to see the different solution sets, but the bisection method is just as effective for finding the critical velocity.
```python
s1=aws.solver(5/3,.1)
v0=s1.findV0(.0001,.009,.0001)
s1.makePlot(v0,xrange=50)
```
We can also see how the critical velocity depends on the value of $\gamma$, for a general gamma law equation of state. The gammaSearch function simply iterates through a given range of $\gamma$ values, finds the critical velocity, and plots those velocities. The system seems to destabilize just below $\gamma=1.5$, as the critical velocity drops dramatically to zero.
```python
#This function takes a long time to evaluate
g=s1.gammaSearch(a=.1,g0=10,dg=-.01,glim=1,lower=.01,upper=.9,itermax=100)
```
---
# Methodology
My original proposal outlined several goals surrounding the integration and visualization of the ODEs, including a minimization algorithm suggested by my advisor. All of these goals were met, and the minimization algorithm (the search method for the critical velocity) was implemented. Useful visuals were generated, as I suggested would be beneficial. These visuals were not generated in real-time, as I initially thought, but the primary code is fast enough that this did not seem necessary. The overarching goal of the project was to integrate the ODEs and find the critical velocity, and that was accomplished. I did not deviate significantly from my project proposal. I did go a step further, by investigating how the critical velocity depended on $\gamma$ for a gamma law equation of state, which fits into the broader goal of seeing how solutions to the ODEs depend on the various physics involved.
---
# Concluding Remarks
I successfully found critical velocities for winds that follow a gamma law equation of state, of which the isothermal case is a member with $\gamma=1$. All of my stated project goals were accomplished. Future work will involve generalizing this code for any equation of state, to more accurately model real neutron stars. I would like to make inputting this equation of state as intuitive as possible, which may involve using symbolic python or something similar. It will also be helpful to numerically convert the critical velocity I found to the mass loss rate it corresponds to. Although finding the critical velocity is fairly fast in python, my investigation of the relationship between $\gamma$ and the critical velocity shows that increasing efficiency should also be a goal for future work. This will likely involve converting my code into C, and then writing a python wrapper for easier use.
----
# References
Lamers, Henny J. G. L. M., and Joseph P. Cassinelli. <i>Introduction to Stellar Winds</i>. Cambridge University Press, 1999.
Roberts, Luke. <i>Notes on the Basic Physics of the Neutrino Driven Wind</i>. 18 June 2017.
Arcones, A., and F. K. Thielemann. “Neutrino-Driven Wind Simulations and Nucleosynthesis of Heavy Elements.” Journal of Physics G: Nuclear and Particle Physics, vol. 40, no. 1, Jan. 2013, p. 013201. arXiv.org, doi:10.1088/0954-3899/40/1/013201.
Code is stored in a repository on <a href="https://github.com/bnevs88/neutrino-winds.git">Github</a>.
```python
```
| 1bb22b181d717149799235646a281d82ab3ee00f | 91,604 | ipynb | Jupyter Notebook | neutrino-winds/Final Project Report.ipynb | colbrydi/neutrino-winds | 0088a0568841cda00ee8303b797d05be9feab844 | [
"BSD-3-Clause"
] | null | null | null | neutrino-winds/Final Project Report.ipynb | colbrydi/neutrino-winds | 0088a0568841cda00ee8303b797d05be9feab844 | [
"BSD-3-Clause"
] | null | null | null | neutrino-winds/Final Project Report.ipynb | colbrydi/neutrino-winds | 0088a0568841cda00ee8303b797d05be9feab844 | [
"BSD-3-Clause"
] | null | null | null | 243.62766 | 42,804 | 0.91103 | true | 2,354 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.824462 | 0.681592 | __label__eng_Latn | 0.997967 | 0.421899 |
###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi
# Bird's-eye view of mechanical vibrations
Welcome to **Lesson 4** of the third module in _Engineering Computations_. This course module is dedicated to studying the dynamics of change with computational thinking, Python and Jupyter.
The first three lessons give you a solid footing to tackle problems involving motion, velocity, and acceleration. They are:
1. [Lesson 1](http://go.gwu.edu/engcomp3lesson1): Catch things in motion
2. [Lesson 2](http://go.gwu.edu/engcomp3lesson2): Step to the future
3. [Lesson 3](http://go.gwu.edu/engcomp3lesson3): Get with the oscillations
You learned to compute velocity and acceleration from position data, using numerical derivatives, and to capture position data of moving objects from images and video. For physical contexts we used free-fall of a ball, and projectile motion. Then you faced the opposite challenge: computing velocity and position from acceleration data, leading to the idea of stepping forward in time to solve a differential equation.
Our general approach combines these key ideas: (1) turning a second-order differential equation into a system of first-order equations; (2) writing the system in vector form, and the solution in terms of a state vector; (3) designing a code to obtain the solution using separate functions to compute the derivatives of the state vector, and to step the system in time with a chosen scheme (e.g., Euler, Euler-Cromer, Runge-Kutta). It's a rock-steady approach that will serve you well!
In this lesson, you'll get a broader view of applying your new-found skills to learn about mechanical vibrations: a classic engineering problem. You'll study general spring-mass systems with damping and a driving force, and appreciate the diversity of behaviors that arise. We'll end the lesson presenting a powerful method to study dynamical systems: visualizing direction fields and trajectories in the phase plane.
Are you ready? Start by loading the Python libraries that we know and love.
```python
import numpy
from matplotlib import pyplot
%matplotlib inline
pyplot.rc('font', family='serif', size='14')
```
## General spring-mass system
The simplest mechanical oscillating system is a mass $m$ attached to a spring, without friction. We discussed this system in the [previous lesson](http://go.gwu.edu/engcomp3lesson3). In general, though, these systems are subject to friction—represented by a mechanical damper—and a driving force. Also, the spring's restoring force could be a nonlinear function of position, $k(x)$.
#### General spring-mass system, with driving and damping.
Newton's law applied to the general (driven, damped, nonlinear) spring-mass system is:
\begin{equation}
m \ddot{x} = F(t) -b(\dot{x}) - k(x)
\end{equation}
where
* $F(t)$ is the driving force
* $b(\dot{x})$ is the damping force
* $k(x)$ is the restoring force, possibly nonlinear
Written as a system of two differential equations, we have:
\begin{eqnarray}
\dot{x} &=& v, \nonumber\\
\dot{v} &=& \frac{1}{m} \left( F(t) - k(x) - b(v) \right).
\end{eqnarray}
With the state vector,
\begin{equation}
\mathbf{x} = \begin{bmatrix}
x \\ v
\end{bmatrix},
\end{equation}
the differential equation in vector form is:
\begin{equation}
\dot{\mathbf{x}} = \begin{bmatrix}
v \\ \frac{1}{m} \left( F(t) - k(x) - b(v) \right)
\end{bmatrix}.
\end{equation}
In this more general system, the time variable could appear explicitly on the right-hand side, via the driving function $F(t)$. We'll need to adapt the code for the time-stepping function to take the time as an additional argument.
The `euler_cromer()` function we defined in the previous lesson took three arguments: `state, rhs, dt`—the state vector, the Python function computing the right-hand side of the differential equation, and the time step. Let's re-work that function now to take an additional `time` variable, which also gets used in the `rhs` function.
```python
# new version of the function, taking time as explicit argument
def euler_cromer(state, rhs, time, dt):
'''Update a state to the next time increment using Euler-Cromer's method.
Arguments
---------
state : state vector of dependent variables
rhs : function that computes the RHS of the DE, taking (state, time)
time : float, time instant
dt : float, time step
Returns
-------
next_state : state vector updated after one time increment'''
mid_state = state + rhs(state, time)*dt # Euler step
mid_derivs = rhs(mid_state, time) # update derivatives
next_state = numpy.array([mid_state[0], state[1] + mid_derivs[1]*dt])
return next_state
```
### Case with linear damping
Let's look at the behavior of a system with linear restoring force, linear damping, but no driving force: $k(x)= kx$, $b(v)=bv$, $F(t)=0$.
The differential system is now:
\begin{equation}
\dot{\mathbf{x}} = \begin{bmatrix}
v \\ \frac{1}{m} \left( - kx - bv \right)
\end{bmatrix}.
\end{equation}
Now we need to write a function to compute the right-hand side (derivatives) for this system.
Even though the system does not explicitly use the time variable in the right-hand side, we still include `time` as an argument to the function, so that it is consistent with our new design for the `euler_cromer()` step. We include `time` in the arguments list, but it is not used inside the function code. It's thus a good idea to specify a _default value_ for this argument by writing `time=0` in the arguments list: that will allow us to also call the function leaving the `time` argument blank, if we wanted to (in which case, it will automatically be assigned its default value of 0).
Another option for the default value is `time=None`. It doesn't matter because the variable is not used inside the function!
```python
def dampedspring(state, time=0):
'''Computes the right-hand side of the spring-mass differential
equation, with linear damping.
Arguments
---------
state : state vector of two dependent variables
time : float, time instant
Returns
-------
derivs: derivatives of the state vector
'''
derivs = numpy.array([state[1], 1/m*(-k*state[0]-b*state[1])])
return derivs
```
Let's try it!
The following example is from section 4.3.9 of Ref. [1] (an open-access text!).
We set the model parameters, the initial conditions, and the time-stepping conditions.
Then we initialize the numerical solution array `num_sol`, and call the `euler_cromer()` function in the `for` statement.
Notice that we pass the time instant `t[i]` to the function's `time` argument (which will allow us to use the same calling signature when we solve for a system with driving force).
```python
m = 1.0
k = 1.0
b = 0.3
```
```python
x0 = 1 # initial position
v0 = 0 # initial velocity
```
```python
T = 12*numpy.pi
N = 5000
dt = T/N
t = numpy.linspace(0, T, N)
```
```python
num_sol = numpy.zeros([N,2]) #initialize solution array
#Set intial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
```
```python
for i in range(N-1):
num_sol[i+1] = euler_cromer(num_sol[i], dampedspring, t[i], dt)
```
Time to plot the solution—in our plot of position versus time below, notice that we added a line with [`pyplot.figtext()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.figtext.html?highlight=matplotlib%20pyplot%20figtext#matplotlib.pyplot.figtext) at the end. This command adds a custom text to the figure: we use it to print the values of the spring-mass model parameters corresponding to the plot. See how we print the parameter values in the text string? We used Python's string formatter, which you learned about in [Module 2 Lesson 1](http://go.gwu.edu/engcomp2lesson1).
If we were to re-run the solution with different model parameters, re-executing the code in this cell would update the plot and the text with the proper values. (We don't want to rely on manually changing the text, as that is error prone!)
```python
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t, num_sol[:, 0], linewidth=2, linestyle='-')
pyplot.xlabel('Time [s]')
pyplot.ylabel('Position, $x$ [m]')
pyplot.title('Damped spring-mass system with Euler-Cromer method.\n')
pyplot.figtext(0.1,-0.1,'$m={:.1f}$, $k={:.1f}$, $b={:.1f}$'.format(m,k,b));
```
The result above shows that the oscillations die down over a few periods: the oscillations are _damped_ over time.
And our plot looks pretty close to [Fig. 4.27](https://link.springer.com/chapter/10.1007%2F978-3-319-32428-9_4#Fig27) of Ref. [1], as it should.
### Case with sinusoidal driving, and damping
Suppose now that an external force of the form $F(t) = A \sin(\omega t)$ drives the system. This is a typical situation in mechanical systems. Let's find out what a system like that behaves like. The example below comes from section 4.3.10 of Ref. [1].
We're showy, so we decided to use the Unicode character for the Greek letter $\omega$ in the code… because we can!
With a handy table of [Unicode for greek letters](https://gist.github.com/beniwohli/765262), you can pick a symbol code, type it into a code cell, and out comes the symbol. Then, it's a copy-and-paste job to reuse the symbol in the code. And using greek letters for some variable names is very chic.
```python
u'\u03C9'
```
'ω'
```python
A = 0.5 # parameter values from example in 4.3.10 of Ref. [1]
ω = 3
```
More than showy, we're snazzy, and so we build a one-line function using the [`lambda`](https://docs.python.org/3/reference/expressions.html#lambda) keyword.
It's just too cool.
In Python, you can create a small function in one line using the assignment operator `=`, followed by the `lambda` keyword, then a statement of the form `arguments: expression`—in our case, we have the single argument `time`, and the expression is the sinusoidal driving.
The sine mathematical function is avaible to us from the [`math` library](https://docs.python.org/3/library/math.html). Check it out.
```python
from math import sin
F = lambda time: A*sin(ω*time)
```
This is really a function: we can call `F()` at any point in our code, passing a value of time, and it will output the result of $F(t) = A \sin(\omega t)$.
Now, let's write the right-hand side function of derivatives for the driven spring-mass system (with damping). Notice that we use the lambda function `F()` inside this new function, and the `time` variable explicitly as the argument to `F()`. Some powerful Python kung fu!
```python
def drivenspring(state, time):
'''Computes the right-hand side of the spring-mass differential
equation, with sinusoidal driving and linear damping.
Arguments
---------
state : state vector of two dependent variables
time : float, time instant
Returns
-------
derivs: derivatives of the state vector
'''
derivs = numpy.array([state[1], 1/m*(F(time)-k*state[0]-b*state[1])])
return derivs
```
Here is where the power of our code design becomes clear: solving the differential equation via time-stepping inside a `for` statement looks just like before, with the only difference being that we pass another right-hand-side function of derivatives.
The code cell below solves the driven spring-mass system with the same model parameters we used for the damped system without driving.
```python
for i in range(N-1):
num_sol[i+1] = euler_cromer(num_sol[i], drivenspring, t[i], dt)
```
```python
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t, num_sol[:, 0], linewidth=2, linestyle='-')
pyplot.xlabel('Time [s]')
pyplot.ylabel('Position, $x$ [m]')
pyplot.title('Driven spring-mass system with Euler-Cromer method.\n')
pyplot.figtext(0.1,-0.1,'$m={:.1f}$, $k={:.1f}$, $b={:.1f}$, $A={:.1f}$, $\omega={:.1f}$'.format(m,k,b,A,ω));
```
And our result looks just like [Fig. 4.28](https://link.springer.com/chapter/10.1007%2F978-3-319-32428-9_4#Fig28) of Ref. [1], as it should. You can see that the system starts out dominated by the spring-mass oscillations, which get damped over time and the effect of the external driving becomes visible, and the sinusoidal driving is all that is left in the end.
##### Exercise:
* Experiment with different values of the driving-force amplitude, $A$, and frequency, $\omega$.
* Swap the sine driving for a cosine, and see what happens.
An interesting behavior occurs when the damping is low enough and the frequency of the driving force coincides with the natural frequency of the mass-spring system, $\sqrt{k/m}$: **resonance**.
Try these parameters:
```python
ω = 1
b = 0.1
```
```python
for i in range(N-1):
num_sol[i+1] = euler_cromer(num_sol[i], drivenspring, t[i], dt)
```
```python
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t, num_sol[:, 0], linewidth=2, linestyle='-')
pyplot.xlabel('Time [s]')
pyplot.ylabel('Position, $x$ [m]')
pyplot.title('Driven spring-mass system with Euler-Cromer method.\n')
pyplot.figtext(0.1,-0.1,'$m={:.1f}$, $k={:.1f}$, $b={:.1f}$, $A={:.1f}$, $\omega={:.1f}$'.format(m,k,b,A,ω));
```
As you can see, the amplitude of the oscillations grow over time! (Compare the vertical axis of this plot with the previous one.) Our result matches with [Fig. 4.29](https://link.springer.com/chapter/10.1007%2F978-3-319-32428-9_4#Fig29) of Ref. [1].
## Solutions on the phase plane
The spring-mass system, as you see, can behave in various ways. If the spring is linear, and there is no damping or driving (like in the previous lesson), the motion is periodic. If we add damping, the oscillatory motion decays over time. With driving, the motion can be rather more complicated, and sometimes can exhibit resonance.
Each of these types of motion is represented by corresponding solutions to the differential system, dictated by the model parameters and the initial conditions.
How could we get a sense for all the types of solutions to a differential system?
A powerful method to do this is to use the _phase plane_.
A system of two first-order differential equations:
\begin{eqnarray}
\dot{x}(t) &=& f(x, y) \\
\dot{y}(t) &=& g(x, y)
\end{eqnarray}
with state vector
\begin{equation}
\mathbf{x} = \begin{bmatrix}
x \\ y
\end{bmatrix},
\end{equation}
is called a _planar autonomous system_: planar, because the state vector has two components; and autonomous (self-generating), because the time variable does not explicitly appear on the right-hand side
(which wouldn't apply to the driven spring-mass system).
For initial conditions $\mathbf{x}_0=(x_0, y_0)$, the system has a unique solution $\mathbf{x}(t)=\left(x(t), y(t)\right)$. This solution can be represented by a planar curve on the $xy$-plane—the **phase plane**—and is called a _trajectory_ of the system.
On the phase plane, we can plot a **direction (slope) field** by generating a uniform grid of points $(x_i, y_j)$ in some chosen range $(x_\text{min}, x_\text{max})\times(y_\text{min}, y_\text{max})$, and drawing small line segments representing the direction of the vector field $(f(x,y), g(x,y)$ on each point.
Let's draw a direction field for the damped spring-mass system, and include a solution trajectory. We copied the whole problem set-up below, to get a solution all in one code cell, for easy trial with different parameter choices.
```python
m = 1
k = 1
b = 0.3
x0 = 3 # initial position
v0 = 3 # initial velocity
T = 12*numpy.pi
N = 5000
dt = T/N
t = numpy.linspace(0, T, N)
num_sol = numpy.zeros([N,2]) #initialize solution array
#Set intial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
for i in range(N-1):
num_sol[i+1] = euler_cromer(num_sol[i], dampedspring, t[i], dt)
```
To choose a range for the plotting area of the direction field, let's look at the maximum values of the solution array.
```python
numpy.max(num_sol[:,0])
```
4.0948277569088525
```python
numpy.max(num_sol[:,1])
```
3.0
With that information, we choose the plotting area as $(-4,4)\times(-4,4)$. Below, we'll create an array named `coords` to hold the positions of mesh lines on each coordinate direction. Here, we pick 11 mesh points in each direction.
Then, we'll call the very handy [`meshgrid()`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.meshgrid.html) function of NumPy—you should definitely study the documentation and use pen and paper to diligently figure out what it does!
The outputs of `meshgrid()` are two matrices holding the $x$ and $y$ coordinates, respectively, of points on the grid. Combined, these two matrices give the coordinate pairs of every grid point where we'll compute the direction field.
```python
coords = numpy.linspace(-4,4,11)
X, Y = numpy.meshgrid(coords, coords)
```
Look at the vector form of the differential system again… with our two matrices of coordinate values for the grid points, we could compute the vector field on all these points in one go using array operations:
```python
F = Y
G = 1/m * (-k*X -b*Y)
```
Matplotlib has a type of plot called [`quiver`](https://matplotlib.org/examples/pylab_examples/quiver_demo.html) that draws a vector field on a plane. Let's try it out using the vector field we computed above.
```python
fig = pyplot.figure(figsize=(7,7))
pyplot.quiver(X,Y, F,G);
```
OK, that's not bad. The arrows on each grid point represent vectors $(f(x,y), g(x,y))$, computed from the right-hand side of the differential equation.
_What are the axes on this plot?_ Well, they are the components of the state vector—which for the spring-mass system are _position_ and _velocity_. The vector field looks like a "flow" going around the origin, the values of position and velocity oscillating around. If you imagine an initial condition represented by a coordinate pair $(x_0,y_0)$, the solution trajectory would follow along the arrows, spiraling around the origin, while slowly approaching it.
We'd like to visualize a trajectory on the vector-field plot, and also improve it in a few ways. But before that, Python will astonish you with a splendid fact: you can also compute the vector field on the grid points by calling the function `dampedspring()`, passing as argument a list made of the matrices `X` and `Y`.
_Why does this work?_ Study the function and think!
```python
F, G = dampedspring([X,Y])
```
The default behavior of `quiver` is to scale the vectors (arrows) with the magnitude, but direction fields are usually drawn using line segments of equal length. Also by default, the vectors are drawn _starting at_ the grid points, while direction fields usually _center_ the line segments. We can improve our plot by _scaling_ the vectors by their magnitude and using the `pivot='mid'` option on the plot. A little transparency is also nice.
To plot the improved direction field below, we drew ideas from a tutorial available online, see Ref. [2]. To compute the magnitude of the vectors, we use the [`numpy.hypot()`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.hypot.html) function, which returns the triangle hypotenuse of two right-angled sides.
We should also add axis labels and a title!
```python
M = numpy.hypot(F,G)
M[ M == 0] = 1 # to avoid zero-division
F = F/M
G = G/M
fig = pyplot.figure(figsize=(7,7))
pyplot.quiver(X,Y, F,G, pivot='mid', alpha=0.5)
pyplot.plot(num_sol[:,0], num_sol[:,1], color= '#0096d6', linewidth=2)
pyplot.xlabel('Position, $x$, [m]')
pyplot.ylabel('Velocity, $v$, [m/s]')
pyplot.title('Direction field for the damped spring-mass system\n')
pyplot.figtext(0.1,0,'$m={:.1f}$, $k={:.1f}$, $b={:.1f}$'.format(m,k,b));
```
And just for kicks, let's re-do everything with zero damping:
```python
m = 1
k = 1
b = 0
x0 = 3 # initial position
v0 = 3 # initial velocity
T = 12*numpy.pi
N = 5000
dt = T/N
t = numpy.linspace(0, T, N)
num_sol = numpy.zeros([N,2]) #initialize solution array
#Set intial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
for i in range(N-1):
num_sol[i+1] = euler_cromer(num_sol[i], dampedspring, t[i], dt)
F, G = dampedspring([X,Y])
M = numpy.hypot(F,G)
M[ M == 0] = 1 # to avoid zero-division
F = F/M
G = G/M
fig = pyplot.figure(figsize=(7,7))
pyplot.quiver(X,Y, F,G, pivot='mid', alpha=0.5)
pyplot.plot(num_sol[:,0], num_sol[:,1], color= '#0096d6', linewidth=2)
pyplot.xlabel('Position, $x$, [m]')
pyplot.ylabel('Velocity, $v$, [m/s]')
pyplot.title('Direction field for the un-damped spring-mass system\n')
pyplot.figtext(0.1,0,'$m={:.1f}$, $k={:.1f}$, $b={:.1f}$'.format(m,k,b));
```
##### Challenge task
* Write a function to draw direction fields as above, taking as arguments the right-hand-side derivatives function, and lists containing the plot limits and the number of grid lines in each coordinate direction (one possible starting sketch is given after this list).
* Write some code to capture mouse clicks on the direction field, following what you learned in [Lesson 1](http://go.gwu.edu/engcomp3lesson1) of this module.
* Use the captured mouse clicks as initial conditions and obtain the corresponding trajectories by solving the differential system, then make a new plot showing the trajectories with different colors.
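A minimal starting sketch for the first bullet, assuming the `numpy`/`pyplot` imports used throughout this lesson and a derivatives function with the same call signature as `dampedspring()` (everything else about the design is up to you):
```python
def plot_direction_field(rhs, limits, ngrid):
    '''Draw a normalized direction field for a planar autonomous system.

    rhs    : function returning [dx/dt, dv/dt] for a state [x, v]
    limits : [xmin, xmax, ymin, ymax], plot limits
    ngrid  : [nx, ny], number of grid lines in each direction
    '''
    x = numpy.linspace(limits[0], limits[1], ngrid[0])
    y = numpy.linspace(limits[2], limits[3], ngrid[1])
    X, Y = numpy.meshgrid(x, y)
    F, G = rhs([X, Y])
    M = numpy.hypot(F, G)
    M[M == 0] = 1                       # avoid division by zero
    pyplot.quiver(X, Y, F/M, G/M, pivot='mid', alpha=0.5)
    pyplot.xlabel('Position, $x$, [m]')
    pyplot.ylabel('Velocity, $v$, [m/s]')
```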
## What we've learned
* General spring-mass systems have several behaviors: periodic in the undamped case, decaying oscillations when damped, complex oscillations when driven.
* Resonance appears when the driving frequency matches the natural frequency of the system.
* We can add formatted strings in figure titles, labels and added text.
* The `lambda` keyword builds one-line Python functions.
* The `meshgrid()` function of NumPy is handy for building a grid of points on a plane.
* State vectors of a differential system live on the _phase plane_.
* Solutions of the differential system (given initial conditions) are _trajectories_ on the phase plane.
* Trajectories for the undamped spring-mass system are circles; in the damped case, they are spirals toward the origin.
## References
1. Linge S., Langtangen H.P. (2016) Solving Ordinary Differential Equations. In: Programming for Computations - Python. Texts in Computational Science and Engineering, vol 15. Springer, Cham, https://doi.org/10.1007/978-3-319-32428-9_4, open access and reusable under [CC-BY-NC](http://creativecommons.org/licenses/by-nc/4.0/) license.
2. [Plotting direction fields and trajectories in the phase plane](http://scipy-cookbook.readthedocs.io/items/LoktaVolterraTutorial.html?highlight=direction%20fields#Plotting-direction-fields-and-trajectories-in-the-phase-plane), as part of the Lotka-Volterra tutorial by Pauli Virtanen and Bhupendra, in the _SciPy Cookbook_.
```python
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
```
| cd4dfd5390e927e8d46fd18f35e5060aa1e2c9ef | 337,284 | ipynb | Jupyter Notebook | notebooks_en/4_Birdseye_Vibrations.ipynb | engineersCode/EngCom3_flyatchange | ced7377e7a79e6a82da1249254013faccbd6763a | [
"BSD-3-Clause"
] | 6 | 2019-06-26T17:56:09.000Z | 2019-12-14T17:04:37.000Z | notebooks_en/4_Birdseye_Vibrations.ipynb | engineersCode/EngCom3_flyatchange | ced7377e7a79e6a82da1249254013faccbd6763a | [
"BSD-3-Clause"
] | 1 | 2018-05-18T13:25:58.000Z | 2018-05-19T03:27:05.000Z | notebooks_en/4_Birdseye_Vibrations.ipynb | engineersCode/EngCom3_flyatchange | ced7377e7a79e6a82da1249254013faccbd6763a | [
"BSD-3-Clause"
] | 7 | 2019-10-28T15:53:48.000Z | 2021-09-12T21:43:16.000Z | 308.303473 | 86,364 | 0.910867 | true | 6,953 | Qwen/Qwen-72B | 1. YES
2. YES | 0.76908 | 0.774583 | 0.595717 | __label__eng_Latn | 0.984952 | 0.22238 |
```python
from estado import *
from sympy import *
init_printing(use_unicode=True)
import numpy as np
```
```python
```
```python
estado_inicial_de_busca = estado('water','gas',200,120.46850585938)
estado_finalB = busca_estado('specific_enthalpy',2802.88935988,'T',estado_inicial_de_busca, precision=0.9)
```
<class 'estado.estado'>
```python
print(estado_finalB)
```
Estado = overheated steam, tabela: gas, Pressão = 2.0 Kpa, Temperatura = 166.48413085938 Celsius
```python
#steam at 15 bar and 320 C
#flows through a turbine
#and into the 0.6 m3 tank
#the valve stays open until the tank holds steam at 15 bar and 400 C
print('Ex1')
#item a
#mass that entered, work done by the turbine, and entropy generated
p_i = 1500
p_fA = 1500
t_i = 320
t_fA = 400
estado_inicial = estado('water','gas',p_i, t_i)
estado_finalA = estado('water','gas',p_fA, t_fA)
vol = 0.6 #m3
densidade_final = estado_finalA.density
massa_finalA = densidade_final*vol
print('resps A')
print('massa no reservatório: ', "{:.4f}".format(massa_finalA), 'Kg')
trab = massa_finalA*(estado_inicial.specific_enthalpy - estado_finalA.specific_inner_energy)
print('trabalho realizado pela turbina: ', "{:.4f}".format(trab), 'Kj')
Sger = (-1)*massa_finalA*(estado_inicial.specific_entropy - estado_finalA.specific_entropy)
print('Sger: ', "{:.4f}".format(Sger), 'kj')
#The mass that enters the tank, the final temperature in the tank and the entropy generated during the
#filling process (final tank pressure is 15 bar), when no work is done by the turbine;
u_final = estado_inicial.specific_enthalpy
estado_inicial_de_busca = estado('water','gas',p_i,t_i)
estado_finalB = busca_estado('specific_inner_energy',u_final,'T',estado_inicial_de_busca)
print()
print('resps B')
massa_finalB = estado_finalB.density*vol
t_fB = estado_finalB.temperature
SgerB = (-1)*massa_finalB*(estado_inicial.specific_entropy - estado_finalB.specific_entropy)
print('massa no reservatorio: ', "{:.4f}".format(massa_finalB), 'kg')
print('T_final: ', "{:.4f}".format(t_fB), 'Celsius')
print('SgerB: ', "{:.4f}".format(SgerB), 'kj')
#The maximum work that can be done by the turbine (final tank pressure is 15 bar). For this scenario,
#determine the mass that enters the tank, the final temperature in the tank and the entropy generated
#during the filling process
print()
print('resps C')
p_fC = 1500
SgerC = 0
estado_finalC = estado_inicial
massa_finalC = estado_finalC.density*vol
t_fC = estado_finalC.temperature
trab = massa_finalC*(estado_inicial.specific_enthalpy - estado_finalC.specific_inner_energy)
print('trab: ',"{:.4f}".format(trab),'kj')
print('massa no reservatorio: ', "{:.4f}".format(massa_finalC), 'kg')
print('T_final: ', "{:.4f}".format(t_fC), 'Celsius')
print('SgerB: ', "{:.4f}".format(SgerC), 'kj')
#Analyze and justify the results obtained in a, b and c, in terms of the mass that enters the tank,
#the final tank temperature, the work done and the entropy generated.
#use item A as the baseline
```
Ex1
resps A
massa no reservatório: 2.9555 Kg
trabalho realizado pela turbina: 386.0864 Kj
Sger: 0.8129 kj
<class 'estado.estado'>
resps B
massa no reservatorio: 2.6317 kg
T_final: 477.5065 Celsius
SgerB: 1.3458 kj
resps C
trab: 900.0000 kj
massa no reservatorio: 3.3990 kg
T_final: 320.0000 Celsius
SgerB: 0.0000 kj
```python
#A rigid, adiabatic tank contains air, which can be treated as an ideal gas.
#A membrane keeps the air separated into two equal masses, at the same temperature T1 and at pressures P1 and P2,
#with P1 > P2. The membrane ruptures, allowing the two masses to mix. One Thermo student says he expects
#the final pressure to be greater than P1. Another student says the final pressure, Pf, must necessarily be smaller
#than the square root of the product of the two initial pressures. Check both students' answers.
#Answer sent on Discord at 7:12 pm by Lui
Cp0 = symbols('Cp')
R = symbols('R')
Tf, T1, T2, T = symbols('Tf T1 T2 T')
Pf, P1, P2 = symbols('Pf P1 P2')
DeltaS1 = Cp0*log(T/T) - R*log(Pf/P1)
DeltaS2 = Cp0*log(T/T) - R*log(Pf/P2)
f = DeltaS1 + DeltaS2
solve(f,Pf, domain=S.Reals)
```
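A short reading of the result (my own note, not part of the original notebook): because the tank is rigid and adiabatic and the gas is ideal, the internal energy, and hence the temperature, does not change, so the $C_p\ln(T/T)$ terms vanish and the cell above reduces to

\begin{equation}
\Delta S_1 + \Delta S_2 = -R\ln\frac{P_f}{P_1} - R\ln\frac{P_f}{P_2} = -R\ln\frac{P_f^{\,2}}{P_1 P_2} \geq 0
\quad\Longrightarrow\quad
P_f \leq \sqrt{P_1 P_2}.
\end{equation}

The solver therefore returns the entropy-neutral bound $P_f=\sqrt{P_1 P_2}$; any real, entropy-generating mixing ends below it, so the second student is right. Since two equal masses at the same temperature actually mix to the harmonic mean $P_f = 2P_1P_2/(P_1+P_2) < P_1$ (because $P_1 > P_2$), the first student is wrong.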
```python
print('Ex 3')
n_usp = str(1)
X = int(n_usp[-1])
print('Fim do numero usp: ', X)
Vgasi = 0.15 #m3
Vaguai = 0.15 #me
tgasi = 50 #Celsius
taguai = 70 #celsius
tituloi = (10 + X)/100
t_frio = 25 #celsius
paguaf = 120 + 10*X #kpa
# the gas compression process is adiabatic and reversible
# assume that the ideal-gas hypothesis is
# valid and that the specific heats can be considered constant
patm = 100 #kpa
#(a) The mass of water; (b) The mass of gas; (c) The quality (if saturated) or the temperature (if not
#saturated) of the final state of the water; (d) The final temperature of the gas; (d) The gas compression work; (e) The
#net work done by the water; (f) The amount of heat transferred to the water; (g) The amount of heat
#extracted from the surroundings; (h) The work done by the heat pump.
estado_inicial_agua = estado('water','saturado',T=taguai)
vvapi = 1/estado_inicial_agua.density_steam
vagi = 1/estado_inicial_agua.density_water
vtot = vvapi*tituloi + vagi*(1-tituloi)
denstot = 1/vtot
massa_de_agua = denstot*Vaguai
print('A. massa de agua: ',"{:.4f}".format(massa_de_agua),'Kg')
R = 0.2968
P = estado_inicial_agua.pressure*100
pgasi = P
V = Vgasi
T = tgasi + 273.15
massa_de_gas = (P*V)/(R*T)
print('B. massa de gas: ', "{:.4f}".format(massa_de_gas),'Kg')
densidade_final_agua = massa_de_agua/(Vgasi*2)
estado_finalAgua_se_sat = estado('water','saturado',p=paguaf)
sat = False
if estado_finalAgua_se_sat.density_steam < densidade_final_agua and densidade_final_agua < estado_finalAgua_se_sat.density_water:
print('Fica saturado msm')
estado_finalAgua = estado_finalAgua_se_sat
vliq = 1/estado_finalAgua_se_sat.density_water
vvapor = 1/estado_finalAgua_se_sat.density_steam
meuv = 1/densidade_final_agua
titulof = (meuv - vliq)/(vvapor - vliq)
print('C. Como é saturado, aqui está o titulo: ', "{:.4f}".format(titulof))
sat = True
else:
estado_inicial_de_busca = estado('water','gas',paguaf,300)
estado_finalAgua = busca_estado('density',densidade_final_agua,'T', estado_inicial_de_busca, proporcionalidade=-1)
print('C. Como não é saturado, aqui está a temp: ', estado_finalAgua.temperature, 'Celsius')
Cp0 = 1.041
Cv0 = 0.744
k = 1.4
DeltaS = 0 #no entropy change: adiabatic and reversible (isentropic) compression
T2 = symbols('T2')
f = Cp0*log((T2+273.15)/(tgasi+273.15)) -R*log(patm/pgasi)
T2gas = solveset(f).args[0]
T_final_gas = T2gas
print('D1. Temperatura final do gas', "{:.4f}".format(T2gas), 'Celsius')
Trab_gas = -1*massa_de_gas*Cv0*(T2gas-tgasi)
print('D2. Trabalho: ', "{:.4f}".format(Trab_gas), 'Kj')
VF = massa_de_gas*R*(T2gas+ 273.15)/patm
Trab_conj = patm*VF
Trab_agua = Trab_conj - Trab_gas
print('E. Trabalho liq realizado pela agua: ', "{:.4f}".format(Trab_agua), 'Kj')
Uaguai = estado_inicial_agua.specific_inner_energy_v*tituloi + estado_inicial_agua.specific_inner_energy_water*(1-tituloi)
if sat:
Uaguaf = estado_finalAgua.specific_inner_energy_steam*titulof + estado_finalAgua.specific_inner_energy_water*(1-titulof)
else:
Uaguaf = estado_finalAgua.specific_inner_energy
deltaU = Uaguaf - Uaguai
Qh = deltaU*massa_de_agua + Trab_agua
print('F. Calor que vai para agua: ', "{:.4f}".format(Qh) , 'KJ')
Saguai = estado_inicial_agua.specific_entropy_steam*tituloi + estado_inicial_agua.specific_entropy_water*(1-tituloi)
if sat:
Saguaf = estado_finalAgua.specific_entropy_steam*titulof + estado_finalAgua.specific_entropy_water*(1-titulof)
else:
Saguaf = estado_finalAgua.specific_entropy
deltaS = Saguaf - Saguai
Ql = massa_de_agua*deltaS*(t_frio+273.15)
print('G. Calor extraído do ambiente (Ql): ', "{:.4f}".format(Ql), 'KJ')
Trab_bomba = Qh - Ql
print('H. Trabalho executado pela bomba: ', "{:.4f}".format(Trab_bomba), 'KJ')
print()
print()
```
```python
print('Ex4')
#AR
T1 = 20 + 273.15#Kelvin
P1 = 100 #kpa
mponto = 0.025 #kg/seg
D1 = 0.01 #m
Wponto = -3.5 #Kw
T2 = 50 + 273.15#Kelvin
P2 = 650 #kpa
#air leaves with negligible kinetic energy
#However, the kinetic energy of the air entering the compressor cannot be neglected
Vt = 1.5 #m3
Pi = 100 #kpa
Tt = 25 + 273.15#Kelvin estável, troca calor
Cv = 0.717
Cp = 1.005
R = Cp - Cv
K = Cp/Cv
Tamb = 25 + 273.15#Kelvin
A_entrada = np.pi *(D1**2)/4
dens1 = P1/(R*T1)
Vol_ponto = mponto / dens1
Vel_entrada = Vol_ponto/A_entrada
#a) the heat transfer rate to the compressor;
Qponto = - Wponto + mponto*(Cp*(T1-T2)) + (mponto*Vel_entrada**2/2)/1000
Qponto *= -1
print('A. Q ponto compressor: ', "{:.4f}".format(Qponto), 'KW/Kg')
#b) the air pressure in the tank after 200 seconds of operation;
mitanque = (Pi*Vt)/(R*Tt)
mentra = 200*mponto
mfinalt = mitanque + mentra
Pfinalt = (mfinalt*R*Tt)/Vt
print('B. Pressão apos 200 seg: ', "{:.4f}".format(Pfinalt), 'Kpa')
#c) the total heat transfer from the tank to the surroundings during the first 200 s of operation;
Qb = Qponto*200
Qt = symbols('Qt')
Eientr = Cp * T1 + (Vel_entrada**2/2)/1000
Wb = Wponto*200
DeltaEe = mentra * Cv * Tt
DeltaEd = Qb + Qt - Wb + mentra*(Eientr)
Eq = DeltaEd - DeltaEe
Qt = solve(Eq,Qt)[0]
print('C. A transferencia de calor no tanque é: ', "{:.4f}".format(Qt), 'KJ')
#d) the entropy generated in the valve and tank during the first 200 s of operation;
Sger = symbols('Sger')
Santesb = 0 #interpolated from the table
Sdpsb = Santesb + Cp*log(T2/T1) -R* log(P2/P1)
Stcheio = Santesb + Cp*log(Tt/T1) -R* log(Pfinalt/P1)
Stvazio = Santesb + Cp*log(Tt/T1) -R* log(Pi/P1)
DeltaSe = Stcheio*mfinalt - Stvazio*mitanque
Se = Sdpsb
DeltaSd = Qt/Tt + mentra*Se + Sger
Eq = DeltaSd - DeltaSe
Sger = solve(Eq, Sger)[0]
print('D. Entropia gerada dps do compressor em 200s: ', "{:.4f}".format(Sger), 'Kj')
SgerB = Qponto*200/Tamb + mentra*Santesb - mentra*Sdpsb
SgerB *= -1
# Sgerpontob = Qponto/Tamb + mponto*Santesb - mponto*Sdpsb
# Sgerpontob *= -1
# SgerB = Sgerpontob*200
SLiq = Sger + SgerB
print('E. Entropia liq gerada: ', "{:.4f}".format(SLiq), 'KJ')
```
```python
```
| 2d48b510740da1fb31c922d670d66a9befcc68a3 | 15,475 | ipynb | Jupyter Notebook | Thermo/Estudo.ipynb | victorathanasio/Personal-projects | 94c870179cec32aa733a612a6faeb047df16d977 | [
"MIT"
] | null | null | null | Thermo/Estudo.ipynb | victorathanasio/Personal-projects | 94c870179cec32aa733a612a6faeb047df16d977 | [
"MIT"
] | null | null | null | Thermo/Estudo.ipynb | victorathanasio/Personal-projects | 94c870179cec32aa733a612a6faeb047df16d977 | [
"MIT"
] | null | null | null | 33.936404 | 139 | 0.549208 | true | 3,648 | Qwen/Qwen-72B | 1. YES
2. YES | 0.793106 | 0.805632 | 0.638952 | __label__por_Latn | 0.855513 | 0.32283 |
# "Symbolic Euler's Method"
> "Applying Euler's Method to ODE , but with a twist: we're going to call method with Symbolic variables"
- toc: true
- badges: true
- comments: true
- categories: [jupyter, math, calculus, symbolics, julialang]
```julia
#collapse-show
# load dependencies
using MyCalculus
using Plots
using Symbolics
```
## Apply Euler's method to the following first-order differential equation:
$fun(t, y) = y^2 -t$
```julia
# function definition : RHS of ODE
fun(t, y) = y^2 -t
```
fun (generic function with 1 method)
## Definition of Euler's Method function:
$EulerMethod(f, x₀, y₀, step= 0.5, n=100.0)$
Function to approximate the solution of an ordinary differential equation using Euler's method.
dy/dx = f(x, y) with initial condition y(x₀) = y₀.
__Arguments__
- $f:$ function to approximate solution to, of the form f(x, y).
This is the right hand side of the differential equation.
- $x₀:$ initial $x$ value condition
- $y₀:$ initial $y$ value condition
- $step:$ step size for the Euler's Method
- $n:$ number of steps to take
__Returns__
- $x:$ array of x values
- $y:$ array of y values
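For reference, the update rule that `EulerMethod` iterates is the standard Euler step (stated here for clarity):

$$x_{k+1} = x_k + \mathrm{step}, \qquad y_{k+1} = y_k + \mathrm{step}\cdot f(x_k, y_k)$$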
```julia
x, y = EulerMethod(fun, -1.0, -0.5, 0.5, 5)
```
([-1.0, -0.5, 0.0, 0.5, 1.0], [-0.5, 0.125, 0.3828125, 0.456085205078125, 0.31009206222370267])
```julia
# note: the framestyle symbol was missing in the original call; :origin is an assumed choice
plot(x, y, ylim=(-0.5,0.5), xlim=(-1,1), framestyle = :origin, ylabel = "y", xlabel = "x")
```
### We can also use Symbolics by passing Symbolics variables to the function:
This is a good way to "peek" into the inner workings of the Euler's Method and see what it's doing for every iteration.
It also showcases the power of Symbolics. Due to highly composable julia code, we can use Symbolics everywhere a number is expected.
```julia
@variables x₀, y₀, step # defines the symbolic variables
t, y = EulerMethod(fun, x₀, y₀, step, 5)
```
(Num[x₀, step + x₀, x₀ + 2step, x₀ + 3step, x₀ + 4step], Num[y₀, y₀ + step*(y₀^2 - x₀), y₀ + step*(y₀^2 - x₀) + step*((y₀ + step*(y₀^2 - x₀))^2 - step - x₀), y₀ + step*(y₀^2 - x₀) + step*((y₀ + step*(y₀^2 - x₀))^2 - step - x₀) + step*((y₀ + step*(y₀^2 - x₀) + step*((y₀ + step*(y₀^2 - x₀))^2 - step - x₀))^2 - 2step - x₀), y₀ + step*(y₀^2 - x₀) + step*((y₀ + step*(y₀^2 - x₀))^2 - step - x₀) + step*((y₀ + step*(y₀^2 - x₀) + step*((y₀ + step*(y₀^2 - x₀))^2 - step - x₀))^2 - 2step - x₀) + step*((y₀ + step*(y₀^2 - x₀) + step*((y₀ + step*(y₀^2 - x₀))^2 - step - x₀) + step*((y₀ + step*(y₀^2 - x₀) + step*((y₀ + step*(y₀^2 - x₀))^2 - step - x₀))^2 - 2step - x₀))^2 - 3step - x₀)])
```julia
#collapse-output
t;
```
\begin{equation} \left[ \begin{array}{c} x{_0} \\ step + x{_0} \\ x{_0} + 2 step \\ x{_0} + 3 step \\ x{_0} + 4 step \\ \end{array} \right] \end{equation}
```julia
#collapse-output
y;
```
\begin{equation} \left[ \begin{array}{c} y{_0} \\ y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \\ y{_0} + step \left( - x{_0} + y{_0}^{2} \right) + step \left( - step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \right)^{2} \right) \\ y{_0} + step \left( - x{_0} + y{_0}^{2} \right) + step \left( - step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \right)^{2} \right) + step \left( - 2 step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) + step \left( - step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \right)^{2} \right) \right)^{2} \right) \\ y{_0} + step \left( - x{_0} + y{_0}^{2} \right) + step \left( - step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \right)^{2} \right) + step \left( - 2 step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) + step \left( - step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \right)^{2} \right) \right)^{2} \right) + step \left( - 3 step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) + step \left( - step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \right)^{2} \right) + step \left( - 2 step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) + step \left( - step - x{_0} + \left( y{_0} + step \left( - x{_0} + y{_0}^{2} \right) \right)^{2} \right) \right)^{2} \right) \right)^{2} \right) \\ \end{array} \right] \end{equation}
```julia
```
```julia
```
```julia
```
| ca76e14ca7d2e0fcb32e1dc68f1f4f5437a2666f | 80,043 | ipynb | Jupyter Notebook | src/2021-12-28-EulersMethod.ipynb | gjunqueira-sys/MyCalculus.jl | 9a1dee9be36b805e9523ca6d047d827f58c29a62 | [
"MIT"
] | null | null | null | src/2021-12-28-EulersMethod.ipynb | gjunqueira-sys/MyCalculus.jl | 9a1dee9be36b805e9523ca6d047d827f58c29a62 | [
"MIT"
] | null | null | null | src/2021-12-28-EulersMethod.ipynb | gjunqueira-sys/MyCalculus.jl | 9a1dee9be36b805e9523ca6d047d827f58c29a62 | [
"MIT"
] | null | null | null | 236.813609 | 26,720 | 0.741102 | true | 1,750 | Qwen/Qwen-72B | 1. YES
2. YES | 0.880797 | 0.857768 | 0.75552 | __label__eng_Latn | 0.618529 | 0.593657 |
---
## 30. Numerical Integration
Eduard Larrañaga (ealarranaga@unal.edu.co)
---
### Summary
This notebook presents some numerical integration techniques.
---
One of the most common tasks in astrophysics is to evaluate integrals such as
\begin{equation}
I = \int_a^b f(x) dx ,
\end{equation}
and, in many cases, these cannot be carried out analytically. The integrand in these expressions may be given as an analytic function $f(x)$ or as a discrete set of values $f(x_i)$. Below we describe some techniques to perform these integrals numerically in both cases.
---
## Piecewise Interpolation and Quadratures
Any integration method that uses a weighted sum is called a **quadrature rule**. Suppose we know (or can evaluate) the integrand $f(x)$ on a finite set of *nodes*, $\{x_j\}$ with $j=0,\cdots,n$ in the interval $[a,b]$, such that $x_0 = a$ and $x_n = b$. This gives a set of $n+1$ nodes or, equivalently, $n$ intervals. A discrete approximation of the integral of this function is given by the **rectangle rule**,
\begin{equation}
I = \int_a^b f(x) dx \approx \Delta x \sum_{i=0}^{n-1} f(x_i),
\end{equation}
where the width of the intervals is $\Delta x = \frac{b-a}{n}$. From the definition of an integral, it is clear that this approximation converges to the true value of the integral as $n\rightarrow \infty$, i.e. as $\Delta x \rightarrow 0$.
Although the rectangle rule can give a good approximation of the integral, it can be improved by using an interpolated function on each interval. Methods based on polynomial interpolation are known, in general, as **Newton-Cotes quadratures**.
---
### Midpoint Rule
The simplest modification of the rectangle rule described above is to use the central value of the function $f(x)$ on each interval instead of the value at one of the nodes. In this way, if the integrand can be evaluated at the midpoint of each interval, the approximate value of the integral is given by
\begin{equation}
I = \int_{a}^{b} f(x) dx = \sum _{i=0}^{n-1} (x_{i+1} - x_i) f(\bar{x}_i ),
\end{equation}
where $\bar{x}_i = \frac{x_i + x_{i+1}}{2}$ is the midpoint of the interval $[x_i, x_{i+1}]$.
To estimate the error associated with this method, we use a Taylor series expansion of the integrand on the interval $[x_i, x_{i+1}]$ around the midpoint $\bar{x}_i$,
\begin{equation}
f(x) = f(\bar{x}_i) + f'(\bar{x}_i)(x-\bar{x}_i) + \frac{f''(\bar{x}_i)}{2}(x-\bar{x}_i)^2 + \frac{f'''(\bar{x}_i)}{6}(x-\bar{x}_i)^3 + ...
\end{equation}
Integrating this expression from $x_i$ to $x_{i+1}$, and noting that the odd-order terms vanish, we obtain
\begin{equation}
\int_{x_i}^{x_{i+1}} f(x)dx = f(\bar{x}_i)(x_{i+1}-x_i) + \frac{f''(\bar{x}_i)}{24}(x_{i+1}-x_i)^3 + ...
\end{equation}
This expansion shows that the error associated with the approximation on each interval is of order $\varepsilon_i = (x_{i+1}-x_i)^3$. Since the total integral is obtained as a sum of $n$ similar integrals, the total error is of order $\varepsilon = n \varepsilon_i $.
When the nodes are equally spaced, the size of the intervals can be written as $h = \frac{b - a}{n}$, so the error associated with each interval is $\varepsilon_i =\frac{(b - a)^3}{n^3} = h^3$, while the total error of the quadrature is of order $\varepsilon = n \varepsilon_i = \frac{(b - a)^3}{n^2} = nh^3$.
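Since the example below works with tabulated data and cannot evaluate the integrand at the interval midpoints, here is a brief sketch of the midpoint rule for an analytically known integrand; the function, interval and number of subintervals are illustrative choices, not taken from this notebook.
```python
import numpy as np

f = lambda x: np.exp(-x**2)               # illustrative integrand
a, b, n = 0.0, 2.0, 50                    # interval and number of subintervals
x_edges = np.linspace(a, b, n + 1)        # n+1 equally spaced nodes
x_mid = 0.5*(x_edges[:-1] + x_edges[1:])  # midpoint of each subinterval
h = (b - a)/n
I_mid = h*np.sum(f(x_mid))
print(f'Midpoint-rule estimate: {I_mid:.6f}')
```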
#### Example. Numerical integration
We read the function data from a .txt file and estimate the integral of this function numerically. Since the function is given as a set of discrete points (and not in analytic form), the integrand cannot be evaluated at the midpoints, so we will initially use the value at the first point of each interval to compute the partial sums.
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Reading the data
data = np.loadtxt('data_points1.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N-1):
dx = x[i+1] - x[i]
Integral = Integral + dx*f[i]
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Integración por la regla del rectángulo')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print(f'El resultado de la integración numérica de la función discreta')
print(f'entre x = {x[0]:.1f} y x = {x[len(x)-1]:.1f} es I = {Integral:.5e}')
```
```python
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Reading the data
data = np.loadtxt('data_points2.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N-1):
dx = x[i+1] - x[i]
Integral = Integral + dx*f[i]
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Integración por la regla del rectángulo')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print(f'El resultado de la integración numérica de la función discreta')
print(f'entre x = {x[0]:.1f} y x = {x[len(x)-1]:.1f} es I = {Integral:.5e}')
```
---
### Trapezoid Rule
The next generalization of the rectangle rule is to approximate the function $f(x)$ by a linear polynomial on each of the intervals. This is known as the **trapezoid rule**, and the corresponding quadrature is given by
\begin{equation}
I = \int_{a}^{b} f(x) dx = \sum _{i=0}^{n-1} \frac{1}{2} (x_{i+1} - x_i) \left[ f(x_{i+1}) + f(x_i) \right] .
\end{equation}
Unlike the midpoint rule, this method does not require evaluating the integrand at the midpoint, only at the two nodes of each interval.
#### Example. Integration with the trapezoid rule.
Again, the function data are read from a .txt file and integrated numerically using the trapezoid rule.
```python
import numpy as np
import matplotlib.pyplot as plt
# Reading the data
data = np.loadtxt('data_points1.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N-1):
dx = x[i+1] - x[i]
f_mean = (f[i] + f[i+1])/2
Integral = Integral + dx*f_mean
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i+1]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i+1]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i+1]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Integración con la regla del trapezoide')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print(f'El resultado de la integración numérica de la función discreta')
print(f'entre x = {x[0]:.1f} y x = {x[len(x)-1]:.1f} es I = {Integral:.5e}')
```
```python
import numpy as np
import matplotlib.pyplot as plt
# Reading the data
data = np.loadtxt('data_points2.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(N-1):
dx = x[i+1] - x[i]
f_mean = (f[i] + f[i+1])/2
Integral = Integral + dx*f_mean
plt.vlines([x[i], x[i+1]], 0, [f[i], f[i+1]], color='red')
plt.plot([x[i], x[i+1]], [f[i], f[i+1]],color='red')
plt.fill_between([x[i], x[i+1]], [f[i], f[i+1]],color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Integración con la regla del trapezoide')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print(f'El resultado de la integración numérica de la función discreta')
print(f'entre x = {x[0]:.1f} y x = {x[len(x)-1]:.1f} es I = {Integral:.5e}')
```
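As a sanity check of the loop above, NumPy's built-in trapezoidal integrator should reproduce the same value when applied to the arrays `x` and `f` loaded in the previous cell (this added cell assumes those arrays are still in scope).
```python
import numpy as np

I_trapz = np.trapz(f, x)  # trapezoid rule over the tabulated points
print(f'np.trapz result: {I_trapz:.5e}')
```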
---
## Simpson's Rule
Simpson's rule is a method in which the integral of $f(x)$ is estimated by approximating the integrand with a second-order polynomial on each interval.
If three values of the function, $f_1 =f(x_1)$, $f_2 =f(x_2)$ and $f_3 =f(x_3)$, are known at the points $x_1 < x_2 < x_3$, one can fit a second-order polynomial of the form
\begin{equation}
p_2 (x) = A (x-x_1)^2 + B (x-x_1) + C .
\end{equation}
Integrating this polynomial over the interval $[x_1 , x_3]$ gives
\begin{equation}
\int_{x_1}^{x^3} p_2 (x) dx = \frac{x_3 - x_1}{6} \left( f_1 + 4f_2 + f_3 \right) + \mathcal{O} \left( (x_3 - x_1)^5 \right)
\end{equation}
---
### Simpson's Rule with Equally Spaced Nodes
If there are $N$ equally spaced nodes in the integration interval, or equivalently $n=N-1$ intervals of constant width $\Delta x$, the total integral by Simpson's rule is written as
\begin{equation}
I = \int_a^b f(x) dx \approx \frac{\Delta x}{3} \sum_{i=0}^{\frac{n-2}{2}} \left[ f(x_{2i}) + 4f(x_{2i+1}) + f(x_{2i+2}) \right] .
\end{equation}
The numerical error in each interval is of order $\Delta x^5$ and therefore the total integral has an error of order $n \Delta x^5 = \frac{(b-a)^5}{n^4}$.
#### Example. Integration with Simpson's rule
```python
import numpy as np
import matplotlib.pyplot as plt
def quadraticInterpolation(x1, x2, x3, f1, f2, f3, x):
p2 = (((x-x2)*(x-x3))/((x1-x2)*(x1-x3)))*f1 + (((x-x1)*(x-x3))/((x2-x1)*(x2-x3)))*f2 +\
(((x-x1)*(x-x2))/((x3-x1)*(x3-x2)))*f3
return p2
# Reading the data
data = np.loadtxt('data_points1.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
n = N-1
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(int((n-2)/2 +1)):
dx = x[2*i+1] -x[2*i]
Integral = Integral + dx*(f[2*i] + 4*f[2*i+1] + f[2*i+2])/3
x_interval = np.linspace(x[2*i],x[2*i+2],6)
y_interval = quadraticInterpolation(x[2*i], x[2*i+1], x[2*i+2], f[2*i], f[2*i+1], f[2*i+2], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.fill_between(x_interval, y_interval, color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Integración con la regla de Simpson')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print(f'El resultado de la integración numérica de la función discreta')
print(f'entre x = {x[0]:.1f} y x = {x[len(x)-1]:.1f} es I = {Integral:.5e}')
```
```python
import numpy as np
import matplotlib.pyplot as plt
def quadraticInterpolation(x1, x2, x3, f1, f2, f3, x):
p2 = (((x-x2)*(x-x3))/((x1-x2)*(x1-x3)))*f1 + (((x-x1)*(x-x3))/((x2-x1)*(x2-x3)))*f2 +\
(((x-x1)*(x-x2))/((x3-x1)*(x3-x2)))*f3
return p2
# Reading the data
data = np.loadtxt('data_points2.txt', comments='#', delimiter=',')
x = data[:,0]
f = data[:,1]
N = len(x)
n = N-1
plt.figure(figsize=(7,5))
# Numerical integration loop
Integral = 0.
for i in range(int((n-2)/2 +1)):
dx = x[2*i+1] - x[2*i]
Integral = Integral + dx*(f[2*i] + 4*f[2*i+1] + f[2*i+2])/3
x_interval = np.linspace(x[2*i],x[2*i+2],6)
y_interval = quadraticInterpolation(x[2*i], x[2*i+1], x[2*i+2], f[2*i], f[2*i+1], f[2*i+2], x_interval)
plt.plot(x_interval, y_interval,'r')
plt.fill_between(x_interval, y_interval, color='red', alpha=0.3)
plt.scatter(x, f, color='black')
plt.hlines(0, x.min(), x.max())
plt.title('Integración con la regla de Simpson')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.show()
print(f'El resultado de la integración numérica de la función discreta')
print(f'entre x = {x[0]:.1f} y x = {x[len(x)-1]:.1f} es I = {Integral:.5e}')
```
---
### Simpson's Rule for Non-Equidistant Nodes
When the nodes of the discretization grid of $f(x)$ are not equally spaced, Simpson's rule must be modified to
\begin{equation}
I = \int_a^b f(x) dx \approx \sum_{i=0}^{\frac{n-2}{2}} \left[ \alpha f(x_{2i}) + \beta f(x_{2i+1}) +\gamma f(x_{2i+2}) \right]
\end{equation}
where
\begin{align}
\alpha = &\frac{-h_{2i+1}^2 + h_{2i+1} h_{2i} + 2 h_{2i}^2}{6 h_{2i}} \\
\beta = &\frac{ (h_{2i+1} + h_{2i})^3 }{6 h_{2i+1} h_{2i}} \\
\gamma =& \frac{2 h_{2i+1}^2 + h_{2i+1} h_{2i} - h_{2i}^2}{6 h_{2i+1}}
\end{align}
and $h_j = x_{j+1} - x_j$.
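A minimal sketch of this non-uniform Simpson rule is given below; the function name and the assumption of an even number of subintervals (an odd number of points) are our own choices, and the coefficients follow the $\alpha$, $\beta$, $\gamma$ expressions above with $h_0 = h_{2i}$ and $h_1 = h_{2i+1}$. It can be applied directly to the arrays `x` and `f` used in the examples above.
```python
import numpy as np

def simpson_nonuniform(x, f):
    """Composite Simpson rule for (possibly) unevenly spaced nodes."""
    n = len(x) - 1                  # number of subintervals (assumed even)
    total = 0.0
    for i in range(0, n - 1, 2):
        h0 = x[i + 1] - x[i]
        h1 = x[i + 2] - x[i + 1]
        alpha = (2*h0**2 + h0*h1 - h1**2)/(6*h0)
        beta = (h0 + h1)**3/(6*h0*h1)
        gamma = (2*h1**2 + h0*h1 - h0**2)/(6*h1)
        total += alpha*f[i] + beta*f[i + 1] + gamma*f[i + 2]
    return total
```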
| 5d640532850390f62842f65b21a1b6d4e87f7cc4 | 130,619 | ipynb | Jupyter Notebook | 03. Integracion/01. Integracion.ipynb | jegonzalezba/AstrofisicaComputacional2022 | eeacf21b2b2cf1605149fd57ba39f8e14aa7309e | [
"MIT"
] | 1 | 2022-03-26T21:47:31.000Z | 2022-03-26T21:47:31.000Z | 03. Integracion/01. Integracion.ipynb | jegonzalezba/AstrofisicaComputacional2022 | eeacf21b2b2cf1605149fd57ba39f8e14aa7309e | [
"MIT"
] | null | null | null | 03. Integracion/01. Integracion.ipynb | jegonzalezba/AstrofisicaComputacional2022 | eeacf21b2b2cf1605149fd57ba39f8e14aa7309e | [
"MIT"
] | null | null | null | 178.685363 | 21,120 | 0.887329 | true | 4,443 | Qwen/Qwen-72B | 1. YES
2. YES | 0.901921 | 0.822189 | 0.741549 | __label__spa_Latn | 0.737932 | 0.5612 |
# Scenario A - Noise Level Variation (multiple runs for init mode)
In this scenario the noise level of a generated dataset is varied in three steps (low/medium/high),
while the rest of the parameters in the dataset are kept constant.
The model used in the inference of the parameters is formulated as follows:
\begin{equation}
\large y = f(x) = \sum\limits_{m=1}^M \big[A_m \cdot e^{-\frac{(x-\mu_m)^2}{2\cdot\sigma_m^2}}\big] + \epsilon
\end{equation}
This notebook performs a series of runs for a single sampler initialization mode. It does not store the traces or plots; only the summary statistics are saved.
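For orientation, the cell below sketches what a PyMC3 model for the sum-of-Gaussians equation above could look like. The variable names, prior choices and the use of plain Gaussian peaks are illustrative assumptions; the model actually used in this notebook is built by `mdl.model_pvoigt` from the local `models` module.
```python
import pymc3 as pm

def sketch_peaks_model(xvalues, observations, npeaks=3):
    """Illustrative sum-of-Gaussians model with lognormal amplitude priors."""
    with pm.Model() as model:
        amp = pm.Lognormal('amp', mu=0.0, sigma=1.0, shape=npeaks)
        mu = pm.Normal('mu', mu=float(xvalues.mean()), sigma=50.0, shape=npeaks)
        sigma = pm.HalfNormal('sigma', sigma=10.0, shape=npeaks)
        epsilon = pm.HalfNormal('epsilon', sigma=1.0)
        signal = 0
        for m in range(npeaks):
            signal = signal + amp[m]*pm.math.exp(-(xvalues - mu[m])**2/(2*sigma[m]**2))
        pm.Normal('y_obs', mu=signal, sigma=epsilon, observed=observations)
    return model
```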
```python
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pymc3 as pm
import arviz as az
#az.style.use('arviz-darkgrid')
print('Running on PyMC3 v{}'.format(pm.__version__))
```
## Import local modules
```python
import datetime
import os
import sys
sys.path.append('../../modules')
import datagen as dg
import models as mdl
import results as res
import figures as fig
import settings as cnf
```
## Local configuration
```python
# output for results and images
out_path = './output_mruns_lognormal_adapt'
file_basename = out_path + '/scenario_noise'
# if dir does not exist, create it
if not os.path.exists(out_path):
os.makedirs(out_path)
conf = {}
# scenario name
conf['scenario'] = 'noise variation'
# initialization method for sampler ('jitter+adapt_diag'/'advi+adapt_diag'/'adapt_diag')
conf['init_mode'] = 'adapt_diag'
# probabilistic model (priors)
conf['prior_model'] = 'lognormal'
# provide peak positions to the model as testvalues ('yes'/'no')
conf['peak_info'] = 'yes'
# absolute peak shift (e.g. 2%(4), 5%(10) or 10%(20) of X-min.)
conf['peak_shift'] = 0.0
# dataset directory
conf['dataset_dir'] = './input_datasets'
# number of runs over the dataset
conf['nruns'] = 1
# number of cores to run
conf['ncores'] = 2
# number of samples per chain
conf['nsamples'] = 2000
```
```python
conf
```
## Save configuration
```python
cnf.save(out_path, conf)
```
# Generate data and plot
```python
# list of wavelengths (x-values)
xval = [i for i in range(200, 400, 2)]
ldata = []
lpeaks = []
# number of spectra per noise level
nsets = 10
# noise level is 1%, 2% and 5% of the minimal signal amplitude
noise_levels = [0.05, 0.10, 0.25]
# total number of datasets
tsets = nsets * len(noise_levels)
# load pre-generated datasets from disk
ldata, lpeaks, _ = dg.data_load(tsets, conf['dataset_dir'])
# add peakshift
lpeaks = dg.add_peakshift(lpeaks, conf['peak_shift'])
```
```python
# plot datasets
#fig.plot_datasets(ldata, lpeaks, dims=(int(tsets/2),2), figure_size=(12,int(tsets*(1.8))),
# savefig='yes', fname=file_basename)
```
# Initialize models and run inference
```python
# convert pandas data to numpy arrays
x_val = np.array(xval, dtype='float32')
# store dataset y-values in list
cols = ldata[0].columns
y_val = [ldata[i][cols].values for i in range(len(ldata))]
```
```python
# initialize models and run inference
models = []
traces = []
for r in range(conf['nruns']):
print("running loop {0}/{1} over datasets".format(r+1,conf['nruns']))
for i in range(len(ldata)):
if conf['peak_info'] == 'yes':
plist = lpeaks[i].flatten()
plist.sort()
model_g = mdl.model_pvoigt(xvalues=x_val, observations=y_val[i], npeaks=3,
mu_peaks=plist, pmodel=conf['prior_model'])
else:
model_g = mdl.model_pvoigt(xvalues=x_val, observations=y_val[i], npeaks=3,
pmodel=conf['prior_model'])
models.append(model_g)
with model_g:
print("({0}:{1}) running inference on dataset #{2}/{3}".format(r+1,conf['nruns'],i+1,len(ldata)))
trace_g = pm.sample(conf['nsamples'], init=conf['init_mode'], cores=conf['ncores'])
traces.append(trace_g)
```
# Model visualization
```python
pm.model_to_graphviz(models[0])
```
```python
# save model figure as image
img = pm.model_to_graphviz(models[0])
img.render(filename=file_basename + '_model', format='png');
```
# Collect results and save
```python
# posterior predictive traces
ppc = [pm.sample_posterior_predictive(traces[i], samples=500, model=models[i]) for i in range(len(traces))]
```
```python
varnames = ['amp', 'mu', 'sigma', 'epsilon']
nruns = conf['nruns']
# total dataset y-values, noise and run number list
ly_val = [val for run in range(nruns) for idx, val in enumerate(y_val)]
lnoise = [nl for run in range(nruns) for nl in noise_levels for i in range(nsets)]
lruns = ['{0}'.format(run+1) for run in range(nruns) for i in range(tsets)]
# collect the results and display
df = res.get_results_summary(varnames, traces, ppc, ly_val, epsilon_real=lnoise, runlist=lruns)
df
```
```python
# save results to .csv
df.to_csv(file_basename + '.csv', index=False)
```
```python
cnf.close(out_path)
```
| 81985c20076858b3685c35e442f094780782a5e5 | 8,954 | ipynb | Jupyter Notebook | code/scenarios/scenario_a/scenario_noise_mruns.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
] | 1 | 2021-01-07T02:22:25.000Z | 2021-01-07T02:22:25.000Z | code/scenarios/scenario_a/scenario_noise_mruns.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
] | null | null | null | code/scenarios/scenario_a/scenario_noise_mruns.ipynb | jnispen/PPSDA | 910261551dd08768a72ab0a3e81bd73c706a143a | [
"MIT"
] | null | null | null | 25.65616 | 148 | 0.527362 | true | 1,363 | Qwen/Qwen-72B | 1. YES
2. YES | 0.787931 | 0.685949 | 0.540481 | __label__eng_Latn | 0.697599 | 0.094048 |
$$ \newcommand{\pd}[2]{ \frac{\partial #1}{\partial #2} }
\newcommand{\od}[2]{\frac{d #1}{d #2}}
\newcommand{\td}[2]{\frac{D #1}{D #2}}
\newcommand{\ab}[1]{\langle #1 \rangle}
\newcommand{\bss}[1]{\textsf{\textbf{#1}}}
\newcommand{\ol}{\overline}
\newcommand{\olx}[1]{\overline{#1}^x}
$$
# Hydrostatic and Geostrophic Balances
In the previous lecture, we obtained the local-tangent-plane form of the Boussinesq equations of motion. We repeat the final equations, in component form, below:
$$ \begin{align}
\td{u}{t} - f v &= -\pd{\phi}{x} + \nu \nabla^2 u \\
\td{v}{t} + f u &= -\pd{\phi}{y} + \nu \nabla^2 v \\
\td{w}{t} &= -\pd{\phi}{z} + b + \nu \nabla^2 w \ .
\end{align} $$
In this lecture, we ask _what are the dominant balances in these equations under common oceanographic conditions_.
## Hydrostatic Balance
We already saw hydrostatic balance for the _background pressure and density field_. For the large-scale flow, the dynamic pressure is also in hydrostatic balance:
$$ \pd{\phi}{z} = b \ . $$
To remind ourselves of the underlying physics, we can write out $\phi$ and $b$ explicitly and drop the common factor of $1/\rho_0$:
$$ \pd{}{z} \delta p = - g \delta \rho \ . $$
Hydrostatic balance can be used to define $\phi$ at any point. In order to do so, however, we must think a bit more about sea-surface height and its relation to pressure.
## Sea Surface Height, Dynamic Height, and Pressure
What is the dynamic pressure $\phi$ at an arbitrary depth $z$ for a flow in hydrostatic balance? The dynamic sea-surface $\eta$ is defined relative to the mean geoid (i.e. a surface of constant geopotential) at $z=0$.
We go back to the full hydrostatic balance with both background and dynamic pressure:
$$ \pd{}{z} (p) = - g \rho \ . $$
We now integrate this equation in the vertical from an arbitrary $z$ up to the sea surface $\eta$:
$$ \int_z^\eta \pd{}{z'} p dz'
= p(\eta) - p(z)
= - g \int_z^\eta \rho dz' \ . $$
$p(\eta)$ is the pressure right at the sea surface, which is given by the atmospheric pressure. Although atmospheric pressure loading can have a small effect on ocean circulation, it is generally negligible compared to the huge pressures generated internally by the ocean. We will now take $p(\eta)=0$ to simplify the bookkeeping, which gives
$$ p(z) = g \int_z^\eta \rho dz' \ . $$
Now let's subtract the reference pressure. It is given by integrating the hydrostatic balance for the background density up to $z=0$. This upper limit is important: the reference pressure is defined for a flat sea surface.
$$ p_{ref}(z) = g \int_z^0 \rho_0 dz' \ . $$
Subtracting the two equations, we obtain
$$ \begin{align}
\delta p(z) = p(z) - p_{ref}(z) &= g \int_z^\eta \rho dz' \ - g \int_z^0 \rho_0 dz' \\
&= g \int_0^\eta \rho dz' + g \int_z^0 \delta \rho dz' \\
&= g \rho_0 \int_0^\eta dz' + g \int_0^\eta \delta \rho dz' + g \int_z^0 \delta \rho dz'
\end{align} $$
We see there is a contribution from the density fluctuations in the interior (second term), plus the variations in the sea-surface height (first term). We can, to the usual precision of the Boussinesq approximation, neglect the density fluctuations within the depths 0 to $\eta$. Dividing through by $\rho_0$, we obtain
$$ \begin{align}
\phi(z) = g \eta - \int_z^0 b dz'
\end{align} $$
It is common in oceanography to define a quantity known as _dynamic height_, which expresses dynamic pressure variations in terms of their effective sea-surface height. In our notation, with the Boussinesq approximation, dynamic height is defined as
$$ D = \frac{\phi}{g} \ .$$
The dynamic pressure at the ocean bottom is given by
$$ \begin{align}
\phi(-H) = g \eta - \int_{-H}^0 b dz' \ .
\end{align} $$
The first term is related to fluctuations in the total ocean volume at each point.
The second term is called the _steric_ component.
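A minimal numerical sketch of this relation is given below; the buoyancy profile, sea-surface height and grid are invented purely for illustration and do not come from the lecture.
```python
import numpy as np

g, rho0 = 9.81, 1025.0              # gravity (m/s2) and reference density (kg/m3)
z = np.linspace(-1000.0, 0.0, 101)  # depth grid (m), z = 0 at the surface
delta_rho = -0.5*np.exp(z/200.0)    # illustrative density anomaly (kg/m3)
b = -g*delta_rho/rho0               # buoyancy (m/s2)
eta = 0.1                           # illustrative sea-surface height (m)

# cumulative integral of b from the deepest grid point up to each z (trapezoid rule)
cum = np.concatenate(([0.0], np.cumsum(0.5*(b[:-1] + b[1:])*np.diff(z))))
int_z_to_0 = cum[-1] - cum          # integral of b from z up to the surface
phi = g*eta - int_z_to_0            # dynamic pressure (m2/s2)
D = phi/g                           # dynamic height (m)
```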
## Rossby Number
Let's estimate the relative size of the acceleration and Coriolis terms on the left-hand side of the horizontal momentum equations.
The ratio of these terms defines the _Rossby number_.
$$ R_O = \frac{ \left | \td{u}{t} \right | }{ | f v |} $$
The magnitude of the acceleration term can be estimated as $U^2 / L$, where $U$ is a representative velocity scale of the flow and $L$ is a representative length scale. So we can estimate the Rossby number as
$$ R_O = \frac{U}{f L} \ . $$
What are representative values? For the large-scale ocean circulation, U = 0.01 m/s, L = 1000 km, and f = 10$^{-4}$ s$^{-1}$. This gives $R_O = 10^{-4}$. So the _acceleration terms are often totally negligible_!
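The estimate quoted above, as a quick numerical check:
```python
U, L, f = 0.01, 1.0e6, 1.0e-4      # m/s, m, 1/s (representative values from the text)
Ro = U/(f*L)
print(f'Rossby number: {Ro:.0e}')  # ~1e-4
```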
For low Rossby number conditions, we can neglect the acceleration and write the horizontal momentum equations as
$$ \begin{align} - f v &= -\pd{\phi}{x} + \nu \nabla^2 u \\
f u &= -\pd{\phi}{y} + \nu \nabla^2 v
\end{align} $$
The _geostrophic flow_ is defined as the flow determined by the balance between the Coriolis term and and the pressure gradient:
$$ \begin{align} - f v_g &= -\pd{\phi}{x} \\
f u_g &= -\pd{\phi}{y}
\end{align} $$
while the ageostrophic flow is defined via the balance between the Coriolis term and the friction term:
$$ \begin{align} - f v_a &= \nu \nabla^2 u \\
f u_a &= \nu \nabla^2 v
\end{align} $$
The total flow is given by the sum of the geostrophic and ageostrophic components:
$$ \mathbf{u} = \mathbf{u}_a + \mathbf{u}_g \ .$$
## Geostrophic Flow
Away from the boundaries, friction is weak, and the flow is, to a good approximation, geostrophic. Geostrophic (or "balanced") flow is a ubiquitous feature of flows in the ocean and atmosphere. Geostrophic flow is characterized by flow along the pressure contours (i.e. _isobars_), as illustrated below.
Rotational flow around a pressure minimum is called _cyclonic_. Cyclonic flow is counterclockwise in the northern hemisphere and (due to the change of sign of $f$) clockwise in the southern hemisphere.
*[AVISO mean dynamic topography](https://www.aviso.altimetry.fr/en/applications/ocean/large-scale-circulation/mean-dynamic-topography.html)*
### Thermal Wind
The geostrophic flow is determined by the pressure and the pressure is determined by the density (via hydrostatic balance). We can see this relationship more clearly if we take the derivative in $z$ of the geostrophic equations:
$$ \begin{align} - f \pd{v_g}{z} &= -\pd{}{x}\pd{\phi}{z} = -\pd{b}{x}\\
f \pd{u_g}{z} &= -\pd{}{y} \pd{\phi}{z} = -\pd{b}{y} \ .
\end{align} $$
These equations, called _thermal wind_, relate the _vertical shear of the geostrophic flow_ to the _horizontal gradients of buoyancy_. They are very useful for interpreting hydrographic data.
[WOCE Atlantic Atlas Vol. 3](http://whp-atlas.ucsd.edu/whp_atlas/atlantic/a03/sections/printatlas/printatlas.htm)
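A back-of-the-envelope sketch of how the thermal wind relation is used (the buoyancy-gradient value is invented for illustration):
```python
f = 1.0e-4           # Coriolis parameter (1/s)
db_dy = -1.0e-8      # illustrative meridional buoyancy gradient (1/s2)
du_g_dz = -db_dy/f   # vertical shear of the zonal geostrophic flow (1/s)
print(du_g_dz)       # 1e-4 1/s, i.e. about 0.1 m/s of shear over 1 km of depth
```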
```python
import pooch
import xarray as xr
from matplotlib import pyplot as plt
url = "ftp://ftp.spacecenter.dk/pub/DTU10/2_MIN/DTU10MDT_2min.nc"
fname = pooch.retrieve(url, known_hash="5d8eb0782514ca5f061eecc5a726f05b34a6ca34281cd644ad19c059ea8b528f")
ds = xr.open_dataset(fname)
ds
```
Downloading data from 'ftp://ftp.spacecenter.dk/pub/DTU10/2_MIN/DTU10MDT_2min.nc' to file '/home/jovyan/.cache/pooch/cd1481012b784d34e848db69c19b022f-DTU10MDT_2min.nc'.
SHA256 hash of downloaded file: 5d8eb0782514ca5f061eecc5a726f05b34a6ca34281cd644ad19c059ea8b528f
Use this value as the 'known_hash' argument of 'pooch.retrieve' to ensure that the file hasn't changed if it is downloaded again in the future.
<xarray.Dataset>
Dimensions:  (lon: 10801, lat: 5400)
Coordinates:
  * lon      (lon) float64 0.0 0.03333 0.06667 0.1 ... 359.9 359.9 360.0 360.0
  * lat      (lat) float64 -89.98 -89.95 -89.92 -89.88 ... 89.92 89.95 89.98
Data variables:
    mdt      (lat, lon) float32 ...
Attributes:
    Conventions:  COARDS/CF-1.0
    title:        DTU10MDT_2min.mdt.nc
    source:       Danish National Space Center
    node_offset:  1
```python
mdt = ds.mdt.coarsen(lon=10, lat=10, boundary='pad').mean()
```
```python
import cartopy.crs as ccrs
proj = ccrs.Robinson(central_longitude=180)
fig = plt.figure(figsize=(16, 6))
ax = plt.axes(projection=proj, facecolor='0.8')
mdt.plot(ax=ax, transform=ccrs.PlateCarree())
ax.coastlines()
```
```python
```
| 0a9c457bb77eebf8099c4d7c518e6c3c82494c99 | 291,727 | ipynb | Jupyter Notebook | book/06_hydrostatic_geostrophic.ipynb | monocilindro/intro_to_physical_oceanography | 1cd76829d94dcbd13e5e81c923db924ff0798c1b | [
"MIT"
] | 82 | 2015-09-18T02:01:53.000Z | 2022-02-28T01:43:48.000Z | book/06_hydrostatic_geostrophic.ipynb | monocilindro/intro_to_physical_oceanography | 1cd76829d94dcbd13e5e81c923db924ff0798c1b | [
"MIT"
] | 5 | 2015-09-19T01:35:28.000Z | 2022-02-28T17:23:53.000Z | book/06_hydrostatic_geostrophic.ipynb | monocilindro/intro_to_physical_oceanography | 1cd76829d94dcbd13e5e81c923db924ff0798c1b | [
"MIT"
] | 51 | 2015-09-12T00:30:33.000Z | 2022-02-08T19:37:51.000Z | 427.752199 | 261,932 | 0.923346 | true | 7,081 | Qwen/Qwen-72B | 1. YES
2. YES | 0.845942 | 0.835484 | 0.706771 | __label__eng_Latn | 0.529117 | 0.480397 |
# R(2,2) playground
Ted Corcovilos, 2021-01-08
Playing around with the 2d "mother algebra" $R(2,2)$, as described in C. Doran, et al., "Lie Groups as Spin Groups," *Journal of Mathematical Physics 34*, 3642 (1993). doi:[10.1063/1.530050](http://doi.org/10.1063/1.530050)
I'll name the basis vectors $p_1, p_2, m_1, m_2$ with the diagonal metric $[1,1,-1,-1]$.
Position vectors in this basis are represented by null vectors of the form $x (p_1+m_1) + y (p_2+m_2)$.
```python
from sympy import *
from galgebra.ga import Ga
from galgebra.printer import latex
from IPython.display import Math
init_printing(latex_printer=latex, use_latex='mathjax')
```
```python
xy = (x,y) = symbols("x y", real=True)
```
```python
xyxy = (xp, yp, xm, ym) = symbols("xp yp xm ym", real=True)
```
```python
R22 = Ga('p1 p2 m1 m2', g=[1,1,-1,-1], coords=xyxy)
```
```python
p1, p2, m1, m2 = R22.mv() # break out basis vectors
```
```python
# a real position vector has the form...
r = x*(p1+m1)+y*(p2+m2)
```
```python
r
```
\begin{equation*} x \boldsymbol{p}_{1} + y \boldsymbol{p}_{2} + x \boldsymbol{m}_{1} + y \boldsymbol{m}_{2} \end{equation*}
```python
a, b, c, d = symbols("a b c d", real=True)
f, g, h, j = symbols("f g h j", real=True)
```
```python
# big pseudoscalar
I=p1^p2^m1^m2
# special bivectors (ref. Doran)
K = (p1^m1)+(p2^m2)
E = (p1^p2)+(m1^m2)
F = (p1^m2)+(p2^m1)
```
From the paper, only bivectors that commute with $K$ will preserve null vectors. $E$ corresponds to the pseudo-scalar in the 2d position space. $F$ is the leftover bit needed to complete the bivector space. Geometrically, $K$ describes scaling, $E$ rotations, and $F$ shears.
```python
B1 = a*(p1^m1)+b*(p2^m2)+c*(p1^m2)+d*(p2^m1)+f*(p1^p2)+g*(m1^m2)
```
```python
# check if commutator is zero
B1 >> K
```
\begin{equation*} \left ( c - d\right ) \boldsymbol{p}_{1}\wedge \boldsymbol{p}_{2} + \left ( f + g\right ) \boldsymbol{p}_{1}\wedge \boldsymbol{m}_{2} + \left ( - f - g\right ) \boldsymbol{p}_{2}\wedge \boldsymbol{m}_{1} + \left ( c - d\right ) \boldsymbol{m}_{1}\wedge \boldsymbol{m}_{2} \end{equation*}
```python
# So, this will be zero iff c=d and f=-g
# Redefine, change up the names a bit:
B1 = (a+b)/2*(p1^m1)+(a-b)/2*(p2^m2)+c/2*(p1^m2)+c/2*(p2^m1)+d/2*(p1^p2)-d/2*(m1^m2)
```
```python
# check commutator again
B1 >> K
```
\begin{equation*} 0 \end{equation*}
```python
# also need to normalize B1
# the norm squared is
Bnorm2=(B1*(B1.rev())).scalar()
```
```python
A, B, C, D = symbols("A B C D", real=True)
```
Let's break `B1` down term by term to see how it transforms position.
```python
# Define a small number as a placeholder to go from members of the Lie algebra to the Lie group
ϵ = symbols("ϵ", real=True)
```
Look at each bivector in the Lie algebra and see how the corresponding Lie group member transforms a position vector.
```python
# for the "a" term
((-ϵ*p1^m1/2).exp())*r*((ϵ*p1^m1/2).exp())
```
\begin{equation*} x e^{ϵ} \boldsymbol{p}_{1} + y \boldsymbol{p}_{2} + x e^{ϵ} \boldsymbol{m}_{1} + y \boldsymbol{m}_{2} \end{equation*}
So, the $p_1 \wedge m_1$ term looks like a scaling of $x$.
```python
# for the "b" term
((-ϵ*p2^m2/2).exp())*r*((ϵ*p2^m2/2).exp())
```
\begin{equation*} x \boldsymbol{p}_{1} + y e^{ϵ} \boldsymbol{p}_{2} + x \boldsymbol{m}_{1} + y e^{ϵ} \boldsymbol{m}_{2} \end{equation*}
Scaling of $y$, as expected.
```python
# confirm that the a and b pieces commute:
(p1^m1) >> (p2^m2)
```
\begin{equation*} 0 \end{equation*}
```python
# commuting Lie algebra elements => we can apply the exponentials individually
# overall scaling
(-ϵ*p1^m1/2).exp()*((-ϵ*p2^m2/2).exp())*r*((ϵ*p2^m2/2).exp())*(ϵ*p1^m1/2).exp()
```
\begin{equation*} x e^{ϵ} \boldsymbol{p}_{1} + y e^{ϵ} \boldsymbol{p}_{2} + x e^{ϵ} \boldsymbol{m}_{1} + y e^{ϵ} \boldsymbol{m}_{2} \end{equation*}
```python
# inverse scaling (is there a better name?)
(-ϵ*p1^m1/2).exp()*((ϵ*p2^m2/2).exp())*r*((-ϵ*p2^m2/2).exp())*(ϵ*p1^m1/2).exp()
```
\begin{equation*} x e^{ϵ} \boldsymbol{p}_{1} + y e^{- ϵ} \boldsymbol{p}_{2} + x e^{ϵ} \boldsymbol{m}_{1} + y e^{- ϵ} \boldsymbol{m}_{2} \end{equation*}
```python
# check the pieces of the c term for commuting
(p1^m2) >> (p2^m1)
```
\begin{equation*} 0 \end{equation*}
```python
# the c term
(-ϵ*p1^m2/2).exp()*((-ϵ*p2^m1/2).exp())*r*((ϵ*p2^m1/2).exp())*(ϵ*p1^m2/2).exp()
```
\begin{equation*} \left ( x \cosh{\left (ϵ \right )} + y \sinh{\left (ϵ \right )}\right ) \boldsymbol{p}_{1} + \left ( x \sinh{\left (ϵ \right )} + y \cosh{\left (ϵ \right )}\right ) \boldsymbol{p}_{2} + \left ( x \cosh{\left (ϵ \right )} + y \sinh{\left (ϵ \right )}\right ) \boldsymbol{m}_{1} + \left ( x \sinh{\left (ϵ \right )} + y \cosh{\left (ϵ \right )}\right ) \boldsymbol{m}_{2} \end{equation*}
Scissor shear? (Not really a boost because this isn't Minkowski space...)
```python
#check the d term for commuting:
(p1^p2) >> (m1^m2)
```
\begin{equation*} 0 \end{equation*}
```python
# the d term
(-ϵ*p1^p2/2).exp()*(ϵ*m1^m2/2).exp()*r*(-ϵ*m1^m2/2).exp()*(ϵ*p1^p2/2).exp()
```
\begin{equation*} \left ( x \cos{\left (ϵ \right )} - y \sin{\left (ϵ \right )}\right ) \boldsymbol{p}_{1} + \left ( x \sin{\left (ϵ \right )} + y \cos{\left (ϵ \right )}\right ) \boldsymbol{p}_{2} + \left ( x \cos{\left (ϵ \right )} - y \sin{\left (ϵ \right )}\right ) \boldsymbol{m}_{1} + \left ( x \sin{\left (ϵ \right )} + y \cos{\left (ϵ \right )}\right ) \boldsymbol{m}_{2} \end{equation*}
Rotation.
So, the matrix version of these terms looks something like
$$
\begin{array}{cc}
a \rightarrow \begin{pmatrix} e^A & 0 \\ 0 & e^A \end{pmatrix}
&
b \rightarrow \begin{pmatrix} e^B & 0 \\ 0 & e^{-B} \end{pmatrix}
\\
c \rightarrow \begin{pmatrix} \cosh C & \sinh C \\ \sinh C & \cosh C \end{pmatrix}
&
d \rightarrow \begin{pmatrix} \cos D & -\sin D \\ \sin D & \cos D \end{pmatrix}
\end{array}
$$
Note that these do not mutually commute, so it's hard to decouple a generic matrix into these pieces.
Also, these are all positive-definite matrices, so we're not covering the full GL group. Need one more piece:
$$
m \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & \pm 1 \end{pmatrix}
$$
to cover reflections.
Stepping back, we can identify the 4 Lie algebra generators as the identity and the (real) Pauli matrices, demonstrating the isomorphism between the bivectors and spinors.
```python
# for example σ_x:
simplify(exp(C*Matrix([[0,1],[1,0]])))
```
$\displaystyle \left[\begin{array}{cc}\cosh{\left (C \right )} & \sinh{\left (C \right )}\\\sinh{\left (C \right )} & \cosh{\left (C \right )}\end{array}\right]$
```python
# σ_y
simplify(exp(D*Matrix([[0,-1],[1,0]])))
```
$\displaystyle \left[\begin{array}{cc}\cos{\left (D \right )} & - \sin{\left (D \right )}\\\sin{\left (D \right )} & \cos{\left (D \right )}\end{array}\right]$
```python
# σ_z
simplify(exp(B*Matrix([[1,0],[0,-1]])))
```
$\displaystyle \left[\begin{array}{cc}e^{B} & 0\\0 & e^{- B}\end{array}\right]$
```python
# I
simplify(exp(A*Matrix([[1,0],[0,1]])))
```
$\displaystyle \left[\begin{array}{cc}e^{A} & 0\\0 & e^{A}\end{array}\right]$
We can combine the last three terms and simplify using the axis-angle formula for SU(2). (Need to use complex values?)
This still leaves the open problem of decomposing a generic 2x2 matrix into a product of the operators above. The linear algebra solution is straight-forward but tedious. Does GA give us any shortcuts?
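As a small, hedged illustration related to that question (reusing the sympy objects defined above; the ordering of the factors is an arbitrary choice, since they do not commute):
```python
# Compose the four one-parameter factors symbolically to see the general form of the product
M = simplify(
    exp(A*Matrix([[1, 0], [0, 1]])) *
    exp(B*Matrix([[1, 0], [0, -1]])) *
    exp(C*Matrix([[0, 1], [1, 0]])) *
    exp(D*Matrix([[0, -1], [1, 0]]))
)
M
```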
```python
```
| db9f4202f01771b3c4b8ecf99512a5878be1504f | 16,934 | ipynb | Jupyter Notebook | R22.ipynb | corcoted/GA-scratch | a7ba5da5fa758f52330c4e1218d56c2e193a3091 | [
"MIT"
] | null | null | null | R22.ipynb | corcoted/GA-scratch | a7ba5da5fa758f52330c4e1218d56c2e193a3091 | [
"MIT"
] | null | null | null | R22.ipynb | corcoted/GA-scratch | a7ba5da5fa758f52330c4e1218d56c2e193a3091 | [
"MIT"
] | null | null | null | 25.050296 | 450 | 0.483879 | true | 2,897 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.810479 | 0.7332 | __label__eng_Latn | 0.783089 | 0.541801 |
# Principle of least action
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
The [principle of least action](https://en.wikipedia.org/wiki/Principle_of_least_action) applied to the movement of a mechanical system states that "the average kinetic energy less the average potential energy is as little as possible for the path of an object going from one point to another" (Prof. Richard Feynman in [The Feynman Lectures on Physics](http://www.feynmanlectures.caltech.edu/II_19.html)).
This principle is so fundamental that it can be used to derive the equations of motion of a system, for example, independently of Newton's laws of motion. Let's now see the principle of least action in mathematical terms.
The difference between the kinetic and potential energy in a system is known as the [Lagrange or Lagrangian function](https://en.wikipedia.org/wiki/Lagrangian_mechanics):
\begin{equation}
\mathcal{L} = T - V
\label{eq_lagrange}
\end{equation}
The [principle of least action](https://en.wikipedia.org/wiki/Principle_of_least_action) states that the actual path which a system follows between two points in the time interval $t_1$ and $t_2$ is such that the integral
\begin{equation}
\mathcal{S}\; =\; \int _{t_1}^{t_2} \mathcal{L} \; dt
\label{eq_action}
\end{equation}
is stationary, meaning that $\delta \mathcal{S}=0$ (i.e., the value of $\mathcal{S}$ is an extremum), and it can be shown in fact that the value of this integral is a minimum for the actual path of the system. The integral above is known as the action integral and $\mathcal{S}$ is known as the action.
For a didactic demonstration that the integral above is stationary, see [The Feynman Lectures on Physics](http://www.feynmanlectures.caltech.edu/II_19.html).
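As a complement (this sketch is ours, not part of the original text), we can check the statement numerically for a particle in uniform gravity: the action evaluated along the true free-fall path is smaller than along a competing straight-line path joining the same endpoints.
```python
import numpy as np

# Uniform gravity: T = 0.5*m*ydot**2, V = m*g*y, Lagrangian L = T - V.
m, g = 1.0, 9.8
t = np.linspace(0, 1, 1001)
y0, v0 = 0.0, 4.9                       # chosen so the particle returns to y = 0 at t = 1 s

def action(y):
    ydot = np.gradient(y, t)            # numerical velocity along the path
    L = 0.5*m*ydot**2 - m*g*y           # Lagrangian along the path
    return np.trapz(L, t)               # action integral S

y_true = y0 + v0*t - 0.5*g*t**2         # actual free-fall trajectory
y_line = np.zeros_like(t)               # straight line between the same endpoints

print(action(y_true), action(y_line))   # the true path gives the smaller action
```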
## References
- Feynman R, Leighton R, Sands M (2013) [The Feynman Lectures on Physics - HTML edition](http://www.feynmanlectures.caltech.edu/).
- Taylor JR (2005) [Classical Mechanics](https://books.google.com.br/books?id=P1kCtNr-pJsC). University Science Books.
| cadf6c94eba05b34b3da8073e212031e753205ae | 3,823 | ipynb | Jupyter Notebook | notebooks/principle_of_least_action.ipynb | gbiomech/BMC | fec9413b17a54f00ba6818438f7a50b132353e42 | [
"CC-BY-4.0"
] | 1 | 2022-01-07T22:30:39.000Z | 2022-01-07T22:30:39.000Z | notebooks/principle_of_least_action.ipynb | gbiomech/BMC | fec9413b17a54f00ba6818438f7a50b132353e42 | [
"CC-BY-4.0"
] | null | null | null | notebooks/principle_of_least_action.ipynb | gbiomech/BMC | fec9413b17a54f00ba6818438f7a50b132353e42 | [
"CC-BY-4.0"
] | null | null | null | 36.409524 | 419 | 0.608423 | true | 569 | Qwen/Qwen-72B | 1. YES
2. YES | 0.79053 | 0.822189 | 0.649965 | __label__eng_Latn | 0.978064 | 0.348418 |
## Performance Indicator
It is fundamental for any algorithm to measure its performance. In a multi-objective scenario, we cannot calculate the distance to the true global optimum but must consider a set of solutions. Moreover, sometimes the optimum is not even known, and other techniques must be used.
First, let us consider a scenario where the Pareto-front is known:
```python
import numpy as np
from pymoo.factory import get_problem
from pymoo.visualization.scatter import Scatter
# The pareto front of a scaled zdt1 problem
pf = get_problem("zdt1").pareto_front()
# The result found by an algorithm
A = pf[::10] * 1.1
# plot the result
Scatter(legend=True).add(pf, label="Pareto-front").add(A, label="Result").show()
```
### Generational Distance (GD)
The GD performance indicator <cite data-cite="gd"></cite> measures the distance from the solution set to the Pareto-front. Let us assume the points found by our algorithm are the objective vector set $A=\{a_1, a_2, \ldots, a_{|A|}\}$ and the reference points set (Pareto-front) is $Z=\{z_1, z_2, \ldots, z_{|Z|}\}$. Then,
\begin{align}
\begin{split}
\text{GD}(A) & = & \; \frac{1}{|A|} \; \bigg( \sum_{i=1}^{|A|} d_i^p \bigg)^{1/p}\\[2mm]
\end{split}
\end{align}
where $d_i$ represents the Euclidean distance (p=2) from $a_i$ to its nearest reference point in $Z$. Basically, this results in the average distance from any point in $A$ to the closest point in the Pareto-front.
```python
from pymoo.factory import get_performance_indicator
gd = get_performance_indicator("gd", pf)
print("GD", gd.calc(A))
```
GD 0.05497689467314528
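To make the definition concrete, here is a small from-scratch sketch (ours, assuming `scipy` is available) of the nearest-neighbour distances behind GD. Implementations differ in how the exponent $p$ and the $1/|A|$ normalization are applied, so the aggregates below need not match the pymoo output exactly.
```python
from scipy.spatial.distance import cdist

d = cdist(A, pf).min(axis=1)   # Euclidean distance from each a_i to its nearest z in Z
print("mean nearest distance :", d.mean())
print("(sum d^2)^(1/2) / |A|  :", np.sqrt((d ** 2).sum()) / len(A))
```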
### Generational Distance Plus (GD+)
Ishibuchi et al. proposed GD+ in <cite data-cite="igd_plus"></cite>:
\begin{align}
\begin{split}
\text{GD}^+(A) & = & \; \frac{1}{|A|} \; \bigg( \sum_{i=1}^{|A|} {d_i^{+}}^2 \bigg)^{1/2}\\[2mm]
\end{split}
\end{align}
where for minimization $d_i^{+} = max \{ a_i - z_i, 0\}$ represents the modified distance from $a_i$ to its nearest reference point in $Z$ with the corresponding value $z_i$.
```python
from pymoo.factory import get_performance_indicator
gd_plus = get_performance_indicator("gd+", pf)
print("GD+", gd_plus.calc(A))
```
GD+ 0.05497689467314528
### Inverted Generational Distance (IGD)
The IGD performance indicator <cite data-cite="igd"></cite> inverts the generational distance and measures the distance from any point in $Z$ to the closest point in $A$.
\begin{align}
\begin{split}
\text{IGD}(A) & = & \; \frac{1}{|Z|} \; \bigg( \sum_{i=1}^{|Z|} \hat{d_i}^p \bigg)^{1/p}\\[2mm]
\end{split}
\end{align}
where $\hat{d_i}$ represents the euclidean distance (p=2) from $z_i$ to its nearest reference point in $A$.
```python
from pymoo.factory import get_performance_indicator
igd = get_performance_indicator("igd", pf)
print("IGD", igd.calc(A))
```
IGD 0.06690908300327662
### Inverted Generational Distance Plus (IGD+)
In <cite data-cite="igd_plus"></cite>, Ishibuchi et al. proposed IGD+, which is weakly Pareto compliant whereas the original IGD is not.
\begin{align}
\begin{split}
\text{IGD}^{+}(A) & = & \; \frac{1}{|Z|} \; \bigg( \sum_{i=1}^{|Z|} {d_i^{+}}^2 \bigg)^{1/2}\\[2mm]
\end{split}
\end{align}
where for minimization $d_i^{+} = max \{ a_i - z_i, 0\}$ represents the modified distance from $z_i$ to the closest solution in $A$ with the corresponding value $a_i$.
```python
from pymoo.factory import get_performance_indicator
igd_plus = get_performance_indicator("igd+", pf)
print("IGD+", igd_plus.calc(A))
```
IGD+ 0.06466828842775944
### Hypervolume
For all performance indicators shown so far, a target set needs to be known. For Hypervolume, only a reference point needs to be provided. First, I would like to mention that we are using the Hypervolume implementation from [DEAP](https://deap.readthedocs.io/en/master/). It calculates the area/volume that is dominated by the provided set of solutions with respect to a reference point.
This image is taken from <cite data-cite="hv"></cite> and illustrates a two objective example where the area which is dominated by a set of points is shown in grey.
Whereas for the other metrics the goal was to minimize the distance to the Pareto-front, here we want to maximize the performance metric.
```python
from pymoo.factory import get_performance_indicator
hv = get_performance_indicator("hv", ref_point=np.array([1.2, 1.2]))
print("hv", hv.calc(A))
```
hv 0.9631646448182305
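For the two-objective case, the dominated area can also be computed with a short sweep. The sketch below is ours and assumes the points in `A` are mutually non-dominated and lie inside the reference box; under those assumptions it should agree with the DEAP-based value above up to floating point.
```python
ref = np.array([1.2, 1.2])
S = A[np.argsort(A[:, 0])]                        # sort by the first objective
hv_manual, prev_f2 = 0.0, ref[1]
for f1, f2 in S:
    hv_manual += (ref[0] - f1) * (prev_f2 - f2)   # add the newly dominated rectangle
    prev_f2 = f2
print("hv (manual)", hv_manual)
```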
| 8b9050be9a8b0d98a1138eec121c860bb92828b4 | 8,988 | ipynb | Jupyter Notebook | doc/source/misc/performance_indicator.ipynb | renansantosmendes/benchmark_tests | 106f842b304a7fc9fa348ea0b6d50f448e46538b | [
"Apache-2.0"
] | null | null | null | doc/source/misc/performance_indicator.ipynb | renansantosmendes/benchmark_tests | 106f842b304a7fc9fa348ea0b6d50f448e46538b | [
"Apache-2.0"
] | null | null | null | doc/source/misc/performance_indicator.ipynb | renansantosmendes/benchmark_tests | 106f842b304a7fc9fa348ea0b6d50f448e46538b | [
"Apache-2.0"
] | null | null | null | 26.91018 | 395 | 0.556965 | true | 1,358 | Qwen/Qwen-72B | 1. YES
2. YES | 0.904651 | 0.877477 | 0.79381 | __label__eng_Latn | 0.97428 | 0.682619 |
## bayestestimation basis
The bayestestimation module uses a hierachical Bayesian model to estimate the posterior distributions of two samples, the parameters of these samples can be approximated by simulation, as can the difference in the paramters.
#### Sections
- Specifying the hierachial model
- Estimating the posterior distribution
- Estimating the Bayes Factor
#### Specifying the hierachial model
The module largely follows the Bayesian-estimation-supercedes-the-t-test (BEST) implementation as specified by Kruschke ([link](https://pdfs.semanticscholar.org/dea6/0927efbd1f284b4132eae3461ea7ce0fb62a.pdf)).
Let $Y_A$ and $Y_B$ represent samples of continuous data from populations $A$ and $B$. The distributions of $Y_A$ and $Y_B$ can be specified using the following hierachial model:
\begin{equation}
\begin{aligned}
Y_A &\sim \textrm{T}(\nu, \mu_A, \sigma_A)
\\
Y_B &\sim \textrm{T}(\nu, \mu_B, \sigma_B)
\\
\mu_A, \mu_B &\sim \textrm{N}(\mu, 2s)
\\
\sigma_A, \sigma_B &\sim \textrm{Inv-Gamma}(\alpha, \beta)
\\
\nu &\sim \textrm{Exp(+1)}(\phi)
\end{aligned}
\end{equation}
Where $\mu$, $s$, $\alpha$, $\beta$ and $\phi$ are constants. $\textrm{Exp(+1)}$ represents an exponential distribution shifted by +1.
Following Kruschke, the default value for $\phi$ is 1/30. Also following Kruschke, the default values for $\mu$ and $s$ are the sample mean of the combined samples of $Y_A$ and $Y_B$ ($\bar{Y}$), and the combined sample standard deviation of $Y_A$ and $Y_B$, respectively.
Deviating from Kruschke, the prior distributions of $\sigma_A$ and $\sigma_B$ are modelled using an inverse-gamma distribution. The weakly-informative default values of $\alpha$ and $\beta$ are set to 0.001.
#### Estimating the posterior distribution
Estimation of the posterior distributions of $\mu_A$, $\mu_B$, $\sigma_A$, $\sigma_B$ is carried out using [pystan's](https://pystan.readthedocs.io/en/latest/index.html) MCMC sampling.
The parameter $\mu_B - \mu_A$ can easily be estimated using the draws from the posteriors of $\mu_A$ and $\mu_B$.
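A sketch of that last step (the arrays below are placeholders standing in for the MCMC draws extracted from the pystan fit): the difference and a 95% credible interval come directly from the element-wise draws.
```python
import numpy as np

rng = np.random.default_rng(0)
mu_a_draws = rng.normal(0.0, 0.1, 4000)   # placeholder posterior draws for mu_A
mu_b_draws = rng.normal(0.5, 0.1, 4000)   # placeholder posterior draws for mu_B

delta = mu_b_draws - mu_a_draws           # draws from the posterior of mu_B - mu_A
print(delta.mean(), np.percentile(delta, [2.5, 97.5]))
```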
#### Estimating the Bayes Factor
(watch this space)
```python
```
| 85ea9aee95acae26b4e3ff748ac959ce50dff91e | 3,407 | ipynb | Jupyter Notebook | docs/bayestestimation_basis.ipynb | oli-chipperfield/bayestestimation | aace7949fff01af2b574334a1a1fd2ce93fe10f4 | [
"MIT"
] | 1 | 2021-01-29T01:33:52.000Z | 2021-01-29T01:33:52.000Z | docs/bayestestimation_basis.ipynb | oli-chipperfield/bayestestimation | aace7949fff01af2b574334a1a1fd2ce93fe10f4 | [
"MIT"
] | null | null | null | docs/bayestestimation_basis.ipynb | oli-chipperfield/bayestestimation | aace7949fff01af2b574334a1a1fd2ce93fe10f4 | [
"MIT"
] | null | null | null | 34.07 | 285 | 0.59319 | true | 624 | Qwen/Qwen-72B | 1. YES
2. YES | 0.897695 | 0.787931 | 0.707322 | __label__eng_Latn | 0.977696 | 0.481678 |
# Solutions to Exercises (not Activities) in the Bohemian Unit
1: Write down as many questions as you can for this unit.
Maybe this is the most important one of these in this book (OER). Your questions are as likely as ours to be productive. But, here are some of ours. Most have no answers that we know of. In no particular order, here we go. For a given integer polynomial, which companion Bohemian matrix has _minimal_ height? Which matrix in the family has the maximum determinant? Which matrix in the family has the maximum characteristic height? Minimum nonzero determinant? How many matrices are singular? How many are stable? How many have multiple eigenvalues? How many nilpotent matrices are there? How many non-normal matrices are there? How many commuting pairs are there? What is the distribution of eigenvalue condition numbers? How many different eigenvalues are there? How many matrices have a given characteristic polynomial? How many have nontrivial Jordan form? How many have nontrivial Smith form? How many are orthogonal? unimodular? How many matrices have inverses that are also Bohemian? With the same height? (In this case we say the matrix family has _rhapsody_). Given the eigenvalues, can we find a matrix in the family with those eigenvalues?
The colouring scheme we settled on uses an approximate inversion of the cumulative distribution function and maps that onto a _perceptually even_ sequential colour map like viridis or cividis. We have only just begun to explore more sophisticated schemes, but this one has the advantage that you can sort of "see" the probability density in the colours. Are there better methods? Can we do this usefully in $3D$? What about multi-dimensional problems? Tensors?
2: Looking back at the Fractals Unit, there seem to be clear connections with this unit. Discuss them.
Some of the Bohemian eigenvalue pictures are suggestive of fractals, especially the upper Hessenberg zero diagonal Toeplitz ones. In fact we have just managed to prove that Sierpinski-like triangles genuinely appear in some of these (when the population has three elements). In some other cases we think we have also managed to prove that in the limit as $m\to\infty$ we get a genuine fractal. We can't think of a way to connect the Julia set idea, though.
3: Looking back at the Rootfinding Unit, there seem to be clear connections with this unit. Discuss them.
Mostly the connection is practical: we want a good way to find all the roots of the characteristic polynomial (when we go that route, possibly because of the compression factor). Newton's method can be used (sometimes!) to "polish" the roots to greater accuracy, for instance.
4: Looking back at the Continued Fractions Unit, it seems a stretch to connect them. Can you find a connection?
__One connection__: We thought of taking a general matrix and letting its entries be the partial quotients of the continued fraction of a number chosen "at random" in $[0,1)$. To make that "at random" have the right distribution turned out to be simple enough, because the distribution of partial quotients is known (called the [Gauss-Kuzmin distribution](https://en.wikipedia.org/wiki/Gauss–Kuzmin_distribution)) and its relation to the probability distribution of the iterates $x_n$ of the Gauss map is also known (and called the "Gauss measure"). This is
\begin{equation}
F(x) = \frac{1}{\ln 2}\int_{t=0}^x \frac{1}{1+t}\,dt = \mathrm{lg}(1+x)
\end{equation}
where the symbol lg means "log to base 2" and is frequently seen in computer science. We can invert this cumulative distribution by solving $u=\mathrm{lg}(1+x)$: exponentiating both sides with base $2$ gives $2^u = 1+x$, so $x=2^u-1$. Therefore, sampling $u$ uniformly on $(0,1)$ will give $x$ distributed on $(0,1)$ according to the Gauss measure; then taking the integer part (floor) of $1/x$ will give us the partial quotients with the correct distribution.
Doing this a large number of times and plotting the eigenvalue density we get a very interesting image; we're still thinking about it. We were surprised that there was a pattern, and we don't yet know how to explain it. This isn't _quite_ a Bohemian matrix problem, because the partial quotients of most numbers are unbounded. Still, very large entries are not likely, even though this is quite definitely what is known as a "heavy-tailed distribution" or "fat-tailed distribution". The expected value of a partial quotient is infinity! The use of floating-point essentially bounds the largest entry in practice; we do not know what this does to the statistics.
__Another way to connect__ continued fractions to Bohemian matrices might be to compute the continued fractions of the eigenvalues of one of our earlier computations, and see if there was any correlation with the "holes" (there might be—we have not tried this).
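As a starting point for that second idea, here is a quick sketch (ours) that extracts the first few partial quotients of a real number; it could be applied to, say, the real parts of eigenvalues from an earlier computation.
```python
import math

def partial_quotients(x, k=8):
    """Return the first k partial quotients of the continued fraction of x > 0."""
    quotients = []
    for _ in range(k):
        a = math.floor(x)
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return quotients

print(partial_quotients(math.pi))   # [3, 7, 15, 1, 292, ...]
```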
```python
import itertools
import random
import numpy as np
from numpy.polynomial import Polynomial as Poly
import matplotlib as plt
import time
import csv
import math
from PIL import Image
import json
import ast
import sys
sys.path.insert(0,'../../code')
from bohemian_inheritance import *
from densityPlot import *
```
```python
rng = np.random.default_rng(2022)
# We will convert uniform floats u on (0,1) to
# floats that have the Gauss measure by x = 2^u - 1
# and this will get the right distribution of partial quotients
# This lambda function replaced by broadcast operations which are faster
# partial_quotient = lambda u: math.floor( 1/(2**u-1)) # u=0 is unlikely
# 5000 in 1.5 seconds, 50,000 in 12 seconds, 500,000 in 2 minutes, 1 million in 4 minutes
# six hours and 14 minutes for one hundred million matrices at mdim=13
Nsample = 1*10**5
mdim = 6
A = Bohemian(mdim)
sequencelength = A.getNumberOfMatrixEntries()
one = np.ones(sequencelength)
two = 2*one
start = time.time()
B = 0.6*mdim*(Nsample)**(0.25)
print( "Bounding box is width 2*{}".format(B))
bounds = [-B,B,-B,B] # Found by experiment; Depends on Nsample!
nrow = math.floor(9*B) # These need to be adjusted depending on Nsample as well
ncol = math.floor(9*B)
image = DensityPlot(bounds, nrow, ncol)
for k in range(Nsample):
u = rng.random(size=sequencelength)
r = np.power(two,u)-one
p = np.floor_divide(one,r)
A.makeMatrix( p )
image.addPoints(A.eig())
# We encode the population into a label which we will use for the filename.
poplabel = "ContinuedFraction"
cmap = 'viridis'
fname = '../Supplementary Material/Bohemian/dense/pop_{}_{}_{}N{}.png'.format(poplabel,cmap,mdim,Nsample)
image.makeDensityPlot(cmap, filename=fname, bgcolor=[0, 0, 0, 1], colorscale="cumulative")
finish = time.time()
print("Took {} seconds to compute and plot ".format(finish-start))
```
| 5ac3944988c175044bf22e2d1a09eb4cebd863de | 112,685 | ipynb | Jupyter Notebook | book/Solutions/Solutions to Exercises (not Activities) in the Bohemian Unit.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | [
"MIT"
] | 14 | 2022-02-21T23:50:22.000Z | 2022-03-23T22:21:55.000Z | book/Solutions/Solutions to Exercises (not Activities) in the Bohemian Unit.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | [
"MIT"
] | null | null | null | book/Solutions/Solutions to Exercises (not Activities) in the Bohemian Unit.ipynb | jameshughes89/Computational-Discovery-on-Jupyter | 614eaaae126082106e1573675599e6895d09d96d | [
"MIT"
] | 2 | 2022-02-22T02:43:44.000Z | 2022-02-23T14:27:31.000Z | 629.52514 | 103,600 | 0.942787 | true | 1,681 | Qwen/Qwen-72B | 1. YES
2. YES | 0.699254 | 0.828939 | 0.579639 | __label__eng_Latn | 0.998974 | 0.185026 |
# Getting started with TensorFlow (Eager Mode)
**Learning Objectives**
- Understand difference between Tensorflow's two modes: Eager Execution and Graph Execution
- Practice defining and performing basic operations on constant Tensors
- Use Tensorflow's automatic differentiation capability
## Introduction
**Eager Execution**
Eager mode evaluates operations immediately and returns concrete values right away. To enable eager mode, simply place `tf.enable_eager_execution()` at the top of your code. We recommend using eager execution when prototyping, as it is intuitive, easier to debug, and requires less boilerplate code.
**Graph Execution**
Graph mode is TensorFlow's default execution mode (although it will change to eager with TF 2.0). In graph mode, operations only produce a symbolic graph which doesn't get executed until run within the context of a tf.Session(). This style of coding is less intuitive and has more boilerplate; however, it can lead to performance optimizations and is particularly suited for distributing training across multiple devices. We recommend using graph (delayed) execution for performance-sensitive production code.
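For a concrete contrast, here is what the same addition looks like in graph mode. It is shown commented out because eager execution is enabled in the next cells, and `tf.enable_eager_execution()` needs to be called before any graph operations are created; run it in a fresh TF 1.x session to see it work.
```python
# Graph-mode sketch (TF 1.x), for contrast with the eager examples below:
# a = tf.constant(value = [5, 3, 8], dtype = tf.int32)
# b = tf.constant(value = [3, -1, 2], dtype = tf.int32)
# c = a + b                      # a symbolic Tensor, no value yet
# with tf.Session() as sess:
#     print(sess.run(c))         # values only materialize inside a Session: [ 8  2 10]
```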
```python
# Ensure that we have Tensorflow 1.13.1 installed.
!pip3 freeze | grep tensorflow==1.13.1 || pip3 install tensorflow==1.13.1
```
/bin/sh: pip3: command not found
/bin/sh: pip3: command not found
```python
import tensorflow as tf
print(tf.__version__)
```
/Users/crawles/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
1.10.1
## Eager Execution
```python
tf.enable_eager_execution()
```
### Adding Two Tensors
The value of the tensor is printed, as well as its shape and data type.
```python
a = tf.constant(value = [5, 3, 8], dtype = tf.int32)
b = tf.constant(value = [3, -1, 2], dtype = tf.int32)
c = tf.add(x = a, y = b)
print(c)
```
tf.Tensor([ 8 2 10], shape=(3,), dtype=int32)
#### Overloaded Operators
We can also perform a `tf.add()` using the `+` operator. The `/,-,*` and `**` operators are similarly overloaded with the appropriate tensorflow operation.
```python
c = a + b # this is equivalent to tf.add(a,b)
print(c)
```
tf.Tensor([ 8 2 10], shape=(3,), dtype=int32)
### NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
```python
import numpy as np
a_py = [1,2] # native python list
b_py = [3,4] # native python list
a_np = np.array(object = [1,2]) # numpy array
b_np = np.array(object = [3,4]) # numpy array
a_tf = tf.constant(value = [1,2], dtype = tf.int32) # native TF tensor
b_tf = tf.constant(value = [3,4], dtype = tf.int32) # native TF tensor
for result in [tf.add(x = a_py, y = b_py), tf.add(x = a_np, y = b_np), tf.add(x = a_tf, y = b_tf)]:
print("Type: {}, Value: {}".format(type(result), result))
```
Type: <class 'EagerTensor'>, Value: [4 6]
Type: <class 'EagerTensor'>, Value: [4 6]
Type: <class 'EagerTensor'>, Value: [4 6]
You can convert a native TF tensor to a NumPy array using .numpy()
```python
a_tf.numpy()
```
array([1, 2], dtype=int32)
### Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
#### Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
```python
X = tf.constant(value = [1,2,3,4,5,6,7,8,9,10], dtype = tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
```
X:[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
Y:[12. 14. 16. 18. 20. 22. 24. 26. 28. 30.]
#### Loss Function
Using mean squared error, our loss function is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
$\hat{Y}$ represents the vector containing our model's predictions:
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
```python
def loss_mse(X, Y, w0, w1):
Y_hat = w0 * X + w1
return tf.reduce_mean(input_tensor = (Y_hat - Y)**2)
```
#### Gradient Function
To use gradient descent we need to take the partial derivative of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. The `params=[2,3]` argument tells TensorFlow to only compute derivatives with respect to the 2nd and 3rd arguments to the loss function (counting from 0, so really the 3rd and 4th).
```python
# Counting from 0, the 2nd and 3rd parameter to the loss function are our weights
grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params=[2,3])
```
#### Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
```python
STEPS = 1000
LEARNING_RATE = .02
# Initialize weights
w0 = tf.constant(value = 0.0, dtype = tf.float32)
w1 = tf.constant(value = 0.0, dtype = tf.float32)
for step in range(STEPS):
#1. Calculate gradients
d_w0, d_w1 = grad_f(X, Y, w0, w1)
#2. Update weights
w0 = w0 - d_w0 * LEARNING_RATE
w1 = w1 - d_w1 * LEARNING_RATE
#3. Periodically print MSE
if step % 100 == 0:
print("STEP: {} MSE: {}".format(step, loss_mse(X, Y, w0, w1)))
# Print final MSE and weights
print("STEP: {} MSE: {}".format(STEPS,loss_mse(X, Y, w0, w1)))
print("w0:{}".format(round(float(w0), 4)))
print("w1:{}".format(round(float(w1), 4)))
```
STEP: 0 MSE: 167.6111297607422
STEP: 100 MSE: 3.5321757793426514
STEP: 200 MSE: 0.6537718176841736
STEP: 300 MSE: 0.12100745737552643
STEP: 400 MSE: 0.022397063672542572
STEP: 500 MSE: 0.004145540297031403
STEP: 600 MSE: 0.0007674093940295279
STEP: 700 MSE: 0.0001420201879227534
STEP: 800 MSE: 2.628635775181465e-05
STEP: 900 MSE: 4.86889211970265e-06
STEP: 1000 MSE: 9.178326081382693e-07
w0:2.0003
w1:9.9979
## Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
```python
X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32)
Y = X * np.exp(-X**2)  # matches the target function y = x * exp(-x^2)
from matplotlib import pyplot as plt
%matplotlib inline
plt.plot(X, Y)
```
```python
def make_features(X):
features = [X]
features.append(tf.ones_like(X)) # Bias.
features.append(tf.square(X))
features.append(tf.sqrt(X))
features.append(tf.exp(X))
return tf.stack(features, axis=1)
def make_weights(n_weights):
W = [tf.constant(value = 0.0, dtype = tf.float32) for _ in range(n_weights)]
return tf.expand_dims(tf.stack(W),-1)
def predict(X, W):
Y_hat = tf.matmul(X, W)
return tf.squeeze(Y_hat, axis=-1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
return tf.reduce_mean(input_tensor = (Y_hat - Y)**2)
X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32)
Y = np.exp(-X**2) * X
grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params=[2])
```
```python
STEPS = 2000
LEARNING_RATE = .02
# Weights/features.
Xf = make_features(X)
# Xf = Xf[:,0:2] # Linear features only.
W = make_weights(Xf.get_shape()[1].value)
# For plotting
steps = []
losses = []
plt.figure()
for step in range(STEPS):
#1. Calculate gradients
dW = grad_f(Xf, Y, W)[0]
#2. Update weights
W -= dW * LEARNING_RATE
#3. Periodically print MSE
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
# Print final MSE and weights
print("STEP: {} MSE: {}".format(STEPS,loss_mse(Xf, Y, W)))
# Plot results
plt.figure()
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
plt.legend()
```
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
| 4403e592643fa21744f90f2c71020bacadd9315b | 64,233 | ipynb | Jupyter Notebook | courses/machine_learning/deepdive/02_tensorflow/a_tfstart_eager.ipynb | kamalaboulhosn/training-data-analyst | 41b2464a562b8d1d2699e4a6acc01ca3bb083d90 | [
"Apache-2.0"
] | 3 | 2019-06-27T16:32:45.000Z | 2019-08-09T17:37:22.000Z | courses/machine_learning/deepdive/02_tensorflow/a_tfstart_eager.ipynb | yungshenglu/training-data-analyst | 6cf69648400705298a88c2feeb69de1c593e245a | [
"Apache-2.0"
] | 6 | 2020-01-28T22:55:06.000Z | 2022-02-10T00:32:23.000Z | courses/machine_learning/deepdive/02_tensorflow/a_tfstart_eager.ipynb | yungshenglu/training-data-analyst | 6cf69648400705298a88c2feeb69de1c593e245a | [
"Apache-2.0"
] | 4 | 2020-05-15T06:23:05.000Z | 2021-12-20T06:00:15.000Z | 114.701786 | 22,368 | 0.854841 | true | 2,543 | Qwen/Qwen-72B | 1. YES
2. YES | 0.903294 | 0.896251 | 0.809579 | __label__eng_Latn | 0.918508 | 0.719255 |
_Lambda School Data Science_
# Ordinary Least Squares Regression
## What is Linear Regression?
Linear Regression is a statistical model that seeks to describe the relationship between some y variable and one or more x variables.
In the simplest case, linear regression seeks to fit a straight line through a cloud of points. This line is referred to as the "regression line" or "line of best fit." This line tries to summarize the relationship between our X and Y in a way that enables us to use the equation for that line to make predictions.
### Synonyms for "y variable"
- Dependent Variable
- Response Variable
- Outcome Variable
- Predicted Variable
- Measured Variable
- Explained Variable
- Label
- Target
### Synonyms for "x variable"
- Independent Variable
- Explanatory Variable
- Regressor
- Covariate
- Feature
# Simple Linear Regression (bivariate)
## Making Predictions
Say that we were trying to create a model that captured the relationship between temperature outside and ice cream sales. In Machine Learning our goal is often different from that of other flavors of Linear Regression Analysis, because we're trying to fit a model to this data with the intention of making **predictions** on new data (in the future) that we don't have yet.
## What are we trying to predict?
So if we had measured ice cream sales and the temperature outside on 11 different days, at the end of our modeling **what would be the thing that we would want to predict? - Ice Cream Sales or Temperature?**
We would probably want to be measuring temperature with the intention of using that to **forecast** ice cream sales. If we were able to successfully forecast ice cream sales from temperature, this might help us know beforehand how much ice cream to make or how many cones to buy or on which days to open our store, etc. Being able to make predictions accurately has a lot of business implications. This is why making accurate predictions is so valuable (And in large part is why data scientists are paid so well).
### Y Variable Intuition
We want the thing that we're trying to predict to serve as our **y** variable. This is why it's sometimes called the "predicted variable." We call it the "dependent" variable because our prediction for how much ice cream we're going to sell "depends" on the temperature outside.
### X Variable Intuition
All other variables that we use to predict our y variable (we're going to start off just using one) we call our **x** variables. These are called our "independent" variables because they don't *depend* on y, they "explain" y. Hence they are also referred to as our "explanatory" variables.
```python
%matplotlib inline
from ipywidgets import interact
from matplotlib.patches import Rectangle
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
import statsmodels.api as sm
```
```python
columns = ['Year','Incumbent Party Candidate','Other Candidate','Incumbent Party Vote Share']
data = [[1952,"Stevenson","Eisenhower",44.6],
[1956,"Eisenhower","Stevenson",57.76],
[1960,"Nixon","Kennedy",49.91],
[1964,"Johnson","Goldwater",61.34],
[1968,"Humphrey","Nixon",49.60],
[1972,"Nixon","McGovern",61.79],
[1976,"Ford","Carter",48.95],
[1980,"Carter","Reagan",44.70],
[1984,"Reagan","Mondale",59.17],
[1988,"Bush, Sr.","Dukakis",53.94],
[1992,"Bush, Sr.","Clinton",46.55],
[1996,"Clinton","Dole",54.74],
[2000,"Gore","Bush, Jr.",50.27],
[2004,"Bush, Jr.","Kerry",51.24],
[2008,"McCain","Obama",46.32],
[2012,"Obama","Romney",52.00]]
df = pd.DataFrame(data=data, columns=columns)
```
```python
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Incumbent Party Candidate</th>
<th>Other Candidate</th>
<th>Incumbent Party Vote Share</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1952</td>
<td>Stevenson</td>
<td>Eisenhower</td>
<td>44.60</td>
</tr>
<tr>
<th>1</th>
<td>1956</td>
<td>Eisenhower</td>
<td>Stevenson</td>
<td>57.76</td>
</tr>
<tr>
<th>2</th>
<td>1960</td>
<td>Nixon</td>
<td>Kennedy</td>
<td>49.91</td>
</tr>
<tr>
<th>3</th>
<td>1964</td>
<td>Johnson</td>
<td>Goldwater</td>
<td>61.34</td>
</tr>
<tr>
<th>4</th>
<td>1968</td>
<td>Humphrey</td>
<td>Nixon</td>
<td>49.60</td>
</tr>
<tr>
<th>5</th>
<td>1972</td>
<td>Nixon</td>
<td>McGovern</td>
<td>61.79</td>
</tr>
<tr>
<th>6</th>
<td>1976</td>
<td>Ford</td>
<td>Carter</td>
<td>48.95</td>
</tr>
<tr>
<th>7</th>
<td>1980</td>
<td>Carter</td>
<td>Reagan</td>
<td>44.70</td>
</tr>
<tr>
<th>8</th>
<td>1984</td>
<td>Reagan</td>
<td>Mondale</td>
<td>59.17</td>
</tr>
<tr>
<th>9</th>
<td>1988</td>
<td>Bush, Sr.</td>
<td>Dukakis</td>
<td>53.94</td>
</tr>
<tr>
<th>10</th>
<td>1992</td>
<td>Bush, Sr.</td>
<td>Clinton</td>
<td>46.55</td>
</tr>
<tr>
<th>11</th>
<td>1996</td>
<td>Clinton</td>
<td>Dole</td>
<td>54.74</td>
</tr>
<tr>
<th>12</th>
<td>2000</td>
<td>Gore</td>
<td>Bush, Jr.</td>
<td>50.27</td>
</tr>
<tr>
<th>13</th>
<td>2004</td>
<td>Bush, Jr.</td>
<td>Kerry</td>
<td>51.24</td>
</tr>
<tr>
<th>14</th>
<td>2008</td>
<td>McCain</td>
<td>Obama</td>
<td>46.32</td>
</tr>
<tr>
<th>15</th>
<td>2012</td>
<td>Obama</td>
<td>Romney</td>
<td>52.00</td>
</tr>
</tbody>
</table>
</div>
```python
df.plot(x='Year', y='Incumbent Party Vote Share', kind='scatter');
```
```python
df['Incumbent Party Vote Share'].describe()
```
count 16.000000
mean 52.055000
std 5.608951
min 44.600000
25% 48.350000
50% 50.755000
75% 55.495000
max 61.790000
Name: Incumbent Party Vote Share, dtype: float64
```python
target = 'Incumbent Party Vote Share'
df['Prediction'] = df[target].mean()
df['Error'] = df['Prediction'] - df[target]
```
```python
df['Error'].sum()
```
-1.4210854715202004e-14
```python
df['Absolute Error'] = df['Error'].abs()
df['Absolute Error'].sum()
```
72.82
```python
df['Absolute Error'].mean()
```
4.55125
```python
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Incumbent Party Candidate</th>
<th>Other Candidate</th>
<th>Incumbent Party Vote Share</th>
<th>Prediction</th>
<th>Error</th>
<th>Absolute Error</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1952</td>
<td>Stevenson</td>
<td>Eisenhower</td>
<td>44.60</td>
<td>52.055</td>
<td>7.455</td>
<td>7.455</td>
</tr>
<tr>
<th>1</th>
<td>1956</td>
<td>Eisenhower</td>
<td>Stevenson</td>
<td>57.76</td>
<td>52.055</td>
<td>-5.705</td>
<td>5.705</td>
</tr>
<tr>
<th>2</th>
<td>1960</td>
<td>Nixon</td>
<td>Kennedy</td>
<td>49.91</td>
<td>52.055</td>
<td>2.145</td>
<td>2.145</td>
</tr>
<tr>
<th>3</th>
<td>1964</td>
<td>Johnson</td>
<td>Goldwater</td>
<td>61.34</td>
<td>52.055</td>
<td>-9.285</td>
<td>9.285</td>
</tr>
<tr>
<th>4</th>
<td>1968</td>
<td>Humphrey</td>
<td>Nixon</td>
<td>49.60</td>
<td>52.055</td>
<td>2.455</td>
<td>2.455</td>
</tr>
<tr>
<th>5</th>
<td>1972</td>
<td>Nixon</td>
<td>McGovern</td>
<td>61.79</td>
<td>52.055</td>
<td>-9.735</td>
<td>9.735</td>
</tr>
<tr>
<th>6</th>
<td>1976</td>
<td>Ford</td>
<td>Carter</td>
<td>48.95</td>
<td>52.055</td>
<td>3.105</td>
<td>3.105</td>
</tr>
<tr>
<th>7</th>
<td>1980</td>
<td>Carter</td>
<td>Reagan</td>
<td>44.70</td>
<td>52.055</td>
<td>7.355</td>
<td>7.355</td>
</tr>
<tr>
<th>8</th>
<td>1984</td>
<td>Reagan</td>
<td>Mondale</td>
<td>59.17</td>
<td>52.055</td>
<td>-7.115</td>
<td>7.115</td>
</tr>
<tr>
<th>9</th>
<td>1988</td>
<td>Bush, Sr.</td>
<td>Dukakis</td>
<td>53.94</td>
<td>52.055</td>
<td>-1.885</td>
<td>1.885</td>
</tr>
<tr>
<th>10</th>
<td>1992</td>
<td>Bush, Sr.</td>
<td>Clinton</td>
<td>46.55</td>
<td>52.055</td>
<td>5.505</td>
<td>5.505</td>
</tr>
<tr>
<th>11</th>
<td>1996</td>
<td>Clinton</td>
<td>Dole</td>
<td>54.74</td>
<td>52.055</td>
<td>-2.685</td>
<td>2.685</td>
</tr>
<tr>
<th>12</th>
<td>2000</td>
<td>Gore</td>
<td>Bush, Jr.</td>
<td>50.27</td>
<td>52.055</td>
<td>1.785</td>
<td>1.785</td>
</tr>
<tr>
<th>13</th>
<td>2004</td>
<td>Bush, Jr.</td>
<td>Kerry</td>
<td>51.24</td>
<td>52.055</td>
<td>0.815</td>
<td>0.815</td>
</tr>
<tr>
<th>14</th>
<td>2008</td>
<td>McCain</td>
<td>Obama</td>
<td>46.32</td>
<td>52.055</td>
<td>5.735</td>
<td>5.735</td>
</tr>
<tr>
<th>15</th>
<td>2012</td>
<td>Obama</td>
<td>Romney</td>
<td>52.00</td>
<td>52.055</td>
<td>0.055</td>
<td>0.055</td>
</tr>
</tbody>
</table>
</div>
```python
mean_absolute_error(y_true=df[target], y_pred=df['Prediction'])
```
4.55125
```python
r2_score(y_true=df[target], y_pred=df['Prediction'])
```
0.0
# R Squared: $R^2$
One final attribute of linear regressions that we're going to talk about today is a measure of goodness of fit known as $R^2$ or R-squared. $R^2$ is a statistical measure of how closely the data fit our regression line. A helpful interpretation of the $R^2$ is the percentage of the dependent variable that is explained by the model.
In other words, the $R^2$ is the percentage of y that is explained by the x variables included in the model. For this reason the $R^2$ is also known as the "coefficient of determination," because it describes how much of y is explained (or determined) by our x variables. We won't go into the full derivation of $R^2$ today, just know that a higher $R^2$ percentage is nearly always better and indicates a model that fits the data more closely.
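For the curious, here is the one-line calculation behind the number anyway: $R^2 = 1 - SS_{residual} / SS_{total}$. With the mean-only `Prediction` column from above, the two sums are equal, so $R^2$ is 0, matching the `r2_score` output.
```python
ss_total = ((df[target] - df[target].mean()) ** 2).sum()
ss_residual = ((df[target] - df['Prediction']) ** 2).sum()
print(1 - ss_residual / ss_total)   # 0.0 for the mean-only baseline
```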
# Add data
```python
columns = ['Year','Average Recent Growth in Personal Incomes']
data = [[1952,2.40],
[1956,2.89],
[1960, .85],
[1964,4.21],
[1968,3.02],
[1972,3.62],
[1976,1.08],
[1980,-.39],
[1984,3.86],
[1988,2.27],
[1992, .38],
[1996,1.04],
[2000,2.36],
[2004,1.72],
[2008, .10],
[2012, .95]]
growth = pd.DataFrame(data=data, columns=columns)
```
```python
df = df.merge(growth)
df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Incumbent Party Candidate</th>
<th>Other Candidate</th>
<th>Incumbent Party Vote Share</th>
<th>Prediction</th>
<th>Error</th>
<th>Absolute Error</th>
<th>Average Recent Growth in Personal Incomes</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1952</td>
<td>Stevenson</td>
<td>Eisenhower</td>
<td>44.60</td>
<td>52.055</td>
<td>7.455</td>
<td>7.455</td>
<td>2.40</td>
</tr>
<tr>
<th>1</th>
<td>1956</td>
<td>Eisenhower</td>
<td>Stevenson</td>
<td>57.76</td>
<td>52.055</td>
<td>-5.705</td>
<td>5.705</td>
<td>2.89</td>
</tr>
<tr>
<th>2</th>
<td>1960</td>
<td>Nixon</td>
<td>Kennedy</td>
<td>49.91</td>
<td>52.055</td>
<td>2.145</td>
<td>2.145</td>
<td>0.85</td>
</tr>
<tr>
<th>3</th>
<td>1964</td>
<td>Johnson</td>
<td>Goldwater</td>
<td>61.34</td>
<td>52.055</td>
<td>-9.285</td>
<td>9.285</td>
<td>4.21</td>
</tr>
<tr>
<th>4</th>
<td>1968</td>
<td>Humphrey</td>
<td>Nixon</td>
<td>49.60</td>
<td>52.055</td>
<td>2.455</td>
<td>2.455</td>
<td>3.02</td>
</tr>
<tr>
<th>5</th>
<td>1972</td>
<td>Nixon</td>
<td>McGovern</td>
<td>61.79</td>
<td>52.055</td>
<td>-9.735</td>
<td>9.735</td>
<td>3.62</td>
</tr>
<tr>
<th>6</th>
<td>1976</td>
<td>Ford</td>
<td>Carter</td>
<td>48.95</td>
<td>52.055</td>
<td>3.105</td>
<td>3.105</td>
<td>1.08</td>
</tr>
<tr>
<th>7</th>
<td>1980</td>
<td>Carter</td>
<td>Reagan</td>
<td>44.70</td>
<td>52.055</td>
<td>7.355</td>
<td>7.355</td>
<td>-0.39</td>
</tr>
<tr>
<th>8</th>
<td>1984</td>
<td>Reagan</td>
<td>Mondale</td>
<td>59.17</td>
<td>52.055</td>
<td>-7.115</td>
<td>7.115</td>
<td>3.86</td>
</tr>
<tr>
<th>9</th>
<td>1988</td>
<td>Bush, Sr.</td>
<td>Dukakis</td>
<td>53.94</td>
<td>52.055</td>
<td>-1.885</td>
<td>1.885</td>
<td>2.27</td>
</tr>
<tr>
<th>10</th>
<td>1992</td>
<td>Bush, Sr.</td>
<td>Clinton</td>
<td>46.55</td>
<td>52.055</td>
<td>5.505</td>
<td>5.505</td>
<td>0.38</td>
</tr>
<tr>
<th>11</th>
<td>1996</td>
<td>Clinton</td>
<td>Dole</td>
<td>54.74</td>
<td>52.055</td>
<td>-2.685</td>
<td>2.685</td>
<td>1.04</td>
</tr>
<tr>
<th>12</th>
<td>2000</td>
<td>Gore</td>
<td>Bush, Jr.</td>
<td>50.27</td>
<td>52.055</td>
<td>1.785</td>
<td>1.785</td>
<td>2.36</td>
</tr>
<tr>
<th>13</th>
<td>2004</td>
<td>Bush, Jr.</td>
<td>Kerry</td>
<td>51.24</td>
<td>52.055</td>
<td>0.815</td>
<td>0.815</td>
<td>1.72</td>
</tr>
<tr>
<th>14</th>
<td>2008</td>
<td>McCain</td>
<td>Obama</td>
<td>46.32</td>
<td>52.055</td>
<td>5.735</td>
<td>5.735</td>
<td>0.10</td>
</tr>
<tr>
<th>15</th>
<td>2012</td>
<td>Obama</td>
<td>Romney</td>
<td>52.00</td>
<td>52.055</td>
<td>0.055</td>
<td>0.055</td>
<td>0.95</td>
</tr>
</tbody>
</table>
</div>
```python
feature = 'Average Recent Growth in Personal Incomes'
```
```python
df.plot(x=feature, y=target, kind='scatter');
```
We can see from the scatterplot that these data points seem to follow a somewhat linear relationship. This means that we could probably summarize their relationship well by fitting a line of best fit to these points. Let's do it.
## The Equation for a Line
As we know a common equation for a line is:
\begin{align}
y = mx + b
\end{align}
Where $m$ is the slope of our line and $b$ is the y-intercept.
If we want to plot a line through our cloud of points, we need to figure out what these two values should be. Linear Regression seeks to **estimate** the slope and intercept values that describe a line that best fits the data points.
```python
m = 4
b = 44
df['Prediction'] = m * df[feature] + b
df['Error'] = df['Prediction'] - df[target]
df['Absolute Error'] = df['Error'].abs()
df['Absolute Error'].sum()
```
45.27999999999999
```python
df['Absolute Error'].mean()
```
2.829999999999999
```python
r2_score(y_true=df[target], y_pred=df['Prediction'])
```
0.5178779627255485
```python
ax = df.plot(x=feature, y=target, kind='scatter')
df.plot(x=feature, y='Prediction', kind='line', ax=ax);
```
```python
def regression(m, b):
df['Prediction'] = m * df[feature] + b
df['Error'] = df['Prediction'] - df[target]
df['Absolute Error'] = df['Error'].abs()
sum_absolute_error = df['Absolute Error'].sum()
title = f'Sum of absolute errors: {sum_absolute_error}'
ax = df.plot(x=feature, y=target, kind='scatter', title=title, figsize=(7, 7))
df.plot(x=feature, y='Prediction', kind='line', ax=ax)
regression(m=4, b=48)
```
## Residual Error
The residual error is the distance between points in our dataset and our regression line.
```python
def regression(m, b):
df['Prediction'] = m * df[feature] + b
df['Error'] = df['Prediction'] - df[target]
df['Absolute Error'] = df['Error'].abs()
sum_absolute_error = df['Absolute Error'].sum()
title = f'Sum of absolute errors: {sum_absolute_error}'
ax = df.plot(x=feature, y=target, kind='scatter', title=title, figsize=(7, 7))
df.plot(x=feature, y='Prediction', kind='line', ax=ax)
for x, y1, y2 in zip(df[feature], df[target], df['Prediction']):
ax.plot((x, x), (y1, y2), color='grey')
regression(m=3, b=46)
```
```python
interact(regression, m=(-10,10,0.5), b=(40,60,0.5));
```
interactive(children=(FloatSlider(value=0.0, description='m', max=10.0, min=-10.0, step=0.5), FloatSlider(valu…
```python
df['Square Error'] = df['Error'] **2
```
```python
def regression(m, b):
df['Prediction'] = m * df[feature] + b
df['Error'] = df['Prediction'] - df[target]
df['Absolute Error'] = df['Error'].abs()
df['Square Error'] = df['Error'] **2
sum_square_error = df['Square Error'].sum()
title = f'Sum of square errors: {sum_square_error}'
ax = df.plot(x=feature, y=target, kind='scatter', title=title, figsize=(7, 7))
df.plot(x=feature, y='Prediction', kind='line', ax=ax)
xmin, xmax = ax.get_xlim()
ymin, ymax = ax.get_ylim()
scale = (xmax-xmin)/(ymax-ymin)
for x, y1, y2 in zip(df[feature], df[target], df['Prediction']):
bottom_left = (x, min(y1, y2))
height = abs(y1 - y2)
width = height * scale
ax.add_patch(Rectangle(xy=bottom_left, width=width, height=height, alpha=0.1))
```
```python
interact(regression, m=(-10,10,0.5), b=(40,60,0.5));
```
interactive(children=(FloatSlider(value=0.0, description='m', max=10.0, min=-10.0, step=0.5), FloatSlider(valu…
```python
b = 46
ms = np.arange(-10,10,0.5)
sses = []
for m in ms:
predictions = m * df[feature] + b
errors = predictions - df[target]
square_errors = errors ** 2
sse = square_errors.sum()
sses.append(sse)
hypotheses = pd.DataFrame({'Slope': ms})
hypotheses['Intercept'] = b
hypotheses['Sum of Square Errors'] = sses
hypotheses.sort_values(by='Sum of Square Errors')
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Slope</th>
<th>Intercept</th>
<th>Sum of Square Errors</th>
</tr>
</thead>
<tbody>
<tr>
<th>26</th>
<td>3.0</td>
<td>46</td>
<td>200.48220</td>
</tr>
<tr>
<th>27</th>
<td>3.5</td>
<td>46</td>
<td>209.41375</td>
</tr>
<tr>
<th>25</th>
<td>2.5</td>
<td>46</td>
<td>234.96115</td>
</tr>
<tr>
<th>28</th>
<td>4.0</td>
<td>46</td>
<td>261.75580</td>
</tr>
<tr>
<th>24</th>
<td>2.0</td>
<td>46</td>
<td>312.85060</td>
</tr>
<tr>
<th>29</th>
<td>4.5</td>
<td>46</td>
<td>357.50835</td>
</tr>
<tr>
<th>23</th>
<td>1.5</td>
<td>46</td>
<td>434.15055</td>
</tr>
<tr>
<th>30</th>
<td>5.0</td>
<td>46</td>
<td>496.67140</td>
</tr>
<tr>
<th>22</th>
<td>1.0</td>
<td>46</td>
<td>598.86100</td>
</tr>
<tr>
<th>31</th>
<td>5.5</td>
<td>46</td>
<td>679.24495</td>
</tr>
<tr>
<th>21</th>
<td>0.5</td>
<td>46</td>
<td>806.98195</td>
</tr>
<tr>
<th>32</th>
<td>6.0</td>
<td>46</td>
<td>905.22900</td>
</tr>
<tr>
<th>20</th>
<td>0.0</td>
<td>46</td>
<td>1058.51340</td>
</tr>
<tr>
<th>33</th>
<td>6.5</td>
<td>46</td>
<td>1174.62355</td>
</tr>
<tr>
<th>19</th>
<td>-0.5</td>
<td>46</td>
<td>1353.45535</td>
</tr>
<tr>
<th>34</th>
<td>7.0</td>
<td>46</td>
<td>1487.42860</td>
</tr>
<tr>
<th>18</th>
<td>-1.0</td>
<td>46</td>
<td>1691.80780</td>
</tr>
<tr>
<th>35</th>
<td>7.5</td>
<td>46</td>
<td>1843.64415</td>
</tr>
<tr>
<th>17</th>
<td>-1.5</td>
<td>46</td>
<td>2073.57075</td>
</tr>
<tr>
<th>36</th>
<td>8.0</td>
<td>46</td>
<td>2243.27020</td>
</tr>
<tr>
<th>16</th>
<td>-2.0</td>
<td>46</td>
<td>2498.74420</td>
</tr>
<tr>
<th>37</th>
<td>8.5</td>
<td>46</td>
<td>2686.30675</td>
</tr>
<tr>
<th>15</th>
<td>-2.5</td>
<td>46</td>
<td>2967.32815</td>
</tr>
<tr>
<th>38</th>
<td>9.0</td>
<td>46</td>
<td>3172.75380</td>
</tr>
<tr>
<th>14</th>
<td>-3.0</td>
<td>46</td>
<td>3479.32260</td>
</tr>
<tr>
<th>39</th>
<td>9.5</td>
<td>46</td>
<td>3702.61135</td>
</tr>
<tr>
<th>13</th>
<td>-3.5</td>
<td>46</td>
<td>4034.72755</td>
</tr>
<tr>
<th>12</th>
<td>-4.0</td>
<td>46</td>
<td>4633.54300</td>
</tr>
<tr>
<th>11</th>
<td>-4.5</td>
<td>46</td>
<td>5275.76895</td>
</tr>
<tr>
<th>10</th>
<td>-5.0</td>
<td>46</td>
<td>5961.40540</td>
</tr>
<tr>
<th>9</th>
<td>-5.5</td>
<td>46</td>
<td>6690.45235</td>
</tr>
<tr>
<th>8</th>
<td>-6.0</td>
<td>46</td>
<td>7462.90980</td>
</tr>
<tr>
<th>7</th>
<td>-6.5</td>
<td>46</td>
<td>8278.77775</td>
</tr>
<tr>
<th>6</th>
<td>-7.0</td>
<td>46</td>
<td>9138.05620</td>
</tr>
<tr>
<th>5</th>
<td>-7.5</td>
<td>46</td>
<td>10040.74515</td>
</tr>
<tr>
<th>4</th>
<td>-8.0</td>
<td>46</td>
<td>10986.84460</td>
</tr>
<tr>
<th>3</th>
<td>-8.5</td>
<td>46</td>
<td>11976.35455</td>
</tr>
<tr>
<th>2</th>
<td>-9.0</td>
<td>46</td>
<td>13009.27500</td>
</tr>
<tr>
<th>1</th>
<td>-9.5</td>
<td>46</td>
<td>14085.60595</td>
</tr>
<tr>
<th>0</th>
<td>-10.0</td>
<td>46</td>
<td>15205.34740</td>
</tr>
</tbody>
</table>
</div>
```python
hypotheses.plot(x='Slope', y='Sum of Square Errors',
title=f'Intercept={b}');
```
```python
X = df[[feature]]
y = df[target]
X.shape, y.shape
```
((16, 1), (16,))
```python
model = LinearRegression()
model.fit(X, y)
```
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None,
normalize=False)
```python
model.coef_, model.intercept_
```
(array([3.06052805]), 46.247648016800795)
```python
model.predict([[0]])
```
array([46.24764802])
```python
model.predict([[1]])
```
array([49.30817607])
```python
model.predict([[2]])
```
array([52.36870413])
```python
model.predict([[3]])
```
array([55.42923218])
```python
model.predict(X)
```
array([53.59291535, 55.09257409, 48.84909686, 59.13247113, 55.49044274,
57.32675957, 49.55301832, 45.05404208, 58.06128631, 53.1950467 ,
47.41064868, 49.43059719, 53.47049423, 51.51175627, 46.55370082,
49.15514967])
```python
df['Prediction'] = model.predict(X)
```
```python
ax = df.plot(x=feature, y=target, kind='scatter', title='sklearn LinearRegression')
df.plot(x=feature, y='Prediction', kind='line', ax=ax);
```
```python
df['Error'] = df['Prediction'] - y
```
```python
df['Absolute Error'] = df['Error'].abs()
df['Square Error'] = df['Error'] ** 2
```
```python
df['Square Error'].mean()
```
12.392042143389562
```python
mean_squared_error(y_true=y, y_pred=model.predict(X))
```
12.39204214338956
```python
np.sqrt(mean_squared_error(y, model.predict(X)))
```
3.5202332512760512
```python
model.score(X, y)
```
0.5798462099485426
```python
r2_score(y, model.predict(X))
```
0.5798462099485426
### Statsmodels
https://www.statsmodels.org/dev/examples/notebooks/generated/ols.html
```python
X
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Average Recent Growth in Personal Incomes</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2.40</td>
</tr>
<tr>
<th>1</th>
<td>2.89</td>
</tr>
<tr>
<th>2</th>
<td>0.85</td>
</tr>
<tr>
<th>3</th>
<td>4.21</td>
</tr>
<tr>
<th>4</th>
<td>3.02</td>
</tr>
<tr>
<th>5</th>
<td>3.62</td>
</tr>
<tr>
<th>6</th>
<td>1.08</td>
</tr>
<tr>
<th>7</th>
<td>-0.39</td>
</tr>
<tr>
<th>8</th>
<td>3.86</td>
</tr>
<tr>
<th>9</th>
<td>2.27</td>
</tr>
<tr>
<th>10</th>
<td>0.38</td>
</tr>
<tr>
<th>11</th>
<td>1.04</td>
</tr>
<tr>
<th>12</th>
<td>2.36</td>
</tr>
<tr>
<th>13</th>
<td>1.72</td>
</tr>
<tr>
<th>14</th>
<td>0.10</td>
</tr>
<tr>
<th>15</th>
<td>0.95</td>
</tr>
</tbody>
</table>
</div>
```python
sm.add_constant(X)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>const</th>
<th>Average Recent Growth in Personal Incomes</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1.0</td>
<td>2.40</td>
</tr>
<tr>
<th>1</th>
<td>1.0</td>
<td>2.89</td>
</tr>
<tr>
<th>2</th>
<td>1.0</td>
<td>0.85</td>
</tr>
<tr>
<th>3</th>
<td>1.0</td>
<td>4.21</td>
</tr>
<tr>
<th>4</th>
<td>1.0</td>
<td>3.02</td>
</tr>
<tr>
<th>5</th>
<td>1.0</td>
<td>3.62</td>
</tr>
<tr>
<th>6</th>
<td>1.0</td>
<td>1.08</td>
</tr>
<tr>
<th>7</th>
<td>1.0</td>
<td>-0.39</td>
</tr>
<tr>
<th>8</th>
<td>1.0</td>
<td>3.86</td>
</tr>
<tr>
<th>9</th>
<td>1.0</td>
<td>2.27</td>
</tr>
<tr>
<th>10</th>
<td>1.0</td>
<td>0.38</td>
</tr>
<tr>
<th>11</th>
<td>1.0</td>
<td>1.04</td>
</tr>
<tr>
<th>12</th>
<td>1.0</td>
<td>2.36</td>
</tr>
<tr>
<th>13</th>
<td>1.0</td>
<td>1.72</td>
</tr>
<tr>
<th>14</th>
<td>1.0</td>
<td>0.10</td>
</tr>
<tr>
<th>15</th>
<td>1.0</td>
<td>0.95</td>
</tr>
</tbody>
</table>
</div>
```python
model = sm.OLS(y, sm.add_constant(X))
results = model.fit()
print(results.summary())
```
OLS Regression Results
======================================================================================
Dep. Variable: Incumbent Party Vote Share R-squared: 0.580
Model: OLS Adj. R-squared: 0.550
Method: Least Squares F-statistic: 19.32
Date: Mon, 29 Apr 2019 Prob (F-statistic): 0.000610
Time: 12:42:56 Log-Likelihood: -42.839
No. Observations: 16 AIC: 89.68
Df Residuals: 14 BIC: 91.22
Df Model: 1
Covariance Type: nonrobust
=============================================================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------------------------------------
const 46.2476 1.622 28.514 0.000 42.769 49.726
Average Recent Growth in Personal Incomes 3.0605 0.696 4.396 0.001 1.567 4.554
==============================================================================
Omnibus: 5.392 Durbin-Watson: 2.379
Prob(Omnibus): 0.067 Jarque-Bera (JB): 2.828
Skew: -0.961 Prob(JB): 0.243
Kurtosis: 3.738 Cond. No. 4.54
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
/anaconda3/lib/python3.7/site-packages/scipy/stats/stats.py:1416: UserWarning: kurtosistest only valid for n>=20 ... continuing anyway, n=16
"anyway, n=%i" % int(n))
# The Anatomy of Linear Regression
- Intercept: The $b$ value in our line equation $y=mx+b$
- Slope: The $m$ value in our line equation $y=mx+b$. These two values together define our regression line.
- $\hat{y}$ : A prediction
- Line of Best Fit (Regression Line)
- Predicted (fitted) Values: Points on our regression line
- Observed Values: Points from our dataset
- Error: The distance between predicted and observed values.
# More Formal Notation
We have talked about a regression line being represented like a regular line $y=mx+b$, but as we get to more complicated versions we're going to need to extend this equation. So let's establish the proper terminology.
**X** - Independent Variable, predictor variable, explanatory variable, regressor, covariate
**Y** - Response variable, predicted variable, measured variable, explained variable, outcome variable
$\beta_0$ - "Beta Naught" or "Beta Zero", the intercept value. This is how much of y would exist if X were zero. This is sometimes represented by the letter "a" but I hate that. So it's "Beta 0" during my lecture.
$\beta_1$ - "Beta One" The primary coefficient of interest. This value is the slope of the line that is estimated by "minimizing the sum of the squared errors/residuals" - We'll get to that.
$\epsilon$ - "Epsilon" The "error term", random noise, things outside of our model that affect y.
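Putting these pieces together, the simple linear regression model we are estimating can be written as:
\begin{align}
y = \beta_0 + \beta_1 X + \epsilon
\end{align}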
# How Does it do it?
## Minimizing the Sum of the Squared Error
The most common method of estimating our $\beta$ parameters is what's known as "Ordinary Least Squares" (OLS). (There are different methods of arriving at a line of best fit). OLS estimates the parameters that minimize the squared distance between each point in our dataset and our line of best fit.
\begin{align}
SSE = \sum(y_i - \hat{y})^2
\end{align}
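As a quick numerical check (ours, assuming `scipy` is available), directly minimizing this sum of squared errors recovers essentially the same intercept and slope that sklearn found above.
```python
from scipy.optimize import minimize

def sse(params):
    b0, b1 = params
    return ((df[target] - (b0 + b1 * df[feature])) ** 2).sum()

print(minimize(sse, x0=[0.0, 0.0]).x)   # approximately [46.25, 3.06]
```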
## Linear Algebra!
The same result that is found by minimizing the sum of the squared errors can also be found through a linear algebra process known as the "Least Squares Solution:"
Before we can work with this equation in its linear algebra form we have to understand how to set up the matrices that are involved in this equation.
### The $\beta$ vector
The $\beta$ vector represents all the parameters that we are trying to estimate; our $y$ vector and $X$ matrix are filled with data from our dataset. The $\beta$ vector holds the variables that we are solving for: $\beta_0$ and $\beta_1$
Now that we have all of the necessary parts we can set them up in the following equation:
\begin{align}
y = X \beta + \epsilon
\end{align}
Since our $\epsilon$ value represents **random** error we can assume that it will equal zero on average.
\begin{align}
y = X \beta
\end{align}
The objective now is to isolate the $\beta$ matrix. We can do this by pre-multiplying both sides by "X transpose" $X^{T}$.
\begin{align}
X^{T}y = X^{T}X \beta
\end{align}
Since $X^{T}X$ is always a square matrix, if it is also invertible then we can pre-multiply both sides by its inverse to remove it from the right hand side. (We'll talk tomorrow about situations that could lead to $X^{T}X$ not being invertible.)
\begin{align}
(X^{T}X)^{-1}X^{T}y = (X^{T}X)^{-1}X^{T}X \beta
\end{align}
Since any matrix multiplied by its inverse results in the identity matrix, and anything multiplied by the identity matrix is itself, we are left with only $\beta$ on the right hand side:
\begin{align}
(X^{T}X)^{-1}X^{T}y = \hat{\beta}
\end{align}
We will now call it "beta hat" $\hat{\beta}$ because it now represents our estimated values for $\beta_0$ and $\beta_1$
### Lets calculate our $\beta$ coefficients with numpy!
```python
X = sm.add_constant(df[feature]).values
print('X')
print(X)
y = df[target].values[:, np.newaxis]
print('y')
print(y)
X_transpose = X.T
print('X Transpose')
print(X_transpose)
X_transpose_X = X_transpose @ X
print('X Transpose X')
print(X_transpose_X)
X_transpose_X_inverse = np.linalg.inv(X_transpose_X)
print('X Transpose X Inverse')
print(X_transpose_X_inverse)
X_transpose_y = X_transpose @ y
print('X Transpose y')
print(X_transpose_y)
beta_hat = X_transpose_X_inverse @ X_transpose_y
print('Beta Hat')
print(beta_hat)
```
X
[[ 1. 2.4 ]
[ 1. 2.89]
[ 1. 0.85]
[ 1. 4.21]
[ 1. 3.02]
[ 1. 3.62]
[ 1. 1.08]
[ 1. -0.39]
[ 1. 3.86]
[ 1. 2.27]
[ 1. 0.38]
[ 1. 1.04]
[ 1. 2.36]
[ 1. 1.72]
[ 1. 0.1 ]
[ 1. 0.95]]
y
[[44.6 ]
[57.76]
[49.91]
[61.34]
[49.6 ]
[61.79]
[48.95]
[44.7 ]
[59.17]
[53.94]
[46.55]
[54.74]
[50.27]
[51.24]
[46.32]
[52. ]]
X Transpose
[[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. ]
[ 2.4 2.89 0.85 4.21 3.02 3.62 1.08 -0.39 3.86 2.27 0.38 1.04
2.36 1.72 0.1 0.95]]
X Transpose X
[[16. 30.36 ]
[30.36 86.821]]
X Transpose X Inverse
[[ 0.18575056 -0.06495418]
[-0.06495418 0.03423145]]
X Transpose y
[[ 832.88 ]
[1669.7967]]
Beta Hat
[[46.24764802]
[ 3.06052805]]
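In practice we rarely form $(X^{T}X)^{-1}$ explicitly; `np.linalg.lstsq` solves the same least squares problem more stably and returns the same $\hat{\beta}$.
```python
beta_hat_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat_lstsq)   # matches Beta Hat above
```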
# Multiple Regression
Simple or bivariate linear regression involves a single $x$ variable and a single $y$ variable. However, we can have many $x$ variables. A linear regression model that involves multiple x variables is known as **Multiple** Regression - NOT MULTIVARIATE!
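Mechanically, nothing about the fitting call changes: `X` simply holds more than one column. The sketch below (ours) throws in `Year` purely as an illustrative second feature; it is not the additional variable this lesson goes on to use.
```python
X_multi = df[[feature, 'Year']]
multi_model = LinearRegression().fit(X_multi, df[target])
print(multi_model.intercept_, multi_model.coef_)   # one coefficient per feature
```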
```python
df.sort_values(by='Square Error', ascending=False)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Incumbent Party Candidate</th>
<th>Other Candidate</th>
<th>Incumbent Party Vote Share</th>
<th>Prediction</th>
<th>Error</th>
<th>Absolute Error</th>
<th>Average Recent Growth in Personal Incomes</th>
<th>Square Error</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>1952</td>
<td>Stevenson</td>
<td>Eisenhower</td>
<td>44.60</td>
<td>53.592915</td>
<td>8.992915</td>
<td>8.992915</td>
<td>2.40</td>
<td>80.872526</td>
</tr>
<tr>
<th>4</th>
<td>1968</td>
<td>Humphrey</td>
<td>Nixon</td>
<td>49.60</td>
<td>55.490443</td>
<td>5.890443</td>
<td>5.890443</td>
<td>3.02</td>
<td>34.697316</td>
</tr>
<tr>
<th>11</th>
<td>1996</td>
<td>Clinton</td>
<td>Dole</td>
<td>54.74</td>
<td>49.430597</td>
<td>-5.309403</td>
<td>5.309403</td>
<td>1.04</td>
<td>28.189758</td>
</tr>
<tr>
<th>5</th>
<td>1972</td>
<td>Nixon</td>
<td>McGovern</td>
<td>61.79</td>
<td>57.326760</td>
<td>-4.463240</td>
<td>4.463240</td>
<td>3.62</td>
<td>19.920515</td>
</tr>
<tr>
<th>12</th>
<td>2000</td>
<td>Gore</td>
<td>Bush, Jr.</td>
<td>50.27</td>
<td>53.470494</td>
<td>3.200494</td>
<td>3.200494</td>
<td>2.36</td>
<td>10.243163</td>
</tr>
<tr>
<th>15</th>
<td>2012</td>
<td>Obama</td>
<td>Romney</td>
<td>52.00</td>
<td>49.155150</td>
<td>-2.844850</td>
<td>2.844850</td>
<td>0.95</td>
<td>8.093173</td>
</tr>
<tr>
<th>1</th>
<td>1956</td>
<td>Eisenhower</td>
<td>Stevenson</td>
<td>57.76</td>
<td>55.092574</td>
<td>-2.667426</td>
<td>2.667426</td>
<td>2.89</td>
<td>7.115161</td>
</tr>
<tr>
<th>3</th>
<td>1964</td>
<td>Johnson</td>
<td>Goldwater</td>
<td>61.34</td>
<td>59.132471</td>
<td>-2.207529</td>
<td>2.207529</td>
<td>4.21</td>
<td>4.873184</td>
</tr>
<tr>
<th>8</th>
<td>1984</td>
<td>Reagan</td>
<td>Mondale</td>
<td>59.17</td>
<td>58.061286</td>
<td>-1.108714</td>
<td>1.108714</td>
<td>3.86</td>
<td>1.229246</td>
</tr>
<tr>
<th>2</th>
<td>1960</td>
<td>Nixon</td>
<td>Kennedy</td>
<td>49.91</td>
<td>48.849097</td>
<td>-1.060903</td>
<td>1.060903</td>
<td>0.85</td>
<td>1.125515</td>
</tr>
<tr>
<th>10</th>
<td>1992</td>
<td>Bush, Sr.</td>
<td>Clinton</td>
<td>46.55</td>
<td>47.410649</td>
<td>0.860649</td>
<td>0.860649</td>
<td>0.38</td>
<td>0.740716</td>
</tr>
<tr>
<th>9</th>
<td>1988</td>
<td>Bush, Sr.</td>
<td>Dukakis</td>
<td>53.94</td>
<td>53.195047</td>
<td>-0.744953</td>
<td>0.744953</td>
<td>2.27</td>
<td>0.554955</td>
</tr>
<tr>
<th>6</th>
<td>1976</td>
<td>Ford</td>
<td>Carter</td>
<td>48.95</td>
<td>49.553018</td>
<td>0.603018</td>
<td>0.603018</td>
<td>1.08</td>
<td>0.363631</td>
</tr>
<tr>
<th>7</th>
<td>1980</td>
<td>Carter</td>
<td>Reagan</td>
<td>44.70</td>
<td>45.054042</td>
<td>0.354042</td>
<td>0.354042</td>
<td>-0.39</td>
<td>0.125346</td>
</tr>
<tr>
<th>13</th>
<td>2004</td>
<td>Bush, Jr.</td>
<td>Kerry</td>
<td>51.24</td>
<td>51.511756</td>
<td>0.271756</td>
<td>0.271756</td>
<td>1.72</td>
<td>0.073851</td>
</tr>
<tr>
<th>14</th>
<td>2008</td>
<td>McCain</td>
<td>Obama</td>
<td>46.32</td>
<td>46.553701</td>
<td>0.233701</td>
<td>0.233701</td>
<td>0.10</td>
<td>0.054616</td>
</tr>
</tbody>
</table>
</div>
```python
"""
Fatalities denotes the cumulative number of American military
fatalities per millions of US population the in Korea, Vietnam,
Iraq and Afghanistan wars during the presidential terms
preceding the 1952, 1964, 1968, 1976 and 2004, 2008 and
2012 elections.
http://www.douglas-hibbs.com/HibbsArticles/HIBBS-PRESVOTE-SLIDES-MELBOURNE-Part1-2014-02-26.pdf
"""
columns = ['Year','US Military Fatalities per Million']
data = [[1952,190],
[1956, 0],
[1960, 0],
[1964, 1],
[1968,146],
[1972, 0],
[1976, 2],
[1980, 0],
[1984, 0],
[1988, 0],
[1992, 0],
[1996, 0],
[2000, 0],
[2004, 4],
[2008, 14],
[2012, 5]]
deaths = pd.DataFrame(data=data, columns=columns)
```
```python
df = df.merge(deaths)
```
```python
features = ['Average Recent Growth in Personal Incomes',
'US Military Fatalities per Million']
target = 'Incumbent Party Vote Share'
X = df[features]
y = df[target]
model = sm.OLS(y, sm.add_constant(X))
results = model.fit()
print(results.summary())
```
OLS Regression Results
======================================================================================
Dep. Variable: Incumbent Party Vote Share R-squared: 0.871
Model: OLS Adj. R-squared: 0.851
Method: Least Squares F-statistic: 43.77
Date: Mon, 29 Apr 2019 Prob (F-statistic): 1.68e-06
Time: 12:57:54 Log-Likelihood: -33.412
No. Observations: 16 AIC: 72.82
Df Residuals: 13 BIC: 75.14
Df Model: 2
Covariance Type: nonrobust
=============================================================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------------------------------------
const 46.6621 0.937 49.806 0.000 44.638 48.686
Average Recent Growth in Personal Incomes 3.4820 0.408 8.527 0.000 2.600 4.364
US Military Fatalities per Million -0.0537 0.010 -5.408 0.000 -0.075 -0.032
==============================================================================
Omnibus: 2.475 Durbin-Watson: 2.607
Prob(Omnibus): 0.290 Jarque-Bera (JB): 0.666
Skew: 0.096 Prob(JB): 0.717
Kurtosis: 3.981 Cond. No. 110.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
/anaconda3/lib/python3.7/site-packages/scipy/stats/stats.py:1416: UserWarning: kurtosistest only valid for n>=20 ... continuing anyway, n=16
"anyway, n=%i" % int(n))
```python
model = LinearRegression()
model.fit(X, y)
print('Intercept:', model.intercept_)
pd.Series(model.coef_, features)
```
Intercept: 46.662064098015115
Average Recent Growth in Personal Incomes 3.481956
US Military Fatalities per Million -0.053661
dtype: float64
```python
np.sqrt(mean_squared_error(y, model.predict(X)))
```
1.9528760602450586
# Train / Test Split
```python
train = df.query('Year < 2008')
test = df.query('Year >= 2008')
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
X_train.shape, y_train.shape, X_test.shape, y_test.shape
```
((14, 2), (14,), (2, 2), (2,))
```python
model.fit(X_train, y_train)
model.predict(X_test)
```
array([45.86970509, 49.39965918])
```python
test
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Year</th>
<th>Incumbent Party Candidate</th>
<th>Other Candidate</th>
<th>Incumbent Party Vote Share</th>
<th>Prediction</th>
<th>Error</th>
<th>Absolute Error</th>
<th>Average Recent Growth in Personal Incomes</th>
<th>Square Error</th>
<th>US Military Fatalities per Million</th>
</tr>
</thead>
<tbody>
<tr>
<th>14</th>
<td>2008</td>
<td>McCain</td>
<td>Obama</td>
<td>46.32</td>
<td>46.553701</td>
<td>0.233701</td>
<td>0.233701</td>
<td>0.10</td>
<td>0.054616</td>
<td>14</td>
</tr>
<tr>
<th>15</th>
<td>2012</td>
<td>Obama</td>
<td>Romney</td>
<td>52.00</td>
<td>49.155150</td>
<td>-2.844850</td>
<td>2.844850</td>
<td>0.95</td>
<td>8.093173</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
### More about the "Bread & Peace" model
- https://fivethirtyeight.com/features/what-do-economic-models-really-tell-us-about-elections/
- https://statmodeling.stat.columbia.edu/2007/12/15/bread_and_peace/
- https://avehtari.github.io/RAOS-Examples/ElectionsEconomy/hibbs.html
- https://douglas-hibbs.com/
- http://www.douglas-hibbs.com/HibbsArticles/HIBBS-PRESVOTE-SLIDES-MELBOURNE-Part1-2014-02-26.pdf
# Dimensionality in Linear Regression!
Multiple Regression is simply an extension of the bivariate case. The reason why we see the bivariate case demonstrated so often is simply because it's easier to graph, and all of the intuition from the bivariate case carries over as we keep adding explanatory variables.
As we increase the number of $x$ variables in our model we are simply fitting an (n-1)-dimensional hyperplane to an n-dimensional cloud of points within an n-dimensional hypercube.
# Interpreting Coefficients
One of Linear Regression's strengths is that the parameters of the model (coefficients) are readily interpretable and useful. Not only do they describe the relationship between x and y but they put a number on just how much x is associated with y. We should be careful not to speak about this relationship in terms of causality, because these coefficients are in fact correlative measures. We would need a host of additional techniques in order to estimate a causal effect using linear regression (econometrics).
\begin{align}
\hat{\beta}_1 = \frac{Cov(x,y)}{Var(x)}
\end{align}
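As a quick numerical sanity check (with synthetic data, not the election dataset), the slope given by this covariance formula matches the slope obtained from the normal equations:

```python
import numpy as np

# Synthetic data, made up purely for illustration
rng = np.random.default_rng(42)
x = rng.normal(2.0, 1.2, size=200)
y = 46.0 + 3.0 * x + rng.normal(0, 1.5, size=200)

# Slope from the covariance formula
beta1_cov = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Slope from the normal equations (X^T X)^-1 X^T y
X = np.column_stack([np.ones_like(x), x])
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

print(beta1_cov)    # close to 3.0
print(beta_hat[1])  # same value, up to floating point error
```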
Going back to the two equations for the two models that we have estimated so far, let's replace their beta values with their actual values to see if we can make sense of how to interpret these beta coefficients.
## Bivariate Model
$y_i = \beta_0 + \beta_1temperature + \epsilon$
$sales_i = -596.2 + 24.69temperature + \epsilon$
What might $\beta_0$ in this model represent? It represents the level of sales that we would have if the temperature were 0. Since this is negative, one way of interpreting it is that it's so cold outside that you would have to pay people to eat ice cream. A more appropriate interpretation is probably that the ice cream store owner should close his store down long before the temperature reaches 0 degrees Fahrenheit (-17.8 Celsius). The owner can compare his predicted sales with his costs of doing business to know how warm the weather has to get before he should open his store.
What might the $\beta_1$ in this model represent? It represents the increase in sales for each degree of temperature increase. For every degree that the temperature goes up outside he has $25 more in sales.
## Multiple Regression Model
$y_i = \beta_0 + \beta_1age_i + \beta_2weight_i + \epsilon$
$BloodPressure_i = 30.99+ .86age_i + .33weight_i + \epsilon$
The interpretation of the coefficients in this example is similar. The intercept value represents the blood pressure a person would have if they were 0 years old and weighed 0 pounds. This is not a super useful interpretation. If we look at our data it is unlikely that we have any measurements like these in the dataset. This means that our interpretation of the intercept likely comes from extrapolating the regression line (plane). Coefficients having straightforward interpretations is a strength of linear regression, as long as we're careful about extrapolation and only interpret our data within the context in which it was gathered.
The other coefficients indicate how much, on average, the blood pressure of a person similar to those in our dataset will go up with each additional year of age and each additional pound of weight.
# Basic Model Validation
One of the downsides of relying on $R^2$ too much is that although it tells you when you're fitting the data well, it doesn't tell you when you're *overfitting* the data. The best way to tell if you're overfitting the data is to get some data that your model hasn't seen yet, and evaluate how your predictions do. This is essentially what "model validation" is.
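A minimal sketch of that idea with synthetic data (assuming scikit-learn is available): fit on one part of the data, evaluate on the held-out part, and compare the two errors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic data, purely illustrative
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# If the test RMSE is much worse than the train RMSE, we are likely overfitting.
rmse_train = np.sqrt(mean_squared_error(y_train, model.predict(X_train)))
rmse_test = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(rmse_train, rmse_test)
```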
# Why is Linear Regression so Important?
## Popularity
Linear Regression is an extremely popular technique that every data scientist **needs** to understand. It's not the most advanced technique, and there are supervised learning techniques that will obtain a higher accuracy, but what it lacks in accuracy it makes up for in interpretability and simplicity.
## Interpretability
Few other models possess coefficients that are so directly linked to their variables with such a clear interpretation. Tomorrow we're going to learn about ways to make them even easier to interpret.
## Simplicity
A linear regression model can be communicated just by writing out its equation. It's kind of incredible that such high-dimensional relationships can be described by just a linear combination of variables and coefficients.
| be0eda1a091b44accb5ead828cd32329eee621bb | 265,801 | ipynb | Jupyter Notebook | module1-ols-regression/ols-regression.ipynb | Jaavion/DS-Unit-2-Sprint-2-Regression | 42dfba88be9d346a31b017f15893b697ede2b185 | [
"MIT"
] | null | null | null | module1-ols-regression/ols-regression.ipynb | Jaavion/DS-Unit-2-Sprint-2-Regression | 42dfba88be9d346a31b017f15893b697ede2b185 | [
"MIT"
] | null | null | null | module1-ols-regression/ols-regression.ipynb | Jaavion/DS-Unit-2-Sprint-2-Regression | 42dfba88be9d346a31b017f15893b697ede2b185 | [
"MIT"
] | null | null | null | 71.509551 | 25,006 | 0.648718 | true | 17,519 | Qwen/Qwen-72B | 1. YES
2. YES | 0.771843 | 0.651355 | 0.502744 | __label__eng_Latn | 0.661891 | 0.006372 |
# Model Project
***
_In this model project we will present a simple Robinson Crusoe production economy. We will solve the model analytically using sympy, evaluate the markets in different parameterizations of price and wage and visualize one solution_
## The theoretical model:
Imagine that Crusoe is schizophrenic and makes his decisions as a manager and as a consumer separately. His decisions are, however, guided by market prices: the labor wage and the price of the consumption good. It is assumed that Crusoe is endowed with a total time endowment of 60 hours per week.
<br> **Producer problem:** When Crusoe acts as a manager he seeks to maximize his profit subject to the production function while taking the price and wage as given.
<br>
\\[ \max_{x,l} px-wl \\]
subject to
\\[ x=f(l)=Al^\beta \\]
<br> Where p is the market price of the good, w is the wage, x is the good and l is labor. A and $\beta$ reflect technology and returns to scale, respectively.
<br> **Consumer problem:** When acting as a consumer, Crusoe maximizes his utility over the consumption good x and leisure (the latter is defined as what's left of the total time endowment when working l hours)
<br>
As the consumer and owner of the firm, Crusoe receives the firm's profit in addition to the wage income from "selling" his labor in the producer problem.
<br>
\\[ \max_{x,l} u(x,(L-l))=x^\alpha(L-l)^{(1-\alpha)} \\]
subject to
\\[ px=wl+\pi(w,p) \\]
<br> When solving the model, we need to derive demand and supply expressions for labor and the consumption good. When equalizing supply and demand in one of the markets and acquiring an equilibrium, it follows from Walras' law that the other market will also reach an equilibrium.
<br>
Hence, the model is solved by first optimizing in the markets separately, deriving the supply and demand expressions, then equalizing supply and demand across markets and solving for the consumption price and labor wage.
## The analytical solution:
Install packages.
```python
import numpy as np
import sympy as sm
%matplotlib inline
import matplotlib.pyplot as plt
```
In the following, we will primarily analyze the model using sympy. However, due to computational problems when solving the model using sympy with algebraic expressions only (see https://docs.sympy.org/0.7.6/tutorial/solvers.html), we define some of the parameter values as follows:
```python
A = 13.15
beta = 0.5
alpha = 2/3
L = 60
I = 10
```
### Producer problem:
The producer problem is quite simple and the easiest way to solve the maximization problem is to substitute the constraint (production function) into the profit function, deriving the reduced form.
```python
# Symbols for price, wage, output and labor (assumed not defined elsewhere; needed by the functions below)
p, w, x, l = sm.symbols('p w x l', positive=True)
# Profit function
def prof(x,l):
return p*x-w*l
# Production function
def prod(l):
return A*l**beta
# Reduced form - substituting the production function into the profit function
def reduced(l):
return prof(prod(l),l)
```
Substituting for x and maximizing w.r.t labor (l):
```python
# Optimization using sympy diff:
focProd = sm.diff(reduced(l),l)
# Isolating labor and thus, deriving labor demand:
laborDemand = sm.solve(focProd,l)
# Finding the supply of goods:
profSubs = prof(x,laborDemand[0])
goodSupply = sm.solve(profSubs,x)
# Printing labor demand and goods supply
print("Labor demand: lD=", laborDemand)
print("supply of goods: xS=", goodSupply)
```
Labor demand: lD= [43.230625*p**2/w**2]
supply of goods: xS= [43.230625*p/w]
Profit, as a function of w and p, is derived by inserting labor demand and goods supply into the profit function:
```python
Profit = (1-beta)*(A*p)**(1/(1-beta))*(beta/w)**(beta/(1-beta))
print(Profit)
```
43.230625*p**2.0*(1/w)**1.0
### Consumer problem:
The consumer problem is not as simple as the producer problem, so we solve it using sympy's version of the Lagrange method.
```python
# The variables to maximize w.r.t
x, l = sm.var('x,l',real=True)
# The objective function
util = x**alpha*(L-l)**(1-alpha)
# Budget constraint
bud = p*x-w*l-Profit
# Specifying the shadow price
lam = sm.symbols('lambda',real = True)
# Setting up the Lagrangian (named Lagr so that the time endowment L is not overwritten)
Lagr = util - lam*bud
```
```python
# Differentiating w.r.t x and l
gradL = [sm.diff(Lagr, c) for c in [x, l]]
# The focs and the shadow price
KKT_eqs = gradL + [bud]
KKT_eqs
```
```python
# Showing the stationary points solving the constrained problem
stationary_points = sm.solve(KKT_eqs,[x,l,lam],dict=True)
stationary_points
```
We note that all proposed solutions for l and x are the same; the only variation between the three proposed solutions is the shadow price. Hence, we proceed with these solutions for l and x and derive the optimal wage. This is done by equating supply and demand for labor, thus obtaining equilibrium in that market.
```python
equalLab = sm.Eq(-14.4102083333333*p**2/w**2+40,43.230625*p**2/w**2)
opt_wag = sm.solve(equalLab,w)
print("Optimal wage depending on price", opt_wag)
```
Optimal wage depending on price [-1.20042527186549*p, 1.20042527186549*p]
Since the price cannot be negative, the solution is w = 1.2*p. This means that any price level implies a wage approximately 1.2 times higher, with equilibrium in both markets. In the following, we evaluate the optimal wage expression and labor demand at different values of p and, hence, w.
```python
# We convert the symbolic optimal wage expression a function depending of p
_opt_wag = sm.lambdify(p,opt_wag[1])
# We evaluate the wage in a price of 10 and 1, respectively.
p1_vec = np.array([10,1])
wages = _opt_wag(p1_vec)
print("Optimal wage, when price is 10 and 1, respectively: ",wages)
# We evaluate the labor demand in the prices and wages. First making the labor demand
# expression a function depending on p and w.
_lab_dem = sm.lambdify((p,w),laborDemand)
p1_vec = np.array([10,1])
# Labor demand evaluated in price and wages.
labor_dem1 = [_lab_dem(p1_vec[0],wages[0]), _lab_dem(p1_vec[1],wages[1])]
print("Labor demand evaluated in combination of wages and prices: ", labor_dem1)
# Labor supply from lagrange optimization problem
_lab_sup = [-14.4102083333333*p1_vec[0]**2/wages[0]**2+40,-14.4102083333333*p1_vec[1]**2/wages[1]**2+40]
print("Labor supply evaluated in combination of wages and prices: ", _lab_sup)
# Profit in different combination of wages and prices
Profit_eval = 43.230625*p1_vec**2.0*(1/wages)**1.0
print("Profit evaluated in combination of prices and wages:", [Profit_eval[0], Profit_eval[1]])
# Demand of consumption good
good_opt = prod(30)
print("Demand and supply of consumption good: ", good_opt)
# Utility in different combination of wages and prices
_utility = sm.lambdify((x,l),util)
print("Utility evaluated in combination of prices and wages:", _utility(good_opt,30))
```
Optimal wage, when price is 10 and 1, respectively: [12.00425272 1.20042527]
Labor demand evaluated in combination of wages and prices: [[29.999999999999954], [29.999999999999957]]
Labor supply evaluated in combination of wages and prices: [30.00000000000004, 30.000000000000036]
Profit evaluated in combination of prices and wages: [360.12758155964644, 36.01275815596465]
Demand and supply of consumption good: 72.02551631192935
Utility evaluated in combination of prices and wages: 53.78956164406131
## Visualization:
We now visualize the solution when the price is 1 and the wage is 1.2.
```python
def prof_ny(x,l,profit):
return 1*x-1.2*l+profit
def util_ny(l, profit):
return (profit/((L-l)**(1-alpha)))**(1/alpha)
def budget_ny(x,l,profit):
return x-1.2*l+profit
fig = plt.figure(figsize=(8,4),dpi=100)
labor_vec = np.linspace(0,59,500)
goods_vec = np.linspace(0,140,500)
ax_left = fig.add_subplot(1,2,1)
ax_left.plot(labor_vec, prod(labor_vec))
ax_left.plot(labor_vec, prof_ny(goods_vec,labor_vec,0))
ax_left.plot(labor_vec, prof_ny(goods_vec,labor_vec,Profit_eval[1]))
ax_left.set_title('Price-taking producer')
ax_left.set_xlabel('l')
ax_left.set_ylabel('x')
ax_right = fig.add_subplot(1,2,2)
ax_right.plot(labor_vec, util_ny(labor_vec,54))
ax_right.plot(labor_vec, budget_ny(goods_vec,labor_vec,Profit_eval[1]))
ax_right.set_title('Price-taking consumer')
ax_right.set_xlabel('l')
ax_right.set_ylabel('x')
```
The found solution of p=1 and w=1.2 is represented by the tangency of the isoprofit curve and the production plan in the left panel, and by the tangency of the budget constraint and the indifference curve in the right panel.
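As a rough numerical check (not part of the original notebook), we can verify the two tangency conditions at the computed optimum $l=30$, $x \approx 72.03$ with $p=1$ and $w \approx 1.2$, reusing the parameters and `prod` defined above; the starred names below are introduced only for this check.

```python
# Check the two tangency (first-order) conditions at the computed optimum.
# A, beta, alpha, L and prod() come from the cells above.
p_star, w_star = 1.0, 1.20042527186549
l_star = 30.0
x_star = prod(l_star)          # ~72.03

# Producer: value of the marginal product equals the wage, p*f'(l) = w
mpl = A * beta * l_star**(beta - 1)
print("p*f'(l) =", p_star * mpl, " vs w =", w_star)

# Consumer: marginal rate of substitution equals the relative price w/p
mrs = ((1 - alpha) * x_star) / (alpha * (L - l_star))
print("MRS =", mrs, " vs w/p =", w_star / p_star)
```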
| cdaba8bdaad60deb04573dacea255ed432411996 | 75,219 | ipynb | Jupyter Notebook | modelproject/ModelProject4.ipynb | NumEconCopenhagen/projects-2019-cl | 39de2cd51b04af07852cd2f3e614809373c6fb82 | [
"MIT"
] | null | null | null | modelproject/ModelProject4.ipynb | NumEconCopenhagen/projects-2019-cl | 39de2cd51b04af07852cd2f3e614809373c6fb82 | [
"MIT"
] | 8 | 2019-04-14T15:53:56.000Z | 2019-05-14T21:53:36.000Z | modelproject/ModelProject4.ipynb | NumEconCopenhagen/projects-2019-cl | 39de2cd51b04af07852cd2f3e614809373c6fb82 | [
"MIT"
] | null | null | null | 149.244048 | 41,052 | 0.858374 | true | 2,247 | Qwen/Qwen-72B | 1. YES
2. YES | 0.851953 | 0.752013 | 0.640679 | __label__eng_Latn | 0.989257 | 0.326843 |
# Calibration (Top Left -> Bottom Right)
```python
import cv2
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands
# text cv2 puttext
font = cv2.FONT_HERSHEY_SIMPLEX
location = (100,50)
fontScale = 1
fontColor = (255,255,255)
lineType = 2
cali=[]
# For webcam input:
hands = mp_hands.Hands(
min_detection_confidence=0.75, min_tracking_confidence=0.75)
cap = cv2.VideoCapture(0)
countdown=100
while cap.isOpened():
success, image = cap.read()
if not success:
break
# Flip the image horizontally for a later selfie-view display, and convert
# the BGR image to RGB.
image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
# To improve performance, optionally mark the image as not writeable to
# pass by reference.
image.flags.writeable = False
results = hands.process(image)
# Draw the hand annotations on the image.
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if results.multi_hand_landmarks:
cv2.putText(image,str(countdown/5), location, font, fontScale,fontColor,lineType)
for hand_landmarks in results.multi_hand_landmarks:
mp_drawing.draw_landmarks(image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
if countdown==0:
land = hand_landmarks
val = land.landmark
cali.append([val[8].x,val[8].y])
countdown=100
else:
cv2.putText(image,'No Hands!', location, font, fontScale,fontColor,lineType)
cv2.imshow('MediaPipe Hands', image)
key = cv2.waitKey(1)
countdown-=1
if key == 27:
break
if len(cali)==2:
break
hands.close()
cap.release()
cv2.destroyAllWindows()
```
# Calculations
```python
print(cali)
from sympy import symbols, Eq, solve
x, y = symbols('x,y')
# defining equations
eq1 = Eq((x*cali[0][0]+y), 0)
eq2 = Eq((x*cali[1][0]+y), 1920)
e1=solve((eq1, eq2), (x, y))
l1=list(e1.values())
x, y = symbols('x,y')
# defining equations
eq1 = Eq((x*cali[0][1]+y), 0)
eq2 = Eq((x*cali[1][1]+y), 1080)
e2=solve((eq1, eq2), (x, y))
l2=list(e2.values())
print(l1)
print(l2)
```
[[0.557029664516449, 0.21512927114963531], [0.7416685819625854, 0.47747597098350525]]
[10398.6744861636, -5792.37016044346]
[4116.68986377151, -885.620489942256]
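The same two-point calibration can also be computed without sympy; this is a small alternative sketch, assuming `cali` holds the two recorded corner points and the 1920x1080 screen used above.

```python
import numpy as np

def affine_map_1d(p0, p1, s0, s1):
    """Return (scale, offset) so that scale*p + offset maps p0 -> s0 and p1 -> s1."""
    scale = (s1 - s0) / (p1 - p0)
    offset = s0 - scale * p0
    return scale, offset

# cali[0] = normalized (x, y) of the top-left corner, cali[1] = bottom-right corner
sx, ox = affine_map_1d(cali[0][0], cali[1][0], 0, 1920)
sy, oy = affine_map_1d(cali[0][1], cali[1][1], 0, 1080)

l1 = [sx, ox]  # same role as the sympy solution above
l2 = [sy, oy]
print(l1, l2)
```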
# Cursor Movement (No Click)
```python
import pyautogui
pyautogui.FAILSAFE= False
import cv2
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands
# text cv2 puttext
font = cv2.FONT_HERSHEY_SIMPLEX
location = (100,50)
fontScale = 1
fontColor = (255,255,255)
lineType = 2
# For webcam input:
hands = mp_hands.Hands(
min_detection_confidence=0.75, min_tracking_confidence=0.75)
cap = cv2.VideoCapture(0)
while cap.isOpened():
success, image = cap.read()
if not success:
break
# Flip the image horizontally for a later selfie-view display, and convert
# the BGR image to RGB.
image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
# To improve performance, optionally mark the image as not writeable to
# pass by reference.
image.flags.writeable = False
results = hands.process(image)
# Draw the hand annotations on the image.
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if results.multi_hand_landmarks:
for hand_landmarks in results.multi_hand_landmarks:
mp_drawing.draw_landmarks(image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
land = hand_landmarks
val = land.landmark
x=val[8].x*l1[0]+l1[1]
if(x<0):
x=0
if(x>1920):
x=1920
y=val[8].y*l2[0]+l2[1]
if(y<0):
y=0
if(y>1080):
y=1080
pyautogui.moveTo(x, y)
else:
cv2.putText(image,'No Hands!', location, font, fontScale,fontColor,lineType)
cv2.imshow('MediaPipe Hands', image)
key = cv2.waitKey(1)
if key == 27:
break
hands.close()
cap.release()
cv2.destroyAllWindows()
```
```python
import pyautogui
print(pyautogui.size())
```
Size(width=1920, height=1080)
# Mouse
```python
from keras.models import load_model
import numpy as np
import cv2
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands
model=load_model("curmodel")
import pyautogui
pyautogui.FAILSAFE= False
```
```python
Char="0123456789DOU$"
font = cv2.FONT_HERSHEY_SIMPLEX
location = (100,50)
fontScale = 1
fontColor = (255,255,255)
lineType = 2
hands = mp_hands.Hands(
min_detection_confidence=0.5, min_tracking_confidence=0.5)
cap = cv2.VideoCapture(0)
ccount=20
while cap.isOpened():
success, image = cap.read()
if not success:
break
image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)
image.flags.writeable = False
results = hands.process(image)
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if results.multi_hand_landmarks:
L=[]
for hand_landmarks in results.multi_hand_landmarks:
mp_drawing.draw_landmarks(image, hand_landmarks, mp_hands.HAND_CONNECTIONS)
land = hand_landmarks
val = land.landmark
first=val[0];
for j in val:
L.append(j.x-first.x);
L.append(j.y-first.y);
L.append(j.z-first.z);
break
L=L[3::]
L=np.array(L)
x = np.expand_dims(L,0)
x = np.expand_dims(x,-1)
fi=int(np.argmax(model.predict(x)))
if fi==1:
for hand_landmarks in results.multi_hand_landmarks:
land = hand_landmarks
val = land.landmark
x=val[8].x*l1[0]+l1[1]
if(x<0):
x=0
if(x>1920):
x=1920
y=val[8].y*l2[0]+l2[1]
if(y<0):
y=0
if(y>1080):
y=1080
pyautogui.moveTo(x, y)
if fi==6:
ccount-=1
for hand_landmarks in results.multi_hand_landmarks:
land = hand_landmarks
val = land.landmark
x=val[8].x*l1[0]+l1[1]
if(x<0):
x=0
if(x>1920):
x=1920
y=val[8].y*l2[0]+l2[1]
if(y<0):
y=0
if(y>1080):
y=1080
if(ccount==0):
ccount=20
print("Single click")
pyautogui.click(x, y)
if fi==2:
ccount-=1
for hand_landmarks in results.multi_hand_landmarks:
land = hand_landmarks
val = land.landmark
x=val[8].x*l1[0]+l1[1]
if(x<0):
x=0
if(x>1920):
x=1920
y=val[8].y*l2[0]+l2[1]
if(y<0):
y=0
if(y>1080):
y=1080
if(ccount==0):
ccount=20
print("Double clicked")
pyautogui.click(x, y,clicks=2)
else:
cv2.putText(image,'No Hands!', location, font, fontScale,fontColor,lineType)
cv2.imshow('MediaPipe Hands', image)
key = cv2.waitKey(1)
if key == 27:
break
hands.close()
cap.release()
cv2.destroyAllWindows()
```
Single click
Double clicked
Single click
Single click
Single click
Single click
Single click
```python
```
| ff2e912cc762621473b876349d89a8f6b0c62e0c | 12,065 | ipynb | Jupyter Notebook | Hackverse/Cursor/.ipynb_checkpoints/Cursor-checkpoint.ipynb | princesinghr1/team_Light | e015f9517e5347fe6fd731928e99697621d70a81 | [
"MIT"
] | null | null | null | Hackverse/Cursor/.ipynb_checkpoints/Cursor-checkpoint.ipynb | princesinghr1/team_Light | e015f9517e5347fe6fd731928e99697621d70a81 | [
"MIT"
] | null | null | null | Hackverse/Cursor/.ipynb_checkpoints/Cursor-checkpoint.ipynb | princesinghr1/team_Light | e015f9517e5347fe6fd731928e99697621d70a81 | [
"MIT"
] | 2 | 2021-02-27T07:53:30.000Z | 2021-02-27T07:54:04.000Z | 30.31407 | 98 | 0.458185 | true | 2,169 | Qwen/Qwen-72B | 1. YES
2. YES | 0.817574 | 0.692642 | 0.566286 | __label__eng_Latn | 0.373711 | 0.154003 |
# Radar FMCW and CSM algorithms
This notebook shows a naive (didactically intuitive) implementation of following algorithms:
- FMCW (Frequency Modulated Continuous Wave)
- CSM (Chirp Sequence Modulation)
Both algorithms are used for range and velocity measurements in automated/assisted driving domain.
## Introduction
### Doppler
An electromagnetic wave
\begin{align}
w(t) &= a_t cos(2\pi f_0t)
\end{align}
reflected off a moving object with velocity $v$ leads to a new wave:
\begin{align}
u_r(t) &= a_r \text{cos}(2\pi (f_0 - f_{d}) t + \phi_r + \phi_t) \\
\end{align}
In order to understand what signal the algorithms operate on we have to take a closer look at the inner workings of a continuous wave radar (simplified view):
[Source](https://en.wikipedia.org/wiki/Continuous-wave_radar#/media/File:Bsp2_CW-Radar.EN.png)
### Antennas
The CW radar has one transmitting antenna and at least one receiving antenna. The former emits a continuous radio wave; this is different from pulse-based radar systems. The receiving antenna picks up the waves reflected from the objects of interest and converts the signal into the electrical domain (a voltage).
Automotive long-range radars use a $77.7\text{GHz}$ base frequency.
### RF-Generator
The RF generator is a module which dynamically generates a sinusoidal voltage with a particular frequency. Since both methods presented here modulate (alter) the frequency of the transmitted radio wave, the generator is an essential part of the system.
### Mixer
The mixer is the most important component in the system described here. It allows us to measure the frequency change introduced by the moving object we are going to detect. We apply to the antenna the voltage $u_t(t)$ and receive the voltage $u_r(t)$.
\begin{align}
u_t(t) &= A_t \text{cos}(2\pi f_0 t+\phi_t) \\
u_r(t) &= A_r \text{cos}(2\pi (f_0 - f_{d}) t + \phi_r)
\end{align}
With Doppler frequency
\begin{align}
f_{d} = \frac{2v}{\lambda} = \frac{2 \dot r}{\lambda} = \frac{2 f_0 \dot r}{c}
\end{align}
Given base wavelength $\lambda$ and target velocity $v=\dot r$.
Theoretically, if we could sample $f_0 - f_{d} \approx 77.7GHz$ directly, we would be done. But high-frequency AD converters (according to Nyquist a sampling rate of $\geq 2f_0 \approx 155GHz$ would be required) are technically expensive, and the change introduced by $f_d$ is tiny compared to $f_0$. That's why we need the mixer: to get rid of the high frequency $f_0$ and work only with $f_d$.
The mixer multiplies both signals, corresponding to trigonometry relations:
\begin{align}
\text{cos}(x)\text{cos}(y) = \frac{1}{2} [ \text{cos}(x-y) + \text{cos}(x+y)]
\end{align}
Applied to $u_t(t)$ and $u_r(t)$:
\begin{align}
u_t(t)u_r(t) &= \frac{A_t A_r}{2} [ \text{cos}(2\pi f_0 t+\phi_t-2\pi (f_0 - f_{d}) t - \phi_r) + \text{cos}(2\pi f_0 t+\phi_t+ 2\pi (f_0 - f_{d}) t + \phi_r)] \\
&= \frac{A_t A_r}{2} [ \text{cos}(2\pi f_{d} t + \phi_t - \phi_r) + \text{cos}(2\pi f_0 t+\phi_t + 2\pi (f_0- f_{d}) t + \phi_r)] \\
&= \frac{A_t A_r}{2} [ \text{cos}(2\pi f_{d} t + \phi_t - \phi_r) + \text{cos}(2\pi \underbrace{2 f_0}_{\text{2x 77GHz}} t - 2\pi f_{d}t + \phi_t + \phi_r)] \\
\end{align}
The product of both signals equals a sum of two waves: the first depends only on the Doppler frequency (which is velocity dependent), and the second is a very high frequency signal ($\approx 155$ GHz) which we filter out with the low-pass filter:
\begin{align}
u_{It,r}(t) = \frac{A_t A_r}{2} \text{cos}(2\pi f_{d} t + \phi_t - \phi_r)
\end{align}
After low-pass filtering, the AD converter samples $u_{It,r}(t)$, from which we can measure the (unsigned) velocity. The index $I$ means in-phase. We will later introduce another index in order to extract the sign of the velocity (approaching vs. departing).
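To make the mixing and low-pass step concrete, here is a small, purely illustrative simulation with strongly scaled-down frequencies; all numbers below are made up for the illustration and are not the radar parameters used elsewhere in this notebook.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100e3          # illustrative sample rate [Hz]
f0_toy = 10e3       # illustrative "carrier" [Hz]
fd_toy = 500.0      # illustrative Doppler shift [Hz]
t = np.arange(0, 0.05, 1/fs)

tx = np.cos(2*np.pi*f0_toy*t)              # transmitted signal
rx = np.cos(2*np.pi*(f0_toy - fd_toy)*t)   # received (Doppler-shifted) signal

mixed = tx * rx                            # mixer output: fd term + 2*f0 term

# Low-pass filter keeps only the Doppler term
b, a = butter(4, 2*fd_toy/(fs/2))
baseband = filtfilt(b, a, mixed)

# The strongest FFT bin of the filtered signal sits near fd_toy
spec = np.abs(np.fft.rfft(baseband))
freqs = np.fft.rfftfreq(len(baseband), 1/fs)
print("estimated Doppler frequency:", freqs[np.argmax(spec[1:]) + 1], "Hz")
```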
```python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy.fftpack import fft
```
Constant for velocity of light within the air:
```python
c = 299792458
```
## FCMW
Instead of sending with one frequency, we will alter $f_0$ over time:
\begin{align}
u_t(t) &= A_t \text{cos}(2\pi f_t t+\phi_t)
\end{align}
with
\begin{align}
f_t = f_t(t) = f_0 + m_w(t-t_0)
\end{align}
```python
f_0 = 77.7*1e9 # 77.7GHz
phi_t = 0
A_t = 1
f_ramp = 425*1e6 #Hz
T_ramp = 0.010
m_w = f_ramp/T_ramp
```
```python
def f_t(t):
return f_0 + m_w*(t % T_ramp)
```
```python
def u_t(t):
    f = f_t(t)  # instantaneous transmitted frequency
    return A_t*np.cos(2*np.pi*f*t + phi_t)
```
The received signal depends on both the range $r$ and the velocity $v$:
\begin{align}
u_r(t) &= A_r \text{cos}(2\pi f_r t+\phi_r)
\end{align}
with
\begin{align}
f_r = f_r(t) = f_0 + m_w(t- \frac{2r}{c}-t_0) - \frac{2vf_0}{c}
\end{align}
```python
v = 50/3.6 # 50km/h
r = 100 # 100m distance
```
```python
def f_r(t):
return f_0 + m_w*(t % T_ramp - 2*r/c) - 2*v*f_0/c
```
```python
t = np.arange(0, 2*T_ramp, 1e-8)
```
```python
plt.figure(figsize=(15,5))
plt.plot(t, f_t(t), label="$f_t$")
plt.plot(t, f_r(t), label="$f_r$")
plt.legend()
plt.xlabel("t [s]")
plt.ylabel("frequency [Hz]")
plt.title("Transmitted and received frequencies")
plt.grid();
```
Because we don't see any difference let's zoom in
```python
t_op = T_ramp
plt.figure(figsize=(10,10))
plt.plot(t, f_t(t), label="$f_t$")
plt.plot(t, f_r(t), label="$f_r$")
plt.legend()
plt.xlim([t_op-T_ramp/10000, t_op+T_ramp/10000])
plt.ylim([f_0-100e3, f_0+100e3])
plt.xlabel("t [s]")
plt.ylabel("frequency [Hz]")
plt.title("Transmitted and received frequencies")
plt.grid();
```
If we could measure $\Delta f$ directly we could derive $r$ and $v$:
\begin{align}
\Delta f = f_t-f_r = \frac{2m_w}{c}r + \frac{2 f_0}{c}\dot r
\end{align}
There is just one problem. We have one equation but two unknowns. The solution for that problem is to have a second ramp with a different $m_w$:
\begin{align}
f_t(t) = \begin{cases}
f_0 + m_{w,1}(t-t_0), & \text{for } 0 \leq t-t_0 < T_{r,1}\\
f_0 + m_{w,1} T_{r,1} + m_{w,2}(t-t_0-T_{r,1}), & \text{for } T_{r,1} \leq t-t_0 < T_{r,1}+T_{r,2}
\end{cases}
\end{align}
Define and implement both waveforms:
```python
T_ramp1 = T_ramp*0.7
T_ramp2 = T_ramp*0.3
m_w1 = f_ramp/T_ramp1
m_w2 = -f_ramp/T_ramp2
```
```python
def f_t(t):
r1 = f_0 + m_w1*(t % (T_ramp1+T_ramp2))
r1[(t%(T_ramp1+T_ramp2)) > T_ramp1] = 0
r2 = f_0 + m_w1*T_ramp1 + m_w2*( (t-T_ramp1) % (T_ramp1+T_ramp2))
r2[(t%(T_ramp1+T_ramp2)) <= T_ramp1] = 0
return r1+r2
```
```python
def f_r(t):
r1 = f_0 + m_w1*(t % (T_ramp1+T_ramp2) - 2*r/c) - 2*v*f_0/c
r1[(t%(T_ramp1+T_ramp2)) > T_ramp1] = 0
r2 = f_0 + m_w1*T_ramp1 + m_w2*( (t-T_ramp1) % (T_ramp1+T_ramp2) - 2*r/c) - 2*v*f_0/c
r2[(t%(T_ramp1+T_ramp2)) <= T_ramp1] = 0
return r1+r2
```
```python
t = np.arange(0, 2*T_ramp, 1e-7)
plt.figure(figsize=(15,5))
plt.plot(t, f_t(t), label="$f_t$")
plt.plot(t, f_r(t), label="$f_r$")
plt.legend()
plt.xlabel("t [s]")
plt.ylabel("frequency [Hz]")
plt.title("FMCW ramps with different $m_{w,1} \\neq m_{w,2}$")
plt.grid();
```
Zoom in again:
```python
t_op = T_ramp
plt.figure(figsize=(10,5))
plt.plot(t, f_t(t), label="$f_t$")
plt.plot(t, f_r(t), label="$f_r$")
plt.legend()
plt.xlim([t_op-T_ramp/5000, t_op+T_ramp/5000])
plt.ylim([f_0-100e3, f_0+600e3])
plt.xlabel("t [s]")
plt.ylabel("frequency [Hz]")
plt.title("FMCW ramps with different $m_{w,1} \\neq m_{w,2}$")
plt.grid();
```
Now we have two equations:
\begin{align}
\Delta f_1 = \frac{2m_{w,1}}{c}r + \frac{2 f_0}{c}\dot r \\
\Delta f_2 = \frac{2m_{w,2}}{c}r + \frac{2 f_0}{c}\dot r
\end{align}
Solving the equation system:
```python
t_f1 = np.asarray([T_ramp*0.1]) # beginning of rising ramp
t_f2 = np.asarray([T_ramp*0.8]) # beginning of falling ramp
delta_f_1 = f_t(t_f1) - f_r(t_f1)
delta_f_2 = f_t(t_f2) - f_r(t_f2)
```
```python
A = np.asarray([[2*m_w1/c, 2*f_0/c], [2*m_w2/c, 2*f_0/c]])
Y = np.asarray([delta_f_1, delta_f_2])
```
```python
x = np.linalg.solve(A, Y).flatten()
print("range r:", x[0])
print("velocity v:", x[1]*3.6)
```
range r: 99.99999999693145
velocity v: 49.99999988508472
As you could see we could successfully estimate the range and velocity.
**Note**: we get $\Delta f_i$ for free as a result of the FFT on the mixed and filtered signal
\begin{align}
u_{It,r}(t) = \frac{A_t A_r}{2} \text{cos}(2\pi (f_t - f_r) t + \phi_t - \phi_r) = \frac{A_t A_r}{2} \text{cos}(2\pi \Delta f t + \phi_t - \phi_r)
\end{align}
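A tiny illustration of this note (again with made-up numbers, not the ramp parameters above): the beat frequency $\Delta f$ shows up directly as the dominant peak in the spectrum of the sampled baseband signal.

```python
import numpy as np

fs_adc = 2e6                      # assumed ADC sample rate [Hz]
delta_f = 123.4e3                 # assumed beat frequency to recover [Hz]
t = np.arange(0, 2e-3, 1/fs_adc)  # one ramp worth of samples (2 ms, illustrative)

u_beat = 0.5*np.cos(2*np.pi*delta_f*t + 0.3)   # mixed + low-pass filtered signal

spectrum = np.abs(np.fft.rfft(u_beat))
freqs = np.fft.rfftfreq(len(u_beat), 1/fs_adc)
print("recovered beat frequency:", freqs[np.argmax(spectrum)], "Hz")
# the frequency resolution here is 1/T_ramp = 500 Hz, so the peak lands within 500 Hz of delta_f
```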
In the FMCW chapter we did not address various problems like:
- multi target estimation (what happens if the FFT returns 2, 3, 4, ... major peaks?)
- angle estimation
We will solve the first problem in our second method.
## Chirp Sequence Modulation
This method is an extension of FMCW. Instead of sending one single rising and falling frequency chirp, we will repeat that process $n_r$ times in a sequence with chirp frequency $f_{chirp}$:
```python
# chirp sequence frequency
f_chirp = 50*1e3 #Hz
# ramp frequency
f_r = 200*1e6 #Hz
T_r = 1/f_chirp # duration of one cycle
m_w = f_r/T_r
n_r = 150 # number of chirps
T_M = T_r*n_r
```
Because we perform analog sampling, we have to configure our AD-converter:
```python
# sample settings
f_s = 50e6 #50 MHz
n_s = int(T_r*f_s)
```
Base frequency setup
```python
f_0 = 77.7*1e9
# some helpful
w_0 = 2*np.pi*f_0
lambda_0 = c/f_0
```
```python
def f_transmitted(t):
return f_0 + m_w*(t%T_r)
```
```python
def chirp(t):
return np.cos(2*np.pi*(f_transmitted(t))*t)
```
Lets visualize the chirp sequence:
```python
t = np.linspace(0, 3*T_r, int(1e6))
```
```python
plt.figure(figsize=(15,5))
plt.plot(t, f_transmitted(t))
plt.xlabel("t [s]")
plt.ylabel("frequency [Hz]")
plt.title("Chirp sequence Modulation, transmitted signal $f_t(t)$");
```
### Reflector setup
This is our target of interest
```python
r_0 = 50 # initial distance
v_veh = 36/3.6 # velocity
```
```python
def get_range(t):
return r_0+v_veh*t
```
### Returned waveform
According to Eq. 17.46 in [1](https://www.springer.com/de/book/9783658057336) the returned waveform (after mixing and LF-filter) is:
\begin{align}
u_{It,r}(t) = \frac{A_t A_r}{2} \text{cos}\left(2\pi(\frac{2m_w}{c}r + \frac{2 f_0}{c}\dot r)t + \frac{4 \pi f_0 r}{c} + 2\pi(\frac{2 r}{c})^2m_w\right)
\end{align}
```python
4*np.pi*100*f_0/c, 2*np.pi*(2*100/c)**2*m_w
```
(325694.31641129137, 27.963945936914552)
As you can see, $2\pi(\frac{2 r}{c})^2m_w$ has much less influence on the phase than the first term, so we ignore it:
\begin{align}
u_{It,r}(t) = \frac{A_t A_r}{2} \text{cos}\left(2\pi(\frac{2m_w}{c}r + \frac{2 f_0}{c}\dot r)t + \frac{4 \pi f_0 r}{c}\right)
\end{align}
```python
def itr(t):
r = get_range(t)
w_itr = 2*f_0*v_veh/c + 2*m_w*r/c
# we do t%T_r because the eq. above only valid within the ramp
v = np.cos(2*np.pi*w_itr*(t%T_r) +2*r*2*np.pi*f_0/c)
return v
```
We build up a table of $n_r × n_s$ where $n_r$ is the number of chirps (ramps) and $n_s$ is the number of samples within a chirp.
```python
print(n_r, n_s)
```
150 1000
```python
t_sample = np.linspace(0, T_M, n_r*n_s)
```
```python
v_sample = itr(t_sample)
```
```python
plt.figure(figsize=(15,5))
plt.plot(t_sample, v_sample, "+")
plt.xlim(0, 0.1*T_r)
plt.xlabel("t [s]")
plt.title("Samples visualized in time range [0, 10% $T_r$]");
```
We allocate memory for the sampling data
```python
table = np.zeros((n_r, n_s))
```
Fill in the values ...
```python
for chirp_nr in range(n_r):
table[chirp_nr, :] = v_sample[(chirp_nr*n_s):(n_s*(chirp_nr+1))]
```
DF for pretty printing
```python
table_df = pd.DataFrame(data=table,
columns=["sample_%03d"%i for i in range(n_s)],
index=["chirp_%03d"%i for i in range(n_r)])
```
```python
table_df.head(10)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>sample_000</th>
<th>sample_001</th>
<th>sample_002</th>
<th>sample_003</th>
<th>sample_004</th>
<th>sample_005</th>
<th>sample_006</th>
<th>sample_007</th>
<th>sample_008</th>
<th>sample_009</th>
<th>...</th>
<th>sample_990</th>
<th>sample_991</th>
<th>sample_992</th>
<th>sample_993</th>
<th>sample_994</th>
<th>sample_995</th>
<th>sample_996</th>
<th>sample_997</th>
<th>sample_998</th>
<th>sample_999</th>
</tr>
</thead>
<tbody>
<tr>
<th>chirp_000</th>
<td>0.905353</td>
<td>0.999836</td>
<td>0.920138</td>
<td>0.680144</td>
<td>0.321662</td>
<td>-0.092857</td>
<td>-0.491199</td>
<td>-0.803969</td>
<td>-0.976681</td>
<td>-0.979245</td>
<td>...</td>
<td>0.415170</td>
<td>0.007652</td>
<td>-0.401199</td>
<td>-0.740157</td>
<td>-0.950170</td>
<td>-0.994652</td>
<td>-0.865854</td>
<td>-0.586214</td>
<td>-0.204448</td>
<td>0.212936</td>
</tr>
<tr>
<th>chirp_001</th>
<td>0.976847</td>
<td>0.804429</td>
<td>0.491871</td>
<td>0.093624</td>
<td>-0.320933</td>
<td>-0.679581</td>
<td>-0.919838</td>
<td>-0.999850</td>
<td>-0.905677</td>
<td>-0.653725</td>
<td>...</td>
<td>-0.225759</td>
<td>-0.603753</td>
<td>-0.876566</td>
<td>-0.996670</td>
<td>-0.943140</td>
<td>-0.725302</td>
<td>-0.381107</td>
<td>0.029482</td>
<td>0.434935</td>
<td>0.764616</td>
</tr>
<tr>
<th>chirp_002</th>
<td>0.644981</td>
<td>0.276856</td>
<td>-0.139501</td>
<td>-0.531555</td>
<td>-0.831006</td>
<td>-0.985685</td>
<td>-0.968646</td>
<td>-0.782858</td>
<td>-0.460686</td>
<td>-0.058257</td>
<td>...</td>
<td>-0.773011</td>
<td>-0.964635</td>
<td>-0.988205</td>
<td>-0.839616</td>
<td>-0.544753</td>
<td>-0.154986</td>
<td>0.261782</td>
<td>0.632944</td>
<td>0.893837</td>
<td>0.999011</td>
</tr>
<tr>
<th>chirp_003</th>
<td>0.046790</td>
<td>-0.365037</td>
<td>-0.713270</td>
<td>-0.937241</td>
<td>-0.997932</td>
<td>-0.884770</td>
<td>-0.617469</td>
<td>-0.242597</td>
<td>0.174539</td>
<td>0.561268</td>
<td>...</td>
<td>-0.999508</td>
<td>-0.925246</td>
<td>-0.689792</td>
<td>-0.334165</td>
<td>0.079680</td>
<td>0.479643</td>
<td>0.796044</td>
<td>0.973761</td>
<td>0.981834</td>
<td>0.818855</td>
</tr>
<tr>
<th>chirp_004</th>
<td>-0.570722</td>
<td>-0.856199</td>
<td>-0.992513</td>
<td>-0.955916</td>
<td>-0.752785</td>
<td>-0.418507</td>
<td>-0.011319</td>
<td>0.397840</td>
<td>0.737690</td>
<td>0.949024</td>
<td>...</td>
<td>-0.811267</td>
<td>-0.501932</td>
<td>-0.105152</td>
<td>0.309948</td>
<td>0.671049</td>
<td>0.915242</td>
<td>0.999983</td>
<td>0.910510</td>
<td>0.662410</td>
<td>0.298907</td>
</tr>
<tr>
<th>chirp_005</th>
<td>-0.952571</td>
<td>-0.993817</td>
<td>-0.861923</td>
<td>-0.579868</td>
<td>-0.196791</td>
<td>0.220570</td>
<td>0.599505</td>
<td>0.873996</td>
<td>0.996222</td>
<td>0.944891</td>
<td>...</td>
<td>-0.286397</td>
<td>0.129656</td>
<td>0.523121</td>
<td>0.825448</td>
<td>0.983966</td>
<td>0.971059</td>
<td>0.788975</td>
<td>0.469436</td>
<td>0.068113</td>
<td>-0.345076</td>
</tr>
<tr>
<th>chirp_006</th>
<td>-0.941085</td>
<td>-0.721066</td>
<td>-0.375424</td>
<td>0.035623</td>
<td>0.440464</td>
<td>0.768568</td>
<td>0.962774</td>
<td>0.989248</td>
<td>0.843378</td>
<td>0.550576</td>
<td>...</td>
<td>0.357312</td>
<td>0.707445</td>
<td>0.934326</td>
<td>0.998429</td>
<td>0.888586</td>
<td>0.623932</td>
<td>0.250578</td>
<td>-0.166433</td>
<td>-0.554448</td>
<td>-0.845866</td>
</tr>
<tr>
<th>chirp_007</th>
<td>-0.541005</td>
<td>-0.150571</td>
<td>0.266096</td>
<td>0.636404</td>
<td>0.895838</td>
<td>0.999200</td>
<td>0.928483</td>
<td>0.696006</td>
<td>0.342272</td>
<td>-0.071093</td>
<td>...</td>
<td>0.852756</td>
<td>0.991682</td>
<td>0.957836</td>
<td>0.757114</td>
<td>0.424486</td>
<td>0.017904</td>
<td>-0.391798</td>
<td>-0.733240</td>
<td>-0.946936</td>
<td>-0.995655</td>
</tr>
<tr>
<th>chirp_008</th>
<td>0.082467</td>
<td>0.482098</td>
<td>0.797739</td>
<td>0.974397</td>
<td>0.981296</td>
<td>0.817232</td>
<td>0.510791</td>
<td>0.115359</td>
<td>-0.300171</td>
<td>-0.663405</td>
<td>...</td>
<td>0.994356</td>
<td>0.864427</td>
<td>0.583895</td>
<td>0.201635</td>
<td>-0.215755</td>
<td>-0.595555</td>
<td>-0.871595</td>
<td>-0.995784</td>
<td>-0.946484</td>
<td>-0.732285</td>
</tr>
<tr>
<th>chirp_009</th>
<td>0.671886</td>
<td>0.915698</td>
<td>0.999976</td>
<td>0.910036</td>
<td>0.661548</td>
<td>0.297805</td>
<td>-0.117823</td>
<td>-0.512924</td>
<td>-0.818662</td>
<td>-0.981771</td>
<td>...</td>
<td>0.723355</td>
<td>0.378482</td>
<td>-0.032331</td>
<td>-0.437512</td>
<td>-0.766468</td>
<td>-0.961886</td>
<td>-0.989720</td>
<td>-0.845120</td>
<td>-0.553281</td>
<td>-0.165046</td>
</tr>
</tbody>
</table>
<p>10 rows × 1000 columns</p>
</div>
Now our table consists all sampled measurement during a cycle. We will use this matrix to calculate range and velocity.
### FFT over the first chirp
For didactic reasons we do the FFT procedure only for the first churp to make an example of the procedure.
```python
chirp0_samples = table_df.iloc[0].values
```
```python
chirp0_magnitude = fft(chirp0_samples)
```
```python
# frequencies found by FFT, will be used later
frequencies = np.arange(0, n_s//2)*f_s/n_s
```
```python
frequencies[:3]
```
array([ 0., 50000., 100000.])
Each frequency bin corresponds to a range
\begin{align}
f_* = \frac{2m_w}{c}r + \frac{2 f_0}{c}\dot r
\end{align}
Because $m_w$ is so large, the influence of the velocity (second term) is very low:
```python
f_star1 = 2*m_w/c*100
f_star2 = 2*f_0/c*33
print(f_star1, f_star2)
print(f_star2/f_star1*100, "%")
```
6671281.9039630415 17105.833929951634
0.25640999999999997 %
That means:
\begin{align}
f_* \approx \frac{2m_w}{c}r
\end{align}
We rearrange:
\begin{align}
r \approx \frac{f_* c}{2 m_w}
\end{align}
```python
def freq_to_range(f):
return f*c/(2*m_w)
```
```python
ranges = freq_to_range(frequencies)
```
```python
plt.figure(figsize=(10,5))
plt.plot(ranges, 2.0/n_s*np.abs(chirp0_magnitude[0:n_s//2]))
plt.plot(ranges, 2.0/n_s*np.abs(chirp0_magnitude[0:n_s//2]), "k+")
plt.xlabel("range $r$ [m]")
plt.title("derived range (FFT of chirp_0)")
print(freq_to_range(frequencies)[np.argmax(2.0/n_s*np.abs(chirp0_magnitude[0:n_s//2]))])
```
Looks like we got the distance right: $\approx 50m$
# Calculate range bins for each chirp
Now we do the same for each of $n_r$ chirps and save the FFT results in a table of shape $n_r × n_s/2$.
```python
range_table = np.zeros((n_r, n_s//2), dtype=np.csingle)
```
```python
for chirp_nr in range(n_r):
chirp_ad_values = table_df.iloc[chirp_nr].values
chirp_fft = fft(chirp_ad_values) # FFT
range_table[chirp_nr, :] = 2.0/n_s*chirp_fft[:n_s//2]
```
Let's visualize the table
```python
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(15,10), sharex=True, sharey=True)
abs_axes = ax[0, 0]
phi_axes = ax[0, 1]
real_axes = ax[1, 0]
imag_axes = ax[1, 1]
im_asb = abs_axes.imshow(np.abs(range_table), cmap = plt.get_cmap('RdYlBu'))
abs_axes.set_xticks(range(ranges.size)[::50])
abs_axes.set_xticklabels(ranges[::50], rotation=90)
fig.colorbar(im_asb, ax=abs_axes)
abs_axes.set_xlabel("range [m]")
abs_axes.set_ylabel("chirp number")
abs_axes.set_title("$|A(j\omega)|$")
im_phi = phi_axes.imshow(np.angle(range_table)*360/(2*np.pi), cmap = plt.get_cmap('RdYlBu'))
fig.colorbar(im_phi, ax=phi_axes)
phi_axes.set_xlabel("range [m]")
phi_axes.set_ylabel("chirp number")
phi_axes.set_title("$∠ A(j\omega)$")
phi_axes.set_xticks(range(ranges.size)[::50])
phi_axes.set_xticklabels(ranges[::50], rotation=90)
im_real = real_axes.imshow(np.real(range_table), cmap = plt.get_cmap('RdYlBu'))
fig.colorbar(im_real, ax=real_axes)
real_axes.set_xlabel("range [m]")
real_axes.set_ylabel("chirp number")
real_axes.set_title("Real{$A(j\omega)$}")
real_axes.set_xticks(range(ranges.size)[::50])
real_axes.set_xticklabels(ranges[::50], rotation=90)
im_imag = imag_axes.imshow(np.imag(range_table), cmap = plt.get_cmap('RdYlBu'))
fig.colorbar(im_imag, ax=imag_axes)
imag_axes.set_xlabel("range [m]")
imag_axes.set_ylabel("chirp number")
imag_axes.set_title("Imag{$A(j\omega)$}");
imag_axes.set_xticks(range(ranges.size)[::50])
imag_axes.set_xticklabels(ranges[::50], rotation=90);
fig.suptitle("Range FFT table visualized.");
```
## Velocity estimation
We make a second FFT over each range bin (column).
Again, we initialize an empty table. Now the rows will be different velocity ranges.
```python
velocity_table = np.zeros((n_r, range_table.shape[1]), dtype=np.csingle)
```
```python
for r in range(range_table.shape[1]):
range_bin_magn = range_table[:, r]
range_bin_fft = fft(range_bin_magn)
velocity_table[:, r]= 2.0/n_r*range_bin_fft
```
The second FFT over the columns extracts the chirp-to-chirp phase shift
\begin{align}
... + \frac{4 \pi f_0 }{c} r = ... + \frac{4 \pi f_0 }{c} vt \\
\Rightarrow \frac{4 \pi f_0 }{c}v = \omega_{\text{FFT, columns}}
\end{align}
After rearranging ...
```python
def angle_freq_to_velocity(w):
return w*c/(4*np.pi*f_0)
```
```python
omega_second = 2*np.pi*np.concatenate((np.arange(0, n_r//2), np.arange(-n_r//2, 0)[::-1]))*f_chirp/n_r
```
```python
velocities = angle_freq_to_velocity(omega_second)
```
```python
plt.figure(figsize=(15,10))
plt.imshow(np.abs(velocity_table), cmap = plt.get_cmap('RdYlBu'))
plt.xticks(range(ranges.size)[::20], ranges[::20], rotation=90);
plt.yticks(range(velocities.size)[::10], velocities[::10]);
plt.xlim([0, 200])
plt.xlabel("range $r$ [m]")
plt.ylabel("velocity $\\dot r = v$ [m/s]");
plt.title("Chirp Sequence Modulation Result - $r, \\dot r$ map")
plt.colorbar();
```
As you can see, we could successfully recognize (blue dot) the target at 50 m range with a velocity of $10 m/s$.
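As a final read-out step (not part of the original text), the target can be located numerically by searching for the peak of the range/Doppler map computed above:

```python
# Locate the strongest cell of the range/Doppler map and read off (v, r).
# Uses velocity_table, ranges and velocities from the cells above.
peak_v_idx, peak_r_idx = np.unravel_index(np.argmax(np.abs(velocity_table)),
                                          velocity_table.shape)
print("estimated range   :", ranges[peak_r_idx], "m")
print("estimated velocity:", velocities[peak_v_idx], "m/s")
```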
```python
```
| 7bb173bd02ae1356b1a8b4a37319026f2e918dd1 | 355,928 | ipynb | Jupyter Notebook | RADAR.ipynb | kopytjuk/fmcw | ceba9f71e41d54c3b339c7e40a840a3d8db542d8 | [
"MIT"
] | 31 | 2019-12-23T05:06:19.000Z | 2022-02-22T17:19:01.000Z | RADAR.ipynb | kopytjuk/fmcw | ceba9f71e41d54c3b339c7e40a840a3d8db542d8 | [
"MIT"
] | null | null | null | RADAR.ipynb | kopytjuk/fmcw | ceba9f71e41d54c3b339c7e40a840a3d8db542d8 | [
"MIT"
] | 9 | 2020-05-06T20:54:58.000Z | 2022-02-13T09:42:35.000Z | 215.322444 | 65,232 | 0.902295 | true | 8,585 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.810479 | 0.722794 | __label__eng_Latn | 0.511098 | 0.517624 |
# Introduction:
A space probe is an uncrewed spacecraft launched into space to study various celestial objects from a shorter or longer distance; it has to cover great distances and operate far from the Earth and the Sun. The key factor for a successful probe mission is the precision and complexity of the navigation.
With this in mind, this project tries to simulate the trajectory of a space probe in the solar system by integrating Newton's equations of motion. To do so, we first simulate the solar system and study its dynamics in sections 1 and 2, and in section 3 we try to simulate the trajectory of the New Horizons space probe.
In this first part, the objective is to simulate the dynamics of the solar system in 2 dimensions.
**Remarks:**
All the code for this part is located in the folder **In_2dim**.
# 1. Simulating the solar system:
## 1.1 The object class.
To simplify handling the planets and the probes, it is best to assign to each object a class that characterizes it with its own attributes:
1. Attributes
    1. Mass
    2. Initial position $(x_0,y_0)$
    3. Initial velocity $(vx_0, vy_0)$
    4. Name of the object
    5. List of positions $(x, y)$
    6. List of velocities $(vx, vy)$
We can then define this class with the code below.
(The class is created in the file objet.py)
```python
class objet:
""" Classe représentant les objets qui influence par la gravitation
Attributs:
nom
masse: Kg
position (x, y): au (astronomical unit)
vitesse (v_x, v_y) : au/day
"""
nom = "objet"
masse = None
x0 = 0
y0 = 0
vx0 = 0
vy0 = 0
#Listes des positions et vitesse
x = None
y = None
vx = None
vy = None
#Definition de constructeur
def __init__(self, nom = "objet", masse = None, x0 = 0, y0 = 0, vx0 = 0, vy0 = 0):
"""Constructeur de notre classe"""
self.nom = nom
self.masse = masse
self.x0 = x0
self.y0 = y0
self.vx0 = vx0
self.vy0 = vy0
```
**Global variables:**
Throughout this part, we will use the following global variables, which will be needed in the upcoming calculations.
```python
#Definitions de parametres
au = 1.49597870e11 #Unité astronomique
jour = 24*3600 #Un jour
G = 6.67408e-11 #Constante gravitationelle
```
## 1.2 Newton's equations:
The equations we will use to describe the motion of the planets and the space probes are Newton's equations; so, to describe the motion of an object in the solar system subject to the gravitational force of the Sun, it is enough to integrate Newton's second-order equations in SI units.
\begin{equation}
\frac{d^2x}{dt^2} = -G \frac{M_{soleil}}{(x^2+y^2)^{3/2}} x \\
\frac{d^2y}{dt^2} = -G \frac{M_{soleil}}{(x^2+y^2)^{3/2}} y
\end{equation}
However, for more meaningful results, it is better to work with distances in $au$ (astronomical unit $:=$ Sun-Earth distance) and time in $days$, which gives the following equations in practical units:
\begin{equation}
\frac{d^2x}{dt^2} = -G \frac{M_{soleil}}{(x^2+y^2)^{3/2}} x \ \frac{(day)^2}{(au)^3} \\
\frac{d^2y}{dt^2} = -G \frac{M_{soleil}}{(x^2+y^2)^{3/2}} y \ \frac{(day)^2}{(au)^3}
\end{equation}
**Implementation:** (in the file objet.py) -> NB: all functions will be stored in the file "objet.py"
To implement Newton's equations, it is convenient to define functions $fx$ and $fy$ that take as arguments the mass of the central object and the coordinates of the orbiting object, and return the gravitational acceleration experienced by that object along $\vec{x}$ and $\vec{y}$.
```python
#Definition de fonction fx(M,x,y) et fy(M,x,y)
def fx(M,x,y):
"""
Retourne l'acceleration gravitationnelle suivant x dû à un objet de masse M distants de l'objet étudié de x**2+y**2
"""
return -((G*M)/(x**2+y**2)**(3/2))*x*(jour**2/au**3)
def fy(M,x,y):
"""
Retourne l'acceleration gravitationnelle suivant y dû à un objet de masse M distants de l'objet étudié de x**2+y**2
"""
return -((G*M)/(x**2+y**2)**(3/2))*y*(jour**2/au**3)
```
## 1.3 Simulating the interaction between the Sun and another planet.
Since the mass of the Sun satisfies $ M_{soleil} >> M_{planète} $, we can consider that the Sun stays fixed during the motion, so we can use the functions of part **1.2** to integrate Newton's equations; we just need initial conditions for the position and the velocity, which were taken from: http://vo.imcce.fr/webservices/miriade.
As a first step, we can run this first simulation for the Sun and the Earth.
In the code below, we define the initial conditions of the Earth, the time step $dt$ and the integration period $T$, and we initialize the attributes $terre.x/y$ and $terre.vx/vy$, which will hold the positions and velocities of the Earth over the whole integration period.
```python
#Pour faire des plots
import numpy as np
import matplotlib.pyplot as plt
# Definition des objets
soleil = objet("Soleil", 1.989*1e30, 0, 0, 0, 0) #(nom, masse, x, y, vx, vy)
#Données prises de http://vo.imcce.fr/webservices/miriade
terre = objet ("Terre", 5.972*1e24, -0.7528373239252, 0.6375222355089, -0.0113914294224, -0.0131912591762)
dt = 1 #step, un jour
T = int(365/dt)*20 # Periode d'integration (Nombre de steps) -> une année * ...
#Definition des tableau de coordonnés et initiation
terre.x = np.zeros(T) ; terre.x[0] = terre.x0
terre.y = np.zeros(T) ; terre.y[0] = terre.y0
terre.vx = np.zeros(T) ; terre.vx[0] = terre.vx0
terre.vy = np.zeros(T) ; terre.vy[0] = terre.vy0
```
### 1.3.1 How can we estimate the precision of the integration algorithms?
**Conservation of the total energy of the system:**
During the integration of the equations of motion, the energy of the system must remain conserved, so we will use as a precision criterion the variation of the mechanical energy (per unit mass) of the studied object: the closer the mechanical energy stays to its initial value, the more precise the chosen integration method is. We will therefore compute the mechanical energy (per unit mass) with the function $E$.
```python
def E(M, x, y, vx, vy):
return 0.5*(vx**2+vy**2)*(au**2/jour**2)-(G*M)/(np.sqrt(x**2+y**2)*au)
E = np.vectorize(E)
```
N.B:
* La fonction $E$ calcule l'énergie (massique) d'un objet sous effet d'un autre objet (seul) de masse M. On verra après une fonction qui permettra de calculer l'énergie d'un objet qui subit l'effet de gravitations de plusieurs autres objets.
**Comparing trajectories:**
To further check the validity of the computations, we will also compare the simulated trajectories with astrometric observations.
### 1.3.2 Integration with the Euler method:
**Method:**
As an introduction, we start by integrating Newton's equations with the Euler method, which consists of the following steps:
$$\vec{X}_{i+1} = \vec{X_{i}} + h.\vec{V}_{i} $$
$$\vec{V}_{i+1} = \vec{V}_{i} + h.\vec{F}(\vec{X}_{i}) $$
With $\vec{F}$ the gravitational acceleration; for our particular Sun-Earth system:
\begin{equation}
F_x = -G \frac{M_{soleil}}{(x^2+y^2)^{3/2}} x \ \frac{(day)^2}{(au)^3} \\
F_y = -G \frac{M_{soleil}}{(x^2+y^2)^{3/2}} y \ \frac{(day)^2}{(au)^3}
\end{equation}
And $h$ the integration step, which corresponds to the variable $dt$.
**Implementation:** (in the file Interaction_Soleil_Planete_Euler.py)
```python
#----------------------------------------------------------------------------------------------------------
# Integration des equations de newton par methode d'euler
#-------------------------
for i in range(T-1):
#Affectation des vitesses a l'instant i+1
terre.vx[i+1] = terre.vx[i] + dt*fx(soleil.masse, terre.x[i], terre.y[i])
terre.vy[i+1] = terre.vy[i] + dt*fy(soleil.masse, terre.x[i], terre.y[i])
#Affectation des positions a l'instant i+1
terre.x[i+1] = terre.x[i] + dt*terre.vx[i]
terre.y[i+1] = terre.y[i] + dt*terre.vy[i]
#----------------------------------------------------------------------------------------------------
```
**Trajectory plot:**
```python
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot de la trajectoire simulee:
ax.plot(terre.x, terre.y) #Plot de la trajectoire simulee de la terre
plt.xlabel("x (Au)")
plt.ylabel("y (Au)")
plt.gca().set_aspect('equal', adjustable='box') #equal ratios of x and y
plt.show()
```
**Accuracy estimate from the mechanical energy:**
```python
Nrg = E(soleil.masse, terre.x, terre.y, terre.vx, terre.vy) #Calcul d'energie mecanique
Nrg /= np.abs(Nrg[0]) #Pour Normaliser l'energie et pour faire un plot plus significatif
#Definition de figure
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot d'energie en fonction de temps
t = np.linspace(1,T,T)*dt
ax.plot(t, Nrg)
ax.set_xlabel("t (jour)")
ax.set_ylabel("E/$|E_0|$")
ax.get_yaxis().get_major_formatter().set_useOffset(False) #Disable scaling of values in plot wrt y-axis
#Affichage de l'energie moyenne
print("Résultats : ")
print("Energie moyenne = " + str(np.mean(Nrg)) + ", Ecart_Type = " + str(np.std(Nrg)))
plt.show()
```
We observe that increasing the integration step $dt$ reduces the accuracy; moreover, the simulated trajectory is not closed. We therefore need a more accurate integration method. A quick way to quantify this is to rerun the integration for several values of $dt$ and compare the spread of the energy, as sketched below.
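A minimal sketch of such a comparison, reusing the functions and objects defined above (the helper name `euler_integrate` is introduced here for illustration only):
```python
def euler_integrate(dt, years=20):
    # Re-run the Euler integration for a given step and return the spread of the normalised energy
    T = int(365/dt)*years
    x = np.zeros(T); y = np.zeros(T); vx = np.zeros(T); vy = np.zeros(T)
    x[0], y[0], vx[0], vy[0] = terre.x0, terre.y0, terre.vx0, terre.vy0
    for i in range(T-1):
        vx[i+1] = vx[i] + dt*fx(soleil.masse, x[i], y[i])
        vy[i+1] = vy[i] + dt*fy(soleil.masse, x[i], y[i])
        x[i+1] = x[i] + dt*vx[i]
        y[i+1] = y[i] + dt*vy[i]
    nrg = E(soleil.masse, x, y, vx, vy)
    return np.std(nrg/np.abs(nrg[0]))

for step in [0.5, 1, 2]:
    print("dt =", step, "day(s) -> normalised energy std =", euler_integrate(step))
```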
### 1.3.3 Integration with the second-order Runge-Kutta method
The Euler method is a first-order method, whose global integration error is of order $h$; to gain accuracy we need a more precise scheme, hence the second-order Runge-Kutta method.
**Method:**
With $h$ the integration step, the second-order Runge-Kutta method consists of the following steps:
$$\vec{X}_{i+1} = \vec{X_{i}} + h.\vec{V}_{i+1/2} $$
$$\vec{V}_{i+1} = \vec{V}_{i} + h.\vec{F}(\vec{X}_{i+1/2}) $$
such that:
$$ \vec{X}_{i+1/2} = \vec{X}_{i} + \frac{h}{2}.\vec{V}_{i} $$
$$ \vec{V}_{i+1/2} = \vec{V}_{i} + \frac{h}{2}.\vec{F}(\vec{X}_{i}) $$
With $\vec{F}$ already defined in the Euler method (1.3.2).
The distinctive feature of this method is the midpoint variables $\vec{X}_{i+1/2}$ and $\vec{V}_{i+1/2}$, which serve as intermediates in the computation.
**Implementation:** (in the file Interaction_Soleil_Planete_Runge-Kutta2.py)
```python
#----------------------------------------------------------------------------------------------------------
# Integration des equations de newton par Runge Kutta2
#-------------------------
for i in range(T-1):
#Definition des variables de milieux
vx_demi = terre.vx[i] + (dt/2)*fx(soleil.masse, terre.x[i], terre.y[i])
vy_demi = terre.vy[i] + (dt/2)*fy(soleil.masse, terre.x[i], terre.y[i])
x_demi = terre.x[i] + (dt/2)*terre.vx[i]
y_demi = terre.y[i] + (dt/2)*terre.vy[i]
# Affectation des positions et vitesses à l'indice i+1
terre.vx[i+1] = terre.vx[i] + dt*fx(soleil.masse, x_demi, y_demi)
terre.vy[i+1] = terre.vy[i] + dt*fy(soleil.masse, x_demi, y_demi)
terre.x[i+1] = terre.x[i] + dt*vx_demi
terre.y[i+1] = terre.y[i] + dt*vy_demi
#----------------------------------------------------------------------------------------------------
```
**Trajectory plot:**
```python
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot de la trajectoire simulee:
ax.plot(terre.x, terre.y) #Plot de la trajectoire simulee de la terre
plt.xlabel("x (Au)")
plt.ylabel("y (Au)")
plt.gca().set_aspect('equal', adjustable='box') #equal ratios of x and y
plt.show()
```
**Accuracy estimate from the mechanical energy:**
```python
Nrg = E(soleil.masse, terre.x, terre.y, terre.vx, terre.vy) #Calcul d'energie mecanique
Nrg /= np.abs(Nrg[0]) #Pour Normaliser l'energie et pour faire un plot plus significatif
#Definition de figure
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot d'energie en fonction de temps
t = np.linspace(1,T,T)*dt
ax.plot(t, Nrg)
ax.set_xlabel("t (jour)")
ax.set_ylabel("E/$|E_0|$")
ax.get_yaxis().get_major_formatter().set_useOffset(False) #Disable scaling of values in plot wrt y-axis
#Affichage de l'energie moyenne
print("Résultats : ")
print("Energie moyenne = " + str(np.mean(Nrg)) + ", Ecart_Type = " + str(np.std(Nrg)))
plt.show()
```
Runge-Kutta is clearly more accurate than Euler: with the same integration step $dt = 1$ day, the trajectory is more accurate and the standard deviation of the energy is smaller with the Runge-Kutta integration.
### 1.3.4 Integration with Leapfrog:
Since the previous methods do not conserve the mechanical energy, we should consider integration schemes that do; such schemes are called symplectic.
**Method:**
As its name suggests, this integration method computes the positions and the velocities at staggered times, as follows:
$$\vec{X}_{i+1} = \vec{X_{i}} + h.\vec{V}_{i+1/2} $$
$$\vec{V}_{i+3/2} = \vec{V}_{i+1/2} + h.\vec{F}(\vec{X}_{i+1}) $$
Here we need $\vec{V}_{1/2}$ to start the algorithm, so we make the following approximation:
$$ \vec{V}_{1/2} = \vec{V}_0 + \frac{h}{2}.\vec{F}(\vec{X}_{0}) $$
This step costs an error of order $h^2$, which is tolerable since the method is second order, so it does not affect the overall accuracy of the integration.
For more information on this scheme see the links below:
http://physics.ucsc.edu/~peter/242/leapfrog.pdf
https://en.wikipedia.org/wiki/Leapfrog_integration
**Implementation:** (in the file Interaction_Soleil_Planete_LeapFrog.py)
```python
#----------------------------------------------------------------------------------------------------------
# Integration des equations de newton par LeapFrog
#-------------------------
#Definition des vitesses au milieux
vx_demi = np.zeros(T); vx_demi[0] = terre.vx0 + (dt/2)*fx(soleil.masse, terre.x0, terre.y0)
vy_demi = np.zeros(T); vy_demi[0] = terre.vy0 + (dt/2)*fy(soleil.masse, terre.x0, terre.y0)
for i in range(T-1):
# Affectation des positions à l'indice i+1
terre.x[i+1] = terre.x[i] + dt*vx_demi[i]
terre.y[i+1] = terre.y[i] + dt*vy_demi[i]
#Affectation des vitesses:
vx_demi[i+1] = vx_demi[i] + dt*fx(soleil.masse, terre.x[i+1], terre.y[i+1])
vy_demi[i+1] = vy_demi[i] + dt*fy(soleil.masse, terre.x[i+1], terre.y[i+1])
#Affecter les vitesses de la terre par celles de milieu
terre.vx = vx_demi; terre.vy = vy_demi
```
**Trajectory plot:**
```python
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot de la trajectoire simulee:
ax.plot(terre.x, terre.y) #Plot de la trajectoire simulee de la terre
plt.xlabel("x (Au)")
plt.ylabel("y (Au)")
plt.gca().set_aspect('equal', adjustable='box') #equal ratios of x and y
plt.show()
```
**Accuracy estimate from the mechanical energy:**
```python
Nrg = E(soleil.masse, terre.x, terre.y, terre.vx, terre.vy) #Calcul d'energie mecanique
Nrg /= np.abs(Nrg[0]) #Pour Normaliser l'energie et pour faire un plot plus significatif
#Definition de figure
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot d'energie en fonction de temps
t = np.linspace(1,T,T)*dt
ax.plot(t, Nrg)
ax.set_xlabel("t (jour)")
ax.set_ylabel("E/$|E_0|$")
ax.get_yaxis().get_major_formatter().set_useOffset(False) #Disable scaling of values in plot wrt y-axis
#Affichage de l'energie moyenne
print("Résultats : ")
print("Energie moyenne = " + str(np.mean(Nrg)) + ", Ecart_Type = " + str(np.std(Nrg)))
plt.show()
```
Given the results above, Leapfrog is indeed a symplectic integration scheme that conserves the energy; its drawback is that the position and the velocity are not computed at the same instant, which motivates the following method:
### 1.3.5 Integration with Verlet:
**Method:**
This integration method is similar to leapfrog, but it computes the positions and the velocities at the same times, which makes it possible, for example, to draw a phase portrait. It can be implemented as follows:
$$\vec{X}_{i+1} = \vec{X_{i}} + h.\vec{V}_{i+1/2} $$
$$\vec{V}_{i+1} = \vec{V}_{i+1/2} + \frac{h}{2}.\vec{F}(\vec{X}_{i+1}) $$
such that:
$$ \vec{V}_{i+1/2} = \vec{V}_{i} + \frac{h}{2}.\vec{F}(\vec{X}_{i}) $$
For more information on this scheme see these links:
https://en.wikipedia.org/wiki/Verlet_integration
http://www.fisica.uniud.it/~ercolessi/md/md/node21.html
**Implementation:** (in the file Interaction_Soleil_Planete_Verlet.py)
```python
#----------------------------------------------------------------------------------------------------------
# Integration des equations de newton par l'integrateur de Verlet
#-------------------------
#Il faut re-initier les vitesses à cause de la modification introduite par leapfrog
terre.vx[0] = terre.vx0; terre.vy[0] = terre.vy0
for i in range(T-1):
#Definition des variables de milieux
vx_demi = terre.vx[i] + (dt/2)*fx(soleil.masse, terre.x[i], terre.y[i])
vy_demi = terre.vy[i] + (dt/2)*fy(soleil.masse, terre.x[i], terre.y[i])
# Affectation des positions à l'indice i+1
terre.x[i+1] = terre.x[i] + dt*vx_demi
terre.y[i+1] = terre.y[i] + dt*vy_demi
terre.vx[i+1] = vx_demi + (dt/2)*fx(soleil.masse, terre.x[i+1], terre.y[i+1])
terre.vy[i+1] = vy_demi + (dt/2)*fy(soleil.masse, terre.x[i+1], terre.y[i+1])
#----------------------------------------------------------------------------------------------------
```
**Trajectory plot:**
```python
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot de la trajectoire simulee:
ax.plot(terre.x, terre.y) #Plot de la trajectoire simulee de la terre
plt.xlabel("x (Au)")
plt.ylabel("y (Au)")
plt.gca().set_aspect('equal', adjustable='box') #equal ratios of x and y
plt.show()
```
**Accuracy estimate from the mechanical energy:**
```python
Nrg = E(soleil.masse, terre.x, terre.y, terre.vx, terre.vy) #Calcul d'energie mecanique
Nrg /= np.abs(Nrg[0]) #Pour Normaliser l'energie et pour faire un plot plus significatif
#Definition de figure
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Plot d'energie en fonction de temps
t = np.linspace(1,T,T)*dt
ax.plot(t, Nrg)
ax.set_xlabel("t (jour)")
ax.set_ylabel("E/$|E_0|$")
ax.get_yaxis().get_major_formatter().set_useOffset(False) #Disable scaling of values in plot wrt y-axis
#Affichage de l'energie moyenne
print("Résultats : ")
print("Energie moyenne = " + str(np.mean(Nrg)) + ", Ecart_Type = " + str(np.std(Nrg)))
plt.show()
```
Given the values of the standard deviation of the energy, we can conclude that the Verlet method is the most accurate; from here until the end of this report we will therefore use the Verlet scheme to integrate Newton's equations.
## 1.4 Implementing the solar system:
Now that we understand how a single planet evolves around the Sun, we will simulate the dynamics of the planets of the solar system around the Sun.
In this part we need a similar but slightly different approach, because for each planet of the solar system we must take into account the gravitational force of the other planets; we therefore have to integrate new equations that account for the coupling between the planets.
### 1.4.1 Newton's equations:
We assume that the Sun remains fixed because of its large mass, so we have:
$$
(\forall \ i \in \ [| 1,9 |] )\ ; \ \frac{d^2\vec{r}_i}{dt^2} = -G\sum_{j = 0 ; \ j \neq i}^{9} \frac{M_j}{||\vec{r_i}-\vec{r_j}||^{3}} (\vec{r_i}-\vec{r_j})$$
With:
* Object 0: the Sun
* Objects $i$ with $i \ \in \ [|1,9|]$: the planets of the solar system from Mercury to Neptune, plus the dwarf planet Pluto.
### 1.4.2 Initial conditions and definition of the solar system:
One way to store the initial positions of the objects is to put them in a text file named "initial_conditions_solarsystem.txt"; for technical reasons in Python, the object names are stored in a separate file "names_solarsystem.txt". (Because of encoding issues, the **numpy** method $np.genfromtxt$ can only import a single data type at a time.)
We choose "2017-02-28" at "00:00 GMT" as the start date.
We now have everything we need to define the objects of our solar system. First, we create the objects from the **objet** class.
```python
bodies = np.array([objet() for i in range(10)]) #Creation d'une liste des objets (on a au total 10 objets: soleil et 8 planetes et Pluto)
```
Next, we load the data describing the parameters of the solar-system objects in order to initialise their attributes.
```python
import os
os.chdir("/home/mh541/Desktop/Projet_Numerique/In_2dim") #Please change to the to the directory where 'initial_conditions_solarsystem.txt' is saved
data = np.genfromtxt("initial_conditions_solarsystem.txt", usecols=(1,2,3,4,5), skip_header=1) #On ne peut pas importer du texte avec genfromtxt
names = np.loadtxt("names_solarsystem.txt", dtype = str, skiprows=1, usecols=(1,))
```
All that remains is to assign the loaded values to the attributes of the objects. We also define "Nbr_obj", the variable holding the total number of objects in our system.
```python
#Definition des parametres de chaque objet
Nbr_obj = len(bodies) #Definition de Nbr d'objets
for i in range(Nbr_obj):
bodies[i].nom = names[i][2:-1] # [2:-1] pour supprimer les caracteres indesires
bodies[i].masse = data[i][0]
bodies[i].x0 = data[i][1]
bodies[i].y0 = data[i][2]
bodies[i].vx0 = data[i][3]
bodies[i].vy0 = data[i][4]
```
### 1.4.3 Computing the acceleration and the total energy:
As a first (naive) approach, we take into account the gravitational coupling between all the objects, not only with the Sun.
To simplify the computation of the gravitational acceleration felt by an object, it is convenient to define the function **acceleration**, which computes this acceleration at a given time step for a given object.
(This function is in objet.py)
```python
def acceleration(bodies, i, j):
"""
Calculer l'acceleration relative à un objet bodies[i]
bodies: tous les objets
i: index of concerned body which undergoes the gravitation of other objects.
j: index of the step
"""
N = len(bodies)
ax = 0; ay = 0 #L'acceleration
for jp in range(N):
#Chaque objet bodies[jp] applique une force de gravitation sur l'objet bodies[i]
if jp == i: #On ne veut pas avoir le même objet bodies[jp]
continue
ax += fx(bodies[jp].masse, bodies[i].x[j]-bodies[jp].x[j], bodies[i].y[j]-bodies[jp].y[j]) #Effet du à l'objet bodies[jp]
ay += fy(bodies[jp].masse, bodies[i].x[j]-bodies[jp].x[j], bodies[i].y[j]-bodies[jp].y[j]) #---
return ax, ay
```
This function returns the accelerations along the x and y axes, taking as parameters the list of objects $bodies$, the index of the object of interest and the index of the desired time step; a usage example is sketched below.
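For example, once the position arrays of the objects have been initialised (as done in section 1.4.4 below), the acceleration of one body at the first time step could be obtained as follows (the index 3 for the Earth is an assumption about the ordering of the data file):
```python
# Total gravitational acceleration of bodies[3] (assumed to be the Earth) at time step 0
ax0, ay0 = acceleration(bodies, 3, 0)
print(ax0, ay0)
```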
To assess the energy conservation during the integration of the equations of motion, it is convenient to create the function **Energy**, which computes the mechanical energy of each object.
(These functions will be in objet.py)
```python
#On calcule d'abord l'energie potentielle
def pot(M, x, y):
"""
Retourne le potentiel massique d'un objet par rapport à un autre objet de masse M et distants de x**2+y**2
"""
return -(G*M)/(np.sqrt(x**2+y**2)*au)
def Energy(bodies, i):
"""
L'Energie massique d'un objet sous l'effet d'autres objet qui lui entoure.
"""
N = len(bodies)
potential = 0
for jp in range(N):
if jp == i:
continue
potential += pot(bodies[jp].masse, bodies[i].x-bodies[jp].x, bodies[i].y-bodies[jp].y)
return 0.5*(au**2/jour**2)*(bodies[i].vx**2+bodies[i].vy**2)+potential
```
### 1.4.4 Integrating the equations of motion:
The objects are now well defined; it only remains to integrate the equations of motion defined in **1.4.1** to obtain the trajectories.
Since we saw in part **1.3** that the Verlet integration scheme is the most accurate and also conserves the mechanical energy, we use it to compute the planetary trajectories.
**Implementation:**
```python
#Redefinition des steps
dt = 2 #step
T = int(365/dt)*165 # (Nombre de steps)<-> Periode d'integration
#Definition des vitesses au milieu
vx_demi = np.zeros(Nbr_obj)
vy_demi = np.zeros(Nbr_obj)
#Intialisation des attributs x,y,vx,vy de chaque objet bodies[i]
for i in range(Nbr_obj):
bodies[i].x = np.zeros(T); bodies[i].x[0] = bodies[i].x0
bodies[i].y = np.zeros(T); bodies[i].y[0] = bodies[i].y0
bodies[i].vx = np.zeros(T); bodies[i].vx[0] = bodies[i].vx0
bodies[i].vy = np.zeros(T); bodies[i].vy[0] = bodies[i].vy0
#Integration a l'aide de schema de Verlet
for j in range(T-1):#A chaque pas de temps j
#Phase 1: Calcul de vitesses milieu et affectation des position a l'intant j+1
for i in range(1,Nbr_obj): #Modification des parametres pour chaque objet a un instant donne j
fx_j, fy_j = acceleration(bodies, i, j) #Calcul de l'acceleration au pas j relative à l'objet i
#Affectation des vitesses de milieu
vx_demi[i] = bodies[i].vx[j] + (dt/2)*fx_j
vy_demi[i] = bodies[i].vy[j] + (dt/2)*fy_j
# Affectation des positions à l'indice j+1
bodies[i].x[j+1] = bodies[i].x[j] + dt*vx_demi[i]
bodies[i].y[j+1] = bodies[i].y[j] + dt*vy_demi[i]
#Phase 2: Affectation des vitesse a l'instant j+1
for i in range(1,Nbr_obj):
#L'acceleration au pas j+1 relative à l'objet j
fx_jplus1, fy_jplus1 = acceleration(bodies, i, j+1) #Il faut faire cette étape après le calcul de postion à l'indice i+1
# Affectation des vitesses à l'indice j+1
bodies[i].vx[j+1] = vx_demi[i] + (dt/2)*fx_jplus1
bodies[i].vy[j+1] = vy_demi[i] + (dt/2)*fy_jplus1
```
**Trajectory plots:**
```python
#Definition de figure
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
#Pour chaque objet faire un plot (Soleil non inclus)
for i in range(1,Nbr_obj):
ax.plot(bodies[i].x, bodies[i].y, label= bodies[i].nom)
plt.xlabel("x (Au)")
plt.ylabel("y (Au)")
plt.gca().set_aspect('equal', adjustable='box') #equal ratios of x and y
plt.legend()
plt.show()
```
**Accuracy estimate from the mechanical energy:**
```python
#Definition de figure
fig=plt.figure(figsize=(9, 6), dpi= 100, facecolor='w', edgecolor='k') #To modify the size of the figure
ax = fig.add_subplot(111) #definition de l'axe
Nrg = Energy(bodies, 1) #Cacul de l'energie d'un objet -> Changez le numero pour voir l'energie de chaque objet;
Nrg /= np.abs(Nrg[0]) #Pour Normaliser
#Plot de l'energie
t = np.linspace(1,T,T)*dt
ax.plot(t, Nrg)
# ax.plot(t[:365], Nrg[:365])
ax.set_xlabel("t (jour)")
ax.set_ylabel("E/$|E_0|$")
ax.get_yaxis().get_major_formatter().set_useOffset(False) #Disable scaling of values in plot wrt y-axis
#Affichage des résulats
print("Résultats : ")
print("Energie moyenne = " + str(np.mean(Nrg)) + ", Ecart_Type = " + str(np.std(Nrg)))
plt.show()
```
For an integration period of 165 years with a step of 2 days, we observe that the planets describe closed trajectories with very good accuracy.
We also note that the standard deviation of the energy decreases when the integration step $dt$ is refined, which shows that the implementation of the integration scheme works well.
In the figure above, the energy of Mercury oscillates rapidly compared with the total integration period; to see the oscillations of the (specific) mechanical energy of each planet clearly, replace "ax.plot(t, Nrg)" with "ax.plot(t[:365], Nrg[:365])" in the code above, as sketched below.
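A small sketch of such a zoom, plotting the first 365 integration steps of the normalised energy for every planet (reusing the objects and the time axis `t` defined above):
```python
# Zoom on the first 365 steps of the (specific) mechanical energy of each planet
fig = plt.figure(figsize=(9, 6), dpi=100)
ax = fig.add_subplot(111)
for i in range(1, Nbr_obj):
    nrg_i = Energy(bodies, i)
    ax.plot(t[:365], (nrg_i/np.abs(nrg_i[0]))[:365], label=bodies[i].nom)
ax.set_xlabel("t (jour)")
ax.set_ylabel("E/$|E_0|$")
ax.legend()
plt.show()
```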
To conclude this part, our Verlet integration scheme simulates the planetary trajectories in **2D** with very good accuracy.
| 3a7dd694ce46b423fde21a4818bb5e69fd2122fd | 639,424 | ipynb | Jupyter Notebook | Solar System in 2D.ipynb | mhibatallah/Simulating-the-New-Horizon-Space-Probe-Trajectory | c90558a1b82b6c8d738f06dc1a0dd341657e5f9f | [
"MIT"
] | null | null | null | Solar System in 2D.ipynb | mhibatallah/Simulating-the-New-Horizon-Space-Probe-Trajectory | c90558a1b82b6c8d738f06dc1a0dd341657e5f9f | [
"MIT"
] | null | null | null | Solar System in 2D.ipynb | mhibatallah/Simulating-the-New-Horizon-Space-Probe-Trajectory | c90558a1b82b6c8d738f06dc1a0dd341657e5f9f | [
"MIT"
] | null | null | null | 505.07425 | 129,848 | 0.928847 | true | 9,388 | Qwen/Qwen-72B | 1. YES
2. YES | 0.760651 | 0.685949 | 0.521768 | __label__fra_Latn | 0.865877 | 0.050571 |
```python
import numpy as np
import numpy.linalg as nl
import numpy.random as nr
import sympy as sy
import IPython.display as disp
sy.init_printing()
```
# 역행렬<br>Inverse matrix
## 2x2
다음 비디오는 역행렬을 찾는 가우스 조단법을 소개한다.<br>
The following video introduces the Gauss Jordan method for finding the inverse matrix. (36:23 ~ 42:20)
[](https://www.youtube.com/watch?v=FX4C-JpTFgY&list=PL221E2BBF13BECF6C&index=9&start=2183&end=2540)
아래 2x2 행렬을 생각해 보자.<br>
Let's think about the 2x2 matrix.
```python
A22 = np.array([
[1, 3],
[2, 7]
])
```
```python
A22
```
오른쪽에 같은 크기의 단위행렬을 붙여 보자.<br>
Let's augment an identity matrix of the same size.
```python
I22 = np.identity(2)
```
```python
I22
```
```python
AX22 = np.hstack([A22, I22])
```
```python
AX22
```
이제 왼쪽 2x2 부분을 단위행렬로 만들어 보자.<br>
Let's make the left 2x2 part an identity matrix.
첫 행에 2를 곱한 후 2행에서 빼 보자.<br>
Let's multiply the first row by 2 and then subtract it from the second row.
```python
AX22[1, :] -= 2 * AX22[0, :]
```
```python
AX22
```
이번에는 2번째 행에 3을 곱해서 첫 행에서 빼 보자.<br>
Now let's multiply the second row by 3 and subtract it from the first row.
```python
AX22[0, :] -= 3 * AX22[1, :]
```
```python
AX22
```
위 `AX22` 행렬에서 오른쪽 두 열을 따로 떼어 보자.<br>
Let's separate the right two columns of the `AX22` matrix above.
```python
A22_inv = AX22[:, 2:]
```
```python
A22_inv
```
A 행렬과 곱해보자.<br>
Let's multiply with the A matrix.
```python
A22 @ A22_inv
```
## `numpy`
`numpy.matrix` 의 `.I` 속성을 이용할 수도 있다.<br>
We can use the `.I` property of `numpy.matrix`.
```python
mat_A22_inv = np.matrix(A22).I
```
```python
mat_A22_inv
```
```python
A22 @ mat_A22_inv
```
또한 `numpy.linalg.inv()` 함수도 있다.<br>
Also, `numpy.linalg.inv()` function is available.
```python
A22_inv = nl.inv(A22)
```
```python
A22_inv
```
```python
A22_inv @ A22
```
## 3x3
다음 비디오는 역행렬을 찾는 가우스 조단법을 소개한다.<br>
The following video introduces the Gauss Jordan method for finding the inverse matrix.
[](https://www.youtube.com/watch?v=obts_JDS6_Q)
아래 행렬을 생각해 보자.<br>
Let's think about the following matrix.
```python
A33_list = [
[1, 0, 1],
[0, 2, 1],
[1, 1, 1],
]
```
```python
A33 = np.array(A33_list)
```
```python
A33
```
오른쪽에 같은 크기의 단위행렬을 붙여 보자.<br>
Let's augment an identity matrix of the same size.
```python
I33 = np.identity(A33.shape[0])
```
```python
I33
```
```python
AX33 = np.hstack([A33, I33])
```
```python
AX33
```
이제 왼쪽 부분을 단위행렬로 만들어 보자.<br>
Let's make the left part an identity matrix.
첫 행을 3행에서 빼 보자.<br>
Let's subtract the first row from the third row.
```python
AX33[2, :] -= AX33[0, :]
```
```python
AX33
```
이번에는 2번째 행과 3번째 행을 바꾸자.<br>
Now let's swap the second and the third rows. ([ref](https://stackoverflow.com/a/54069951))
```python
AX33[[1, 2]] = AX33[[2, 1]]
```
```python
AX33
```
두번째 행에 2를 곱해서 3행에서 빼 보자.<br>
Let's multiply the second row by 2 and subtract it from the third row.
```python
AX33[2, :] -= 2 * AX33[1, :]
```
```python
AX33
```
첫번째 행에서 3번째 행을 빼 보자.<br>
Let's subtract the third row from the first row.
```python
AX33[0, :] -= AX33[2, :]
```
```python
AX33
```
위 `AX` 행렬에서 오른쪽 세 열을 따로 떼어 보자.<br>
Let's separate the right three columns of the `AX` matrix above.
```python
A33_inv = AX33[:, 3:]
```
```python
A33_inv
```
A 행렬과 곱해보자.<br>
Let's multiply with the A matrix.
```python
A33 @ A33_inv
```
## `numpy`
`numpy.matrix` 의 `.I` 속성을 이용할 수도 있다.<br>
We can use the `.I` property of `numpy.matrix`.
```python
mat_A33_inv = np.matrix(A33).I
```
```python
mat_A33_inv
```
```python
A33 @ mat_A33_inv
```
## 표준 기능으로 구현한 가우스 조단법<br>Gauss Jordan method in Standard library
다음 셀은 가우스 조단법을 표준기능 만으로 구현한다.<br>
Following cell implements the Gauss Jordan method with standard library only.
```python
import typing
Scalar = typing.Union[int, float]
Row = typing.Union[typing.List[Scalar], typing.Tuple[Scalar]]
Matrix = typing.Union[typing.List[Row], typing.Tuple[Row]]
def get_zero(n:int) -> Matrix:
return [
[0] * n for i in range(n)
]
def get_identity(n:int) -> Matrix:
result = get_zero(n)
for i in range(n):
result[i][i] = 1
return result
def augment_mats(A:Matrix, B:Matrix):
assert len(A) == len(B)
return [row_A + row_B for row_A, row_B in zip(A, B)]
def gauss_jordan(A:Matrix) -> Matrix:
AX = augment_mats(A, get_identity(len(A)))
# pivot loop
for p in range(len(AX)):
one_over_pivot = 1.0 / AX[p][p]
# normalize a row with one_over_pivot
for j in range(len(AX[p])):
AX[p][j] *= one_over_pivot
# row loop
for i in range(len(AX)):
if i != p:
# row operation
multiplier = - AX[i][p]
# column loop
for j in range(0, len(AX[p])):
AX[i][j] += multiplier * AX[p][j]
return [row[len(A):] for row in AX]
```
위 행렬의 예로 확인해 보자.<br>
Let's check with the matrix above.
```python
mat_A33_inv_GJ = gauss_jordan(A33_list)
```
```python
import pprint
pprint.pprint(mat_A33_inv_GJ, width=40)
```
```python
np.array(mat_A33_inv_GJ) @ A33
```
## 4x4
아래 행렬을 생각해 보자.<br>
Let's think about the following matrix.
```python
A44_list = [
[1, 0, 2, 0],
[1, 1, 0, 0],
[1, 2, 0, 1],
[1, 1, 1, 1],
]
```
```python
A44 = np.array(A44_list)
```
```python
A44
```
다음 셀은 넘파이 다차원 배열 `numpy.ndarray` 을 위해 구현한 가우스 조르단 소거법 함수를 불러들인다.<br>
Following cell imports an implementation of the Gauss Jordan Elimination for a `numpy.ndarray`.
```python
import gauss_jordan
```
위 행렬에 적용해 보자.<br>
Let's apply to the matrix above.
```python
A44_inv_array = gauss_jordan.inv(A44)
```
```python
A44_inv_array
```
```python
A44_inv_array @ A44
```
```python
import numpy.testing as nt
nt.assert_array_almost_equal(A44_inv_array @ A44, np.array(get_identity(len(A44_list))))
```
## 연습 문제<br>Exercise
위 가우스 조단법에서 메모리를 더 절약하는 방안을 제안해 보시오<br>
Regarding the Gauss Jordan implementation above, propose how we can save more memory.
## 참고문헌<br>References
* Gilbert Strang. 18.06 Linear Algebra. Spring 2010. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
* Marc Peter Deisenroth, A Aldo Faisal, and Cheng Soon Ong, Mathematics For Machine Learning, Cambridge University Press, 2020, ISBN 978-1108455145.
## Final Bell<br>마지막 종
```python
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
```
```python
```
| db9e25653564fec6f10a9e8dfc127cd7bb4f51bb | 18,290 | ipynb | Jupyter Notebook | 60_linear_algebra_2/150_Inverse_matrix.ipynb | kangwonlee/2009eca-nmisp-template | 46a09c988c5e0c4efd493afa965d4a17d32985e8 | [
"BSD-3-Clause"
] | null | null | null | 60_linear_algebra_2/150_Inverse_matrix.ipynb | kangwonlee/2009eca-nmisp-template | 46a09c988c5e0c4efd493afa965d4a17d32985e8 | [
"BSD-3-Clause"
] | null | null | null | 60_linear_algebra_2/150_Inverse_matrix.ipynb | kangwonlee/2009eca-nmisp-template | 46a09c988c5e0c4efd493afa965d4a17d32985e8 | [
"BSD-3-Clause"
] | null | null | null | 18.456105 | 219 | 0.470585 | true | 2,455 | Qwen/Qwen-72B | 1. YES
2. YES | 0.817574 | 0.839734 | 0.686545 | __label__kor_Hang | 0.890633 | 0.433405 |
```python
from sympy import *
init_printing()
```
```python
K,L,r,w,p,T = symbols('K L r w p T',
real=True,
positive=True,
finite=True)
production = T * K * L
cost = r*K + w*L**2
profit = p * production - cost
profit
```
```python
DK = profit.diff(K)
DL = profit.diff(L)
DK, DL
```
```python
A = Matrix([
[0, T*p],
[T*p, -2*w]
])
b = Matrix([
[r],
[0]
])
A, b
```
```python
d = A.det()
d
```
Assume that $p >0$.
```python
Ainv = A.inv()
Ainv
```
```python
sol = solve([DK, DL], [K, L], dict=True)
sol
```
```python
Hessian = Matrix([
[DK.diff(K), DK.diff(L)],
[DL.diff(K), DL.diff(L)]
])
Hessian
```
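To classify the critical point found above, one can also check the definiteness of the Hessian; a small sketch, continuing with the symbols already defined and assuming $T, p, w > 0$:
```python
# The first leading principal minor is 0 and the determinant is -(T*p)**2 < 0,
# so the Hessian is indefinite and the critical point is a saddle point of the profit function
minor_1 = Hessian[0, 0]
det_H = simplify(Hessian.det())
minor_1, det_H
```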
```python
# What is the rank of the matrix A?
A.rank()
```
```python
```
| faa1a83889e1a9b508086f497b6df7ad34b05361 | 15,389 | ipynb | Jupyter Notebook | assets/pdfs/math_bootcamp/final2017/problem_2.ipynb | joepatten/joepatten.github.io | 4b9acc8720f3a33337368fee719902b54a6f2f68 | [
"MIT"
] | null | null | null | assets/pdfs/math_bootcamp/final2017/problem_2.ipynb | joepatten/joepatten.github.io | 4b9acc8720f3a33337368fee719902b54a6f2f68 | [
"MIT"
] | 5 | 2020-08-09T16:28:31.000Z | 2020-08-10T14:48:57.000Z | assets/pdfs/math_bootcamp/final2017/problem_2.ipynb | joepatten/joepatten.github.io | 4b9acc8720f3a33337368fee719902b54a6f2f68 | [
"MIT"
] | null | null | null | 49.964286 | 1,920 | 0.742673 | true | 254 | Qwen/Qwen-72B | 1. YES
2. YES | 0.92944 | 0.815232 | 0.75771 | __label__eng_Latn | 0.563114 | 0.598746 |
Consider the standard incomplete markets model and answer the following:
Write a python program that returns the recursive competitive equilibrium for a economy with
the following parameters:
* intertemporal discount factor($\beta$) = 0.98;
* CRRA utility function with $\sigma$ = 2;
* depreciation rate $\delta$= 0.08;
* production function: $f(K,L) = K^{\alpha}L^{1-\alpha}$
* $\alpha$ = 0.44
* state-transition matrix:
* $\pi(y'|y)=$ $$\begin{bmatrix} 0.4 & 0.5 & 0.1 \\ 0.3 & 0.2 & 0.5 \\ 0.2 & 0.4 & 0.4 \end{bmatrix}$$.
Answer:
Importing modules
```python
import numpy as np
```
The recursive problem is a little different from what we have been doing so far. A stationary recursive competitive equilibrium is a value function $v:Z \times M \rightarrow \mathbb{R}$, policy functions $a': Z \times M \rightarrow \mathbb{R}$ and $c: Z \times M \rightarrow \mathbb{R}$, policy functions for the firm $K$ and $L$, prices $r,w$ and a measure $\phi \in M$ such that:
* $v, a', c$ are measurable with respect to $B(Z)$, and $v$ satisfies the household Bellman equation:
    * $v(a,y,\phi)$ = $max_{c\geq 0, a' \geq 0}$ $(U(c) + \beta \sum_{y' \in Y} \pi(y'|y)v(a', y';\phi))$
    s.t. $c + a' = wy + (1+r)a$, where $a', c$ are the associated policy functions for given $r$ and $w$.
* $K, L$ satisfy, given $r$ and $w$:
* $r =$ $F_{k}(K,L)$ - $\delta$
* $w =$ $F_{L}(K, L)$
    * $K' \equiv \int a'(a,y;\phi) \, d\phi = K$ (because the distribution is stationary)
* $L$ = $\int y $d$\phi$
* $\int c(a,y,\phi)d\phi$ + $\int a'(a,y,\phi)d\phi$ = $F[K(\phi), L(\phi)] + (1-\delta)K(\phi)$
When a variable appears with a prime, e.g. $a'$, it stands for its future value, in this case tomorrow's asset holdings. Every future distribution is defined by the associated policy function for that variable. $r$ is the interest rate, $w$ is the wage, and $K$ and $L$ are the capital stock and labor supply respectively.
Now we solve this problem for the case at hand. The standard CRRA utility function is $u(c) = \frac{c^{1-\sigma}-1}{1-\sigma}$; here $\delta$ stands for the depreciation rate and the risk-aversion parameter is $\sigma$. (In the code below we drop the constant $-\frac{1}{1-\sigma}$, which does not affect the maximisation.)
```python
def util_func(x):
return (x**(1-sigma))/(1-sigma)
```
Note that we have not yet defined $\sigma$; we do so next, with a function that assigns values to all the parameters needed to solve for the equilibrium.
```python
def parameters_objects():
global beta, sigma,delta,alpha,pi,y_domain, a_domain, V, iG, n_y, n_a, w,r
beta = 0.98
sigma = 2
delta = 0.08
alpha = 0.44
pi = np.array([[0.4 , 0.5 , 0.1],
[0.3 , 0.2 , 0.5 ],
[0.2 , 0.4 , 0.4]])
r = 0.05 #we will need some interest rate to maximize our objective function
w = 1 #we will need the wage too
# the income domain was not defined we will create one to make sure our program works properly
y_min = 1e-2
y_max = 1
n_y = len(pi) #must have one income for each state
y_domain = np.linspace(y_min , y_max, n_y)
# neither the assets domain was defined
a_max = 10
n_a = 9 #three assets distributions for each income state
# we need a non-Ponzi condition for this case
barA = 0 # no borrow allowed
a_domain = np.linspace(-barA, a_max, n_a)
#Now we just neeed some place to store our value function and policy function
V = np.zeros((n_y, n_a)) #value
iG = np.zeros((n_y, n_a), dtype = np.int) #policy
```
Now we need to build our objective function: $U(c) + \beta \sum_{y \in Y} \pi(y'|y)v(a', y';\phi)$
```python
def build_objective(V):
global n_y, n_a, w, r
F_OBJ = np.zeros((n_y, n_a, n_a)) #one dimension to each income, asset, and future asset
#looping through all income, assets and future assets
for i_y in range(n_y):
y = y_domain[i_y]
for i_a in range(n_a):
a = a_domain[i_a]
for i_a_line in range(n_a): #i_a_line stands for future assets
aa = a_domain[i_a_line]
c = w*y + a - aa/(1+r) #consumption
if c <= 0:
F_OBJ[i_y, i_a, i_a_line] = -np.inf
else:
F_OBJ[i_y, i_a, i_a_line] = util_func(c) + beta*(np.dot(pi[i_y, :],V[:, i_a_line]))
return F_OBJ
```
Solving the problem:
```python
def maximize_TV_IG(F_OBJ):
#maximizing for time t
TV = np.zeros((n_y, n_a))
T_iG = np.zeros((n_y, n_a), dtype = np.int)
for i_y, y in enumerate(y_domain):
for i_a, a in enumerate(a_domain):
TV[i_y, i_a] = np.max(F_OBJ[i_y, i_a, :]) # max value of f_obj
T_iG[i_y, i_a] = np.argmax(F_OBJ[i_y, i_a, :]) # position associated to (y,a) pair that maximizes F_OBJ
return TV, T_iG
```
Computing the stationary state:
```python
def compute_V_G_est():
global V
norm, tol = 2, 1e-7
while norm>tol:
F_OBJ = build_objective(V)
TV, T_iG = maximize_TV_IG(F_OBJ)
norm = np.max(abs(TV - V))
V = np.copy(TV)
iG = np.copy(T_iG)
return V, iG
```
Now we introduce future assets and household heterogeneity by constructing an endogenous Markov transition matrix over the joint (income, asset) state:
```python
def compute_Q(iG):
    #This function builds the Markov transition matrix Q_r, used to compute the stationary measure phi_r
    #associated with this transition function
Q = np.zeros((n_y*n_a, n_y*n_a))
for i_y in range(n_y):
for i_a in range(n_a):
c_state = i_y*n_a + i_a
for i_y_line in range(n_y):
for i_a_line in range(n_a):
n_state = i_y_line*n_a + i_a_line
if iG[i_y, i_a] == i_a_line:
Q[c_state, n_state] += pi[i_y, i_y_line]
return Q
```
Now we compute $\phi_{r}$
```python
def compute_phi(Q):
global phi
phi = np.ones(n_y*n_a) / (n_y*n_a)
norm_Q, tol_Q = 1, 1e-6
while norm_Q > tol_Q:
T_phi = np.dot(phi, Q)
norm_Q = max(abs(T_phi - phi))
phi = T_phi/sum(T_phi)
return phi
```
We need a few more things to compute the equilibrium:
```python
# Expected savings(assets)
def compute_Ea(phi):
Ea = 0
for i_y in range(n_y):
for i_a in range(n_a):
s_index = iG[i_y, i_a]
savings = a_domain[s_index]
t_index = i_y * n_a + i_a
size = phi[t_index]
Ea += savings*size
return Ea
```
Labor supply
```python
def compute_L():
L = 0
for i_y in range(n_y):
for i_a in range(n_a):
labor_supply = y_domain[i_y]
t_index = i_y*n_a + i_a
size_l = phi[t_index]
L += labor_supply*size_l
return L
```
The production function is $f(K,L) = K^{\alpha}L^{1-\alpha}$. As we have seen:
\begin{equation}
r = F_{K}(K,L) - \delta
\end{equation}
so the capital-labor ratio is:
\begin{equation}
k = \left(\frac{\alpha}{r+\delta}\right)^{1/(1-\alpha)}
\end{equation}
```python
def compute_k(r):
k =(alpha/(r+delta))**(1/(1-alpha))
return k
```
Similarly:
\begin{equation}
w = (1-\alpha) \times (k^{\alpha})
\end{equation}
```python
def compute_w(k):
w = (1-alpha)*(k**alpha)
return w
```
At equilibrium there should be no excess demand for capital:
$d = K - E(a) = 0$. Finally, we can compute the equilibrium.
```python
def compute_d(phi):
k = compute_k(r)
L = compute_L()
K = k*L
ea = compute_Ea(phi)
d = K - ea
return d
```
```python
def compute_equilibrium():
global r, w, V, iG, Q, phi, L, k
rho = beta**(-1)-1
r_1, r_2 = -delta, rho #interest rate domain.
norm_r, tol_r = 1, 1e-10
while norm_r>tol_r:
r = (r_1+r_2)/2
k = compute_k(r)
V, iG = compute_V_G_est()
Q = compute_Q(iG)
phi = compute_phi(Q)
d = compute_d(phi)
if d>0:
r_1 = r
elif d<0:
r_2 = r
norm_r = abs(r_1-r_2)
#printing the interest rate at each step
print('[d,r_L,r_H,norm]=[{:9.6f},{:9.6f},{:9.6f},{:9.6f}]'.format(d,r_1,r_2,norm_r))
```
```python
parameters_objects()
compute_equilibrium()
```
<ipython-input-3-8bad40f2643f>:26: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
iG = np.zeros((n_y, n_a), dtype = np.int) #policy
<ipython-input-5-d39ec0f74c3b>:4: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
T_iG = np.zeros((n_y, n_a), dtype = np.int)
[d,r_L,r_H,norm]=[25.618551,-0.029796, 0.020408, 0.050204]
[d,r_L,r_H,norm]=[12.419544,-0.004694, 0.020408, 0.025102]
[d,r_L,r_H,norm]=[ 4.430990, 0.007857, 0.020408, 0.012551]
[d,r_L,r_H,norm]=[ 3.337801, 0.014133, 0.020408, 0.006276]
[d,r_L,r_H,norm]=[ 2.863614, 0.017270, 0.020408, 0.003138]
[d,r_L,r_H,norm]=[ 2.642114, 0.018839, 0.020408, 0.001569]
[d,r_L,r_H,norm]=[ 2.534993, 0.019624, 0.020408, 0.000784]
[d,r_L,r_H,norm]=[ 2.482308, 0.020016, 0.020408, 0.000392]
[d,r_L,r_H,norm]=[ 2.456181, 0.020212, 0.020408, 0.000196]
[d,r_L,r_H,norm]=[ 2.443171, 0.020310, 0.020408, 0.000098]
[d,r_L,r_H,norm]=[ 2.436679, 0.020359, 0.020408, 0.000049]
[d,r_L,r_H,norm]=[ 2.433436, 0.020384, 0.020408, 0.000025]
[d,r_L,r_H,norm]=[ 2.431816, 0.020396, 0.020408, 0.000012]
[d,r_L,r_H,norm]=[ 2.431006, 0.020402, 0.020408, 0.000006]
[d,r_L,r_H,norm]=[ 2.430601, 0.020405, 0.020408, 0.000003]
[d,r_L,r_H,norm]=[ 2.430398, 0.020407, 0.020408, 0.000002]
[d,r_L,r_H,norm]=[ 2.430297, 0.020407, 0.020408, 0.000001]
[d,r_L,r_H,norm]=[ 2.430246, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430221, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430208, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430202, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430199, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430197, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430197, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430196, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430196, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430196, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430196, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430196, 0.020408, 0.020408, 0.000000]
[d,r_L,r_H,norm]=[ 2.430196, 0.020408, 0.020408, 0.000000]
Describe how the equilibrium interest rate depends on the intertemporal discount factor, $\beta$, and on intertemporal substitution/risk aversion, $\sigma$.
Answer: (change $\beta$ and see what happens; a sketch is given below)
When $\beta=0.98$, the equilibrium interest rate is about 2.04%, which equals $\rho = 1/\beta - 1$. If the agent becomes more impatient, the corresponding $\beta$ is lower: to make the agent give up present consumption he must be compensated, which in the context of the intertemporal budget constraint means the equilibrium interest rate must be higher than before. With the lower $\beta$ considered, the equilibrium interest rate rises to about 5.9%, a large increase relative to the change in $\beta$.
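A minimal sketch of this experiment (the value $\beta = 0.95$ is an illustrative choice, not taken from the text above): reset the parameters, lower $\beta$, and recompute the equilibrium.
```python
# Re-run the equilibrium with a more impatient agent (illustrative value of beta)
parameters_objects()    # reset all parameters and containers
beta = 0.95             # lower discount factor than the baseline 0.98
compute_equilibrium()   # the equilibrium interest rate should now be higher
```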
| 0764087b10f6cf068544d46eaaa8085247b42c3c | 19,471 | ipynb | Jupyter Notebook | Recursive Equilibrium.ipynb | valcareggi/Macroeconomics | e6b1165aeacf2369b70f2d710a198962c3390864 | [
"MIT"
] | null | null | null | Recursive Equilibrium.ipynb | valcareggi/Macroeconomics | e6b1165aeacf2369b70f2d710a198962c3390864 | [
"MIT"
] | null | null | null | Recursive Equilibrium.ipynb | valcareggi/Macroeconomics | e6b1165aeacf2369b70f2d710a198962c3390864 | [
"MIT"
] | null | null | null | 29.90937 | 488 | 0.50963 | true | 4,080 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.746139 | 0.661978 | __label__eng_Latn | 0.872525 | 0.376328 |
<a href="https://colab.research.google.com/github/mohd-faizy/Probabilistic-Deep-Learning-with-TensorFlow/blob/main/Week_3_Programming_Assignment.ipynb" target="_parent"></a>
# Programming Assignment
## RealNVP for the LSUN bedroom dataset
### Instructions
In this notebook, you will develop the RealNVP normalising flow architecture from scratch, including the affine coupling layers, checkerboard and channel-wise masking, and combining into a multiscale architecture. You will train the normalising flow on a subset of the LSUN bedroom dataset.
Some code cells are provided for you in the notebook. You should avoid editing provided code, and make sure to execute the cells in order to avoid unexpected errors. Some cells begin with the line:
`#### GRADED CELL ####`
Don't move or edit this first line - this is what the automatic grader looks for to recognise graded cells. These cells require you to write your own code to complete them, and are automatically graded when you submit the notebook. Don't edit the function name or signature provided in these cells, otherwise the automatic grader might not function properly.
### How to submit
Complete all the tasks you are asked for in the worksheet. When you have finished and are happy with your code, press the **Submit Assignment** button at the top of this notebook.
### Let's get started!
We'll start running some imports, and loading the dataset. Do not edit the existing imports in the following cell. If you would like to make further Tensorflow imports, you should add them here.
```python
#### PACKAGE IMPORTS ####
# Run this cell first to import all required packages. Do not make any imports elsewhere in the notebook
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import Model, Input
from tensorflow.keras.layers import Conv2D, BatchNormalization
from tensorflow.keras.optimizers import Adam
tfd = tfp.distributions
tfb = tfp.bijectors
# If you would like to make further imports from tensorflow, add them here
from tensorflow.keras import layers
from tensorflow.keras.regularizers import l2
```
#### The LSUN Bedroom Dataset
In this assignment, you will use a subset of the [LSUN dataset](https://www.yf.io/p/lsun). This is a large-scale image dataset with 10 scene and 20 object categories. A subset of the LSUN bedroom dataset has been provided, and has already been downsampled and preprocessed into smaller, fixed-size images.
* F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser and J. Xia. "LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop". [arXiv:1506.03365](https://arxiv.org/abs/1506.03365), 10 Jun 2015
Your goal is to develop the RealNVP normalising flow architecture using bijector subclassing, and use it to train a generative model of the LSUN bedroom data subset. For full details on the RealNVP model, refer to the original paper:
* L. Dinh, J. Sohl-Dickstein and S. Bengio. "Density estimation using Real NVP". [arXiv:1605.08803](https://arxiv.org/abs/1605.08803), 27 Feb 2017.
#### Import the data
The dataset required for this project can be downloaded from the following link:
https://drive.google.com/file/d/1scbDZrn5pkRjF_CeZp66uHVQC9o1gIsg/view?usp=sharing
You should upload this file to Drive for use in this Colab notebook. It is recommended to unzip it on Drive, which can be done using the `zipfile` package:
>```
import zipfile
with zipfile.ZipFile("/path/to/lsun_bedroom.zip","r") as zip_ref:
zip_ref.extractall('lsun_bedroom_data')
```
```python
# Run this cell to connect to your Drive folder
from google_drive_downloader import GoogleDriveDownloader as gdd
gdd.download_file_from_google_drive(file_id='1scbDZrn5pkRjF_CeZp66uHVQC9o1gIsg',
dest_path='/content/lsun_bedroom/lsun_bedroom.zip',
unzip=True,
showsize=True)
```
Downloading 1scbDZrn5pkRjF_CeZp66uHVQC9o1gIsg into /content/lsun_bedroom/lsun_bedroom.zip...
115.5 MiB Done.
Unzipping...Done.
#### Load the dataset
The following functions will be useful for loading and preprocessing the dataset. The subset you will use for this assignment consists of 10,000 training images, 1000 validation images and 1000 test images.
The images have been downsampled to 32 x 32 x 3 in order to simplify the training process.
```python
# Functions for loading and preprocessing the images
def load_image(filepath):
raw_img = tf.io.read_file(filepath)
img_tensor_int = tf.image.decode_jpeg(raw_img, channels=3)
img_tensor_flt = tf.image.convert_image_dtype(img_tensor_int, tf.float32)
img_tensor_flt = tf.image.resize(img_tensor_flt, [32, 32])
img_tensor_flt = tf.image.random_flip_left_right(img_tensor_flt)
return img_tensor_flt, img_tensor_flt
def load_dataset(split):
train_list_ds = tf.data.Dataset.list_files('/content/lsun_bedroom/{}/*.jpg'.format(split), shuffle=False)
train_ds = train_list_ds.map(load_image)
return train_ds
```
```python
# Load the training, validation and testing datasets splits
train_ds = load_dataset('train')
val_ds = load_dataset('val')
test_ds = load_dataset('test')
```
```python
# Shuffle the datasets
shuffle_buffer_size = 1000
train_ds = train_ds.shuffle(shuffle_buffer_size)
val_ds = val_ds.shuffle(shuffle_buffer_size)
test_ds = test_ds.shuffle(shuffle_buffer_size)
```
```python
# Display a few examples
n_img = 4
f, axs = plt.subplots(n_img, n_img, figsize=(14, 14))
for k, image in enumerate(train_ds.take(n_img**2)):
i = k // n_img
j = k % n_img
axs[i, j].imshow(image[0])
axs[i, j].axis('off')
f.subplots_adjust(wspace=0.01, hspace=0.03)
```
```python
# Batch the Dataset objects
batch_size = 64
train_ds = train_ds.batch(batch_size)
val_ds = val_ds.batch(batch_size)
test_ds = test_ds.batch(batch_size)
```
### Affine coupling layer
We will begin the development of the RealNVP architecture with the core bijector that is called the _affine coupling layer_. This bijector can be described as follows: suppose that $x$ is a $D$-dimensional input, and let $d<D$. Then the output $y$ of the affine coupling layer is given by the following equations:
$$
\begin{align}
y_{1:d} &= x_{1:d} \tag{1}\\
y_{d+1:D} &= x_{d+1:D}\odot \exp(s(x_{1:d})) + t(x_{1:d}), \tag{2}
\end{align}
$$
where $s$ and $t$ are functions from $\mathbb{R}^d\rightarrow\mathbb{R}^{D-d}$, and define the log-scale and shift operations on the vector $x_{d+1:D}$ respectively.
The log of the Jacobian determinant for this layer is given by $\sum_{j}s(x_{1:d})_j$.
The inverse operation can be easily computed as
$$
\begin{align}
x_{1:d} &= y_{1:d}\tag{3}\\
x_{d+1:D} &= \left(y_{d+1:D} - t(y_{1:d})\right)\odot \exp(-s(y_{1:d})),\tag{4}
\end{align}
$$
In practice, we will implement equations $(1)$ and $(2)$ using a binary mask $b$:
$$
\begin{align}
\text{Forward pass:}\qquad y &= b\odot x + (1-b)\odot\left(x\odot\exp(s(b\odot x)) + t(b\odot x)\right),\tag{5}\\
\text{Inverse pass:}\qquad x &= b\odot y + (1-b)\odot\left(\left(y - t(b\odot x)\right) \odot\exp( -s(b\odot x))\right).\tag{6}
\end{align}
$$
Our inputs $x$ will be a batch of 3-dimensional Tensors with `height`, `width` and `channels` dimensions. As in the original architecture, we will use both spatial 'checkerboard' masks and channel-wise masks:
<center>Figure 1. Spatial checkerboard mask (left) and channel-wise mask (right). From the original paper.</center>
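As a concrete illustration of equations $(5)$ and $(6)$, here is a minimal sketch of the masked forward and inverse transformations (the function names are illustrative; `shift_and_log_scale_fn` stands for any callable returning `[shift, log_scale]`, such as the ResNet built in the next section):
```python
# Illustrative sketch of the masked affine coupling transformation, equations (5) and (6)
def coupling_forward(x, b, shift_and_log_scale_fn):
    shift, log_scale = shift_and_log_scale_fn(b * x)
    return b * x + (1 - b) * (x * tf.exp(log_scale) + shift)

def coupling_inverse(y, b, shift_and_log_scale_fn):
    # b * y equals b * x, since the masked entries are unchanged by the forward pass
    shift, log_scale = shift_and_log_scale_fn(b * y)
    return b * y + (1 - b) * ((y - shift) * tf.exp(-log_scale))
```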
#### Custom model for log-scale and shift
You should now create a custom model for the shift and log-scale parameters that are used in the affine coupling layer bijector. We will use a convolutional residual network, with two residual blocks and a final convolutional layer. Using the functional API, build the model according to the following specifications:
* The function takes the `input_shape` and `filters` as arguments
* The model should use the `input_shape` in the function argument to set the shape in the Input layer (call this layer `h0`).
* The first hidden layer should be a Conv2D layer with number of filters set by the `filters` argument, and a ReLU activation
* The second hidden layer should be a BatchNormalization layer
* The third hidden layer should be a Conv2D layer with the same number of filters as the input `h0` to the model, and a ReLU activation
* The fourth hidden layer should be a BatchNormalization layer
* The fifth hidden layer should be the sum of the fourth hidden layer output and the inputs `h0`. Call this layer `h1`
* The sixth hidden layer should be a Conv2D layer with filters set by the `filters` argument, and a ReLU activation
* The seventh hidden layer should be a BatchNormalization layer
* The eighth hidden layer should be a Conv2D layer with the same number of filters as `h1` (and `h0`), and a ReLU activation
* The ninth hidden layer should be a BatchNormalization layer
* The tenth hidden layer should be the sum of the ninth hidden layer output and `h1`
* The eleventh hidden layer should be a Conv2D layer with the number of filters equal to twice the number of channels of the model input, and a linear activation. Call this layer `h2`
* The twelfth hidden layer should split `h2` into two equal-sized Tensors along the final channel axis. These two Tensors are the shift and log-scale Tensors, and should each have the same shape as the model input
* The final layer should then apply the `tanh` nonlinearity to the log_scale Tensor. The outputs to the model should then be the list of Tensors `[shift, log_scale]`
All Conv2D layers should use a 3x3 kernel size, `"SAME"` padding and an $l2$ kernel regularizer with regularisation coefficient of `5e-5`.
_Hint: use_ `tf.split` _with arguments_ `num_or_size_splits=2, axis=-1` _to create the output Tensors_.
In total, the network should have 14 layers (including the `Input` layer).
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_conv_resnet(input_shape, filters):
"""
This function should build a CNN ResNet model according to the above specification,
using the functional API. The function takes input_shape as an argument, which should be
used to specify the shape in the Input layer, as well as a filters argument, which
should be used to specify the number of filters in (some of) the convolutional layers.
Your function should return the model.
"""
h0 = layers.Input(shape=input_shape)
h = layers.Conv2D(filters=filters, kernel_size=(3,3), padding="SAME", kernel_regularizer=l2(5e-5), activation="relu")(h0)
h = layers.BatchNormalization()(h)
h = layers.Conv2D(filters=input_shape[-1], kernel_size=(3,3), padding="SAME", kernel_regularizer=l2(5e-5), activation="relu")(h)
h = layers.BatchNormalization()(h)
h1 = layers.Add()([h0, h])
h = layers.Conv2D(filters=filters, kernel_size=(3,3), padding="SAME", kernel_regularizer=l2(5e-5), activation="relu")(h1)
h = layers.BatchNormalization()(h)
h = layers.Conv2D(filters=input_shape[-1], kernel_size=(3,3), padding="SAME", kernel_regularizer=l2(5e-5), activation="relu")(h)
h = layers.BatchNormalization()(h)
h = layers.Add()([h1, h])
h2 = layers.Conv2D(filters=2*input_shape[-1], kernel_size=(3,3), padding="SAME", kernel_regularizer=l2(5e-5), activation="linear")(h)
shift, log_scale = layers.Lambda(lambda t: tf.split(t, num_or_size_splits=2, axis=-1))(h2)
log_scale = layers.Activation(activation="tanh")(log_scale)
model = Model(inputs=h0, outputs=[shift, log_scale])
return model
```
```python
# Test your function and print the model summary
conv_resnet = get_conv_resnet((32, 32, 3), 32)
conv_resnet.summary()
```
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 32, 32, 3)] 0
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 32, 32, 32) 896 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 32, 32, 32) 128 conv2d[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 32, 3) 867 batch_normalization[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 32, 32, 3) 12 conv2d_1[0][0]
__________________________________________________________________________________________________
add (Add) (None, 32, 32, 3) 0 input_1[0][0]
batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 32, 32, 32) 896 add[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 32, 32, 32) 128 conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 32, 32, 3) 867 batch_normalization_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 32, 32, 3) 12 conv2d_3[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 32, 32, 3) 0 add[0][0]
batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 32, 32, 6) 168 add_1[0][0]
__________________________________________________________________________________________________
lambda (Lambda) [(None, 32, 32, 3), 0 conv2d_4[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 32, 32, 3) 0 lambda[0][1]
==================================================================================================
Total params: 3,974
Trainable params: 3,834
Non-trainable params: 140
__________________________________________________________________________________________________
You can also inspect your model architecture graphically by running the following cell. It should look something like the following:
```python
# Plot the model graph
tf.keras.utils.plot_model(conv_resnet, show_layer_names=False, rankdir='LR')
```
```python
# Check the output shapes are as expected
print(conv_resnet(tf.random.normal((1, 32, 32, 3)))[0].shape)
print(conv_resnet(tf.random.normal((1, 32, 32, 3)))[1].shape)
```
(1, 32, 32, 3)
(1, 32, 32, 3)
#### Binary masks
Now that you have built the shift and log-scale model, we will implement the affine coupling layer. We first need functions to create the binary masks $b$ as described above. The following function creates the spatial 'checkerboard' mask.
It takes a rank-2 `shape` as input, which corresponds to the `height` and `width` dimensions, as well as an `orientation` argument (an integer equal to `0` or `1`) that determines which way round the zeros and ones are entered into the Tensor.
```python
# Function to create the checkerboard mask
def checkerboard_binary_mask(shape, orientation=0):
height, width = shape[0], shape[1]
height_range = tf.range(height)
width_range = tf.range(width)
height_odd_inx = tf.cast(tf.math.mod(height_range, 2), dtype=tf.bool)
width_odd_inx = tf.cast(tf.math.mod(width_range, 2), dtype=tf.bool)
odd_rows = tf.tile(tf.expand_dims(height_odd_inx, -1), [1, width])
odd_cols = tf.tile(tf.expand_dims(width_odd_inx, 0), [height, 1])
checkerboard_mask = tf.math.logical_xor(odd_rows, odd_cols)
if orientation == 1:
checkerboard_mask = tf.math.logical_not(checkerboard_mask)
return tf.cast(tf.expand_dims(checkerboard_mask, -1), tf.float32)
```
This function creates a rank-3 Tensor to mask the `height`, `width` and `channels` dimensions of the input. We can take a look at this checkerboard mask for some example inputs below. In order to make the Tensors easier to inspect, we will squeeze out the single channel dimension (which is always 1 for this mask).
```python
# Run the checkerboard_binary_mask function to see an example
# NB: we squeeze the shape for easier viewing. The full shape is (4, 4, 1)
tf.squeeze(checkerboard_binary_mask((4, 4), orientation=0))
```
<tf.Tensor: shape=(4, 4), dtype=float32, numpy=
array([[0., 1., 0., 1.],
[1., 0., 1., 0.],
[0., 1., 0., 1.],
[1., 0., 1., 0.]], dtype=float32)>
```python
# The `orientation` should be 0 or 1, and determines which way round the binary entries are
tf.squeeze(checkerboard_binary_mask((4, 4), orientation=1))
```
<tf.Tensor: shape=(4, 4), dtype=float32, numpy=
array([[1., 0., 1., 0.],
[0., 1., 0., 1.],
[1., 0., 1., 0.],
[0., 1., 0., 1.]], dtype=float32)>
You should now complete the following function to create a channel-wise mask. This function takes a single integer `num_channels` as an input, as well as an `orientation` argument, similar to above. You can assume that the `num_channels` integer is even.
The function should return a rank-3 Tensor with singleton entries for `height` and `width`. In the channel axis, the first `num_channels // 2` entries should be zero (for `orientation=0`) and the final `num_channels // 2` entries should be one (for `orientation=0`). The zeros and ones should be reversed for `orientation=1`. The `dtype` of the returned Tensor should be `tf.float32`.
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def channel_binary_mask(num_channels, orientation=0):
"""
This function takes an integer num_channels and orientation (0 or 1) as
arguments. It should create a channel-wise binary mask with
dtype=tf.float32, according to the above specification.
The function should then return the binary mask.
"""
if orientation == 0:
return tf.concat([tf.zeros((1,1, num_channels//2), dtype=tf.float32),
tf.ones((1,1, num_channels - num_channels//2), dtype=tf.float32)], axis=-1)
return tf.concat([tf.ones((1,1, num_channels//2), dtype=tf.float32),
tf.zeros((1,1, num_channels - num_channels//2), dtype=tf.float32)], axis=-1)
```
```python
# Run your function to see an example channel-wise binary mask
channel_binary_mask(6, orientation=0)
```
<tf.Tensor: shape=(1, 1, 6), dtype=float32, numpy=array([[[0., 0., 0., 1., 1., 1.]]], dtype=float32)>
```python
#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function names or arguments.
def forward(x, b, shift_and_log_scale_fn):
"""
This function takes the input Tensor x, binary mask b and callable
shift_and_log_scale_fn as arguments.
This function should implement the forward transformation in equation (5)
and return the output Tensor y, which will have the same shape as x
"""
t_shift, s_log_scale = shift_and_log_scale_fn(b*x)
return b*x + (1-b)*(x*tf.math.exp(s_log_scale) + t_shift)
def inverse(y, b, shift_and_log_scale_fn):
"""
    This function takes the input Tensor y, binary mask b and callable
    shift_and_log_scale_fn as arguments.
    This function should implement the inverse transformation in equation (6)
    and return the output Tensor x, which will have the same shape as y
"""
t_shift, s_log_scale = shift_and_log_scale_fn(b*y)
return b*y + (1-b)*((y - t_shift)*tf.math.exp(-s_log_scale))
```
The new bijector class also requires the `log_det_jacobian` methods to be implemented. Recall that the log of the Jacobian determinant of the forward transformation is given by $\sum_{j}s(x_{1:d})_j$, where $s$ is the log-scale function of the affine coupling layer.
You should now complete the following functions to define the `forward_log_det_jacobian` and `inverse_log_det_jacobian` methods of the affine coupling layer bijector.
* Both functions `forward_log_det_jacobian` and `inverse_log_det_jacobian` takes an input Tensor `x` (or `y`), a rank-3 binary mask `b`, and the `shift_and_log_scale_fn` callable
* These arguments are the same as the description for the `forward` and `inverse` functions
* The `forward_log_det_jacobian` function should implement the log of the Jacobian determinant for the transformation $(5)$
* The `inverse_log_det_jacobian` function should implement the log of the Jacobian determinant for the transformation $(6)$
* Both functions should reduce sum over the last three axes of the input Tensor (`height`, `width` and `channels`)
```python
#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function names or arguments.
def forward_log_det_jacobian(x, b, shift_and_log_scale_fn):
"""
This function takes the input Tensor x, binary mask b and callable
shift_and_log_scale_fn as arguments.
This function should compute and return the log of the Jacobian determinant
of the forward transformation in equation (5)
"""
_, s_log_scale = shift_and_log_scale_fn(b*x)
return tf.reduce_sum((1-b)*s_log_scale, axis=[-1,-2,-3])
def inverse_log_det_jacobian(y, b, shift_and_log_scale_fn):
"""
This function takes the input Tensor y, binary mask b and callable
shift_and_log_scale_fn as arguments.
    This function should compute and return the log of the Jacobian determinant
    of the inverse transformation in equation (6)
"""
_, s_log_scale = shift_and_log_scale_fn(b*y)
return -tf.reduce_sum((1-b)*s_log_scale, axis=[-1,-2,-3])
```
You are now ready to create the coupling layer bijector, using bijector subclassing. You should complete the class below to define the `AffineCouplingLayer`.
* You should complete the initialiser `__init__`, and the internal class method `_get_mask`
* The `_forward`, `_inverse`, `_forward_log_det_jacobian` and `_inverse_log_det_jacobian` methods are completed for you using the functions you have written above. Do not modify these methods
* The initialiser takes the `shift_and_log_scale_fn` callable, `mask_type` string (either `"checkerboard"` or `"channel"`) and `orientation` (integer, either `0` or `1`) as required arguments, and allows for extra keyword arguments
* The required arguments should be set as class attributes in the initialiser (note that the `shift_and_log_scale_fn` attribute is being used in the `_forward`, `_inverse`, `_forward_log_det_jacobian` and `_inverse_log_det_jacobian` methods)
* The initialiser should call the base class initialiser, and pass in any extra keyword arguments
* The class should have a required number of event dimensions equal to 3
* The internal method `_get_mask` takes a `shape` as an argument, which is the shape of an input Tensor
* This method should use the `checkerboard_binary_mask` and `channel_binary_mask` functions above, as well as the `mask_type` and `orientation` arguments passed to the initialiser to compute and return the required binary mask
* This method is used in each of the `_forward`, `_inverse`, `_forward_log_det_jacobian` and `_inverse_log_det_jacobian` methods
```python
#### GRADED CELL ####
# Complete the following class.
# Make sure to not change the class or method names or arguments.
class AffineCouplingLayer(tfb.Bijector):
"""
Class to implement the affine coupling layer.
Complete the __init__ and _get_mask methods according to the instructions above.
"""
def __init__(self, shift_and_log_scale_fn, mask_type, orientation, **kwargs):
"""
The class initialiser takes the shift_and_log_scale_fn callable, mask_type,
orientation and possibly extra keywords arguments. It should call the
base class initialiser, passing any extra keyword arguments along.
It should also set the required arguments as class attributes.
"""
super(AffineCouplingLayer, self).__init__(forward_min_event_ndims=3, **kwargs)
self.shift_and_log_scale_fn = shift_and_log_scale_fn
self.mask_type = mask_type
self.orientation = orientation
def _get_mask(self, shape):
"""
This internal method should use the binary mask functions above to compute
and return the binary mask, according to the arguments passed in to the
initialiser.
"""
if self.mask_type == "channel":
return channel_binary_mask(shape[-1], self.orientation)
return checkerboard_binary_mask(shape[1:], self.orientation)
def _forward(self, x):
b = self._get_mask(x.shape)
return forward(x, b, self.shift_and_log_scale_fn)
def _inverse(self, y):
b = self._get_mask(y.shape)
return inverse(y, b, self.shift_and_log_scale_fn)
def _forward_log_det_jacobian(self, x):
b = self._get_mask(x.shape)
return forward_log_det_jacobian(x, b, self.shift_and_log_scale_fn)
def _inverse_log_det_jacobian(self, y):
b = self._get_mask(y.shape)
return inverse_log_det_jacobian(y, b, self.shift_and_log_scale_fn)
```
```python
# Test your function by creating an instance of the AffineCouplingLayer class
affine_coupling_layer = AffineCouplingLayer(conv_resnet, 'channel', orientation=1,
name='affine_coupling_layer')
```
```python
# The following should return a Tensor of the same shape as the input
affine_coupling_layer.forward(tf.random.normal((16, 32, 32, 3))).shape
```
TensorShape([16, 32, 32, 3])
```python
# The following should compute a log_det_jacobian for each event in the batch
affine_coupling_layer.forward_log_det_jacobian(tf.random.normal((16, 32, 32, 3)), event_ndims=3).shape
```
TensorShape([16])
#### Combining the affine coupling layers
In the affine coupling layer, part of the input remains unchanged in the transformation $(5)$. In order to allow transformation of all of the input, several coupling layers are composed, with the orientation of the mask being reversed in subsequent layers.
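As a minimal sketch of this idea, using the `AffineCouplingLayer` completed above (the filter count of 32 is an arbitrary choice for illustration), two layers with opposite mask orientations can be composed with `tfb.Chain`:

```python
# Sketch: two coupling layers with alternating checkerboard masks.
# With tfb.Chain, the last bijector in the list is applied first.
fn_a = get_conv_resnet((32, 32, 3), 32)
fn_b = get_conv_resnet((32, 32, 3), 32)
coupling_pair = tfb.Chain([
    AffineCouplingLayer(fn_b, 'checkerboard', orientation=1),
    AffineCouplingLayer(fn_a, 'checkerboard', orientation=0),
])
# After the two layers, every dimension of the input has been transformed at least once
print(coupling_pair.forward(tf.random.normal((4, 32, 32, 3))).shape)
```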
```python
# Run this cell to download and view a sketch of the affine coupling layers
!wget -q -O alternating_masks.png --no-check-certificate "https://docs.google.com/uc?export=download&id=1r1vASfLOW3kevxRzFUXhCtHN8dzldHve"
Image("alternating_masks.png", width=800)
```
<center>Figure 2. RealNVP alternates the orientation of masks from one affine coupling layer to the next. From the original paper.</center>
Our model design will be similar to the original architecture; we will compose three affine coupling layers with checkerboard masking, followed by a batch normalization bijector (`tfb.BatchNormalization` is a built-in bijector), followed by a squeezing operation, followed by three more affine coupling layers with channel-wise masking and a final batch normalization bijector.
The squeezing operation divides the spatial dimensions into 2x2 squares, and reshapes a Tensor of shape `(H, W, C)` into a Tensor of shape `(H // 2, W // 2, 4 * C)` as shown in Figure 1.
The squeezing operation is also a bijective operation, and has been provided for you in the class below.
```python
# Bijector class for the squeezing operation
class Squeeze(tfb.Bijector):
def __init__(self, name='Squeeze', **kwargs):
super(Squeeze, self).__init__(forward_min_event_ndims=3, is_constant_jacobian=True,
name=name, **kwargs)
def _forward(self, x):
input_shape = x.shape
height, width, channels = input_shape[-3:]
y = tfb.Reshape((height // 2, 2, width // 2, 2, channels), event_shape_in=(height, width, channels))(x)
y = tfb.Transpose(perm=[0, 2, 1, 3, 4])(y)
y = tfb.Reshape((height // 2, width // 2, 4 * channels),
event_shape_in=(height // 2, width // 2, 2, 2, channels))(y)
return y
def _inverse(self, y):
input_shape = y.shape
height, width, channels = input_shape[-3:]
x = tfb.Reshape((height, width, 2, 2, channels // 4), event_shape_in=(height, width, channels))(y)
x = tfb.Transpose(perm=[0, 2, 1, 3, 4])(x)
x = tfb.Reshape((2 * height, 2 * width, channels // 4),
event_shape_in=(height, 2, width, 2, channels // 4))(x)
return x
def _forward_log_det_jacobian(self, x):
return tf.constant(0., x.dtype)
def _inverse_log_det_jacobian(self, y):
return tf.constant(0., y.dtype)
def _forward_event_shape_tensor(self, input_shape):
height, width, channels = input_shape[-3], input_shape[-2], input_shape[-1]
return height // 2, width // 2, 4 * channels
def _inverse_event_shape_tensor(self, output_shape):
height, width, channels = output_shape[-3], output_shape[-2], output_shape[-1]
return height * 2, width * 2, channels // 4
```
You can see the effect of the squeezing operation on some example inputs in the cells below. In the forward transformation, each spatial dimension is halved, whilst the channel dimension is multiplied by 4. The opposite happens in the inverse transformation.
```python
# Test the Squeeze bijector
squeeze = Squeeze()
squeeze(tf.ones((10, 32, 32, 3))).shape
```
TensorShape([10, 16, 16, 12])
```python
# Test the inverse operation
squeeze.inverse(tf.ones((10, 4, 4, 96))).shape
```
TensorShape([10, 8, 8, 24])
We can now construct a block of coupling layers according to the architecture described above. You should complete the following function to chain together the bijectors that we have constructed, to form a bijector that performs the following operations in the forward transformation:
* Three `AffineCouplingLayer` bijectors with `"checkerboard"` masking with orientations `0, 1, 0` respectively
* A `BatchNormalization` bijector
* A `Squeeze` bijector
* Three more `AffineCouplingLayer` bijectors with `"channel"` masking with orientations `0, 1, 0` respectively
* Another `BatchNormalization` bijector
The function takes the following arguments:
* `shift_and_log_scale_fns`: a list or tuple of six conv_resnet models
* The first three models in this list are used in the three coupling layers with checkerboard masking
* The last three models in this list are used in the three coupling layers with channel masking
* `squeeze`: an instance of the `Squeeze` bijector
_NB: at this point, we would like to point out that we are following the exposition in the original paper, and think of the forward transformation as acting on the input image. Note that this is in contrast to the convention of using the forward transformation for sampling, and the inverse transformation for computing log probs._
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def realnvp_block(shift_and_log_scale_fns, squeeze):
"""
This function takes a list or tuple of six conv_resnet models, and an
instance of the Squeeze bijector.
The function should construct the chain of bijectors described above,
using the conv_resnet models in the coupling layers.
The function should then return the chained bijector.
"""
block = [AffineCouplingLayer(shift_and_log_scale_fns[0], 'checkerboard', orientation=0),
AffineCouplingLayer(shift_and_log_scale_fns[1], 'checkerboard', orientation=1),
AffineCouplingLayer(shift_and_log_scale_fns[2], 'checkerboard', orientation=0),
tfb.BatchNormalization(),
squeeze,
AffineCouplingLayer(shift_and_log_scale_fns[3], 'channel', orientation=0),
AffineCouplingLayer(shift_and_log_scale_fns[4], 'channel', orientation=1),
AffineCouplingLayer(shift_and_log_scale_fns[5], 'channel', orientation=0),
tfb.BatchNormalization()
]
return tfb.Chain(list(reversed(block)))
```
```python
# Run your function to create an instance of the bijector
checkerboard_fns = []
for _ in range(3):
checkerboard_fns.append(get_conv_resnet((32, 32, 3), 512))
channel_fns = []
for _ in range(3):
channel_fns.append(get_conv_resnet((16, 16, 12), 512))
block = realnvp_block(checkerboard_fns + channel_fns, squeeze)
```
```python
# Test the bijector on a dummy input
block.forward(tf.random.normal((10, 32, 32, 3))).shape
```
TensorShape([10, 16, 16, 12])
#### Multiscale architecture
The final component of the RealNVP is the multiscale architecture. The squeeze operation reduces the spatial dimensions but increases the channel dimensions. After one of the blocks of coupling-squeeze-coupling that you have implemented above, half of the dimensions are factored out as latent variables, while the other half is further processed through subsequent layers. This results in latent variables that represent different scales of features in the model.
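Schematically, writing $h_1 = \mathrm{block}_1(x)$ and splitting along the channel axis, $(z_1, h_2) = \mathrm{split}(h_1)$, the latent variable is $z = \mathrm{concat}\left(z_1, \mathrm{block}_2(h_2)\right)$, with $z_1$ passed through unchanged by the second block. This is what the `_forward` method of the bijector below implements.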
```python
# Run this cell to download and view a sketch of the multiscale architecture
!wget -q -O multiscale.png --no-check-certificate "https://docs.google.com/uc?export=download&id=19Sc6PKbc8Bi2DoyupHZxHvB3m6tw-lki"
Image("multiscale.png", width=700)
```
<center>Figure 3. RealNVP creates latent variables at different scales by factoring out half of the dimensions at each scale. From the original paper.</center>
The final scale does not use the squeezing operation, and instead applies four affine coupling layers with alternating checkerboard masks.
The multiscale architecture for two latent variable scales is implemented for you in the following bijector.
```python
# Bijector to implement the multiscale architecture
class RealNVPMultiScale(tfb.Bijector):
def __init__(self, **kwargs):
super(RealNVPMultiScale, self).__init__(forward_min_event_ndims=3, **kwargs)
# First level
shape1 = (32, 32, 3) # Input shape
shape2 = (16, 16, 12) # Shape after the squeeze operation
shape3 = (16, 16, 6) # Shape after factoring out the latent variable
self.conv_resnet1 = get_conv_resnet(shape1, 64)
self.conv_resnet2 = get_conv_resnet(shape1, 64)
self.conv_resnet3 = get_conv_resnet(shape1, 64)
self.conv_resnet4 = get_conv_resnet(shape2, 128)
self.conv_resnet5 = get_conv_resnet(shape2, 128)
self.conv_resnet6 = get_conv_resnet(shape2, 128)
self.squeeze = Squeeze()
self.block1 = realnvp_block([self.conv_resnet1, self.conv_resnet2,
self.conv_resnet3, self.conv_resnet4,
self.conv_resnet5, self.conv_resnet6], self.squeeze)
# Second level
self.conv_resnet7 = get_conv_resnet(shape3, 128)
self.conv_resnet8 = get_conv_resnet(shape3, 128)
self.conv_resnet9 = get_conv_resnet(shape3, 128)
self.conv_resnet10 = get_conv_resnet(shape3, 128)
self.coupling_layer1 = AffineCouplingLayer(self.conv_resnet7, 'checkerboard', 0)
self.coupling_layer2 = AffineCouplingLayer(self.conv_resnet8, 'checkerboard', 1)
self.coupling_layer3 = AffineCouplingLayer(self.conv_resnet9, 'checkerboard', 0)
self.coupling_layer4 = AffineCouplingLayer(self.conv_resnet10, 'checkerboard', 1)
self.block2 = tfb.Chain([self.coupling_layer4, self.coupling_layer3,
self.coupling_layer2, self.coupling_layer1])
def _forward(self, x):
h1 = self.block1.forward(x)
z1, h2 = tf.split(h1, 2, axis=-1)
z2 = self.block2.forward(h2)
return tf.concat([z1, z2], axis=-1)
def _inverse(self, y):
z1, z2 = tf.split(y, 2, axis=-1)
h2 = self.block2.inverse(z2)
h1 = tf.concat([z1, h2], axis=-1)
return self.block1.inverse(h1)
def _forward_log_det_jacobian(self, x):
log_det1 = self.block1.forward_log_det_jacobian(x, event_ndims=3)
h1 = self.block1.forward(x)
_, h2 = tf.split(h1, 2, axis=-1)
log_det2 = self.block2.forward_log_det_jacobian(h2, event_ndims=3)
return log_det1 + log_det2
def _inverse_log_det_jacobian(self, y):
z1, z2 = tf.split(y, 2, axis=-1)
h2 = self.block2.inverse(z2)
log_det2 = self.block2.inverse_log_det_jacobian(z2, event_ndims=3)
h1 = tf.concat([z1, h2], axis=-1)
log_det1 = self.block1.inverse_log_det_jacobian(h1, event_ndims=3)
return log_det1 + log_det2
def _forward_event_shape_tensor(self, input_shape):
height, width, channels = input_shape[-3], input_shape[-2], input_shape[-1]
return height // 4, width // 4, 16 * channels
def _inverse_event_shape_tensor(self, output_shape):
height, width, channels = output_shape[-3], output_shape[-2], output_shape[-1]
return 4 * height, 4 * width, channels // 16
```
```python
# Create an instance of the multiscale architecture
multiscale_bijector = RealNVPMultiScale()
```
#### Data preprocessing bijector
We will also preprocess the image data before sending it through the RealNVP model. To do this, for a Tensor $x$ of pixel values in $[0, 1]^D$, we transform $x$ according to the following:
$$
T(x) = \text{logit}\left(\alpha + (1 - 2\alpha)x\right),\tag{7}
$$
where $\alpha$ is a parameter, and the logit function is the inverse of the sigmoid function, and is given by
$$
\text{logit}(p) = \log (p) - \log (1 - p).
$$
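For reference, the inverse of this transformation (used when mapping samples back towards pixel space) is

$$
T^{-1}(y) = \frac{\sigma(y) - \alpha}{1 - 2\alpha}, \qquad \sigma(y) = \frac{1}{1 + e^{-y}},
$$

which follows directly from $(7)$.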
You should now complete the following function to construct this bijector from in-built bijectors from the bijectors module.
* The function takes the parameter `alpha` as an input, which you can assume to take a small positive value ($\ll0.5$)
* The function should construct and return a bijector that computes $(7)$ in the forward pass
```python
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_preprocess_bijector(alpha):
"""
This function should create a chained bijector that computes the
transformation T in equation (7) above.
This can be computed using in-built bijectors from the bijectors module.
Your function should then return the chained bijector.
"""
return tfb.Chain([tfb.Invert(tfb.Sigmoid()),
tfb.Shift(alpha),
tfb.Scale(1 - 2*alpha)])
```
```python
# Create an instance of the preprocess bijector
preprocess = get_preprocess_bijector(0.05)
```
#### Train the RealNVP model
Finally, we will train our RealNVP model on the image data.
We will use the following model class to help with the training process.
```python
# Helper class for training
class RealNVPModel(Model):
def __init__(self, **kwargs):
super(RealNVPModel, self).__init__(**kwargs)
self.preprocess = get_preprocess_bijector(0.05)
self.realnvp_multiscale = RealNVPMultiScale()
self.bijector = tfb.Chain([self.realnvp_multiscale, self.preprocess])
def build(self, input_shape):
output_shape = self.bijector(tf.expand_dims(tf.zeros(input_shape[1:]), axis=0)).shape
self.base = tfd.Independent(tfd.Normal(loc=tf.zeros(output_shape[1:]), scale=1.),
reinterpreted_batch_ndims=3)
self._bijector_variables = (
list(self.bijector.variables))
self.flow = tfd.TransformedDistribution(
distribution=self.base,
bijector=tfb.Invert(self.bijector),
)
super(RealNVPModel, self).build(input_shape)
def call(self, inputs, training=None, **kwargs):
return self.flow
def sample(self, batch_size):
sample = self.base.sample(batch_size)
return self.bijector.inverse(sample)
```
```python
# Create an instance of the RealNVPModel class
realnvp_model = RealNVPModel()
realnvp_model.build((1, 32, 32, 3))
```
```python
# Compute the number of variables in the model
print("Total trainable variables:")
print(sum([np.prod(v.shape) for v in realnvp_model.trainable_variables]))
```
Total trainable variables:
315180
Note that the model's `call` method returns the `TransformedDistribution` object. Also, we have set up our datasets to return the input image twice as a 2-tuple. This is so we can train our model with negative log-likelihood as normal.
```python
# Define the negative log-likelihood loss function
def nll(y_true, y_pred):
return -y_pred.log_prob(y_true)
```
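For reference, a sketch of the kind of input pipeline assumed here is shown below; the dataset name and batch size are assumptions for illustration, since the actual `train_ds`, `val_ds` and `test_ds` were defined earlier in the notebook.

```python
# Hypothetical sketch of an (image, image) dataset pipeline matching the nll loss above.
# The actual train_ds / val_ds / test_ds were built earlier in the notebook.
import tensorflow as tf
import tensorflow_datasets as tfds  # assumption: data loaded via TFDS

def to_pair(example):
    x = tf.cast(example['image'], tf.float32) / 255.  # scale pixels to [0, 1]
    return x, x  # y_true is the image itself, so nll(y_true, model_output) can be used

# Example (assumed dataset and batch size):
# train_ds = tfds.load('cifar10', split='train').map(to_pair).batch(64)
```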
It is recommended to use the GPU accelerator hardware on Colab to train this model, as it can take some time to train. Note that it is not required to train the model in order to pass this assignment. For optimal results, a larger model should be trained for longer.
```python
# Compile and train the model
optimizer = Adam()
realnvp_model.compile(loss=nll, optimizer=Adam())
realnvp_model.fit(train_ds, validation_data=val_ds, epochs=30)
```
Epoch 1/30
938/938 [==============================] - 177s 174ms/step - loss: -707.8909 - val_loss: -5840.3013
Epoch 2/30
938/938 [==============================] - 159s 169ms/step - loss: -6111.2559 - val_loss: -6756.3145
Epoch 3/30
938/938 [==============================] - 159s 169ms/step - loss: -6910.6631 - val_loss: -6927.1729
Epoch 4/30
938/938 [==============================] - 159s 170ms/step - loss: -7312.1277 - val_loss: -7508.2822
Epoch 5/30
938/938 [==============================] - 159s 170ms/step - loss: -7643.3239 - val_loss: -7852.6743
Epoch 6/30
938/938 [==============================] - 160s 170ms/step - loss: -7877.9718 - val_loss: -8059.9521
Epoch 7/30
938/938 [==============================] - 160s 170ms/step - loss: -8029.5228 - val_loss: -8154.4463
Epoch 8/30
938/938 [==============================] - 160s 170ms/step - loss: -8096.0561 - val_loss: -8171.3911
Epoch 9/30
938/938 [==============================] - 161s 171ms/step - loss: -8275.6569 - val_loss: -8381.8096
Epoch 10/30
938/938 [==============================] - 160s 171ms/step - loss: -8362.6644 - val_loss: -8478.6221
Epoch 11/30
938/938 [==============================] - 160s 170ms/step - loss: -8414.7269 - val_loss: -8432.4609
Epoch 12/30
938/938 [==============================] - 160s 170ms/step - loss: -8513.4065 - val_loss: -8552.8350
Epoch 13/30
938/938 [==============================] - 160s 170ms/step - loss: -8580.6069 - val_loss: -8183.6743
Epoch 14/30
938/938 [==============================] - 160s 171ms/step - loss: -8577.6506 - val_loss: -8218.2852
Epoch 15/30
938/938 [==============================] - 161s 171ms/step - loss: -8641.1439 - val_loss: -8737.7217
Epoch 16/30
938/938 [==============================] - 161s 172ms/step - loss: -8719.1222 - val_loss: -8792.2207
Epoch 17/30
938/938 [==============================] - 160s 171ms/step - loss: -8745.2288 - val_loss: -8806.3740
Epoch 18/30
938/938 [==============================] - 161s 171ms/step - loss: -8789.5733 - val_loss: -8812.3027
Epoch 19/30
938/938 [==============================] - 161s 171ms/step - loss: -8822.5408 - val_loss: -8733.9414
Epoch 20/30
938/938 [==============================] - 160s 170ms/step - loss: -8845.0601 - val_loss: -8685.7627
Epoch 21/30
938/938 [==============================] - 158s 168ms/step - loss: -8790.0170 - val_loss: -8930.2686
Epoch 22/30
938/938 [==============================] - 158s 168ms/step - loss: -8892.3129 - val_loss: -8887.8994
Epoch 23/30
938/938 [==============================] - 158s 168ms/step - loss: -8916.0576 - val_loss: -8944.6865
Epoch 24/30
938/938 [==============================] - 158s 168ms/step - loss: -8948.5608 - val_loss: -9006.7412
Epoch 25/30
938/938 [==============================] - 158s 168ms/step - loss: -8946.5670 - val_loss: -9010.3037
Epoch 26/30
938/938 [==============================] - 158s 168ms/step - loss: -8982.3949 - val_loss: -9001.4053
Epoch 27/30
938/938 [==============================] - 158s 168ms/step - loss: -8981.5100 - val_loss: -9039.0400
Epoch 28/30
938/938 [==============================] - 159s 169ms/step - loss: -9011.4595 - val_loss: -8984.9727
Epoch 29/30
938/938 [==============================] - 158s 168ms/step - loss: -9019.1256 - val_loss: -9087.3760
Epoch 30/30
938/938 [==============================] - 158s 168ms/step - loss: -9036.3030 - val_loss: -9039.8252
<tensorflow.python.keras.callbacks.History at 0x7fa8001200d0>
```python
# Evaluate the model
realnvp_model.evaluate(test_ds)
```
157/157 [==============================] - 8s 49ms/step - loss: -9033.6807
-9033.6806640625
#### Generate some samples
```python
# Sample from the model
samples = realnvp_model.sample(8).numpy()
```
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:2273: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
warnings.warn('`layer.apply` is deprecated and '
```python
# Display the samples
n_img = 8
f, axs = plt.subplots(2, n_img // 2, figsize=(14, 7))
for k, image in enumerate(samples):
i = k % 2
j = k // 2
    axs[i, j].imshow(image)
axs[i, j].axis('off')
f.subplots_adjust(wspace=0.01, hspace=0.03)
```
Congratulations on completing this programming assignment! In the next week of the course we will look at the variational autoencoder.
# Modelproject - Cournot competition
## Introduction
In this project, we find the optimal production quantity for each of two firms in Cournot competition. We compare the situation with two identical firms to the situation with two non-identical firms.
### We apply following assumptions for the model:
* There are two firms in the economy. In the first part they are identical, in the second part they are non-identical
* The firms choose their output simultaneously ($Q_1$ and $Q_2$)
* The total quantity in the economy is $Q = Q_1 + Q_2$
* The market price is decreasing in the quantity produced: $P(Q) = a - bQ$
* For identical firms: both firms share the same marginal cost $0 \leq c<a$, $ATC = MC = c$
* For non-identical firms: the firms have different marginal costs ($c_1$, $c_2$)
At the end of the project, we compare the outcomes for a market with identical firms with those for a market with non-identical firms.
```python
# Importing packages
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import numpy as np
import sympy as sm
from IPython.display import display
from sympy import simplify
```
```python
# Set symbols according to the equation
q1, q2 = sm.symbols('Q_1 Q_2') # Quantity for each firm
c = sm.symbols('c') # Marginal cost for both firms in the identical-firms market
c1, c2 = sm.symbols('c_1 c_2') # Marginal cost for each firm on the non-identical market
a, b = sm.symbols('a b') # Constants
```
# Two Identical Firms
Define functions
```python
# We define p being the demand (price) on the market for two identical firms
def p(q1, q2, b, a):
return a-b*q1-b*q2
```
```python
# Define the total cost functions for firms 1 and firm 2.
def tc1(q1,c):
return c*q1
def tc2(q2,c):
return c*q2
```
```python
# Define profits for firm 1 and firm 2
def profit1(q1, p, tc1):
return q1*p(q1, q2, b, a)-tc1(q1,c)
def profit2(q2, p, tc2):
return q2*p(q1, q2, b, a)-tc2(q2,c)
```
To maximize profits for each firm, we differentiate the profit functions with respect to $Q_1$ and $Q_2$ respectively and find the first-order conditions.
```python
# Diff and define FOCs:
print('First order condition for firm 1:')
FOC1 = sm.diff(profit1(q1, p, tc1), q1)
display(FOC1)
print('First order condition for firm 2:')
FOC2 = sm.diff(profit2(q2, p, tc2), q2)
display(FOC2)
```
First order condition for firm 1:
$\displaystyle - 2 Q_{1} b - Q_{2} b + a - c$
First order condition for firm 2:
$\displaystyle - Q_{1} b - 2 Q_{2} b + a - c$
To determine how each firm reacts (in its choice of quantity) to the other firm's quantity, we define the reaction functions.
```python
#Define reaction functions
print('Reaction function R(q1):')
R1 = sm.solve(FOC1,q1)
q1_solve=(R1[0])
display(q1_solve)
print('Reaction function R(q2):')
R2 = sm.solve(FOC2,q2)
q2_solve=(R2[0])
display(q2_solve)
```
Reaction function R(q1):
$\displaystyle \frac{- Q_{2} b + a - c}{2 b}$
Reaction function R(q2):
$\displaystyle \frac{- Q_{1} b + a - c}{2 b}$
We solve for Q1, Q2 and find the total quantity of goods in the market:
```python
print('The equation with two unknowns (Q1 and Q2), we have to solve')
q_all = sm.Eq(q1_solve + q2_solve, 0)
display(q_all)
sol_dict = sm.solve((FOC1,FOC2), (q1, q2))
print('Optimal Quantity for firm 1:')
q1_optimal = sol_dict[q1]
display(q1_optimal)
print('Optimal Quantity for firm 2:')
q2_optimal = sol_dict[q2]
display(q2_optimal)
print('Total Quantity in the economy with two identical firms:')
q_total = q1_optimal + q2_optimal
display(q_total)
```
The equation with two unknowns (Q1 and Q2), we have to solve
$\displaystyle \frac{- Q_{1} b + a - c}{2 b} + \frac{- Q_{2} b + a - c}{2 b} = 0$
Optimal Quantity for firm 1:
$\displaystyle \frac{a - c}{3 b}$
Optimal Quantity for firm 2:
$\displaystyle \frac{a - c}{3 b}$
Total Quantity in the economy with two identical firms:
$\displaystyle \frac{2 \left(a - c\right)}{3 b}$
When the total quantity (Q) is found, we can insert it into the inverse demand function to obtain the equilibrium price.
```python
print('Optimal demand (price) in Cournot-competition for two identical firms:')
p_opt = a-b*q_total
display(p_opt)
```
Optimal demand (price) in Cournot-competition for two identical firms:
$\displaystyle \frac{a}{3} + \frac{2 c}{3}$
We find the profit for each firm and total profit in the economy by inserting optimal price, marginal cost and optimal quantity in the profit function
```python
print('Profit for firm 1:')
profit_1 = (p_opt-c)*q1_optimal
display(sm.simplify(profit_1))
print('Profit for firm 2:')
profit_2 = (p_opt-c)*q2_optimal
display(sm.simplify(profit_2))
print('Total profit in the economy:')
profit_total = profit_1 + profit_2
display(sm.simplify(profit_total))
```
Profit for firm 1:
$\displaystyle \frac{\left(a - c\right)^{2}}{9 b}$
Profit for firm 2:
$\displaystyle \frac{\left(a - c\right)^{2}}{9 b}$
Total profit in the economy:
$\displaystyle \frac{2 \left(a - c\right)^{2}}{9 b}$
Not surprisingly, the profits are equal for the two identical firms. Now, what happens when the firms are not identical?
# Two non-identical Firms
Now, the two firms do not share the same level of costs ($c$). For this reason, we use $c_1$ and $c_2$ for firm 1 and firm 2, respectively.
* Firm 1: $ATC1 = MC1 = c_1$
* Firm 2: $ATC2 = MC2 = c_2$
The demand function (P) is the same as before, and the reaction functions $R(Q_i)$ have the same form as with two identical firms (with $c$ replaced by $c_1$ and $c_2$).
Therefore, we apply the same procedure.
```python
# Define the total cost functions
def tc1_non(q1,c1):
return c1*q1
def tc2_non(q2,c2):
    return c2*q2
```
```python
# Define the profit functions
def profit1_non(q1, p, tc1):
return q1*p(q1, q2, b, a)-tc1(q1,c1)
def profit2_non(q2, p, tc2):
return q2*p(q1, q2, b, a)-tc2(q2,c2)
```
```python
# To determine the optimal quantity produced for each firm, we first find the first-order-conditions (FOC)
print('First order condition for firm_non 1:')
FOC1_non = sm.diff(profit1_non(q1, p, tc1), q1)
display(FOC1_non)
print('First order condition for firm_non 2:')
FOC2_non = sm.diff(profit2_non(q2, p, tc2), q2)
display(FOC2_non)
```
First order condition for firm_non 1:
$\displaystyle - 2 Q_{1} b - Q_{2} b + a - c_{1}$
First order condition for firm_non 2:
$\displaystyle - Q_{1} b - 2 Q_{2} b + a - c_{2}$
```python
# To determine how each firm reacts (the choice of quantity) to the opposite firm's optimal quantity, we define the reactionfunctions
print('Reaction function R(q1) non-identical firms:')
R1_non = sm.solve(FOC1_non,q1)
q1_solve_non=(R1_non[0])
display(q1_solve_non)
print('Reaction function R(q2) non-identical firms:')
R2_non = sm.solve(FOC2_non,q2)
q2_solve_non=(R2_non[0])
display(q2_solve_non)
# As there are two firms on the market, we set these reaction functions equal to each other
print('The equation for non-identical firms with two unknowns (Q1 and Q2), we have to solve:')
q_all_non = sm.Eq((q1_solve_non + q2_solve_non), 0)
display(sm.simplify(q_all_non))
```
Reaction function R(q1) non-identical firms:
$\displaystyle \frac{- Q_{2} b + a - c_{1}}{2 b}$
Reaction function R(q2) non-identical firms:
$\displaystyle \frac{- Q_{1} b + a - c_{2}}{2 b}$
The equation for non-identical firms with two unknowns (Q1 and Q2), we have to solve:
$\displaystyle \frac{- Q_{1} b - Q_{2} b + 2 a - c_{1} - c_{2}}{2 b} = 0$
Now we solve for Q1 and Q2 and find the total quantity of goods in the market:
```python
# Define a dictionary to solve for optimal quantities
sol_dict_non = sm.solve((FOC1_non,FOC2_non), (q1, q2))
# Optimal quantity for the firms
print('Optimal Quantity for firm 1:')
q1_optimal_non = sol_dict_non[q1]
display(q1_optimal_non)
print('Optimal Quantity for firm 2:')
q2_optimal_non = sol_dict_non[q2]
display(q2_optimal_non)
print('Total Quantity in the economy:')
q_total_non = (q1_optimal_non + q2_optimal_non)
display(sm.simplify(q_total_non))
```
Optimal Quantity for firm 1:
$\displaystyle \frac{a - 2 c_{1} + c_{2}}{3 b}$
Optimal Quantity for firm 2:
$\displaystyle \frac{a + c_{1} - 2 c_{2}}{3 b}$
Total Quantity in the economy:
$\displaystyle \frac{2 a - c_{1} - c_{2}}{3 b}$
When the total quantity (Q) is found, we can insert it into the inverse demand function to obtain the equilibrium price.
```python
print('Optimal demand (price) in Cournot-competition for two non-identical firms:')
p_opt_non = a-b*q_total_non
sm.simplify(p_opt_non)
```
Optimal demand (price) in Cournot-competition for two non-identical firms:
$\displaystyle \frac{a}{3} + \frac{c_{1}}{3} + \frac{c_{2}}{3}$
We find the profit for each non-identical firm and total profit in the economy by inserting optimal price, marginal cost and optimal quantity in the profit function
```python
# Profit for each firm is given by
print('Profit for firm 1:')
profit_1_non = (p_opt_non*q1_optimal_non)-c1*q1_optimal_non
display(simplify(profit_1_non))
print('Profit for firm 2:')
profit_2_non = (p_opt_non*q2_optimal_non)-c2*q2_optimal_non
display(sm.simplify(profit_2_non))
# Total profit in the economy
print('Total profit in the economy for two non-identical firms:')
profit_total_non = profit_1_non + profit_2_non
display(sm.simplify(profit_total_non))
```
Profit for firm 1:
$\displaystyle \frac{\left(a - 2 c_{1} + c_{2}\right)^{2}}{9 b}$
Profit for firm 2:
$\displaystyle \frac{\left(a + c_{1} - 2 c_{2}\right)^{2}}{9 b}$
Total profit in the economy for two non-identical firms:
$\displaystyle \frac{2 a^{2} - 2 a c_{1} - 2 a c_{2} + 5 c_{1}^{2} - 8 c_{1} c_{2} + 5 c_{2}^{2}}{9 b}$
## Compare the two situations with interactive sliders
Here, you can play around with the sliders to observe the impact of the different parameters in the model.
```python
# Equilibrium quantities and profits for 2 identical firms
def f(a, b, c):
display(f'Optimal quantity for identical firms is {(a-c)/(3*b)}')
display(f'Profit for identical firms is {(a-c)**2/(9*b)}')
interact(f, a=(0.0,10.0), b=(0.0,1.0), c=(0.0,10.0));
```
interactive(children=(FloatSlider(value=5.0, description='a', max=10.0), FloatSlider(value=0.5, description='b…
```python
def f(a, b, c1, c2):
display(f'Optimal quantity non-identical firm 1 is {(a-2*c1+c2)/(3*b)}')
display(f'Optimal quantity non-identical firm 2 is {(a-2*c2+c1)/(3*b)}')
display(f'The profit given parameter values for non-identical firm 1 is {(((a-2*c1+c2))**2)/(9*b)}')
display(f'The profit given parameter values for non-identical firm 2 is {(((a-2*c2+c1))**2)/(9*b)}')
interact(f, a=(0.0,10.0), b=(0.0,1.0), c1=(0.0,5.0), c2=(0.0,5.0));
```
interactive(children=(FloatSlider(value=5.0, description='a', max=10.0), FloatSlider(value=0.5, description='b…
# Conclusion
### We have created an algorithm that can solve Cournot-competition problems and created sliders that show the results for different parameter values.
### The equilibrium price is increasing in $a$ and in marginal costs, while the quantities produced are decreasing in $b$.
### Profits of identical firms are negatively affected when their common marginal cost increases.
### Profits of non-identical firms are negatively affected if their own production costs increase, but positively affected if the competitor's production costs increase.
____________________________________
## Comparing the market for Identical firms vs non-identical firms
### Identical firms
* Total quantity produced: $Q = \frac{2(a-c)}{3b}$
* Price: $P = \frac{a+2c}{3}$
* Total profit: $\Pi = \frac{2(a-c)^2}{9b}$
* Profit for each firm: $\Pi_i = \frac{(a-c)^2}{9b}$
### Non-identical firms
* Total quantity produced: $Q = \frac{2a-c_1-c_2}{3b}$
* Quantity firm 1: $Q_1 = \frac{a-2c_1+c_2}{3b}$
* Quantity firm 2: $Q_2 = \frac{a+c_1-2c_2}{3b}$
* Price: $P = \frac{a+c_1+c_2}{3}$
* Total profit: $\Pi = \frac{(a-2c_1+c_2)^2 + (a+c_1-2c_2)^2}{9b}$
* Profit firm 1: $\Pi_1 = \frac{(a-2c_1+c_2)^2}{9b}$
* Profit firm 2: $\Pi_2 = \frac{(a+c_1-2c_2)^2}{9b}$
The main difference between the two markets comes from the level of the marginal costs.
For $2c > c_1+c_2$ (checked symbolically below):
* Total quantity produced will be lower in the market with identical firms, and higher with non-identical firms.
* The price of the product will be higher in the market with identical firms, and lower with non-identical firms.
* Ceteris paribus, profit will be lower for identical firms than for non-identical firms.
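As a quick symbolic check of the quantity comparison above (a sketch using the symbols already defined in this notebook):

```python
# Sketch: symbolic check of the quantity comparison (uses the symbols defined above)
q_identical = 2*(a - c) / (3*b)
q_nonidentical = (2*a - c1 - c2) / (3*b)
difference = sm.simplify(q_nonidentical - q_identical)
display(difference)   # = (2*c - c1 - c2)/(3*b): positive exactly when 2c > c1 + c2
```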
```python
%matplotlib inline
import warnings
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import gridspec
warnings.filterwarnings('ignore')
```
# Introduction to Pulsar Timing
[](http://mybinder.org/repo/matteobachetti/timing-lectures)
(These slides are obtained from the IPython notebook that can be found [here](https://github.com/matteobachetti/timing-lectures).)
## Contents
* Finding pulsars: the buried clock
* Frequency analysis: the Fourier Transform and the Power Density Spectrum
* Refine the search: Folding search (+ $Z^2_n$, $H$-test, ...)
* Getting pulse arrival times
# Finding pulsars: the buried clock
* Pulsars are stable rotators: very predictable "clocks"
* Often the signal is buried in noise (below: a sinusoidal pulse with a 0.853-s period, buried in noise ~30 times stronger)
```python
def pulsar(time, period=1):
return (1 + np.sin(2 * np.pi / period * time)) / 2.
time_bin = 0.009
# --- Parameters ----
period = 0.8532
pulsar_amp = 0.5
pulsar_stdev = 0.05
noise_amp = 15
noise_stdev = 1.5
# --------------------
# refer to the center of the time bin
time = np.arange(0, 100, time_bin) + time_bin / 2
signal = np.random.normal(pulsar_amp * pulsar(time, period), pulsar_stdev)
noise = np.random.normal(noise_amp, noise_stdev, len(time))
total = signal + noise
# PLOTTING -------------------------
plt.plot(time, signal, 'r-', label='signal')
plt.plot(time, noise, 'b-', label='noise')
plt.plot(time, total, 'k-', label='total')
plt.xlim(0, 30)
plt.xlabel('Time')
plt.ylabel('Flux')
a = plt.legend()
# -----------------------------------
```
# Frequency analysis: the Fourier Transform
Through the Fourier transform, we can decompose a function of time into a series of functions of frequency:
\begin{equation}
\mathcal{F}(\omega) = \int^{\infty}_{-\infty} e^{-i\omega t} f(t)\, dt
\end{equation}
or, more appropriate to our case, in the discrete form, we can decompose a time series into a frequency series:
\begin{equation}
F_k = \sum^{N-1}_{n=0} x_n\, e^{-2\pi i k n/N}
\end{equation}
where $x_n$ are the values of the time series. $F_k$ is, in general, a **complex** quantity.
The Fourier transform of a sinusoid will give a high (in absolute value) $F_k$ at the frequency of the sinusoid. Other periodic functions will produce a high contribution at the fundamental frequency plus one or more multiples of the fundamental, called *harmonics*.
## Our example
Let's take the Fourier transform of the signal we simulated above (only taking *positive* frequencies)
```python
ft = np.fft.fft(total)
freqs = np.fft.fftfreq(len(total), time[1] - time[0])
good = freqs >0
freqs = freqs[good]
ft = ft[good]
# PLOTTING ---------------------------
plt.plot(freqs, ft.real, 'r-', label='real')
plt.plot(freqs, ft.imag, 'b-', label='imag')
plt.xlim([-0.1, 10])
a = plt.legend()
_ = plt.xlabel('Frequency (Hz)')
_ = plt.ylabel('FT')
# -------------------------------------
```
Note that the imaginary part and real part of the Fourier transform have different contributions at the pulsar frequency (1/0.85 s ~ 1.2 Hz). This is because they depend strongly on the phase of the signal [Exercise: **why?**].
## Our example - 2
If we applied a shift of 240 ms (just any value) to the signal:
```python
shift = 0.240
signal_shift = np.roll(total, int(shift / time_bin))
ft_shift = np.fft.fft(signal_shift)
freqs_shift = np.fft.fftfreq(len(total), time[1] - time[0])
good = freqs_shift >0
freqs_shift = freqs_shift[good]
ft_shift = ft_shift[good]
# PLOTTING -------------------------------------
plt.plot(freqs_shift, ft_shift.real, 'r-', label='real')
plt.plot(freqs_shift, ft_shift.imag, 'b-', label='imag')
plt.xlim([-0.1, 10])
a = plt.legend()
_ = plt.xlabel('Frequency (Hz)')
_ = plt.ylabel('FT')
# ----------------------------------------------
```
we would clearly have non-zero values at ~0.85 Hz both in the real and the imaginary parts.
# The Power Density Spectrum
To solve these issues with real and imaginary parts, we can instead take the *squared modulus* of the Fourier transform. This is called **Periodogram**, but most people use the word **Power Density Spectrum** (a periodogram is actually a single realization of the underlying PDS).
\begin{equation}
\mathcal{P}(\omega) = \mathcal{F}(\omega) \cdot \mathcal{F}^*(\omega)
\end{equation}
This function is positive-definite and in our case results in a clear peak at the pulse frequency, *consistent* between original and shifted signal:
```python
pds = np.abs(ft*ft.conj())
pds_shift = np.abs(ft_shift*ft_shift.conj())
fmax = freqs[np.argmax(pds)]
pmax = 1 / fmax
# PLOTTING ---------------------------------
plt.plot(freqs, pds, 'r-', label='PDS of signal')
plt.plot(freqs_shift, pds_shift, 'b-', label='PDS of shifted signal')
a = plt.legend()
a = plt.xlabel('Frequency (Hz)')
a = plt.ylabel('PDS')
plt.xlim([-0.1, 3.5])
_ = plt.gca().annotate('max = {:.2f} s ({:.2f} Hz)'.format(pmax, fmax), xy=(2., max(pds) / 2))
# -------------------------------------------
```
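This consistency follows from the Fourier shift theorem: delaying the signal by $\tau$ only changes the phase of its transform,

$$f(t - \tau) \;\longrightarrow\; e^{-i\omega\tau}\,\mathcal{F}(\omega),$$

so the squared modulus $|e^{-i\omega\tau}\mathcal{F}(\omega)|^2 = |\mathcal{F}(\omega)|^2$ is unaffected by the shift.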
## The Power Density Spectrum -2
The PDS of a generic non-sinusoidal pulse profile will, in general, contain more than one harmonic, with the fundamental not always predominant.
```python
def gaussian_periodic(x, x0, amp, width):
'''Approximates a Gaussian periodic function by summing the contributions in the phase
range 0--1 with those in the phase range -1--0 and 1--2'''
phase = x - np.floor(x)
lc = np.zeros_like(x)
for shift in [-1, 0, 1]:
lc += amp * np.exp(-(phase + shift - x0)**2 / width ** 2)
return lc
def generate_profile(time, period):
'''Simulate a given profile with 1-3 Gaussian components'''
total_phase = time / period
ngauss = np.random.randint(1, 3)
lc = np.zeros_like(total_phase)
for i in range(ngauss):
ph0 = np.random.uniform(0, 1)
amp = np.random.uniform(0.1, 1)
width = np.random.uniform(0.01, 0.2)
lc += gaussian_periodic(total_phase, ph0, amp, width)
return lc
# PLOTTING -------------------------
ncols = 2
nrows = 3
fig = plt.figure(figsize=(12, 8))
fig.suptitle('Profiles and their PDSs')
gs = gridspec.GridSpec(nrows, ncols)
for c in range(ncols):
for r in range(nrows):
# ----------------------------------
noise = np.random.normal(noise_amp, noise_stdev, len(time))
lc = generate_profile(time, period)
lc_noisy = np.random.normal(2 * lc, 0.2) + noise
lcft = np.fft.fft(lc_noisy)
lcfreq = np.fft.fftfreq(len(lc_noisy), time[1] - time[0])
lcpds = np.absolute(lcft) ** 2
# PLOTTING -------------------------
gs_2 = gridspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[r, c])
ax = plt.subplot(gs_2[0])
good = time < period * 3
ax.plot(time[good] / period, lc[good])
ax.set_xlim([0,3])
ax.set_xlabel('Phase')
ax = plt.subplot(gs_2[1])
ax.plot(lcfreq[lcfreq > 0], lcpds[lcfreq > 0] / max(lcpds[lcfreq > 0]))
ax.set_xlabel('Frequency')
ax.set_xlim([0, 10])
# ----------------------------------
```
## Pulsation?
Here are some examples of power density spectra. In some cases, it might look like a pulsation is present in the data. How do we assess this?
```python
# PLOTTING -------------------------
ncols = 2
nrows = 3
fig = plt.figure(figsize=(12, 8))
fig.suptitle('Noise-only light curves and their PDSs')
gs = gridspec.GridSpec(nrows, ncols)
for c in range(ncols):
for r in range(nrows):
# ----------------------------------
noise = np.random.normal(noise_amp, noise_stdev, len(time))
lc = np.zeros_like(time)
lc_noisy = np.random.normal(2 * lc, 0.2) + noise
lcft = np.fft.fft(lc_noisy)
lcfreq = np.fft.fftfreq(len(lc_noisy), time[1] - time[0])
lcpds = np.absolute(lcft) ** 2
# PLOTTING -------------------------
gs_2 = gridspec.GridSpecFromSubplotSpec(1, 2, subplot_spec=gs[r, c])
ax = plt.subplot(gs_2[0])
good = time < period * 3
ax.plot(time[good] / period, lc[good])
ax.set_xlim([0,3])
ax.set_xlabel('Phase')
ax = plt.subplot(gs_2[1])
ax.plot(lcfreq[lcfreq > 0], lcpds[lcfreq > 0] / max(lcpds[lcfreq > 0]))
ax.set_xlabel('Frequency')
ax.set_xlim([0, 10])
# ----------------------------------
```
# Epoch folding
Epoch folding consists of summing equal, one pulse period-long, chunks of data. If the period is just right, the crests will sum up in phase, gaining signal over noise [Exercise: **how much will we gain** by summing up in phase $N$ chunks of data at the right period?].
```python
def epoch_folding(time, signal, period, nperiods=3, nbin=16):
# The phase of the pulse is always between 0 and 1.
phase = time / period
phase -= phase.astype(int)
# First histogram: divide phase range in nbin bins, and count how many signal bins
# fall in each histogram bin. The sum is weighted by the value of the signal at
# each phase
prof_raw, bins = np.histogram(
phase, bins=np.linspace(0, 1, nbin + 1),
weights=signal)
# "Exposure": how many signal bins have been summed in each histogram bin
expo, bins = np.histogram(phase, bins=np.linspace(0, 1, nbin + 1))
# ---- Evaluate errors -------
prof_sq, bins = np.histogram(
phase, bins=np.linspace(0, 1, nbin + 1),
weights=signal ** 2)
# Variance of histogram bin: "Mean of squares minus square of mean" X N
hist_var = (prof_sq / expo - (prof_raw / expo) ** 2) * expo
# Then, take square root -> Stdev, then normalize / N.
prof_err = np.sqrt(hist_var)
#-----------------------------
# Normalize by exposure
prof = prof_raw / expo
prof_err = prof_err / expo
# histogram returns all bin edges, including last one. Here we take the
# center of each bin.
phase_bins = (bins[1:] + bins[:-1]) / 2
# ---- Return the same pattern 'nperiods' times, for visual purposes -----
final_prof = np.array([])
final_phase = np.array([])
final_prof_err = np.array([])
for n in range(nperiods):
final_prof = np.append(final_prof, prof)
final_phase = np.append(final_phase, phase_bins + n)
final_prof_err = np.append(final_prof_err, prof_err)
# ---------------------------
return final_phase, final_prof, final_prof_err
phase, profile, profile_err = epoch_folding(time, total, period)
phase_shift, profile_shift, profile_shift_err = epoch_folding(time, signal_shift, period)
# PLOTTING -------------------------------------------------------------
plt.errorbar(
phase, profile, yerr=profile_err, drawstyle='steps-mid',
label='Signal')
plt.errorbar(
phase_shift, profile_shift, yerr=profile_shift_err,
drawstyle='steps-mid', label='Shifted signal')
_ = plt.legend()
_ = plt.xlabel('Phase')
_ = plt.ylabel('Counts/bin')
# -------------------------------------------------------------------
```
# Epoch folding search
Now, let's run epoch folding at a number of trial periods around the pulse period. To evaluate how much a given profile "looks pulsar-y", we can use the $\chi^2$ statistics, as follows:
\begin{equation}
\mathcal{S} = \sum_{i=1}^{N} \frac{(p_i - \bar{p})^2}{\sigma_{p_i}^2}
\end{equation}
for each profile obtained for each trial value of the pulse frequency and look for peaks$^1$. [Exercise: do you know what statistics this is? And why does that statistics work for our case? Exercise-2: Note the very large number of trials. Can we optimize the search so that we use less trials without losing sensitivity?]
$^1$ 1. Leahy, D. A. et al. On searches for pulsed emission with application to four globular cluster X-ray sources - NGC 1851, 6441, 6624, and 6712. _ApJ_ **266**, 160 (1983).
```python
def pulse_profile_stat(profile, profile_err):
return np.sum(
(profile - np.mean(profile)) ** 2 / profile_err ** 2)
trial_periods = np.arange(0.7, 1.0, 0.0002)
stats = np.zeros_like(trial_periods)
for i, p in enumerate(trial_periods):
phase, profile, profile_err = epoch_folding(time, total, p)
stats[i] = pulse_profile_stat(profile, profile_err)
bestp = trial_periods[np.argmax(stats)]
phase_search, profile_search, profile_search_err = \
epoch_folding(time, total, bestp)
phase, profile, profile_err = epoch_folding(time, total, period)
# PLOTTING -------------------------------
fig = plt.figure(figsize=(10, 3))
gs = gridspec.GridSpec(1, 2)
ax = plt.subplot(gs[0])
ax.plot(trial_periods, stats)
ax.set_xlim([0.7, 1])
ax.set_xlabel('Period (s)')
ax.set_ylabel('$\chi^2$')
ax.axvline(period, color='r', label="True value")
_ = ax.legend()
ax.annotate('max = {:.5f} s'.format(bestp), xy=(.9, max(stats) / 2))
ax2 = plt.subplot(gs[1])
ax2.errorbar(phase_search, profile_search, yerr=profile_search_err,
drawstyle='steps-mid', label='Search')
ax2.errorbar(phase, profile, yerr=profile_err, drawstyle='steps-mid',
label='True period')
ax2.set_xlabel('Phase')
ax2.set_ylabel('Counts/bin')
_ = ax2.legend()
# ------------------------------------------
```
# Times of arrival (TOA)
To calculate the time of arrival of the pulses, we need to:
* Choose what **part of the pulse** is the reference (e.g., the maximum). Once we know that, if $\phi_{max}$ is the phase of the maximum of the pulse, $t_{start}$ the time at the start of the folded light curve, and $p$ is the folding period,
$TOA = t_{start} + \phi_{max} \cdot p$
* Choose a **method** to calculate the TOA:
+ The maximum bin?
+ The phase of a sinusoidal fit?
+ The phase of a more complicated fit?
Hereafter, we are going to use the maximum of the pulse as a reference, and we will calculate the TOA with the three methods above.
## TOA from the maximum bin
**Advantage**
* Very fast and easy to implement
**Disadvantages**
* Very rough (maximum precision, the width of the bin)
* Very uncertain (if statistics is low and/or the pulse is broad, many close-by bins can randomly be the maximum)
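As a minimal sketch of the bookkeeping described above (reusing `bestp`, `phase_search` and `profile_search` from the epoch folding search, and taking the first time bin of the observation as the reference $t_{start}$, which is an arbitrary choice for illustration):

```python
# Sketch: TOA from the maximum bin of the folded profile found in the search above
t_start = time[0]                                          # reference: first time bin
phase_max = phase_search[np.argmax(profile_search)] % 1    # phase of the maximum bin
toa = t_start + phase_max * bestp                          # TOA = t_start + phi_max * p
print('TOA = {:.4f} s'.format(toa))
```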
```python
phase_bin = 1 / 32.
ph = np.arange(phase_bin / 2, phase_bin / 2 + 1, phase_bin)
shape = np.sin(2 * np.pi * ph) + 2
pr_1 = np.random.poisson(shape * 10000) / 10000
pr_2 = np.random.poisson(shape * 10) / 10
# PLOTTING -----------------------------
plt.plot(ph, shape, label='Theoretical shape', color='k')
plt.plot(
ph, pr_1, drawstyle='steps-mid', color='r',
label='Shape - good stat')
plt.plot(
ph, pr_2, drawstyle='steps-mid', color='b',
label='Shape - bad stat')
plt.axvline(0.25, ls=':', color='k', lw=2, label='Real maximum')
plt.axvline(
ph[np.argmax(pr_1)], ls='--', color='r', lw=2,
label='Maximum - good stat')
plt.axvline(
ph[np.argmax(pr_2)], ls='--', color='b', lw=2,
label='Maximum - bad stat')
_ = plt.legend()
# --------------------------------------
```
## TOA from single sinusoidal fit
**Advantage**
* Relatively easy task (fitting with a sinusoid)
* Errors are well determined provided that the pulse is broad
**Disadvantages**
* If the profile is not sinusoidal, the phase might not be well determined
Below, the phase of the pulse is always 0.25
```python
def sinusoid(phase, phase0, amplitude, offset):
    return offset + amplitude * np.cos(2 * np.pi * (phase - phase0))
from scipy.optimize import curve_fit
# PLOTTING ------------------
fig = plt.figure(figsize=(12, 3))
gs = gridspec.GridSpec(1, 4)
ax1 = plt.subplot(gs[0])
ax1.set_title('Theoretical')
ax2 = plt.subplot(gs[1])
ax2.set_title('Sinusoidal, good stat')
ax3 = plt.subplot(gs[2])
ax3.set_title('Sinusoidal, noisy')
ax4 = plt.subplot(gs[3])
ax4.set_title('Complicated profile')
# ---------------------------
# Fit sinusoid to theoretical shape
par, pcov = curve_fit(sinusoid, ph, shape)
# PLOTTING -----------------------------------------------
ax1.plot(ph, sinusoid(ph, *par))
ax1.plot(ph, shape)
par[0] -= np.floor(par[0])
ax1.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax1.set_ylim([0, 4])
# Fit to good-stat line
# ---------------------------------------------------------
par, pcov = curve_fit(sinusoid, ph, pr_1)
# PLOTTING -----------------------------------------------
ax2.plot(ph, sinusoid(ph, *par))
ax2.plot(ph, pr_1)
par[0] -= np.floor(par[0])
ax2.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax2.set_ylim([0, 4])
# Fit to bad-stat line
# ---------------------------------------------------------
par, pcov = curve_fit(sinusoid, ph, pr_2)
# PLOTTING -----------------------------------------------
ax3.plot(ph, sinusoid(ph, *par))
ax3.plot(ph, pr_2, drawstyle='steps-mid')
par[0] -= np.floor(par[0])
ax3.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax3.set_ylim([0, 4])
# Now try with a complicated profile (a double Gaussian)
# ---------------------------------------------------------
pr_3_clean = 0.3 + np.exp(- (ph - 0.25) ** 2 / 0.001) + 0.5 * np.exp(- (ph - 0.75) ** 2 / 0.01)
pr_3 = np.random.poisson(pr_3_clean * 100) / 50
# Let us normalize the template with the same factor (100 / 50 = 2) as the randomized one. It will be helpful later
pr_3_clean *= 2
par, pcov = curve_fit(sinusoid, ph, pr_3, maxfev=10000)
# PLOTTING -----------------------------------------------
ax4.plot(ph, sinusoid(ph, *par), label='Fit')
ax4.plot(ph, pr_3, drawstyle='steps-mid', label='Noisy profile')
ax4.plot(ph, pr_3_clean, label='Real profile')
par[0] -= np.floor(par[0])
ax4.annotate('phase = {:.2f}'.format(par[0]), xy=(.5, .3))
ax4.set_ylim([0, 4])
_ = ax4.legend()
# ---------------------------------------------------------
```
## TOA from non-sinusoidal fit: multiple harmonic fitting
**Multiple harmonic fitting**$^1$ (the profile is described by a sum of sinusoids) is just an extension of the single-harmonic fit, obtained by adding sinusoidal components at integer multiples of the fundamental frequency; a rough sketch is given at the end of this section.
**Advantages**
* Still conceptually easy, but more robust and reliable
**Disadvantages**
* The phase is not determined directly by the fit (in general, it is not the phase of any single sinusoid [Exercise: why?]) and needs to be determined from the maximum of the fitted profile. Error estimates are not straightforward to obtain.
$^1$e.g. Riggio, A. et al. Timing of the accreting millisecond pulsar IGR J17511-3057. _A&A_ **526**, 95 (2011).
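A rough sketch of such a fit (using the `ph` and `pr_3` arrays generated in the cell above; the two-harmonic model and the starting values are our own illustrative choices, not taken from the reference):
```python
import numpy as np
from scipy.optimize import curve_fit

def two_harmonics(phase, a1, ph1, a2, ph2, offset):
    """Sum of the fundamental and the second harmonic."""
    return (offset
            + a1 * np.cos(2 * np.pi * (phase - ph1))
            + a2 * np.cos(4 * np.pi * (phase - ph2)))

par, pcov = curve_fit(two_harmonics, ph, pr_3,
                      p0=[1, 0.25, 0.5, 0.25, 1], maxfev=10000)

# The reference phase is taken from the maximum of the fitted model,
# not from the phase of any single harmonic.
fine_phase = np.linspace(0, 1, 1000)
phase_max = fine_phase[np.argmax(two_harmonics(fine_phase, *par))]
print("Phase of maximum: {:.3f}".format(phase_max))
```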
## TOA from non-sinusoidal fit: Template pulse shapes
* **Cross-correlation** of template pulse shape
* **Fourier-domain fitting (FFTFIT)**$^1$ -> the usual choice. Consists of taking the Fourier transform of the profile $\mathcal{P}$ and of the template $\mathcal{T}$ and minimizing the following objective function (similar to $\chi^2$):
\begin{equation}
F = \sum_k \frac{|\mathcal{P}_k - a\mathcal{T}_k e^{-2\pi i k\phi}|^2}{\sigma^2}
\end{equation}
**Advantages**
* Much more robust and reliable
* Errors well determined whatever the pulse shape
**Disadvantages**
* Relatively trickier to implement
* Needs good template pulse profile
$^1$Taylor, J. H. Pulsar Timing and Relativistic Gravity. _Philosophical Transactions: Physical Sciences and Engineering_ **341**, 117–134 (1992).
```python
def fftfit_fun(profile, template, amplitude, phase):
'''Objective function to be minimized - \'a la Taylor (1992).'''
prof_ft = np.fft.fft(profile)
temp_ft = np.fft.fft(template)
freq = np.fft.fftfreq(len(profile))
good = freq > 0
idx = np.arange(0, prof_ft.size, dtype=int)
sigma = np.std(prof_ft[good])
return np.sum(np.absolute(prof_ft - temp_ft*amplitude*np.exp(-2*np.pi*1.0j*idx*phase))**2 / sigma)
def obj_fun(pars, data):
'''Wrap parameters and input data up in order to be used with minimization
algorithms.'''
amplitude, phase = pars
profile, template = data
return fftfit_fun(profile, template, amplitude, phase)
# Produce 16 realizations of pr_3, at different amplitudes and phases, and reconstruct the phase
from scipy.optimize import fmin, basinhopping
# PLOTTING --------------------------
fig = plt.figure(figsize=(10, 10))
fig.suptitle('FFTfit results')
gs = gridspec.GridSpec(4, 4)
# -----------------------------------
amp0 = 1
phase0 = 0
p0 = [amp0, phase0]
for i in range(16):
# PLOTTING --------------------------
col = i % 4
row = i // 4
# -----------------------------------
factor = 10 ** np.random.uniform(1, 3)
pr_orig = np.random.poisson(pr_3_clean * factor)
roll_len = np.random.randint(0, len(pr_orig) - 1)
pr = np.roll(pr_orig, roll_len)
# # Using generic minimization algorithms is faster, but local minima can be a problem
# res = fmin(obj_fun, p0, args=([pr, pr_3_clean],), disp=False, full_output=True)
# amplitude_res, phase_res = res[0]
# The basinhopping algorithm is very slow but very effective in finding
# the global minimum of functions with local minima.
res = basinhopping(obj_fun, p0, minimizer_kwargs={'args':([pr, pr_3_clean],)})
amplitude_res, phase_res = res.x
phase_res -= np.floor(phase_res)
newphase = ph + phase_res
newphase -= np.floor(newphase)
# Sort arguments of phase so that they are ordered in plot
# (avoids ugly lines traversing the plot)
order = np.argsort(newphase)
# PLOTTING --------------------------
ax = plt.subplot(gs[row, col])
ax.plot(ph, pr, 'k-')
ax.plot(newphase[order], amplitude_res * pr_3_clean[order], 'r-')
# -------------------------------------
```
## The Z_n search
$Z_n^2$ is another widely used statistic for high-energy pulsar searches.
It measures how well the distribution of photon phases is described by a combination of $n$ sinusoidal harmonics.
The definition of this statistical indicator is (Buccheri+1983):
$$
Z^2_n = \dfrac{2}{N} \sum_{k=1}^n \left[{\left(\sum_{j=1}^N \cos k \phi_j\right)}^2 + {\left(\sum_{j=1}^N \sin k \phi_j\right)}^2\right] \; ,
$$
The formula can be slightly modified for binned data, by introducing a `weight` quantity giving the number of photons (or another measure of flux) in a given bin (Huppenkothen+2019):
$$
Z^2_n \approx \dfrac{2}{\sum_j{w_j}} \sum_{k=1}^n \left[{\left(\sum_{j=1}^m w_j \cos k \phi_j\right)}^2 + {\left(\sum_{j=1}^m w_j \sin k \phi_j\right)}^2\right]
$$
```python
def z_n(time, p, n=2, weight=1):
    '''Z^2_n statistic, a` la Buccheri et al. 1983, A&A, 128, 245, eq. 2.
    Parameters
    ----------
    time : array of floats
        The times of the events (or of the light-curve bins)
    p : float
        The trial folding period
    n : int, default 2
        Number of harmonics, including the fundamental
    Other Parameters
    ----------------
    weight : float or array of floats, default 1
        A weight for each time (e.g. the counts in that bin);
        use 1 for unbinned event data.
    Returns
    -------
    z2_n : float
        The Z^2_n statistic of the events.
    '''
phase = time / p
nbin = len(phase)
if nbin == 0:
return 0
weight = np.asarray(weight)
if weight.size == 1:
total_weight = nbin * weight
else:
total_weight = np.sum(weight)
phase = phase * 2 * np.pi
return 2 / total_weight * \
np.sum([np.sum(np.cos(k * phase) * weight) ** 2 +
np.sum(np.sin(k * phase) * weight) ** 2
for k in range(1, n + 1)])
trial_periods = np.arange(0.7, 1.0, 0.0002)
stats = np.zeros_like(trial_periods)
for i, p in enumerate(trial_periods):
stats[i] = z_n(time, p, weight=total)
bestp = trial_periods[np.argmax(stats)]
phase_search, profile_search, profile_search_err = \
epoch_folding(time, total, bestp)
phase, profile, profile_err = epoch_folding(time, total, period)
# PLOTTING -------------------------------
fig = plt.figure(figsize=(10, 3))
gs = gridspec.GridSpec(1, 2)
ax = plt.subplot(gs[0])
ax.plot(trial_periods, stats)
ax.set_xlim([0.7, 1])
ax.set_xlabel('Period (s)')
ax.set_ylabel(r'$Z^2_n$')
ax.axvline(period, color='r', label="True value")
_ = ax.legend()
ax.annotate('max = {:.5f} s'.format(bestp), xy=(.9, max(stats) / 2))
ax2 = plt.subplot(gs[1])
ax2.errorbar(phase_search, profile_search, yerr=profile_search_err,
drawstyle='steps-mid', label='Search')
ax2.errorbar(phase, profile, yerr=profile_err, drawstyle='steps-mid',
label='True period')
ax2.set_xlabel('Phase')
ax2.set_ylabel('Counts/bin')
_ = ax2.legend()
# ------------------------------------------
```
## Pulsation searches with HENDRICS
1. To read a fits file into an event list file:
```
$ HENreadevents file.evt.gz
```
a file called something like `file_mission_instr_ev.nc` appears
2. To calculate the light curve (binning the events) with a sample time of 1 s:
```
$ HENlcurve file_mission_instr_ev.nc -b 1
```
3. To calculate the averaged power density spectrum cutting the data by chunks of 128 s:
```
$ HENfspec file_mission_instr_lc.nc -f 128
```
4. To watch the power density spectrum:
```
$ HENplot file_mission_instr_pds.nc
```
5. To run a $Z^2_4$ search, e.g. between frequencies 0.5 and 0.6:
```
$ HENzsearch file_mission_instr_ev.nc -f 0.5 -F 0.6 -N 4
```
6. To run a $Z^2_2$ search searching in the frequency -- fdot space
```
$ HENzsearch file_mission_instr_ev.nc -f 0.5 -F 0.6 -N 2 --fast
$ HENplot file_mission_instr_Z2n.nc
```
7. Then... follow the instructions...
### BONUS
8. Calculate the TOAs and create a parameter and timing file (can you find how?)
9. Use `pintk` (from `github.com/nanograv/PINT`) to fit the pulse solution
```
$ pintk parfile.par timfile.tim
```
NB: due to a bug in PINT (under investigation), you might need to add the line
```
TZRMJD 55555
```
Substitute 55555 with the value of PEPOCH in the parameter file.
```python
```
<!-- dom:TITLE: Computational Physics Lectures: Partial differential equations -->
# Computational Physics Lectures: Partial differential equations
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
**Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: **Aug 23, 2017**
Copyright 1999-2017, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
## Famous PDEs
In the Natural Sciences we often encounter problems with many variables
constrained by boundary conditions and initial values. Many of these problems
can be modelled as partial differential equations. One case which arises
in many situations is the so-called wave equation whose one-dimensional form
reads
<!-- Equation labels as ordinary links -->
<div id="eq:waveeqpde"></div>
$$
\begin{equation}
\label{eq:waveeqpde} \tag{1}
\frac{\partial^2 u}{\partial x^2}=A\frac{\partial^2 u}{\partial t^2},
\end{equation}
$$
where $A$ is a constant. The solution $u$ depends on both spatial and temporal variables, viz. $u=u(x,t)$.
## Famous PDEs, two dimension
In two dimensions we have $u=u(x,y,t)$. We will, unless otherwise stated, simply use $u$ in our discussion below.
Familiar situations which this equation can model
are waves on a string, pressure waves, waves on the surface of a fjord or a
lake, electromagnetic waves and sound waves, to mention a few. For electromagnetic
waves, for example, the constant is $A=1/c^2$, with $c$ the speed of light. It is rather
straightforward to extend this equation to two or three dimensions. In two dimensions
we have
$$
\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=A\frac{\partial^2 u}{\partial t^2},
$$
## Famous PDEs, diffusion equation
The diffusion equation whose one-dimensional version reads
<!-- Equation labels as ordinary links -->
<div id="eq:diffusionpde"></div>
$$
\begin{equation}
\label{eq:diffusionpde} \tag{2}
\frac{\partial^2 u}{\partial x^2}=A\frac{\partial u}{\partial t},
\end{equation}
$$
and $A$ is in this case called the diffusion constant. It can be used to model
a wide selection of diffusion processes, from molecules to the diffusion of heat
in a given material.
## Famous PDEs, Laplace's equation
Another familiar equation from electrostatics is Laplace's equation, which looks similar
to the wave equation in Eq. ([eq:waveeqpde](#eq:waveeqpde)) except that we have set $A=0$
<!-- Equation labels as ordinary links -->
<div id="eq:laplacepde"></div>
$$
\begin{equation}
\label{eq:laplacepde} \tag{3}
\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0,
\end{equation}
$$
or if we have a finite electric charge represented by a charge density
$\rho(\mathbf{x})$ we have the familiar Poisson equation
<!-- Equation labels as ordinary links -->
<div id="eq:poissonpde"></div>
$$
\begin{equation}
\label{eq:poissonpde} \tag{4}
\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=-4\pi \rho(\mathbf{x}).
\end{equation}
$$
## Famous PDEs, Helmholtz' equation
Other famous partial differential equations are the Helmholtz (or eigenvalue) equation, here specialized to two
dimensions only
<!-- Equation labels as ordinary links -->
<div id="eq:helmholtz"></div>
$$
\begin{equation}
\label{eq:helmholtz} \tag{5}
-\frac{\partial^2 u}{\partial x^2}-\frac{\partial^2 u}{\partial y^2}=\lambda u,
\end{equation}
$$
the linear transport equation (in $2+1$ dimensions) familiar from Brownian motion as well
<!-- Equation labels as ordinary links -->
<div id="eq:transport"></div>
$$
\begin{equation}
\label{eq:transport} \tag{6}
\frac{\partial u}{\partial t} +\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y }=0,
\end{equation}
$$
## Famous PDEs, Schroedinger's equation in two dimensions
Schroedinger's equation
$$
-\frac{\partial^2 u}{\partial x^2}-\frac{\partial^2 u}{\partial y^2}+f(x,y)u = \imath\frac{\partial u}{\partial t}.
$$
## Famous PDEs, Maxwell's equations
Important systems of linear partial differential equations are the famous Maxwell equations
$$
\frac{\partial \mathbf{E}}{\partial t} = \mathrm{curl}\mathbf{B},
$$
and
$$
-\mathrm{curl} \mathbf{E} = \mathbf{B}
$$
and
$$
\mathrm{div} \mathbf{E} = \mathrm{div}\mathbf{B} = 0.
$$
## Famous PDEs, Euler's equations
Similarly, famous systems of non-linear partial differential equations are for example Euler's equations for
incompressible, inviscid flow
$$
\frac{\partial \mathbf{u}}{\partial t} +\mathbf{u}\nabla\mathbf{u}= -Dp; \hspace{1cm} \mathrm{div} \mathbf{u} = 0,
$$
with $p$ being the pressure and
$$
\nabla = \frac{\partial}{\partial x}e_x+\frac{\partial}{\partial y}e_y,
$$
in the two dimensions. The unit vectors are $e_x$ and $e_y$.
## Famous PDEs, the Navier-Stokes' equations
Another example is the set of
Navier-Stokes equations for incompressible, viscous flow
$$
\frac{\partial \mathbf{u}}{\partial t} +\mathbf{u}\nabla\mathbf{u}-\Delta \mathbf{u}= -Dp; \hspace{1cm} \mathrm{div} \mathbf{u} = 0.
$$
## Famous PDEs, general equation in two dimensions
A general partial differential equation with two given dimensions
reads
$$
A(x,y)\frac{\partial^2 u}{\partial x^2}+B(x,y)\frac{\partial^2 u}{\partial x\partial y}
+C(x,y)\frac{\partial^2 u}{\partial y^2}=F(x,y,u,\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}),
$$
and if we set
$$
B=C=0,
$$
we recover the $1+1$-dimensional diffusion equation which is an example
of a so-called parabolic partial differential equation.
With
$$
B=0, \hspace{1cm} AC < 0
$$
we get the $2+1$-dim wave equation which is an example of a so-called
hyperbolic PDE, where more generally we have
$B^2 > AC$.
For $B^2 < AC$
we obtain a so-called elliptic PDE, with the Laplace equation in
Eq. ([eq:laplacepde](#eq:laplacepde)) as one of the
classical examples.
These equations can all be easily extended to non-linear partial differential
equations and $3+1$ dimensional cases.
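A small helper function (our own illustration, following the convention above of comparing $B^2$ with $AC$) makes the classification explicit:
```
def classify_pde(A, B, C):
    """Classify A u_xx + B u_xy + C u_yy + ... = F by the sign of B^2 - AC."""
    disc = B**2 - A*C
    if disc > 0:
        return "hyperbolic"   # e.g. the wave equation
    elif disc < 0:
        return "elliptic"     # e.g. Laplace's equation
    else:
        return "parabolic"    # e.g. the diffusion equation

print(classify_pde(1, 0, 0))    # diffusion equation (B = C = 0): parabolic
print(classify_pde(1, 0, -1))   # wave equation u_xx - u_tt = 0: hyperbolic
print(classify_pde(1, 0, 1))    # Laplace's equation: elliptic
```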
## Diffusion equation
The diffusion equation describes in typical applications the evolution in time of the density $u$ of a quantity like
the particle density, energy density, temperature gradient, chemical concentrations etc.
The basis is the assumption that the flux density $\mathbf{\rho}$ obeys the Gauss-Green theorem
$$
\int_V \mathrm{div}\mathbf{\rho} dx = \int_{\partial V} \mathbf{\rho}\mathbf{n}dS,
$$
where $n$ is the unit outer normal field and $V$ is a smooth region within the space where
we seek a solution.
In the stationary case, where the net flux through the boundary of any volume vanishes, the Gauss-Green theorem leads to
$$
\mathrm{div} \mathbf{\rho} = 0.
$$
## Diffusion equation
Assuming that the flux is proportional to the gradient $\mathbf{\nabla} u$ but pointing in the opposite direction
since the flow is from regions of high concentration to regions of lower concentration, we obtain
$$
\mathbf{\rho} = -D\mathbf{\nabla} u,
$$
resulting in
$$
D\,\mathrm{div} \mathbf{\nabla} u = D\Delta u = 0,
$$
which is Laplace's equation. The constant $D$ can be coupled with various physical
constants, such as the diffusion constant or the specific heat and thermal conductivity discussed below.
## Diffusion equation, famous laws
If we let $u$ denote the concentration of a particle species, this results in Fick's law of diffusion.
If it denotes the temperature gradient, we have Fourier's law of heat conduction, and if it refers to the
electrostatic potential we have Ohm's law of electrical conduction.
Coupling the rate of change (temporal dependence) of $u$ with the flux density we have
$$
\frac{\partial u}{\partial t} = -\mathrm{div}\mathbf{\rho},
$$
which results in
$$
\frac{\partial u}{\partial t}= D \mathrm{div} \mathbf{\nabla} u = D \Delta u,
$$
the diffusion equation, or heat equation.
## Diffusion equation, heat equation
If we specialize to the heat equation,
we assume that the diffusion of heat through some
material is proportional with the temperature gradient $T(\mathbf{x},t)$
and using
conservation of energy we arrive at the diffusion equation
$$
\frac{\kappa}{C\rho}\nabla^2 T(\mathbf{x},t) =\frac{\partial T(\mathbf{x},t)}{\partial t}
$$
where $C$ is the specific heat and $\rho$
the density of the material.
Here we let the density be represented by a
constant, but there is no problem introducing an explicit spatial dependence, viz.,
$$
\frac{\kappa}{C\rho(\mathbf{x},t)}\nabla^2 T(\mathbf{x},t) =
\frac{\partial T(\mathbf{x},t)}{\partial t}.
$$
## Diffusion equation, heat equation in one dimension
Setting all constants equal to the diffusion constant $D$, i.e.,
$$
D=\frac{C\rho}{\kappa},
$$
we arrive at
$$
\nabla^2 T(\mathbf{x},t) =
D\frac{\partial T(\mathbf{x},t)}{\partial t}.
$$
Specializing to the $1+1$-dimensional case we have
$$
\frac{\partial^2 T(x,t)}{\partial x^2}=D\frac{\partial T(x,t)}{\partial t}.
$$
## Diffusion equation, dimensionless form
We note that the dimension of $D$ is time/length$^2$.
Introducing the dimensionless variable $\hat{x}$ through $\alpha\hat{x}=x$
we get
$$
\frac{\partial^2 T(x,t)}{\alpha^2\partial \hat{x}^2}=
D\frac{\partial T(x,t)}{\partial t},
$$
and since $\alpha$ is just a constant we could define
$\alpha^2D= 1$ or use the last expression to define a dimensionless time-variable
$\hat{t}$. This yields a simplified diffusion equation
$$
\frac{\partial^2 T(\hat{x},\hat{t})}{\partial \hat{x}^2}=
\frac{\partial T(\hat{x},\hat{t})}{\partial \hat{t}}.
$$
It is now a partial differential equation in terms of dimensionless
variables. In the discussion below, we will however, for the sake
of notational simplicity replace $\hat{x}\rightarrow x$ and
$\hat{t}\rightarrow t$. Moreover, the solution to the $1+1$-dimensional
partial differential equation is replaced by $T(\hat{x},\hat{t})\rightarrow u(x,t)$.
## Explicit Scheme
In one dimension we have the following equation
$$
\nabla^2 u(x,t) =\frac{\partial u(x,t)}{\partial t},
$$
or
$$
u_{xx} = u_t,
$$
with initial conditions, i.e., the conditions at $t=0$,
$$
u(x,0)= g(x) \hspace{0.5cm} 0 < x < L
$$
with $L=1$ the length of the $x$-region of interest.
## Explicit Scheme, boundary conditions
The boundary conditions are
$$
u(0,t)= a(t) \hspace{0.5cm} t \ge 0,
$$
and
$$
u(L,t)= b(t) \hspace{0.5cm} t \ge 0,
$$
where $a(t)$ and $b(t)$ are two functions which depend on time only, while
$g(x)$ depends only on the position $x$.
Our next step is to find a numerical algorithm for solving this equation. Here we recur
to our familiar equal-step methods
and introduce different step lengths for the space-variable $x$ and time $t$ through
the step length for $x$
$$
\Delta x=\frac{1}{n+1}
$$
and the time step length $\Delta t$. The position after $i$ steps and
time at time-step $j$ are now given by
$$
\begin{array}{cc} t_j=j\Delta t & j \ge 0 \\
x_i=i\Delta x & 0 \le i \le n+1\end{array}
$$
## Explicit Scheme, algorithm
If we use standard approximations for the derivatives we obtain
$$
u_t\approx \frac{u(x,t+\Delta t)-u(x,t)}{\Delta t}=\frac{u(x_i,t_j+\Delta t)-u(x_i,t_j)}{\Delta t}
$$
with a local approximation error $O(\Delta t)$
and
$$
u_{xx}\approx \frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^2},
$$
or
$$
u_{xx}\approx \frac{u(x_i+\Delta x,t_j)-2u(x_i,t_j)+u(x_i-\Delta x,t_j)}{\Delta x^2},
$$
with a local approximation error $O(\Delta x^2)$. Our approximation is to higher order
in coordinate space. This can be justified since in most cases it is the spatial
dependence which causes numerical problems.
## Explicit Scheme, simplifications
These equations can be further simplified as
$$
u_t\approx \frac{u_{i,j+1}-u_{i,j}}{\Delta t},
$$
and
$$
u_{xx}\approx \frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{\Delta x^2}.
$$
The one-dimensional diffusion equation can then be rewritten in its
discretized version as
$$
\frac{u_{i,j+1}-u_{i,j}}{\Delta t}=\frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{\Delta x^2}.
$$
Defining $\alpha = \Delta t/\Delta x^2$ results in the explicit scheme
<!-- Equation labels as ordinary links -->
<div id="eq:explicitpde"></div>
$$
\begin{equation}
\label{eq:explicitpde} \tag{7}
u_{i,j+1}= \alpha u_{i-1,j}+(1-2\alpha)u_{i,j}+\alpha u_{i+1,j}.
\end{equation}
$$
## Explicit Scheme, solving the equations
Since all the discretized initial values
$$
u_{i,0} = g(x_i),
$$
are known, then after one time-step the only unknown quantity is
$u_{i,1}$ which is given by
$$
u_{i,1}= \alpha u_{i-1,0}+(1-2\alpha)u_{i,0}+\alpha u_{i+1,0}=
\alpha g(x_{i-1})+(1-2\alpha)g(x_{i})+\alpha g(x_{i+1}).
$$
We can then obtain $u_{i,2}$ using the previously calculated values $u_{i,1}$
and the boundary conditions $a(t)$ and $b(t)$.
This algorithm results in a so-called explicit scheme, since the next functions
$u_{i,j+1}$ are explicitely given by Eq. ([eq:explicitpde](#eq:explicitpde)).
## Explicit Scheme, simple case
We specialize to the case
$a(t)=b(t)=0$ which results in $u_{0,j}=u_{n+1,j}=0$.
We can then reformulate our partial differential equation through the vector
$V_j$ at the time $t_j=j\Delta t$
$$
V_j=\begin{bmatrix}u_{1,j}\\ u_{2,j} \\ \dots \\ u_{n,j}\end{bmatrix}.
$$
## Explicit Scheme, matrix-vector formulation
This results in a matrix-vector multiplication
$$
V_{j+1} = \mathbf{A}V_{j}
$$
with the matrix $\mathbf{A}$ given by
$$
\mathbf{A}=\begin{bmatrix}1-2\alpha&\alpha&0& 0\dots\\
\alpha&1-2\alpha&\alpha & 0\dots \\
\dots & \dots & \dots & \dots \\
0\dots & 0\dots &\alpha& 1-2\alpha\end{bmatrix}
$$
which means we can rewrite the original partial differential equation as
a set of matrix-vector multiplications
$$
V_{j+1} = \mathbf{A}V_{j}=\dots = \mathbf{A}^{j+1}V_0,
$$
where $V_0$ is the initial vector at time $t=0$ defined by the initial value
$g(x)$.
In the numerical implementation
one should avoid treating this problem as a full matrix-vector multiplication,
since the matrix is tridiagonal and at most three elements in each row are different from zero.
## Explicit Scheme, sketch of code
It is rather easy to implement this matrix-vector multiplication as seen in the following piece of code
  // First we initialise the new and old vectors
// Here we have chosen the boundary conditions to be zero.
// n+1 is the number of mesh points in x
// Armadillo notation for vectors
u(0) = unew(0) = u(n) = unew(n) = 0.0;
for (int i = 1; i < n; i++) {
x = i*step;
// initial condition
u(i) = func(x);
// intitialise the new vector
unew(i) = 0;
}
// Time integration
for (int t = 1; t <= tsteps; t++) {
for (int i = 1; i < n; i++) {
// Discretized diff eq
unew(i) = alpha * u(i-1) + (1 - 2*alpha) * u(i) + alpha * u(i+1);
}
      // note that the boundaries are not changed
      // copy the new solution into u before the next time step
      u = unew;
  }  // end of time iteration
## Explicit Scheme, stability condition
However, although the explicit scheme is easy to implement, it has a very weak
stability condition, given by
$$
\Delta t/\Delta x^2 \le 1/2.
$$
This means that if $\Delta x = 0.01$ (a rather frequent choice), then $\Delta t\le 5\times 10^{-5}$. This has obviously
bad consequences if our time interval is large.
In order to derive this relation we need some results from studies of iterative schemes.
If we require that our solution approaches a definite value after
a certain amount of time steps we need to require that the so-called
spectral radius $\rho(\mathbf{A})$ of our matrix $\mathbf{A}$ satisfies the condition
<!-- Equation labels as ordinary links -->
<div id="eq:rhoconverge"></div>
$$
\begin{equation}
\label{eq:rhoconverge} \tag{8}
\rho(\mathbf{A}) < 1.
\end{equation}
$$
## Explicit Scheme, spectral radius and stability
The spectral radius is defined
as
$$
\rho(\mathbf{A}) = \hspace{0.1cm}\mathrm{max}\left\{|\lambda|:\mathrm{det}(\mathbf{A}-\lambda\hat{I})=0\right\},
$$
which is interpreted as the smallest number such that a circle with this radius, centered at zero in the complex plane,
contains all eigenvalues of $\mathbf{A}$. If the matrix is positive definite, the condition in
Eq. ([eq:rhoconverge](#eq:rhoconverge)) is always satisfied.
## Explicit Scheme, eigenvalues and stability
We can obtain closed-form expressions for the eigenvalues of $\mathbf{A}$. To achieve this it is convenient
to rewrite the matrix as
$$
\mathbf{A}=\hat{I}-\alpha\hat{B},
$$
with
$$
\hat{B} =\begin{bmatrix}2&-1&0& 0 &\dots\\
-1&2&-1& 0&\dots \\
\dots & \dots & \dots & \dots & -1 \\
0 & 0 &\dots &-1&2\end{bmatrix}.
$$
## Explicit Scheme, final stability analysis
The eigenvalues of $\mathbf{A}$ are $\lambda_i=1-\alpha\mu_i$, with $\mu_i$ being the
eigenvalues of $\hat{B}$. To find $\mu_i$ we note that the matrix elements of $\hat{B}$ are
$$
b_{ij} = 2\delta_{ij}-\delta_{i+1j}-\delta_{i-1j},
$$
meaning that we
have the following set of eigenequations for component $i$
$$
(\hat{B}\hat{x})_i = \mu_ix_i,
$$
resulting in
$$
(\hat{B}\hat{x})_i=\sum_{j=1}^n\left(2\delta_{ij}-\delta_{i+1j}-\delta_{i-1j}\right)x_j =
2x_i-x_{i+1}-x_{i-1}=\mu_ix_i.
$$
## Explicit Scheme, stability condition
If we assume that $x$ can be expanded in a basis of $x=(\sin{(\theta)}, \sin{(2\theta)},\dots, \sin{(n\theta)})$
with $\theta = l\pi/(n+1)$, where we have the endpoints given by $x_0 = 0$ and $x_{n+1}=0$, we can rewrite the
last equation as
$$
2\sin{(i\theta)}-\sin{((i+1)\theta)}-\sin{((i-1)\theta)}=\mu_i\sin{(i\theta)},
$$
or
$$
2\left(1-\cos{(\theta)}\right)\sin{(i\theta)}=\mu_i\sin{(i\theta)},
$$
which is nothing but
$$
2\left(1-\cos{(\theta)}\right)x_i=\mu_ix_i,
$$
with eigenvalues $\mu_i = 2-2\cos{(\theta)}$.
Our requirement in
Eq. ([eq:rhoconverge](#eq:rhoconverge)) results in
$$
-1 < 1-2\alpha\left(1-\cos{(\theta)}\right) < 1,
$$
which is satisfied only if $\alpha < \left(1-\cos{(\theta)}\right)^{-1}$ resulting in
$\alpha \le 1/2$ or $\Delta t/\Delta x^2 \le 1/2$.
## Explicit Scheme, general tridiagonal matrix
A more general tridiagonal matrix
$$
\mathbf{A} =\begin{bmatrix}a&b&0& 0 &\dots\\
c&a&b& 0&\dots \\
\dots & \dots & \dots & \dots & b \\
0 & 0 &\dots &c&a\end{bmatrix},
$$
has eigenvalues $\mu_i=a+2\sqrt{bc}\cos{(i\pi/(n+1))}$ with $i=1:n$.
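A quick numerical check of this closed-form result (a small sketch using `numpy`; the backward-Euler values $a=1+2\alpha$, $b=c=-\alpha$ with $\alpha=0.4$ are chosen only for illustration) is
```
import numpy as np

n = 8
alpha = 0.4
a, b, c = 1 + 2*alpha, -alpha, -alpha

# n x n tridiagonal matrix with a on the diagonal, b above and c below
A = (np.diag(a*np.ones(n))
     + np.diag(b*np.ones(n-1), k=1)
     + np.diag(c*np.ones(n-1), k=-1))

numerical = np.sort(np.linalg.eigvals(A).real)
analytical = np.sort([a + 2*np.sqrt(b*c)*np.cos(i*np.pi/(n+1))
                      for i in range(1, n+1)])
print(np.allclose(numerical, analytical))  # True
print(numerical.min() > 1)                 # all eigenvalues exceed one, as used below
```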
## Implicit Scheme
In deriving the equations for the explicit scheme we started with the so-called
forward formula for the first derivative, i.e., we used the discrete approximation
$$
u_t\approx \frac{u(x_i,t_j+\Delta t)-u(x_i,t_j)}{\Delta t}.
$$
However, there is nothing which hinders us from using the backward formula
$$
u_t\approx \frac{u(x_i,t_j)-u(x_i,t_j-\Delta t)}{\Delta t},
$$
still with a truncation error which goes like $O(\Delta t)$.
## Implicit Scheme
We could also have used a midpoint approximation for the first derivative, resulting in
$$
u_t\approx \frac{u(x_i,t_j+\Delta t)-u(x_i,t_j-\Delta t)}{2\Delta t},
$$
with a truncation error $O(\Delta t^2)$.
Here we will stick to the backward formula and come back to the latter below.
For the second derivative we use however
$$
u_{xx}\approx \frac{u(x_i+\Delta x,t_j)-2u(x_i,t_j)+u(x_i-\Delta x,t_j)}{\Delta x^2},
$$
and define again $\alpha = \Delta t/\Delta x^2$.
## Implicit Scheme
We obtain now
$$
u_{i,j-1}= -\alpha u_{i-1,j}+(1+2\alpha)u_{i,j}-\alpha u_{i+1,j}.
$$
Here $u_{i,j-1}$ is the only unknown quantity.
Defining the matrix
$\mathbf{A}$
$$
\mathbf{A}=\begin{bmatrix}1+2\alpha&-\alpha&0& 0 &\dots\\
-\alpha&1+2\alpha&-\alpha & 0 & \dots \\
\dots & \dots & \dots & \dots &\dots \\
\dots & \dots & \dots & \dots & -\alpha \\
0 & 0 &\dots &-\alpha& 1+2\alpha\end{bmatrix},
$$
we can reformulate again the problem as a matrix-vector multiplication
$$
\mathbf{A}V_{j} = V_{j-1}
$$
## Implicit Scheme
It means that we can rewrite the problem as
$$
V_{j} = \mathbf{A}^{-1}V_{j-1}=\mathbf{A}^{-1}\left(\mathbf{A}^{-1}V_{j-2}\right)=\dots = \mathbf{A}^{-j}V_0.
$$
This is an implicit scheme since it relies on determining the vector
$u_{i,j-1}$ instead of $u_{i,j+1}$.
If $\alpha$ does not depend on time $t$, we need
to invert a matrix only once. Alternatively we can solve this system of equations using our methods
from linear algebra.
These are however very cumbersome ways of solving since they involve $\sim O(N^3)$ operations
for a $N\times N$ matrix.
It is much faster to solve these linear equations using methods for tridiagonal matrices,
since these involve only $\sim O(N)$ operations.
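In Python, a compact way to perform such a tridiagonal solve (shown here only as an illustrative sketch; the C++ program below uses its own tridiagonal solver) is the banded solver from SciPy, which carries out the same $O(N)$ forward and backward sweeps:
```
import numpy as np
from scipy.linalg import solve_banded

n, alpha = 10, 0.4
# Banded storage of the tridiagonal matrix with 1+2*alpha on the diagonal
# and -alpha on the sub- and superdiagonals
ab = np.zeros((3, n))
ab[0, 1:] = -alpha           # superdiagonal
ab[1, :] = 1 + 2*alpha       # diagonal
ab[2, :-1] = -alpha          # subdiagonal

rhs = np.ones(n)                      # plays the role of V_{j-1}
V_j = solve_banded((1, 1), ab, rhs)   # solves A V_j = V_{j-1} in O(n) operations
print(V_j)
```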
## Implicit Scheme
The implicit scheme is always stable since the spectral radius satisfies $\rho(\mathbf{A}) < 1 $. We could have inferred this by noting that
the matrix is positive definite, viz. all eigenvalues are larger than zero. We see this from
the fact that $\mathbf{A}=\hat{I}+\alpha\hat{B}$ has eigenvalues
$\lambda_i = 1+\alpha(2-2\cos{(\theta)})$ which satisfy $\lambda_i > 1$. Since it is the inverse which stands
to the right of our iterative equation, we have $\rho(\mathbf{A}^{-1}) < 1 $
and the method is stable for all combinations
of $\Delta t$ and $\Delta x$.
### Program Example for Implicit Equation
We show here parts of a simple example of how to solve the one-dimensional diffusion equation using the implicit
scheme discussed above. The program uses the function to solve linear equations with a tridiagonal
matrix.
// parts of the function for backward Euler
void backward_euler(int n, int tsteps, double delta_x, double alpha)
{
double a, b, c;
vec u(n+1); // This is u of Au = y
vec y(n+1); // Right side of matrix equation Au=y, the solution at a previous step
// Initial conditions
for (int i = 1; i < n; i++) {
y(i) = u(i) = func(delta_x*i);
}
  // Boundary conditions (zero here)
  y(0) = u(0) = u(n) = y(n) = 0.0;
// Matrix A, only constants
a = c = - alpha;
b = 1 + 2*alpha;
// Time iteration
for (int t = 1; t <= tsteps; t++) {
// here we solve the tridiagonal linear set of equations,
tridag(a, b, c, y, u, n+1);
// boundary conditions
u(0) = 0;
u(n) = 0;
// replace previous time solution with new
for (int i = 0; i <= n; i++) {
y(i) = u(i);
}
// You may consider printing the solution at regular time intervals
.... // print statements
} // end time iteration
...
}
## Crank-Nicolson scheme
It is possible to combine the implicit and explicit methods in a slightly more general
approach. Introducing a parameter $\theta$ (the so-called $\theta$-rule) we can set up
an equation
<!-- Equation labels as ordinary links -->
<div id="eq:cranknicolson"></div>
$$
\begin{equation}
\label{eq:cranknicolson} \tag{9}
\frac{\theta}{\Delta x^2}\left(u_{i-1,j}-2u_{i,j}+u_{i+1,j}\right)+
\frac{1-\theta}{\Delta x^2}\left(u_{i+1,j-1}-2u_{i,j-1}+u_{i-1,j-1}\right)=
\frac{1}{\Delta t}\left(u_{i,j}-u_{i,j-1}\right),
\end{equation}
$$
which for $\theta=0$ yields the forward formula for the first derivative and
the explicit scheme, while $\theta=1$ yields the backward formula and the implicit
scheme. These two schemes are called the forward and backward Euler schemes, respectively.
For $\theta = 1/2$ we obtain a new scheme after its inventors, Crank and Nicolson.
This scheme yields a truncation in time which goes like $O(\Delta t^2)$ and it is stable
for all possible combinations of $\Delta t$ and $\Delta x$.
## Derivation of CN scheme
To derive the Crank-Nicolson equation,
we start with the forward Euler scheme and Taylor expand $u(x,t+\Delta t)$,
$u(x+\Delta x, t)$ and $u(x-\Delta x,t)$
<!-- Equation labels as ordinary links -->
<div id="_auto1"></div>
$$
\begin{equation}
u(x+\Delta x,t)=u(x,t)+\frac{\partial u(x,t)}{\partial x} \Delta x+\frac{\partial^2 u(x,t)}{2\partial x^2}\Delta x^2+\mathcal{O}(\Delta x^3),
\label{_auto1} \tag{10}
\end{equation}
$$
$$
\nonumber
u(x-\Delta x,t)=u(x,t)-\frac{\partial u(x,t)}{\partial x}\Delta x+\frac{\partial^2 u(x,t)}{2\partial x^2} \Delta x^2+\mathcal{O}(\Delta x^3),
$$
<!-- Equation labels as ordinary links -->
<div id="eq:deltat0"></div>
$$
\nonumber
u(x,t+\Delta t)=u(x,t)+\frac{\partial u(x,t)}{\partial t}\Delta t+ \mathcal{O}(\Delta t^2).
\label{eq:deltat0} \tag{11}
$$
## Taylor expansions
With these Taylor expansions the approximations for the derivatives takes the form
<!-- Equation labels as ordinary links -->
<div id="_auto2"></div>
$$
\begin{equation}
\left[\frac{\partial u(x,t)}{\partial t}\right]_{\text{approx}} =\frac{\partial u(x,t)}{\partial t}+\mathcal{O}(\Delta t) ,
\label{_auto2} \tag{12}
\end{equation}
$$
$$
\nonumber
\left[\frac{\partial^2 u(x,t)}{\partial x^2}\right]_{\text{approx}}=\frac{\partial^2 u(x,t)}{\partial x^2}+\mathcal{O}(\Delta x^2).
$$
It is easy to convince oneself that the backward Euler method must have the same truncation errors as the forward Euler scheme.
## Error in CN scheme
For the Crank-Nicolson scheme we also need to Taylor expand $u(x+\Delta x, t+\Delta t)$ and $u(x-\Delta x, t+\Delta t)$ around $t'=t+\Delta t/2$.
<!-- Equation labels as ordinary links -->
<div id="_auto3"></div>
$$
\begin{equation}
u(x+\Delta x, t+\Delta t)=u(x,t')+\frac{\partial u(x,t')}{\partial x}\Delta x+\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} +\notag
\label{_auto3} \tag{13}
\end{equation}
$$
$$
\nonumber
\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)
$$
$$
\nonumber
u(x-\Delta x, t+\Delta t)=u(x,t')-\frac{\partial u(x,t')}{\partial x}\Delta x+\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} - \notag
$$
$$
\nonumber
\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)
$$
<!-- Equation labels as ordinary links -->
<div id="_auto4"></div>
$$
\begin{equation}
u(x+\Delta x,t)=u(x,t')+\frac{\partial u(x,t')}{\partial x}\Delta x-\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} -\notag
\label{_auto4} \tag{14}
\end{equation}
$$
$$
\nonumber
\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)
$$
$$
\nonumber
u(x-\Delta x,t)=u(x,t')-\frac{\partial u(x,t')}{\partial x}\Delta x-\frac{\partial u(x,t')}{\partial t} \frac{\Delta t}{2} +\frac{\partial^2 u(x,t')}{2\partial x^2}\Delta x^2+\frac{\partial^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} +\notag
$$
$$
\nonumber
\frac{\partial^2 u(x,t')}{\partial x\partial t}\frac{\Delta t}{2} \Delta x+ \mathcal{O}(\Delta t^3)
$$
$$
\nonumber
u(x,t+\Delta t)=u(x,t')+\frac{\partial u(x,t')}{\partial t}\frac{\Delta t}{2} +\frac{\partial ^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} + \mathcal{O}(\Delta t^3)
$$
<!-- Equation labels as ordinary links -->
<div id="eq:deltat"></div>
$$
\nonumber
u(x,t)=u(x,t')-\frac{\partial u(x,t')}{\partial t}\frac{\Delta t}{2}+\frac{\partial ^2 u(x,t')}{2\partial t^2}\frac{\Delta t^2}{4} + \mathcal{O}(\Delta t^3)
\label{eq:deltat} \tag{15}
$$
We now insert these expansions in the approximations for the derivatives to find
<!-- Equation labels as ordinary links -->
<div id="_auto5"></div>
$$
\begin{equation}
\left[\frac{\partial u(x,t')}{\partial t}\right]_{\text{approx}} =\frac{\partial u(x,t')}{\partial t}+\mathcal{O}(\Delta t^2) ,
\label{_auto5} \tag{16}
\end{equation}
$$
$$
\nonumber
\left[\frac{\partial^2 u(x,t')}{\partial x^2}\right]_{\text{approx}}=\frac{\partial^2 u(x,t')}{\partial x^2}+\mathcal{O}(\Delta x^2).
$$
## Truncation errors and stability
The following table summarizes the three methods.
<table border="1">
<thead>
<tr><th align="center"> *Scheme:* </th> <th align="center"> *Truncation Error:* </th> <th align="center"> *Stability requirements:* </th> </tr>
</thead>
<tbody>
<tr><td align="center"> Crank-Nicolson </td> <td align="center"> $\mathcal{O}(\Delta x^2)$ and $\mathcal{O}(\Delta t^2)$ </td> <td align="center"> Stable for all $\Delta t$ and $\Delta x$. </td> </tr>
<tr><td align="center"> Backward Euler </td> <td align="center"> $\mathcal{O}(\Delta x^2)$ and $\mathcal{O}(\Delta t)$ </td> <td align="center"> Stable for all $\Delta t$ and $\Delta x$. </td> </tr>
<tr><td align="center"> Forward Euler </td> <td align="center"> $\mathcal{O}(\Delta x^2)$ and $\mathcal{O}(\Delta t)$ </td> <td align="center"> $\Delta t\leq \frac{1}{2}\Delta x^2$ </td> </tr>
</tbody>
</table>
## Rewrite of CN scheme
Using our previous definition of $\alpha=\Delta t/\Delta x^2$ we can rewrite Eq. ([eq:cranknicolson](#eq:cranknicolson)) as
$$
-\alpha u_{i-1,j}+\left(2+2\alpha\right)u_{i,j}-\alpha u_{i+1,j}=
\alpha u_{i-1,j-1}+\left(2-2\alpha\right)u_{i,j-1}+\alpha u_{i+1,j-1},
$$
or in matrix-vector form as
$$
\left(2\hat{I}+\alpha\hat{B}\right)V_{j}=
\left(2\hat{I}-\alpha\hat{B}\right)V_{j-1},
$$
where the vector $V_{j}$ is the same as defined in the implicit case while the matrix
$\hat{B}$ is
$$
\hat{B}=\begin{bmatrix}2&-1&0&0 & \dots\\
-1& 2& -1 & 0 &\dots \\
\dots & \dots & \dots & \dots & \dots \\
\dots & \dots & \dots & \dots &-1 \\
0& 0 & \dots &-1& 2\end{bmatrix}.
$$
## Final CN equations
We can rewrite the Crank-Nicolson scheme as follows
$$
V_{j}=
\left(2\hat{I}+\alpha\hat{B}\right)^{-1}\left(2\hat{I}-\alpha\hat{B}\right)V_{j-1}.
$$
We have already obtained the eigenvalues for the two matrices
$\left(2\hat{I}+\alpha\hat{B}\right)$ and $\left(2\hat{I}-\alpha\hat{B}\right)$.
This means that the spectral function has to satisfy
$$
\rho(\left(2\hat{I}+\alpha\hat{B}\right)^{-1}\left(2\hat{I}-\alpha\hat{B}\right)) <1,
$$
meaning that
$$
\left|\left(2+\alpha\mu_i\right)^{-1}\left(2-\alpha\mu_i\right)\right| <1,
$$
and since $\mu_i = 2-2\cos{(\theta)}$ we have $0< \mu_i < 4$. A little algebra shows that
the algorithm is stable for all possible values of $\Delta t$ and $\Delta x$.
## Parts of Code for the Crank-Nicolson Scheme
We can implement the Crank-Nicolson algorithm efficiently by first multiplying the matrix
$$
\tilde{V}_{j-1}=\left(2\hat{I}-\alpha\hat{B}\right)V_{j-1},
$$
with our previous vector $V_{j-1}$ using the matrix-vector multiplication algorithm for a
tridiagonal matrix, as done in the forward-Euler scheme. Thereafter we can solve the equation
$$
\left(2\hat{I}+\alpha\hat{B}\right) V_{j}=
\tilde{V}_{j-1},
$$
using our method for systems of linear equations with a tridiagonal matrix, as done for the backward Euler scheme.
## Parts of Code for the Crank-Nicolson Scheme
We illustrate this in the following part of our program.
void crank_nicolson(int n, int tsteps, double delta_x, double alpha)
{
double a, b, c;
vec u(n+1); // This is u in Au = r
vec r(n+1); // Right side of matrix equation Au=r
....
// setting up the matrix
a = c = - alpha;
b = 2 + 2*alpha;
// Time iteration
for (int t = 1; t <= tsteps; t++) {
// Calculate r for use in tridag, right hand side of the Crank Nicolson method
for (int i = 1; i < n; i++) {
r(i) = alpha*u(i-1) + (2 - 2*alpha)*u(i) + alpha*u(i+1);
}
r(0) = 0;
r(n) = 0;
// Then solve the tridiagonal matrix
    tridag(a, b, c, r, u, n+1);
u(0) = 0;
u(n) = 0;
// Eventual print statements etc
....
}
## Python code for solving the one-dimensional diffusion equation
The following Python code sets up and solves the diffusion equation for all three methods discussed.
```
%matplotlib inline
# Code for solving the 1+1 dimensional diffusion equation
# du/dt = ddu/ddx on a rectangular grid of size L x (T*dt),
# with with L = 1, u(x,0) = g(x), u(0,t) = u(L,t) = 0
import numpy, sys, math
from matplotlib import pyplot as plt
import numpy as np
def forward_step(alpha,u,uPrev,N):
"""
Steps forward-euler algo one step ahead.
Implemented in a separate function for code-reuse from crank_nicolson()
"""
    for x in range(1,N+1): #loop from i=1 to i=N
u[x] = alpha*uPrev[x-1] + (1.0-2*alpha)*uPrev[x] + alpha*uPrev[x+1]
def forward_euler(alpha,u,N,T):
"""
    Implements the forward Euler scheme, results saved to
array u
"""
#Skip boundary elements
    for t in range(1,T):
forward_step(alpha,u[t],u[t-1],N)
def tridiag(alpha,u,N):
"""
    Tridiagonal Gauss eliminator, specialized to diagonal = 1+2*alpha,
super- and sub- diagonal = - alpha
"""
d = numpy.zeros(N) + (1+2*alpha)
b = numpy.zeros(N-1) - alpha
#Forward eliminate
    for i in range(1,N):
#Normalize row i (i in u convention):
b[i-1] /= d[i-1];
u[i] /= d[i-1] #Note: row i in u = row i-1 in the matrix
d[i-1] = 1.0
#Eliminate
u[i+1] += u[i]*alpha
d[i] += b[i-1]*alpha
#Normalize bottom row
u[N] /= d[N-1]
d[N-1] = 1.0
#Backward substitute
    for i in range(N,1,-1): #loop from i=N to i=2
u[i-1] -= u[i]*b[i-2]
#b[i-2] = 0.0 #This is never read, why bother...
def backward_euler(alpha,u,N,T):
"""
    Implements the backward Euler scheme by Gauss elimination of a tridiagonal matrix.
Results are saved to u.
"""
    for t in range(1,T):
u[t] = u[t-1].copy()
tridiag(alpha,u[t],N) #Note: Passing a pointer to row t, which is modified in-place
def crank_nicolson(alpha,u,N,T):
"""
    Implements the Crank-Nicolson scheme, reusing code from forward and backward Euler
"""
    for t in range(1,T):
forward_step(alpha/2,u[t],u[t-1],N)
tridiag(alpha/2,u[t],N)
def g(x):
"""Initial condition u(x,0) = g(x), x \in [0,1]"""
return numpy.sin(math.pi*x)
# Number of integration points along x-axis
N = 100
# Step length in time
dt = 0.01
# Number of time steps till final time
T = 100
# Define method to use 1 = explicit scheme, 2= implicit scheme, 3 = Crank-Nicolson
method = 2
#dx = 1/float(N+1)
u = numpy.zeros((T,N+2),numpy.double)
(x,dx) = numpy.linspace (0,1,N+2, retstep=True)
alpha = dt/(dx**2)
#Initial condition
u[0,:] = g(x)
u[0,0] = u[0,N+1] = 0.0 #Implement boundaries rigidly
if method == 1:
forward_euler(alpha,u,N,T)
elif method == 2:
backward_euler(alpha,u,N,T)
elif method == 3:
crank_nicolson(alpha,u,N,T)
else:
print "Please select method 1,2, or 3!"
import sys
sys.exit(0)
# To do: add movie
```
## Solution for the One-dimensional Diffusion Equation
It cannot be repeated enough, it is always useful to find cases where one can compare the numerical results
and the developed algorithms and codes with closed-form solutions.
The above case is also particularly simple.
We have the following partial differential equation
$$
\nabla^2 u(x,t) =\frac{\partial u(x,t)}{\partial t},
$$
with initial conditions
$$
u(x,0)= g(x) \hspace{0.5cm} 0 < x < L.
$$
## Solution for the One-dimensional Diffusion Equation
The boundary conditions are
$$
u(0,t)= 0 \hspace{0.5cm} t \ge 0, \hspace{1cm} u(L,t)= 0 \hspace{0.5cm} t \ge 0,
$$
We assume that we have solutions of the form (separation of variable)
$$
u(x,t)=F(x)G(t).
$$
which inserted in the partial differential equation results in
$$
\frac{F''}{F}=\frac{G'}{G},
$$
where the derivative is with respect to $x$ on the left hand side and with respect to $t$ on the right hand side.
This equation should hold for all $x$ and $t$. We must require the rhs and lhs to be equal to a constant.
## Solution for the One-dimensional Diffusion Equation
We call this constant $-\lambda^2$. This gives us the two differential equations,
$$
F''+\lambda^2F=0; \hspace{1cm} G'=-\lambda^2G,
$$
with general solutions
$$
F(x)=A\sin(\lambda x)+B\cos(\lambda x); \hspace{1cm} G(t)=Ce^{-\lambda^2t}.
$$
## Solution for the One-dimensional Diffusion Equation
To satisfy the boundary conditions we require $B=0$ and $\lambda=n\pi/L$. One solution is therefore found to be
$$
u(x,t)=A_n\sin(n\pi x/L)e^{-n^2\pi^2 t/L^2}.
$$
But there are infinitely many possible $n$ values (infinite number of solutions). Moreover,
the diffusion equation is linear and because of this we know that a superposition of solutions
will also be a solution of the equation. We may therefore write
$$
u(x,t)=\sum_{n=1}^{\infty} A_n \sin(n\pi x/L) e^{-n^2\pi^2 t/L^2}.
$$
## Solution for the One-dimensional Diffusion Equation
The coefficient $A_n$ is in turn determined from the initial condition. We require
$$
u(x,0)=g(x)=\sum_{n=1}^{\infty} A_n \sin(n\pi x/L).
$$
The coefficient $A_n$ is the Fourier coefficients for the function $g(x)$. Because of this, $A_n$ is given by (from the theory on Fourier series)
$$
A_n=\frac{2}{L}\int_0^L g(x)\sin(n\pi x/L) \mathrm{d}x.
$$
Different $g(x)$ functions will obviously result in different results for $A_n$.
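As a minimal check (our own sketch, not part of the programs above): for $g(x)=\sin(\pi x)$ and $L=1$ only the $n=1$ term survives, $A_1=1$, and the closed-form solution is $u(x,t)=\sin(\pi x)\,e^{-\pi^2 t}$, which can be compared directly with, say, the explicit scheme:
```
import numpy as np

def exact_solution(x, t):
    # Closed-form solution for g(x) = sin(pi x) and L = 1
    return np.sin(np.pi*x)*np.exp(-np.pi**2*t)

N, dt, tsteps = 100, 4.0e-5, 2500            # alpha = dt/dx^2 < 1/2, so the scheme is stable
x, dx = np.linspace(0, 1, N+2, retstep=True)
alpha = dt/dx**2
u = np.sin(np.pi*x)                          # initial condition g(x); the boundaries stay zero
for _ in range(tsteps):
    u[1:-1] = u[1:-1] + alpha*(u[2:] - 2*u[1:-1] + u[:-2])
print("max error:", np.max(np.abs(u - exact_solution(x, tsteps*dt))))
```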
## Explicit scheme for the diffusion equation in two dimensions
The $2+1$-dimensional diffusion equation, with the diffusion constant $D=1$, is given by
$$
\frac{\partial u}{\partial t}=\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right),
$$
where we have $u=u(x,y,t)$.
We assume that we have a square lattice of length $L$ with equally
many mesh points in the $x$ and $y$ directions.
We discretize again position and time using now
$$
u_{xx}\approx \frac{u(x+h,y,t)-2u(x,y,t)+u(x-h,y,t)}{h^2},
$$
which we rewrite as, in its discretized version,
$$
u_{xx}\approx \frac{u^{l}_{i+1,j}-2u^{l}_{i,j}+u^{l}_{i-1,j}}{h^2},
$$
where $x_i=x_0+ih$, $y_j=y_0+jh$ and $t_l=t_0+l\Delta t$, with $h=L/(n+1)$ and $\Delta t$ the time step.
## Explicit scheme for the diffusion equation in two dimensions
We have defined our domain to start at $x(y)=0$ and end at $x(y)=L$.
The second derivative with respect to $y$ reads
$$
u_{yy}\approx \frac{u^{l}_{i,j+1}-2u^{l}_{i,j}+u^{l}_{i,j-1}}{h^2}.
$$
We use again the so-called forward-going Euler formula for the first derivative in time. In its discretized form we have
$$
u_{t}\approx \frac{u^{l+1}_{i,j}-u^{l}_{i,j}}{\Delta t},
$$
resulting in
$$
u^{l+1}_{i,j}= u^{l}_{i,j} + \alpha\left[u^{l}_{i+1,j}+u^{l}_{i-1,j}+u^{l}_{i,j+1}+u^{l}_{i,j-1}-4u^{l}_{i,j}\right],
$$
where the left hand side, with the solution at the new time step, is the only unknown term, since starting with $t=t_0$, the right hand side is entirely
determined by the boundary and initial conditions. We have $\alpha=\Delta t/h^2$.
This scheme can be implemented using essentially the same approach as we used in Eq. ([eq:explicitpde](#eq:explicitpde)).
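A compact, vectorized sketch of one such two-dimensional time step (our own illustration, keeping the boundary values fixed) reads
```
import numpy as np

def step_2d(u, alpha):
    # One forward-Euler step of the 2+1-dimensional diffusion equation;
    # the boundary values of u are left untouched.
    unew = u.copy()
    unew[1:-1, 1:-1] = (u[1:-1, 1:-1]
                        + alpha*(u[2:, 1:-1] + u[:-2, 1:-1]
                                 + u[1:-1, 2:] + u[1:-1, :-2]
                                 - 4*u[1:-1, 1:-1]))
    return unew

# Example: Gaussian initial condition on the unit square with u = 0 on the boundary
n = 50
h = 1.0/(n+1)
dt = h*h/8.0                      # alpha = 1/8, below the two-dimensional limit alpha <= 1/4
alpha = dt/h**2
xv, yv = np.meshgrid(np.linspace(0, 1, n+2), np.linspace(0, 1, n+2))
u = np.exp(-100*((xv-0.5)**2 + (yv-0.5)**2))
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0
for _ in range(200):
    u = step_2d(u, alpha)
```
The stability analysis of the one-dimensional case carries over; in two dimensions the explicit scheme requires $\alpha\le 1/4$.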
## Laplace's and Poisson's Equations
Laplace's equation reads
$$
\nabla^2 u(\mathbf{x})=u_{xx}+u_{yy}=0.
$$
with possible boundary conditions
$u(x,y) = g(x,y) $ on the border $\delta\Omega$. There is no time-dependence.
We seek a solution in the region $\Omega$ and we choose a quadratic mesh
with equally many steps in both directions. We could choose the grid to be rectangular or following
polar coordinates $r,\theta$ as well. Here we choose equal steps lengths in the $x$ and
the $y$ directions. We set
$$
h=\Delta x = \Delta y = \frac{L}{n+1},
$$
where $L$ is the length of the sides and we have $n+1$ points in both directions.
## Laplace's and Poisson's Equations, discretized version
The discretized version reads
$$
u_{xx}\approx \frac{u(x+h,y)-2u(x,y)+u(x-h,y)}{h^2},
$$
and
$$
u_{yy}\approx \frac{u(x,y+h)-2u(x,y)+u(x,y-h)}{h^2},
$$
which we rewrite as
$$
u_{xx}\approx \frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{h^2},
$$
and
$$
u_{yy}\approx \frac{u_{i,j+1}-2u_{i,j}+u_{i,j-1}}{h^2}.
$$
## Laplace's and Poisson's Equations, final discretized version
Inserting in Laplace's equation we obtain
<!-- Equation labels as ordinary links -->
<div id="eq:laplacescheme"></div>
$$
\begin{equation}
\label{eq:laplacescheme} \tag{17}
u_{i,j}= \frac{1}{4}\left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right].
\end{equation}
$$
This is our final numerical scheme for solving Laplace's equation.
Poisson's equation adds only a minor complication
to the above equation since in this case we have
$$
u_{xx}+u_{yy}=-\rho(x,y),
$$
and we need only to add a discretized version of $\rho(\mathbf{x})$
resulting in
<!-- Equation labels as ordinary links -->
<div id="eq:poissonscheme"></div>
$$
\begin{equation}
\label{eq:poissonscheme} \tag{18}
u_{i,j}= \frac{1}{4}\left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right]
+\frac{h^2}{4}\rho_{i,j}.
\end{equation}
$$
## Laplace's and Poisson's Equations, boundary conditions
The boundary conditions read
$$
u_{i,0} = g_{i,0} \hspace{0.5cm} 0\le i \le n+1,
$$
$$
u_{i,L} = g_{i,L} \hspace{0.5cm} 0\le i \le n+1,
$$
$$
u_{0,j} = g_{0,j} \hspace{0.5cm} 0\le j \le n+1,
$$
and
$$
u_{L,j} = g_{L,j} \hspace{0.5cm} 0\le j \le n+1.
$$
With $n+1$ mesh points in each direction the discretized equations involve the $(n+1)^2$ values $u_{i,j}$, of which only the interior ones are unknown.
## Scheme for solving Laplace's (Poisson's) equation
We rewrite Eq. ([eq:poissonscheme](#eq:poissonscheme))
<!-- Equation labels as ordinary links -->
<div id="eq:poissonrewritten"></div>
$$
\begin{equation}
\label{eq:poissonrewritten} \tag{19}
4u_{i,j}= \left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right]
+h^2\rho_{i,j}=\Delta_{ij}+\tilde{\rho}_{ij},
\end{equation}
$$
where we have defined
$$
\Delta_{ij}= \left[u_{i,j+1}+u_{i,j-1}+u_{i+1,j}+u_{i-1,j}\right],
$$
and
$$
\tilde{\rho}_{ij}=h^2\rho_{i,j}.
$$
## Scheme for solving Laplace's (Poisson's) equation
In order to illustrate how we can transform the last equations into a
linear algebra problem of the type $\mathbf{A}\mathbf{x}=\mathbf{w}$, with
$\mathbf{A}$ a matrix and $\mathbf{x}$ and $\mathbf{w}$ unknown and known
vectors respectively, let us also for the sake of simplicity assume
that the number of points $n=3$. We assume also that $u(x,y) = g(x,y)
$ on the border $\delta\Omega$.
The inner values of the function $u$ are then
given by
$$
4u_{11} -u_{21} -u_{01} - u_{12}- u_{10}=\tilde{\rho}_{11} \nonumber
$$
$$
4u_{12} - u_{02} - u_{22} - u_{13}- u_{11}=\tilde{\rho}_{12} \nonumber
$$
$$
4u_{21} - u_{11} - u_{31} - u_{22}- u_{20}=\tilde{\rho}_{21} \nonumber
$$
$$
4u_{22} - u_{12} - u_{32} - u_{23}- u_{21}=\tilde{\rho}_{22}. \nonumber
$$
## Scheme for solving Laplace's (Poisson's) equation
If we isolate on the left-hand side the unknown quantities $u_{11}$, $u_{12}$, $u_{21}$ and $u_{22}$, that is
the inner points not constrained by the boundary conditions, we can
rewrite the above equations as a matrix $\mathbf{A}$ times an unknown vector $\mathbf{x}$, that is
$$
Ax = b,
$$
or in more detail
$$
\begin{bmatrix} 4&-1 &-1 &0 \\
-1& 4 &0 &-1 \\
-1& 0 &4 &-1 \\
0& -1 &-1 &4 \\
\end{bmatrix}\begin{bmatrix}
u_{11}\\
u_{12}\\
u_{21} \\
u_{22} \\
\end{bmatrix}=\begin{bmatrix}
u_{01}+u_{10}+\tilde{\rho}_{11}\\
u_{13}+u_{02}+\tilde{\rho}_{12}\\
u_{31}+u_{20}+\tilde{\rho}_{21} \\
u_{32}+u_{23}+\tilde{\rho}_{22}\\
\end{bmatrix}.
$$
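For illustration (with zero boundary values and a constant source $\tilde{\rho}_{ij}=1$, numbers chosen only for this example), the small $4\times 4$ system can be solved directly:
```
import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])

# Right-hand side: zero boundary values plus rho_tilde = 1 in each interior point
b = np.ones(4)

u11, u12, u21, u22 = np.linalg.solve(A, b)
print(u11, u12, u21, u22)   # all equal to 0.5 by symmetry
```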
## Scheme for solving Laplace's (Poisson's) equation
The right hand side is constrained by the values at the boundary plus the known function $\tilde{\rho}$.
For a two-dimensional equation it is easy to convince oneself that for larger sets of mesh points,
we will not have more than five nonzero entries in any row of the above matrix. For a problem with $n+1$
mesh points in each direction, our matrix $\mathbf{A}\in {\mathbb{R}}^{(n-1)^2\times (n-1)^2}$ acts on the $(n-1)\times (n-1)$ unknown function
values $u_{ij}$.
This means that, if we fix the endpoints for the two-dimensional case (with a square lattice) at $i(j)=0$
and $i(j)=n$, we have to solve the equations for $1 \le i(j) \le n-1$.
Since the matrix is rather sparse but not on tridiagonal form, elimination methods like the LU decomposition discussed earlier are not very practical. Rather, iterative schemes like the Jacobi or the Gauss-Seidel methods are preferred.
The above matrix is also diagonally dominant and irreducible, which guarantees the convergence of these iterative solvers.
## Scheme for solving Laplace's (Poisson's) equation using Jacobi's iterative method
In setting up for example Jacobi's method, it is useful to rewrite the matrix $\mathbf{A}$ as
$$
\mathbf{A}=\mathbf{D}+\mathbf{U}+\mathbf{L},
$$
with $\mathbf{D}$ being a diagonal matrix with $4$ as the only value, $\mathbf{U}$ is an upper triangular matrix and $\mathbf{L}$
a lower triangular matrix. In our case we have
$$
\mathbf{D}=\begin{bmatrix}4&0 &0 &0 \\
0& 4 &0 &0 \\
0& 0 &4 &0 \\
0& 0 &0 &4 \\
\end{bmatrix},
$$
and
$$
\mathbf{L}=\begin{bmatrix} 0&0 &0 &0 \\
-1& 0 &0 &0 \\
-1& 0 &0 &0 \\
0& -1 &-1 &0 \\
\end{bmatrix} \hspace{1cm} \mathbf{U}= \begin{bmatrix}
0&-1 &-1 &0 \\
0& 0 &0 &-1 \\
0& 0 &0 &-1 \\
0& 0 &0 &0 \\
\end{bmatrix}.
$$
## Scheme for solving Laplace's (Poisson's) equation, with Jacobi's method
We assume now that we have an estimate for the unknown functions $u_{11}$, $u_{12}$, $u_{21}$ and $u_{22}$. We will call this
the zeroth value and label it as
$u^{(0)}_{11}$, $u^{(0)}_{12}$, $u^{(0)}_{21}$ and $u^{(0)}_{22}$. We can then set up an iterative scheme where the next solution
is defined in terms of the previous one as
$$
u^{(1)}_{11} =\frac{1}{4}(b_1+u^{(0)}_{12} +u^{(0)}_{21}) \nonumber
$$
$$
u^{(1)}_{12} =\frac{1}{4}(b_2+u^{(0)}_{11}+u^{(0)}_{22}) \nonumber
$$
$$
u^{(1)}_{21} =\frac{1}{4}(b_3+u^{(0)}_{11}+u^{(0)}_{22}) \nonumber
$$
$$
u^{(1)}_{22}=\frac{1}{4}(b_4+u^{(0)}_{12}+u^{(0)}_{21}), \nonumber
$$
where we have defined the vector
$$
\mathbf{b}= \begin{bmatrix} u_{01}+u_{10}+\tilde{\rho}_{11}\\
u_{13}+u_{02}+\tilde{\rho}_{12}\\
u_{31}+u_{20}+\tilde{\rho}_{21} \\
u_{32}+u_{23}+\tilde{\rho}_{22}\\
\end{bmatrix}.
$$
## Scheme for solving Laplace's (Poisson's) equation, final rewrite
We can rewrite the equations in a more compact form in terms of the matrices $\mathbf{D}$, $\mathbf{L}$ and $\mathbf{U}$ as,
after $r+1$ iterations,
<!-- Equation labels as ordinary links -->
<div id="eq:jacobisolverpoisson"></div>
$$
\begin{equation} \label{eq:jacobisolverpoisson} \tag{20}
\mathbf{x}^{(r+1)}= \mathbf{D}^{-1}\left(\mathbf{b} - (\mathbf{L}+\mathbf{U})\mathbf{x}^{(r)}\right),
\end{equation}
$$
where the unknown functions are now defined in terms of
$$
\mathbf{x}= \begin{bmatrix} u_{11}\\
u_{12}\\
u_{21}\\
u_{22}\\
\end{bmatrix}.
$$
If we wish to implement Gauss-Seidel's algorithm,
the set of equations to solve are then given by
<!-- Equation labels as ordinary links -->
<div id="eq:gausseidelsolverpoisson"></div>
$$
\begin{equation} \label{eq:gausseidelsolverpoisson} \tag{21}
\mathbf{x}^{(r+1)}= (\mathbf{D}+\mathbf{L})^{-1}\left(\mathbf{b} -\mathbf{U}\mathbf{x}^{(r)}\right),
\end{equation}
$$
or alternatively as
$$
\mathbf{x}^{(r+1)}= \mathbf{D}^{-1}\left(\mathbf{b} -\mathbf{L}\mathbf{x}^{(r+1)}-\mathbf{U}\mathbf{x}^{(r)}\right).
$$
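A small sketch of the Jacobi iteration of Eq. ([eq:jacobisolverpoisson](#eq:jacobisolverpoisson)) for the $4\times 4$ example above (again with zero boundary values and $\tilde{\rho}_{ij}=1$, numbers chosen only for illustration) is
```
import numpy as np

A = np.array([[ 4., -1., -1.,  0.],
              [-1.,  4.,  0., -1.],
              [-1.,  0.,  4., -1.],
              [ 0., -1., -1.,  4.]])
b = np.ones(4)                       # zero boundary values, rho_tilde = 1

D = np.diag(np.diag(A))
LU = A - D                           # L + U
x = np.zeros(4)                      # initial guess x^(0)
for _ in range(50):
    x = np.linalg.solve(D, b - LU @ x)

print(x)                             # Jacobi iterate after 50 sweeps
print(np.linalg.solve(A, b))         # direct solution for comparison
```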
## Jacobi Algorithm for solving Laplace's Equation
It is thus fairly straightforward to extend this equation to the
three-dimensional case. Whether we solve Eq. ([eq:laplacescheme](#eq:laplacescheme))
or Eq. ([eq:poissonscheme](#eq:poissonscheme)), the solution strategy remains the same.
We know the values of $u$ at $i=0$ or $i=n+1$ and at $j=0$ or
$j=n+1$ but we cannot start at one of the boundaries and work our way into and
across the system since Eq. ([eq:laplacescheme](#eq:laplacescheme)) requires the knowledge
of $u$ at all of the neighbouring points in order to calculate $u$ at any
given point.
## Jacobi Algorithm for solving Laplace's Equation
The way we solve these equations is based on an iterative scheme based on the Jacobi method or
the Gauss-Seidel method or the relaxation methods.
Implementing Jacobi's method is rather simple. We start with an initial guess
for $u_{i,j}^{(0)}$ where all values are known. To obtain a new solution we
solve Eq. ([eq:laplacescheme](#eq:laplacescheme)) or Eq. ([eq:poissonscheme](#eq:poissonscheme))
in order to obtain a new solution $u_{i,j}^{(1)}$.
Most likely this solution will not be a solution to
Eq. ([eq:laplacescheme](#eq:laplacescheme)). This solution is in turn
used to obtain a new and improved $u_{i,j}^{(2)}$. We continue this process
till we obtain a result which satisfies some specific convergence criterion.
## Jacobi Algorithm for solving Laplace's Equation, the algorithm
Summarized, this algorithm reads
1. Make an initial guess for $u_{i,j}$ at all interior points $(i,j)$ for all $i=1:n$ and $j=1:n$
2. Use Eq. ([eq:laplacescheme](#eq:laplacescheme)) to compute $u^{m}$ at all interior points $(i,j)$. The index $m$ stands for iteration number $m$.
3. Stop if prescribed convergence threshold is reached, otherwise continue to the next step.
4. Update the new value of $u$ for the given iteration
5. Go to step 2
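A minimal Python sketch of steps 2-5, assuming the boundary values are already stored along the edges of the array `u`, could look as follows (this is only an illustration of the algorithm, not the program used later):

```
import numpy as np

def jacobi(u, tol=1.0e-5, max_iter=10000):
    # Steps 2-5 above: sweep over the interior points until the average
    # change per point falls below the prescribed tolerance
    for m in range(max_iter):
        u_old = u.copy()
        u[1:-1,1:-1] = 0.25*(u_old[2:,1:-1] + u_old[:-2,1:-1] +
                             u_old[1:-1,2:] + u_old[1:-1,:-2])
        diff = np.abs(u - u_old).mean()
        if diff < tol:
            break
    return u, m
```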
## Jacobi Algorithm for solving Laplace's Equation, simple example
A simple example may help in understanding this method.
We consider a condensator with parallel
plates separated by a distance $L$, resulting in, for example, the voltage differences
$u(x,0)=200\sin(2\pi x/L)$ and
$u(x,1)=-200\sin(2\pi x/L)$. These are our boundary conditions and we ask:
what is the voltage $u$ between the plates?
To solve this problem numerically we provide below a C++ program
which solves iteratively Eq. ([eq:laplacescheme](#eq:laplacescheme)) using Jacobi's method. Only the part which computes
Eq. ([eq:laplacescheme](#eq:laplacescheme)) is included here.
....
// We define the step size for a square lattice with n+2 points in each
// direction (the boundary points carry the indices 0 and n+1)
double h = (xmax-xmin)/(n+1);
double L = xmax-xmin; // The length of the lattice
// We allocate space for the matrix u and the temporary matrix to
// be updated in every iteration
mat u(n+2, n+2); // using Armadillo to define matrices
mat u_temp(n+2, n+2); // This is the temporary value
u.zeros(); // Zero is also our initial guess for all unknown values
// We need to set up the boundary conditions. Specify for various cases
.....
// The iteration algorithm starts here
iterations = 0;
while( (iterations <= max_iter) && ( diff > 0.00001) ){
u_temp = u; diff = 0.;
for(j = 1; j <= n; j++){
for(l = 1; l <= n; l++){
u(j,l) = 0.25*(u_temp(j+1,l)+u_temp(j-1,l)+
u_temp(j,l+1)+u_temp(j,l-1));
diff += fabs(u_temp(j,l)-u(j,l));
}
}
iterations++;
diff /= pow((n),2.0);
} // end while loop
## Jacobi Algorithm for solving Laplace's Equation, to observe
The important part of the algorithm is applied in the function which
sets up the two-dimensional Laplace equation. There we have a while
statement which tests the difference between the temporary vector and
the solution $u_{i,j}$. Moreover, we have fixed the number of
iterations to a given maximum. We need also to provide a convergence
tolerance. In the above program example we have fixed this to be
$0.00001$. Depending on the type of applications one may have to
change both the number of maximum iterations and the tolerance.
## Python code for solving the two-dimensional Laplace equation
The following Python code sets up and solves the Laplace equation in two dimensions.
```
# Solves the 2d Laplace equation using relaxation method
import numpy, math
def relax(A, maxsteps, convergence):
    """
    Relaxes the matrix A until the sum of the absolute differences
    between the previous step and the next step (divided by the number of
    elements in A) is below convergence, or maxsteps is reached.
    Input:
    - A: matrix to relax
    - maxsteps, convergence: convergence criteria
    Output:
    - A is relaxed when this method returns
    """
    iterations = 0
    diff = convergence + 1
    Nx = A.shape[1]
    Ny = A.shape[0]
    while iterations < maxsteps and diff > convergence:
        # Loop over all *INNER* points and relax
        Atemp = A.copy()
        diff = 0.0
        for y in range(1, Ny-1):
            for x in range(1, Nx-1):
                A[y,x] = 0.25*(Atemp[y,x+1]+Atemp[y,x-1]+Atemp[y+1,x]+Atemp[y-1,x])
                diff += math.fabs(A[y,x] - Atemp[y,x])
        diff /= (Nx*Ny)
        iterations += 1
        print("Iteration #", iterations, ", diff =", diff)
def boundary(A,x,y):
    """
    Set up boundary conditions
    Input:
    - A: Matrix to set boundaries on
    - x: Array where x[i] = hx*i, x[last_element] = Lx
    - y: Equivalent array for y
    Output:
    - A is initialized in-place (when this method returns)
    """
    # Boundaries implemented (condensator with plates at y={0,Ly}, DeltaV = 200):
    # A(x,0)  =  100*sin(2*pi*x/Lx)
    # A(x,Ly) = -100*sin(2*pi*x/Lx)
    # A(0,y)  = 0
    # A(Lx,y) = 0
    Nx = A.shape[1]
    Ny = A.shape[0]
    Lx = x[Nx-1]  # x and y *SHOULD* have the same size!
    Ly = y[Ny-1]
    # Rows correspond to y and columns to x, i.e. A[y,x]
    A[0,:]    =  100*numpy.sin(2*math.pi*x/Lx)
    A[Ny-1,:] = -100*numpy.sin(2*math.pi*x/Lx)
    A[:,0]    = 0.0
    A[:,Nx-1] = 0.0
#Main program
import sys
# Input parameters
Nx = 100
Ny = 100
maxiter = 1000
x = numpy.linspace(0,1,num=Nx+2) #Also include edges
y = numpy.linspace(0,1,num=Ny+2)
A = numpy.zeros((Nx+2,Ny+2))
boundary(A,x,y)
#Remember: as solution "creeps" in from the edges,
#number of steps MUST AT LEAST be equal to
#number of inner meshpoints/2 (unless you have a better
#estimate for the solution than zeros() )
relax(A,maxiter,0.00001)
# To do: add visualization
```
## Jacobi's algorithm extended to the diffusion equation in two dimensions
Let us now implement the implicit scheme and show how we can extend the previous algorithm for solving
Laplace's or Poisson's equations to the diffusion equation as well. As the reader will notice, this simply implies a
slight redefinition of the vector $\mathbf{b}$ defined in Eq. ([eq:jacobisolverpoisson](#eq:jacobisolverpoisson)).
To see this, let us first set up the diffusion in two spatial dimensions, with boundary and initial conditions.
The $2+1$-dimensional diffusion equation (with dimensionless variables) reads for a function
$u=u(x,y,t)$
$$
\frac{\partial u}{\partial t}= D\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}\right).
$$
## Jacobi's algorithm extended to the diffusion equation in two dimensions
We assume that we have a square lattice of length $L$ with equally
many mesh points in the $x$ and $y$ directions. Setting the diffusion
constant $D=1$ and using the shorthand notation
$u_{xx}={\partial^2 u}/{\partial x^2}$ etc for the second
derivatives and $u_t={\partial u}/{\partial t}$ for the time
derivative, we have, with a given set of boundary and initial
conditions,
$$
\begin{array}{cc}u_t= u_{xx}+u_{yy}& x, y\in(0,L), t>0 \\
u(x,y,0) = g(x,y)& x, y\in (0,L) \\
u(0,y,t)=u(L,y,t)=u(x,0,t)=u(x,L,t)=0 & t > 0\\
\end{array}
$$
## Jacobi's algorithm extended to the diffusion equation in two dimensions, discretizing
We discretize again position and time, and use the following approximation for the second derivatives
$$
u_{xx}\approx \frac{u(x+h,y,t)-2u(x,y,t)+u(x-h,y,t)}{h^2},
$$
which we rewrite as, in its discretized version,
$$
u_{xx}\approx \frac{u^{l}_{i+1,j}-2u^{l}_{i,j}+u^{l}_{i-1,j}}{h^2},
$$
where $x_i=x_0+ih$, $y_j=y_0+jh$ and $t_l=t_0+l\Delta t$, with $h=L/(n+1)$ and $\Delta t$ the time step.
## Jacobi's algorithm extended to the diffusion equation in two dimensions, the second derivative
The second derivative with respect to $y$ reads
$$
u_{yy}\approx \frac{u^{l}_{i,j+1}-2u^{l}_{i,j}+u^{l}_{i,j-1}}{h^2}.
$$
We now use the so-called backward Euler formula for the first derivative in time. In its discretized form we have
$$
u_{t}\approx \frac{u^{l}_{i,j}-u^{l-1}_{i,j}}{\Delta t},
$$
resulting in
$$
u^{l}_{i,j}+4\alpha u^{l}_{i,j}- \alpha\left[u^{l}_{i+1,j}+u^{l}_{i-1,j}+u^{l}_{i,j+1}+u^{l}_{i,j-1}\right] = u^{l-1}_{i,j},
$$
where the right hand side is the only known term, since starting with $t=t_0$, the right hand side is entirely
determined by the boundary and initial conditions. We have $\alpha=\Delta t/h^2$.
## Jacobi's algorithm extended to the diffusion equation in two dimensions
For future time steps, only the boundary values are determined
and we need to solve the equations for the interior part in an iterative way similar to what was done for Laplace's or Poisson's equations.
To see this, we rewrite the previous equation as
$$
u^{l}_{i,j}= \frac{1}{1+4\alpha}\left[\alpha(u^{l}_{i+1,j}+u^{l}_{i-1,j}+u^{l}_{i,j+1}+u^{l}_{i,j-1})+u^{l-1}_{i,j}\right],
$$
or in a more compact form as
<!-- Equation labels as ordinary links -->
<div id="eq:implicitdiff2dim"></div>
$$
\begin{equation}
\label{eq:implicitdiff2dim} \tag{22}
u^{l}_{i,j}= \frac{1}{1+4\alpha}\left[\alpha\Delta^l_{ij}+u^{l-1}_{i,j}\right],
\end{equation}
$$
with $\Delta^l_{ij}= \left[u^l_{i,j+1}+u^l_{i,j-1}+u^l_{i+1,j}+u^l_{i-1,j}\right]$.
This equation has essentially the same structure as Eq. ([eq:poissonrewritten](#eq:poissonrewritten)), except that
the function $\rho_{ij}$ is replaced by the solution at a previous time step $l-1$. Furthermore, the diagonal matrix elements
are now given by $1+4\alpha$, while the non-zero non-diagonal matrix elements equal $\alpha$. This matrix is also positive definite, meaning in turn that
iterative schemes like the Jacobi or the Gauss-Seidel methods will converge to the desired solution after a given number of iterations.
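A compact numpy sketch of Eq. (22) for a single implicit time step may make the structure clearer; here `u_prev` holds the previous time level, `alpha` is $\Delta t/h^2$, and homogeneous boundary values are assumed (this is an illustration only, not the program provided below):

```
import numpy as np

def implicit_step(u_prev, alpha, tol=1.0e-8, max_iter=10000):
    # Jacobi iterations for one backward Euler step of the 2D diffusion equation
    u = u_prev.copy()                      # initial guess for the new time level
    for _ in range(max_iter):
        u_old = u.copy()
        delta = (u_old[2:,1:-1] + u_old[:-2,1:-1] +
                 u_old[1:-1,2:] + u_old[1:-1,:-2])
        u[1:-1,1:-1] = (alpha*delta + u_prev[1:-1,1:-1])/(1.0 + 4.0*alpha)
        if np.abs(u - u_old).mean() < tol:
            break
    return u
```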
## [Solving project 1 again but now with Jacobi's method](https://github.com/CompPhysics/ComputationalPhysics/blob/master/doc/Programs/LecturePrograms/programs/PDE/cpp/Jacobi.cpp)
Let us revisit project 1 and the Thomas algorithm for solving a system of tridiagonal matrices for the equation $-u''(x)=f(x)$, this time using Jacobi's iterative method.
// Solves linear equations for simple tridiagonal matrix using the iterative Jacobi method
....
// Begin main program
int main(int argc, char *argv[]){
// missing statements, see code link above
mat A = zeros<mat>(n,n);
// Set up arrays for the simple case
vec b(n); vec x(n);
A(0,1) = -1; x(0) = h; b(0) = hh*f(x(0));
x(n-1) = x(0)+(n-1)*h; b(n-1) = hh*f(x(n-1));
for (int i = 1; i < n-1; i++){
x(i) = x(i-1)+h;
b(i) = hh*f(x(i));
A(i,i-1) = -1.0;
A(i,i+1) = -1.0;
}
A(n-2,n-1) = -1.0; A(n-1,n-2) = -1.0;
// solve Ax = b by iteration with a random starting vector
int maxiter = 100; double diff = 1.0;
double epsilon = 1.0e-10; int iter = 0;
vec SolutionOld = randu<vec>(n);
vec SolutionNew = zeros<vec>(n);
while (iter <= maxiter && diff > epsilon){
SolutionNew = (b -A*SolutionOld)*0.5;
iter++; diff = fabs(sum(SolutionNew-SolutionOld)/n);
SolutionOld = SolutionNew;
}
vec solution = SolutionOld;}
## [Program to solve Jacobi's method in two dimension](https://github.com/CompPhysics/ComputationalPhysics/blob/master/doc/Programs/LecturePrograms/programs/PDE/cpp/diffusion2dim.cpp)
The following program sets up the diffusion equation solver in two spatial dimensions using Jacobi's method. Note that we have skipped a loop over time. This has to be inserted in order to perform the calculations.
/* Simple program for solving the two-dimensional diffusion
equation or Poisson equation using Jacobi's iterative method
Note that this program does not contain a loop over the time
dependence.
*/
#include <iostream>
#include <iomanip>
#include <armadillo>
using namespace std;
using namespace arma;
int JacobiSolver(int, double, double, mat &, mat &, double);
int main(int argc, char * argv[]){
int Npoints = 40;
double ExactSolution;
double dx = 1.0/(Npoints-1);
double dt = 0.25*dx*dx;
double tolerance = 1.0e-14;
mat A = zeros<mat>(Npoints,Npoints);
mat q = zeros<mat>(Npoints,Npoints);
// setting up an additional source term
for(int i = 0; i < Npoints; i++)
for(int j = 0; j < Npoints; j++)
q(i,j) = -2.0*M_PI*M_PI*sin(M_PI*dx*i)*sin(M_PI*dx*j);
int itcount = JacobiSolver(Npoints,dx,dt,A,q,tolerance);
// Testing against exact solution
double sum = 0.0;
for(int i = 0; i < Npoints; i++){
for(int j=0;j < Npoints; j++){
ExactSolution = -sin(M_PI*dx*i)*sin(M_PI*dx*j);
sum += fabs((A(i,j) - ExactSolution));
}
}
cout << setprecision(5) << setiosflags(ios::scientific);
cout << "Jacobi method with error " << sum/Npoints << " in " << itcount << " iterations" << endl;
}
## [The Jacobi solver function](https://github.com/CompPhysics/ComputationalPhysics/blob/master/doc/Programs/LecturePrograms/programs/PDE/cpp/diffusion2dim.cpp)
// Function for setting up the iterative Jacobi solver
int JacobiSolver(int N, double dx, double dt, mat &A, mat &q, double abstol)
{
int MaxIterations = 100000;
mat Aold = zeros<mat>(N,N);
double D = dt/(dx*dx);
for(int i=1; i < N-1; i++)
for(int j=1; j < N-1; j++)
Aold(i,j) = 1.0;
// Boundary Conditions -- all zeros
for(int i=0; i < N; i++){
A(0,i) = 0.0;
A(N-1,i) = 0.0;
A(i,0) = 0.0;
A(i,N-1) = 0.0;
}
// Start the iterative solver
for(int k = 0; k < MaxIterations; k++){
for(int i = 1; i < N-1; i++){
for(int j=1; j < N-1; j++){
A(i,j) = dt*q(i,j) + Aold(i,j) +
D*(Aold(i+1,j) + Aold(i,j+1) - 4.0*Aold(i,j) +
Aold(i-1,j) + Aold(i,j-1));
}
}
double sum = 0.0;
for(int i = 0; i < N;i++){
for(int j = 0; j < N;j++){
sum += (Aold(i,j)-A(i,j))*(Aold(i,j)-A(i,j));
Aold(i,j) = A(i,j);
}
}
if(sqrt (sum) <abstol){
return k;
}
}
cerr << "Jacobi: Maximum Number of Interations Reached Without Convergence\n";
return MaxIterations;
}
<!-- !split -->
## [Parallel Jacobi](https://github.com/CompPhysics/ComputationalPhysics/blob/master/doc/Programs/LecturePrograms/programs/PDE/cpp/MPIdiffusion.cpp)
In order to parallelize the Jacobi method we need to introduce two new **MPI** functions, namely *MPI_Allreduce* and *MPI_Allgatherv*.
Here we present a parallel implementation of the Jacobi method without an explicit link to the diffusion equation. Let us go back to the plain Jacobi method
and implement it in parallel.
// Main program first
#include <mpi.h>
// Omitted statements
int main(int argc, char * argv[]){
int i,j, N = 20;
double **A,*x,*q;
int totalnodes,mynode;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &totalnodes);
MPI_Comm_rank(MPI_COMM_WORLD, &mynode);
if(mynode==0){
// allocate and fill A, x and q here (omitted, see the code link above)
}
ParallelJacobi(mynode,totalnodes,N,A,x,q,1.0e-14);
if(mynode==0){
for(int i = 0; i < N; i++)
cout << x[i] << endl;
}
MPI_Finalize();
}
<!-- !split -->
## [Parallel Jacobi](https://github.com/CompPhysics/ComputationalPhysics/blob/master/doc/Programs/LecturePrograms/programs/PDE/cpp/MPIdiffusion.cpp)
Here follows the parallel implementation of the Jacobi algorithm
int ParallelJacobi(int mynode, int numnodes, int N, double **A, double *x, double *b, double abstol){
int i,j,k,i_global;
int maxit = 100000;
int rows_local,local_offset,last_rows_local,*count,*displacements;
double sum1,sum2,*xold;
double error_sum_local, error_sum_global;
MPI_Status status;
rows_local = (int) floor((double)N/numnodes);
local_offset = mynode*rows_local;
if(mynode == (numnodes-1))
rows_local = N - rows_local*(numnodes-1);
/*Distribute the Matrix and R.H.S. among the processors */
if(mynode == 0){
for(i=1;i<numnodes-1;i++){
for(j=0;j<rows_local;j++)
MPI_Send(A[i*rows_local+j],N,MPI_DOUBLE,i,j,MPI_COMM_WORLD);
MPI_Send(b+i*rows_local,rows_local,MPI_DOUBLE,i,rows_local,
MPI_COMM_WORLD);
}
last_rows_local = N-rows_local*(numnodes-1);
for(j=0;j<last_rows_local;j++)
MPI_Send(A[(numnodes-1)*rows_local+j],N,MPI_DOUBLE,numnodes-1,j,
MPI_COMM_WORLD);
MPI_Send(b+(numnodes-1)*rows_local,last_rows_local,MPI_DOUBLE,numnodes-1,
last_rows_local,MPI_COMM_WORLD);
}
else{
A = CreateMatrix(rows_local,N);
x = new double[rows_local];
b = new double[rows_local];
for(i=0;i<rows_local;i++)
MPI_Recv(A[i],N,MPI_DOUBLE,0,i,MPI_COMM_WORLD,&status);
MPI_Recv(b,rows_local,MPI_DOUBLE,0,rows_local,MPI_COMM_WORLD,&status);
}
xold = new double[N];
count = new int[numnodes];
displacements = new int[numnodes];
//set initial guess to all 1.0
for(i=0; i<N; i++){
xold[i] = 1.0;
}
for(i=0;i<numnodes;i++){
count[i] = (int) floor((double)N/numnodes);
displacements[i] = i*count[i];
}
count[numnodes-1] = N - ((int)floor((double)N/numnodes))*(numnodes-1);
for(k=0; k<maxit; k++){
error_sum_local = 0.0;
for(i = 0; i<rows_local; i++){
i_global = local_offset+i;
sum1 = 0.0; sum2 = 0.0;
for(j=0; j < i_global; j++)
sum1 = sum1 + A[i][j]*xold[j];
for(j=i_global+1; j < N; j++)
sum2 = sum2 + A[i][j]*xold[j];
x[i] = (-sum1 - sum2 + b[i])/A[i][i_global];
error_sum_local += (x[i]-xold[i_global])*(x[i]-xold[i_global]);
}
MPI_Allreduce(&error_sum_local,&error_sum_global,1,MPI_DOUBLE,
MPI_SUM,MPI_COMM_WORLD);
MPI_Allgatherv(x,rows_local,MPI_DOUBLE,xold,count,displacements,
MPI_DOUBLE,MPI_COMM_WORLD);
if(sqrt(error_sum_global)<abstol){
if(mynode == 0){
for(i=0;i<N;i++)
x[i] = xold[i];
}
else{
DestroyMatrix(A,rows_local,N);
delete[] x;
delete[] b;
}
delete[] xold;
delete[] count;
delete[] displacements;
return k;
}
}
cerr << "Jacobi: Maximum Number of Interations Reached Without Convergence\n";
if(mynode == 0){
for(i=0;i<N;i++)
x[i] = xold[i];
}
else{
DestroyMatrix(A,rows_local,N);
delete[] x;
delete[] b;
}
delete[] xold;
delete[] count;
delete[] displacements;
return maxit;
}
<!-- !split -->
## [Parallel Jacobi](https://github.com/CompPhysics/ComputationalPhysics/blob/master/doc/Programs/LecturePrograms/programs/PDE/cpp/OpenMPdiffusion.cpp)
Here follows the parallel implementation of the diffusion equation using OpenMP
/* Simple program for solving the two-dimensional diffusion
equation or Poisson equation using Jacobi's iterative method
Note that this program does not contain a loop over the time
dependence. It uses OpenMP to parallelize
*/
#include <iostream>
#include <iomanip>
#include <armadillo>
#include <omp.h>
using namespace std;
using namespace arma;
int JacobiSolver(int, double, double, mat &, mat &, double);
int main(int argc, char * argv[]){
int Npoints = 100;
double ExactSolution;
double dx = 1.0/(Npoints-1);
double dt = 0.25*dx*dx;
double tolerance = 1.0e-8;
mat A = zeros<mat>(Npoints,Npoints);
mat q = zeros<mat>(Npoints,Npoints);
int thread_num;
omp_set_num_threads(4);
thread_num = omp_get_max_threads ();
cout << " The number of processors available = " << omp_get_num_procs () << endl ;
cout << " The number of threads available = " << thread_num << endl;
// setting up an additional source term
for(int i = 0; i < Npoints; i++)
for(int j = 0; j < Npoints; j++)
q(i,j) = -2.0*M_PI*M_PI*sin(M_PI*dx*i)*sin(M_PI*dx*j);
int itcount = JacobiSolver(Npoints,dx,dt,A,q,tolerance);
// Testing against exact solution
double sum = 0.0;
for(int i = 0; i < Npoints; i++){
for(int j=0;j < Npoints; j++){
ExactSolution = -sin(M_PI*dx*i)*sin(M_PI*dx*j);
sum += fabs((A(i,j) - ExactSolution));
}
}
cout << setprecision(5) << setiosflags(ios::scientific);
cout << "Jacobi error is " << sum/Npoints << " in " << itcount << " iterations" << endl;
}
// Function for setting up the iterative Jacobi solver
int JacobiSolver(int N, double dx, double dt, mat &A, mat &q, double abstol)
{
int MaxIterations = 100000;
double D = dt/(dx*dx);
// initial guess
mat Aold = randu<mat>(N,N);
// Boundary conditions, all zeros
for(int i=0; i < N; i++){
A(0,i) = 0.0;
A(N-1,i) = 0.0;
A(i,0) = 0.0;
A(i,N-1) = 0.0;
}
double sum = 1.0;
int k = 0;
// Start the iterative solver
while (k < MaxIterations && sum > abstol){
int i, j;
sum = 0.0;
// Define parallel region
# pragma omp parallel default(shared) private (i, j) reduction(+:sum)
{
# pragma omp for
for(i = 1; i < N-1; i++){
for(j = 1; j < N-1; j++){
A(i,j) = dt*q(i,j) + Aold(i,j) +
D*(Aold(i+1,j) + Aold(i,j+1) - 4.0*Aold(i,j) +
Aold(i-1,j) + Aold(i,j-1));
}
}
// Share the error/copy loop among the threads as well, to avoid data
// races on Aold and double counting of the differences
# pragma omp for
for(i = 0; i < N;i++){
for(j = 0; j < N;j++){
sum += fabs(Aold(i,j)-A(i,j));
Aold(i,j) = A(i,j);
}
}
} //end parallel region
sum /= (N*N);
k++;
} //end while loop
return k;
}
## Wave Equation in two Dimensions
The $1+1$-dimensional wave equation reads
$$
\frac{\partial^2 u}{\partial x^2}=\frac{\partial^2 u}{\partial t^2},
$$
with $u=u(x,t)$ and we have assumed that we operate with
dimensionless variables. Possible boundary and initial conditions
with $L=1$ are
$$
\begin{array}{cc} u_{xx} = u_{tt}& x\in(0,1), t>0 \\
u(x,0) = g(x)& x\in (0,1) \\
u(0,t)=u(1,t)=0 & t > 0\\
\partial u/\partial t|_{t=0}=0 & x\in (0,1)\\
\end{array} .
$$
## Wave Equation in two Dimensions, discretizing
We discretize again time and position,
$$
u_{xx}\approx \frac{u(x+\Delta x,t)-2u(x,t)+u(x-\Delta x,t)}{\Delta x^2},
$$
and
$$
u_{tt}\approx \frac{u(x,t+\Delta t)-2u(x,t)+u(x,t-\Delta t)}{\Delta t^2},
$$
which we rewrite as
$$
u_{xx}\approx \frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{\Delta x^2},
$$
and
$$
u_{tt}\approx \frac{u_{i,j+1}-2u_{i,j}+u_{i,j-1}}{\Delta t^2},
$$
resulting in
<!-- Equation labels as ordinary links -->
<div id="eq:wavescheme"></div>
$$
\begin{equation}
\label{eq:wavescheme} \tag{23}
u_{i,j+1}=2u_{i,j}-u_{i,j-1}+\frac{\Delta t^2}{\Delta x^2}\left(u_{i+1,j}-2u_{i,j}+u_{i-1,j}\right).
\end{equation}
$$
## Wave Equation in two Dimensions
If we assume that all values at times $t=j$ and $t=j-1$ are known, the only unknown variable is $u_{i,j+1}$ and the last equation yields thus an explicit
scheme for updating this quantity. We have thus an explicit finite difference
scheme for computing the wave function $u$. The only additional complication
in our case is the initial condition given by the first derivative in time,
namely $\partial u/\partial t|_{t=0}=0$.
The discretized version of this first derivative is given by
$$
u_t\approx \frac{u(x_i,t_j+\Delta t)-u(x_i,t_j-\Delta t)}{2\Delta t},
$$
and at $t=0$ it reduces to
$$
u_t\approx \frac{u_{i,+1}-u_{i,-1}}{2\Delta t}=0,
$$
implying that $u_{i,+1}=u_{i,-1}$.
## Wave Equation in two Dimensions
If we insert this condition in Eq. ([eq:wavescheme](#eq:wavescheme)) we arrive at a
special formula for the first time step
<!-- Equation labels as ordinary links -->
<div id="eq:firstwavescheme"></div>
$$
\begin{equation}
\label{eq:firstwavescheme} \tag{24}
u_{i,1}=u_{i,0}+\frac{\Delta t^2}{2\Delta x^2}\left(u_{i+1,0}-2u_{i,0}+u_{i-1,0}\right).
\end{equation}
$$
We need seemingly two different equations, one for the first time step
given by Eq. ([eq:firstwavescheme](#eq:firstwavescheme)) and one for all other time-steps
given by Eq. ([eq:wavescheme](#eq:wavescheme)). However, it suffices to use
Eq. ([eq:wavescheme](#eq:wavescheme)) for all times as long as we
provide $u(i,-1)$ using
$$
u_{i,-1}=u_{i,0}+\frac{\Delta t^2}{2\Delta x^2}\left(u_{i+1,0}-2u_{i,0}+u_{i-1,0}\right),
$$
in our setup of the initial conditions.
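A minimal Python sketch of this strategy, vectorized over the interior points and assuming $u=0$ at both end points with the initial profile `g` sampled on the full grid, is:

```
import numpy as np

def wave_1d(g, alpha, n_steps):
    # Explicit scheme (23) with alpha = (dt/dx)**2; the ghost level u_{i,-1}
    # is built from the initial data as described above
    u = g.copy()                     # u at t = 0, end points included
    u_old = u.copy()
    u_old[1:-1] = u[1:-1] + 0.5*alpha*(u[2:] - 2.0*u[1:-1] + u[:-2])   # u_{i,-1}
    for _ in range(n_steps):
        u_new = np.zeros_like(u)     # end points stay at zero
        u_new[1:-1] = 2.0*u[1:-1] - u_old[1:-1] + alpha*(u[2:] - 2.0*u[1:-1] + u[:-2])
        u_old, u = u, u_new          # shift the time levels
    return u
```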
## Wave Equation in two Dimensions
The situation is rather similar for the $2+1$-dimensional case,
except that we now need to discretize the spatial $y$-coordinate as well.
Our equations will now depend on three variables whose discretized versions
are now
$$
\begin{array}{cc} t_l=l\Delta t& l \ge 0 \\
x_i=i\Delta x& 0 \le i \le n_x\\
y_j=j\Delta y& 0 \le j \le n_y\end{array} ,
$$
and we will let $\Delta x=\Delta y = h$ and $n_x=n_y$ for the sake of
simplicity.
The equation with initial and boundary conditions reads now
$$
\begin{array}{cc} u_{xx}+u_{yy} = u_{tt}& x,y\in(0,1), t>0 \\
u(x,y,0) = g(x,y)& x,y\in (0,1) \\
u(0,y,t)=u(1,y,t)=u(x,0,t)=u(x,1,t)=0 & t > 0\\
\partial u/\partial t|_{t=0}=0 & x,y\in (0,1)\\
\end{array}.
$$
## Wave Equation in two Dimensions
We have now the following discretized partial derivatives
$$
u_{xx}\approx \frac{u_{i+1,j}^l-2u_{i,j}^l+u_{i-1,j}^l}{h^2},
$$
and
$$
u_{yy}\approx \frac{u_{i,j+1}^l-2u_{i,j}^l+u_{i,j-1}^l}{h^2},
$$
and
$$
u_{tt}\approx \frac{u_{i,j}^{l+1}-2u_{i,j}^{l}+u_{i,j}^{l-1}}{\Delta t^2},
$$
which we merge into the discretized $2+1$-dimensional wave equation
as
<!-- Equation labels as ordinary links -->
<div id="eq:21wavescheme"></div>
$$
\begin{equation}
\label{eq:21wavescheme} \tag{25}
u_{i,j}^{l+1}
=2u_{i,j}^{l}-u_{i,j}^{l-1}+\frac{\Delta t^2}{h^2}\left(u_{i+1,j}^l-4u_{i,j}^l+u_{i-1,j}^l+u_{i,j+1}^l+u_{i,j-1}^l\right),
\end{equation}
$$
where again we have an explicit scheme with $u_{i,j}^{l+1}$ as the only
unknown quantity.
## Wave Equation in two Dimensions
It is easy to account for different step lengths for $x$ and $y$.
The initial condition on the time derivative is treated in much the same way
as for the one-dimensional case, except that we now have an additional
index due to the extra spatial dimension, viz., we need to compute
$u_{i,j}^{-1}$ through
$$
u_{i,j}^{-1}=u_{i,j}^0+\frac{\Delta t^2}{2h^2}\left(u_{i+1,j}^0-4u_{i,j}^0+u_{i-1,j}^0+u_{i,j+1}^0+u_{i,j-1}^0\right),
$$
in our setup of the initial conditions.
## Analytical Solution for the two-dimensional wave equation
We develop here the closed-form solution for the $2+1$ dimensional wave equation with the following boundary and initial conditions
$$
\begin{array}{cc} c^2(u_{xx}+u_{yy}) = u_{tt}& x,y\in(0,L), t>0 \\
u(x,y,0) = f(x,y) & x,y\in (0,L) \\
u(0,y,t)=u(L,y,t)=u(x,0,t)=u(x,L,t)=0 & t > 0\\
\partial u/\partial t|_{t=0}= g(x,y) & x,y\in (0,L)\\
\end{array} .
$$
## Analytical Solution for the two-dimensional wave equation, first step
Our first step is to make the ansatz
$$
u(x,y,t) = F(x,y) G(t),
$$
resulting in the equation
$$
FG_{tt}= c^2(F_{xx}G+F_{yy}G),
$$
or
$$
\frac{G_{tt}}{c^2G} = \frac{1}{F}(F_{xx}+F_{yy}) = -\nu^2.
$$
## Analytical Solution for the two-dimensional wave equation,
The lhs and rhs are independent of each other and we obtain two differential equations
$$
F_{xx}+F_{yy}+F\nu^2=0,
$$
and
$$
G_{tt} + Gc^2\nu^2 = G_{tt} + G\lambda^2 = 0,
$$
with $\lambda = c\nu$.
We can in turn make the following ansatz for the $x$ and $y$ dependent part
$$
F(x,y) = H(x)Q(y),
$$
which results in
$$
\frac{1}{H}H_{xx} = -\frac{1}{Q}(Q_{yy}+Q\nu^2)= -\kappa^2.
$$
## Analytical Solution for the two-dimensional wave equation, separation of variables
Since the lhs and rhs are again independent of each other, we can separate the latter equation into two independent
equations, one for $x$ and one for $y$, namely
$$
H_{xx} + \kappa^2H = 0,
$$
and
$$
Q_{yy} + \rho^2Q = 0,
$$
with $\rho^2= \nu^2-\kappa^2$.
## Analytical Solution for the two-dimensional wave equation, separation of variables
The second step is to solve these differential equations, which all have trigonometric functions as solutions, viz.
$$
H(x) = A\cos(\kappa x)+B\sin(\kappa x),
$$
and
$$
Q(y) = C\cos(\rho y)+D\sin(\rho y).
$$
## Analytical Solution for the two-dimensional wave equation, boundary conditions
The boundary conditions require that $F(x,y) = H(x)Q(y)$ is zero at the boundaries, meaning that
$H(0)=H(L)=Q(0)=Q(L)=0$. This yields the solutions
$$
H_m(x) = \sin(\frac{m\pi x}{L}) \hspace{1cm} Q_n(y) = \sin(\frac{n\pi y}{L}),
$$
or
$$
F_{mn}(x,y) = \sin(\frac{m\pi x}{L})\sin(\frac{n\pi y}{L}).
$$
With $\rho^2= \nu^2-\kappa^2$ and $\lambda = c\nu$ we have an eigenspectrum $\lambda=c\sqrt{\kappa^2+\rho^2}$
or $\lambda_{mn}= c\pi/L\sqrt{m^2+n^2}$.
## Analytical Solution for the two-dimensional wave equation, separation of variables and solutions
The solution for $G$ is
$$
G_{mn}(t) = B_{mn}\cos(\lambda_{mn} t)+D_{mn}\sin(\lambda_{mn} t),
$$
with the general solution of the form
$$
u(x,y,t) = \sum_{m,n=1}^{\infty} u_{mn}(x,y,t) = \sum_{m,n=1}^{\infty}F_{mn}(x,y)G_{mn}(t).
$$
## Analytical Solution for the two-dimensional wave equation, final steps
The final step is to determine the coefficients $B_{mn}$ and $D_{mn}$ as Fourier coefficients of the initial conditions.
The equations for these are determined by the initial conditions $u(x,y,0) = f(x,y)$ and
$\partial u/\partial t|_{t=0}= g(x,y)$.
The final expressions are
$$
B_{mn} = \frac{4}{L^2}\int_0^L\int_0^L dxdy\, f(x,y) \sin(\frac{m\pi x}{L})\sin(\frac{n\pi y}{L}),
$$
and
$$
D_{mn} = \frac{4}{\lambda_{mn}L^2}\int_0^L\int_0^L dxdy\, g(x,y) \sin(\frac{m\pi x}{L})\sin(\frac{n\pi y}{L}).
$$
Inserting the particular functional forms of $f(x,y)$ and $g(x,y)$ one obtains the final closed-form expressions.
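As a quick, purely illustrative consistency check, sympy can be used to verify that a single mode $F_{mn}(x,y)G_{mn}(t)$ indeed satisfies the wave equation:

```
import sympy as sp

x, y, t, c, L = sp.symbols('x y t c L', positive=True)
m, n = 2, 3                                  # any positive integers will do
lam = c*sp.pi/L*sp.sqrt(m**2 + n**2)         # lambda_mn
u = sp.sin(m*sp.pi*x/L)*sp.sin(n*sp.pi*y/L)*sp.cos(lam*t)
residual = c**2*(sp.diff(u, x, 2) + sp.diff(u, y, 2)) - sp.diff(u, t, 2)
print(sp.simplify(residual))                 # prints 0
```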
## Python code for solving the two-dimensional wave equation
The following Python code sets up and solves the two-dimensional wave equation using the explicit scheme discussed above.
```
#Program which solves the 2+1-dimensional wave equation by a finite difference scheme
from numpy import *
#Define the grid
N = 31
h = 1.0 / (N-1)
dt = .0005
t_steps = 10000
x,y = meshgrid(linspace(0,1,N),linspace(0,1,N),indexing='ij',sparse=False)  # numpy's meshgrid; 'ij' indexing matches the i,j loops below
alpha = dt**2 / h**2
#Initial conditions with du/dt = 0
u = sin(x*pi)*cos(y*pi-pi/2)
u_old = zeros(u.shape,type(u[0,0]))
for i in range(1,N-1):
    for j in range(1,N-1):
        u_old[i,j] = u[i,j] + (alpha/2)*(u[i+1,j] - 4*u[i,j] + u[i-1,j] + u[i,j+1] + u[i,j-1])
u_new = zeros(u.shape,type(u[0,0]))
#We don't necessarily want to plot every time step. We plot every n'th step where
n = 100
plotnr = 0
#Iteration over time steps
for k in range(t_steps):
    for i in range(1,N-1): #1 - N-2 because we don't want to change the boundaries
        for j in range(1,N-1):
            u_new[i,j] = 2*u[i,j] - u_old[i,j] + alpha*(u[i+1,j] - 4*u[i,j] + u[i-1,j] + u[i,j+1] + u[i,j-1])
    #Prepare for next time step by manipulating pointers
    temp = u_new
    u_new = u_old
    u_old = u
    u = temp
#To do: Make movie
```
```python
import sys
print("Using Python {}.{}.".format(sys.version_info.major, sys.version_info.minor))
```
Using Python 3.9.
## Importing packages
```python
from sympy import *
from scipy.optimize import toms748
from scipy.integrate import solve_ivp
from scipy.integrate import quad
import numpy as np
init_printing()
```
# Function definitions
```python
# Definition of coordinates
x1, x2, xi1, xi2 = symbols('x_1 x_2 xi_1 xi_2')
# Definition of directional variables
n1, n2 = symbols('n_1 n_2')
# Definition of sliding direction (Out of plane)
n = Matrix([[-1], [0], [0]])
```
```python
# Fundamental solution of Laplace's equation and its directional derivative
E = -1/(2*pi) * log(sqrt((x1-xi1)**2 + (x2-xi2)**2))

def ddn(expr):
    NewExpr = (n1/sqrt(n1**2+n2**2))*diff(expr,x1) + (n2/sqrt(n1**2+n2**2))*diff(expr,x2)
    return NewExpr
E
```
```python
# Friction law function symbols and function definition
mu, muo, a, Sr, Vo, b, L, psi = symbols('mu mu_o a S_r V_o b L psi')

def frictionLaw():
    return muo + a*ln(Sr/Vo) + b*ln(Vo*psi/L)

# State variable relations
def AgingLaw():
    return 1 - Sr*psi/L

def RuinaLaw():
    return 1 - Sr*psi/L
mu = frictionLaw()
mu
```
```python
# BEM symbols
S, Vp, t, tau, Theta, sigmaN, u, eta= symbols('S V_p t tau Theta sigma_n u eta' )
ThetaA = symbols('Theta_a')
```
# Implementation Aspects
$\int_{\Gamma_F}E\frac{\partial u}{\partial n}dS +
\int_{\Gamma_D}E\frac{\partial u}{\partial n}dS +
{\int_{\Gamma_N}E\frac{\partial u}{\partial n}dS} =
\frac{1}{2}u(x)+
\int_{\Gamma_F}u\frac{\partial E}{\partial n_{\xi}}dS +
\int_{\Gamma_D}u\frac{\partial E}{\partial n_{\xi}}dS +
\int_{\Gamma_N}u\frac{\partial E}{\partial n_{\xi}}dS$
Taking out the terms that become zero, we obtain
$\int_{\Gamma_F}E\frac{\partial u}{\partial n}dS +
\int_{\Gamma_D}E\frac{\partial u}{\partial n}dS
=
\frac{1}{2}u(x) +
\int_{\Gamma_F}u\frac{\partial E}{\partial n_{\xi}}dS$
```python
P1, P2, P3 = symbols('\int_{\Gamma_F}EdS \int_{\Gamma_D}EdS \int_{\Gamma_N}EdS')
Wf, Wa = symbols('W_f W_a')
P1, P2, P3, Wf, Wa
```
```python
Wa = 10
x2 = 12
```
```python
# On-fault slip
def integrandP1(Chi, wa, xo):
    # numpy functions keep the integrand purely numerical for scipy.integrate.quad
    return np.log(np.abs(xo - wa*Chi))/Chi**2

def ImproperInt(wa, xo):
    return -abs(wa)/(2*np.pi)*quad(integrandP1, 1, np.inf, args=(wa, xo))[0]

# On plate boundary
def integrandP2(Chi, xo, wf, h):
    return np.log(np.abs((xo - wf) - h*(Chi+1)/2))

def IntGaussianQuad(xo, wf, h):
    if xo != h/2 + wf:
        return -wf/(4.0*np.pi)*quad(integrandP2, -1, 1, args=(xo, wf, h))[0]
```
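As a quick sanity check, the two helpers above can be evaluated for a few arbitrary sample parameters; the numbers below are illustrative only and are not values from the paper.

```python
# Illustrative calls with arbitrary sample values for wa, wf, h and xo
print(ImproperInt(wa=10.0, xo=5.0))
print(IntGaussianQuad(xo=3.0, wf=1.0, h=0.5))
```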
# Putting it all together
```python
# Definition of mapping matrix from quantities to BEM
M = Matrix([[1, 0], [0, 0], [0, 0], [0, 1]])
M, M.T
```
```python
# Putting it all together
S = 2 * u + Vp * t
tau = M.T*Theta
#Fric = tau + mu * sigmaN + eta* V
tau
```
# Initialization step
```python
u = ln(abs(x2))
u
```
```python
Theta = ThetaA*abs(a)**2/x2**2
Theta
```
```python
```
# Linear models with CNN features
```python
# Rather than importing everything manually, we'll make things easy
# and load them all in utils.py, and just import them from there.
%matplotlib inline
import utils; reload(utils)
from utils import *
```
## Introduction
We need to find a way to convert the imagenet predictions to a probability of being a cat or a dog, since that is what the Kaggle competition requires us to submit. We could use the imagenet hierarchy to download a list of all the imagenet categories in each of the dog and cat groups, and could then solve our problem in various ways, such as:
- Finding the largest probability that's either a cat or a dog, and using that label
- Averaging the probability of all the cat categories and comparing it to the average of all the dog categories.
But these approaches have some downsides:
- They require manual coding for something that we should be able to learn from the data
- They ignore information available in the predictions; for instance, if the models predicts that there is a bone in the image, it's more likely to be a dog than a cat.
A very simple solution to both of these problems is to learn a linear model that is trained using the 1,000 predictions from the imagenet model for each image as input, and the dog/cat label as target.
```python
%matplotlib inline
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
import scipy
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import confusion_matrix
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
import utils; reload(utils)
from utils import plots, get_batches, plot_confusion_matrix, get_data
```
```python
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential
from keras.layers import Input
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
```
## Linear models in keras
It turns out that each of the Dense() layers is just a *linear model*, followed by a simple *activation function*. We'll learn about the activation function later - first, let's review how linear models work.
A linear model is (as I'm sure you know) simply a model where each row is calculated as *sum(row * weights)*, where *weights* needs to be learnt from the data, and will be the same for every row. For example, let's create some data that we know is linearly related:
```python
x = random((30,2))
y = np.dot(x, [2., 3.]) + 1.
```
```python
x[:5]
```
```python
y[:5]
```
We can use keras to create a simple linear model (*Dense()* - with no activation - in Keras) and optimize it using SGD to minimize mean squared error (*mse*):
```python
lm = Sequential([ Dense(1, input_shape=(2,)) ])
lm.compile(optimizer=SGD(lr=0.1), loss='mse')
```
(See the *Optim Tutorial* notebook and associated Excel spreadsheet to learn all about SGD and related optimization algorithms.)
The lm model now contains (randomly initialized) internal weights, which we can use to evaluate the loss function (MSE) before training.
```python
lm.evaluate(x, y, verbose=0)
```
```python
lm.fit(x, y, nb_epoch=5, batch_size=1)
```
```python
lm.evaluate(x, y, verbose=0)
```
And, of course, we can also take a look at the weights - after fitting, we should see that they are close to the weights we used to calculate y (2.0, 3.0, and 1.0).
```python
lm.get_weights()
```
## Train linear model on predictions
Using a Dense() layer in this way, we can easily convert the 1,000 predictions given by our model into a probability of dog vs cat--simply train a linear model to take the 1,000 predictions as input, and return dog or cat as output, learning from the Kaggle data. This should be easier and more accurate than manually creating a map from imagenet categories to one dog/cat category.
### Training the model
We start with some basic config steps. We copy a small amount of our data into a 'sample' directory, with the exact same structure as our 'train' directory--this is *always* a good idea in *all* machine learning, since we should do all of our initial testing using a dataset small enough that we never have to wait for it.
```python
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
```
We will process as many images at a time as our graphics card allows. This is a case of trial and error to find the max batch size - the largest size that doesn't give an out of memory error.
```python
batch_size=100
#batch_size=4
```
We need to start with our VGG 16 model, since we'll be using its predictions and features.
```python
from vgg16 import Vgg16
vgg = Vgg16()
model = vgg.model
```
Our overall approach here will be:
1. Get the true labels for every image
2. Get the 1,000 imagenet category predictions for every image
3. Feed these predictions as input to a simple linear model.
Let's start by grabbing training and validation batches.
```python
# Use batch size of 1 since we're just doing preprocessing on the CPU
val_batches = get_batches(path+'valid', shuffle=False, batch_size=1)
batches = get_batches(path+'train', shuffle=False, batch_size=1)
```
Loading and resizing the images every time we want to use them isn't necessary - instead we should save the processed arrays. By far the fastest way to save and load numpy arrays is using bcolz. This also compresses the arrays, so we save disk space. Here are the functions we'll use to save and load using bcolz.
```python
import bcolz
def save_array(fname, arr): c=bcolz.carray(arr, rootdir=fname, mode='w'); c.flush()
def load_array(fname): return bcolz.open(fname)[:]
```
We have provided a simple function that joins the arrays from all the batches - let's use this to grab the training and validation data:
```python
val_data = get_data(path+'valid')
```
```python
trn_data = get_data(path+'train')
```
```python
trn_data.shape
```
```python
save_array(model_path+ 'train_data.bc', trn_data)
save_array(model_path + 'valid_data.bc', val_data)
```
We can load our training and validation data later without recalculating them:
```python
trn_data = load_array(model_path+'train_data.bc')
val_data = load_array(model_path+'valid_data.bc')
```
```python
val_data.shape
```
Keras returns *classes* as a single column, so we convert to one hot encoding
```python
def onehot(x): return np.array(OneHotEncoder().fit_transform(x.reshape(-1,1)).todense())
```
```python
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
```
```python
trn_labels.shape
```
```python
trn_classes[:4]
```
```python
trn_labels[:4]
```
...and their 1,000 imagenet probabilties from VGG16--these will be the *features* for our linear model:
```python
trn_features = model.predict(trn_data, batch_size=batch_size)
val_features = model.predict(val_data, batch_size=batch_size)
```
```python
trn_features.shape
```
```python
save_array(model_path+ 'train_lastlayer_features.bc', trn_features)
save_array(model_path + 'valid_lastlayer_features.bc', val_features)
```
We can load our training and validation features later without recalculating them:
```python
trn_features = load_array(model_path+'train_lastlayer_features.bc')
val_features = load_array(model_path+'valid_lastlayer_features.bc')
```
Now we can define our linear model, just like we did earlier:
```python
# 1000 inputs, since that's the saved features, and 2 outputs, for dog and cat
lm = Sequential([ Dense(2, activation='softmax', input_shape=(1000,)) ])
lm.compile(optimizer=RMSprop(lr=0.1), loss='categorical_crossentropy', metrics=['accuracy'])
```
We're ready to fit the model!
```python
batch_size=64
```
```python
batch_size=4
```
```python
lm.fit(trn_features, trn_labels, nb_epoch=3, batch_size=batch_size,
validation_data=(val_features, val_labels))
```
```python
lm.summary()
```
### Viewing model prediction examples
Keras' *fit()* function conveniently shows us the value of the loss function, and the accuracy, after every epoch ("*epoch*" refers to one full run through all training examples). The most important metrics for us to look at are for the validation set, since we want to check for over-fitting.
- **Tip**: with our first model we should try to overfit before we start worrying about how to handle that - there's no point even thinking about regularization, data augmentation, etc if you're still under-fitting! (We'll be looking at these techniques shortly).
As well as looking at the overall metrics, it's also a good idea to look at examples of each of:
1. A few correct labels at random
2. A few incorrect labels at random
3. The most correct labels of each class (ie those with highest probability that are correct)
4. The most incorrect labels of each class (ie those with highest probability that are incorrect)
5. The most uncertain labels (ie those with probability closest to 0.5).
Let's see what, if anything, we can learn from these (in general, these are particularly useful for debugging problems in the model; since this model is so simple, there may not be too much to learn at this stage.)
Calculate predictions on validation set, so we can find correct and incorrect examples:
```python
# We want both the classes...
preds = lm.predict_classes(val_features, batch_size=batch_size)
# ...and the probabilities of being a cat
probs = lm.predict_proba(val_features, batch_size=batch_size)[:,0]
probs[:8]
```
```python
preds[:8]
```
Get the filenames for the validation set, so we can view images:
```python
filenames = val_batches.filenames
```
```python
# Number of images to view for each visualization task
n_view = 4
```
Helper function to plot images by index in the validation set:
```python
def plots_idx(idx, titles=None):
    plots([image.load_img(path + 'valid/' + filenames[i]) for i in idx], titles=titles)
```
```python
#1. A few correct labels at random
correct = np.where(preds==val_labels[:,1])[0]
idx = permutation(correct)[:n_view]
plots_idx(idx, probs[idx])
```
```python
#2. A few incorrect labels at random
incorrect = np.where(preds!=val_labels[:,1])[0]
idx = permutation(incorrect)[:n_view]
plots_idx(idx, probs[idx])
```
```python
#3. The images we most confident were cats, and are actually cats
correct_cats = np.where((preds==0) & (preds==val_labels[:,1]))[0]
most_correct_cats = np.argsort(probs[correct_cats])[::-1][:n_view]
plots_idx(correct_cats[most_correct_cats], probs[correct_cats][most_correct_cats])
```
```python
# as above, but dogs
correct_dogs = np.where((preds==1) & (preds==val_labels[:,1]))[0]
most_correct_dogs = np.argsort(probs[correct_dogs])[:n_view]
plots_idx(correct_dogs[most_correct_dogs], 1-probs[correct_dogs][most_correct_dogs])
```
```python
#3. The images we were most confident were cats, but are actually dogs
incorrect_cats = np.where((preds==0) & (preds!=val_labels[:,1]))[0]
most_incorrect_cats = np.argsort(probs[incorrect_cats])[::-1][:n_view]
plots_idx(incorrect_cats[most_incorrect_cats], probs[incorrect_cats][most_incorrect_cats])
```
```python
#3. The images we were most confident were dogs, but are actually cats
incorrect_dogs = np.where((preds==1) & (preds!=val_labels[:,1]))[0]
most_incorrect_dogs = np.argsort(probs[incorrect_dogs])[:n_view]
plots_idx(incorrect_dogs[most_incorrect_dogs], 1-probs[incorrect_dogs][most_incorrect_dogs])
```
```python
#5. The most uncertain labels (ie those with probability closest to 0.5).
most_uncertain = np.argsort(np.abs(probs-0.5))
plots_idx(most_uncertain[:n_view], probs[most_uncertain])
```
Perhaps the most common way to analyze the result of a classification model is to use a [confusion matrix](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/). Scikit-learn has a convenient function we can use for this purpose:
```python
cm = confusion_matrix(val_classes, preds)
```
We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for dependent variables with a larger number of categories).
```python
plot_confusion_matrix(cm, val_batches.class_indices)
```
### About activation functions
Do you remember how we defined our linear model? Here it is again for reference:
```python
lm = Sequential([ Dense(2, activation='softmax', input_shape=(1000,)) ])
```
And do you remember the definition of a fully connected layer in the original VGG?:
```python
model.add(Dense(4096, activation='relu'))
```
You might be wondering, what's going on with that *activation* parameter? Adding an 'activation' parameter to a layer in Keras causes an additional function to be called after the layer is calculated. You'll recall that we had no such parameter in our most basic linear model at the start of this lesson - that's because a simple linear model has no *activation function*. But nearly all deep model layers have an activation function - specifically, a *non-linear* activation function, such as tanh, sigmoid (```1/(1+exp(x))```), or relu (```max(0,x)```, called the *rectified linear* function). Why?
The reason for this is that if you stack purely linear layers on top of each other, then you just end up with a linear layer! For instance, if your first layer was ```2*x```, and your second was ```-2*x```, then the combination is: ```-2*(2*x) = -4*x```. If that's all we were able to do with deep learning, it wouldn't be very deep! But what if we added a relu activation after our first layer? Then the combination would be: ```-2 * max(0, 2*x)```. As you can see, that does not simplify to just a linear function like the previous example--and indeed we can stack as many of these on top of each other as we wish, to create arbitrarily complex functions.
And why would we want to do that? Because it turns out that such a stack of linear functions and non-linear activations can approximate any other function as closely as we want. So we can **use it to model anything**! This extraordinary insight is known as the *universal approximation theorem*. For a visual understanding of how and why this works, I strongly recommend you read Michael Nielsen's [excellent interactive visual tutorial](http://neuralnetworksanddeeplearning.com/chap4.html).
The last layer generally needs a different activation function to the other layers--because we want to encourage the last layer's output to be of an appropriate form for our particular problem. For instance, if our output is a one hot encoded categorical variable, we want our final layer's activations to add to one (so they can be treated as probabilities) and to have generally a single activation much higher than the rest (since with one hot encoding we have just a single 'one', and all other target outputs are zero). Our classification problems will always have this form, so we will introduce the activation function that has these properties: the *softmax* function. Softmax is defined as (for the i'th output activation): ```exp(x[i]) / sum(exp(x))```.
I suggest you try playing with that function in a spreadsheet to get a sense of how it behaves.
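Alternatively, a few lines of numpy give the same intuition; this snippet is only an illustration and is not used elsewhere in the lesson.

```
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())    # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 5.0])))   # outputs sum to 1, largest input dominates
```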
We will see other activation functions later in this course - but relu (and minor variations) for intermediate layers and softmax for output layers will be by far the most common.
# Modifying the model
## Retrain last layer's linear model
Since the original VGG16 network's last layer is Dense (i.e. a linear model) it seems a little odd that we are adding an additional linear model on top of it. This is especially true since the last layer had a softmax activation, which is an odd choice for an intermediate layer--and by adding an extra layer on top of it, we have made it an intermediate layer. What if we just removed the original final layer and replaced it with one that we train for the purpose of distinguishing cats and dogs? It turns out that this is a good idea - as we'll see!
We start by removing the last layer, and telling Keras that we want to fix the weights in all the other layers (since we aren't looking to learn new parameters for those other layers).
```python
vgg.model.summary()
```
```python
model.pop()
for layer in model.layers: layer.trainable=False
```
**Careful!** Now that we've modified the definition of *model*, be careful not to rerun any code in the previous sections, without first recreating the model from scratch! (Yes, I made that mistake myself, which is why I'm warning you about it now...)
Now we're ready to add our new final layer...
```python
model.add(Dense(2, activation='softmax'))
```
```python
??vgg.finetune
```
...and compile our updated model, and set up our batches to use the preprocessed images (note that now we will also *shuffle* the training batches, to add more randomness when using multiple epochs):
```python
gen=image.ImageDataGenerator()
batches = gen.flow(trn_data, trn_labels, batch_size=batch_size, shuffle=True)
val_batches = gen.flow(val_data, val_labels, batch_size=batch_size, shuffle=False)
```
We'll define a simple function for fitting models, just to save a little typing...
```python
def fit_model(model, batches, val_batches, nb_epoch=1):
    model.fit_generator(batches, samples_per_epoch=batches.N, nb_epoch=nb_epoch,
                        validation_data=val_batches, nb_val_samples=val_batches.N)
```
...and now we can use it to train the last layer of our model!
(It runs quite slowly, since it still has to calculate all the previous layers in order to know what input to pass to the new final layer. We could precalculate the output of the penultimate layer, like we did for the final layer earlier - but since we're only likely to want one or two iterations, it's easier to follow this alternative approach.)
```python
opt = RMSprop(lr=0.1)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
```
```python
fit_model(model, batches, val_batches, nb_epoch=2)
```
Before moving on, go back and look at how little code we had to write in this section to finetune the model. Because this is such an important and common operation, keras is set up to make it as easy as possible. We didn't even have to use any external helper functions in this section.
It's a good idea to save weights of all your models, so you can re-use them later. Be sure to note the git log number of your model when keeping a research journal of your results.
```python
model.save_weights(model_path+'finetune1.h5')
```
```python
model.load_weights(model_path+'finetune1.h5')
```
```python
model.evaluate(val_data, val_labels)
```
We can look at the earlier prediction examples visualizations by redefining *probs* and *preds* and re-using our earlier code.
```python
preds = model.predict_classes(val_data, batch_size=batch_size)
probs = model.predict_proba(val_data, batch_size=batch_size)[:,0]
probs[:8]
```
```python
cm = confusion_matrix(val_classes, preds)
```
```python
plot_confusion_matrix(cm, {'cat':0, 'dog':1})
```
## Retraining more layers
Now that we've fine-tuned the new final layer, can we, and should we, fine-tune *all* the dense layers? The answer to both questions, it turns out, is: yes! Let's start with the "can we" question...
### An introduction to back-propagation
The key to training multiple layers of a model, rather than just one, lies in a technique called "back-propagation" (or *backprop* to its friends). Backprop is one of the many words in deep learning parlance that is creating a new word for something that already exists - in this case, backprop simply refers to calculating gradients using the *chain rule*. (But we will still introduce the deep learning terms during this course, since it's important to know them when reading about or discussing deep learning.)
As you (hopefully!) remember from high school, the chain rule is how you calculate the gradient of a "function of a function"--something of the form *f(u), where u=g(x)*. For instance, let's say your function is ```pow((2*x), 2)```. Then u is ```2*x```, and f(u) is ```power(u, 2)```. The chain rule tells us that the derivative of this is simply the product of the derivatives of f() and g(). Using *f'(x)* to refer to the derivative, we can say that: ```f'(x) = f'(u) * g'(x) = 2*u * 2 = 2*(2*x) * 2 = 8*x```.
Let's check our calculation:
```python
# sympy lets us do symbolic differentiation (and much more!) in python
import sympy as sp
# we have to define our variables
x = sp.var('x')
# then we can request the derivative of any expression of that variable
pow(2*x,2).diff()
```
The key insight is that the stacking of linear functions and non-linear activations we learnt about in the last section is simply defining a function of functions (of functions, of functions...). Each layer is taking the output of the previous layer's function, and using it as input into its function. Therefore, we can calculate the derivative at any layer by simply multiplying the gradients of that layer and all of its following layers together! This use of the chain rule to allow us to rapidly calculate the derivatives of our model at any layer is referred to as *back propagation*.
The good news is that you'll never have to worry about the details of this yourself, since libraries like Theano and Tensorflow (and therefore wrappers like Keras) provide *automatic differentiation* (or *AD*). ***TODO***
### Training multiple layers in Keras
The code below will work on any model that contains dense layers; it's not just for this VGG model.
NB: Don't skip the step of fine-tuning just the final layer first, since otherwise you'll have one layer with random weights, which will cause the other layers to quickly move a long way from their optimized imagenet weights.
```python
layers = model.layers
# Get the index of the first dense layer...
first_dense_idx = [index for index,layer in enumerate(layers) if type(layer) is Dense][0]
# ...and set this and all subsequent layers to trainable
for layer in layers[first_dense_idx:]: layer.trainable=True
```
Since we haven't changed our architecture, there's no need to re-compile the model - instead, we just set the learning rate. Since we're training more layers, and since we've already optimized the last layer, we should use a lower learning rate than previously.
```python
K.set_value(opt.lr, 0.01)
fit_model(model, batches, val_batches, 3)
```
This is an extraordinarily powerful 5 lines of code. We have fine-tuned all of our dense layers to be optimized for our specific data set. This kind of technique has only become accessible in the last year or two - and we can already do it in just 5 lines of python!
```python
model.save_weights(model_path+'finetune2.h5')
```
There's generally little room for improvement in training the convolutional layers, if you're using the model on natural images (as we are). However, there's no harm trying a few of the later conv layers, since it may give a slight improvement, and can't hurt (and we can always load the previous weights if the accuracy decreases).
```python
for layer in layers[12:]: layer.trainable=True
K.set_value(opt.lr, 0.001)
```
```python
fit_model(model, batches, val_batches, 4)
```
```python
model.save_weights(model_path+'finetune3.h5')
```
You can always load the weights later and use the model to do whatever you need:
```python
model.load_weights(model_path+'finetune2.h5')
model.evaluate_generator(get_batches('valid', gen, False, batch_size*2), val_batches.N)
```
```python
```
[Open this notebook in Google Colab](https://colab.research.google.com/github/Alro10/PyTorch1.xTutorials/blob/master/04-Neural-Network/04_NeuralNets_mnist.ipynb)
# Neural Networks
This is a tutorial for using shallow neural networks (NNets). The ReLU activation, a key part of the network architecture, is defined as:
\begin{equation}
f(x)=x^{+}=\max (0, x)
\end{equation}
## Get dataset
```
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
# Device configuration (use GPU or CPU)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Hyper-parameters
input_size = 784
hidden_size = 500
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.001
# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
train=False,
transform=transforms.ToTensor())
# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
```
## Feedforward Network
```
# Fully connected neural network with one hidden layer
class NeuralNet(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(hidden_size, num_classes)
def forward(self, x):
# Now it only takes a call to the layers to make predictions
out = self.fc1(x)
out = self.relu(out)
out = self.fc2(out)
return out
```
## Training step
```
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Move tensors to the configured device
images = images.reshape(-1, 28*28).to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 100 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
```
Epoch [1/5], Step [100/600], Loss: 0.2915
Epoch [1/5], Step [200/600], Loss: 0.3362
Epoch [1/5], Step [300/600], Loss: 0.1498
Epoch [1/5], Step [400/600], Loss: 0.1193
Epoch [1/5], Step [500/600], Loss: 0.0917
Epoch [1/5], Step [600/600], Loss: 0.1291
Epoch [2/5], Step [100/600], Loss: 0.0539
Epoch [2/5], Step [200/600], Loss: 0.1187
Epoch [2/5], Step [300/600], Loss: 0.2009
Epoch [2/5], Step [400/600], Loss: 0.0779
Epoch [2/5], Step [500/600], Loss: 0.0737
Epoch [2/5], Step [600/600], Loss: 0.0289
Epoch [3/5], Step [100/600], Loss: 0.0519
Epoch [3/5], Step [200/600], Loss: 0.1052
Epoch [3/5], Step [300/600], Loss: 0.0256
Epoch [3/5], Step [400/600], Loss: 0.0246
Epoch [3/5], Step [500/600], Loss: 0.0307
Epoch [3/5], Step [600/600], Loss: 0.0552
Epoch [4/5], Step [100/600], Loss: 0.0579
Epoch [4/5], Step [200/600], Loss: 0.0238
Epoch [4/5], Step [300/600], Loss: 0.0778
Epoch [4/5], Step [400/600], Loss: 0.0621
Epoch [4/5], Step [500/600], Loss: 0.0135
Epoch [4/5], Step [600/600], Loss: 0.0564
Epoch [5/5], Step [100/600], Loss: 0.0463
Epoch [5/5], Step [200/600], Loss: 0.0198
Epoch [5/5], Step [300/600], Loss: 0.0471
Epoch [5/5], Step [400/600], Loss: 0.0246
Epoch [5/5], Step [500/600], Loss: 0.0140
Epoch [5/5], Step [600/600], Loss: 0.0820
## Accuracy
```
# Test the model
# In test phase, we don't need to compute gradients (for memory efficiency)
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.reshape(-1, 28*28).to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))
```
Accuracy of the network on the 10000 test images: 97.73 %
## Save model
```
# Save the model checkpoint
torch.save(model.state_dict(), 'model_net.ckpt')
```
```
!ls
```
model_net.ckpt sample_data
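To reuse the saved checkpoint later, a minimal sketch (assuming the same `NeuralNet` class and hyper-parameters defined above):

```
# Rebuild the architecture, then restore the trained weights from the checkpoint
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
model.load_state_dict(torch.load('model_net.ckpt'))
model.eval()  # switch to evaluation mode before running inference
```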
| 27d4dc87aa2a88c7628ba4b984565f2baca27e8a | 10,698 | ipynb | Jupyter Notebook | lesson04-Neural-Network/04_NeuralNets_mnist.ipynb | Alro10/PyTorch1.0Tutorials | f37ac6e4ed877a0e8f69d986db3a18c1ba571975 | [
"MIT"
] | 1 | 2019-08-16T01:40:16.000Z | 2019-08-16T01:40:16.000Z | lesson04-Neural-Network/04_NeuralNets_mnist.ipynb | Alro10/PyTorch1.0Tutorials | f37ac6e4ed877a0e8f69d986db3a18c1ba571975 | [
"MIT"
] | null | null | null | lesson04-Neural-Network/04_NeuralNets_mnist.ipynb | Alro10/PyTorch1.0Tutorials | f37ac6e4ed877a0e8f69d986db3a18c1ba571975 | [
"MIT"
] | null | null | null | 32.222892 | 262 | 0.439615 | true | 1,501 | Qwen/Qwen-72B | 1. YES
2. YES | 0.839734 | 0.640636 | 0.537964 | __label__yue_Hant | 0.24917 | 0.088199 |
# Time dependent tensile response
```python
%matplotlib widget
import matplotlib.pylab as plt
from bmcs_beam.tension.time_dependent_cracking import TimeDependentCracking
```
```python
import sympy as sp
sp.init_printing()
import numpy as np
```
# Single material point
## Time dependent function
```python
TimeDependentCracking(T_prime_0 = 100).interact()
```
### Time-dependent temperature evolution function
Find a suitable continuous function that can represent the temperature evolution during hydration. Currently, a Weibull-type function has been chosen and transformed such that the peak value and the corresponding time can be specified as parameters.
```python
t = sp.symbols('t', nonnegative=True)
```
```python
T_m = sp.Symbol("T_m", positive = True)
T_s = sp.Symbol("T_s", positive = True)
```
```python
omega_fn = 1 - sp.exp(-(t/T_s)**T_m)
```
```python
T_prime_0 = sp.Symbol("T_prime_0", positive = True)
```
```python
T_t = (1 - omega_fn) * T_prime_0 * t
```
**Shape functions for temperature evolution**
```python
T_t
```
```python
T_prime_t = sp.simplify(T_t.diff(t))
T_prime_t
```
**Transform the shape function**
to be able to explicitly specify the maximum temperature and corresponding time
```python
t_argmax_T = sp.Symbol("t_argmax_T")
T_s_sol = sp.solve( sp.Eq( sp.solve(T_prime_t,t)[0], t_argmax_T ), T_s)[0]
```
```python
T_max = sp.Symbol("T_max", positive=True)
T_prime_0_sol = sp.solve(sp.Eq(T_t.subs(T_s, T_s_sol).subs(t, t_argmax_T), T_max),
T_prime_0)[0]
```
```python
T_max_t = sp.simplify( T_t.subs({T_s: T_s_sol, T_prime_0: T_prime_0_sol}) )
T_max_t
```
```python
get_T_t = sp.lambdify((t, T_prime_0, T_m, T_s), T_t)
get_T_max_t = sp.lambdify((t, T_max, t_argmax_T, T_m), T_max_t)
data = dict(T_prime_0=100, T_m=1, T_s=1)
```
```python
_, ax = plt.subplots(1,1)
t_range = np.linspace(0,10,100)
plt.plot(t_range, get_T_t(t_range, **data));
plt.plot(t_range, get_T_max_t(t_range, 37, 1., 2));
```
### Time dependent compressive strength
**From Eurocode 2:**
$s$ captures the effect of cement type on the time evolution of the compressive strength;
it takes the value $s = 0.2$ for class R (rapid), $s = 0.25$ for class N (normal), and $s = 0.38$ for class S (slow).
```python
s = sp.Symbol("s", positive=True)
```
```python
beta_cc = sp.exp( s * (1 - sp.sqrt(28/t)))
beta_cc
```
```python
get_beta_cc = sp.lambdify((t, s), beta_cc )
```
```python
_, ax = plt.subplots(1,1)
plt.plot(t_range, get_beta_cc(t_range, 0.2))
```
<lambdifygenerated-17>:2: RuntimeWarning: divide by zero encountered in true_divide
return (exp(s*(1 - 2*sqrt(7)/sqrt(t))))
[<matplotlib.lines.Line2D at 0x7f3119d6e490>]
### Compressive strength
```python
f_cm_28 = sp.Symbol("f_cm28", positive=True)
f_cm_28
```
```python
f_cm_t = beta_cc * f_cm_28
f_cm_t
```
```python
get_f_cm_t = sp.lambdify((t, f_cm_28, s), f_cm_t)
```
### Tensile strength
```python
f_ctm = sp.Symbol("f_ctm", positive=True)
alpha_f = sp.Symbol("alpha_f", positive=True)
```
```python
f_ctm_t = beta_cc * f_ctm
f_ctm_t
```
```python
get_f_ctm_t = sp.lambdify((t, f_ctm, s), f_ctm_t)
```
### Elastic modulus
```python
E_cm_28 = sp.Symbol("E_cm28", positive=True)
```
```python
E_cm_t = (f_cm_t / f_cm_28)**0.3 * E_cm_28
E_cm_t
```
```python
get_E_cm_t = sp.lambdify((t, E_cm_28, s), E_cm_t)
```
## Uncracked state
- Specimen is clamped at both sides. Then $\varepsilon_\mathrm{app} = 0, \forall x \in \Omega$
- Then the matrix stress is given as
\begin{align}
\sigma^\mathrm{m}(x,t) = - E^\mathrm{m}(t)
\cdot \alpha \int_0^t T^\prime(x,\theta)\, \mathrm{d}\theta
\end{align}
```python
alpha = sp.Symbol("alpha", positive=True )
```
```python
eps_eff = alpha * T_max_t
```
```python
dot_T_max_t = sp.simplify(T_max_t.diff(t))
```
```python
dot_eps_eff = alpha * dot_T_max_t
dot_E_cm_t = E_cm_t.diff(t)
```
```python
sig_t = E_cm_t * eps_eff
```
```python
dot_sig_t = E_cm_t * dot_eps_eff + dot_E_cm_t * eps_eff
```
```python
sp.simplify(dot_sig_t)
```
Integral cannot be resolved algebraically - numerical integration is used
```python
#sig2_t = sp.integrate(dot_sig_t, (t,0,t))
```
# Single crack state
## Time-dependent debonding process
### Fibers
- If there is a crack at $x_I$, then there can be non-zero apparent strains within the debonded zone - measurable using local strain sensors, i.e.
\begin{align}
\exists x \in (L_I^{(-)},L_I^{(+)}), \; \varepsilon_\mathrm{app}^\mathrm{f}(x,t) \neq 0.
\end{align}
- However, the integral of apparent strain in the fibers must disappear within the debonded zone, i.e.
\begin{align}
\int_{L^{(-)}}^{L^{(+)}}\varepsilon^\mathrm{f}_\mathrm{app}(x,t)\, \mathrm{d}x = 0
\end{align}
- Crack bridging fiber stress is given as
\begin{align}
\sigma^{\mathrm{f}}(x=0, t) = E^{\mathrm{f}} \varepsilon^{\mathrm{f}}_\mathrm{eff}(x=0, t)
\end{align}
### Matrix
- The integrated apparent strain in the matrix must be equal to crack opening $w_I$, i.e.
\begin{align}
\int_{L_I^{(-)}}^{L_I^{(+)}}\varepsilon^\mathrm{m}_\mathrm{app}(x,t)\, \mathrm{d}x + w_I = 0
\end{align}
- Considering symmetry, we can write
\begin{align}
\int_{0}^{L_I^{(+)}}\varepsilon^\mathrm{m}_\mathrm{app}(x,t)\, \mathrm{d}x
+ \frac{1}{2} w_I(t) = 0
\end{align}
This relation holds for a homogeneous strain distribution along the bar specimen.
Considering a non-reinforced concrete bar, it is possible to detect the time of
a crack occurrence by requiring:
\begin{align}
f_\mathrm{ct}(t) = \sigma_\mathrm{c}(t)
\end{align}
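A minimal numerical sketch of this criterion is given below; all parameter values are assumed, purely for illustration, and are not taken from a real mix design.

```python
import numpy as np

s_, f_ctm_ = 0.2, 3.0                  # assumed: rapid cement class, f_ctm in MPa
E_28, alpha_ = 30e3, 1e-5              # assumed: E-modulus at 28 days [MPa], thermal expansion [1/K]
T_prime_0, T_m, T_s = 100.0, 1.0, 1.0  # assumed temperature-history parameters

def beta_cc(t):
    return np.exp(s_ * (1.0 - np.sqrt(28.0 / t)))

def f_ct(t):                           # time-dependent tensile strength
    return beta_cc(t) * f_ctm_

def sigma_c(t):                        # restrained thermal stress in the clamped bar
    E_t = beta_cc(t)**0.3 * E_28
    T_t = np.exp(-(t / T_s)**T_m) * T_prime_0 * t
    return E_t * alpha_ * T_t

t = np.linspace(0.5, 28.0, 2000)
crack = np.where(sigma_c(t) >= f_ct(t))[0]
print('first crack at t =', t[crack[0]] if crack.size else 'none in the analysed period')
```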
# Multiple cracks
The temperature development during the hydration process follows the relation
\begin{align}
T(t,x)
\end{align}
At the same time, the material parameters of the concrete matrix and of bond are
defined as time functions
\begin{align}
E(t), f_\mathrm{ct}(t), \tau(t)
\end{align}
Temperature-induced concrete strain in a point $x$ at time $t$ is expressed as
\begin{align}
\bar{\varepsilon}_{T}(t,x) = \alpha \int_0^t \frac{\mathrm{d} T(t,x)}{\mathrm{d} t} {\mathrm{d} t}
\end{align}
\begin{align}
\bar{\varepsilon}_\mathrm{app} = \bar{\varepsilon}_\mathrm{eff} + \bar{\varepsilon}_\mathrm{\Delta T}
\end{align}
If the apparent strain is suppressed, i.e. $\bar{\varepsilon}_\mathrm{app} = 0$, the effective strain is given as
\begin{align}
0 = \bar{\varepsilon}_\mathrm{eff} +
\bar{\varepsilon}_{\Delta T} \implies
\bar{\varepsilon}_\mathrm{eff} = - \alpha \Delta T
\end{align}
More precisely, this equation reads
\begin{align}
\bar{\varepsilon}_\mathrm{eff}(t) = - \alpha \, \int_0^t \frac{\mathrm{d}T}{ \mathrm{d}t} \, \mathrm{d} t
\end{align}
The current stress at the boundary of the specimen is then given as
\begin{align}
\sigma = E(t) \, \varepsilon_{\mathrm{eff}}(t)
\end{align}
\begin{align}
\sigma = E(t) \left(\varepsilon_{\mathrm{app}}(x,t) - \alpha \int_0^t T^\prime(x,\theta) \, \mathrm{d}\theta \right)
\end{align}
**Salient features of the algorithm**
Non-linearity is introduced through the cracking stress:
- find the time and location of the next crack occurrence
- provide a local, crack-centered solution of the cracking problem
| c0fe65f552589e931cc19437d7f4ea4f2bc500ef | 72,293 | ipynb | Jupyter Notebook | bmcs_beam/tension/time_dependent_cracking.ipynb | bmcs-group/bmcs_beam | b53967d0d0461657ec914a3256ec40f9dcff80d5 | [
"MIT"
] | 1 | 2021-05-07T11:10:27.000Z | 2021-05-07T11:10:27.000Z | bmcs_beam/tension/time_dependent_cracking.ipynb | bmcs-group/bmcs_beam | b53967d0d0461657ec914a3256ec40f9dcff80d5 | [
"MIT"
] | null | null | null | bmcs_beam/tension/time_dependent_cracking.ipynb | bmcs-group/bmcs_beam | b53967d0d0461657ec914a3256ec40f9dcff80d5 | [
"MIT"
] | null | null | null | 76.662778 | 23,612 | 0.800562 | true | 2,424 | Qwen/Qwen-72B | 1. YES
2. YES | 0.90599 | 0.746139 | 0.675994 | __label__eng_Latn | 0.635677 | 0.408892 |
# Analytical problem
Defining a problem with an explicit mathematical representation is straightforward.
As an example, consider the following multiobjective optimization problem
\begin{equation}
\begin{aligned}
& \underset{\mathbf x}{\text{min}}
& & x_1^2 - x_2; x_2^2 - 3x_1 \\
& \text{s.t.} & & x_1 + x_2 \leq 10 \\
& & & \mathbf{x} \; \in S, \\
\end{aligned}
\end{equation}
where the feasible region is
\begin{equation}
x_i \in \left[-5, 5\right] \; \forall i \;\in \left[1,2\right].
\end{equation}
Begin by importing the necessary classes:
```python
from desdeov2.problem.Problem import ScalarMOProblem
from desdeov2.problem.Objective import ScalarObjective
from desdeov2.problem.Variable import Variable
from desdeov2.problem.Constraint import ScalarConstraint
```
Define the variables:
```python
# Args: name, starting value, lower bound, upper bound
x1 = Variable("x_1", 0, -0.5, 0.5)
x2 = Variable("x_2", 0, -0.5, 0.5)
```
Define the objectives. Notice the argument of the callable objective function: it is assumed to be array-like.
```python
# Args: name, callable
obj1 = ScalarObjective("f_1", lambda x: x[0]**2 - x[1])
obj2 = ScalarObjective("f_2", lambda x: x[1]**2 - 3*x[0])
```
Define the constraints. A constraint may depend on the objective functions as well (second argument to the lambda; notice the underscore). In that case, the objectives should not be defined inline, as above, but as their own function definitions. The constraint should be defined so that, when evaluated, it returns a positive value if the constraint is satisfied and a negative value if it is violated.
```python
# Args: name, n of variables, n of objectives, callable
cons1 = ScalarConstraint("c_1", 2, 2, lambda x, _: 10 - (x[0] + x[1]))
```
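As a hypothetical sketch of that objective-dependent case (assuming the second argument carries the evaluated objective vector), the objectives would be defined as named functions and the constraint would use them:

```python
def obj1_fun(x):
    return x[0]**2 - x[1]

def obj2_fun(x):
    return x[1]**2 - 3*x[0]

# e.g. additionally require f_1 + f_2 <= 20
cons2 = ScalarConstraint("c_2", 2, 2, lambda x, f: 20 - (f[0] + f[1]))
```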
Finally, put it all together and create the problem.
```python
# Args: list of objectives, variables and constraints
problem = ScalarMOProblem([obj1, obj2]
,[x1, x2]
,[cons1])
```
Now, the problem is fully specified and can be evaluated and played around with.
```python
import numpy as np
print("N of objectives:", problem.n_of_objectives)
print("N of variables:", problem.n_of_variables)
print("N of constraints:", problem.n_of_constraints)
res1, eval_cons1 = problem.evaluate(np.array([2, 4]))
res2, eval_cons2 = problem.evaluate(np.array([6, 6]))
res3, eval_cons3 = problem.evaluate(np.array([[6, 3], [4,3], [7,4]]))
print("Single feasible decision variables:", res1, "with constraint values", eval_cons1)
print("Single non-feasible decision variables:", res2, "with constraint values", eval_cons2)
print("Multiple decision variables:", res3, "with constraint values", eval_cons3)
```
| 62f4b411b18df3ff4d60773c795725dc6ec5e25e | 4,737 | ipynb | Jupyter Notebook | notebooks/analytical_problem.ipynb | gialmisi/DESDEOv2 | 0eeb4687d2e539845ab86a5018ff99b92e4ca5cf | [
"MIT"
] | 1 | 2019-08-08T05:11:21.000Z | 2019-08-08T05:11:21.000Z | notebooks/analytical_problem.ipynb | gialmisi/DESDEOv2 | 0eeb4687d2e539845ab86a5018ff99b92e4ca5cf | [
"MIT"
] | 3 | 2019-08-25T08:49:33.000Z | 2019-09-06T08:06:46.000Z | notebooks/analytical_problem.ipynb | gialmisi/DESDEOv2 | 0eeb4687d2e539845ab86a5018ff99b92e4ca5cf | [
"MIT"
] | 1 | 2019-11-07T14:42:29.000Z | 2019-11-07T14:42:29.000Z | 28.709091 | 420 | 0.568503 | true | 751 | Qwen/Qwen-72B | 1. YES
2. YES | 0.972415 | 0.897695 | 0.872932 | __label__eng_Latn | 0.954038 | 0.866447 |
###### Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2014 L.A. Barba, C.D. Cooper, G.F. Forsyth.
# Reaction-diffusion model
This IPython Notebook presents the context and set-up for the coding assignment of Module 4: _Spreading out: Diffusion problems_, of the course [**"Practical Numerical Methods with Python"**](https://github.com/numerical-mooc/numerical-mooc) (a.k.a., numericalmooc).
So far in this module, we've studied diffusion in 1D and 2D. Now it's time to add in some more interesting physics. You'll study a model represented by *reaction-diffusion* equations. What are they? The name says it all—it's a system that has the physics of diffusion but also has some kind of reaction that adds different behaviors to the solution.
We're going to look at the _Gray-Scott model_, which simulates the interaction of two generic chemical species reacting and ... you guessed it ... diffusing! Some amazing patterns can emerge with simple reaction models, eerily reminiscent of patterns formed in nature. It's fascinating! Check out this simulation by Karl Sims posted on You Tube ... it looks like a growing coral reef, doesn't it?
```
from IPython.display import YouTubeVideo
YouTubeVideo('8dTmUr5qKvI')
```
## Gray-Scott model
The Gray-Scott model represents the reaction and diffusion of two generic chemical species, $U$ and $V$, whose concentration at a point in space is represented by variables $u$ and $v$. The model follows some simple rules.
* Each chemical _diffuses_ through space at its own rate.
* Species $U$ is added at a constant feed rate into the system.
* Two units of species V can 'turn' a unit of species U into V: $\; 2V+U\rightarrow 3V$
* There's a constant kill rate removing species $V$.
This model results in the following system of partial differential equations for the concentrations $u(x,y,t)$ and $v(x,y,t)$ of both chemical species:
\begin{align}
\frac{\partial u}{\partial t} &= D_u \nabla ^2 u - uv^2 + F(1-u)\\
\frac{\partial v}{\partial t} &= D_v \nabla ^2 v + uv^2 - (F + k)v
\end{align}
You should see some familiar terms, and some unfamiliar ones. On the left-hand side of each equation, we have the time rate of change of the concentrations. The first term on the right of each equation corresponds to the spatial diffusion of each concentration, with $D_u$ and $D_v$ the respective rates of diffusion.
In case you forgot, the operator $\nabla ^2$ is the Laplacian:
$$
\nabla ^2 u = \frac{\partial ^2 u}{\partial x^2} + \frac{\partial ^2 u}{\partial y^2}
$$
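On the discrete grid, this operator is approximated with central differences; the sketch below (illustrative only, with $\Delta x = \Delta y = \delta$) shows the idea for the interior points using array slicing:

```
def laplacian(U, delta):
    """Central-space Laplacian of U at the interior grid points."""
    return (U[1:-1, 2:] + U[1:-1, :-2] + U[2:, 1:-1] + U[:-2, 1:-1]
            - 4 * U[1:-1, 1:-1]) / delta**2
```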
The second term on the right-hand side of each equation corresponds to the reaction. You see that this term decreases $u$ while it increases $v$ in the same amount: $uv^2$. The reaction requires one unit of $U$ and two units of $V$, resulting in a reaction rate proportional to the concentration $u$ and to the square of the concentration $v$. This result derives from the _law of mass action_, which we can explain in terms of probability: the odds of finding one molecule of species $U$ at a point in space is proportional to the concentration $u$, while the odds of finding two molecules of $V$ is proportional to the concentration squared, $v^2$. We assume here a reaction rate constant equal to $1$, which just means that the model is non-dimensionalized in some way.
The final terms in the two equations are the "feed" and "kill" rates, respectively: $F(1-u)$ replenishes the species $U$ (which would otherwise run out, as it is being turned into $V$ by the reaction); $-(F+k)v$ is diminishing the species $V$ (otherwise the concentration $v$ would simply increase without bound).
The values of $F$ and $k$ are chosen parameters and part of the fun of this assignment is to change these values, together with the diffusion constants, and see what happens.
### Problem setup
The system is represented by two arrays, `U` and `V`, holding the discrete values of the concentrations $u$ and $v$, respectively. We start by setting `U = 1` everywhere and `V = 0` everywhere, then introduce areas of difference, as initial conditions. We then add a little noise to the whole system to help the $u$ and $v$ reactions along.
Below is the code segment we used to generate the initial conditions for `U` and `V`.
**NOTE**: *DO NOT USE THIS CODE IN YOUR ASSIGNMENT*.
We are showing it here to help you understand how the system is constructed. However, you _must use the data we've supplied below_ as your starting condition or your answers will not match those that the grading system expects.
```python
num_blocks = 30
randx = numpy.random.randint(1, nx-1, num_blocks)
randy = numpy.random.randint(1, nx-1, num_blocks)
U = numpy.ones((n,n))
V = numpy.zeros((n,n))
r = 10
U[:,:] = 1.0
for i, j in zip(randx, randy):
U[i-r:i+r,j-r:j+r] = 0.50
V[i-r:i+r,j-r:j+r] = 0.25
U += 0.05*numpy.random.random((n,n))
V += 0.05*numpy.random.random((n,n))
```
## Your assignment
* Discretize the reaction-diffusion equations using forward-time/central-space and assume that $\Delta x = \Delta y = \delta$.
* For your timestep, set
$$\Delta t = \frac{9}{40}\frac{\delta^2}{\max(D_u, D_v)}$$
* Use zero Neumann boundary conditions on all sides of the domain.
You should use the initial conditions and constants listed in the cell below. They correspond to the following domain:
* Grid of points with dimension `192x192` points
* Domain is $5{\rm m} \times 5{\rm m}$
* Final time is $8000{\rm s}$.
```
import numpy
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
```
```
n = 192
Du, Dv, F, k = 0.00016, 0.00008, 0.035, 0.065 # Bacteria 1
dh = 5./(n-1)
T = 8000
dt = .9 * dh**2 / (4*max(Du,Dv))
nt = int(T/dt)
```
### Initial condition data files
In order to ensure that you start from the same initial conditions as we do, please download the file [uvinitial.npz](https://github.com/numerical-mooc/numerical-mooc/blob/master/lessons/04_spreadout/data/uvinitial.npz?raw=true)
This is a NumPy save-file that contains two NumPy arrays, holding the initial values for `U` and `V`, respectively. Once you have downloaded the file into your working directory, you can load the data using the following code snippet.
```
uvinitial = numpy.load('./data/uvinitial.npz')
U = uvinitial['U']
V = uvinitial['V']
```
```
fig = plt.figure(figsize=(8,5))
plt.subplot(121)
plt.imshow(U, cmap = cm.RdBu)
plt.xticks([]), plt.yticks([]);
plt.subplot(122)
plt.imshow(V, cmap = cm.RdBu)
plt.xticks([]), plt.yticks([]);
```
## Sample Solution
Below is an animated gif showing the results of this solution for a different set of randomized initial block positions. Each frame of the animation represents 100 timesteps.
Just to get your juices flowing!
## Exploring extra patterns
Once you have completed the assignment, you might want to explore a few more of the interesting patterns that can be obtained with the Gray-Scott model. The conditions below will result in a variety of patterns and should work without any other changes to your existing code.
This pattern is called "Fingerprints."
```
#Du, Dv, F, k = 0.00014, 0.00006, 0.035, 0.065 # Bacteria 2
#Du, Dv, F, k = 0.00016, 0.00008, 0.060, 0.062 # Coral
#Du, Dv, F, k = 0.00019, 0.00005, 0.060, 0.062 # Fingerprint
#Du, Dv, F, k = 0.00010, 0.00010, 0.018, 0.050 # Spirals
#Du, Dv, F, k = 0.00012, 0.00008, 0.020, 0.050 # Spirals Dense
#Du, Dv, F, k = 0.00010, 0.00016, 0.020, 0.050 # Spirals Fast
#Du, Dv, F, k = 0.00016, 0.00008, 0.020, 0.055 # Unstable
#Du, Dv, F, k = 0.00016, 0.00008, 0.050, 0.065 # Worms 1
#Du, Dv, F, k = 0.00016, 0.00008, 0.054, 0.063 # Worms 2
#Du, Dv, F, k = 0.00016, 0.00008, 0.035, 0.060 # Zebrafish
```
## References
* Reaction-diffusion tutorial, by Karl Sims
http://www.karlsims.com/rd.html
* Pearson, J. E. (1993). [Complex patterns in a simple system](http://www.sciencemag.org/content/261/5118/189), _Science_, Vol. 261(5118), 189-192 // [PDF](http://www3.nd.edu/~powers/pearson.pdf) from nd.edu.
* Pattern Parameters from [http://www.aliensaint.com/uo/java/rd/](http://www.aliensaint.com/uo/java/rd/)
---
###### The cell below loads the style of the notebook
```
from IPython.core.display import HTML
css_file = '../../styles/numericalmoocstyle.css'
HTML(open(css_file, "r").read())
```
| be0d7f2dddf8fb43d17b72548ca3d1157e2189f9 | 287,907 | ipynb | Jupyter Notebook | lessons/04_spreadout/06_Reaction_Diffusion.ipynb | SrLobo1/numerical-mooc | 202c3859c5545099cbe8e69702c45475eadf5329 | [
"CC-BY-3.0"
] | 1 | 2017-02-10T12:09:09.000Z | 2017-02-10T12:09:09.000Z | lessons/04_spreadout/06_Reaction_Diffusion.ipynb | albertonogueira/numerical-mooc | dd95e650310502b5cdfe6e405ed7ab7e1496d233 | [
"CC-BY-3.0"
] | null | null | null | lessons/04_spreadout/06_Reaction_Diffusion.ipynb | albertonogueira/numerical-mooc | dd95e650310502b5cdfe6e405ed7ab7e1496d233 | [
"CC-BY-3.0"
] | null | null | null | 531.193727 | 268,055 | 0.92524 | true | 3,351 | Qwen/Qwen-72B | 1. YES
2. YES | 0.870597 | 0.808067 | 0.703501 | __label__eng_Latn | 0.964743 | 0.4728 |
# Commutator and expansion based computations with Python & Sympy
```
from sympy.physics.quantum import Commutator, Dagger, Operator
from sympy import simplify, expand, exp, series, Symbol, init_printing
init_printing()
t = Symbol("t")
```
Here's a quick demo on how to do computations with commutators and expansions involving operators with Python and Sympy. Let's define a couple of operators first.
```
L1= Operator("L1")
L2= Operator("L2")
L = L1+L2;
```
Note that those operators by definition do **not** commute.
```
Commutator(L1,L2)==0
```
False
Now let's move on to an operator defined by an expansion, the exponential operator. This is a well-defined operator as long as L is bounded. Most often it's denoted by $e^{Lt}$ and is really nothing more than
$$e^{Lt}=\sum_{k=0}^{\infty}\frac{(Lt)^k}{k!}$$
Let's define a function that approximates this operator, up to a certain order.
```
def expseries(L,t,k):
return series(exp(L*t),x=t,n=k);
```
Now we can compute the expansions of those operators and combinations of them. For example, the first terms of $e^{(L_1+2L_2)t}$ are
```
expseries(L1+2*L2,t,3)
```
$$1 + t \left(L_{1} + 2 L_{2}\right) + t^{2} \left(L_{1} L_{2} + \frac{\left(L_{1}\right)^{2}}{2} + L_{2} L_{1} + 2 \left(L_{2}\right)^{2}\right) + \mathcal{O}\left(t^{3}\right)$$
We can also compute the difference between different exponential operators. For example, it is a fact that we can approximate the operator $e^{Lt}=e^{(L_1+L_2)t}$ with $e^{L_1t}e^{L_2t}$
This is called the Lie splitting and with Python & Sympy we can figure out the difference between those two, which is a new operator.
```
expseries(L1+L2,t,3)-expseries(L1,t,3)*expseries(L2,t,3)
```
$$- \left(1 + t L_{1} + \frac{t^{2} \left(L_{1}\right)^{2}}{2} + \mathcal{O}\left(t^{3}\right)\right) \left(1 + t L_{2} + \frac{t^{2} \left(L_{2}\right)^{2}}{2} + \mathcal{O}\left(t^{3}\right)\right) + 1 + t \left(L_{1} + L_{2}\right) + t^{2} \left(\frac{L_{1} L_{2}}{2} + \frac{\left(L_{1}\right)^{2}}{2} + \frac{L_{2} L_{1}}{2} + \frac{\left(L_{2}\right)^{2}}{2}\right) + \mathcal{O}\left(t^{3}\right)$$
Those are the error terms, but clearly we can have python do more simplifications for us. We use sympy's *simplify* for that.
```
simplify(expseries(L1+L2,t,3)-expseries(L1,t,3)*expseries(L2,t,3))
```
$$\frac{1}{2} \left(t^{2} L_{2} L_{1} - t^{2} L_{1} L_{2} + \mathcal{O}\left(t^{3}\right)\right)$$
Now things look a lot better! Note that the term in there is actually the commutator of $L_1$ and $L_2$. It's easy to isolate that term in python with an *expand* and *coeff*.
```
expand(simplify(expseries(L1+L2,t,3)-expseries(L1,t,3)*expseries(L2,t,3))).coeff(t,2)
```
$$- \frac{L_{1} L_{2}}{2} + \frac{L_{2} L_{1}}{2}$$
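As a quick sanity check (a small addition to the original notes), this coefficient is exactly $-\tfrac{1}{2}[L_1, L_2]$:

```
c2 = expand(simplify(expseries(L1+L2,t,3)-expseries(L1,t,3)*expseries(L2,t,3))).coeff(t,2)
simplify(c2 + Commutator(L1, L2).doit()/2)  # evaluates to 0
```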
We can use the same idea to compute the errors between arbitrarily complex splittings.
For example, we can also approximate $e^{Lt}$ with $e^{L_1t/2}e^{L_2t}e^{L_1t/2}$. This is called the Strang splitting. Again we try to compute the error term.
```
simplify(expseries(L1+L2,t,3)-expseries(L1/2,t,3)*expseries(L2,t,3)*expseries(L1,t,3))
```
$$\frac{1}{8} \left(- 4 t L_{1} - 4 t^{2} L_{2} L_{1} - 5 t^{2} \left(L_{1}\right)^{2} + \mathcal{O}\left(t^{3}\right)\right)$$
In this case the error term still contains a first-order term. The reason is that the last factor above was expanded with $L_1$ instead of $L_1/2$, so the product is not actually the Strang splitting; we also need one more order in the expansion to see the true leading error. Let's redo it with the correct half-steps and one extra term. A lot of multiplications will be hidden under the following *simplify*.
```
err_strang = simplify(expseries(L1+L2,t,4)-expseries(L1/2,t,4)*expseries(L2,t,4)*expseries(L1/2,t,4))
err_strang
```
$$\frac{1}{24} \left(- 2 t^{3} \left(L_{2}\right)^{2} L_{1} + t^{3} L_{2} \left(L_{1}\right)^{2} + 4 t^{3} L_{2} L_{1} L_{2} + t^{3} \left(L_{1}\right)^{2} L_{2} - 2 t^{3} L_{1} \left(L_{2}\right)^{2} - 2 t^{3} L_{1} L_{2} L_{1} + \mathcal{O}\left(t^{4}\right)\right)$$
Once again, we can collect the important terms by *expand* and *coeff*.
```
expand(err_strang).coeff(t,3)
```
$$- \frac{L_{1} L_{2}}{12} L_{1} - \frac{L_{1} \left(L_{2}\right)^{2}}{12} + \frac{\left(L_{1}\right)^{2} L_{2}}{24} + \frac{L_{2} L_{1}}{6} L_{2} + \frac{L_{2} \left(L_{1}\right)^{2}}{24} - \frac{\left(L_{2}\right)^{2} L_{1}}{12}$$
```
```
| daf0767276fc78e3114e41a809195bc72f160bb7 | 12,143 | ipynb | Jupyter Notebook | ipythonNotebooks/commutators_and_sympy.ipynb | kgourgou/blog | c9da56dc87a2b349efe06972a59706bfb181b197 | [
"MIT"
] | 2 | 2015-12-02T06:18:58.000Z | 2016-10-07T20:21:04.000Z | ipythonNotebooks/commutators_and_sympy.ipynb | kgourgou/blog | c9da56dc87a2b349efe06972a59706bfb181b197 | [
"MIT"
] | null | null | null | ipythonNotebooks/commutators_and_sympy.ipynb | kgourgou/blog | c9da56dc87a2b349efe06972a59706bfb181b197 | [
"MIT"
] | null | null | null | 50.807531 | 1,032 | 0.550688 | true | 1,573 | Qwen/Qwen-72B | 1. YES
2. YES | 0.887205 | 0.847968 | 0.752321 | __label__eng_Latn | 0.952906 | 0.586226 |
```python
from IPython.display import Image
Image('../../../python_for_probability_statistics_and_machine_learning.jpg')
```
[Python for Probability, Statistics, and Machine Learning](https://www.springer.com/fr/book/9783319307152)
```python
from __future__ import division
%pylab inline
```
Populating the interactive namespace from numpy and matplotlib
As we have seen, outside of some toy problems, it can be very difficult or
impossible to determine the probability density distribution of the estimator
of some quantity. The idea behind the bootstrap is that we can use computation
to approximate these functions which would otherwise be impossible to solve
for analytically.
Let's start with a simple example. Suppose we have the following set of random
variables, $\lbrace X_1, X_2, \ldots, X_n \rbrace$ where each $X_k \sim F$. In
other words the samples are all drawn from the same unknown distribution $F$.
Having run the experiment, we thereby obtain the following sample set:
$$
\lbrace x_1, x_2, \ldots, x_n \rbrace
$$
The sample mean is computed from this set as,
$$
\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i
$$
The next question is how close is the sample mean to the true mean,
$\theta = \mathbb{E}_F(X)$. Note that the second central moment of $X$ is as
follows:
$$
\mu_2(F) := \mathbb{E}_F (X^2) - (\mathbb{E}_F (X))^2
$$
The standard deviation of the sample mean, $\bar{x}$, given $n$
samples from an underlying distribution $F$, is the following:
$$
\sigma(F) = (\mu_2(F)/n)^{1/2}
$$
Unfortunately, because we have only the set of samples $\lbrace x_1,
x_2, \ldots, x_n \rbrace$ and not $F$ itself, we cannot compute this and
instead must use the estimated standard error,
$$
\bar{\sigma} = (\bar{\mu}_2/n)^{1/2}
$$
where $\bar{\mu}_2 = \sum (x_i -\bar{x})^2/(n-1) $, which is the
unbiased estimate of $\mu_2(F)$. However, that is not the only way to proceed.
Instead, we could replace $F$ by some estimate, $\hat{F}$ obtained as a
piecewise function of $\lbrace x_1, x_2, \ldots, x_n \rbrace$ by placing
probability mass $1/n$ on each $x_i$. With that in place, we can compute the
estimated standard error as the following:
$$
\hat{\sigma}_B = (\mu_2(\hat{F})/n)^{1/2}
$$
which is called the *bootstrap estimate* of the standard error.
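As a small illustrative sketch (using a stand-in sample, since the actual data are generated further below), the plug-in estimate above amounts to the following:

```python
import numpy as np
x = np.random.beta(3, 2, size=50)        # stand-in for the observed sample
mu2_hat = x.var(ddof=0)                  # second central moment of F-hat (mass 1/n on each x_i)
sigma_hat_B = np.sqrt(mu2_hat / len(x))  # plug-in (bootstrap) standard error of the mean
print(sigma_hat_B)
```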
Unfortunately, the story effectively ends here. In even a slightly more general
setting, there is no clean formula $\sigma(F)$ within which $F$ can be swapped
for $\hat{F}$.
This is where the computer saves the day. We actually do not need to know the
formula $\sigma(F)$ because we can compute it using a resampling method. The
key idea is to sample with replacement from $\lbrace x_1, x_2, \ldots, x_n
\rbrace$. The new set of $n$ independent draws (with replacement) from this set
is the *bootstrap sample*,
$$
y^* = \lbrace x_1^*, x_2^*, \ldots, x_n^* \rbrace
$$
The Monte Carlo algorithm proceeds by first selecting a large number of
bootstrap samples, $\lbrace y^*_k\rbrace$, then computing the statistic on each
of these samples, and then computing the sample standard deviation of the
results in the usual way. Thus, the bootstrap estimate of the statistic
$\theta$ is the following,
$$
\hat{\theta}^*_B = \frac{1}{B} \sum_k \hat{\theta}^*(k)
$$
with the corresponding square of the sample standard deviation as
$$
\hat{\sigma}_B^2 = \frac{1}{B-1} \sum_k (\hat{\theta}^*(k)-\hat{\theta}^*_B )^2
$$
The process is much simpler than the notation implies.
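In code, the whole Monte Carlo recipe is only a few lines; here is a generic sketch (our own illustration) for an arbitrary statistic such as the median:

```python
import numpy as np

def bootstrap_se(x, stat, B=1000):
    """Monte Carlo bootstrap estimate of the standard error of stat(x)."""
    n = len(x)
    thetas = np.array([stat(np.random.choice(x, n)) for _ in range(B)])
    return thetas.std(ddof=1)

# e.g. bootstrap_se(samples, np.median)
```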
Let's explore this with a simple example using Python. The next block
of code sets up some samples from a $\beta(3,2)$ distribution,
```python
import numpy as np
_=np.random.seed(123456)
```
```python
import numpy as np
from scipy import stats
rv = stats.beta(3,2)
xsamples = rv.rvs(50)
```
Because this is simulation data, we already know that the
mean is $\mu_1 = 3/5$ and the standard deviation of the sample mean
for $n=50$ is $\bar{\sigma} =1/\sqrt{1250}$, which we will verify later.
```python
%matplotlib inline
from matplotlib.pylab import subplots
fig,ax = subplots()
fig.set_size_inches(8,4)
_=ax.hist(xsamples,normed=True,color='gray')
ax2 = ax.twinx()
_=ax2.plot(np.linspace(0,1,100),rv.pdf(np.linspace(0,1,100)),lw=3,color='k')
_=ax.set_xlabel('$x$',fontsize=28)
_=ax2.set_ylabel(' $y$',fontsize=28,rotation='horizontal')
fig.tight_layout()
#fig.savefig('fig-statistics/Bootstrap_001.png')
```
**Figure (fig:Bootstrap_001):** The $\beta(3,2)$ distribution and the histogram that approximates it.
[Figure](#fig:Bootstrap_001) shows the $\beta(3,2)$ distribution and
the corresponding histogram of the samples. The histogram represents
$\hat{F}$ and is the distribution we sample from to obtain the
bootstrap samples. As shown, the $\hat{F}$ is a pretty crude estimate
for the $F$ density (smooth solid line), but that's not a serious
problem insofar as the following bootstrap estimates are concerned.
In fact, the approximation $\hat{F}$ has a natural tendency to
pull towards where most of the probability mass is. This is a
feature, not a bug; and is the underlying mechanism for why
bootstrapping works, but the formal proofs that exploit this basic
idea are far out of our scope here. The next block generates the
bootstrap samples
```python
yboot = np.random.choice(xsamples,(100,50))
yboot_mn = yboot.mean()
```
and the bootstrap estimate is therefore,
```python
np.std(yboot.mean(axis=1)) # approx sqrt(1/1250)
```
0.025598763883825818
[Figure](#fig:Bootstrap_002) shows the distribution of computed
sample means from the bootstrap samples. As promised, the next block
shows how to use `sympy.stats` to compute the $\beta(3,2)$ parameters we quoted
earlier.
```python
fig,ax = subplots()
fig.set_size_inches(8,4)
_=ax.hist(yboot.mean(axis=1),normed=True,color='gray')
_=ax.set_title('Bootstrap std of sample mean %3.3f vs actual %3.3f'%
(np.std(yboot.mean(axis=1)),np.sqrt(1/1250.)))
fig.tight_layout()
#fig.savefig('fig-statistics/Bootstrap_002.png')
```
**Figure (fig:Bootstrap_002):** For each bootstrap draw, we compute the sample mean. This is the histogram of those sample means that will be used to compute the bootstrap estimate of the standard deviation.
```python
import sympy as S
import sympy.stats
for i in range(50): # 50 samples
# load sympy.stats Beta random variables
# into global namespace using exec
execstring = "x%d = S.stats.Beta('x'+str(%d),3,2)"%(i,i)
exec(execstring)
# populate xlist with the sympy.stats random variables
# from above
xlist = [eval('x%d'%(i)) for i in range(50) ]
# compute sample mean
sample_mean = sum(xlist)/len(xlist)
# compute expectation of sample mean
sample_mean_1 = S.stats.E(sample_mean)
# compute 2nd moment of sample mean
sample_mean_2 = S.stats.E(S.expand(sample_mean**2))
# standard deviation of sample mean
# use sympy sqrt function
sigma_smn = S.sqrt(sample_mean_2-sample_mean_1**2) # 1/sqrt(1250)
print sigma_smn
```
sqrt(-1/(20000*beta(3, 2)**2) + 1/(1500*beta(3, 2)))
**Programming Tip.**
Using the `exec` function enables the creation of a sequence of Sympy
random variables. Sympy has the `var` function which can automatically
create a sequence of Sympy symbols, but there is no corresponding
function in the statistics module to do this for random variables.
**Example.** Recall the delta method introduced in an earlier section. Suppose we have a set of Bernoulli coin-flips
($X_i$) with probability of head $p$. Our maximum likelihood estimator
of $p$ is $\hat{p}=\sum X_i/n$ for $n$ flips. We know this estimator
is unbiased with $\mathbb{E}(\hat{p})=p$ and $\mathbb{V}(\hat{p}) =
p(1-p)/n$. Suppose we want to use the data to estimate the variance of
the Bernoulli trials ($\mathbb{V}(X)=p(1-p)$). In the notation of the
delta method, $g(x) = x(1-x)$. By the plug-in principle, our maximum
likelihood estimator of this variance is then $\hat{p}(1-\hat{p})$. We
want the variance of this quantity. Using the results of the delta
method, we have
$$
\begin{align*}
\mathbb{V}(g(\hat{p})) &=(1-2\hat{p})^2\mathbb{V}(\hat{p}) \\\
\mathbb{V}(g(\hat{p})) &=(1-2\hat{p})^2\frac{\hat{p}(1-\hat{p})}{n} \\\
\end{align*}
$$
Let's see how useful this is with a short simulation.
```python
import numpy as np
np.random.seed(123)
```
```python
from scipy import stats
import numpy as np
p= 0.25 # true head-up probability
x = stats.bernoulli(p).rvs(10)
print x
```
[0 0 0 0 0 0 1 0 0 0]
The maximum likelihood estimator of $p$ is $\hat{p}=\sum X_i/n$,
```python
phat = x.mean()
print phat
```
0.1
Then, plugging this into the delta method approximant above,
```python
print (1-2*phat)**2*(phat)**2/10.
```
0.00064
Now, let's try this using the bootstrap estimate of the variance
```python
phat_b=np.random.choice(x,(50,10)).mean(1)
print np.var(phat_b*(1-phat_b))
```
0.005049
This shows that the delta method's estimated variance
is different from the bootstrap method, but which one is better?
For this situation we can solve for this directly using Sympy
```python
import sympy as S
from sympy.stats import E, Bernoulli
xdata =[Bernoulli(i,p) for i in S.symbols('x:10')]
ph = sum(xdata)/float(len(xdata))
g = ph*(1-ph)
```
**Programming Tip.**
The argument in the `S.symbols('x:10')` function returns a sequence of Sympy
symbols named `x0, x1, ..., x9`. This is shorthand for creating and naming each
symbol sequentially.
Note that `g` is the $g(\hat{p})=\hat{p}(1- \hat{p})$
whose variance we are trying to estimate. Then,
we can plug in for the estimated $\hat{p}$ and get the correct
value for the variance,
```python
print E(g**2) - E(g)**2
```
0.00442968750000000
This case is generally representative --- the delta method tends
to underestimate the variance and the bootstrap estimate is better here.
## Parametric Bootstrap
In the previous example, we used the $\lbrace x_1, x_2, \ldots, x_n \rbrace $
samples themselves as the basis for $\hat{F}$ by weighting each with $1/n$. An
alternative is to *assume* that the samples come from a particular
distribution, estimate the parameters of that distribution from the sample set,
and then use the bootstrap mechanism to draw samples from the assumed
distribution, using the so-derived parameters. For example, the next code block
does this for a normal distribution.
```python
rv = stats.norm(0,2)
xsamples = rv.rvs(45)
# estimate mean and var from xsamples
mn_ = np.mean(xsamples)
std_ = np.std(xsamples)
# bootstrap from assumed normal distribution with
# mn_,std_ as parameters
rvb = stats.norm(mn_,std_) #plug-in distribution
yboot = rvb.rvs(1000)
```
Recall the sample variance estimator is the following:
$$
S^2 = \frac{1}{n-1} \sum (X_i-\bar{X})^2
$$
Assuming that the samples are normally distributed, this
means that $(n-1)S^2/\sigma^2$ has a chi-squared distribution with
$n-1$ degrees of freedom. Thus, the variance, $\mathbb{V}(S^2) = 2
\sigma^4/(n-1) $. Likewise, the MLE plug-in estimate for this is
$\mathbb{V}(S^2) = 2 \hat{\sigma}^4/(n-1)$. The following code computes
the variance of the sample variance, $S^2$, using the MLE and bootstrap
methods.
```python
# MLE-Plugin Variance of the sample variance
print 2*(std_**2)**2/9. # MLE plugin
# Bootstrap variance of the sample variance
print yboot.var()
# True variance of the sample variance
print 2*(2**2)**2/9.
```
2.22670148618
3.29467885682
3.55555555556
This shows that the bootstrap estimate is better here than the MLE
plugin estimate.
Note that this technique becomes even more powerful with multivariate
distributions with many parameters because all the mechanics are the same.
Thus, the bootstrap is a great all-purpose method for computing standard
errors, but, in the limit, is it converging to the correct value? This is the
question of *consistency*. Unfortunately, to answer this question requires more
and deeper mathematics than we can get into here. The short answer is that for
estimating standard errors, the bootstrap is a consistent estimator in a wide
range of cases and so it definitely belongs in your toolkit.
| 2d8aeab96216ebc9c6179fe5bb4c7cc02645db59 | 171,769 | ipynb | Jupyter Notebook | chapters/statistics/notebooks/Bootstrap.ipynb | rajkubp020/helloword | 4bd22691de24b30a0f5b73821c35a7ac0666b034 | [
"MIT"
] | null | null | null | chapters/statistics/notebooks/Bootstrap.ipynb | rajkubp020/helloword | 4bd22691de24b30a0f5b73821c35a7ac0666b034 | [
"MIT"
] | null | null | null | chapters/statistics/notebooks/Bootstrap.ipynb | rajkubp020/helloword | 4bd22691de24b30a0f5b73821c35a7ac0666b034 | [
"MIT"
] | null | null | null | 216.606557 | 114,721 | 0.908115 | true | 3,702 | Qwen/Qwen-72B | 1. YES
2. YES | 0.754915 | 0.843895 | 0.637069 | __label__eng_Latn | 0.991351 | 0.318456 |
# Optimization
- [Least squares](#Least-squares)
- [Gradient descent](#Gradient-descent)
- [Constraint optimization](#Constraint-optimization)
- [Global optimization](#Global-optimization)
## Intro
Biological research uses optimization when performing many types of machine learning, or when it interfaces with engineering. A particular example is metabolic engineering. As a topic in itself, optimization is extremely complex and useful, so much so that it touches to the core of mathematics and computing.
An optimization problem complexity is dependent on several factors, such as:
- Do you intend a local or a global optimization?
- Is the function linear or nonlinear?
- Is the function convex or not?
- Can a gradient be computed?
- Can the Hessian matrix be computed?
- Do we perform optimization under constraints?
- Are those constraints integers?
- Is there a single objective or several?
Scipy does not cover all solvers efficiently, but there are several Python packages specialized for certain classes of optimization problems. In general though, the heavier optimization tasks are solved with dedicated programs, many of which have language bindings for Python.
## Least squares
In practical terms, the most basic application of optimization is computing local or global minima of functions. We will exemplify this with the method of least squares, which fits the parameters of a function by minimizing an error measure.
**Problem context** Having a set of $m$ data points, $(x_1, y_1), (x_2, y_2),\dots,(x_m, y_m)$ and a curve (model function) $y=f(x, \boldsymbol \beta)$ that in addition to the variable $x$ also depends on $n$ parameters, $\boldsymbol \beta = (\beta_1, \beta_2, \dots, \beta_n)$ with $m\ge n$.
It is desired to find the vector $\boldsymbol \beta$ of parameters such that the curve fits best the given data in the least squares sense, that is, the sum of squares of the residuals is minimized:
$$ min \sum_{i=1}^{m}(y_i - f(x_i, \boldsymbol \beta))^2$$
Let us use an exercise similar to the basic linear regression performed in the statistics chapter, but fit a curve instead. That is to say, we are now performing a very basic form of nonlinear regression. While not strictly statistics related, this exercise can be useful, for example, if we want to decide how well a probability distribution fits our data. We will use least squares again, through the optimization module of scipy.
```python
%matplotlib inline
import numpy as np
import pylab as plt
from scipy import optimize
nsamp = 30
x = np.linspace(0,1,nsamp)
"""
y = -0.5*x**2 + 7*sin(x)
This is what we try to fit against. Suppose we know our function is generated
by this law and want to find the (-0.5, 7) parameters. Alternatively we might
not know anything about this dataset but just want to fit this curve to it.
"""
# define the normal function
f = lambda p, x: p[0]*x*x + p[1]*np.sin(x)
testp = (-0.5, 7)
print("True(unknown) parameter value:", testp)
y = f(testp,x)
yr = y + .5*np.random.normal(size=nsamp) # adding a small noise
# define the residual function
e = lambda p, x, y: (abs((f(p,x)-y))).sum()
p0 = (5, 20) # initial parameter value
print("Initial parameter value:", p0)
# uses the standard least squares algorithm
p_est1 = optimize.least_squares(e, p0, args=(x, yr))
print("Parameters estimated with least squares:",p_est1.x)
y_est1 = f(p_est1.x, x)
plt.plot(x,y_est1,'r-', x,yr,'o', x,y,'b-')
plt.show()
# uses a simplex algorithm
p_est2 = optimize.fmin(e, p0, args=(x,yr))
print("Parameters estimated with the simplex algorithm:",p_est2)
y_est2 = f(p_est2, x)
plt.plot(x,y_est2,'r-', x,yr,'o', x,y,'b-')
plt.show()
```
Exercises:
- Use a different nonlinear function.
- Define a normal Python function f() instead!
- Improve the LS fit by using non-standard loss functions (soft_l1, cauchy)
- Improve the LS fit by using different methods {‘dogbox’, ‘lm’}
## Gradient descent
Note that least squares is not an optimization method per se; it is a way to frame regression in terms of an optimization problem. Gradient descent is the basic optimization method behind most of modern machine learning, and numerical hardware and software today are often judged by how fast they can compute gradient descent. On GPUs, it sits as the foundation for Deep Learning and Reinforcement Learning.
The method makes an iterative walk in the direction opposite to the local gradient, until the step size becomes smaller than a chosen precision:
$$\mathbf{a}_{n+1} = \mathbf{a}_n-\gamma\nabla F(\mathbf{a}_n)$$
Here is the naive algorithm, adapted from Wikipedia:
```python
%matplotlib inline
import numpy as np
import pylab as plt
cur_x = 6 # The algorithm starts at x=6
gamma = 0.01 # step size multiplier
precision = 0.00001
previous_step_size = 1/precision; # some large value
f = lambda x: x**4 - 3 * x**3 + 2
df = lambda x: 4 * x**3 - 9 * x**2
x = np.linspace(-4,cur_x,100)
while previous_step_size > precision:
prev_x = cur_x
cur_x += -gamma * df(prev_x)
previous_step_size = abs(cur_x - prev_x)
print("The local minimum occurs at %f" % cur_x)
plt.plot(x,f(x),'b-')
```
Naive implementations suffer from many downsides, from slow convergence to oscillating across a valley. Another typical fault of the naive approach is assuming that the functions to be optimized are smooth, with small output variation for small variations in their parameters. Here is an example of how to perform gradient-based optimization with scipy on the Rosenbrock function, a function known to be ill-conditioned.
But first, here is some [practical advice from Scipy](https://www.scipy-lectures.org/advanced/mathematical_optimization/index.html):
- Gradient not known:
> In general, prefer BFGS or L-BFGS, even if you have to approximate numerically gradients. These are also the default if you omit the parameter method - depending if the problem has constraints or bounds. On well-conditioned problems, Powell and Nelder-Mead, both gradient-free methods, work well in high dimension, but they collapse for ill-conditioned problems.
- With knowledge of the gradient:
> BFGS or L-BFGS. Computational overhead of BFGS is larger than that L-BFGS, itself larger than that of conjugate gradient. On the other side, BFGS usually needs less function evaluations than CG. Thus conjugate gradient method is better than BFGS at optimizing computationally cheap functions.
With the Hessian:
- If you can compute the Hessian, prefer the Newton method (Newton-CG or TCG).
- If you have noisy measurements: Use Nelder-Mead or Powell.
```python
import numpy as np
import scipy.optimize as optimize
def f(x): # The rosenbrock function
return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
def fprime(x):
return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2), 2*(x[1] - x[0]**2)))
print(optimize.fmin_ncg(f, [2, 2], fprime=fprime))
def hessian(x): # Computed with sympy
return np.array(((1 - 4*x[1] + 12*x[0]**2, -4*x[0]), (-4*x[0], 2)))
print(optimize.fmin_ncg(f, [2, 2], fprime=fprime, fhess=hessian))
%matplotlib inline
from matplotlib import cm
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
X = np.arange(-1, 1, 0.005)
Y = np.arange(-1, 1, 0.005)
X, Y = np.meshgrid(X, Y)
Z = .5*(1 - X)**2 + (Y - X**2)**2
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
#ax.set_zlim(-1000.01, 1000.01)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
```
## Constraint optimization
Problem definition:
$$
\begin{array}{rcll}
\min &~& f(\mathbf{x}) & \\
\mathrm{subject~to} &~& g_i(\mathbf{x}) = c_i &\text{for } i=1,\ldots,n \quad \text{Equality constraints} \\
&~& h_j(\mathbf{x}) \geqq d_j &\text{for } j=1,\ldots,m \quad \text{Inequality constraints}
\end{array}
$$
Let's take the particular case when the objective function and the constraints are linear, as in the canonical form:
$$\begin{align}
& \text{maximize} && \mathbf{c}^\mathrm{T} \mathbf{x}\\
& \text{subject to} && A \mathbf{x} \leq \mathbf{b} \\
& \text{and} && \mathbf{x} \ge \mathbf{0}
\end{align}$$
Scipy has methods for optimizing functions under constraints, including linear programming. Additionally, many (linear or nonlinear) constrained optimization problems can be turned into unconstrained optimization problems using Lagrange multipliers. We will learn how to run linear problems with PuLP.
```python
"""
maximize: 4x + 3y
x > 0
y >= 2
x + 2y <= 25
2x - 4y <= 8
-2x + y <= -5
"""
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(0, 20, 2000)
y1 = (x*0) + 2
y2 = (25-x)/2.0
y3 = (2*x-8)/4.0
y4 = 2 * x -5
# Make plot
plt.plot(x, y1, label=r'$y\geq2$')
plt.plot(x, y2, label=r'$2y\leq25-x$')
plt.plot(x, y3, label=r'$4y\geq 2x - 8$')
plt.plot(x, y4, label=r'$y\leq 2x-5$')
plt.xlim((0, 16))
plt.ylim((0, 11))
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
# Fill feasible region
y5 = np.minimum(y2, y4)
y6 = np.maximum(y1, y3)
plt.fill_between(x, y5, y6, where=y5>y6, color='grey', alpha=0.5)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
The solution to an optimization problem with linear constraints lies somewhere in the gray (feasible) area, and it is not necessarily unique. However, for a linear objective function an optimal solution is attained at one of the vertices. Now let us frame it with PuLP:
```
conda install -c conda-forge pulp
```
```python
import pulp
my_lp_problem = pulp.LpProblem("My LP Problem", pulp.LpMaximize)
x = pulp.LpVariable('x', lowBound=0, cat='Continuous')
y = pulp.LpVariable('y', lowBound=2, cat='Continuous')
# Objective function
my_lp_problem += 4 * x + 3 * y, "Z"
# Constraints
my_lp_problem += 2 * y <= 25 - x
my_lp_problem += 4 * y >= 2 * x - 8
my_lp_problem += y <= 2 * x - 5
my_lp_problem
```
My LP Problem:
MAXIMIZE
4*x + 3*y + 0
SUBJECT TO
_C1: x + 2 y <= 25
_C2: - 2 x + 4 y >= -8
_C3: - 2 x + y <= -5
VARIABLES
x Continuous
2 <= y Continuous
```python
my_lp_problem.solve()
print(pulp.LpStatus[my_lp_problem.status])
for variable in my_lp_problem.variables():
print("{} = {}".format(variable.name, variable.varValue))
print(pulp.value(my_lp_problem.objective))
```
Optimal
x = 14.5
y = 5.25
73.75
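The same problem can also be solved with scipy's linear programming routine mentioned earlier. A sketch (note that `linprog` minimizes, so the objective is negated, and the `>=` constraint is rewritten in `A_ub x <= b_ub` form):

```python
from scipy.optimize import linprog

c = [-4, -3]                           # maximize 4x + 3y  ->  minimize -4x - 3y
A_ub = [[1, 2], [2, -4], [-2, 1]]      # x + 2y <= 25, 2x - 4y <= 8, -2x + y <= -5
b_ub = [25, 8, -5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (2, None)])
print(res.x, -res.fun)                 # should reproduce x = 14.5, y = 5.25, Z = 73.75
```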
Further complications arise when some of the variables need to be integers, in which case the problem becomes known as Mixed Integer Linear Programming (MILP) and is computationally more expensive. Yet such problems are quite frequent, for example in metabolic engineering, where you need to deal with machines operating on discrete intervals, or when studying protein folding or DNA recombination. In such cases one can also install Python packages that deal with the specific problem, such as [cobrapy](https://cobrapy.readthedocs.io/en/latest/) for metabolic engineering.
Further reading:
- For LP with PuLP, I recommend this tutorial, which also uses some real life problems, and has a github link for the notebooks: [http://benalexkeen.com/linear-programming-with-python-and-pulp/](http://benalexkeen.com/linear-programming-with-python-and-pulp/)
- For a great list of LP solvers check [https://stackoverflow.com/questions/26305704/python-mixed-integer-linear-programming](https://stackoverflow.com/questions/26305704/python-mixed-integer-linear-programming)
## Global optimization
The most computationally efficient optimization methods can only find local optima; reaching global optima requires different heuristics. A few methods worth mentioning in the class of global optimization algorithms are:
- Grid search: These methods belong to the class of brute-force or greedy searches and scan for solutions in multidimensional solution spaces. They are typically employed when searching for optimal parameter combinations in machine learning problems (hyperparameter tuning).
- Branch and Bound: This method, belonging to the more general class of dynamic programming, uses a rooted search tree to break the problem into smaller subproblems. It is used in LP/MILP solvers, for example, or by sequence alignment programs such as BLAST.
- Monte Carlo: These methods belong to the stochastic optimization class, which, instead of looking for an exact fit, uses random sampling and Bayesian statistics (see the small sketch after this list). They are expected to gain more traction as computing power grows.
- Heuristics: Many methods in this class are nature-inspired, such as genetic programming, ant colony optimization, etc.
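As a small, concrete illustration of the stochastic class, here is scipy's `differential_evolution` applied to the surface plotted at the start of this section (the search bounds are assumed):
```python
from scipy.optimize import differential_evolution

def rosen_like(p):
    x, y = p
    return 0.5 * (1 - x)**2 + (y - x**2)**2

result = differential_evolution(rosen_like, bounds=[(-2, 2), (-2, 2)], seed=42)
print(result.x, result.fun)  # should converge near (1, 1) with a value close to 0
```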
```python
import sys
sys.version
```
'3.6.5 |Anaconda, Inc.| (default, Mar 29 2018, 18:21:58) \n[GCC 7.2.0]'
```python
sys.path
```
['',
'/home/sergiu/programs/miniconda3/envs/pycourse/lib/python36.zip',
'/home/sergiu/programs/miniconda3/envs/pycourse/lib/python3.6',
'/home/sergiu/programs/miniconda3/envs/pycourse/lib/python3.6/lib-dynload',
'/home/sergiu/programs/miniconda3/envs/pycourse/lib/python3.6/site-packages',
'/home/sergiu/programs/miniconda3/envs/pycourse/lib/python3.6/site-packages/IPython/extensions',
'/home/sergiu/.ipython']
```python
```
| 1234964b99c6a467c013fafe683c6bff01e1dd88 | 126,104 | ipynb | Jupyter Notebook | day2/scicomp_optimization.ipynb | grokkaine/biopycourse | cb8b554abb987e6f657c5e522c7e28ecbc9fb4d5 | [
"CC0-1.0"
] | 9 | 2017-05-16T06:07:22.000Z | 2021-08-06T14:58:28.000Z | day2/scicomp_optimization.ipynb | grokkaine/biopycourse | cb8b554abb987e6f657c5e522c7e28ecbc9fb4d5 | [
"CC0-1.0"
] | null | null | null | day2/scicomp_optimization.ipynb | grokkaine/biopycourse | cb8b554abb987e6f657c5e522c7e28ecbc9fb4d5 | [
"CC0-1.0"
] | 18 | 2017-05-16T07:25:08.000Z | 2021-04-22T19:22:53.000Z | 213.373942 | 38,768 | 0.902105 | true | 3,605 | Qwen/Qwen-72B | 1. YES
2. YES | 0.839734 | 0.843895 | 0.708647 | __label__eng_Latn | 0.985462 | 0.484757 |
# Taylor problem 2.20 Template
last revised: 08-Jan-2019 by Dick Furnstahl [furnstahl.1@osu.edu]
**This is a template for solving problem 2.20. Go through and fill in the blanks where ### appears.**
The goal of this problem is to plot and comment on the trajectory of a projectile subject to linear air resistance, considering four different values of the drag coefficient.
The problem statement fixes the initial angle above the horizontal and suggests using convenient values for the initial speed (magnitude of the velocity) and gravitational strength $g$. We'll set up the problem more generally and look at special cases.
The equations are derived in the book:
$$\begin{align}
x(t) &= v_{x0}\tau (1 - e^{-t/\tau}) \\
y(t) &= (v_{y0} + v_{\textrm{ter}}) \tau (1 - e^{-t/\tau}) - v_{\textrm{ter}} t
\end{align}$$
where $v_{\textrm{ter}} = g\tau$.
Plan:
1. Define functions for $x$ and $y$, which will depend on $t$, $\tau$, $g$, and the initial velocity. Make the functions look like the equations from Taylor to reduce the possibility of error.
2. Set up an array of the time $t$.
3. Determine $x$ and $y$ arrays for different values of $\tau$.
4. Make a plot of $y$ versus $x$ for each value of $\tau$, all on the same plot.
5. Save the plot for printing.
```python
### What modules do we need to import? (Can always add more later!)
import numpy as np
```
### 1. Define functions for $x$ and $y$
```python
def x_traj(t, tau, v_x0=1., g=1.):
"""Horizontal position x(t) from equation (2.36) in Taylor.
The initial position at t=0 is x=y=0.
"""
return v_x0 * tau * (1. - np.exp(-t/tau))
def y_traj(t, tau, v_y0=1., g=1.):
"""Vertical position y(t) from equation (2.36) in Taylor.
The initial position at t=0 is x=y=0.
"""
v_ter = g * tau
return (v_y0 + v_ter) * tau * (1. - np.exp(-t/tau)) - v_ter*t
```
### 2. Set up an array of the time $t$
```python
t_min = 0.
t_max = 3.
delta_t = 0.00001 ### pick a reasonable delta_t
t_pts = np.arange(-1, 101) ### fill in the blanks
t_pts # check that we did what we thought!
```
array([ -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76,
77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100])
### 3., 4. Make $x$ and $y$ arrays for different $\tau$ and plot
```python
%matplotlib inline
### What module(s) should you import?
import matplotlib.pyplot as plt
```
```python
# generate random integer values
from random import seed
from random import randint
# seed random number generator
seed(1)
# generate some integers (collect them in a list so they can be used later)
def x_pts_fnc(n):
    return [randint(0, 10) for _ in range(n)]
```
```python
plt.rcParams.update({'font.size': 16}) # This is to boost the font size
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1) ### How do you create a single subplot?
tau_1 = 0.3
ax.plot(x_traj(t_pts, tau_1), y_traj(t_pts, tau_1), 'b-',
label=r'$\tau = 0.3$')
tau_2 = 1.0
ax.plot(x_traj(t_pts, tau_2), y_traj(t_pts, tau_2), 'r:',
label=r'$\tau = 1.0$')
tau_3 = 3.0
ax.plot(x_traj(t_pts, tau_3), y_traj(t_pts, tau_3), 'g--',
label=r'$\tau = 3.0$')
### plot a line with tau_3 and line type 'g--' with a label
tau_4 = 100.
ax.plot(x_traj(t_pts, tau_4), y_traj(t_pts, tau_4), 'k-',
label=r'$\tau = 100.0$')
### plot a line with tau_4 and line type 'k-' with a label
ax.set_ylim(-0.1, 1.5)
ax.set_xlim(0., 3)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_aspect(1) # so that the x and y spacing is the same
ax.legend();
```
### 5. Save the plot for printing
```python
# save the figure
fig.savefig('Taylor_prob_2.20.png', bbox_inches='tight')
### Find the figure file and display it in your browser, then save or print.
### Check you graph against the one from the next section.
```
## More advanced python: plot again with a loop
Now do it as a loop, cycling through properties, and add a vertical line at the asymptotic distance.
```python
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 18})
from cycler import cycler
my_cycler = (cycler(color=['k', 'g', 'b', 'r']) +
cycler(linestyle=['-', '--', ':', '-.']))
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1,1,1)
ax.set_prop_cycle(my_cycler)
v_x0 = 1.
tau_list = [10000, 3.0, 1.0, 0.3, 0.1]
for tau in tau_list:
ax.plot(x_traj(t_pts, tau), y_traj(t_pts, tau),
label=rf'$\tau = {tau:.1f}$')
ax.axvline(v_x0 * tau, color='black', linestyle='dotted')
ax.set_ylim(-0.1, 1.5)
ax.set_xlim(0., 3)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_aspect(1) # so that the x and y spacing is the same
ax.legend();
```
**If it is new to you, look up how a for loop in Python works and try to figure out what is happening here. Ask if you are confused!**
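If it helps, here is a stripped-down sketch of the same pattern with made-up numbers: the cycler hands out a new color/linestyle on every pass through the loop, and `axvline` marks the asymptote for each curve.
```python
from cycler import cycler
import numpy as np
import matplotlib.pyplot as plt

demo_cycler = cycler(color=['k', 'g']) + cycler(linestyle=['-', '--'])
fig, ax = plt.subplots()
ax.set_prop_cycle(demo_cycler)

xs = np.linspace(0., 3., 50)
for tau in [0.5, 2.0]:                      # the loop variable takes each value in the list in turn
    ax.plot(xs, 1. - np.exp(-xs / tau), label=rf'$\tau = {tau}$')
    ax.axvline(1.0 * tau, color='gray', linestyle='dotted')  # x = v_x0 * tau with v_x0 = 1 (assumed)
ax.legend();
```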
```python
```
| e6b0dacc568b13837728fc1acbed74ec48cf9f99 | 72,783 | ipynb | Jupyter Notebook | 2020_week_1/Taylor_problem_2.20_template.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 2020_week_1/Taylor_problem_2.20_template.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 2020_week_1/Taylor_problem_2.20_template.ipynb | CLima86/Physics_5300_CDL | d9e8ee0861d408a85b4be3adfc97e98afb4a1149 | [
"MIT"
] | null | null | null | 219.888218 | 34,000 | 0.906393 | true | 1,879 | Qwen/Qwen-72B | 1. YES
2. YES | 0.839734 | 0.757794 | 0.636346 | __label__eng_Latn | 0.934838 | 0.316775 |
# Understanding the FFT Algorithm
*This notebook first appeared as a post by Jake Vanderplas on [Pythonic Perambulations](http://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/). The notebook content is BSD-licensed.*
<!-- PELICAN_BEGIN_SUMMARY -->
The Fast Fourier Transform (FFT) is one of the most important algorithms in signal processing and data analysis. I've used it for years, but having no formal computer science background, it occurred to me this week that I've never thought to ask *how* the FFT computes the discrete Fourier transform so quickly. I dusted off an old algorithms book and looked into it, and enjoyed reading about the deceptively simple computational trick that JW Cooley and John Tukey outlined in their classic [1965 paper](http://www.ams.org/journals/mcom/1965-19-090/S0025-5718-1965-0178586-1/) introducing the subject.
The goal of this post is to dive into the Cooley-Tukey FFT algorithm, explaining the symmetries that lead to it, and to show some straightforward Python implementations putting the theory into practice. My hope is that this exploration will give data scientists like myself a more complete picture of what's going on in the background of the algorithms we use.
<!-- PELICAN_END_SUMMARY -->
## The Discrete Fourier Transform
The FFT is a fast, $\mathcal{O}[N\log N]$ algorithm to compute the Discrete Fourier Transform (DFT), which
naively is an $\mathcal{O}[N^2]$ computation. The DFT, like the more familiar continuous version of the Fourier transform, has a forward and inverse form which are defined as follows:
**Forward Discrete Fourier Transform (DFT):**
$$X_k = \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N}$$
**Inverse Discrete Fourier Transform (IDFT):**
$$x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k e^{i~2\pi~k~n~/~N}$$
The transformation from $x_n \to X_k$ is a translation from configuration space to frequency space, and can be very useful in both exploring the power spectrum of a signal, and also for transforming certain problems for more efficient computation. For some examples of this in action, you can check out Chapter 10 of our upcoming Astronomy/Statistics book, with figures and Python source code available [here](http://www.astroml.org/book_figures/chapter10/). For an example of the FFT being used to simplify an otherwise difficult differential equation integration, see my post on [Solving the Schrodinger Equation in Python](http://jakevdp.github.io/blog/2012/09/05/quantum-python/).
Because of the importance of the FFT in so many fields, Python contains many standard tools and wrappers to compute this. Both NumPy and SciPy have wrappers of the extremely well-tested FFTPACK library, found in the submodules ``numpy.fft`` and ``scipy.fftpack`` respectively. The fastest FFT I am aware of is in the [FFTW](http://www.fftw.org/) package, which is also available in Python via the [PyFFTW](https://pypi.python.org/pypi/pyFFTW) package.
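For example, a quick check that the two standard wrappers agree on the same input:
```python
import numpy as np
from scipy import fftpack

x = np.random.random(8)
print(np.allclose(np.fft.fft(x), fftpack.fft(x)))  # True
```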
For the moment, though, let's leave these implementations aside and ask how we might compute the FFT in Python from scratch.
## Computing the Discrete Fourier Transform
For simplicity, we'll concern ourself only with the forward transform, as the inverse transform can be implemented in a very similar manner. Taking a look at the DFT expression above, we see that it is nothing more than a straightforward linear operation: a matrix-vector multiplication of $\vec{x}$,
$$\vec{X} = M \cdot \vec{x}$$
with the matrix $M$ given by
$$M_{kn} = e^{-i~2\pi~k~n~/~N}.$$
With this in mind, we can compute the DFT using simple matrix multiplication as follows:
```python
import numpy as np
def DFT_slow(x):
"""Compute the discrete Fourier Transform of the 1D array x"""
x = np.asarray(x, dtype=float)
N = x.shape[0]
n = np.arange(N)
k = n.reshape((N, 1))
M = np.exp(-2j * np.pi * k * n / N)
return np.dot(M, x)
```
We can double-check the result by comparing to numpy's built-in FFT function:
```python
x = np.random.random(16)
np.allclose(DFT_slow(x), np.fft.fft(x))
```
True
Just to confirm the sluggishness of our algorithm, we can compare the execution times
of these two approaches:
```python
%timeit DFT_slow(x)
%timeit np.fft.fft(x)
```
62.5 µs ± 3.27 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
3.72 µs ± 296 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
We are more than an order of magnitude slower, which is to be expected for such a simplistic implementation. But that's not the worst of it. For an input vector of length $N$, the FFT algorithm scales as $\mathcal{O}[N\log N]$, while our slow algorithm scales as $\mathcal{O}[N^2]$. That means that for $N=10^6$ elements, we'd expect the FFT to complete in somewhere around 50 ms, while our slow algorithm would take nearly 20 hours!
So how does the FFT accomplish this speedup? The answer lies in exploiting symmetry.
## Symmetries in the Discrete Fourier Transform
One of the most important tools in the belt of an algorithm-builder is to exploit symmetries of a problem. If you can show analytically that one piece of a problem is simply related to another, you can compute the subresult
only once and save that computational cost. Cooley and Tukey used exactly this approach in deriving the FFT.
We'll start by asking what the value of $X_{N+k}$ is. From our above expression:
$$
\begin{align*}
X_{N + k} &= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~(N + k)~n~/~N}\\
&= \sum_{n=0}^{N-1} x_n \cdot e^{- i~2\pi~n} \cdot e^{-i~2\pi~k~n~/~N}\\
&= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N}
\end{align*}
$$
where we've used the identity $\exp[2\pi~i~n] = 1$ which holds for any integer $n$.
The last line shows a nice symmetry property of the DFT:
$$X_{N+k} = X_k.$$
By a simple extension,
$$X_{(k + i) \cdot N} = X_k$$
for any integer $i$. As we'll see below, this symmetry can be exploited to compute the DFT much more quickly.
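A quick numerical check of this periodicity, evaluating the DFT sum directly at $k$ and $k + N$:
```python
x = np.random.random(8)
N = len(x)
n = np.arange(N)
k = 3  # any integer index works
X_k  = np.sum(x * np.exp(-2j * np.pi * k * n / N))
X_kN = np.sum(x * np.exp(-2j * np.pi * (k + N) * n / N))
print(np.allclose(X_k, X_kN))  # True
```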
## DFT to FFT: Exploiting Symmetry
Cooley and Tukey showed that it's possible to divide the DFT computation into two smaller parts. From
the definition of the DFT we have:
$$
\begin{align}
X_k &= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N} \\
&= \sum_{m=0}^{N/2 - 1} x_{2m} \cdot e^{-i~2\pi~k~(2m)~/~N} + \sum_{m=0}^{N/2 - 1} x_{2m + 1} \cdot e^{-i~2\pi~k~(2m + 1)~/~N} \\
&= \sum_{m=0}^{N/2 - 1} x_{2m} \cdot e^{-i~2\pi~k~m~/~(N/2)} + e^{-i~2\pi~k~/~N} \sum_{m=0}^{N/2 - 1} x_{2m + 1} \cdot e^{-i~2\pi~k~m~/~(N/2)}
\end{align}
$$
We've split the single Discrete Fourier transform into two terms which themselves look very similar to smaller Discrete Fourier Transforms, one on the odd-numbered values, and one on the even-numbered values. So far, however, we haven't saved any computational cycles. Each term consists of $(N/2)*N$ computations, for a total of $N^2$.
The trick comes in making use of symmetries in each of these terms. Because the range of $k$ is $0 \le k < N$, while the range of $n$ is $0 \le n < M \equiv N/2$, we see from the symmetry properties above that we need only perform half the computations for each sub-problem. Our $\mathcal{O}[N^2]$ computation has become $\mathcal{O}[M^2]$, with $M$ half the size of $N$.
But there's no reason to stop there: as long as our smaller Fourier transforms have an even-valued $M$, we can reapply this divide-and-conquer approach, halving the computational cost each time, until our arrays are small enough that the strategy is no longer beneficial. In the asymptotic limit, this recursive approach scales as $\mathcal{O}[N\log N]$.
This recursive algorithm can be implemented very quickly in Python, falling-back on our slow DFT code when the size of the sub-problem becomes suitably small:
```python
def FFT(x):
"""A recursive implementation of the 1D Cooley-Tukey FFT"""
x = np.asarray(x, dtype=float)
N = x.shape[0]
if N % 2 > 0:
raise ValueError("size of x must be a power of 2")
elif N <= 32: # this cutoff should be optimized
return DFT_slow(x)
else:
X_even = FFT(x[::2])
X_odd = FFT(x[1::2])
factor = np.exp(-2j * np.pi * np.arange(N) / N)
return np.concatenate([X_even + factor[:int(N / 2)] * X_odd,
X_even + factor[int(N / 2):] * X_odd])
```
Here we'll do a quick check that our algorithm produces the correct result:
```python
x = np.random.random(1024)
np.allclose(FFT(x), np.fft.fft(x))
```
True
And we'll time this algorithm against our slow version:
```python
%timeit DFT_slow(x)
%timeit FFT(x)
%timeit np.fft.fft(x)
```
73.4 ms ± 108 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
5.13 ms ± 196 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.88 µs ± 144 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Our calculation is faster than the naive version by over an order of magnitude! What's more, our recursive algorithm is asymptotically $\mathcal{O}[N\log N]$: we've implemented the Fast Fourier Transform.
Note that we still haven't come close to the speed of the built-in FFT algorithm in numpy, and this is to be expected. The FFTPACK algorithm behind numpy's ``fft`` is a Fortran implementation which has received years of tweaks and optimizations. Furthermore, our NumPy solution involves both Python-stack recursions and the allocation of many temporary arrays, which adds significant computation time.
A good strategy to speed up code when working with Python/NumPy is to vectorize repeated computations where possible. We can do this, and in the process remove our recursive function calls, and make our Python FFT even more efficient.
## Vectorized Numpy Version
Notice that in the above recursive FFT implementation, at the lowest recursion level we perform $N~/~32$ identical matrix-vector products. The efficiency of our algorithm would benefit by computing these matrix-vector products all at once as a single matrix-matrix product. At each subsequent level of recursion, we also perform duplicate operations which can be vectorized. NumPy excels at this sort of operation, and we can make use of that fact to create this vectorized version of the Fast Fourier Transform:
```python
def FFT_vectorized(x):
"""A vectorized, non-recursive version of the Cooley-Tukey FFT"""
x = np.asarray(x, dtype=float)
N = x.shape[0]
if np.log2(N) % 1 > 0:
raise ValueError("size of x must be a power of 2")
# N_min here is equivalent to the stopping condition above,
# and should be a power of 2
N_min = min(N, 32)
# Perform an O[N^2] DFT on all length-N_min sub-problems at once
n = np.arange(N_min)
k = n[:, None]
M = np.exp(-2j * np.pi * n * k / N_min)
X = np.dot(M, x.reshape((N_min, -1)))
# build-up each level of the recursive calculation all at once
while X.shape[0] < N:
X_even = X[:, :int(X.shape[1] / 2)]
X_odd = X[:, int(X.shape[1] / 2):]
factor = np.exp(-1j * np.pi * np.arange(X.shape[0])/ X.shape[0])[:, None]
X = np.vstack([X_even + factor * X_odd,
X_even - factor * X_odd])
return X.ravel()
```
Though the algorithm is a bit more opaque, it is simply a rearrangement of the operations used in the recursive version with one exception: we exploit a symmetry in the ``factor`` computation and construct only half of the array. Again, we'll confirm that our function yields the correct result:
```python
x = np.random.random(2048)
np.allclose(FFT_vectorized(x), np.fft.fft(x))
```
True
Because our algorithms are becoming much more efficient, we can use a larger array to compare the timings,
leaving out ``DFT_slow``:
```python
x = np.random.random(1024 * 16)
%timeit FFT(x)
%timeit FFT_vectorized(x)
%timeit np.fft.fft(x)
```
101 ms ± 799 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.29 ms ± 29.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
256 µs ± 3.05 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
We've improved our implementation by another order of magnitude! We're now within about a factor of 10 of the FFTPACK benchmark, using only a couple dozen lines of pure Python + NumPy. Though it's still no match computationally speaking, readability-wise the Python version is far superior to the FFTPACK source, which you can browse [here](http://www.netlib.org/fftpack/fft.c).
So how does FFTPACK attain this last bit of speedup? Well, mainly it's just a matter of detailed bookkeeping. FFTPACK spends a lot of time making sure to reuse any sub-computation that can be reused. Our numpy version still involves an excess of memory allocation and copying; in a low-level language like Fortran it's easier to control and minimize memory use. In addition, the Cooley-Tukey algorithm can be extended to use splits of size other than 2 (what we've implemented here is known as the *radix-2* Cooley-Tukey FFT). Also, other more sophisticated FFT algorithms may be used, including fundamentally distinct approaches based on convolutions (see, e.g. Bluestein's algorithm and Rader's algorithm). The combination of the above extensions and techniques can lead to very fast FFTs even on arrays whose size is not a power of two.
Though the pure-Python functions are probably not useful in practice, I hope they've provided a bit of an intuition into what's going on in the background of FFT-based data analysis. As data scientists, we can make-do with black-box implementations of fundamental tools constructed by our more algorithmically-minded colleagues, but I am a firm believer that the more understanding we have about the low-level algorithms we're applying to our data, the better practitioners we'll be.
*This blog post was written entirely in the IPython Notebook. The full notebook can be downloaded
[here](http://jakevdp.github.io/downloads/notebooks/UnderstandingTheFFT.ipynb),
or viewed statically
[here](http://nbviewer.ipython.org/url/jakevdp.github.io/downloads/notebooks/UnderstandingTheFFT.ipynb).*
```python
```
| c8bd85d482f44e5b4141d30ddc73154d99dec2c9 | 20,214 | ipynb | Jupyter Notebook | 10. Fast_Fourier_Transform/FFT.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | 2 | 2022-01-25T04:58:58.000Z | 2022-03-24T23:00:13.000Z | 10. Fast_Fourier_Transform/FFT.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | 1 | 2021-11-25T00:39:40.000Z | 2021-11-25T00:39:40.000Z | 10. Fast_Fourier_Transform/FFT.ipynb | mriosrivas/DSP_Student_2021 | 7d978d5a538e2eb198dfbe073b4d8dcbf1aa756f | [
"MIT"
] | null | null | null | 38.429658 | 851 | 0.605719 | true | 3,839 | Qwen/Qwen-72B | 1. YES
2. YES | 0.774583 | 0.874077 | 0.677046 | __label__eng_Latn | 0.994309 | 0.411335 |
# Simulate Euclid Images Using HST Ones
In this notebook, we are going to simulate, step by step, a Euclid space telescope image using an HST one.
First things first, we start by preparing the workspace.
```python
# to correctly show figures
%matplotlib inline
# import libraries here
import galsim
import numpy as np
import matplotlib.pyplot as plt
```
Here we are going to load the required Euclid and HST parameters.
> Euclid telescope specifications can be found [here](https://github.com/LSSTDESC/WeakLensingDeblending/blob/9f851f79f6f820f815528d11acabf64083b6e111/descwl/survey.py#L366).
```python
pixel_scale = 0.101
wcs = galsim.wcs.PixelScale(pixel_scale) #wcs: world coordinate system. Variable used to draw images in galsim
lam = 700 # nm
diam = 1.3 # meters
lam_over_diam = (lam * 1.e-9) / diam # radians
lam_over_diam *= 206265 # Convert to arcsec
exp_time = 2260# exposure time
euclid_eff_area = 1.15 #effective area
```
Load the [COSMOS](https://github.com/GalSim-developers/GalSim/wiki/RealGalaxy%20Data) catalog and generate a galaxy and a PSF.
```python
catalog = galsim.COSMOSCatalog() # load catalog
img_len = 64 # arbitrary value, practical because power of 2
gal_ind = 133 # galaxy index in the catalog
gal = catalog.makeGalaxy(gal_ind, noise_pad_size=img_len * pixel_scale * np.sqrt(2))
psf = galsim.OpticalPSF(lam=lam, diam=diam, scale_unit=galsim.arcsec)
```
Now that we have loaded a galaxy from the catalog, let's rescale its flux such that it corresponds to a Euclid flux.
> The flux rescaling formula can be found [here](https://github.com/GalSim-developers/GalSim/blob/releases/2.2/examples/demo11.py#L110).
```python
hst_eff_area = 2.4**2 * (1.-0.33**2)
flux_scaling = (euclid_eff_area/hst_eff_area) * exp_time
gal *= flux_scaling
```
Apply the simulated Euclid PSF on the galaxy image.
```python
gal = galsim.Convolve(gal, psf)
```
Let's have a look at the galaxy and the PSF. In the galaxy image, denoted $X$, we try to visually separate the noise (whose standard deviation is denoted $\sigma$) from the useful signal by applying the following transform:
\begin{equation}
\text{ArcSinh}\left(\frac{X}{k\sigma}\right)\cdot k\sigma
\end{equation}
> <b>Technical note:</b> The noise standard deviation is usually estimated with more accurate methods, such as masking the galaxy with a window and then estimating the standard deviation on the remaining samples, which contain only noise. For the sake of simplicity, in this example we simply took an area of the image that contains only noise and estimated the noise standard deviation there.
```python
# Get the standard deviation value of the noise for real images
gal_im = gal.drawImage(wcs=wcs, nx=img_len,ny=img_len)
# Empirically estimate the standard deviation by considering a part of the image containing only noise
hst_std = np.std(gal_im.array[0:25,0:25])
k=4
plt.figure(figsize=(20,20))
plt.subplot(121)
plt.title('ArcSinh of Convolved COSMOS Galaxy {}'.format(gal_ind))
plt.imshow(np.arcsinh(gal_im.array/(k*hst_std))*k*hst_std)
plt.subplot(122)
plt.imshow(np.log10(psf.drawImage(wcs=wcs, nx=img_len,ny=img_len).array))
plt.title(r'Log$_{10}$ Euclid-like PSF')
plt.show()
```
The noise that we see in the image above corresponds to HST noise (which is also correlated, due to the division by the HST PSF and the multiplication by the Euclid-like one). We are going to adapt this noise to Euclid. First, we compute the Euclid global noise standard deviation.
To do so, we compute $\lambda$ (in electrons per pixel), the Poisson parameter of the noise, and approximate the noise with white Gaussian noise whose standard deviation is $\sqrt{\lambda}$.
> The $\lambda$ parameter corresponds to the `mean_sky_level` which expression can be find [here](https://github.com/LSSTDESC/WeakLensingDeblending/blob/9f851f79f6f820f815528d11acabf64083b6e111/descwl/survey.py#L110)
```python
def get_flux(ab_magnitude):
zero_point = 6.85
return exp_time*zero_point*10**(-0.4*(ab_magnitude-24))
sky_brightness = 22.9207
pixel_scale = 0.101
mean_sky_level = get_flux(sky_brightness)*pixel_scale**2 # it is the Poisson noise parameter
sigma = np.sqrt(mean_sky_level) # we modelize the noise as a Gaussian noise such that it std
# is the sqrt of the Poisson parameter
print('Euclid global noise standard deviation: {:.2f}'.format(sigma))
```
Euclid global noise standard deviation: 20.66
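As a purely illustrative sanity check, drawing Poisson samples at this rate and measuring their spread gives a value close to $\sqrt{\lambda}$:
```python
poisson_draws = np.random.poisson(mean_sky_level, 100000)
print(poisson_draws.std(), np.sqrt(mean_sky_level))  # both around 20.7
```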
Then we estimate the HST noise standard deviation and take it into account when adding the extra noise, such that we end up with the Euclid noise standard deviation.
> <b>Reminder:</b> For any independent random variables, the variance of the sum of those variables is equal to the sum of the variances.
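A quick numerical illustration of that reminder, with arbitrary standard deviations:
```python
# Var(A + B) = Var(A) + Var(B) for independent noise realizations
a = np.random.normal(0., 3., 100000)
b = np.random.normal(0., 4., 100000)
print(np.var(a + b), 3.**2 + 4.**2)  # both close to 25
```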
```python
# Add noise
delta_std = np.sqrt(sigma**2 - hst_std**2)
random_seed = 24783923 #same as galsim demo 11
noise = galsim.GaussianNoise(galsim.BaseDeviate(random_seed), sigma=delta_std)
gal_im.addNoise(noise)
image = gal_im.array
```
Now that we have simulated the Euclid observed image, let's display it and estimate its noise standard deviation as a check.
```python
plt.figure(2, figsize=(10,10))
# apply the same arcsinh stretch as before to visually separate signal from noise
plt.imshow(np.arcsinh(image/(k*sigma))*k*sigma)
plt.show()
print('Standard Deviation Value of Euclid Simulated Image: {:.2f}'.format(np.std(image[0:25,0:25])))
```
| 9b62475ed5c5b72a1a3872bdadd15863dca1183d | 109,757 | ipynb | Jupyter Notebook | data/euclid_generation_example/HST2Euclid.ipynb | CosmoStat/ShapeDeconv | 3869cb6b9870ff1060498eedcb99e8f95908f01a | [
"MIT"
] | 4 | 2020-12-17T14:58:28.000Z | 2022-01-22T06:03:55.000Z | data/euclid_generation_example/HST2Euclid.ipynb | CosmoStat/ShapeDeconv | 3869cb6b9870ff1060498eedcb99e8f95908f01a | [
"MIT"
] | 9 | 2021-01-13T10:38:28.000Z | 2021-07-06T23:37:08.000Z | data/euclid_generation_example/HST2Euclid.ipynb | CosmoStat/ShapeDeconv | 3869cb6b9870ff1060498eedcb99e8f95908f01a | [
"MIT"
] | null | null | null | 374.59727 | 58,460 | 0.91135 | true | 1,457 | Qwen/Qwen-72B | 1. YES
2. YES | 0.845942 | 0.743168 | 0.628677 | __label__eng_Latn | 0.944272 | 0.298959 |
# <center> Single compartment model using double exponentials</center>
## Summary
### 1. Setup and testing
The model is trying to simulate a single compartment,
$$ C_m \frac{dV_m}{dt} = g_{leak}(V_m - E_{leak}) + g_{exc}(V_m - E_{AMPA}) + g_{inh}(V_m - E_{GABA})$$
Here $E$'s are reversal potentials, $V_m$ is membrane potential, $g$'s are conductances, and $C_m$ is the membrane capacitance.
The synaptic conductances are modeled as double exponentials: $$g(t) = \bar{g}\frac{( e^\frac{\delta_{onset} - t }{\tau_{decay}} - e^\frac{\delta_{onset} - t }{\tau_{rise}})}{- \left(\frac{\tau_{rise}}{\tau_{decay}}\right)^{\frac{\tau_{decay}}{\tau_{decay} - \tau_{rise}}} + \left(\frac{\tau_{rise}}{\tau_{decay}}\right)^{\frac{\tau_{rise}}{\tau_{decay} - \tau_{rise}}}}$$
Here $\bar{g}$ is maximum conductance, $\tau$ and $\delta$ are time course and onset delay parameters respectively. The denominator is to normalize the term to 1. All the quantities in the model have been set up with sympy and have units.
In this notebook, later the model is tested by plotting excitatory and inhibitory PSPs with the parameters.
### 2. Exploring hypotheses
After setting up the model with reasonable parameters from literature or data, the following hypotheses are explored:
1. What can give rise to divisive normalization?
1. No inhibition?
2. Proportional excitation and inhibition?
3. Divisive recruitment kinetics of inhibition? (Here g_i recruitment is a nonlinear function in g_e).
4. Proportional excitation and inhibition, with delays as a function of excitation?
### 3. Still to be done:
1. Put a thresholding non-linearity and check spike times as a function of g_e.
___
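Before the sympy setup below, here is a minimal standalone numpy sketch of the normalized double-exponential conductance defined above (the parameter values are assumed, purely for illustration):
```python
import numpy as np

def g_syn(t_ms, g_max=1.0, tau_rise=3.6, tau_decay=12.0, delta=0.0):
    """Double-exponential conductance (arbitrary units), normalized so its peak equals g_max."""
    r = tau_rise / tau_decay
    # peak value of exp(-s/tau_decay) - exp(-s/tau_rise), used to normalize the peak to 1
    norm = r**(tau_rise / (tau_decay - tau_rise)) - r**(tau_decay / (tau_decay - tau_rise))
    g = np.exp((delta - t_ms) / tau_decay) - np.exp((delta - t_ms) / tau_rise)
    return np.where(t_ms < delta, 0.0, g_max * g / norm)

t_ms = np.arange(0., 100., 0.05)
print(g_syn(t_ms).max())  # ~1.0, i.e. the peak equals g_max
```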
```python
from IPython.display import display, Markdown
```
```python
# Parallelizing over the cores
import os
# import ipyparallel as ipp
# clients = ipp.Client()
# dview = clients.direct_view()
# print(clients.ids)
```
```python
#with dview.sync_imports():
from sympy import symbols, exp, solve, logcombine, simplify, Piecewise, lambdify, N, init_printing, Eq
import numpy
#from sympy import *
from sympy.physics.units import seconds, siemens, volts, farads, amperes, milli, micro, nano, pico, ms, s, kg, meters
from matplotlib import pyplot as plt
#%px plt = pyplot
```
```python
#dview.block=True
```
```python
init_printing()
```
```python
#matplotlib notebook
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10,6)
import matplotlib.pyplot as plt
plt.style.use('seaborn-notebook')
```
#### Simulation time
```python
samplingRate = 20 # kHz, to get milliseconds
sample_every = 1 # ms
timeStep, maxTime = (sample_every*1.)/ samplingRate, 100. # ms
trange = numpy.arange(
0., maxTime, timeStep) # We will always use 100. ms timecourse of PSPs.
```
#### Range of $g_e$ explored
```python
emax = 4
e_step = 0.1
erange = numpy.arange(0., emax, e_step)
```
#### Range of proportionality ($P$) between $E$ and $I$
```python
prop_array = numpy.arange(1, 5, 1)
```
```python
# dview.push(dict(trange=trange, erange=erange, prop_array=prop_array))
```
## Setting up the variables, parameters and units for simulation
```python
t, P, e_r, e_d, delta_e, rho_e, g_e, i_r, i_d, delta_i, rho_i, g_i, b, Cm, g_L = symbols(
't P \\tau_{er} \\tau_{ed} \\delta_e \\rho_e \\bar{g}_e \\tau_{ir} \\tau_{id} \\delta_i \\rho_i \\bar{g}_i \\beta C_m \\bar{g}_L',
positive=True,
real=True)
```
```python
leak_rev, e_rev, i_rev, Vm = symbols(
'Leak_{rev} Exc_{rev} Inh_{rev} V_m', real=True)
```
```python
SymbolDict = {
t: "Time (ms)",
P: "Proportion of $g_i/g_e$",
e_r: "Excitatory Rise (ms)",
e_d: "Excitatory Fall (ms)",
delta_e: "Excitatory onset time (ms)",
rho_e: "Excitatory $tau$ ratio (fall/rise)",
g_e: "Excitatory max conductance",
i_r: "Inhibitory Rise (ms)",
i_d: "Inhibitory Fall(ms)",
delta_i: "Inhibitory onset time(ms)",
rho_i: "Inhibitory $tau$ ratio (fall/rise)",
g_i: "Inhibitory max conductance",
b: "Inhibitory/Excitatory $tau$ rise ratio"
}
```
```python
unitsDict = {
's': seconds,
'exp': exp,
'S': siemens,
'V': volts,
'A': amperes,
'm': meters,
'kg': kg
} # This is for lamdify
```
```python
nS, pF, mV, pA = nano * siemens, pico * farads, milli * volts, pico*amperes
```
### Estimates from data and averaging them to get a number
```python
estimateDict = {
P: (1.9, 2.1),
e_r: (1.5 * ms, 5 * ms),
e_d: (8. * ms, 20. * ms),
delta_e: (0. * ms, 0. * ms),
rho_e: (2., 7.),
g_e: (0.02 * nS, 0.25 * nS),
i_r: (1.5 * ms, 5. * ms),
i_d: (14. * ms, 60. * ms),
delta_i: (2. * ms, 4. * ms),
rho_i: (5., 20.),
g_i: (0.04 * nS, 0.5 * nS),
b: (0.5, 5.)
}
```
```python
averageEstimateDict = {
key: value[0] + value[1] / 2
for key, value in estimateDict.items()
}
```
```python
# Customized some numbers.
```
```python
averageEstimateDict[i_r], averageEstimateDict[e_r] = 3.6 * ms, 6. * ms
```
```python
print ("| Variable | Meaning | Range |")
print ("|---|---|---|")
print ("|$)t$|Time (ms)|0-100|")
for i in [P, e_r, e_d, delta_e, rho_e, g_e, i_r, i_d, delta_i, rho_i, g_i, b]:
print ("|${}$|{}|{}-{}|".format(i, SymbolDict[i], estimateDict[i][0],
estimateDict[i][1]))
```
| Variable | Meaning | Range |
|---|---|---|
|$)t$|Time (ms)|0-100|
|$P$|Proportion of $g_i/g_e$|1.9-2.1|
|$\tau_{er}$|Excitatory Rise (ms)|0.0015*s-s/200|
|$\tau_{ed}$|Excitatory Fall (ms)|0.008*s-0.02*s|
|$\delta_e$|Excitatory onset time (ms)|0-0|
|$\rho_e$|Excitatory $tau$ ratio (fall/rise)|2.0-7.0|
|$\bar{g}_e$|Excitatory max conductance|2.0e-11*A**2*s**3/(kg*m**2)-2.5e-10*A**2*s**3/(kg*m**2)|
|$\tau_{ir}$|Inhibitory Rise (ms)|0.0015*s-0.005*s|
|$\tau_{id}$|Inhibitory Fall(ms)|0.014*s-0.06*s|
|$\delta_i$|Inhibitory onset time(ms)|0.002*s-0.004*s|
|$\rho_i$|Inhibitory $tau$ ratio (fall/rise)|5.0-20.0|
|$\bar{g}_i$|Inhibitory max conductance|4.0e-11*A**2*s**3/(kg*m**2)-5.0e-10*A**2*s**3/(kg*m**2)|
|$\beta$|Inhibitory/Excitatory $tau$ rise ratio|0.5-5.0|
| Variable | Meaning | Range |
|---|---|---|
|$t$|Time (ms)|0-100|
|$P$|Proportion of $g_i/g_e$|1.9-2.1|
|$\tau_{er}$|Excitatory Rise (ms)|1.5-5|
|$\tau_{ed}$|Excitatory Fall (ms)|8-20|
|$\delta_e$|Excitatory onset time (ms)|0-0|
|$\rho_e$|Excitatory $tau$ ratio (fall/rise)|2-7|
|$\bar{g}_e$|Excitatory max conductance|0.02-0.25|
|$\tau_{ir}$|Inhibitory Rise (ms)|1.5-5|
|$\tau_{id}$|Inhibitory Fall(ms)|14-60|
|$\delta_i$|Inhibitory onset time(ms)|3-15|
|$\rho_i$|Inhibitory $tau$ ratio (fall/rise)|5-20|
|$\bar{g}_i$|Inhibitory max conductance|0.04-0.5|
|$\beta$|Inhibitory/Excitatory $tau$ rise ratio|0.5-5|
### Approximating the rest from literature
```python
approximateDict = {
g_L: 10 * nS,
e_rev: 0. * mV,
i_rev: -70. * mV,
leak_rev: -65. * mV,
Cm: 100 * pF
}
sourceDict = {
g_L: "None",
e_rev: "None",
i_rev: "None",
leak_rev: "None",
Cm: "Neuroelectro.org"
}
```
```python
print ("| Variable | Meaning | Source | Value |")
print ("|---|---|---|")
print ("|$g_L$|Leak conductance|Undefined| 10 nS |")
print ("|$Exc_{rev}$|Excitatory reversal|Undefined| 0 mV|")
print ("|$Inh_{rev}$|Inhibitory reversal |Undefined| -70 mV |")
print ("|$Leak_{rev}$|Leak reversal |Undefined| -65 mV |")
print ("|$C_m$|Membrane capacitance |neuroelectro.org| 100 pF|")
```
| Variable | Meaning | Source | Value |
|---|---|---|
|$g_L$|Leak conductance|Undefined| 10 nS |
|$Exc_{rev}$|Excitatory reversal|Undefined| 0 mV|
|$Inh_{rev}$|Inhibitory reversal |Undefined| -70 mV |
|$Leak_{rev}$|Leak reversal |Undefined| -65 mV |
|$C_m$|Membrane capacitance |neuroelectro.org| 100 pF|
| Variable | Meaning | Source | Value |
|---|---|---|
|$g_L$|Leak conductance|Undefined| 10 nS |
|$Exc_{rev}$|Excitatory reversal|Undefined| 0 mV|
|$Inh_{rev}$|Inhibitory reversal |Undefined| -70 mV |
|$Leak_{rev}$|Leak reversal |Undefined| -65 mV |
|$C_m$|Membrane capacitance |neuroelectro.org| 100 pF|
## Functions
### Check spike times
```python
def find_spike_time(voltage, threshold= 25*mV ):
    ''' Returns the time of the first crossing of `threshold`.
        Note: numpy.argmax returns index 0 if the threshold is never crossed. '''
return numpy.argmax(voltage > threshold) * timeStep * ms
```
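A quick illustrative call on a made-up, unitless trace (hypothetical values; with the 20 kHz sampling defined above, sample index 2 corresponds to 0.1 ms):
```python
toy_trace = numpy.array([-65., -30., 30., 10.])   # hypothetical voltage samples, treated as plain numbers
find_spike_time(toy_trace, threshold=25.)         # first crossing at index 2 -> 2 * timeStep * ms = 0.1 ms
```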
---
### Double exponential to explain the net synaptic conductance.
```python
alpha = exp(-(t - delta_e) / e_d) - exp(-(t - delta_e) / e_r)
```
```python
alpha
```
```python
#alpha = alpha.subs(e_d, (rho_e*e_r)).doit()
```
```python
alpha_prime = alpha.diff(t)
```
```python
alpha_prime
```
```python
theta_e = solve(alpha_prime, t) # Time to peak
```
```python
theta_e = logcombine(theta_e[0])
```
```python
theta_e
```
```python
simplify(theta_e.subs(averageEstimateDict))
```
```python
alpha_star = simplify(alpha.subs(t, theta_e).doit())
```
```python
alpha_star
```
```python
#alpha_star = simplify(alpha) # Replacing e_d/e_r with tau_e
```
### Finding maximum of the curve and substituting ratio of taus
```python
alpha_star
```
```python
g_E = Piecewise((0. * nS, t / ms < delta_e / ms), (g_e * (alpha / alpha_star),
True))
```
### Final equation for Excitation normalized to be maximum at $g_e$
```python
g_E
```
### Verifying that E Behaves
```python
E_check = g_E.subs(averageEstimateDict).evalf()
```
```python
E_check
```
```python
E_check.free_symbols
```
```python
f = lambdify(
(t), simplify(E_check / nS), modules=("sympy", "numpy", unitsDict))
```
```python
fig, ax = plt.subplots()
ax.plot(trange, [f(dt * ms) for dt in trange], label="Excitation")
ax.set_xlabel("Time (in ms)")
ax.set_ylabel("Conductance (in nS)")
ax.legend()
```
```python
plt.close(fig)
```
### Doing the same with inhibition
```python
g_I = g_E.xreplace({
g_e: g_i,
rho_e: rho_i,
e_r: i_r,
e_d: i_d,
delta_e: delta_i
})
```
```python
alpha_I = alpha.xreplace({e_r: i_r, e_d: i_d, delta_e: delta_i})
alpha_star_I = alpha_star.xreplace({e_r: i_r, e_d: i_d})
```
```python
g_I = Piecewise((0. * nS, t / ms < delta_i / ms),
(g_i * (alpha_I / alpha_star_I), True))
```
```python
g_I
```
### Verifying that I Behaves
```python
I_check = simplify(g_I.subs(averageEstimateDict).evalf())
```
```python
N(I_check.subs({t: 4.9 * ms}))
```
```python
f = lambdify(t, simplify(I_check / nS), modules=("sympy", "numpy", unitsDict))
```
```python
f(5 * ms)
```
array(0.0868746200026281, dtype=object)
```python
plt.plot(trange, [-f(dt * ms) for dt in trange], label="Inhibition")
plt.xlabel("Time (in ms)")
plt.ylabel("Conductance (in nS)")
plt.legend()
plt.show()
```
### Now finding the control response by combining these double-exponential conductances in the membrane equation
```python
compartment = Eq((1 / Cm) * (g_E * (Vm - e_rev) + g_I * (Vm - i_rev) + g_L *
(Vm - leak_rev)), Vm.diff(t))
```
```python
compartment
```
```python
# Vm is a plain Symbol, so Vm.diff(t) evaluates to 0 and solve() returns the
# instantaneous (quasi-steady-state) membrane potential for the given conductances.
Vm_t = solve(compartment, Vm, rational=False, simplify=True)
```
```python
check_vm_t = Vm_t[0].subs(averageEstimateDict).subs(approximateDict)/mV
vm_change = [check_vm_t.subs({t:dt*ms}) for dt in trange]
```
```python
plt.plot(trange, vm_change)
plt.show()
```
### Sending to clusters
```python
# symbols_to_pass = dict(t=t, P=P, e_r=e_r, e_d=e_d, delta_e=delta_e, rho_e=rho_e, g_e=g_e, i_r=i_r, i_d=i_d, delta_i=delta_i, rho_i=rho_i, g_i=g_i, b=b, Cm=Cm, g_L=g_L, leak_rev=leak_rev, e_rev=e_rev, i_rev=i_rev, Vm=Vm)
# units_to_pass = dict(nS=nS, pF=pF, mV=mV)
# expressions_to_pass = dict(Vm_t=Vm_t)
```
```python
# dview.push(symbols_to_pass)
# dview.push(units_to_pass)
# dview.push(expressions_to_pass)
# dview.push(dict(unitsDict=unitsDict))
```
---
# Testing hypotheses
### Varying $g_e$ and proportionality $P$
```python
check_vm_t = Vm_t[0].subs({ i: averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, P] }).subs(approximateDict).subs({ g_i: P * g_e })
```
```python
check_vm_t
```
```python
f = lambdify((g_e, P, t), check_vm_t/mV, (unitsDict, "numpy"))
```
#### Case 1: No inhibition
```python
prop = 0 # No inhibition
norm = matplotlib.colors.Normalize(
vmin=numpy.min(erange),
vmax=numpy.max(erange))
c_m = matplotlib.cm.viridis
s_m = matplotlib.cm.ScalarMappable(cmap=c_m, norm=norm)
s_m.set_array([])
```
```python
fig, ax = plt.subplots()
excitation_only = [] # Excitation only vector
for e in erange:
e_t = [float(f(e * nS, prop, dt * ms)) for dt in trange]
ax.plot(trange, e_t, color=s_m.to_rgba(e))
excitation_only.append(e_t)
plt.xlabel("Time")
plt.ylabel("$V_m(mV)$")
plt.title("No Inhibition")
plt.colorbar(s_m, label="g_e(nS)")
```
```python
numpy.savetxt('excitation_only.txt', excitation_only, header="0-{}:{}".format(emax, e_step))
```
#### Case 2: With inhibition proportional to excitation, or $g_i = P \times g_e$
```python
prop = 3. # Proportionality
fig, ax = plt.subplots()
proportional_e_i = []
for e in erange:
# dview.push(dict(e=e, prop=prop))
# v_t = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)), trange)
v_t = [float(f(e * nS, prop, dt * ms)) for dt in trange]
proportional_e_i.append(v_t)
ax.plot(trange, v_t, color=s_m.to_rgba(e))
plt.xlabel("Time")
plt.ylabel("$V_m(mV)$")
plt.title("$g_i = {:.2f} \\times g_e$".format(prop))
plt.colorbar(s_m, label="g_e(nS)")
plt.legend()
```
```python
numpy.savetxt('excitation_inhibition_proportional_only.txt', proportional_e_i, header="0-{}:{}, P={}".format(emax, e_step, prop))
```
```python
fig, ax = plt.subplots()
for prop in prop_array:
v_max = []
e_max = []
for e, e_t in zip(erange, excitation_only):
v_t = [float(f(e * nS, prop, dt * ms)) for dt in trange]
v_max.append(max(v_t) - float(approximateDict[leak_rev]/mV))
e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
# dview.push(dict(e=e, prop=prop))
# vm_change = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)),
# trange)
# em_change = dview.map_sync(lambda dt: float(f(e * nS, 0., dt * ms)),
# trange)
# v_max.append(
# max(vm_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
# e_max.append(
# max(em_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
ax.scatter(e_max, v_max, label="$P= {:.2f}$".format(prop))
numpy.savetxt('excitation_inhibition_proportional_{}_only_scatter.txt'.format(prop), (v_max, e_max), header="0-{}:{}, P={}".format(emax, e_step, prop))
ax.set_xlabel("Excitation $V_{max}$")
ax.set_ylabel("Control $V_{max}$")
ax.legend()
```
#### Case 3: With inhibition recruited, $I = \frac{l}{1+\frac{1}{kE}}\times E$
```python
p = lambda k, l, e: (k*l*e)/(1.+ (k*e))  # pass g_e explicitly rather than relying on the global e
```
```python
k, l = 1, 2
fig, ax = plt.subplots()
for e in erange:
    prop = p(k, l, e)
# dview.push(dict(e=e, prop=prop))
v_t = [float(f(e * nS, prop, dt * ms)) for dt in trange]
# v_t = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)), trange)
ax.plot(
trange, v_t, color=s_m.to_rgba(e) )
plt.colorbar(s_m, label="g_e(nS)")
plt.legend()
```
```python
l = 2
fig, ax = plt.subplots()
for k in numpy.arange(0., 5., 1.):
v_max = []
e_max = []
for e, e_t in zip(erange, excitation_only):
        prop = p(k, l, e)
v_t = [float(f(e * nS, prop, dt * ms)) for dt in trange]
v_max.append(max(v_t) - float(approximateDict[leak_rev]/mV))
e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
# dview.push(dict(e=e, prop=prop))
# vm_change = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)),
# trange)
# em_change = dview.map_sync(lambda dt: float(f(e * nS, 0., dt * ms)),
# trange)
# v_max.append(
# max(vm_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
# e_max.append(
# max(em_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
ax.scatter(e_max, v_max, label="$P= {:.2f}$".format(prop))
ax.set_xlabel("Excitation $V_{max}$")
ax.set_ylabel("Control $V_{max}$")
numpy.savetxt('excitation_inhibition_divisive_recruited_only.txt', (e_max, v_max), header="0-{}:{}, k={},l={}".format(emax, e_step, k, l))
plt.legend()
```
## Changing kinetics as functions of the excitation.
### Changing $\delta_i$ = $\delta_{min}( 1 + \frac{1}{k\times{g_e}})$
```python
d = lambda minDelay,k,e: minDelay*(1 + (1./(k*e)))
```
```python
k, minDelay = 10, 1.5*ms
```
```python
fig, ax = plt.subplots()
ax.scatter(erange, [d(minDelay, k, e) / ms for e in erange])
ax.set_xlabel("$g_e$")
ax.set_ylabel("$\\delta_i$")
```
### Changing $\delta_i$ = $\delta_{min} + me^{-k\times{g_e}}$
```python
d = lambda minDelay,k,e: minDelay + m*exp(-(k*e))
```
```python
nS = nano*siemens
k, m, minDelay = 2./nS, 9*ms, 1.*ms
```
```python
ax = plt.subplot()
ax.scatter(erange, [d(minDelay,k,e*nS)/ms for e in erange])
plt.xlabel("$g_e$")
plt.ylabel("$\\delta_i$")
plt.show()
```
```python
check_vm = simplify(Vm_t[0].subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e, delta_i: d(minDelay,k,g_e)}).evalf())
```
```python
f = lambdify((g_e, P, t), check_vm/mV, (unitsDict, "numpy"))
```
```python
prop = 3.
ax = plt.subplot()
for e in erange:
# dview.push(dict(e=e, prop=prop))
v_t = [float(f(e * nS, prop, dt * ms)) for dt in trange]
#[float(f(e * nS, prop, dt * ms)) for dt in trange]
delay = float(d(minDelay,k,e*nS)/ms)
# v_t = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)), trange)
# delay = float(dview['float(d(minDelay,k,e*nS)/ms)'][0])
ax.plot(
trange, v_t, color=s_m.to_rgba(e))
plt.colorbar(s_m, label="g_e(nS)")
plt.legend()
```
```python
fig, ax = plt.subplots(1, 2)
for prop in prop_array:
v_max = []
e_max = []
v_ttp = []
e_ttp = []
for e, e_t in zip(erange, excitation_only):
# dview.push(dict(e=e, prop=prop))
v_t = [float(f(e * nS, prop, dt * ms)) for dt in trange]
v_max.append(max(v_t) - float(approximateDict[leak_rev]/mV))
e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
# vm_change = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)),
# trange)
# em_change = dview.map_sync(lambda dt: float(f(e * nS, 0., dt * ms)),
# trange)
# v_max.append(
# max(vm_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
# e_max.append(
# max(em_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
v_ttp.append(numpy.argmax(v_t) * timeStep)
e_ttp.append(numpy.argmax(e_t) * timeStep)
e_max, v_max = numpy.array(e_max), numpy.array(v_max)
ax[0].scatter(e_max, v_max, label=str(prop))
ax[1].scatter(e_max, e_max - v_max, label=str(prop))
#ax[1].scatter(e_ttp, v_ttp, label=str(p))
numpy.savetxt('excitation_inhibition_proportional_{}_delay_scatter.txt'.format(prop), (v_max, e_max), header="0-{}:{}, P={}".format(emax, e_step, prop))
maxCoord = max(ax[0].get_xlim()[1], ax[0].get_ylim()[1])
ax[0].plot([0, 1], [0, 1], '--', transform=ax[0].transAxes)
ax[1].plot([0, 1], [0, 1], '--', transform=ax[1].transAxes)
ax[0].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[1].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[0].set_xlabel("Excitation $V_{max}$")
ax[0].set_ylabel("Control $V_{max}$")
ax[1].set_xlabel("Excitation $V_{max}$")
ax[1].set_ylabel("Excitation - Control $V_{max}$")
#ax[1].set_xlabel("Excitation $t_{peak}$")
#ax[1].set_ylabel("Control $t_{peak}$")
ax[0].legend()
ax[1].legend()
plt.show()
```
```python
plt.scatter( e_max[1:], v_ttp[1:])
plt.title("Time to peak with excitation max")
plt.xlabel("Excitation Max")
plt.ylabel("Time to peak")
plt.show()
```
---
## Robustness testing of Divisive Normalization
### Divisive normalization is robust to 10% noise in proportionality between E and I.
```python
prop = 3.
delta_prop = numpy.linspace(-1,1,len(erange))
numpy.random.shuffle(delta_prop) # Randomizing order of change
ax = plt.subplot()
for index, e in enumerate(erange):
# dview.push(dict(e=e, prop=prop))
v_t = [float(f(e * nS, prop + delta_prop[index], dt * ms)) for dt in trange]
delay = float(d(minDelay,k,e*nS)/ms)
# v_t = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)), trange)
# delay = float(dview['float(d(minDelay,k,e*nS)/ms)'][0])
ax.plot(
trange, v_t, color=s_m.to_rgba(e))
plt.colorbar(s_m, label="g_e(nS)")
plt.legend()
```
```python
fig, ax = plt.subplots(1, 2)
delta_prop = numpy.linspace(0.9,1.1,len(erange))
for prop in prop_array:
v_max = []
e_max = []
v_ttp = []
e_ttp = []
numpy.random.shuffle(delta_prop) # Randomizing order of change
for index, e in enumerate(erange):
# dview.push(dict(e=e, prop=prop))
v_t = [float(f(e * nS, prop * delta_prop[index], dt * ms)) for dt in trange]
e_t = excitation_only[index]
v_max.append(max(v_t) - float(approximateDict[leak_rev]/mV))
e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
# vm_change = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)),
# trange)
# em_change = dview.map_sync(lambda dt: float(f(e * nS, 0., dt * ms)),
# trange)
# v_max.append(
# max(vm_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
# e_max.append(
# max(em_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
v_ttp.append(numpy.argmax(v_t) * timeStep)
e_ttp.append(numpy.argmax(e_t) * timeStep)
e_max, v_max = numpy.array(e_max), numpy.array(v_max)
ax[0].scatter(e_max, v_max, label=str(prop))
ax[1].scatter(e_max, e_max - v_max, label=str(prop))
#ax[1].scatter(e_ttp, v_ttp, label=str(p))
maxCoord = max(ax[0].get_xlim()[1], ax[0].get_ylim()[1])
ax[0].plot([0, 1], [0, 1], '--', transform=ax[0].transAxes)
ax[1].plot([0, 1], [0, 1], '--', transform=ax[1].transAxes)
ax[0].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[1].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[0].set_xlabel("Excitation $V_{max}$")
ax[0].set_ylabel("Control $V_{max}$")
ax[1].set_xlabel("Excitation $V_{max}$")
ax[1].set_ylabel("Excitation - Control $V_{max}$")
#ax[1].set_xlabel("Excitation $t_{peak}$")
#ax[1].set_ylabel("Control $t_{peak}$")
ax[0].legend()
ax[1].legend()
plt.show()
```
```python
plt.scatter( e_max[1:], v_ttp[1:])
plt.title("Time to peak with excitation max")
plt.xlabel("Excitation Max")
plt.ylabel("Time to peak")
plt.show()
```
### Permutation Test shows that Divisive Normalization cannot be achieved without E-I balance with just delays
```python
check_vm = Vm_t[0].subs({ i: averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, P] }).subs(approximateDict)
```
```python
f = lambdify((g_e, g_i, t), check_vm/mV, (unitsDict, "numpy"))
```
```python
fig, ax = plt.subplots(1, 2)
for prop in prop_array:
v_max = []
e_max = []
v_ttp = []
e_ttp = []
irange = prop*erange
numpy.random.shuffle(irange)
for index, e in enumerate(erange):
# dview.push(dict(e=e, prop=prop))
v_t = [float(f(e * nS, irange[index]* nS, dt * ms)) for dt in trange]
e_t = excitation_only[index]
v_max.append(max(v_t) - float(approximateDict[leak_rev]/mV))
e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
# vm_change = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)),
# trange)
# em_change = dview.map_sync(lambda dt: float(f(e * nS, 0., dt * ms)),
# trange)
# v_max.append(
# max(vm_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
# e_max.append(
# max(em_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
v_ttp.append(numpy.argmax(v_t) * timeStep)
e_ttp.append(numpy.argmax(e_t) * timeStep)
e_max, v_max = numpy.array(e_max), numpy.array(v_max)
ax[0].scatter(e_max, v_max, label=str(prop))
ax[1].scatter(e_max, e_max - v_max, label=str(prop))
#ax[1].scatter(e_ttp, v_ttp, label=str(p))
maxCoord = max(ax[0].get_xlim()[1], ax[0].get_ylim()[1])
ax[0].plot([0, 1], [0, 1], '--', transform=ax[0].transAxes)
ax[1].plot([0, 1], [0, 1], '--', transform=ax[1].transAxes)
ax[0].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[1].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[0].set_xlabel("Excitation $V_{max}$")
ax[0].set_ylabel("Control $V_{max}$")
ax[1].set_xlabel("Excitation $V_{max}$")
ax[1].set_ylabel("Excitation - Control $V_{max}$")
#ax[1].set_xlabel("Excitation $t_{peak}$")
#ax[1].set_ylabel("Control $t_{peak}$")
ax[0].legend()
ax[1].legend()
plt.show()
```
```python
plt.scatter( e_max[1:], v_ttp[1:])
plt.title("Time to peak with excitation max")
plt.xlabel("Excitation Max")
plt.ylabel("Time to peak")
plt.show()
```
### Divisive normalization is robust to ± 10% change in delay.
```python
check_vm = simplify(Vm_t[0].subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e}).evalf())
```
```python
f = lambdify((g_e, P, delta_i, t), check_vm/mV, (unitsDict, "numpy"))
```
```python
fig, ax = plt.subplots(1, 2)
for prop in prop_array:
v_max = []
e_max = []
v_ttp = []
e_ttp = []
for index, e in enumerate(erange):
# dview.push(dict(e=e, prop=prop))
randomFactor = numpy.random.uniform(0.9,1.1)
v_t = [float(f(e * nS, prop, d(minDelay,k,e*nS) * randomFactor, dt * ms)) for dt in trange]
e_t = excitation_only[index]
v_max.append(max(v_t) - float(approximateDict[leak_rev]/mV))
e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
# vm_change = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)),
# trange)
# em_change = dview.map_sync(lambda dt: float(f(e * nS, 0., dt * ms)),
# trange)
# v_max.append(
# max(vm_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
# e_max.append(
# max(em_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
v_ttp.append(numpy.argmax(v_t) * timeStep)
e_ttp.append(numpy.argmax(e_t) * timeStep)
e_max, v_max = numpy.array(e_max), numpy.array(v_max)
ax[0].scatter(e_max, v_max, label=str(prop))
ax[1].scatter(e_max, e_max - v_max, label=str(prop))
#ax[1].scatter(e_ttp, v_ttp, label=str(p))
maxCoord = max(ax[0].get_xlim()[1], ax[0].get_ylim()[1])
ax[0].plot([0, 1], [0, 1], '--', transform=ax[0].transAxes)
ax[1].plot([0, 1], [0, 1], '--', transform=ax[1].transAxes)
ax[0].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[1].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[0].set_xlabel("Excitation $V_{max}$")
ax[0].set_ylabel("Control $V_{max}$")
ax[1].set_xlabel("Excitation $V_{max}$")
ax[1].set_ylabel("Excitation - Control $V_{max}$")
#ax[1].set_xlabel("Excitation $t_{peak}$")
#ax[1].set_ylabel("Control $t_{peak}$")
ax[0].legend()
ax[1].legend()
plt.show()
```
```python
plt.scatter( e_max[1:], v_ttp[1:])
plt.title("Time to peak with excitation max")
plt.xlabel("Excitation Max")
plt.ylabel("Time to peak")
plt.show()
```
### Permutation test shows that Divisive Normalization does not work with permuted delays for large values of excitation.
```python
fig, ax = plt.subplots(1, 2)
for prop in prop_array:
v_max = []
e_max = []
v_ttp = []
e_ttp = []
delayRange = [d(minDelay,k,e*nS) for e in erange]
numpy.random.shuffle(delayRange)
for index, e in enumerate(erange):
# dview.push(dict(e=e, prop=prop))
v_t = [float(f(e * nS, prop, delayRange[index], dt * ms)) for dt in trange]
e_t = excitation_only[index]
v_max.append(max(v_t) - float(approximateDict[leak_rev]/mV))
e_max.append(max(e_t) - float(approximateDict[leak_rev]/mV))
# vm_change = dview.map_sync(lambda dt: float(f(e * nS, prop, dt * ms)),
# trange)
# em_change = dview.map_sync(lambda dt: float(f(e * nS, 0., dt * ms)),
# trange)
# v_max.append(
# max(vm_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
# e_max.append(
# max(em_change) - dview['float(approximateDict[leak_rev]/mV)'][0])
v_ttp.append(numpy.argmax(v_t) * timeStep)
e_ttp.append(numpy.argmax(e_t) * timeStep)
e_max, v_max = numpy.array(e_max), numpy.array(v_max)
ax[0].scatter(e_max, v_max, label=str(prop))
ax[1].scatter(e_max, e_max - v_max, label=str(prop))
#ax[1].scatter(e_ttp, v_ttp, label=str(p))
maxCoord = max(ax[0].get_xlim()[1], ax[0].get_ylim()[1])
ax[0].plot([0, 1], [0, 1], '--', transform=ax[0].transAxes)
ax[1].plot([0, 1], [0, 1], '--', transform=ax[1].transAxes)
ax[0].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[1].set(xlim=(0, maxCoord), ylim=(0, maxCoord))
ax[0].set_xlabel("Excitation $V_{max}$")
ax[0].set_ylabel("Control $V_{max}$")
ax[1].set_xlabel("Excitation $V_{max}$")
ax[1].set_ylabel("Excitation - Control $V_{max}$")
#ax[1].set_xlabel("Excitation $t_{peak}$")
#ax[1].set_ylabel("Control $t_{peak}$")
ax[0].legend()
ax[1].legend()
plt.show()
```
```python
plt.scatter( e_max[1:], v_ttp[1:])
plt.title("Time to peak with excitation max")
plt.xlabel("Excitation Max")
plt.ylabel("Time to peak")
plt.show()
```
---
## Voltage clamping to see current responses.
```python
I_t = (g_E * (Vm - e_rev) + g_I * (Vm - i_rev) + g_L *
(Vm - leak_rev))
```
```python
I_t_check = I_t.subs({i:averageEstimateDict[i] for i in averageEstimateDict if i not in [g_e, g_i, delta_i]}).subs(approximateDict).subs({g_i: P*g_e})
```
```python
I_t_check
```
```python
f = lambdify((Vm,P,g_e, delta_i,t), I_t_check/pA, (unitsDict, "numpy"))
```
### Clamping voltage at -40mv
```python
clamp_voltage = -40
prop = 3
delayRange = [d(minDelay,k,e*nS) for e in erange[::10]]
ax = plt.subplot()
for index, e in enumerate(erange[::10]):
I_clamped_t = [float(f(clamp_voltage *mV, prop, e*nS, delayRange[index], dt * ms)) for dt in trange]
ax.plot(trange, I_clamped_t)
ax.set_xlabel("Time (ms)")
ax.set_ylabel("Current (pA)")
```
### Clamping at -70 mV
```python
clamp_voltage = -70
prop = 3
delayRange = [d(minDelay,k,e*nS) for e in erange[::10]]
ax = plt.subplot()
for index, e in enumerate(erange[::10]):
I_clamped_t = [float(f(clamp_voltage *mV, prop, e*nS, delayRange[index], dt * ms)) for dt in trange]
ax.plot(trange, I_clamped_t)
ax.set_xlabel("Time (ms)")
ax.set_ylabel("Current (pA)")
```
### Clamping at 0 mV
```python
clamp_voltage = 0
prop = 3
delayRange = [d(minDelay,k,e*nS) for e in erange[::10]]
ax = plt.subplot()
for index, e in enumerate(erange[::10]):
I_clamped_t = [float(f(clamp_voltage *mV, prop, e*nS, delayRange[index], dt * ms)) for dt in trange]
ax.plot(trange, I_clamped_t)
ax.set_xlabel("Time (ms)")
ax.set_ylabel("Current (pA)")
```
### Overplotting 0 and -40 mV
```python
prop = 3
delayRange = [d(minDelay,k,e*nS) for e in erange[::5]]
ax = plt.subplot()
baseCurrent_0 = float(f(clamp_voltage *mV, 0, 0, 0, 0 * ms))
for index, e in enumerate(erange[::5]):
clamp_voltage = 0
delay = d(minDelay,k,e*nS)
I_clamped_t = [float(f(clamp_voltage *mV, prop, e*nS, delay, dt * ms)) for dt in trange]
ax.plot(trange, I_clamped_t, color=s_m.to_rgba(e))
maxCurrent = numpy.max(I_clamped_t)
clamp_voltage = -40
I_clamped_t = [float(f(clamp_voltage *mV, prop, e*nS, delay, dt * ms)) for dt in trange]
ax.plot(trange, I_clamped_t, color=s_m.to_rgba(e))
minCurrent = numpy.min(I_clamped_t)
ax.vlines(delay/ms, minCurrent,maxCurrent, color=s_m.to_rgba(e), linestyle='--')
ax.set_xlabel("Time (ms)")
ax.set_ylabel("Current (pA)")
```
---
## <span style="color:red"> Try out the following: </span>
1. Estimate how many more synapses can be accommodated because of Divisive normalization before spiking.
2. Check out spike time changes because of DN.
## Trying out other kinetic changes.
### Changing $\tau_{id} = k\times{g_e}$
```python
tp = lambda rise, decay: ((decay*rise)/(decay-rise))*log(decay/rise)
```
```python
k, rise = 10 * ms / nS, 3
plt.scatter(erange, [tp(rise, k * e * nS / ms) for e in erange])
plt.xlabel("$g_e$")
plt.ylabel("$t_{peak}$")
plt.show()
```
```python
check_vm = simplify(Vm_t[0].subs({
i: averageEstimateDict[i]
for i in averageEstimateDict if i not in [g_e, g_i, i_d]
}).subs(approximateDict).subs({
g_i: P * g_e,
i_d: k * g_e
})).evalf()
```
```python
f = lambdify((g_e, P, t), check_vm/mV, (unitsDict, "numpy"))
```
```python
for e in erange:
plt.plot(
trange, [f(e * nS, 1., dt * ms) for dt in trange],
label="$g_e={}nS, \\delta_i={:.2f}ms$".format(e, k * e / nS))
plt.legend()
plt.show()
```
```python
fig, ax = plt.subplots(1, 2)
for p in np.arange(1, 4, 1):
v_max = []
e_max = []
v_ttp = []
e_ttp = []
for e in erange:
vm_change = [f(e * nS, p, dt * ms) for dt in trange]
em_change = [f(e * nS, 0., dt * ms) for dt in trange]
v_max.append(max(vm_change) - approximateDict[leak_rev] / mV)
e_max.append(max(em_change) - approximateDict[leak_rev] / mV)
v_ttp.append(np.argmax(vm_change) * timeStep)
e_ttp.append(np.argmax(em_change) * timeStep)
ax[0].scatter(e_max, v_max, label=str(p))
ax[1].scatter(e_ttp, v_ttp, label=str(p))
ax[0].set_xlabel("Excitation $V_{max}$")
ax[0].set_ylabel("Control $V_{max}$")
ax[1].set_xlabel("Excitation $t_{peak}$")
ax[1].set_ylabel("Control $t_{peak}$")
ax[0].legend()
ax[1].legend()
plt.show()
```
```python
f(10,1,50)
```
```python
for new_P in np.arange(1, 3, 0.5):
    v_max = []
    trange = np.arange(0, 200, 0.1)
    for e in erange:
        # Reuse the lambdified Vm(t) defined above: f(g_e, P, t)
        vm_change = [f(e * nS, new_P, dt * ms) for dt in trange]
        v_max.append(max(vm_change) - float(approximateDict[leak_rev] / mV))
    plt.scatter(erange, v_max)
plt.show()
```
```python
```
```python
compartment.subs(averageEstimateDict).subs(approximateDict)
```
### Now trying to change parameters and look at how $V_m$ changes
```python
# 1. Get rid of the normalization factor.
# 2. The functions must approximate each other when close to 0.
# 3. The precision of the parameters in the equations must increase with g_e.
```
```python
g_e.subs(averageEstimateDict)
```
```python
A = Piecewise((0, t < delta_i), (exp(t/(2*ms)), True))
```
```python
A = A.subs({delta_i:10*ms})
```
```python
f = lambdify(t,A, {'s':seconds, 'exp':exp})
```
| 55d53b42817f1c7a202a85d50ac1430af7c24e8b | 924,178 | ipynb | Jupyter Notebook | model/Single_comp_conductance_model.ipynb | elifesciences-publications/linearity | 777769212ac43d854d23d5b967c6323747c56c09 | [
"MIT"
] | 1 | 2019-04-22T17:07:37.000Z | 2019-04-22T17:07:37.000Z | model/Single_comp_conductance_model.ipynb | elifesciences-publications/linearity | 777769212ac43d854d23d5b967c6323747c56c09 | [
"MIT"
] | null | null | null | model/Single_comp_conductance_model.ipynb | elifesciences-publications/linearity | 777769212ac43d854d23d5b967c6323747c56c09 | [
"MIT"
] | 3 | 2019-04-25T13:10:24.000Z | 2021-09-05T03:45:36.000Z | 293.669527 | 126,634 | 0.891809 | true | 11,695 | Qwen/Qwen-72B | 1. YES
2. YES | 0.826712 | 0.833325 | 0.688919 | __label__eng_Latn | 0.256521 | 0.438922 |
# Example usage of rdsolver
(c) 2018 Justin Bois. This work is licensed under a [Creative Commons Attribution License CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/). All code contained herein is licensed under an [MIT license](https://opensource.org/licenses/MIT).
`rdsolver` solves the following system of PDEs on a 2D Cartesian domain with periodic boundary conditions. The governing equations are
\begin{align}
\partial_t c_i = D_i(\partial_x^2 + \partial_y^2) c_i + \beta_i + \sum_j\gamma_{ij} c_j + f_i(\mathbf{c}),
\end{align}
where summation over like indices is not implied; all summation is explicit. The last term represents all nonlinear chemical reactions. The $\beta_i + \gamma_{ij}c_j$ terms are linear chemical dynamics. Note that we assume a diagonal constant diffusion tensor.
To specify the problem, the user needs to supply:
* The physical dimension of the system, which we will call $\mathbf{L} = (L_x, L_y)$
* The number of grid points $\mathbf{n} = (n_x, n_y)$
* The desired time points $t_0, t_1, \ldots$
* The initial concentration profiles of all species, $\mathbf{c}^0(x, y)$
* The values of the parameters $D_i$, $\beta_i$, and $\gamma_{ij}$.
* The function $f_i$ and any parametric arguments that need to be passed to it.
Here, I present an example of how to use `rdsolver`. To learn more about it and installation instructions, see the [README file](https://github.com/justinbois/rdsolver#reaction-diffusion-solver).
## Necessary imports
To work with `rdsolver` in a Jupyter notebook (recommended), you need to import `rdsolver` and `bokeh`, being sure to call `bokeh.io.output_notebook()` to enable interactive plotting in the notebook. And it's pretty much automatic to import NumPy!
```python
%load_ext autoreload
%autoreload 2
import numpy as np
import numba
import rdsolver as rd
import bokeh
import bokeh.io
bokeh.io.output_notebook()
```
## The activator-substrate depletion model (ASDM)
The ASDM is a classic system that gives Turing patterns. In its simplest form, the dimensionless equations are
\begin{align}
\partial_t a &= d(\partial_x^2 + \partial_y^2)a + a^2s - a \\[0.5em]
\partial_t s &= (\partial_x^2 + \partial_y^2)s + \mu(1 - a^2s).
\end{align}
Thus, we have
\begin{align}
D &= (d, 1) \\[0.5em]
\beta &= (0, \mu) \\[0.5em]
\gamma &= \begin{pmatrix}
-1 & 0 \\
0 & 0
\end{pmatrix}.
\end{align}
We start by specifying the easy stuff: the physical size of the system, the number of grid points, and the time points we want. Because this particular system does not have very sharp gradients and we are not using a very large physical domain, we do not need many grid points at all. Because `rdsolver` uses spectral methods, we get very accurate calculations even with few grid points. We will use a 32 $\times$ 32 grid here.
```python
# Physical size of system
L = (10, 10)
# Number of grid points in x and y (can often be small with spec. meth.)
n = (32, 32)
# Specify times points we want
t = np.linspace(0, 250, 100)
```
Now, we need to define our parameters. We'll start with $D$, $\beta$, and $\gamma$, choosing $d = 0.05$ and $\mu = 1.4$.
```python
d = 0.05
mu = 1.4
D = (d, 1)
beta = (0, mu)
gamma = np.array([[-1, 0], [0, 0]])
```
Now, we need to define our nonlinear function $f$. This function has call signature `f(u, t, *f_args)`. The first argument, `u`, is an array containing the concentrations that can be unpacked as `a, s = u`. Note, however, that unpacking 3D arrays like this is not yet supported in Numba, so you should use the more verbose version below for Numba'd nonlinear functions.
Next, the function `f` takes the current time as an input. For the ASDM, and indeed for many R-D applications, the nonlinear chemical dynamics do not explicitly depend on time. Finally, `f_args` is a tuple containing any other arguments the function `f` needs.
The function must return an array the same shape as the input array `u` that gives the nonlinear terms in the dynamics. This is most easily accomplished using the `np.stack()` function.
```python
@numba.jit(nopython=True)
def f(u, t, mu):
"""Nonlinear terms for ASDM"""
a = u[0,:,:]
s = u[1,:,:]
return np.stack((a**2 * s, -mu * a**2 * s))
```
We also have to specify the arguments that need to be passed to `f` as a tuple.
```python
# Specify the arguments that need to be passed to f
f_args = (mu, )
```
Now, we need to specify the initial conditions. The initial conditions must be a three-dimensional array of shape $(n_s, n_x, n_y)$, where $n_s$ is the number of chemical species. For convenience, you can specify a homogeneous steady state and use the `rd.initial_condition()` function that will generate an initial condition that is a small perturbation about the specified steady state.
```python
# Homogenous steady state in activator and substrate to perturb
uniform_conc = (1, 1)
# Generate perturbed initial condition
np.random.seed(42)
c0 = rd.initial_condition(uniform_conc=uniform_conc, n=n, L=L)
# Show the shape as a demonstration
c0.shape
```
(2, 32, 32)
Now, everything is in place. We just have to solve. We do this by calling the `rd.solve()` function. The arguments are obvious from the call below.
```python
# Solve the system
c = rd.solve(c0, t, D=D, beta=beta, gamma=gamma, f=f, f_args=f_args, L=L)
# Look at the shape of the solution
c.shape
```
    /Users/bois/Dropbox/git/rdsolver/rdsolver/rd.py:172: NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (array(float64, 2d, C), array(float64, 1d, A))
      out[:,i,j] = np.dot(gamma, c[:,i,j])
    /Users/bois/Dropbox/git/rdsolver/rdsolver/rd.py:497: NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (array(complex128, 2d, C), array(complex128, 1d, A))
      + np.dot(A_rhs, c_hat[:,i,j])
    100%|██████████| 100/100 [00:14<00:00,  7.00it/s]
(2, 32, 32, 100)
We note that the solution is of the shape $(n_s, n_x, n_y, n_t)$, where $n_t$ is the number of time points we used. This structure is useful to know for slicing out species and time points of interest.
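For example, here are a couple of slices one might take (just an illustration using the arrays defined above):
```python
# Activator (species 0) concentration field at the final time point
a_final = c[0, :, :, -1]

# Substrate (species 1) time series at the grid point (16, 16)
s_trace = c[1, 16, 16, :]

a_final.shape, s_trace.shape
```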
For plotting purposes, it is useful to interpolate the solution to have smooth concentration profiles. This is purely aesthetic; the solver will give a pixelated, but spectrally accurate, solution. We can use the `rd.viz.interpolate_concs()` function to get the interpolated concentration profiles.
```python
c_interp = rd.viz.interpolate_concs(c)
```
Finally, we are ready to display the solution. We use the `rd.viz.display_notebook()` function, which gives a picture of the concentration field with a slider for adjusting the time. By default, for multiple-species problems (like the ASDM), up to three species are shown: the cyan channel is the first, magenta the second, and yellow the third (though it is not present for the ASDM).
```python
# Use whatever your notebook URL is here
notebook_url = 'localhost:8889'
bokeh.io.show(rd.viz.display_notebook(t, c_interp), notebook_url=notebook_url)
```
If we like, we can look at a single species, in which case the colormap is Viridis.
```python
bokeh.io.show(rd.viz.display_notebook(t, c_interp[0]), notebook_url=notebook_url)
```
## The full ASDM model
We just solved a simplified ASDM model, but we may consider a more complete one, proposed by Koch and Meinhardt (*Rev. Mod. Phys.*, 1994), in which the autocatalysis reaction can saturate.
\begin{align}
\partial_t a &= D_a (\partial_x^2 + \partial_y^2)a + \frac{\rho_a a^2 s}{1 + \kappa_a a^2} - \mu_a a + \sigma_a \\[0.5em]
\partial_t s &= D_s (\partial_x^2 + \partial_y^2)s - \frac{\rho_s a^2 s}{1 + \kappa_a a^2} + \sigma_s
\end{align}
For convenience, `rdsolver` has a growing set of models that you can pre-load. We can use `rd.models.asdm()` to load in the parameters, as well as the homogeneous steady state, for the more complete ASDM model.
```python
D, beta, gamma, f, f_args, homo_ss = rd.models.asdm()
```
The parameters in the above equations are passed as keyword arguments, with the defaults set to the values used to generate Fig. 2 of the Koch and Meinhardt paper. The returned function `f` is JITted for performance.
Note that if we wanted to recapitulate the simpler example we already did, we can call the function with the appropriate kwargs.
```python
d = 0.05
mu = 1.4
params = {'D_a': d,
'D_s': 1,
'rho_a': 1,
'rho_s': mu,
'sigma_a': 0,
'sigma_s': mu,
'mu_a': 1,
'kappa_a': 0}
D, beta, gamma, f, f_args, homo_ss = rd.models.asdm(**params)
```
We already did that one, so we'll do the saturating model now. We'll now define the grid setup and the time points we want.
```python
n = (32, 32)
L = (50, 50)
t = np.linspace(0, 100000, 100)
```
Next, the initial condition, which will be a small perturbation about the steady state.
```python
np.random.seed(42)
c0 = rd.initial_condition(uniform_conc=homo_ss, n=n, L=L)
```
Now we can solve and interpolate....
```python
c = rd.solve(c0, t, D=D, beta=beta, gamma=gamma, f=f, f_args=f_args, L=L)
c_interp = rd.viz.interpolate_concs(c)
```
100%|██████████| 100/100 [00:09<00:00, 10.46it/s]
And finally, we visualize:
```python
bokeh.io.show(rd.viz.display_notebook(t, c_interp), notebook_url=notebook_url)
```
Finally, if we want to display a single image, we can do so using `rd.viz.display_single_frame()`.
```python
bokeh.io.show(rd.viz.display_single_frame(c_interp, i=-1));
```
| 5c3b6e9dbc07863d634681f16e5c6a8f4928926b | 256,820 | ipynb | Jupyter Notebook | notebooks/asdm_example.ipynb | emorisse/rdsolver | 89ef35eeadc50bf3618e10fd7e3f1ed0250ead30 | [
"MIT"
] | 2 | 2021-04-27T03:47:17.000Z | 2022-01-17T19:30:06.000Z | notebooks/asdm_example.ipynb | emorisse/rdsolver | 89ef35eeadc50bf3618e10fd7e3f1ed0250ead30 | [
"MIT"
] | 4 | 2017-07-14T22:52:20.000Z | 2017-08-31T22:55:32.000Z | notebooks/asdm_example.ipynb | emorisse/rdsolver | 89ef35eeadc50bf3618e10fd7e3f1ed0250ead30 | [
"MIT"
] | 2 | 2021-08-16T14:59:00.000Z | 2021-10-14T04:55:48.000Z | 279.760349 | 217,076 | 0.766767 | true | 2,932 | Qwen/Qwen-72B | 1. YES
2. YES | 0.891811 | 0.859664 | 0.766658 | __label__eng_Latn | 0.986723 | 0.619535 |
# Mish Derivatives
```python
import torch
from torch.nn import functional as F
```
```python
inp = torch.randn(100) + (torch.arange(0, 1000, 10, dtype=torch.float)-500.)
inp
```
tensor([-500.3069, -490.6361, -480.3858, -471.2755, -459.0872, -451.1570,
-440.2400, -429.6230, -419.9467, -408.3055, -402.3395, -389.1660,
-380.1614, -369.7649, -359.7261, -348.8759, -338.7170, -329.2680,
-319.7470, -309.6079, -301.3083, -290.9236, -279.3832, -267.8622,
-259.3479, -249.7400, -240.8742, -229.2343, -219.3999, -210.0166,
-199.8259, -191.5603, -178.9595, -171.4488, -160.3362, -150.1327,
-139.2230, -130.8046, -121.8909, -108.4913, -100.5724, -88.9087,
-79.9365, -70.3478, -60.1005, -49.9595, -37.6322, -29.9353,
-18.9407, -11.9213, -2.5633, 10.6869, 18.9005, 29.4622,
41.7188, 49.6080, 59.3583, 71.3071, 80.2604, 91.4908,
100.2913, 108.7626, 118.8391, 129.8859, 139.5593, 150.6612,
161.5152, 170.5409, 179.0472, 187.5896, 199.0938, 210.3955,
221.2551, 229.1151, 240.5497, 250.7286, 260.3474, 268.6524,
280.6704, 291.0199, 302.0525, 309.1079, 320.3692, 330.5589,
340.5503, 350.1638, 359.5840, 369.0214, 379.5835, 390.5975,
398.7851, 408.7201, 420.3418, 430.0718, 440.4431, 449.6514,
459.1114, 468.1187, 480.6921, 490.0807])
```python
import sympy
from sympy import Symbol, Function, Expr, diff, simplify, exp, log, tanh
x = Symbol('x')
f = Function('f')
```
## Overall Derivative
```python
diff(x*tanh(log(exp(x)+1)))
```
$\displaystyle \frac{x \left(1 - \tanh^{2}{\left(\log{\left(e^{x} + 1 \right)} \right)}\right) e^{x}}{e^{x} + 1} + \tanh{\left(\log{\left(e^{x} + 1 \right)} \right)}$
```python
simplify(diff(x*tanh(log(exp(x)+1))))
```
$\displaystyle - \frac{x \left(\tanh^{2}{\left(\log{\left(e^{x} + 1 \right)} \right)} - 1\right) e^{x} - \left(e^{x} + 1\right) \tanh{\left(\log{\left(e^{x} + 1 \right)} \right)}}{e^{x} + 1}$
## Softplus
$ \Large \frac{\partial}{\partial x} Softplus(x) = 1 - \frac{1}{e^{x} + 1} $
Or, from PyTorch:
$ \Large \frac{\partial}{\partial x} Softplus(x) = 1 - e^{-Y} $
where $Y$ is the saved output of the forward pass.
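As a quick sanity check (a sketch using the sympy symbols imported above; note that $Y = \log(1 + e^{x})$ here), both forms agree with the direct derivative of $\log(1 + e^{x})$:
```python
# Each difference simplifies to 0, so the two expressions for the derivative match
simplify(diff(log(exp(x) + 1), x) - (1 - 1/(exp(x) + 1)))
simplify(diff(log(exp(x) + 1), x) - (1 - exp(-log(exp(x) + 1))))
```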
```python
class SoftPlusTest(torch.autograd.Function):
@staticmethod
def forward(ctx, inp, threshold=20):
y = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)
ctx.save_for_backward(y)
return y
@staticmethod
def backward(ctx, grad_out):
y, = ctx.saved_tensors
res = 1 - (-y).exp_()
return grad_out * res
```
```python
torch.allclose(F.softplus(inp), SoftPlusTest.apply(inp))
```
True
```python
torch.autograd.gradcheck(SoftPlusTest.apply, inp.to(torch.float64).requires_grad_())
```
True
## $tanh(Softplus(x))$
```python
diff(tanh(f(x)))
```
$\displaystyle \left(1 - \tanh^{2}{\left(f{\left(x \right)} \right)}\right) \frac{d}{d x} f{\left(x \right)}$
```python
class TanhSPTest(torch.autograd.Function):
@staticmethod
def forward(ctx, inp, threshold=20):
ctx.save_for_backward(inp)
sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)
y = torch.tanh(sp)
return y
@staticmethod
def backward(ctx, grad_out, threshold=20):
inp, = ctx.saved_tensors
sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)
grad_sp = 1 - torch.exp(-sp)
tanhsp = torch.tanh(sp)
grad = (1 - tanhsp*tanhsp) * grad_sp
return grad_out * grad
```
```python
torch.allclose(TanhSPTest.apply(inp), torch.tanh(F.softplus(inp)))
```
True
```python
torch.autograd.gradcheck(TanhSPTest.apply, inp.to(torch.float64).requires_grad_())
```
True
## Mish
```python
diff(x * f(x))
```
$\displaystyle x \frac{d}{d x} f{\left(x \right)} + f{\left(x \right)}$
```python
diff(x*tanh(f(x)))
```
$\displaystyle x \left(1 - \tanh^{2}{\left(f{\left(x \right)} \right)}\right) \frac{d}{d x} f{\left(x \right)} + \tanh{\left(f{\left(x \right)} \right)}$
```python
simplify(diff(x*tanh(f(x))))
```
$\displaystyle \frac{x \frac{d}{d x} f{\left(x \right)}}{\cosh^{2}{\left(f{\left(x \right)} \right)}} + \tanh{\left(f{\left(x \right)} \right)}$
```python
diff(tanh(f(x)))
```
$\displaystyle \left(1 - \tanh^{2}{\left(f{\left(x \right)} \right)}\right) \frac{d}{d x} f{\left(x \right)}$
```python
class MishTest(torch.autograd.Function):
@staticmethod
def forward(ctx, inp, threshold=20):
ctx.save_for_backward(inp)
sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)
tsp = torch.tanh(sp)
y = inp.mul(tsp)
return y
@staticmethod
def backward(ctx, grad_out, threshold=20):
inp, = ctx.saved_tensors
sp = torch.where(inp < threshold, torch.log1p(torch.exp(inp)), inp)
grad_sp = 1 - torch.exp(-sp)
tsp = torch.tanh(sp)
grad_tsp = (1 - tsp*tsp) * grad_sp
grad = inp * grad_tsp + tsp
return grad_out * grad
```
```python
torch.allclose(MishTest.apply(inp), inp.mul(torch.tanh(F.softplus(inp))))
```
True
```python
torch.autograd.gradcheck(MishTest.apply, inp.to(torch.float64).requires_grad_())
```
True
| 53d868ef3912cc35ae9fdbb2c8085b7ab875169f | 12,605 | ipynb | Jupyter Notebook | extra/Derivatives.ipynb | hiyyg/mish-cuda | b389b9f84433d8b9b4129d3e879ba746d248d8f2 | [
"MIT"
] | 145 | 2019-09-25T17:43:54.000Z | 2022-03-09T08:17:44.000Z | extra/Derivatives.ipynb | hiyyg/mish-cuda | b389b9f84433d8b9b4129d3e879ba746d248d8f2 | [
"MIT"
] | 20 | 2019-11-18T22:20:02.000Z | 2022-02-16T03:04:30.000Z | extra/Derivatives.ipynb | hiyyg/mish-cuda | b389b9f84433d8b9b4129d3e879ba746d248d8f2 | [
"MIT"
] | 51 | 2019-10-10T03:52:05.000Z | 2022-03-24T07:14:01.000Z | 23.918406 | 218 | 0.471083 | true | 1,950 | Qwen/Qwen-72B | 1. YES
2. YES | 0.927363 | 0.897695 | 0.83249 | __label__yue_Hant | 0.15861 | 0.772485 |
<a href="https://colab.research.google.com/github/liadmagen/MedicalImageProcessingCourse/blob/main/medImgproc_00_working_with_images.ipynb" target="_parent"></a>
In this notebook, we'll explore how images are represented by the computer. We'll learn how to load, examine and manipulate images, and how to perform basic pre-processing operations.
# Basic Operations and Image Examination
We start by loading the packages.
We will use:
* **matplotlib** - a package for plotting graphs and images
* **numpy** - a package for numerical processing of matrices, and
* **scikit-image** - a package for processing scientific images, which also includes example scientific images.
```python
import cv2 as cv
import matplotlib.pyplot as plt
import numpy as np
from skimage import data
```
```python
# skimage.data has many different image samples. Check them out here: https://scikit-image.org/docs/stable/api/skimage.data.html
# You can play with this notebook by changing the data image to other images and examining the result.
img = data.brain()[0]
plt.imshow(img)
```
```python
# we can also plot it in black & white by setting the colormap (cmap)
plt.imshow(img, cmap=plt.cm.gray)
```
```python
# The shape of the matrix is the width and height in Pixels
print(img.shape)
img_width, img_height = img.shape
```
(256, 256)
```python
# We can print the matrix itself:
img
```
array([[4, 4, 4, ..., 4, 4, 4],
[4, 4, 4, ..., 4, 4, 4],
[4, 4, 4, ..., 4, 4, 4],
...,
[4, 4, 4, ..., 4, 4, 4],
[4, 4, 4, ..., 4, 4, 4],
[4, 4, 4, ..., 4, 4, 4]], dtype=uint16)
```python
# or see and use parts of it using python slices
img[100:120, 50:58]
```
array([[19044, 16900, 13456, 9604, 7744, 7225, 6889, 7056],
[18496, 15625, 10816, 7921, 7569, 7225, 7056, 6889],
[17424, 15376, 11025, 8100, 7396, 7225, 7396, 7056],
[17161, 15129, 11449, 8464, 7225, 7056, 7225, 7225],
[17689, 15129, 10609, 7921, 7056, 6889, 6889, 7225],
[17956, 14161, 9409, 7396, 7056, 6889, 6889, 7225],
[17689, 12321, 8100, 7225, 7056, 7056, 7056, 7056],
[18496, 11025, 7569, 7056, 7056, 6889, 7225, 7056],
[18769, 10201, 7225, 7056, 6889, 6889, 7056, 7056],
[18225, 9604, 7225, 7056, 7056, 6889, 6724, 6889],
[15625, 8649, 7056, 7056, 7056, 6889, 6889, 7056],
[12321, 7921, 6889, 6889, 6724, 6724, 7056, 7056],
[ 9801, 7396, 6889, 6889, 6561, 6724, 6889, 6889],
[ 7921, 7056, 6724, 6889, 6724, 6724, 6889, 6889],
[ 7225, 7056, 6889, 6889, 6889, 6724, 6724, 6889],
[ 7225, 6889, 7056, 6889, 6889, 6724, 6889, 6889],
[ 7225, 7056, 6889, 6889, 6724, 6889, 6889, 7056],
[ 7056, 6889, 6889, 6889, 6889, 6889, 6889, 7056],
[ 7056, 7056, 7056, 6889, 6889, 6889, 6889, 7056],
[ 6889, 6724, 7056, 6889, 6889, 7056, 7225, 6889]],
dtype=uint16)
```python
# the values of the matrix correspond to the color value of the pixel at that point
print(img.max())
print(img.min())
```
49284
0
## Histogram
A histogram is a plot that gives an overall idea of the intensity distribution of an image: the pixel values (typically ranging from 0 to 255, though not always) are on the x-axis, and the corresponding number of pixels in the image is on the y-axis.
```python
# Let's get the histogram using numpy:
hist, bins = np.histogram(img, bins=range(img.max()), range=[0, img.max()])
hist
```
array([ 595, 3208, 0, ..., 0, 0, 0])
```python
# And plot it using matplotlib
plt.title("Grayscale Histogram")
plt.xlabel("grayscale value")
plt.ylabel("amt. of pixels")
plt.plot(hist)
plt.show()
```
Another way to think of the color values is as a 3D surface map: black (or 0) forms low valleys, while brighter values (up to white) form mountains:
```python
# create the x and y coordinate arrays (here we just use pixel indices)
xx, yy = np.mgrid[0:img_width, 0:img_height]
# create the figure
fig = plt.figure()
# gca = Get the Current Axes (or create new axes if none exist)
ax = fig.gca(projection='3d')
ax.plot_surface(xx, yy, img ,rstride=1, cstride=1, cmap=plt.cm.gray, linewidth=0)
# show it
plt.show()
```
# Image Processing
Image processing is done through a function that takes one or more images as input and produces an output image.
Image transforms are generally divided into two groups:
* Point operators: operators that transform pixel values individually. For example, changing the brightness, adjusting the contrast or correcting the image colors.
* Neighborhood (area-based) operators: operators that transform a pixel based on its neighboring pixels (a minimal example is sketched below).
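As a small illustration of a neighborhood operator (a sketch, not part of the exercises; it assumes a simple $5\times5$ box blur), each output pixel below is the mean of its $5\times5$ neighborhood in the input image:
```python
# A minimal neighborhood (area-based) operator: a 5x5 box blur.
# Each output pixel is the average of the 5x5 window centered on it.
blurred = cv.blur(img, (5, 5))
plt.imshow(blurred, cmap=plt.cm.gray)
plt.show()
```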
## Pixel Operations
Since images are represented as 2D matrices, we can manipulate or generate images by "turning on or off" pixels in the image.
```python
img_width, img_height = img.shape
mask = np.zeros(img.shape[:2], np.uint8)
mask_width, mask_height = img_width//4, img_height//4
# set the central rectangle to white (the mask is uint8, so use 255 rather than img.max())
mask[mask_width:img_width-mask_width, mask_height:img_height-mask_height] = 255
plt.imshow(mask, cmap='gray')
```
```python
# And we can fuse two images together with bitwise operations. For example, the AND operator results in '0's wherever either the mask or the image is black (0).
masked_img = cv.bitwise_and(img,img,mask = mask)
plt.imshow(masked_img, cmap='gray')
```
### Linear blend operator
Images can also be blended linearly, giving more weight to one image than to the other.
$g(x) = (1-\alpha) f_0(x) + \alpha f_1(x)$
By varying α from 0→1 this operator can be used to perform a temporal cross-dissolve between two images or videos, as seen in slide shows and film productions.
```python
alpha = 0.5
beta = (1.0 - alpha)
img1 = data.brain()[0]
img2 = data.brain()[5]
dst = cv.addWeighted(img1, alpha, img2, beta, 0.0)
plt.figure(1)
plt.subplot(221)
plt.imshow(img1)
plt.subplot(222)
plt.imshow(img2)
plt.subplot(223)
plt.imshow(dst)
```
## Contrast
### Linear manipulation
Two commonly used point processes are a multiplication by a factor and an addition with a constant:
$g(x) = \alpha f(x) + \beta$
The parameters $alpha>0$ and $\beta$ are often called the **gain** and **bias** parameters; sometimes these parameters are said to control *contrast* and *brightness* respectively.
You can think of $f(x)$ as the source image pixels and $g(x)$ as the output image pixels. Then, more conveniently we can write the expression as:
$g(i,j)=\alpha⋅f(i,j)+\beta$
where $i$ and $j$ indicates that the pixel is located in the $i$-th row and $j$-th column.
```python
def set_contrast_brightness(img, alpha = 1.0, beta = 0):
out_img = np.zeros(img.shape, img.dtype)
############################## YOUR TURN: ##############################
"""
Implement the function g(x)=αf(x)+β with pixel operations and numpy.
    Try implementing it first using two loops over the rows and columns.
    Then, see how you can use NumPy functions for matrix operations.
    Which method was faster?
    Attention: pixel color values must be integers in the range 0 to 255.
The numpy functions `clip` and `rint` might be useful.
"""
### END ###
return out_img
alpha = 2.2 # Simple contrast control
beta = 50. # Simple brightness control
orig_img = data.human_mitosis()
%time new_img = set_contrast_brightness(orig_img, alpha, beta)
plt.figure(1)
plt.subplot(121)
plt.imshow(orig_img)
plt.subplot(122)
plt.imshow(new_img)
plt.show()
```
```python
################################# YOUR TURN: #################################
### Plot the histogram of the original and the altered human_mitosis image ###
##############################################################################
```
```python
# The OpenCV package implements such a method - convertScaleAbs.
# Ensure your method operates correctly:
alpha, beta = 2.0, 50
assert (cv.convertScaleAbs(orig_img, alpha=alpha, beta=beta) == set_contrast_brightness(orig_img, alpha, beta)).all()
alpha, beta = 1.0, 0
assert (cv.convertScaleAbs(orig_img, alpha=alpha, beta=beta) == set_contrast_brightness(orig_img, alpha, beta)).all()
alpha, beta = 3.0, 250
assert (cv.convertScaleAbs(orig_img, alpha=alpha, beta=beta) == set_contrast_brightness(orig_img, alpha, beta)).all()
```
### Gamma Correction
Gamma correction can be used to correct the brightness of an image by applying a **non-linear** transformation between the input values and the mapped output values, using this function:
\begin{align}
O = \left ( \frac{I}{255} \right ) ^\gamma × 255
\end{align}
As this relation is non-linear, the effect will not be the same for all the pixels and will depend on their original value.
When $γ<1$, the original dark regions will be brighter and the histogram will be shifted to the right whereas it will be the opposite with $γ>1$.
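For example, with $\gamma = 0.5$ a pixel of value $64$ maps to $(64/255)^{0.5} \times 255 \approx 128$ (brighter), whereas with $\gamma = 2$ the same pixel maps to $(64/255)^{2} \times 255 \approx 16$ (darker).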
Let's see it in action:
```python
def gamma(img, gamma=1.0):
return np.clip(pow(img / 255.0, gamma) * 255.0, 0, 255)
def plot_rgb_hist(img):
histr = cv.calcHist([img],[0],None,[256],[0,256])
plt.plot(histr)
plt.xlim([0,256])
orig_img = data.human_mitosis()
plt.figure(1)
plt.subplot(231)
plt.imshow(orig_img, cmap='gray')
plt.subplot(232)
plt.imshow(gamma(orig_img, 0.5), cmap='gray')
plt.subplot(233)
plt.imshow(gamma(orig_img, 0.1), cmap='gray')
plt.subplot(234)
plt.hist(orig_img.ravel(),256,[0,256]);
plt.subplot(235)
plt.hist(gamma(orig_img, 0.5).ravel(),256,[0,256]);
plt.subplot(236)
plt.hist(gamma(orig_img, 0.1).ravel(),256,[0,256]);
plt.figure(2)
plt.subplot(231)
plt.imshow(gamma(orig_img, 1.1), cmap='gray')
plt.subplot(232)
plt.imshow(gamma(orig_img, 1.7), cmap='gray')
plt.subplot(233)
plt.imshow(gamma(orig_img, 2.0), cmap='gray')
plt.subplot(234)
plt.hist(gamma(orig_img, 1.1).ravel(),256,[0,256]);
plt.subplot(235)
plt.hist(gamma(orig_img, 1.7).ravel(),256,[0,256]);
plt.subplot(236)
plt.hist(gamma(orig_img, 2.0).ravel(),256,[0,256]);
plt.show()
```
### Contrast Stretching
As we've seen before, contrast enhancement methods can be divided into linear and non-linear ones. Contrast stretching belongs to the **linear** family. Other, non-linear methods to adjust the contrast automatically include [Histogram Equalization](https://en.wikipedia.org/wiki/Histogram_equalization) and [Gaussian Stretch](https://www.l3harrisgeospatial.com/docs/backgroundstretchtypes.html).
Contrast enhancement is represented well in the following image:
Contrast stretching, as the name suggests, is an image enhancement technique that tries to improve the contrast by stretching the intensity values of an image to fill the entire dynamic range. The transformation function used is always linear and monotonically increasing.
A typical transformation function for contrast stretching looks something like this:
By changing the location of points (r1, s1) and (r2, s2), we can control the shape of the transformation function. For example,
* When $r1 =s1$ and $r2=s2$, transformation is a **Linear** function.
* When $r1=r2$, $s1=0$ and $s2=L-1$, transformation becomes a **thresholding** function.
* When $(r1, s1) = (rmin, 0)$ and $(r2, s2) = (rmax, L-1)$, this is known as **Min-Max** Stretching.
* When $(r1, s1) = (rmin + c, 0)$ and $(r2, s2) = (rmax – c, L-1)$, this is known as **Percentile** Stretching.
Let’s understand Min-Max and Percentile Stretching in detail.
The general formula for Contrast Stretching is
\begin{align}
S = (r - r_{min}) \left ( \frac{C_{max} - C_{min}}{r_{max} - r_{min}} \right ) + C_{min}
\end{align}
where $C_{max}$ and $C_{min}$ are the maximum and minimum **possible** color values (normally 255 and 0), and $r_{max}$ and $r_{min}$ are the maximal and minimal color values that appear in the image itself.
For the normal scale of colors (0 - 255) the formula can be simplified to:
\begin{align}
S = 255 \times \left ( \frac{r - r_{min}}{r_{max} - r_{min}} \right ) + C_{min}
\end{align}
This method is also known as a **linear scaling** or **normalization** method, and is also widely used in Machine Learning for any other sort of data (e.g. tabular).
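For example, if the darkest and brightest pixels of an image are $r_{min} = 50$ and $r_{max} = 180$, a pixel of value $r = 115$ is mapped to $255 \times (115 - 50)/(180 - 50) = 127.5$, i.e. right to the middle of the available range.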
```python
def contrast_stretch(img):
new_img = np.zeros(img.shape, img.dtype)
### YOUR TURN ###
### Implement the contrast stretching method
### END ###
return new_img
orig_img = data.microaneurysms()
streched_img = contrast_stretch(orig_img)
plt.figure(1, figsize=[10,12])
plt.subplot(221)
plt.imshow(orig_img, cmap='gray')
plt.subplot(222)
plt.imshow(streched_img, cmap='gray')
plt.subplot(223)
plt.hist(orig_img.ravel(),256);
plt.subplot(224)
plt.hist(streched_img.ravel(),256);
```
```python
```
| 251f7f5bf2472f202443baa684180711e7fad165 | 666,610 | ipynb | Jupyter Notebook | medImgproc_00_working_with_images.ipynb | liadmagen/MedicalImageProcessingCourse | 64b73269740a636255a3a0626f6e63a574f3248b | [
"CC0-1.0"
] | null | null | null | medImgproc_00_working_with_images.ipynb | liadmagen/MedicalImageProcessingCourse | 64b73269740a636255a3a0626f6e63a574f3248b | [
"CC0-1.0"
] | null | null | null | medImgproc_00_working_with_images.ipynb | liadmagen/MedicalImageProcessingCourse | 64b73269740a636255a3a0626f6e63a574f3248b | [
"CC0-1.0"
] | null | null | null | 709.914803 | 123,861 | 0.944788 | true | 3,899 | Qwen/Qwen-72B | 1. YES
2. YES | 0.890294 | 0.833325 | 0.741904 | __label__eng_Latn | 0.951741 | 0.562024 |
# Inertial Brownian motion simulation
The Inertial Langevin equation for a particle of mass $m$ and some damping $\gamma$ writes:
\begin{equation}
m\ddot{x} = -\gamma \dot{x} + \sqrt{2k_\mathrm{B}T \gamma} \mathrm{d}B_t
\end{equation}
Integrating the latter equation using the Euler method, one can replace $\dot{x}$ by:
\begin{equation}
\dot{x} \simeq \frac{x_i - x_{i-1}}{\tau} ~,
\end{equation}
$\ddot{x}$ by:
\begin{equation}
\begin{aligned}
\ddot{x} &\simeq
\frac{
\frac{x_i - x_{i-1}}{\tau}
-
\frac{x_{i-1} - x_{i-2}}{\tau}
}
{\tau} \\
& = \frac{x_i - 2x_{i - 1} + x_{i-2}}{\tau^2} ~.
\end{aligned}
\end{equation}
and finally, replacing $\mathrm{d}B_t$ by a Gaussian random number $w_i$ with zero mean and variance $\tau$, one can write $x_i$ as:
\begin{equation}
x_i = \frac{2 + \tau /\tau_\mathrm{B}}{1 + \tau / \tau_\mathrm{B} } x_{i-1}
- \frac{1}{1 + \tau / \tau_\mathrm{B}}x_{i-2}
+ \frac{\sqrt{2k_\mathrm{B}T\gamma}}{m(1 + \tau/\tau_\mathrm{B})} \tau w_i ~,
\end{equation}
In the following, we use Python to simulate such a movement and check the properties of the mean squared displacement. Then, I propose a Cython implementation that permits a $200$x speed improvement on the simulation.
```python
# Import important libraries
import numpy as np
import matplotlib.pyplot as plt
```
```python
# Just some matplotlib tweaks
import matplotlib as mpl
mpl.rcParams["xtick.direction"] = "in"
mpl.rcParams["ytick.direction"] = "in"
mpl.rcParams["lines.markeredgecolor"] = "k"
mpl.rcParams["lines.markeredgewidth"] = 1.5
mpl.rcParams["figure.dpi"] = 200
from matplotlib import rc
rc("font", family="serif")
rc("text", usetex=True)
rc("xtick", labelsize="medium")
rc("ytick", labelsize="medium")
rc("axes", labelsize="large")
def cm2inch(value):
return value / 2.54
```
```python
N = 1000000 # number of time steps
tau = 0.01 # simulation time step
m = 1e-8 # particle mass
a = 1e-6 # radius of the particle
eta = 0.001 # viscosity (here water)
gamma = 6 * np.pi * eta * a
kbT = 4e-21
tauB = m / gamma
```
With such properties we have a characteristic diffusion time $\tau_\mathrm{B} =0.53$ s.
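A quick numerical check of $\tau_\mathrm{B} = m/\gamma$ with the values defined above:
```python
# Characteristic (momentum relaxation) time of the particle
print(f"tau_B = {tauB:.2f} s")  # ~0.53 s
```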
```python
def xi(xi1, xi2):
"""
    Function that computes the position of a particle using the full (inertial) Langevin equation.
"""
t = tau / tauB
wi = np.random.normal(0, np.sqrt(tau))
return (
(2 + t) / (1 + t) * xi1
- 1 / (1 + t) * xi2
        + np.sqrt(2 * kbT * gamma) / (m * (1 + t)) * tau * wi
)
```
```python
def trajectory(N):
"""
Function generating a trajectory of length N.
"""
x = np.zeros(N)
for i in range(2, len(x)):
x[i] = xi(x[i - 1], x[i - 2])
return x
```
Now that the functions are set up, one can generate a trajectory of length $N$ by simply calling the function ```trajectory()```.
```python
# Generate a trajectory of 1e6 points.
x = trajectory(1000000)
```
```python
plt.plot(np.arange(len(x))*tau, x)
plt.title("Intertial Brownian trajectory")
plt.ylabel("$x$ (m)")
plt.xlabel("$t$ (s)")
plt.show()
```
## Cross checking
We now check that the simulated trajectory has the correct MSD properties, to ensure the simulation is done properly. The MSD is given by:
\begin{equation}
\mathrm{MSD}(\Delta t) = \left. \left\langle \left( x(t) - x(t+\Delta t) \right)^2 \right\rangle \right|_t ~,
\end{equation}
with $\Delta t$ a lag time. The MSD can be computed using the function defined in the cell below. For a lag time $\Delta t \ll \tau_\mathrm{B}$ we should have:
\begin{equation}
\mathrm{MSD}(\Delta t) = \frac{k_\mathrm{B}T}{m} \Delta t ^2 ~,
\end{equation}
and for $\Delta t \gg \tau_B$:
\begin{equation}
\mathrm{MSD}(\Delta t) = 2 D \Delta t~,
\end{equation}
with $D = k_\mathrm{B}T / (6 \pi \eta a)$.
```python
t = np.array([*np.arange(3,10,1), *np.arange(10,100,10), *np.arange(100,1000,100), *np.arange(1000,8000,1000)])
def msd(x, Dt):
    """Return the MSD of trajectory x for a list of lag-time indices Dt."""
    _msd = lambda x, lag: np.mean((x[:-lag] - x[lag:])**2)
    return [_msd(x, i) for i in Dt]
MSD = msd(x,t)
```
```python
D = kbT/(6*np.pi*eta*a)
t_plot = t*tau
plt.loglog(t*tau,MSD, "o")
plt.plot(t*tau, (2*D*t_plot), "--", color = "k", label="long time theory")
plt.plot(t*tau, kbT/m * t_plot**2, ":", color = "k", label="short time theory")
plt.ylabel("MSD (m$^2$)")
plt.xlabel("$\Delta t$ (s)")
horiz_data = [1e-8, 1e-17]
t_horiz = [tauB, tauB]
plt.plot(t_horiz, horiz_data, "k", label="$\\tau_\mathrm{B}$")
plt.legend()
plt.show()
```
The simulation gives the expected results. However, on the computer used, about 7 seconds are needed to generate this trajectory. If someone wants to look at fine effects and needs to generate millions of trajectories, this is too long. In order to speed up the process, in the following I use Cython to generate the trajectory in C.
## Cython acceleration
```python
# Loading Cython library
%load_ext Cython
```
We now write the same functions as in the first part of the appendix. However, we now indicate the type of each variable.
```cython
%%cython
import cython
cimport numpy as np
import numpy as np
from libc.math cimport sqrt
ctypedef np.float64_t dtype_t
cdef int N = 1000000 # length of the simulation
cdef dtype_t tau = 0.01 # simulation time step
cdef dtype_t m = 1e-8 # particle mass
cdef dtype_t a = 1e-6 # radius of the particle
cdef dtype_t eta = 0.001 # viscosity (here water)
cdef dtype_t gamma = 6 * 3.14 * eta * a
cdef dtype_t kbT = 4e-21
cdef dtype_t tauB = m/gamma
cdef dtype_t[:] x = np.zeros(N)
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
@cython.cdivision(True)
cdef dtype_t xi_cython( dtype_t xi1, dtype_t xi2, dtype_t wi):
cdef dtype_t t = tau / tauB
return (
(2 + t) / (1 + t) * xi1
- 1 / (1 + t) * xi2
+ sqrt(2 * kbT * gamma) / (m * (1 + t)) * tau * wi
)
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.nonecheck(False)
cdef dtype_t[:] _traj(dtype_t[:] x, dtype_t[:] wi):
cdef int i
for i in range(2, N):
x[i] = xi_cython(x[i-1], x[i-2], wi[i])
return x
def trajectory_cython():
cdef dtype_t[:] wi = np.random.normal(0, np.sqrt(tau), N).astype('float64')
return _traj(x, wi)
```
```python
%timeit trajectory(1000000)
```
6.79 s ± 92.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```python
%timeit trajectory_cython()
```
30.6 ms ± 495 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Again, we check that the results obtained with Cython give the correct MSD.
```python
x = np.asarray(trajectory_cython())
# Recompute the MSD for the Cython-generated trajectory
MSD = msd(x, t)
D = kbT/(6*np.pi*eta*a)
t_plot = t*tau
plt.loglog(t*tau,MSD, "o")
plt.plot(t*tau, (2*D*t_plot), "--", color = "k", label="long time theory")
plt.plot(t*tau, kbT/m * t_plot**2, ":", color = "k", label="short time theory")
horiz_data = [1e-8, 1e-17]
t_horiz = [tauB, tauB]
plt.plot(t_horiz, horiz_data, "k", label="$\\tau_\mathrm{B}$")
plt.xlabel("$\\Delta t$ (s)")
plt.ylabel("MSD (m$^2$)")
plt.legend()
plt.show()
```
### Conclusion
Finally, one only needs $\simeq 30$ ms to generate the trajectory instead of $\simeq 7$ s, which is a $\simeq 250\times$ speed improvement. The simulation is now bound by the time needed to generate the array of random numbers, which is still done with a NumPy function. After further checking, NumPy's random generation is about as optimized as one can get, so there is no benefit in cythonizing the random generation. For the sake of completeness, below is a Cython version that generates Gaussian random numbers, found thanks to Senderle on [Stackoverflow](https://stackoverflow.com/questions/42767816/what-is-the-most-efficient-and-portable-way-to-generate-gaussian-random-numbers). Taking that into account, the actual computation of the trajectory **without** the random number generation is accelerated by a factor of $\simeq 1100\times$.
```cython
%%cython
from libc.stdlib cimport rand, RAND_MAX
from libc.math cimport log, sqrt
import numpy as np
import cython
cdef double random_uniform():
cdef double r = rand()
return r / RAND_MAX
cdef double random_gaussian():
cdef double x1, x2, w
w = 2.0
while (w >= 1.0):
x1 = 2.0 * random_uniform() - 1.0
x2 = 2.0 * random_uniform() - 1.0
w = x1 * x1 + x2 * x2
w = ((-2.0 * log(w)) / w) ** 0.5
return x1 * w
@cython.boundscheck(False)
cdef void assign_random_gaussian_pair(double[:] out, int assign_ix):
cdef double x1, x2, w
w = 2.0
while (w >= 1.0):
x1 = 2.0 * random_uniform() - 1.0
x2 = 2.0 * random_uniform() - 1.0
w = x1 * x1 + x2 * x2
w = sqrt((-2.0 * log(w)) / w)
out[assign_ix] = x1 * w
out[assign_ix + 1] = x2 * w
@cython.boundscheck(False)
def my_uniform(int n):
cdef int i
cdef double[:] result = np.zeros(n, dtype='f8', order='C')
for i in range(n):
result[i] = random_uniform()
return result
@cython.boundscheck(False)
def my_gaussian(int n):
cdef int i
cdef double[:] result = np.zeros(n, dtype='f8', order='C')
for i in range(n):
result[i] = random_gaussian()
return result
@cython.boundscheck(False)
def my_gaussian_fast(int n):
cdef int i
cdef double[:] result = np.zeros(n, dtype='f8', order='C')
for i in range(n // 2): # Int division ensures trailing index if n is odd.
assign_random_gaussian_pair(result, i * 2)
if n % 2 == 1:
result[n - 1] = random_gaussian()
return result
```
```python
%timeit my_gaussian_fast(1000000)
```
30.9 ms ± 941 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```python
%timeit np.random.normal(0,1,1000000)
```
26.4 ms ± 1.87 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
One can thus see that even a pure C implementation can be slower than the NumPy one, thanks to NumPy's highly optimized random number generator.
```python
fig = plt.figure(figsize = (cm2inch(16), cm2inch(10)))
gs = fig.add_gridspec(2, 1)
f_ax1 = fig.add_subplot(gs[0, 0])
for i in range(100):
x = np.asarray(trajectory_cython())* 1e6
plt.plot(np.arange(N)*tau / 60, x)
plt.ylabel("$x$ ($\mathrm{\mu m}$)")
plt.xlabel("$t$ (min)")
plt.text(5,100, "a)")
plt.xlim([0,160])
f_ax1 = fig.add_subplot(gs[1, 0])
x = np.asarray(trajectory_cython())
# Recompute the MSD for the trajectory shown in this figure
MSD = msd(x, t)
D = kbT/(6*np.pi*eta*a)
plt.loglog(t*tau,MSD, "o")
t_plot = np.linspace(0.5e-2,5e3,1000)
plt.plot(t_plot, (2*D*t_plot), "--", color = "k", label="long time theory")
plt.plot(t_plot, kbT/m * t_plot**2, ":", color = "k", label="short time theory")
horiz_data = [1e-7, 1e-18]
t_horiz = [tauB, tauB]
plt.plot(t_horiz, horiz_data, "k", label="$\\tau_\mathrm{B}$")
plt.ylabel("MSD (m$^2$)")
plt.xlabel("$\\Delta t$ (s)")
ax = plt.gca()
locmaj = mpl.ticker.LogLocator(base=10.0, subs=(1.0, ), numticks=100)
ax.yaxis.set_major_locator(locmaj)
locmin = mpl.ticker.LogLocator(base=10.0, subs=np.arange(2, 10) * .1,
numticks=100)
ax.yaxis.set_minor_locator(locmin)
ax.yaxis.set_minor_formatter(mpl.ticker.NullFormatter())
plt.legend(frameon=False)
plt.text(0.7e2,1e-15, "b)")
plt.xlim([0.8e-2,1e2])
plt.ylim([1e-16,1e-10])
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
plt.savefig("intertial_langevin.pdf")
plt.show()
```
| be32e71bbcec574a7ec139f5ff4e06eb52789e19 | 606,134 | ipynb | Jupyter Notebook | 03_tail/inertial_sim/inertial_Brownian_motion.ipynb | eXpensia/Confined-Brownian-Motion | bd0eb6dea929727ea081dae060a7d1aa32efafd1 | [
"MIT"
] | null | null | null | 03_tail/inertial_sim/inertial_Brownian_motion.ipynb | eXpensia/Confined-Brownian-Motion | bd0eb6dea929727ea081dae060a7d1aa32efafd1 | [
"MIT"
] | null | null | null | 03_tail/inertial_sim/inertial_Brownian_motion.ipynb | eXpensia/Confined-Brownian-Motion | bd0eb6dea929727ea081dae060a7d1aa32efafd1 | [
"MIT"
] | null | null | null | 914.229261 | 369,168 | 0.9503 | true | 3,724 | Qwen/Qwen-72B | 1. YES
2. YES | 0.944177 | 0.870597 | 0.821998 | __label__eng_Latn | 0.748395 | 0.748109 |