|
arXiv:1001.0024v1 [q-fin.CP] 30 Dec 2009
|
Journal of Circuits, Systems, and Computers |
|
© World Scientific Publishing Company
|
BAYESIAN INFERENCE OF STOCHASTIC VOLATILITY MODEL |
|
BY HYBRID MONTE CARLO |
|
Tetsuya Takaishi† |
|
Hiroshima University of Economics, |
|
Hiroshima 731-0192 JAPAN |
|
†takaishi@hiroshima-u.ac.jp |
|
Received (Day Month Year) |
|
Revised (Day Month Year) |
|
Accepted (Day Month Year) |
|
The hybrid Monte Carlo (HMC) algorithm is applied to the Bayesian inference of the stochastic volatility (SV) model. We use the HMC algorithm for the Markov chain Monte Carlo updates of the volatility variables of the SV model. First we estimate the parameters of the SV model from artificial financial data and compare the results from the HMC algorithm with those from the Metropolis algorithm. We find that the HMC algorithm decorrelates the volatility variables faster than the Metropolis algorithm. Second we make an empirical study of the time series of the Nikkei 225 stock index by the HMC algorithm. We find correlation behavior of the sampled data similar to the results from the artificial financial data, and obtain a φ value close to one (φ ≈ 0.977), which means that the time series has strong persistency of the volatility shock.

Keywords: Hybrid Monte Carlo Algorithm, Stochastic Volatility Model, Markov Chain Monte Carlo, Bayesian Inference, Financial Data Analysis
|
1. Introduction |
|
Many empirical studies of financial prices such as stock indexes and exchange rates have confirmed that financial time series of price returns show various interesting properties which cannot be derived from the simple assumption that the price returns follow geometric Brownian motion. These properties are now classified as stylized facts.^{1,2} Some examples of the stylized facts are (i) fat-tailed distributions of returns, (ii) volatility clustering, and (iii) slow decay of the autocorrelation of the absolute returns. The true dynamics behind the stylized facts is not fully understood. In order to imitate real financial markets and to understand the origins of the stylized facts, a variety of models have been proposed and examined, and many of them are indeed able to capture some of the stylized facts.^{3-14}
|
In empirical finance the volatility is an important quantity for measuring risk. One of the stylized facts of the volatility is that the volatility of price returns changes in time and shows clustering, so-called "volatility clustering". The histogram of the resulting price returns then shows a fat-tailed distribution, which indicates that the probability of a large price change is higher than for a Gaussian distribution. In order to mimic these empirical properties of the volatility and to forecast future volatility values, Engle advocated the autoregressive conditional heteroskedasticity (ARCH) model,^{15} where the volatility variable changes deterministically depending on the past squared value of the return. The ARCH model was later generalized by adding a dependence on the past volatility to the volatility change. This model is known as the generalized ARCH (GARCH) model.^{16} The parameters of the GARCH model applied to financial time series are conventionally determined by the maximum likelihood method. There are many extended versions of the GARCH model, such as the EGARCH,^{17} GJR,^{18} and QGARCH^{19,20} models, which are designed to increase the ability to forecast the volatility.
|
The stochastic volatility (SV) model^{21,22} is another model which captures the properties of the volatility. In contrast to the GARCH model, the volatility of the SV model changes stochastically in time. As a result, the likelihood function of the SV model is given as a multiple integral over the volatility variables. Such an integral is in general not analytically calculable, and thus the determination of the parameters of the SV model by the maximum likelihood method becomes difficult. To overcome this difficulty, the Markov Chain Monte Carlo (MCMC) method based on the Bayesian approach was proposed and developed.^{21} In the MCMC approach to the SV model one has to update not only the parameter variables but also the volatility variables, drawing from the joint probability distribution of the parameters and the volatility variables. The number of volatility variables to be updated increases with the size of the time series. The first proposed update scheme for the volatility variables is based on local updates such as the Metropolis-type algorithm.^{21} It is known, however, that when a local update scheme is used for volatility variables that interact with their neighboring variables in time, the autocorrelation time of the sampled volatility variables becomes large and the local update scheme becomes inefficient.^{23} In order to improve the efficiency of the local update method, blocked schemes which update several variables at once have also been proposed.^{23,24} A recent survey of MCMC studies of the SV model is given in Ref. 25.
|
In our study we use the HMC algorithm,^{26} which had not been considered seriously for MCMC simulations of the SV model. In finance there exists an application of the HMC algorithm to the GARCH model,^{27} where the three GARCH parameters are updated by the HMC scheme. It is more interesting to apply the HMC algorithm to updates of the volatility variables, because the HMC algorithm is a global update scheme which can update all variables at once, something that cannot be achieved by the standard Metropolis algorithm. A preliminary study^{28} shows that the HMC algorithm samples the volatility variables effectively. In this paper we give a detailed description of the HMC algorithm and examine it on artificial financial data up to a data size of T = 5000. We also make an empirical analysis of the Nikkei 225 stock index by the HMC algorithm.
|
|
2. Stochastic Volatility Model |
|
The standard version of the SV model^{21,22} is given by

y_t = \sigma_t \epsilon_t = \exp(h_t/2)\,\epsilon_t,   (1)

h_t = \mu + \phi(h_{t-1}-\mu) + \eta_t,   (2)

where y_t (t = 1, 2, ..., n) represents the time series data, h_t is defined by h_t = ln σ²_t, and σ_t is called the volatility. We also call h_t the volatility variable. The error terms ε_t and η_t are taken from independent normal distributions N(0, 1) and N(0, σ²_η), respectively. We assume that |φ| < 1. When φ is close to one, the model exhibits strong persistency of the volatility shock.
|
For this model the parameters to be determined are µ, φ and σ²_η. Let us write θ = (µ, φ, σ²_η). Then the likelihood function L(θ) for the SV model is written as

L(\theta) = \int \prod_{t=1}^{n} f(\epsilon_t|\sigma_t^2)\, f(h_t|\theta)\, dh_1 dh_2 \cdots dh_n,   (3)

where

f(\epsilon_t|\sigma_t^2) = \left(2\pi\sigma_t^2\right)^{-1/2} \exp\left(-\frac{y_t^2}{2\sigma_t^2}\right),   (4)

f(h_1|\theta) = \left(\frac{2\pi\sigma_\eta^2}{1-\phi^2}\right)^{-1/2} \exp\left(-\frac{(h_1-\mu)^2}{2\sigma_\eta^2/(1-\phi^2)}\right),   (5)

f(h_t|\theta) = \left(2\pi\sigma_\eta^2\right)^{-1/2} \exp\left(-\frac{[h_t-\mu-\phi(h_{t-1}-\mu)]^2}{2\sigma_\eta^2}\right).   (6)
|
As seen in Eq.(3), L(θ) is constructed as a multiple integral over the volatility variables. For such an integral it is difficult to apply the maximum likelihood method, which estimates θ by maximizing the likelihood function. Instead of using the maximum likelihood method, we perform MCMC simulations based on Bayesian inference, as explained in the next section.
|
3. Bayesian Inference for the SV Model
|
From Bayes' rule, the probability distribution of the parameters θ is given by

f(\theta|y) = \frac{1}{Z} L(\theta)\,\pi(\theta),   (7)

where Z is the normalization constant Z = \int L(\theta)\pi(\theta)\, d\theta and π(θ) is a prior distribution of θ for which we make a certain assumption. The values of the parameters are inferred as the expectation values of θ given by

\langle\theta\rangle = \int \theta\, f(\theta|y)\, d\theta.   (8)

In general this integral cannot be performed analytically. In that case, one can use the MCMC method to estimate the expectation values numerically.
|
|
In the MCMC method, we first generate a series of θ with probability P(θ) = f(θ|y). Let θ^{(i)}, i = 1, ..., k, be the values of θ generated by the MCMC sampling. Using these k values, the expectation value of θ is estimated by the average

\langle\theta\rangle = \frac{1}{k}\sum_{i=1}^{k}\theta^{(i)}.   (9)

The statistical error for k independent samples is proportional to 1/\sqrt{k}. When the sampled data are correlated, the statistical error is instead proportional to \sqrt{2\tau/k}, where τ is the autocorrelation time of the sampled data; equivalently, the effective number of independent samples is reduced to k/(2τ). The value of τ depends on the MCMC sampling scheme used. In order to reduce the statistical error for a limited number of samples, it is therefore better to choose an MCMC method which generates data with a small τ.
|
3.1. MCMC Sampling of θ
|
For the SV model, in addition to θ, the volatility variables h_t also have to be updated, since they must be integrated out as in Eq.(3). Let P(θ, h_t) be the joint probability distribution of θ and h_t. Then P(θ, h_t) is given by

P(\theta, h_t) \sim \bar{L}(\theta, h_t)\,\pi(\theta),   (10)

where

\bar{L}(\theta, h_t) = \prod_{t=1}^{n} f(\epsilon_t|h_t)\, f(h_t|\theta).   (11)

For the prior π(θ) we assume π(σ²_η) ∼ (σ²_η)^{-1} and, for the other parameters, π(µ) = π(φ) = constant.

The MCMC sampling schemes for θ are given in the following.^{21,22} The probability distribution of each parameter is derived from Eq.(10) by extracting the part that includes the corresponding parameter. (A short code sketch of the three draws is given after the list.)
|
• σ²_η update scheme.
The probability distribution of σ²_η is given by

P(\sigma_\eta^2) \sim (\sigma_\eta^2)^{-\frac{n}{2}-1} \exp\left(-\frac{A}{\sigma_\eta^2}\right),   (12)

where

A = \frac{1}{2}\left\{(1-\phi^2)(h_1-\mu)^2 + \sum_{t=2}^{n}\left[h_t-\mu-\phi(h_{t-1}-\mu)\right]^2\right\}.   (13)

Since Eq.(12) is an inverse gamma distribution, we can easily draw a value of σ²_η by using an appropriate statistical library.
|
|
• µ update scheme.
The probability distribution of µ is given by

P(\mu) \sim \exp\left\{-\frac{B}{2\sigma_\eta^2}\left(\mu-\frac{C}{B}\right)^2\right\},   (14)

where

B = (1-\phi^2) + (n-1)(1-\phi)^2,   (15)

and

C = (1-\phi^2)h_1 + (1-\phi)\sum_{t=2}^{n}(h_t-\phi h_{t-1}).   (16)

µ is drawn from the Gaussian distribution of Eq.(14).
|
• φ update scheme.
The probability distribution of φ is given by

P(\phi) \sim (1-\phi^2)^{1/2}\exp\left\{-\frac{D}{2\sigma_\eta^2}\left(\phi-\frac{E}{D}\right)^2\right\},   (17)

where

D = -(h_1-\mu)^2 + \sum_{t=2}^{n}(h_{t-1}-\mu)^2, \quad E = \sum_{t=2}^{n}(h_t-\mu)(h_{t-1}-\mu).   (18)

In order to update φ with Eq.(17), we use the Metropolis-Hastings algorithm.^{30,31} Let us write Eq.(17) as P(φ) ∼ P_1(φ)P_2(φ), where

P_1(\phi) = (1-\phi^2)^{1/2},   (19)

P_2(\phi) \sim \exp\left\{-\frac{D}{2\sigma_\eta^2}\left(\phi-\frac{E}{D}\right)^2\right\}.   (20)

Since P_2(φ) is a Gaussian distribution, we can easily draw φ from Eq.(20). Let φ_new be a candidate drawn from Eq.(20). In order to obtain the correct distribution, φ_new is then accepted with the probability

P_{MH} = \min\left\{\frac{P(\phi_{new})P_2(\phi)}{P(\phi)P_2(\phi_{new})},\, 1\right\} = \min\left\{\sqrt{\frac{1-\phi_{new}^2}{1-\phi^2}},\, 1\right\}.   (21)

In addition to this step, we restrict φ to [−1, 1] to avoid a negative value under the square root.
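The sketch mentioned above illustrates the three parameter draws of Eqs.(12)-(21). It is a minimal illustration rather than the author's code: the function names are ours, and the volatility array h is assumed to be available from the current MCMC state.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_sigma_eta2(h, mu, phi):
    """Draw sigma_eta^2 from the inverse gamma distribution of Eqs.(12)-(13)."""
    n = len(h)
    resid = h[1:] - mu - phi * (h[:-1] - mu)
    A = 0.5 * ((1.0 - phi**2) * (h[0] - mu)**2 + np.sum(resid**2))
    # If X ~ Gamma(shape=n/2, scale=1/A), then 1/X has the density of Eq.(12).
    return 1.0 / rng.gamma(shape=0.5 * n, scale=1.0 / A)

def draw_mu(h, phi, sigma_eta2):
    """Draw mu from the Gaussian distribution of Eqs.(14)-(16)."""
    n = len(h)
    B = (1.0 - phi**2) + (n - 1) * (1.0 - phi)**2
    C = (1.0 - phi**2) * h[0] + (1.0 - phi) * np.sum(h[1:] - phi * h[:-1])
    return rng.normal(loc=C / B, scale=np.sqrt(sigma_eta2 / B))

def draw_phi(h, mu, phi_old, sigma_eta2):
    """Metropolis-Hastings draw of phi, Eqs.(17)-(21)."""
    D = -(h[0] - mu)**2 + np.sum((h[:-1] - mu)**2)
    E = np.sum((h[1:] - mu) * (h[:-1] - mu))
    phi_new = rng.normal(loc=E / D, scale=np.sqrt(sigma_eta2 / D))  # proposal from P_2
    if abs(phi_new) >= 1.0:                 # keep phi inside (-1, 1)
        return phi_old
    p_acc = min(1.0, np.sqrt((1.0 - phi_new**2) / (1.0 - phi_old**2)))  # Eq.(21)
    return phi_new if rng.random() < p_acc else phi_old
```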
|
3.2. Probability Distribution for h_t
|
The probability distribution of the volatility variables h_t is given by

P(h_t) \equiv P(h_1, h_2, ..., h_n) \sim \exp\left(-\sum_{t=1}^{n}\left\{\frac{h_t}{2}+\frac{y_t^2}{2}e^{-h_t}\right\} - \frac{(h_1-\mu)^2}{2\sigma_\eta^2/(1-\phi^2)} - \sum_{t=2}^{n}\frac{[h_t-\mu-\phi(h_{t-1}-\mu)]^2}{2\sigma_\eta^2}\right).   (22)

This probability distribution is not a simple function from which to draw values of h_t. A conventional method is the Metropolis algorithm,^{30,31} which updates the variables locally. Several methods^{21,22,23,24} have been developed to update h_t according to Eq.(22). Here we use the HMC algorithm to update h_t globally. The HMC algorithm is described in the next section.
|
4. Hybrid Monte Carlo Algorithm |
|
The HMC algorithm was originally developed for MCMC simulations of lattice Quantum Chromodynamics (QCD).^{26} A major difficulty of lattice QCD calculations is the inclusion of dynamical fermions. The effect of the dynamical fermions is incorporated through the determinant of the fermion matrix. The computation of this determinant requires O(V³) arithmetic operations,^{29} where V is the volume of the 4-dimensional lattice; a typical volume is V > 10⁴. The standard Metropolis algorithm, which updates variables locally, does not work here, since each local update would require an O(V³) determinant calculation, resulting in an unacceptable total computational cost. Since the HMC algorithm is a global update method, its computational cost remains acceptable.
|
The basic idea of the HMC algorithm is a combination of molecular dynamics (MD) simulation and a Metropolis accept/reject step. Consider evaluating the following expectation value ⟨O(x)⟩ by the HMC algorithm:

\langle O(x)\rangle = \int O(x)\, f(x)\, dx = \int O(x)\, e^{\ln f(x)}\, dx,   (23)

where x = (x_1, x_2, ..., x_n), f(x) is a probability density and O(x) is a function of x. First we introduce momentum variables p = (p_1, p_2, ..., p_n) conjugate to the variables x and rewrite Eq.(23) as

\langle O(x)\rangle = \frac{1}{Z}\int O(x)\, e^{-\frac{1}{2}p^2 + \ln f(x)}\, dx\, dp = \frac{1}{Z}\int O(x)\, e^{-H(p,x)}\, dx\, dp,   (24)

where Z is a normalization constant given by

Z = \int \exp\left(-\frac{1}{2}p^2\right) dp,   (25)

and p² stands for \sum_{i=1}^{n} p_i^2. H(p, x) is the Hamiltonian defined by

H(p,x) = \frac{1}{2}p^2 - \ln f(x).   (26)

Note that the introduction of p does not change the value of ⟨O(x)⟩.
|
In the HMC algorithm, new candidates for the variables (p, x) are drawn by integrating Hamilton's equations of motion,

\frac{dx_i}{dt} = \frac{\partial H}{\partial p_i},   (27)

\frac{dp_i}{dt} = -\frac{\partial H}{\partial x_i}.   (28)

In general Hamilton's equations of motion cannot be solved analytically, so we solve them numerically by MD simulation. Let T_{MD}(∆t) be an elementary MD step with step size ∆t, which evolves (p(t), x(t)) to (p(t+∆t), x(t+∆t)):

T_{MD}(\Delta t): (p(t), x(t)) \to (p(t+\Delta t), x(t+\Delta t)).   (29)

Any integrator can be used for the MD simulation provided that the following conditions are satisfied:^{26}

• area preservation,

dp(t)\, dx(t) = dp(t+\Delta t)\, dx(t+\Delta t);   (30)

• time reversibility,

T_{MD}(-\Delta t): (p(t+\Delta t), x(t+\Delta t)) \to (p(t), x(t)).   (31)
|
The simplest and most frequently used integrator satisfying the above two conditions is the second order leapfrog integrator given by

x_i(t+\Delta t/2) = x_i(t) + \frac{\Delta t}{2} p_i(t),

p_i(t+\Delta t) = p_i(t) - \Delta t\, \frac{\partial H}{\partial x_i},

x_i(t+\Delta t) = x_i(t+\Delta t/2) + \frac{\Delta t}{2} p_i(t+\Delta t).   (32)

In this study we use this integrator. The elementary step of Eq.(32) is repeated N times, so that the total trajectory length λ of the MD is λ = N × ∆t.
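As a sketch (our own illustration, not the paper's code), the leapfrog trajectory of Eq.(32) for a generic target can be written as follows; grad_U denotes the gradient of the potential U(x) = −ln f(x), so that ∂H/∂x_i = ∂U/∂x_i.

```python
import numpy as np

def leapfrog(x, p, grad_U, dt, n_steps):
    """Integrate Hamilton's equations with the 2nd order leapfrog of Eq.(32)."""
    x = x.copy()
    p = p.copy()
    for _ in range(n_steps):
        x += 0.5 * dt * p        # half step in x
        p -= dt * grad_U(x)      # full step in p, with dH/dx evaluated at the half step
        x += 0.5 * dt * p        # half step in x
    return x, p
```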
|
At the end of the trajectory we obtain new candidates (p′, x′). These candidates are accepted by a Metropolis test, i.e. (p′, x′) are globally accepted with the probability

P = \min\left\{1, \frac{\exp(-H(p', x'))}{\exp(-H(p, x))}\right\} = \min\{1, \exp(-\Delta H)\},   (33)

where ∆H is the energy difference ∆H = H(p′, x′) − H(p, x). Since we integrate Hamilton's equations of motion only approximately, the total Hamiltonian is not conserved, i.e. ∆H ≠ 0. The acceptance rate, or equivalently the magnitude of ∆H, is tuned via the step size ∆t to obtain a reasonable acceptance. There exists an optimal acceptance rate, which is about 60−70% for second order integrators.^{32,33} Remarkably, the optimal acceptance does not depend on the model under consideration. For an n-th order integrator the optimal acceptance is expected to be^{32} ∼ exp(−1/n).

We could also use higher order integrators, which give a smaller energy difference ∆H. However, higher order integrators are not always more effective, since they need more arithmetic operations than lower order integrators.^{32,33} The efficiency of higher order integrators depends on the model under consideration. There also exist improved integrators which require fewer arithmetic operations than the conventional integrators.^{34}
|
For the volatility variables h_t, from Eq.(22), the Hamiltonian can be defined as

H(p_t, h_t) = \sum_{i=1}^{n}\frac{1}{2}p_i^2 + \sum_{i=1}^{n}\left\{\frac{h_i}{2}+\frac{y_i^2}{2}e^{-h_i}\right\} + \frac{(h_1-\mu)^2}{2\sigma_\eta^2/(1-\phi^2)} + \sum_{i=2}^{n}\frac{[h_i-\mu-\phi(h_{i-1}-\mu)]^2}{2\sigma_\eta^2},   (34)

where p_i is the momentum conjugate to h_i. Using this Hamiltonian we perform the HMC updates of h_t.
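A minimal sketch of one HMC update of the volatility variables under Eq.(34) is given below. It reuses the leapfrog function sketched above; the function names and the fixed step size are our own choices and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def potential(h, y, mu, phi, sigma_eta2):
    """U(h) = H(p, h) - p^2/2, i.e. the h-dependent part of Eq.(34)."""
    resid = h[1:] - mu - phi * (h[:-1] - mu)
    return (np.sum(h / 2.0 + 0.5 * y**2 * np.exp(-h))
            + (h[0] - mu)**2 * (1.0 - phi**2) / (2.0 * sigma_eta2)
            + np.sum(resid**2) / (2.0 * sigma_eta2))

def grad_potential(h, y, mu, phi, sigma_eta2):
    """dU/dh_i, i.e. dH/dh_i of Eq.(34)."""
    g = 0.5 - 0.5 * y**2 * np.exp(-h)                   # likelihood term
    resid = h[1:] - mu - phi * (h[:-1] - mu)
    g[0] += (h[0] - mu) * (1.0 - phi**2) / sigma_eta2   # stationary prior on h_1
    g[1:] += resid / sigma_eta2                         # AR(1) term w.r.t. h_t
    g[:-1] += -phi * resid / sigma_eta2                 # AR(1) term w.r.t. h_{t-1}
    return g

def hmc_update(h, y, mu, phi, sigma_eta2, dt, n_steps):
    """One global HMC update of all volatility variables, accepted as in Eq.(33)."""
    p = rng.normal(size=len(h))                         # refresh momenta ~ N(0, 1)
    H_old = 0.5 * np.dot(p, p) + potential(h, y, mu, phi, sigma_eta2)
    grad = lambda x: grad_potential(x, y, mu, phi, sigma_eta2)
    h_new, p_new = leapfrog(h, p, grad, dt, n_steps)    # leapfrog() sketched in Sec. 4
    H_new = 0.5 * np.dot(p_new, p_new) + potential(h_new, y, mu, phi, sigma_eta2)
    # global Metropolis test with probability min{1, exp(-dH)}
    return h_new if rng.random() < np.exp(-(H_new - H_old)) else h
```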
|
5. Numerical Studies |
|
In order to test the HMC algorithm we use artificial financial time series data generated by the SV model with a set of known parameters, and perform MCMC simulations on these artificial data by the HMC algorithm. We also perform MCMC simulations by the Metropolis algorithm on the same artificial data and compare the results with those from the HMC algorithm.

Using Eqs.(1)-(2) with φ = 0.97, σ²_η = 0.05 and µ = −1 we generated 5000 time series data points. The generated time series is shown in Fig.1. From these data we prepared 3 data sets: (1) T = 1000 (the first 1000 points of the time series), (2) T = 2000 (the first 2000 points of the time series) and (3) T = 5000 (the whole data set). To these data sets we applied Bayesian inference by the HMC and Metropolis algorithms. Strictly speaking, both algorithms are used only for the MCMC update of the volatility variables; for the update of the SV parameters we used the update schemes of Sec.3.1.
|
For the volatility update in the Metropolis algorithm, we draw a new candidate of the volatility variable randomly, i.e. a new volatility h_t^{new} is obtained from the previous value h_t^{old} by

h_t^{new} = h_t^{old} + \delta(r - 0.5),   (35)

where r is a uniform random number in [0, 1) and δ is a parameter to tune the acceptance. The new volatility h_t^{new} is accepted with the acceptance probability

P_{metro} = \min\left\{1, \frac{P(h_t^{new})}{P(h_t^{old})}\right\},   (36)

where P(h_t) is given by Eq.(22).
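For comparison, a minimal sketch of this local Metropolis update is given below (our own illustration; it reuses the potential function from the HMC sketch and, for brevity, recomputes the full potential at every site, whereas an efficient implementation would use only the terms containing h_t).

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis_sweep(h, y, mu, phi, sigma_eta2, delta):
    """One sweep of local volatility updates according to Eqs.(35)-(36)."""
    for t in range(len(h)):
        U_old = potential(h, y, mu, phi, sigma_eta2)
        h_old = h[t]
        h[t] = h_old + delta * (rng.random() - 0.5)     # candidate, Eq.(35)
        U_new = potential(h, y, mu, phi, sigma_eta2)
        # P(h_new)/P(h_old) = exp(-(U_new - U_old)); accept with min{1, ratio}, Eq.(36)
        if rng.random() >= np.exp(-(U_new - U_old)):
            h[t] = h_old                                # reject: restore the old value
    return h
```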
|
The initial parameters for the MCMC simulations are set to φ = 0.5, σ²_η = 1.0 and µ = 0. The first 10000 samples are discarded as thermalization (burn-in). Then 200000 samples are recorded for the analysis. The total trajectory length λ of the HMC algorithm is set to λ = 1, and the step size ∆t is tuned so that the acceptance rate of the volatility variables becomes larger than 50%.
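The paper does not specify how ∆t is tuned; one simple possibility (purely our own sketch) is a crude adjustment rule applied periodically during burn-in, with the number of leapfrog steps then set to N = λ/∆t rounded to an integer.

```python
def adjust_step_size(dt, acceptance, target=0.5):
    """Crude tuning rule: shrink dt when the measured acceptance is below target,
    enlarge it otherwise (smaller dt generally gives smaller dH and higher acceptance)."""
    return 0.9 * dt if acceptance < target else 1.1 * dt
```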
|
First we analyze the sampled volatility variables. Fig.2 shows the Monte Carlo (MC) history of the volatility variable h_100 from the T = 2000 data set. We take h_100 as representative of the volatility variables since we have observed similar behavior for the other volatility variables. See also Fig.3 for the similarity of the autocorrelation functions of the volatility variables.

Fig. 1. The artificial SV time series y_t used for this study.

Fig. 2. Monte Carlo histories of h_100 generated by HMC (left) and Metropolis (right) with the T = 2000 data set. The Monte Carlo histories in the window from 50000 to 60000 are shown.
|
A comparison of the volatility histories in Fig.2 clearly indicates that the correlation of the volatility variables sampled by the HMC algorithm is smaller than that of the Metropolis algorithm. To quantify this we calculate the autocorrelation function (ACF) of the volatility variable. The ACF is defined as

ACF(t) = \frac{\frac{1}{N}\sum_{j=1}^{N}\left(x(j)-\langle x\rangle\right)\left(x(j+t)-\langle x\rangle\right)}{\sigma_x^2},   (37)

where ⟨x⟩ and σ²_x are the average and the variance of x, respectively.

Fig.3 shows the ACF of three volatility variables, h_10, h_20 and h_100, sampled by the HMC algorithm. These volatility variables show similar correlation behavior, and the other volatility variables behave similarly as well. Thus hereafter we focus on the volatility variable h_100 as the representative one.
|
Fig.4 compares the ACF of h_100 for the HMC and Metropolis algorithms. It is obvious that the ACF of the HMC decreases more rapidly than that of the Metropolis algorithm.

Fig. 3. Autocorrelation functions of three volatility variables h_10, h_20 and h_100 sampled by the HMC algorithm for the T = 2000 data set. These autocorrelation functions show similar behavior.

Fig. 4. Autocorrelation function of the volatility variable h_100 by the HMC and Metropolis algorithms for the T = 2000 data set.

We also calculate the integrated autocorrelation time τ_int defined by

\tau_{int} = \frac{1}{2} + \sum_{t=1}^{\infty} ACF(t).   (38)
|
The results for τ_int of the volatility variables are given in Table 1. The values in parentheses represent the statistical errors estimated by the jackknife method. We find that the HMC algorithm gives a smaller autocorrelation time than the Metropolis algorithm, which means that the HMC algorithm samples the volatility variables more effectively than the Metropolis algorithm.
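As a sketch of this measurement (our own illustration, with a finite lag cutoff replacing the infinite sum of Eq.(38) and the average in Eq.(37) taken over the available N − t pairs), the ACF and τ_int of a sampled series can be estimated as follows.

```python
import numpy as np

def acf(x, t_max):
    """Autocorrelation function of Eq.(37) for lags t = 0, ..., t_max."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    var = xm.var()
    n = len(x)
    return np.array([np.mean(xm[: n - t] * xm[t:]) / var for t in range(t_max + 1)])

def tau_int(x, t_max=1000):
    """Integrated autocorrelation time of Eq.(38), truncated at lag t_max."""
    rho = acf(x, t_max)
    return 0.5 + np.sum(rho[1:])

# The statistical error of the mean of a correlated series x is then roughly
# x.std() * np.sqrt(2 * tau_int(x) / len(x)).
```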
|
Next we analyze the sampled SV parameters. Fig.5 shows MC histories of the φ parameter sampled by the HMC and Metropolis algorithms. Both algorithms appear to show similar correlations for φ. This similarity is also seen in the ACF in Fig.6(left), i.e. both autocorrelation functions decrease at a similar rate with time t. The autocorrelation times of φ are very large, as seen in Table 1. We find similar behavior for σ²_η, i.e. the autocorrelation times of σ²_η are also large for both algorithms. On the other hand we see small autocorrelations for µ, as seen in Fig.6(right). Furthermore we observe that the HMC algorithm gives a smaller τ_int for µ than the Metropolis algorithm, which means that the HMC algorithm samples µ more effectively than the Metropolis algorithm, although the values of τ_int for µ are already very small even for the Metropolis algorithm.

                   φ           µ          σ²_η         h_100
true               0.97        -1         0.05
T=1000
  HMC              0.973       -1.13      0.053
    SD             0.010       0.51       0.017
    SE             0.0004      0.003      0.001
    2τ_int         360(80)     3.1(5)     820(200)     12(1)
  Metropolis       0.973       -1.14      0.053
    SD             0.011       0.40       0.017
    SE             0.0005      0.003      0.0013
    2τ_int         320(60)     10.1(8)    720(160)     190(20)
T=2000
  HMC              0.978       -0.92      0.053
    SD             0.007       0.26       0.012
    SE             0.0003      0.001      0.0009
    2τ_int         540(60)     3(1)       1200(150)    18(1)
  Metropolis       0.978       -0.92      0.052
    SD             0.007       0.26       0.011
    SE             0.0003      0.003      0.0009
    2τ_int         400(100)    13(2)      1000(270)    210(50)
T=5000
  HMC              0.969       -1.00      0.056
    SD             0.005       0.11       0.009
    SE             0.0003      0.0004     0.0007
    2τ_int         670(100)    4.2(7)     1250(170)    10(1)
  Metropolis       0.970       -1.00      0.054
    SD             0.005       0.12       0.008
    SE             0.00023     0.0011     0.0005
    2τ_int         510(90)     30(10)     960(180)     230(28)

Table 1. Results estimated by the HMC and Metropolis algorithms. SD stands for Standard Deviation and SE for Statistical Error. The statistical errors are estimated by the jackknife method. We observe no significant differences in the autocorrelation times among the three data sets.
|
The values of the SV parameters estimated by the HMC and the Metropolis algorithms are listed in Table 1. The results from both algorithms reproduce well the true values used for the generation of the artificial financial data. Furthermore, for each parameter and each data set, the parameters estimated by the HMC and the Metropolis algorithms agree well, and their standard deviations also agree well. This is not surprising because the same artificial financial data, and thus the same likelihood function, is used for both MCMC simulations, so the two algorithms should agree with each other.
|
|
Fig. 5. Monte Carlo histories of φ generated by HMC (left) and Metropolis (right) for the T = 2000 data set.

Fig. 6. Autocorrelation functions of φ (left) and µ (right) by the HMC and Metropolis algorithms for the T = 2000 data set.
|
6. Empirical Analysis |
|
In this section we make an empirical study of the SV model by the HMC algorithm. The empirical study is based on daily data of the Nikkei 225 stock index. The sampling period is 4 January 1995 to 30 December 2005, and the number of observations is 2706. Fig.7(left) shows the time series of the data. Let p_i be the Nikkei 225 index at time i. The Nikkei 225 index p_i is transformed to returns as

r_i = 100\left[\ln(p_i/p_{i-1}) - \bar{s}\right],   (39)

where \bar{s} is the average value of ln(p_i/p_{i-1}). Fig.7(right) shows the time series of the returns calculated by Eq.(39).
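A minimal sketch of this transformation (our own illustration; prices is assumed to be the array of daily Nikkei 225 closing values):

```python
import numpy as np

def to_returns(prices):
    """Demeaned log returns scaled by 100, as in Eq.(39)."""
    s = np.diff(np.log(prices))       # ln(p_i / p_{i-1})
    return 100.0 * (s - s.mean())     # subtract the sample mean s-bar, multiply by 100
```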
|
We perform the same MCMC sampling by the HMC algorithm as in the previous section. The first 10000 MC samples are discarded and then 20000 samples are recorded for the analysis. The ACF of the sampled h_100 and of the sampled parameters are shown in Fig.8. Qualitatively the results for the ACF are similar to those from the artificial financial data, i.e. the ACF of the volatility and of µ decrease quickly while the ACF of φ and σ²_η decrease slowly. The estimated values of the parameters are summarized in Table 2. The value of φ is estimated to be φ ≈ 0.977. This value is very close to one, which means that the time series has strong persistency of the volatility shock. Similar values are also seen in the previous studies.^{21,22}
|
             φ           µ         σ²_η         h_100
HMC          0.977       0.52      0.020
  SD         0.006       0.13      0.005
  SE         0.001       0.0016    0.001
  2τ_int     560(190)    4(1)      1120(360)    21(5)

Table 2. Results estimated by the HMC algorithm for the Nikkei 225 index data.

Fig. 7. Nikkei 225 stock index from 4 January 1995 to 30 December 2005 (left) and returns (right).

Fig. 8. Autocorrelation functions of the volatility variable h_100 (left) and the sampled parameters (right).
|
7. Conclusions |
|
We applied the HMC algorithm to the Bayesian inference of the SV model and examined the properties of the HMC algorithm in terms of the autocorrelation times of the sampled data. We observed that the autocorrelation times of the volatility variables and of the µ parameter are small. On the other hand, large autocorrelation times are observed for the sampled φ and σ²_η parameters. Similar behavior of the autocorrelation times is also seen in the literature.^{22}
|
From the comparison of the HMC and Metropolis algorithms we find that the HMC algorithm samples the volatility variables and µ more effectively than the Metropolis algorithm. However, there is no significant difference for the φ and σ²_η sampling. Since the autocorrelation times of µ are estimated to be rather small for both algorithms, the improvement in sampling µ by the HMC algorithm is limited. Therefore the overall efficiency is considered to be similar to that of the Metropolis algorithm.
|
By using the artificial financial data we confirmed that the HMC algorithm correctly reproduces the true parameter values used to generate those data. Thus we conclude that the HMC algorithm can be used as an alternative algorithm for the Bayesian inference of the SV model.
|
If we are only interested in parameter estimation for the SV model, the HMC algorithm may not be a superior algorithm. However, the HMC algorithm samples the volatility variables effectively. Thus the HMC algorithm may serve as an efficient algorithm for calculating quantities that involve the volatility variables.
|
Acknowledgments. |
|
The numerical calculations were carried out on the SX8 at the Yukawa Institute for Theoretical Physics, Kyoto University, and on the Altix at the Institute of Statistical Mathematics.
|
Note added in proof. After this work was completed the author noticed a similar approach by Liu.^{35} The author is grateful to M.A. Girolami for drawing his attention to this.
|
References

1. R. Mantegna and H.E. Stanley, Introduction to Econophysics (Cambridge University Press, 1999).
2. R. Cont, Empirical Properties of Asset Returns: Stylized Facts and Statistical Issues, Quantitative Finance 1 (2001) 223–236.
3. D. Stauffer and T.J.P. Penna, Crossover in the Cont-Bouchaud percolation model for market fluctuations, Physica A 256 (1998) 284–290.
4. T. Lux and M. Marchesi, Scaling and Criticality in a Stochastic Multi-Agent Model of a Financial Market, Nature 397 (1999) 498–500.
5. G. Iori, Avalanche Dynamics and Trading Friction Effects on Stock Market Returns, Int. J. Mod. Phys. C 10 (1999) 1149–1162.
6. L.R. da Silva and D. Stauffer, Ising-correlated clusters in the Cont-Bouchaud stock market model, Physica A 294 (2001) 235–238.
7. D. Challet, A. Chessa, M. Marsili and Y.-C. Zhang, From Minority Games to real markets, Quantitative Finance 1 (2001) 168–176.
8. M. Raberto, S. Cincotti, S.M. Focardi and M. Marchesi, Agent-based Simulation of a Financial Market, Physica A 299 (2001) 319–327.
9. S. Bornholdt, Expectation Bubbles in a Spin Model of Markets: Intermittency from Frustration across Scales, Int. J. Mod. Phys. C 12 (2001) 667–674.
10. K. Sznajd-Weron and R. Weron, A Simple Model of Price Formation, Int. J. Mod. Phys. C 13 (2002) 115–123.
11. J.R. Sanchez, A Simple Model for Stocks Markets, Int. J. Mod. Phys. C 13 (2002) 639–644.
12. T. Yamano, Bornholdt's Spin Model of a Market Dynamics in High Dimensions, Int. J. Mod. Phys. C 13 (2002) 89–96.
13. T. Kaizoji, S. Bornholdt and Y. Fujiwara, Dynamics of Price and Trading Volume in a Spin Model of Stock Markets with Heterogeneous Agents, Physica A 316 (2002) 441–452.
14. T. Takaishi, Simulations of Financial Markets in a Potts-like Model, Int. J. Mod. Phys. C 13 (2005) 1311–1317.
15. R.F. Engle, Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of the United Kingdom Inflation, Econometrica 50 (1982) 987–1007.
16. T. Bollerslev, Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics 31 (1986) 307–327.
17. D.B. Nelson, Conditional Heteroskedasticity in Asset Returns: A New Approach, Econometrica 59 (1991) 347–370.
18. L.R. Glosten, R. Jagannathan and D.E. Runkle, On the Relation Between the Expected Value and the Volatility of the Nominal Excess Return on Stocks, Journal of Finance 48 (1993) 1779–1801.
19. R.F. Engle and V. Ng, Measuring and Testing the Impact of News on Volatility, Journal of Finance 48 (1993) 1749–1778.
20. E. Sentana, Quadratic ARCH Models, Review of Economic Studies 62 (1995) 639–661.
21. E. Jacquier, N.G. Polson and P.E. Rossi, Bayesian Analysis of Stochastic Volatility Models, Journal of Business & Economic Statistics 12 (1994) 371–389.
22. S. Kim, N. Shephard and S. Chib, Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models, Review of Economic Studies 65 (1998) 361–393.
23. N. Shephard and M.K. Pitt, Likelihood Analysis of Non-Gaussian Measurement Time Series, Biometrika 84 (1997) 653–667.
24. T. Watanabe and Y. Omori, A Multi-move Sampler for Estimating Non-Gaussian Time Series Models, Biometrika 91 (2004) 246–248.
25. M. Asai, Comparison of MCMC Methods for Estimating Stochastic Volatility Models, Computational Economics 25 (2005) 281–301.
26. S. Duane, A.D. Kennedy, B.J. Pendleton and D. Roweth, Hybrid Monte Carlo, Phys. Lett. B 195 (1987) 216–222.
27. T. Takaishi, Bayesian Estimation of GARCH Model by Hybrid Monte Carlo, Proceedings of the 9th Joint Conference on Information Sciences 2006, CIEF-214, doi:10.2991/jcis.2006.159.
28. T. Takaishi, Financial Time Series Analysis of SV Model by Hybrid Monte Carlo, Lecture Notes in Computer Science 5226 (2008) 929–936.
29. A. Ukawa, Lattice QCD Simulations Beyond the Quenched Approximation, Nucl. Phys. B (Proc. Suppl.) 10 (1989) 66–145.
30. N. Metropolis et al., Equations of State Calculations by Fast Computing Machines, J. of Chem. Phys. 21 (1953) 1087–1091.
31. W.K. Hastings, Monte Carlo Sampling Methods Using Markov Chains and Their Applications, Biometrika 57 (1970) 97–109.
32. T. Takaishi, Choice of Integrators in the Hybrid Monte Carlo Algorithm, Comput. Phys. Commun. 133 (2000) 6–17.
33. T. Takaishi, Higher Order Hybrid Monte Carlo at Finite Temperature, Phys. Lett. B 540 (2002) 159–165.
34. T. Takaishi and Ph. de Forcrand, Testing and Tuning Symplectic Integrators for Hybrid Monte Carlo Algorithm in Lattice QCD, Phys. Rev. E 73 (2006) 036706.
35. J.S. Liu, Monte Carlo Strategies in Scientific Computing (Springer, 2001).