<TeXmacs|1.99.12>

<style|<tuple|generic|number-europe>>

<\body>
  <doc-data|<doc-title|Monte Carlo methods>>

  Ref:\ 

  1. <em|Monte Carlo Methods: Theory and Applications> (in Chinese), Tang
  Chonglu (2014)

  2. Monte Carlo Statistical Methods, C. P. Robert and G. Casella

  <\table-of-contents|toc>
    <vspace*|1fn><with|font-series|bold|math-font-series|bold|1<space|2spc>Main
    idea> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-1><vspace|0.5fn>

    <vspace*|1fn><with|font-series|bold|math-font-series|bold|2<space|2spc>Random
    variable generator> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-2><vspace|0.5fn>

    <with|par-left|1tab|2.1<space|2spc>Famous distributions
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-3>>

    <with|par-left|1tab|2.2<space|2spc>Inverse transform method
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-4>>

    <with|par-left|1tab|2.3<space|2spc>Composition method
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-5>>

    <with|par-left|1tab|2.4<space|2spc>Acceptance-Rejection method
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-6>>

    <vspace*|1fn><with|font-series|bold|math-font-series|bold|3<space|2spc>Controlling
    the variance> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-7><vspace|0.5fn>

    <with|par-left|1tab|3.1<space|2spc>Common and Antithetic random variables
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-8>>

    <with|par-left|1tab|3.2<space|2spc>Control variables
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-9>>

    <with|par-left|1tab|3.3<space|2spc>Stratified sampling
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-10>>

    <with|par-left|1tab|3.4<space|2spc>Importance sampling
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-11>>

    <with|par-left|2tab|3.4.1<space|2spc>Variance-minimization method (VM
    method) <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-12>>

    <with|par-left|2tab|3.4.2<space|2spc>Cross-entropy method (CE method)
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-13>>

    <vspace*|1fn><with|font-series|bold|math-font-series|bold|4<space|2spc>Markov
    Chain Monte Carlo (MCMC)> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-14><vspace|0.5fn>

    <with|par-left|1tab|4.1<space|2spc>Markov Chain
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-15>>

    <with|par-left|1tab|4.2<space|2spc>Metropolis-Hastings algorithm
    <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
    <no-break><pageref|auto-16>>
  </table-of-contents>

  <section|Main idea>

  In statistics, one uses the mean value of samples to estimate the
  expectation of a random variable. This is essentially what the Monte Carlo
  method does.

  Let us state the problem: <math|X> is a random variable taking values in a
  space <math|\<Omega\>>, and the problem is to estimate the mean value of a
  function <math|H<around*|(|X|)>> of the random variable, i.e.
  <math|\<mu\>\<equiv\>E<around*|[|H<around*|(|X|)>|]>>.

  The Monte Carlo method uses the sample mean,

  <\equation*>
    \<mu\><rsub|n>=<frac|1|n><big|sum><rsub|j=1><rsup|n>H<around*|(|x<rsub|j>|)>
  </equation*>

  to estimate <math|\<mu\>>.

  <\note*>
    (i) One can simplify the above statement to the mean-value problem of a
    real-valued random variable <math|h<around*|(|X|)>\<backassign\>Z\<in\><with|font|cal|D>\<subset\>\<bbb-R\>>.
    The reason we state the problem as the mean value of a real-valued
    function <math|h<around*|(|X|)>> of a random variable, rather than of
    <math|Z> itself, is that the probability distribution of <math|Z> is
    harder to address; moreover, one can try to find some other random
    variable <math|<wide|X|~>> such that <math|Z=g<around*|(|<wide|X|~>|)>>.
    The central limit theorem (see below) shows that the transformation from
    <math|X> to <math|<wide|X|~>> is important.

    (ii) The Monte Carlo method works because computers can generate
    <strong|pseudo-random numbers> fast.
  </note*>

  The Law of Large Numbers and the Central Limit Theorem ensure that
  <math|\<mu\><rsub|n>> can be a good approximation of <math|\<mu\>>, and
  tell us how good the approximation is.

  <\theorem>
    [<strong|Law of Large Numbers>] Let the random variables
    <math|Z<rsub|1>,\<cdots\>,Z<rsub|n>,\<cdots\>> be i.i.d. (independent
    and identically distributed) with mean value <math|\<mu\>>. Then
    (\Pa.s.\Q means almost surely)

    <\equation*>
      <wide|Z|\<bar\>><rsub|n><above|\<longrightarrow\>|a.s.>\<mu\>,
    </equation*>

    i.e.,

    <\equation*>
      lim<rsub|n\<rightarrow\>\<infty\>>Pro<around*|(|<around*|\||<wide|Z|\<bar\>><rsub|n>-\<mu\>|\|>\<geqslant\>\<varepsilon\>|)>=0,\<forall\>\<varepsilon\>\<gtr\>0,
    </equation*>

    where

    <\equation>
      <wide|Z|\<bar\>><rsub|n>\<assign\><frac|1|n><big|sum><rsub|j=1><rsup|n>Z<rsub|j><label|estimate>.
    </equation>
  </theorem>

  The Central Limit Theorem tells how well <math|\<mu\><rsub|n>> (a sample
  of <math|<wide|Z|\<bar\>><rsub|n>>) approximates <math|\<mu\>>:

  <\theorem>
    [<strong|Central Limit theorem>, Lindeberg]

    Let <math|Z<rsub|1>,\<cdots\>,Z<rsub|n>,\<cdots\>> be i.i.d. with mean
    value <math|\<mu\>> and variance <math|\<sigma\><rsup|2>>
    (<math|0\<less\>\<sigma\><rsup|2>\<less\>\<infty\>>); then

    <\equation*>
      <sqrt|n><around*|(|<wide|Z|\<bar\>><rsub|n>-\<mu\>|)>/\<sigma\><above|\<longrightarrow\>|<with|font|cal|L>><with|font|cal|N><around*|(|0,1|)>,
    </equation*>

    whose meaning is,

    <\equation*>
      lim<rsub|n\<rightarrow\>\<infty\>>Pro<around*|(|<sqrt|n><around*|(|<wide|Z|\<bar\>><rsub|n>-\<mu\>|)>/\<sigma\>\<leqslant\>z|)>=\<Phi\><around*|(|z|)>,
    </equation*>

    where <math|\<Phi\><around*|(|z|)>> is the distribution function of
    <math|<with|font|cal|N><around*|(|0,1|)>>, i.e.

    <\equation*>
      \<Phi\><around*|(|z|)>=<frac|1|<sqrt|2\<pi\>>><big|int><rsub|-\<infty\>><rsup|z>e<rsup|-t<rsup|2>/2>d
      t.
    </equation*>
  </theorem>

  Let <math|u<rsub|\<alpha\>/2>\<geqslant\>0> be defined by
  <math|\<Phi\><around*|(|u<rsub|\<alpha\>/2>|)>\<assign\>1-<frac|\<alpha\>|2>>.
  [e.g. <math|\<alpha\>=5%,u<rsub|\<alpha\>/2>\<simeq\>1.9600>]

  By the central limit theorem, since <math|\<mu\><rsub|n>> is a sample point of
  <math|<wide|Z|\<bar\>><rsub|n>>, the probability that <math|\<mu\><rsub|n>>
  satisfies <math|<around*|\||<sqrt|n><around*|(|\<mu\><rsub|n>-\<mu\>|)>/\<sigma\>|\|>\<leqslant\>u<rsub|\<alpha\>/2>>
  is <math|\<Phi\><around*|(|u<rsub|\<alpha\>/2>|)>-\<Phi\><around*|(|-u<rsub|\<alpha\>/2>|)>>,
  that is to say:

  <\equation>
    Pro<around*|(|\<mu\>\<in\> <around*|[|\<mu\><rsub|n>\<pm\><with|color|red|<frac|\<sigma\>|<sqrt|n>>u<rsub|\<alpha\>/2>>|]>|)>=1-\<alpha\>,if
    n is large enough<label|err-of-estimate>.
  </equation>

  Note that <math|1-\<alpha\>> is called the <em|confidence level>.

  <\note>
    (i) The relative-error

    <\equation*>
      \<varepsilon\><rsub|r,n>\<assign\><frac|\<sigma\>|\<mu\><sqrt|n>>u<rsub|\<alpha\>/2>\<sim\><with|font|cal|O><around*|(|n<rsup|-1/2>|)>
    </equation*>

    is <strong|independent of the dimension> of the problem, which is a
    quite appealing property!

    (ii) The error <strong|depends linearly on the standard deviation>
    <math|\<sigma\>>, so decreasing <math|\<sigma\>> improves the simulation
    efficiency. That is, one seeks a new random variable <math|<wide|Z|~>>
    with the same mean value, <math|E<around*|[|<wide|Z|~>|]>=E<around*|[|Z|]>>,
    but with smaller variance, <math|<wide|\<sigma\>|~>\<less\>\<sigma\>>.

    <em|Finding such new random variables is the key work of the Monte Carlo
    method.>

    (iii) In practice, the relative error cannot be used directly, since
    <math|\<mu\>> and <math|\<sigma\>> are unknown; one can use

    <\equation*>
      \<mu\><rsub|n>,\<sigma\><rsub|n>=<sqrt|s<rsub|n>>,with
      s<rsub|n>\<assign\><frac|1|n-1><big|sum><rsub|j><around*|(|z<rsub|j>-\<mu\><rsub|n>|)><rsup|2>=<frac|1|n-1><around*|(|<big|sum><rsub|j>z<rsub|j><rsup|2>-n\<mu\><rsub|n><rsup|2>|)>
    </equation*>

    instead of <math|\<mu\>> and <math|\<sigma\>>.

    (iv) One often uses the quantity <math|\<kappa\>\<assign\><frac|<sqrt|Var<around*|(|<wide|Z|\<bar\>><rsub|n>|)>>|E<around*|[|<wide|Z|\<bar\>><rsub|n>|]>>>
    to estimate the efficiency of the estimator; the squared version is

    <\equation>
      \<kappa\><rsup|2>=<frac|Var<around*|(|Z|)>|n
      E<around*|[|Z|]><rsup|2>><label|kappa-suqare>,
    </equation>

    whose sample is

    <\equation*>
      \<kappa\><rsub|n><rsup|2>=<frac|s<rsub|n>|n\<mu\><rsub|n><rsup|2>>.
    </equation*>

    Note that <math|\<kappa\><rsub|n>> is just the relative error without
    the coefficient <math|u<rsub|\<alpha\>/2>>.
  </note>

  <strong|The steps of the Monte Carlo method> are the following:

  1. Use the sampler to get many samples of <math|X>, i.e. a random
  sequence <math|<around*|{|x<rsub|1>,\<cdots\>,x<rsub|n>,\<cdots\>|}>>;

  2. Define the samples of <math|Z=h<around*|(|X|)>> by mapping
  <math|x<rsub|j>> to <math|h<around*|(|x<rsub|j>|)>\<equiv\>h<rsub|j>>;

  3. Do the calculation and estimation to get
  <math|<around*|(|\<mu\><rsub|n>,err<rsub|n>|)>>, where err is
  <math|\<kappa\>>, the relative error, or the absolute error:

  <\equation*>
    <choice|<tformat|<table|<row|<cell|\<mu\><rsub|n>>|<cell|\<assign\>>|<cell|<big|sum><rsub|j=1><rsup|n>h<rsub|j>/n,>>|<row|<cell|s<rsub|n>>|<cell|\<assign\>>|<cell|<frac|1|n-1><around*|(|<big|sum><rsub|j>h<rsub|j><rsup|2>-n\<mu\><rsub|n><rsup|2>|)>,>>|<row|<cell|\<sigma\><rsub|n>>|<cell|\<assign\>>|<cell|<sqrt|s<rsub|n>>,>>|<row|<cell|\<kappa\><rsub|n>>|<cell|\<assign\>>|<cell|<frac|\<sigma\><rsub|n>|\<mu\><rsub|n><sqrt|n>>=<frac|<sqrt|s<rsub|n>/n>|\<mu\><rsub|n>>.>>>>>
  </equation*>
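  The three steps can be sketched in Python (a minimal sketch; the function
  name mc_estimate and the test integrand e^u are my choices, not from the
  notes):

```python
import math
import random

def mc_estimate(sampler, h, n):
    # Step 1: draw n samples x_j; Step 2: map them to h_j = h(x_j);
    # Step 3: compute (mu_n, sigma_n, kappa_n) as defined above.
    hs = [h(sampler()) for _ in range(n)]
    mu_n = sum(hs) / n
    s_n = (sum(v * v for v in hs) - n * mu_n ** 2) / (n - 1)
    sigma_n = math.sqrt(max(s_n, 0.0))
    kappa_n = sigma_n / (mu_n * math.sqrt(n))   # relative-error measure
    return mu_n, sigma_n, kappa_n

random.seed(0)
# Example: mu = E[exp(U)], U ~ U[0,1], whose exact value is e - 1.
mu_n, sigma_n, kappa_n = mc_estimate(random.random, math.exp, 100_000)
```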

  <\question>
    The error is likewise only linearly convergent; can the convergence
    really not be accelerated?
  </question>

  Of course, the key point is how to get a sampler. Ordinary programming
  languages provide a sampler for the uniform distribution
  <math|<with|font|cal|U><rsub|<around*|[|0,1|]>>>. From this sampler, one
  can build many other samplers, which will be considered in this note.

  Here we give a simple example to show how the Monte Carlo method works.

  <\example>
    Estimate the integral <math|<big|int><rsub|0><rsup|1>\<cdots\><big|int><rsub|0><rsup|1>h<around*|(|<wide|x|\<vect\>>|)>d<rsup|D>
    x>.
  </example>

  <section|Random variable generator>

  The uniform distribution <math|<with|font|cal|U><rsub|<around*|[|0,1|]>>>
  is only one special random generator, but one can use it to build other
  random-number generators; this section gives some of them.

  <subsection|Famous distributions>

  This subsection gives some famous distributions:

  (1) <strong|Normal distribution>: <math|<with|font|cal|N><around*|(|\<mu\>,\<sigma\>|)>>

  <\theorem>
    Let <math|U<rsub|1>,U<rsub|2><above|\<sim\>|iid><with|font|cal|U><rsub|<around*|[|0,1|]>>>,
    and define

    <\equation*>
      <choice|<tformat|<table|<row|<cell|X<rsub|1>>|<cell|\<assign\>>|<cell|<sqrt|-2ln<around*|(|U<rsub|1>|)>>cos<around*|(|2\<pi\>U<rsub|2>|)>>>|<row|<cell|X<rsub|2>>|<cell|\<assign\>>|<cell|<sqrt|-2ln<around*|(|U<rsub|1>|)>>sin<around*|(|2\<pi\>U<rsub|2>|)>>>>>>.
    </equation*>

    Then

    <\equation*>
      X<rsub|1>,X<rsub|2> <above|\<sim\>|iid>
      <with|font|cal|N><around*|(|0,1|)>,
    </equation*>

    and so

    <\equation*>
      \<sigma\>X+\<mu\>\<sim\><with|font|cal|N><around*|(|\<mu\>,\<sigma\>|)>,if
      X\<sim\><with|font|cal|N><around*|(|0,1|)>.
    </equation*>
  </theorem>

  <\proof>
    Consider the joint probability of <math|<around*|(|X<rsub|1>,X<rsub|2>|)>>:

    <\equation*>
      \<rho\><around*|(|x<rsub|1>,x<rsub|2>|)>=<frac|1|2\<pi\>>e<rsup|-<around*|(|x<rsub|1><rsup|2>+x<rsub|2><rsup|2>|)>/2>,
    </equation*>

    and introduce polar coordinates <math|<around*|(|r,\<theta\>|)>>; the
    joint probability density of <math|<around*|(|R,\<Theta\>|)>>, whose
    sample is <math|<around*|(|r,\<theta\>|)>>, is

    <\equation*>
      \<rho\><around*|(|r,\<theta\>|)>=\<rho\><around*|(|x<rsub|1>,x<rsub|2>|)><around*|\||<tabular|<tformat|<table|<row|<cell|<frac|\<partial\>x<rsub|1>|\<partial\>r>>|<cell|<frac|\<partial\>x<rsub|2>|\<partial\>r>>>|<row|<cell|<frac|\<partial\>x<rsub|1>|\<partial\>\<theta\>>>|<cell|<frac|\<partial\>x<rsub|2>|\<partial\>\<theta\>>>>>>>|\|>=<frac|1|2\<pi\>>e<rsup|-r<rsup|2>/2>r.
    </equation*>

    This means <math|R,\<Theta\>> are also independent random variables whose
    pdfs are

    <\equation*>
      \<rho\><around*|(|r|)>=e<rsup|-r<rsup|2>/2>r,r\<in\><around*|(|0,+\<infty\>|)>,
    </equation*>

    and

    <\equation*>
      \<rho\><around*|(|\<theta\>|)>=<frac|1|2\<pi\>>,\<theta\>\<in\><around*|[|0,2\<pi\>|)>.
    </equation*>

    For <math|\<Theta\>>, this is <math|<with|font|cal|U><rsub|<around*|[|0,2\<pi\>|]>>>,
    while for <math|R> the inverse transform method (see the next
    subsection) gives <math|R=<sqrt|-2ln<around*|(|U|)>>>, which yields
    exactly the formulas stated in the theorem.
  </proof>
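  This construction is the Box\<#2013\>Muller transform; a minimal Python
  sketch, checking the sample mean and variance of the output:

```python
import math
import random

def box_muller():
    # Two independent U[0,1] draws give two independent N(0,1) draws.
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))   # 1 - u1 avoids log(0)
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

random.seed(2)
xs = [z for _ in range(50_000) for z in box_muller()]   # 100,000 samples
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean ** 2
```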

  (2) <strong|Uniform distribution on a unit-hypersphere <math|S<rsup|n>>>:
  <math|<with|font|cal|U><rsub|S<rsup|n>>>

  <\theorem>
    Let <math|X<rsub|0>,\<cdots\>,X<rsub|n><above|\<sim\>|iid><with|font|cal|N><around*|(|0,1|)>>,
    and <math|<around*|\<\|\|\>|<wide|X|\<vect\>>|\<\|\|\>>\<assign\><sqrt|<big|sum><rsub|i>X<rsub|i><rsup|2>>>,
    then

    <\equation*>
      <wide|Y|\<vect\>>\<assign\><frac|<wide|X|\<vect\>>|<around*|\<\|\|\>|<wide|X|\<vect\>>|\<\|\|\>>>\<equiv\><around*|(|<frac|X<rsub|0>|<around*|\<\|\|\>|<wide|X|\<vect\>>|\<\|\|\>>>,\<cdots\>,<frac|X<rsub|n>|<around*|\<\|\|\>|<wide|X|\<vect\>>|\<\|\|\>>>|)>\<sim\><with|font|cal|U><rsub|<around*|{|<wide|y|\<vect\>>:<around*|\<\|\|\>|<wide|y|\<vect\>>|\<\|\|\>>=1|}>>,
    </equation*>

    where <math|<around*|{|<wide|y|\<vect\>>:<around*|\<\|\|\>|<wide|y|\<vect\>>|\<\|\|\>>=1|}>>
    is the unit-hypersphere <math|S<rsup|n>>.
  </theorem>

  <\proof>
    The proof is hinted at in the proof of the above theorem: the joint
    density of the i.i.d. standard normals depends only on
    <math|<around*|\<\|\|\>|<wide|x|\<vect\>>|\<\|\|\>>>, so the direction
    <math|<wide|X|\<vect\>>/<around*|\<\|\|\>|<wide|X|\<vect\>>|\<\|\|\>>>
    is uniformly distributed on the sphere.
  </proof>

  (3) <strong|Uniform distribution on a unit-hyperball>:
  <math|<with|font|cal|U><rsub|B<rsup|n>>>.

  <\theorem>
    Let <math|X<rsub|1>,\<cdots\>,X<rsub|n><above|\<sim\>|iid><with|font|cal|N><around*|(|0,1|)>>,
    and <math|R=U<rsup|1/n>>, with <math|U\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>>,
    then

    <\equation*>
      <wide|Z|\<vect\>>\<assign\>R<wide|X|\<vect\>>/<around*|\<\|\|\>|<wide|X|\<vect\>>|\<\|\|\>>\<sim\><with|font|cal|U><rsub|<around*|{|<wide|z|\<vect\>>:<around*|\<\|\|\>|<wide|z|\<vect\>>|\<\|\|\>>\<leqslant\>1|}>>,
    </equation*>

    where <math|<around*|{|<wide|z|\<vect\>>:<around*|\<\|\|\>|<wide|z|\<vect\>>|\<\|\|\>>\<leqslant\>1|}>>
    is the unit-hyperball <math|B<rsup|n>>.
  </theorem>

  <\proof>
    We just need to show that <math|Pro<around*|{|<around*|\<\|\|\>|<wide|z|\<vect\>>|\<\|\|\>>\<leqslant\>r<rsub|1>|}>=r<rsub|1><rsup|n>>
    which is obvious, since <math|<around*|\<\|\|\>|<wide|z|\<vect\>>|\<\|\|\>>=r=u<rsup|1/n>>.
  </proof>
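  Both constructions can be sketched in Python; here the direction vector
  has n coordinates (so the sphere is S^(n-1) in R^n, a slightly different
  indexing from the theorems' X_0, ..., X_n), and the normals are drawn with
  random.gauss:

```python
import math
import random

def uniform_on_sphere(n):
    # Normalize a vector of i.i.d. N(0,1) coordinates: the Gaussian joint
    # density depends only on the norm, so the direction is uniform.
    x = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(t * t for t in x))
    return [t / norm for t in x]

def uniform_in_ball(n):
    # Uniform direction times radius R = U^(1/n), so P(||Z|| <= r) = r^n.
    y = uniform_on_sphere(n)
    r = random.random() ** (1.0 / n)
    return [r * t for t in y]

random.seed(3)
pts = [uniform_in_ball(3) for _ in range(20_000)]
# In 3 dimensions P(||Z|| <= 1/2) = (1/2)^3 = 1/8.
frac = sum(1 for p in pts if sum(t * t for t in p) <= 0.25) / len(pts)
```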

  (4) <strong|Cosine distribution> on <math|<around*|[|\<mu\>-<frac|\<Delta\>|2>,\<mu\>+<frac|\<Delta\>|2>|]>>:
  <math|<with|font|cal|C><around*|(|\<mu\>,\<Delta\>|)>>.

  The pdf is

  <\equation*>
    f<around*|(|X|)>=<frac|\<pi\>|2\<Delta\>>cos<around*|(|<frac|\<pi\>|\<Delta\>><around*|(|X-\<mu\>|)>|)>.
  </equation*>

  This distribution can be obtained by the inverse transform method of the
  next subsection, where the inverse of the cdf is

  <\equation*>
    F<rsup|-1><around*|(|U|)>=\<mu\>+<frac|\<Delta\>|\<pi\>>arcsin<around*|(|2U-1|)>.
  </equation*>

  <subsection|Inverse transform method>

  If one knows the cdf <math|F<around*|(|x|)>> of <math|X>, then one can get
  a generator as follows:

  <\theorem>
    Let <math|F:\<Omega\>\<rightarrow\><around*|[|0,1|]>,x\<mapsto\>F<around*|(|x|)>>
    be <strong|non-decreasing>, define generalized inverse of <math|F> as

    <\equation*>
      F<rsup|-1><around*|(|u|)>\<assign\>inf<around*|{|x:F<around*|(|x|)>\<geqslant\>u|}>.
    </equation*>

    Then, from the uniform distribution <math|U\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>>
    one gets a random variable <math|X\<assign\>F<rsup|-1><around*|(|U|)>>
    distributed according to <math|F>, i.e.

    <\equation*>
      F<rsup|-1><around*|(|U|)>\<sim\>F\<Leftrightarrow\>U\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>.
    </equation*>
  </theorem>

  <\example>
    Estimate <math|\<pi\>=<big|int><rsub|0><rsup|1>4<sqrt|1-x<rsup|2>>d x>.

    We take pdf <math|\<rho\><around*|(|x|)>\<assign\><sqrt|2>-x,x\<in\><around*|[|0,<sqrt|2>|]>>
    (tangent to the unit-circle) instead of the uniform distribution
    <math|<with|font|cal|U><rsub|<around*|[|0,1|]>>>. Then
    <math|\<pi\>=<big|int><rsub|0><rsup|1>4<frac|<sqrt|1-x<rsup|2>>|<sqrt|2>-x>\<rho\><around*|(|x|)>d
    x>.

    cdf <math|F<around*|(|x|)>=<sqrt|2>x-<frac|x<rsup|2>|2>> gives
    <math|F<rsup|-1><around*|(|u|)>=<sqrt|2><around*|(|1-<sqrt|1-u>|)>>.

    So the problem is to estimate the mean of <math|h<around*|(|X|)>\<assign\>4<frac|<sqrt|1-X<rsup|2>>|<sqrt|2>-X>>
    for <math|X\<in\><around*|[|0,1|]>> (with <math|h\<assign\>0> for
    <math|X\<gtr\>1>), where

    <\equation*>
      X=<sqrt|2><around*|(|1-<sqrt|1-U>|)>,U\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>.
    </equation*>

    In practice this choice converges considerably faster than the ordinary
    method <math|h<around*|(|x|)>=4<sqrt|1-x<rsup|2>>> with <math|x>
    uniformly distributed, since the pdf roughly follows the shape of the
    integrand.
  </example>
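  A sketch of this example in Python; as noted above, h is set to 0 on
  (1, sqrt(2)], where the integrand vanishes:

```python
import math
import random

SQRT2 = math.sqrt(2.0)

def sample_x():
    # Inverse transform for the pdf rho(x) = sqrt(2) - x on [0, sqrt(2)]:
    # X = F^{-1}(U) = sqrt(2) * (1 - sqrt(1 - U)).
    return SQRT2 * (1.0 - math.sqrt(1.0 - random.random()))

def h(x):
    # The integrand lives on [0, 1]; rho's support extends to sqrt(2),
    # so h is taken to be 0 for x > 1.
    if x > 1.0:
        return 0.0
    return 4.0 * math.sqrt(1.0 - x * x) / (SQRT2 - x)

random.seed(4)
n = 200_000
est = sum(h(sample_x()) for _ in range(n)) / n   # estimates pi
```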

  <subsection|Composition method>

  If the pdf is of the form

  <\equation*>
    \<rho\><around*|(|x|)>=p<rsub|1>\<rho\><rsub|1><around*|(|x|)>+\<cdots\>+p<rsub|K>\<rho\><rsub|K><around*|(|x|)>=<big|sum><rsub|j>p<rsub|j>\<rho\><rsub|j>
  </equation*>

  with <math|<big|sum><rsub|j>p<rsub|j>=1> and the <math|\<rho\><rsub|j>>
  all pdfs, and if moreover we can sample each <math|\<rho\><rsub|j>>, then
  we can sample <math|X> by choosing <math|\<rho\><rsub|1>> with
  probability <math|p<rsub|1>>, <math|\<rho\><rsub|2>> with probability
  <math|p<rsub|2>>, etc.

  <\example>
    The pdf of the random variable is <math|\<rho\><around*|(|x|)>=<frac|2|3>x+2x<rsup|2>=<frac|1|3>\<times\>2x+<frac|2|3>\<times\>3x<rsup|2>>,
    where <math|p<rsub|1>=<frac|1|3>,\<rho\><rsub|1>=2x;p<rsub|2>=<frac|2|3>,\<rho\><rsub|2>=3x<rsup|2>>.
    Two steps:

    (i) sample <math|u\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>>;

    (ii) if <math|u\<in\><around*|[|0,<frac|1|3>|]>>, choose
    <math|\<rho\><rsub|1>> to sample <math|x>; if
    <math|u\<in\><around*|[|<frac|1|3>,<frac|1|3>+<frac|2|3>|]>>, choose
    <math|\<rho\><rsub|2>> to sample <math|x>.
  </example>

  <\note>
    The sampling for a composition pdf can be improved by the stratified
    method (see the next section).
  </note>
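  The two steps of the example can be sketched in Python; the component
  samplers use the inverse transform of the previous subsection (cdf x^2
  for 2x gives sqrt(U); cdf x^3 for 3x^2 gives U^(1/3)):

```python
import random

def sample_composition():
    # Pick a component with probability p_j, then sample that component.
    if random.random() < 1.0 / 3.0:
        return random.random() ** 0.5          # rho_1 = 2x, X = sqrt(U)
    return random.random() ** (1.0 / 3.0)      # rho_2 = 3x^2, X = U^(1/3)

random.seed(5)
n = 100_000
mean = sum(sample_composition() for _ in range(n)) / n
# E[X] = (1/3)*(2/3) + (2/3)*(3/4) = 13/18 under rho.
```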

  <subsection|Acceptance-Rejection method>

  Suppose our <em|target> pdf <math|\<rho\><around*|(|x|)>> is controlled by
  another pdf <math|g<around*|(|x|)>> as

  <\equation*>
    \<rho\><around*|(|x|)>\<leqslant\>C g<around*|(|x|)>.
  </equation*>

  The pdf <math|g<around*|(|x|)>>, called the <em|proposal> pdf, is one we
  know how to sample. Then there is a method to sample the target pdf
  <math|\<rho\>>.

  <\named-algorithm|[Acceptance-Rejection]>
    1. Generate <math|x> from <math|g>;

    2. Generate <math|y> from <math|<with|font|cal|U><rsub|<around*|[|0,C
    g<around*|(|x|)>|]>>>;

    3. If <math|y\<leqslant\>\<rho\><around*|(|x|)>>, accept <math|x>;
    otherwise return to step 1.
  </named-algorithm>

  <\theorem>
    The random variable generated by [Acceptance-Rejection] algorithm has the
    desired pdf <math|\<rho\><around*|(|x|)>>.
  </theorem>

  <\proof>
    Denote <math|<with|font|cal|A>=<around*|{|<around*|(|x,y|)>:y\<leqslant\>C
    g<around*|(|x|)>|}>> and <math|<with|font|cal|B>=<around*|{|<around*|(|x,y|)>:y\<leqslant\>\<rho\><around*|(|x|)>|}>>.

    First, we claim that the point <math|<around*|(|x,y|)>> generated by
    steps 1, 2 is uniformly distributed in the region <math|<with|font|cal|A>>.

    The probability that the point <math|<around*|(|x,y|)>> falls in
    <math|<around*|[|d x|]>\<times\><around*|[|d y|]>> is

    <\eqnarray>
      <tformat|<table|<row|<cell|>|<cell|>|<cell|P<around*|{|<around*|(|x,y|)>\<in\><around*|[|d
      x|]>\<times\><around*|[|d y|]>|}>>>|<row|<cell|>|<cell|=>|<cell|P<around*|{|x\<in\><around*|[|d
      x|]>|}>\<times\>P<around*|{|y\<in\><around*|[|d
      y|]><around*|\|||\<nobracket\>>x\<in\><around*|[|d
      x|]>|}>>>|<row|<cell|>|<cell|=>|<cell|g<around*|(|x|)>d
      x\<times\><frac|d y|C g*<around*|(|x|)>>=<frac|d x d
      y|C>>>|<row|<cell|>|<cell|=>|<cell|<frac|d x d
      y|Area<around*|(|<with|font|cal|A>|)>>,>>>>
    </eqnarray>

    which means the pdf of <math|<around*|(|x,y|)>> is
    <math|1/Area<around*|(|<with|font|cal|A>|)>>.

    Second, suppose we obtain an accepted <math|x<rsup|\<ast\>>>, i.e.
    <math|<around*|(|x<rsup|\<ast\>>,y<rsup|\<ast\>>|)>> is generated by
    steps 1, 2 and sits in the region <math|<with|font|cal|B>>. Since it is
    uniform in <math|<with|font|cal|A>>, it is also uniform in
    <math|<with|font|cal|B>>, whose area is <math|<big|int>\<rho\><around*|(|x|)>d
    x=1>, so the pdf of <math|<around*|(|x<rsup|\<ast\>>,y<rsup|\<ast\>>|)>>
    is <math|<frac|1|Area<around*|(|<with|font|cal|B>|)>>=1>. Now the
    marginal pdf of <math|x<rsup|\<ast\>>> is obtained as

    <\equation*>
      <big|int><rsub|0><rsup|\<rho\><around*|(|x<rsup|\<ast\>>|)>>1d
      y=\<rho\><around*|(|x<rsup|\<ast\>>|)>
    </equation*>

    which is just the target pdf.
  </proof>

  <\note>
    (i) The efficiency of the acceptance-rejection method depends on the
    acceptance ratio, which equals <math|<frac|Area<around*|(|<with|font|cal|B>|)>|Area<around*|(|<with|font|cal|A>|)>>=<frac|1|C>>;
    hence <math|C> (which is necessarily <math|\<geqslant\>1>) should not be
    large.

    (ii) For multivariate distributions, the method above can be used
    directly.

    (iii) However, for multivariate distributions of dimension <math|n>,
    the constant <math|C> typically grows rapidly with <math|n>, so the
    acceptance ratio <math|1/C\<rightarrow\>0>, which makes the method
    impractical in high dimensions!
  </note>
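  A minimal sketch of the algorithm in Python, for an assumed target
  rho(x) = 3x^2 on [0,1] with the uniform proposal g = 1 and C = 3 (my
  example, so the acceptance ratio is 1/3):

```python
import random

def ar_sample(rho, g_sampler, g_pdf, C):
    # Step 1: x ~ g; Step 2: y ~ U[0, C*g(x)]; Step 3: accept x if
    # y <= rho(x), otherwise go back to step 1.
    while True:
        x = g_sampler()
        y = random.uniform(0.0, C * g_pdf(x))
        if y <= rho(x):
            return x

random.seed(6)
n = 50_000
xs = [ar_sample(lambda t: 3.0 * t * t, random.random, lambda t: 1.0, 3.0)
      for _ in range(n)]
mean = sum(xs) / n   # E[X] = 3/4 under rho(x) = 3x^2
```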

  \;

  <section|Controlling the variance>

  We have seen that the relative/absolute error is related to the variance
  <math|Var<around*|(|X|)>> of the random variable. To make the Monte Carlo
  method useful,
  one needs to reduce the variance. Variance reduction can be viewed as a
  means of utilizing known information about the model. Generally, the more
  we know about the system, the more effective is the variance reduction.

  <subsection|Common and Antithetic random variables>

  (i) For simplicity, first consider the mean <math|E<around*|[|X-Y|]>>,
  where <math|X\<sim\>F> and <math|Y\<sim\>G>. One can use estimator

  <\equation*>
    H=X-Y,
  </equation*>

  and <math|X=F<rsup|-1><around*|(|U<rsub|1>|)>,Y=G<rsup|-1><around*|(|U<rsub|2>|)>,U<rsub|1>,U<rsub|2>\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>>.
  But this is not the best.

  The common/antithetic random variables method rests on the observation
  that <math|X,Y> need <em|not be independent>, since all we need is
  <math|E<around*|[|X-Y|]>>.

  By

  <\equation*>
    Var<around*|(|X-Y|)>=Var<around*|(|X|)>+Var<around*|(|Y|)>-2Cov<around*|(|X,Y|)>,
  </equation*>

  one can make the covariance <math|Cov<around*|(|X,Y|)>> as large as
  possible. It is shown that using the <em|common variables>, i.e.

  <\equation*>
    X=F<rsup|-1><around*|(|U|)>,Y=G<rsup|-1><around*|(|U|)>,U\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>,
  </equation*>

  maximizes the covariance <math|Cov<around*|(|X,Y|)>>, and hence minimizes
  <math|Var<around*|(|X-Y|)>>.

  Similarly, <math|Var<around*|(|X+Y|)>> can be minimized by <em|antithetic
  variables>:

  <\equation*>
    X=F<rsup|-1><around*|(|U|)>,Y=G<rsup|-1><around*|(|1-U|)>,U\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>.
  </equation*>

  (ii) Now let us estimate <math|E<around*|[|H<rsub|1><around*|(|X|)>-H<rsub|2><around*|(|Y|)>|]>>,
  where <math|H<rsub|1>> and <math|H<rsub|2>> are monotonic in the <em|same
  direction (or opposite directions)>; the minimum of
  <math|Var<around*|(|H<rsub|1><around*|(|X|)>-H<rsub|2><around*|(|Y|)>|)>>
  is also attained at the above common variables (antithetic variables).
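  A sketch of the antithetic-variables idea in Python for a single monotone
  function (the choice h(u) = e^u, whose mean over U[0,1] is e - 1, is
  mine): pairing U with 1 - U makes h(U) and h(1 - U) negatively
  correlated, so the averaged estimator has a much smaller variance.

```python
import math
import random

def antithetic_mean(h, n_pairs):
    # Each pair (U, 1-U) contributes the average of h at both points;
    # for monotone h the two values are negatively correlated.
    total = 0.0
    for _ in range(n_pairs):
        u = random.random()
        total += 0.5 * (h(u) + h(1.0 - u))
    return total / n_pairs

random.seed(7)
est = antithetic_mean(math.exp, 50_000)   # estimates E[e^U] = e - 1
```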

  <subsection|Control variables>

  We want to estimate <math|E<around*|[|X|]>>; one can introduce a random
  variable <math|C> with known mean value <math|r>, so that
  <math|X<rsub|\<alpha\>>\<assign\>X-\<alpha\><around*|(|C-r|)>> has the
  same mean value as <math|X>. Consider the variance of
  <math|X<rsub|\<alpha\>>>,

  <\equation*>
    Var<around*|(|X<rsub|\<alpha\>>|)>=Var<around*|(|X|)>-2\<alpha\>Cov<around*|(|X,C|)>+\<alpha\><rsup|2>Var<around*|(|C|)>.
  </equation*>

  When

  <\equation*>
    \<alpha\>=\<alpha\><rsup|\<ast\>>=<frac|Cov<around*|(|X,C|)>|Var<around*|(|C|)>>,
  </equation*>

  the variance takes minimum,

  <\equation*>
    Var<around*|(|X<rsub|\<alpha\><rsup|\<ast\>>>|)>=<around*|(|1-\<varrho\><rsub|X
    C><rsup|2>|)>Var<around*|(|X|)>.
  </equation*>

  Hence, if one can find a control variable <math|C> that makes
  <math|\<varrho\><rsub|X C>\<rightarrow\>1>, the variance of
  <math|X<rsub|\<alpha\><rsup|\<ast\>>>> becomes quite small. In practice,
  the value <math|\<alpha\><rsup|\<ast\>>> can be estimated from samples,
  using

  <\equation*>
    Cov<around*|(|X,C|)>=E<around*|[|<around*|(|X-\<mu\>|)><around*|(|C-r|)>|]>=E<around*|[|X
    C|]>-r E<around*|[|X|]>.
  </equation*>

  <\example>
    Estimate <math|\<pi\>=<big|int><rsub|0><rsup|1>4<sqrt|1-x<rsup|2>>d x>.

    The random variable is <math|X\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>>,
    with <math|H<around*|(|X|)>=4<sqrt|1-X<rsup|2>>>. Introduce the control
    variable <math|C=1-X>, whose mean value is <math|r=1/2>. Note that
    <math|Var<around*|(|C|)>=<frac|1|12>>.
  </example>
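  A sketch of this example in Python (with the factor 4 included so the
  estimate is pi itself); alpha* = Cov(H, C)/Var(C) is estimated from the
  same samples, as suggested above (the function name is mine):

```python
import math
import random

def control_variate_pi(n):
    # H = 4*sqrt(1 - X^2) with X ~ U[0,1]; control variable C = 1 - X
    # with known mean r = 1/2.
    xs = [random.random() for _ in range(n)]
    hs = [4.0 * math.sqrt(1.0 - x * x) for x in xs]
    cs = [1.0 - x for x in xs]
    h_bar = sum(hs) / n
    c_bar = sum(cs) / n
    cov = sum((h - h_bar) * (c - c_bar) for h, c in zip(hs, cs)) / (n - 1)
    var_c = sum((c - c_bar) ** 2 for c in cs) / (n - 1)
    alpha = cov / var_c                     # sample estimate of alpha*
    r = 0.5
    return sum(h - alpha * (c - r) for h, c in zip(hs, cs)) / n

random.seed(8)
est = control_variate_pi(100_000)   # estimates pi
```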

  <\remark>
    It seems that the control variable method does not improve the
    efficiency much. This can be seen from the efficiency function

    <\equation*>
      \<kappa\><rsub|\<alpha\><rsup|\<ast\>>>=<frac|<sqrt|Var<around*|(|X<rsub|\<alpha\><rsup|\<ast\>>>|)>>|<sqrt|n>\<mu\><rsub|X<rsub|\<alpha\><rsup|\<ast\>>>>>=<around*|(|1-\<rho\><rsub|X
      C><rsup|2>|)><rsup|1/2>\<kappa\>.
    </equation*>

    If <math|\<rho\><rsub|X C>=1-\<varepsilon\>> with small
    <math|\<varepsilon\>>, then <math|<around*|(|1-\<rho\><rsub|X
    C><rsup|2>|)><rsup|1/2>=<sqrt|2\<varepsilon\>><around*|(|1+<with|font|cal|O><around*|(|\<varepsilon\>|)>|)>>,
    which approaches zero slowly.
  </remark>

  <subsection|Stratified sampling>

  In this subsection, we consider the stratified sampling method, which
  hardly changes the random stream, but merely \Pre-arranges the order\Q.

  Split the variable <math|X>'s space as <math|\<Omega\>=\<Omega\><rsub|1>\<cup\>\<cdots\>\<cup\>\<Omega\><rsub|K>>
  with disjoint <math|\<Omega\><rsub|j>,j=1,\<cdots\>,K>. Suppose the
  probability that <math|X> sits in <math|\<Omega\><rsub|j>> is
  <math|p<rsub|j>\<gtr\>0>, with <math|<big|sum><rsub|j=1><rsup|K>p<rsub|j>=1>.
  (Obviously, <math|<around*|{|<around*|(|p<rsub|j>,\<Omega\><rsub|j>|)>,j=1,\<cdots\>K|}>>
  is new information about <math|X>.) Introduce new random variables
  <math|X<rsub|j>>, whose value space is <math|\<Omega\><rsub|j>> and whose
  pdf is <math|\<rho\><rsub|j><around*|(|x<rsub|j>|)>=<frac|1|p<rsub|j>>\<rho\><around*|(|x<rsub|j>|)>,x<rsub|j>\<in\>\<Omega\><rsub|j>>,
  where <math|x<rsub|j>,x> are the samples of <math|X<rsub|j>,X>
  respectively.

  It is easy to see that

  <\equation*>
    E<around*|[|Z\<equiv\>H<around*|(|X|)>|]>=<big|sum><rsub|j=1><rsup|K>p<rsub|j>E<around*|[|H<around*|(|X<rsub|j>|)>|]>,\<mu\>=<big|sum>p<rsub|j>\<mu\><rsub|j>.
  </equation*>

  This indicates a new sampling framework: take <math|K> <em|parallel
  samplings>, where <math|x<rsub|j>> belongs to the <math|j>th sampling
  flow whose mean is <math|\<mu\><rsub|j>>. By collecting
  <math|<around*|(|\<mu\><rsub|1>,\<cdots\>,\<mu\><rsub|K>|)>>, one
  recovers <math|\<mu\>> as the mean of the random variable:

  <\equation*>
    <wide|Z|\<bar\>><rsub|<around*|{|n<rsub|j>|}>>\<assign\><big|sum><rsub|j=1><rsup|K>p<rsub|j><wide|<around*|(|Z<rsub|j>|)>|\<bar\>><rsub|n<rsub|j>>\<equiv\><big|sum><rsub|j=1><rsup|K>p<rsub|j><frac|1|n<rsub|j>><big|sum><rsub|l=1><rsup|n<rsub|j>>H<around*|(|X<rsub|j,l>|)>,
  </equation*>

  where <math|X<rsub|j,l>> are copies of <math|X<rsub|j>\<in\>\<Omega\><rsub|j>>.

  So how should one choose <math|n<rsub|j>> for each
  <math|\<Omega\><rsub|j>>? The best choice minimizes the variance
  <math|Var<around*|(|<wide|Z|\<bar\>><rsub|<around*|{|n<rsub|j>|}>>|)>>;
  the result is the following theorem:

  <\theorem>
    [<strong|Stratified sampling>] The minimum of the variance
    <math|Var<around*|(|<wide|Z|\<bar\>><rsub|<around*|{|n<rsub|j>|}>>|)>>
    is attained at

    <\equation*>
      n<rsub|j>=n<frac|p<rsub|j>\<sigma\><rsub|j>|<big|sum>p<rsub|j>\<sigma\><rsub|j>>,
    </equation*>

    where <math|\<sigma\><rsub|j>=<sqrt|Var<around*|(|H<around*|(|X<rsub|j>|)>|)>>,n=<big|sum>n<rsub|j>>,
    and the minimum value is

    <\equation*>
      Var<around*|(|<wide|Z|\<bar\>><rsub|<around*|{|n<rsub|j>|}>>|)>=<frac|1|n><around*|(|<big|sum><rsub|j=1><rsup|K>p<rsub|j>\<sigma\><rsub|j>|)><rsup|2>.
    </equation*>
  </theorem>

  In practice, since the <math|\<sigma\><rsub|j>> are unknown, one needs to
  pre-compute some samples to estimate them; then the above method works.
  Note that taking <math|n<rsub|j>\<propto\>p<rsub|j>> (proportional
  allocation) is also better than un-stratified sampling.

  <\example>
    Integration in 1-dimensional space, <math|<big|int><rsub|0><rsup|1>h<around*|(|x|)>d
    x>.

    Obviously, one can split <math|<around*|[|0,1|]>=\<cup\><rsub|j=0><rsup|K-1><around*|[|<frac|j|K>,<frac|j+1|K>|]>\<backassign\>\<cup\><rsub|j=0><rsup|K-1>\<Omega\><rsub|j>>,
    with <math|p<rsub|j>=<frac|1|K>>.
  </example>
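  A sketch of this 1-dimensional example in Python, using the simpler
  proportional allocation n_j = n/K mentioned above (so the sigma_j need
  not be pre-estimated); the integrand e^x is my choice:

```python
import math
import random

def stratified_mean(h, K, n):
    # K equal strata Omega_j = [j/K, (j+1)/K] with p_j = 1/K and
    # proportional allocation n_j = n/K samples per stratum.
    n_j = n // K
    total = 0.0
    for j in range(K):
        lo = j / K
        s = sum(h(lo + random.random() / K) for _ in range(n_j))
        total += (1.0 / K) * (s / n_j)   # p_j * (stratum sample mean)
    return total

random.seed(9)
est = stratified_mean(math.exp, 100, 10_000)   # estimates e - 1
```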

  <subsection|Importance sampling>

  For the target quantity

  <\equation*>
    E<rsub|f><around*|[|H<around*|(|X|)>|]>\<assign\><big|int><rsub|\<Omega\>>
    H<around*|(|x|)>f<around*|(|x|)>d x,
  </equation*>

  where <math|f<around*|(|X|)>> is the pdf of <math|X> in <math|\<Omega\>>,
  one can introduce a new distribution with pdf <math|g<around*|(|X|)>> of
  <math|X>, and re-express above quantity as the mean of a new function

  <\equation*>
    <wide|H|~><around*|(|X|)>\<assign\>H<around*|(|X|)>W<around*|(|X|)>,W<around*|(|X|)>\<assign\><frac|f<around*|(|X|)>|g<around*|(|X|)>>,
  </equation*>

  since

  <\equation*>
    E<rsub|f><around*|[|H<around*|(|X|)>|]>=<big|int><rsub|\<Omega\>>
    H<around*|(|x|)>f<around*|(|x|)>d x=<big|int><rsub|\<Omega\>>d x
    H<around*|(|x|)><frac|f<around*|(|x|)>|g<around*|(|x|)>>g<around*|(|x|)>=E<rsub|g><around*|[|<wide|H|~><around*|(|X|)>|]>.
  </equation*>

  <\note>
    Since <math|E<rsub|g><around*|(|W<around*|(|X|)>|)>=<big|int><rsub|\<Omega\>>g<around*|(|x|)><frac|f<around*|(|x|)>|g<around*|(|x|)>>d
    x=1>, one can rewrite

    <\equation*>
      E<rsub|f><around*|[|H<around*|(|X|)>|]>=<frac|E<rsub|g><around*|[|H<around*|(|X|)>W<around*|(|X|)>|]>|E<rsub|g><around*|[|W<around*|(|X|)>|]>>=<frac|E<rsub|g><around*|[|H<around*|(|X|)>w<around*|(|X|)>|]>|E<rsub|g><around*|[|w<around*|(|X|)>|]>>,
    </equation*>

    where <math|w<around*|(|X|)>=c W<around*|(|X|)>> for some constant
    <math|c>. The advantage of using <math|w> instead of <math|W> is that one
    does not need to normalize <math|g<around*|(|X|)>>, which in practice can
    be hard to do. This will be needed in the MCMC method.
  </note>
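  A minimal numerical sketch of the self-normalized estimator above
  (illustrative choices, not from the text: target <math|f=N<around*|(|0,1|)>>
  known only up to its normalizing constant,
  <math|H<around*|(|x|)>=x<rsup|2>> so that
  <math|E<rsub|f><around*|[|H|]>=1>, and proposal
  <math|g=N<around*|(|0,2<rsup|2>|)>>):

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  H = lambda x: x * x                        # E_f[H] = 1 for f = N(0,1)
  f_unnorm = lambda x: np.exp(-0.5 * x * x)  # target pdf WITHOUT its constant
  g_pdf = lambda x: np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

  x = rng.normal(0.0, 2.0, 100_000)          # samples from g
  w = f_unnorm(x) / g_pdf(x)                 # unnormalized weights w = c * W
  est = np.sum(H(x) * w) / np.sum(w)         # self-normalized estimator
  ```

  The unknown constant of <math|f> cancels in the ratio, which is exactly the
  point of the note above.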

  For <math|g<around*|(|X|)>> to be better than <math|f<around*|(|X|)>>, one
  needs <math|Var<rsub|g><around*|(|<wide|H|~><around*|(|X|)>|)>\<less\>Var<rsub|f><around*|(|H<around*|(|X|)>|)>>.

  Note one can prove that the best <math|g<around*|(|X|)>> is

  <\equation*>
    g<rsup|\<ast\>><around*|(|X|)>=<frac|<around*|\||H<around*|(|X|)>|\|>f<around*|(|X|)>|<big|int><rsub|\<Omega\>><around*|\||H<around*|(|x|)>|\|>f<around*|(|x|)>d
    x>.
  </equation*>

  [If <math|H<around*|(|x|)>\<geqslant\>0>, it is easy to check that
  <math|Var<rsub|g<rsup|\<ast\>>><around*|(|<wide|H|~><around*|(|X|)>|)>=0>,
  which is of course optimal.]

  However, sampling from such a <math|g<rsup|\<ast\>><around*|(|X|)>> is as
  hard as computing the target quantity itself, so in practice one looks for a
  <math|g<around*|(|X|)>> that is as close to <math|g<rsup|\<ast\>>> as
  possible. This approach of finding a good <math|g<around*|(|X|)>> is called
  the importance sampling method, and the quantity <math|W<around*|(|X|)>> is
  called the weight of the sampling.

  A possible framework is the following: suppose we have a class of
  distributions parameterized by <math|v>,

  <\equation*>
    <with|font|cal|F>=<around*|{|f<rsub|v><around*|(|\<cdot\>|)>\<equiv\>f<around*|(|\<cdot\>;v|)>:v\<in\><with|font|cal|V>|}>,
  </equation*>

  where <math|<with|font|cal|V>> is the parameter space, and <math|v> is
  called the <em|reference parameter vector>. We also suppose the original
  distribution <math|f> belongs to <math|<with|font|cal|F>>, with parameter
  denoted <math|u>, i.e. <math|f\<equiv\>f<rsub|u>>.

  The question is which <math|f<rsub|v>> is the best distribution in this
  class. There are two methods: the variance-minimization method and the
  cross-entropy method.

  <subsubsection|Variance-minimization method (VM method)>

  The best distribution in this class of distributions should make the
  variance <math|Var<rsub|v><around*|(|<wide|H|~><around*|(|X|)>|)>> minimal,
  i.e.,

  <\equation*>
    min<rsub|v\<in\><with|font|cal|V>>Var<rsub|v><around*|(|H<around*|(|X|)>W<around*|(|X;u,v|)>|)>,W<around*|(|X;u,v|)>\<assign\><frac|f<rsub|u><around*|(|X|)>|f<rsub|v><around*|(|X|)>>,
  </equation*>

  which is equivalent to the optimization problem
  <math|min<rsub|v\<in\><with|font|cal|V>>V<around*|(|v|)>>, where
  <math|V<around*|(|v|)>\<assign\>E<rsub|v><around*|[|H<rsup|2><around*|(|X|)>W<rsup|2><around*|(|X;u,v|)>|]>=E<rsub|u><around*|[|H<rsup|2><around*|(|X|)>W<around*|(|X;u,v|)>|]>>,
  i.e.

  <\equation*>
    min<rsub|v\<in\><with|font|cal|V>> E<rsub|u><around*|[|H<around*|(|X|)><rsup|2>W<around*|(|X;u,v|)>|]>.
  </equation*>

  How to solve this optimization problem?

  xxx
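  One hypothetical way to attack the problem numerically (an illustration, not
  from the text): estimate <math|V<around*|(|v|)>=E<rsub|u><around*|[|H<rsup|2><around*|(|X|)>W<around*|(|X;u,v|)>|]>>
  by Monte Carlo with samples from <math|f<rsub|u>>, then minimize over a grid
  of <math|v>. Here the family is <math|f<rsub|v>=N<around*|(|v,1|)>> with
  <math|u=0> and <math|H> a rare-event indicator, so
  <math|W<around*|(|x;u,v|)>=exp<around*|(|v<rsup|2>/2-v x|)>>:

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  H = lambda x: (x > 2.0).astype(float)   # rare-event indicator (illustrative H)
  x = rng.normal(0.0, 1.0, 200_000)       # samples from f_u = N(0,1)

  # for unit-variance normals, W(x; u=0, v) = phi(x)/phi(x - v) = exp(v^2/2 - v*x)
  def V_hat(v):
      return np.mean(H(x) ** 2 * np.exp(0.5 * v * v - v * x))

  grid = np.linspace(0.0, 4.0, 81)        # crude grid search over v
  v_best = grid[np.argmin([V_hat(v) for v in grid])]
  ```

  The minimizer shifts the sampling distribution toward the rare region
  <math|x\<gtr\>2>, as one would expect.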

  <subsubsection|Cross-entropy method (CE method)>

  We know the best pdf is <math|g<rsup|\<ast\>><around*|(|X|)>\<propto\><around*|\||H<around*|(|X|)>|\|>f<around*|(|X|)>>,
  so the best pdf in <math|<with|font|cal|F>> should be the one closest to
  <math|g<rsup|\<ast\>><around*|(|X|)>>. The cross-entropy measures the
  \Pdistance\Q between <math|g<rsup|\<ast\>>> and
  <math|f<rsub|v>\<in\><with|font|cal|F>>.

  <\definition>
    [<strong|Kullback-Leibler Cross-Entropy>] The KL Cross-Entropy between
    pdfs <math|g> and <math|h> is

    <\equation*>
      <with|font|cal|D><around*|(|g,h|)>\<assign\>E<rsub|g><around*|[|ln<frac|g<around*|(|X|)>|h<around*|(|X|)>>|]>=<big|int><rsub|\<Omega\>>d
      x g<around*|(|x|)>ln g<around*|(|x|)>-<big|int><rsub|\<Omega\>>d x
      g<around*|(|x|)>ln h<around*|(|x|)>.
    </equation*>
  </definition>

  Note that <math|<with|font|cal|D><around*|(|g,h|)>\<geqslant\>0>, with
  equality if and only if <math|g=h>.

  So what we seek in <math|<with|font|cal|F>> is the pdf <math|f<rsub|v>> that
  solves

  <\equation*>
    min<rsub|v\<in\><with|font|cal|V>><with|font|cal|D><around*|(|g<rsup|\<ast\>>,f<rsub|v>|)>.
  </equation*>

  Since <math|g<rsup|\<ast\>>\<propto\><around*|\||H<around*|(|X|)>|\|>f<around*|(|X|)>>,
  the above optimization problem is equivalent to the maximization problem:

  <\equation*>
    max<rsub|v\<in\><with|font|cal|V>>D<around*|(|v|)>=max<rsub|v\<in\><with|font|cal|V>>E<rsub|u><around*|[|<around*|\||H<around*|(|X|)>|\|>
    ln f<rsub|v><around*|(|X|)>|]>.
  </equation*>

  Note that the pdf derived by the VM method should be better than that of the
  CE method, but the advantage of the CE method is that its optimization
  problem can often be solved <em|analytically>.

  xxx

  <section|Markov Chain Monte Carlo (MCMC)>

  In this section, we show that a Markov chain can generate samples from an
  arbitrary distribution. First we need some concepts about Markov chains.

  <subsection|Markov Chain>

  A Markov chain is a stochastic process (i.e. a discrete/continuous-time
  sequence whose elements are random variables) whose future is conditionally
  independent of its past given its present value. The formal definition
  follows.

  <\definition>
    A stochastic process is a class of random variables
    <math|X<rsub|t>\<in\>\<Omega\>>, denoted as
    <math|<around*|{|X<rsub|t>,t\<in\><with|font|cal|T>|}>> with
    <math|<with|font|cal|T>\<subseteq\>\<bbb-R\>>, where the parameter
    <math|t> is called time.
  </definition>

  <\definition>
    A <strong|Markov process> is a stochastic process
    <math|<around*|{|X<rsub|t>,t\<in\><with|font|cal|T>|}>,<with|font|cal|T>\<subseteq\>\<bbb-R\>,X<rsub|t>\<in\>\<Omega\>>,
    if for every <math|s\<gtr\>0> and <math|t>,

    <\equation*>
      <around*|(|X<rsub|t+s><around*|\||X<rsub|u>,u\<leqslant\>t|\<nobracket\>>|)>\<sim\><around*|(|X<rsub|t+s><around*|\||X<rsub|t>|\<nobracket\>>|)>.
    </equation*>

    In other words, the conditional distribution of the future variable
    <math|X<rsub|t+s>> given the entire past of the process
    <math|<around*|{|X<rsub|u>,u\<leqslant\>t|}>> is the same as the
    conditional distribution of <math|X<rsub|t+s>> given only the present
    <math|X<rsub|t>>.

    A <strong|Markov chain> is a Markov process with discrete time
    <math|<with|font|cal|T>=\<bbb-N\>>, i.e.,
    <math|X=<around*|{|X<rsub|t>,t\<in\>\<bbb-N\>|}>> with a <em|discrete>
    state space <math|<with|font|cal|E>>, and satisfies

    <\equation*>
      \<bbb-P\><around*|(|X<rsub|t+1>=x<rsub|t+1><around*|\||X<rsub|0>=x<rsub|0>,\<cdots\>,X<rsub|t>=x<rsub|t>|\<nobracket\>>|)>=\<bbb-P\><around*|(|X<rsub|t+1>=x<rsub|t+1><around*|\||X<rsub|t>=x<rsub|t>|\<nobracket\>>|)>.
    </equation*>
  </definition>

  <\note>
    Since the state space <math|<with|font|cal|E>> of a Markov chain is
    discrete, hence countable, we can label its states by
    <math|i,j,\<cdots\>\<in\>\<bbb-N\>>, i.e. we will assume

    <\equation*>
      <with|font|cal|E>=<around*|{|1,2,\<cdots\>,m|}><infix-or><with|font|cal|E>=<around*|{|1,2,\<cdots\>,m,\<cdots\>|}>.
    </equation*>
  </note>

  Denote the conditional probability

  <\equation*>
    \<bbb-P\><around*|(|X<rsub|t+1>=j<around*|\||X<rsub|t>=i|\<nobracket\>>|)>\<equiv\>p<rsub|i
    j><around*|(|t|)>,i,j\<in\><with|font|cal|E>.
  </equation*>

  <\note>
    We only consider the case where <math|p<rsub|i j>> does not depend on time
    <math|t>, i.e. the chain is <em|time-homogeneous>.
  </note>

  We call <math|p<rsub|i j>> the <em|transition probabilities> of <math|X>,
  and the distribution of <math|X<rsub|0>> the <em|initial distribution>.
  Obviously, the distribution of <math|X> is <em|determined> by the initial
  distribution and the transition probabilities. Namely,

  <\equation*>
    \<bbb-P\><around*|(|X<rsub|0>=x<rsub|0>,\<cdots\>,X<rsub|t>=x<rsub|t>|)>=\<bbb-P\><around*|(|X<rsub|0>=x<rsub|0>|)>\<bbb-P\><around*|(|X<rsub|1>=x<rsub|1><around*|\||X<rsub|0>=x<rsub|0>|\<nobracket\>>|)>\<cdots\>\<bbb-P\><around*|(|X<rsub|t>=x<rsub|t><around*|\||X<rsub|t-1>=x<rsub|t-1>|\<nobracket\>>|)>.
  </equation*>

  Since <math|<with|font|cal|E>> is countable, one can arrange the transition
  probabilities in a matrix, called the <em|transition matrix> of <math|X>,

  <\equation*>
    P=<around*|(|p<rsub|i j>|)>=<matrix|<tformat|<table|<row|<cell|p<rsub|00>>|<cell|p<rsub|01>>|<cell|p<rsub|02>>|<cell|\<cdots\>>>|<row|<cell|p<rsub|10>>|<cell|p<rsub|11>>|<cell|p<rsub|12>>|<cell|\<cdots\>>>|<row|<cell|p<rsub|20>>|<cell|p<rsub|21>>|<cell|p<rsub|22>>|<cell|\<cdots\>>>|<row|<cell|\<vdots\>>|<cell|\<vdots\>>|<cell|\<vdots\>>|<cell|\<ddots\>>>>>>,
  </equation*>

  where <math|p<rsub|i j>\<geqslant\>0> and <math|<big|sum><rsub|j>p<rsub|i
  j>=1>.

  Define the <strong|distribution vector> of <math|X<rsub|t>> as

  <\equation*>
    \<pi\><rsup|<around*|(|t|)>>\<assign\><around*|(|\<bbb-P\><around*|(|X<rsub|t>=1|)>,\<bbb-P\><around*|(|X<rsub|t>=2|)>,\<cdots\>|)>.
  </equation*>

  It is easy to find the following relation between
  <math|\<pi\><rsup|<around*|(|t|)>>> with
  <math|\<pi\><rsup|<around*|(|0|)>>>:

  <\equation>
    \<pi\><rsup|<around*|(|t|)>>=\<pi\><rsup|<around*|(|0|)>>P<rsup|t>,t=0,1,\<cdots\>.
  </equation>
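  A quick numerical illustration of this relation (the 3-state matrix
  <math|P> below is made up for the sketch): iterating
  <math|\<pi\>\<mapsto\>\<pi\>P> drives <math|\<pi\><rsup|<around*|(|t|)>>>
  toward a fixed vector independent of <math|\<pi\><rsup|<around*|(|0|)>>>.

  ```python
  import numpy as np

  # a small irreducible, aperiodic transition matrix P (each row sums to 1)
  P = np.array([[0.5, 0.3, 0.2],
                [0.2, 0.6, 0.2],
                [0.1, 0.4, 0.5]])

  pi = np.array([1.0, 0.0, 0.0])  # any initial distribution pi^(0)
  for _ in range(200):            # pi^(t) = pi^(0) P^t by repeated multiplication
      pi = pi @ P
  ```

  After enough steps, <math|\<pi\>> no longer changes under multiplication by
  <math|P>, which anticipates the stationary distribution discussed below.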

  <\definition>
    Let <math|X> be a Markov chain, and the transition matrix <math|P>. The
    states of <math|<with|font|cal|E>> are denoted as <math|i,j,\<cdots\>>.

    (i) <math|i> <strong|leads to> <math|j>: if
    <math|P<rsup|t><around*|(|i,j|)>\<gtr\>0> for some
    <math|t\<geqslant\>0>. Write this as <math|i\<rightarrow\>j>.

    (ii) <math|i> <strong|communicates> with <math|j>: if
    <math|i\<rightarrow\>j> and <math|j\<rightarrow\>i>. Write this as
    <math|i\<leftrightarrow\>j>.

    A Markov chain is <strong|irreducible>, if all states of
    <math|<with|font|cal|E>> communicate with each other.

    (iii) A set of states <math|<with|font|cal|A>> is <strong|closed>, if
    <math|<big|sum><rsub|j\<in\><with|font|cal|A>>P<around*|(|i,j|)>=1,\<forall\>i\<in\><with|font|cal|A>>.

    (iv) A state <math|i> is an <strong|absorbing state>, if
    <math|<around*|{|i|}>> is closed.
  </definition>

  <\definition>
    Let <math|T> be the time at which the chain first visits <math|j>, or
    first returns to <math|j> if it started there.

    (i) A state <math|j> is <strong|recurrent> if
    <math|\<bbb-P\><around*|(|T\<less\>\<infty\><around*|\||X<rsub|0>=j|\<nobracket\>>|)>=1>;
    otherwise, <math|j> is called <strong|transient>.

    (ii) A state <math|j> is <strong|periodic> with period
    <math|\<delta\>\<geqslant\>2>, if <math|\<delta\>> is the largest integer
    for which <math|\<bbb-P\><around*|(|T=n\<delta\>,n\<geqslant\>1<around*|\||X<rsub|0>=j|\<nobracket\>>|)>=1>;
    otherwise, <math|j> is called <strong|aperiodic>.
  </definition>

  We are interested in the steady-state behavior of a Markov chain.

  <\definition>
    [<strong|Limiting distribution>] <math|\<pi\>=<around*|(|\<pi\><rsub|1>,\<pi\><rsub|2>,\<cdots\>|)>>
    is called limiting distribution of the Markov chain, if it is the limit
    of <math|P<rsup|t><around*|(|i,j|)>>:

    <\equation*>
      \<pi\><rsub|j>\<assign\>lim<rsub|t\<rightarrow\>\<infty\>>P<rsup|t><around*|(|i,j|)>,\<forall\>i
    </equation*>

    with <math|\<pi\><rsub|j>\<in\><around*|[|0,1|]>> and
    <math|<big|sum>\<pi\><rsub|j>=1>.

    (Note the limit is required to be independent of the initial state
    <math|i>.)
  </definition>

  The following theorem tells how to find the limiting distribution.

  [Harris recurrence is only needed for general (non-countable) state spaces;
  for a countable chain the following statement suffices.]

  <\theorem>
    For an <strong|irreducible>, <strong|aperiodic> Markov chain with
    transition matrix <math|P>, if the limiting distribution <math|\<pi\>>
    exists, then it is uniquely determined by the solution of

    <\equation>
      \<pi\>=\<pi\>P<label|stationary-dist>.
    </equation>

    Note that a distribution <math|\<pi\>> satisfying
    eqn(<reference|stationary-dist>) is called a <strong|stationary
    distribution>.
  </theorem>

  <\note>
    Since <math|<big|sum><rsub|j>p<rsub|i j>=1>,
    eqn(<reference|stationary-dist>) can be rewritten as

    <\equation*>
      <big|sum><rsub|j>\<pi\><rsub|i>p<rsub|i
      j>=<big|sum><rsub|j>\<pi\><rsub|j>p<rsub|j
      i>,\<forall\>i\<in\><with|font|cal|E>,
    </equation*>

    which are called the <strong|(global) balance equations>. The word
    \Pbalance\Q comes from the interpretation: the LHS is the probability flow
    out of state <math|i>, while the RHS is the probability flow into state
    <math|i>; they are balanced.

    One can extend the above interpretation to a set of states
    <math|<with|font|cal|A>>, by <math|<around*|{|i|}>\<rightarrow\><with|font|cal|A>,<around*|{|all
    j|}> \<rightarrow\><with|font|cal|A><rsup|c>>:

    <\equation*>
      <big|sum><rsub|i\<in\><with|font|cal|A>><big|sum><rsub|j\<nin\><with|font|cal|A>>\<pi\><rsub|i>p<rsub|i
      j>=<big|sum><rsub|i\<in\><with|font|cal|A>><big|sum><rsub|j\<nin\><with|font|cal|A>>\<pi\><rsub|j>p<rsub|j
      i>.
    </equation*>

    A <em|stronger> version is the <em|local> balance equations, as follows.
  </note>

  <\definition>
    [<strong|detailed(local) balance equations>]

    <\equation>
      \<pi\><rsub|i>p<rsub|i j>=\<pi\><rsub|j>p<rsub|j
      i>,\<forall\>i,j\<in\><with|font|cal|E><label|detailed-balance>.
    </equation>
  </definition>

  <\note>
    (i) Since the detailed balance equations are stronger than the global
    balance equations, if a transition matrix <math|P> satisfies detailed
    balance with <math|\<pi\>>, then <math|\<pi\>> is a stationary
    distribution of the chain (and hence the limiting distribution when the
    chain is irreducible and aperiodic);

    (ii) Detailed balance means the Markov chain is <strong|reversible>,
    i.e., for any <math|n\<in\>\<bbb-N\><rsub|+>> and
    <math|t<rsub|1>,\<cdots\>t<rsub|n>>, the vector
    <math|<around*|(|X<rsub|t<rsub|1>>,\<cdots\>,X<rsub|t<rsub|n>>|)>> has
    the same distribution as <math|<around*|(|X<rsub|-t<rsub|1>>,\<cdots\>,X<rsub|-t<rsub|n>>|)>>.
  </note>

  <subsection|Metropolis-Hastings algorithm>

  Markov chain Monte Carlo (MCMC) gives a generic method to generate samples
  from an arbitrary distribution. In this subsection we present the classic
  MCMC method, the Metropolis-Hastings algorithm.

  Let the random variable <math|X> take values in the state space
  <math|<with|font|cal|E>=<around*|{|1,2,\<cdots\>,m|}>>, and let the target
  distribution be <math|\<pi\>> with

  <\equation*>
    \<pi\><rsub|i>=<frac|b<rsub|i>|C>,i\<in\><with|font|cal|E>,
  </equation*>

  where <math|C=<big|sum>b<rsub|i>> is the normalization constant. [MCMC does
  not need to know <math|C>; see below.]

  We construct a Markov chain <math|<around*|{|X<rsub|t>,t=0,1,\<cdots\>|}>>
  on <math|<with|font|cal|E>> whose evolution relies on a given proposal
  transition matrix <math|Q=<around*|(|q<rsub|i j>|)>>, in the following way:

  (i) When <math|X<rsub|t>=i>, generate a random variable <math|Y> satisfying
  <math|\<bbb-P\><around*|(|Y=j|)>=q<rsub|i j>,j\<in\><with|font|cal|E>>;

  (ii) If <math|Y=j>, let

  <\equation*>
    X<rsub|t+1>=<choice|<tformat|<table|<row|<cell|j,>|<cell|with Prob
    \<alpha\><rsub|i j>>>|<row|<cell|i,>|<cell|with Prob 1-\<alpha\><rsub|i
    j>>>>>>,
  </equation*>

  where

  <\equation*>
    \<alpha\><rsub|i j>=min<around*|(|<frac|\<pi\><rsub|j>|\<pi\><rsub|i>><frac|q<rsub|j
    i>|q<rsub|i j>>,1|)>=min<around*|(|<frac|b<rsub|j>|b<rsub|i>><frac|q<rsub|j
    i>|q<rsub|i j>>,1|)>.
  </equation*>

  <\theorem>
    The transition matrix <math|P> of the Markov chain defined by the
    Metropolis-Hastings algorithm is

    <\equation*>
      p<rsub|i j>=<choice|<tformat|<table|<row|<cell|q<rsub|i
      j>\<alpha\><rsub|i j>,>|<cell|if i\<neq\>j>>|<row|<cell|1-<big|sum><rsub|k\<neq\>i>q<rsub|i
      k>\<alpha\><rsub|i k>,>|<cell|if i=j>>>>>,
    </equation*>

    which satisfies the detailed balance equations
    (<reference|detailed-balance>), so that <math|\<pi\>> is its limiting
    distribution.
  </theorem>

  <\proof>
    The form of the transition matrix <math|p<rsub|i j>> is easy to obtain; we
    only prove the detailed balance equations, and it suffices to consider the
    case <math|i\<neq\>j>.

    If <math|<frac|\<pi\><rsub|j>|\<pi\><rsub|i>><frac|q<rsub|j i>|q<rsub|i
    j>>\<leqslant\>1>, then <math|\<alpha\><rsub|i
    j>=<frac|\<pi\><rsub|j>|\<pi\><rsub|i>><frac|q<rsub|j i>|q<rsub|i
    j>>\<leqslant\>1> while <math|\<alpha\><rsub|j i>=1>, and so\ 

    <\equation*>
      \<pi\><rsub|i>p<rsub|i j>=\<pi\><rsub|i>q<rsub|i j>\<alpha\><rsub|i
      j>=\<pi\><rsub|i>q<rsub|i j><frac|\<pi\><rsub|j>|\<pi\><rsub|i>><frac|q<rsub|j
      i>|q<rsub|i j>>=\<pi\><rsub|j>q<rsub|j
      i>=\<pi\><rsub|j>\<alpha\><rsub|j i>q<rsub|j i>=\<pi\><rsub|j>p<rsub|j
      i>.
    </equation*>

    If <math|<frac|\<pi\><rsub|j>|\<pi\><rsub|i>><frac|q<rsub|j i>|q<rsub|i
    j>>\<gtr\>1>, swapping <math|i\<leftrightarrow\>j> in the above argument
    gives the desired result.
  </proof>

  \;

  Note that the method extends directly to the <em|continuous> case, including
  the multidimensional one: <math|<around*|(|\<pi\><rsub|i>|)>> is replaced by
  a pdf <math|f<around*|(|x|)>,x\<in\>\<Omega\>>; <math|q<rsub|i j>> is
  replaced by <math|q<around*|(|x,y|)>\<equiv\>q<around*|(|y<around*|\||x|\<nobracket\>>|)>>;
  and <math|\<alpha\><rsub|i j>> is replaced by
  <math|\<alpha\><around*|(|x,y|)>>.

  <\named-algorithm|[Metropolis-Hastings Algorithm]>
    Given the current state <math|X<rsub|t>>, to get <math|X<rsub|t+1>>:

    1. generate <math|Y\<sim\>q<around*|(|X<rsub|t>,y|)>>;

    2. generate <math|U\<sim\><with|font|cal|U><rsub|<around*|[|0,1|]>>> and
    deliver\ 

    <\equation*>
      X<rsub|t+1>=<choice|<tformat|<table|<row|<cell|Y,>|<cell|if
      U\<leqslant\>\<alpha\><around*|(|X<rsub|t>,Y|)>>>|<row|<cell|X<rsub|t>,>|<cell|otherwise>>>>>,
    </equation*>

    where

    <\equation*>
      \<alpha\><around*|(|x,y|)>\<assign\>min<around*|{|\<varrho\><around*|(|x,y|)>,1|}>,
    </equation*>

    with

    <\equation*>
      \<varrho\><around*|(|x,y|)>\<assign\><frac|f<around*|(|y|)>|f<around*|(|x|)>><frac|q<around*|(|y,x|)>|q<around*|(|x,y|)>>.
    </equation*>
  </named-algorithm>
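  A minimal sketch of the algorithm in the continuous case (illustrative
  choices, not from the text: unnormalized target
  <math|f<around*|(|x|)>=e<rsup|-x<rsup|2>/2>>, i.e.
  <math|N<around*|(|0,1|)>> up to a constant, and a symmetric random-walk
  proposal, so the <math|q>-ratio in <math|\<varrho\>> cancels):

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  f = lambda x: np.exp(-0.5 * x * x)   # unnormalized target pdf (N(0,1) up to C)

  def metropolis_hastings(n, step=1.0):
      x = 0.0
      chain = np.empty(n)
      for t in range(n):
          y = x + rng.normal(0.0, step)              # symmetric proposal: q(x,y)=q(y,x)
          if rng.uniform() <= min(f(y) / f(x), 1.0): # acceptance alpha(x,y)
              x = y
          chain[t] = x                               # rejected moves repeat x
      return chain

  samples = metropolis_hastings(50_000)
  ```

  After a burn-in period, the empirical mean and standard deviation of the
  chain match those of <math|N<around*|(|0,1|)>>, even though the
  normalization constant of <math|f> was never used.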

  <\note>
    The efficiency depends on the acceptance probability
    <math|\<alpha\><rsub|i j>> or <math|\<alpha\><around*|(|x,y|)>>, which by
    definition depends on choosing <math|q<rsub|i j>> or
    <math|q<around*|(|x,y|)>> so as to make <math|\<varrho\><around*|(|x,y|)>>
    as large as possible.

    So how should one choose <math|q>?
  </note>

  <\example>
    [Independence sampler] One method is to let
    <math|q<around*|(|x,y|)>=g<around*|(|y|)>>, independent of <math|x>, where
    <math|g<around*|(|y|)>> is a known pdf on <math|\<Omega\>>. In this way,

    <\equation*>
      \<alpha\><around*|(|x,y|)>=min<around*|{|<frac|f<around*|(|y|)>|f<around*|(|x|)>><frac|g<around*|(|x|)>|g<around*|(|y|)>>,1|}>.
    </equation*>

    \;
  </example>
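  A sketch of the independence sampler (illustrative choices, not from the
  text: unnormalized target <math|f<around*|(|x|)>=e<rsup|-<around*|\||x|\|><rsup|3>>>
  and proposal <math|g=N<around*|(|0,1|)>>; both densities may be left
  unnormalized, since only ratios enter <math|\<alpha\>>):

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  f = lambda x: np.exp(-np.abs(x) ** 3)      # unnormalized target pdf
  g = lambda x: np.exp(-0.5 * x * x)         # proposal pdf up to a constant

  x = 0.0
  chain = np.empty(20_000)
  for t in range(chain.size):
      y = rng.normal(0.0, 1.0)               # q(x,y) = g(y), independent of x
      rho = (f(y) / f(x)) * (g(x) / g(y))    # acceptance ratio rho(x,y)
      if rng.uniform() <= min(rho, 1.0):
          x = y
      chain[t] = x
  ```

  Because <math|f/g> is bounded here (the target has lighter tails than the
  proposal), the chain mixes well; the symmetric target gives an empirical
  mean near zero.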

</body>

<\initial>
  <\collection>
    <associate|page-medium|papyrus>
  </collection>
</initial>

<\references>
  <\collection>
    <associate|auto-1|<tuple|1|1>>
    <associate|auto-10|<tuple|3.3|8>>
    <associate|auto-11|<tuple|3.4|9>>
    <associate|auto-12|<tuple|3.4.1|10>>
    <associate|auto-13|<tuple|3.4.2|10>>
    <associate|auto-14|<tuple|4|10>>
    <associate|auto-15|<tuple|4.1|10>>
    <associate|auto-16|<tuple|4.2|13>>
    <associate|auto-2|<tuple|2|3>>
    <associate|auto-3|<tuple|2.1|3>>
    <associate|auto-4|<tuple|2.2|5>>
    <associate|auto-5|<tuple|2.3|5>>
    <associate|auto-6|<tuple|2.4|6>>
    <associate|auto-7|<tuple|3|7>>
    <associate|auto-8|<tuple|3.1|7>>
    <associate|auto-9|<tuple|3.2|7>>
    <associate|detailed-balance|<tuple|6|13>>
    <associate|err-of-estimate|<tuple|2|2>>
    <associate|estimate|<tuple|1|2>>
    <associate|kappa-suqare|<tuple|3|3>>
    <associate|stationary-dist|<tuple|5|12>>
  </collection>
</references>

<\auxiliary>
  <\collection>
    <\associate|toc>
      <vspace*|1fn><with|font-series|<quote|bold>|math-font-series|<quote|bold>|1<space|2spc>Main
      idea> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-1><vspace|0.5fn>

      <vspace*|1fn><with|font-series|<quote|bold>|math-font-series|<quote|bold>|2<space|2spc>Random
      variable generator> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-2><vspace|0.5fn>

      <with|par-left|<quote|1tab>|2.1<space|2spc>Famous distrubutions
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-3>>

      <with|par-left|<quote|1tab>|2.2<space|2spc>Inverse transform method
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-4>>

      <with|par-left|<quote|1tab>|2.3<space|2spc>Composition method
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-5>>

      <with|par-left|<quote|1tab>|2.4<space|2spc>Acceptance-Rejection method
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-6>>

      <vspace*|1fn><with|font-series|<quote|bold>|math-font-series|<quote|bold>|3<space|2spc>Controlling
      the variance> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-7><vspace|0.5fn>

      <with|par-left|<quote|1tab>|3.1<space|2spc>Common and Antithetic random
      variables <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-8>>

      <with|par-left|<quote|1tab>|3.2<space|2spc>Control variables
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-9>>

      <with|par-left|<quote|1tab>|3.3<space|2spc>Stratified sampling
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-10>>

      <with|par-left|<quote|1tab>|3.4<space|2spc>importance sampling
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-11>>

      <with|par-left|<quote|2tab>|3.4.1<space|2spc>Variance-minimization
      method (VM method) <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-12>>

      <with|par-left|<quote|2tab>|3.4.2<space|2spc>Cross-entropy method (CE
      method) <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-13>>

      <vspace*|1fn><with|font-series|<quote|bold>|math-font-series|<quote|bold>|4<space|2spc>Markov
      Chain Monte Carlo (MCMC)> <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-14><vspace|0.5fn>

      <with|par-left|<quote|1tab>|4.1<space|2spc>Markov Chain
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-15>>

      <with|par-left|<quote|1tab>|4.2<space|2spc>Metropolis-Hasting algorithm
      <datoms|<macro|x|<repeat|<arg|x>|<with|font-series|medium|<with|font-size|1|<space|0.2fn>.<space|0.2fn>>>>>|<htab|5mm>>
      <no-break><pageref|auto-16>>
    </associate>
  </collection>
</auxiliary>