In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of $n$. It is named after James Stirling, though it was first stated by Abraham de Moivre.

The version of the formula typically used in applications is

<math display=block>\ln(n!) = n\ln n - n +\Theta(\ln n)</math>

(in Big Theta notation, as $n\to\infty$), or, by changing the base of the logarithm (for instance in the worst-case lower bound for comparison sorting),

<math display=block>\log_2 (n!) = n\log_2 n - n\log_2 e +\Theta(\log_2 n).</math>

Specifying the constant in the $\Theta(\ln n)$ error term gives $\tfrac{1}{2}\ln(2\pi n)$, yielding the more precise formula:

<math display=block>n! \sim \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n,</math>

where the sign ~ means that the two quantities are asymptotic: their ratio tends to 1 as $n$ tends to infinity. The following version of the bound holds for all $n \ge 1$, rather than only asymptotically:

<math display=block>\sqrt{2 \pi n}\ \left(\frac{n}{e}\right)^n e^{\frac{1}{12n + 1}} < n! < \sqrt{2 \pi n}\ \left(\frac{n}{e}\right)^n e^{\frac{1}{12n}}. </math>
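
This two-sided bound can be checked numerically; the following sketch (using only Python's standard library) verifies it for the first twenty values of $n$:

```python
import math

# Verify sqrt(2*pi*n)*(n/e)^n * e^(1/(12n+1)) < n! < sqrt(2*pi*n)*(n/e)^n * e^(1/(12n))
# for n = 1..20.
for n in range(1, 21):
    base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    lower = base * math.exp(1 / (12 * n + 1))
    upper = base * math.exp(1 / (12 * n))
    fact = math.factorial(n)
    assert lower < fact < upper, n
```

Note how tight the sandwich is: the two exponents differ only by $\tfrac{1}{12n}-\tfrac{1}{12n+1}$, a relative gap of order $1/(144n^2)$.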

Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum

<math display=block>\ln(n!) = \sum_{j=1}^n \ln j</math>

with an integral:

<math display=block>\sum_{j=1}^n \ln j \approx \int_1^n \ln x {\rm d}x = n\ln n - n + 1.</math>
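
A quick numerical comparison (a Python sketch) shows how close the integral already comes to the sum; the absolute error grows only like $\tfrac{1}{2}\ln n$:

```python
import math

# Compare ln(n!) = sum of ln j with the integral approximation n*ln(n) - n + 1.
n = 100
log_fact = sum(math.log(j) for j in range(1, n + 1))
integral = n * math.log(n) - n + 1
# The gap is roughly (1/2)*ln(2*pi*n) - 1, well below ln(n) at n = 100.
assert abs(log_fact - integral) < math.log(n)
```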

The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating $n!$, one considers its natural logarithm, as this is a slowly varying function:

<math display=block>\ln(n!) = \ln 1 + \ln 2 + \cdots + \ln n.</math>

The right-hand side of this equation minus

<math display=block>\tfrac{1}{2}(\ln 1 + \ln n) = \tfrac{1}{2}\ln n</math>

is the approximation by the trapezoid rule of the integral

<math display=block>\ln(n!) - \tfrac{1}{2}\ln n \approx \int_1^n \ln x{\rm d}x = n \ln n - n + 1,</math>

and the error in this approximation is given by the Euler–Maclaurin formula:

<math display=block>\begin{align}

\ln(n!) - \tfrac{1}{2}\ln n & = \tfrac{1}{2}\ln 1 + \ln 2 + \ln 3 + \cdots + \ln(n-1) + \tfrac{1}{2}\ln n\\

& = n \ln n - n + 1 + \sum_{k=2}^{m} \frac{(-1)^k B_k}{k(k-1)} \left( \frac{1}{n^{k-1}} - 1 \right) + R_{m,n},

\end{align}</math>

where $B_k$ is a Bernoulli number, and R<sub>m,n</sub> is the remainder term in the Euler–Maclaurin formula. Take limits to find that

<math display=block>\lim_{n \to \infty} \left( \ln(n!) - n \ln n + n - \tfrac{1}{2}\ln n \right) = 1 - \sum_{k=2}^{m} \frac{(-1)^k B_k}{k(k-1)} + \lim_{n \to \infty} R_{m,n}.</math>

Denote this limit as $y$. Because the remainder R<sub>m,n</sub> in the Euler–Maclaurin formula satisfies

<math display=block>R_{m,n} = \lim_{n \to \infty} R_{m,n} + O \left( \frac{1}{n^m} \right),</math>

where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form:

<math display=block>\ln(n!) = n \ln \left( \frac{n}{e} \right) + \tfrac{1}{2}\ln n + y + \sum_{k=2}^{m} \frac{(-1)^k B_k}{k(k-1)n^{k-1}} + O \left( \frac{1}{n^m} \right).</math>

Taking the exponential of both sides and choosing any positive integer $m$, one obtains a formula involving an unknown quantity $e^y$. For m = 1, the formula is

<math display=block>n! = e^y \sqrt{n} \left( \frac{n}{e} \right)^n \left( 1 + O \left( \frac{1}{n} \right) \right).</math>

The quantity $e^y$ can be found by taking the limit on both sides as $n$ tends to infinity and using Wallis' product, which shows that $e^y=\sqrt{2\pi}$. Therefore, one obtains Stirling's formula:

<math display=block>n! = \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n \left( 1 + O \left( \frac{1}{n} \right) \right).</math>

An alternative formula for $n!$ using the gamma function is

<math display=block> n! = \int_0^\infty x^n e^{-x}{\rm d}x.</math>

(as can be seen by repeated integration by parts). Rewriting and changing variables x = ny, one obtains

<math display=block> n! = \int_0^\infty e^{n\ln x-x}{\rm d}x = e^{n \ln n} n \int_0^\infty e^{n(\ln y -y)}{\rm d}y.</math>

Applying Laplace's method one has

<math display=block>\int_0^\infty e^{n(\ln y -y)}{\rm d}y \sim \sqrt{\frac{2\pi}{n}} e^{-n},</math>

which recovers Stirling's formula:

<math display=block>n! \sim e^{n \ln n} n \sqrt{\frac{2\pi}{n}} e^{-n}

= \sqrt{2\pi n}\left(\frac{n}{e}\right)^n.

</math>

In fact, further corrections can also be obtained using Laplace's method. For example, computing the expansion to two orders with Laplace's method yields (using little-o notation)

<math display=block>\int_0^\infty e^{n(\ln y-y)}{\rm d}y = \sqrt{\frac{2\pi}{n}} e^{-n}

\left(1+\frac{1}{12 n}+o\left(\frac{1}{n}\right)\right)</math>

and gives Stirling's formula to two orders:

<math display=block> n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \left(1 + \frac{1}{12 n}+o\left(\frac{1}{n}\right) \right).

</math>
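
The effect of the $\tfrac{1}{12n}$ correction is easy to observe numerically; this sketch compares both truncations against the exact factorial:

```python
import math

# Check that the 1/(12n) correction sharply shrinks the error of Stirling's formula.
n = 10
exact = math.factorial(n)
stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
corrected = stirling * (1 + 1 / (12 * n))
err0 = abs(stirling / exact - 1)   # leading error ~ 1/(12n)
err1 = abs(corrected / exact - 1)  # residual error ~ 1/(288n^2)
assert err1 < err0 / 50
```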

A complex-analysis version of this method is to consider $\frac{1}{n!}$ as a Taylor coefficient of the exponential function $e^z = \sum_{n=0}^\infty \frac{z^n}{n!}$, computed by Cauchy's integral formula as

<math display=block>\frac{1}{n!} = \frac{1}{2\pi i} \oint_{|z|=r} \frac{e^z}{z^{n+1}}\,{\rm d}z.</math>

For complex arguments, the remainder $R_N(z)$ obtained by truncating Stirling's series for $\ln\Gamma(z)$ after $N$ terms satisfies the bound

<math display=block>\left|R_N(z)\right| \le \frac{\left|B_{2N}\right|}{2N(2N-1)|z|^{2N-1}} \times
\begin{cases}
1 & \text{ if } |\arg z| \leq \frac{\pi}{4}, \\
|\csc(\arg z)| & \text{ if } \frac{\pi}{4}<|\arg z| < \frac{\pi}{2}, \\
\sec^{2N}\left(\tfrac{\arg z}{2}\right) & \text{ if } |\arg z| < \pi.
\end{cases}</math>

A convergent version of Stirling's formula can be obtained by evaluating Binet's integral; the coefficients of the resulting series of inverted rising factorials are

<math display=block>c_n = \frac{1}{2n} \sum_{k=1}^{n} \frac{k \left|s(n, k)\right|}{(k + 1)(k + 2)},</math>

where s(n, k) denotes the Stirling numbers of the first kind. From this one obtains a version of Stirling's series

<math display=block>\begin{align}

\ln\Gamma(x) &= x\ln x - x + \tfrac12\ln\frac{2\pi}{x} + \frac{1}{12(x+1)} + \frac{1}{12(x+1)(x+2)} + \\

&\quad + \frac{59}{360(x+1)(x+2)(x+3)} + \frac{29}{60(x+1)(x+2)(x+3)(x+4)} + \cdots,

\end{align}</math>

which converges when Re(x) > 0.
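
As a sanity check (a Python sketch; `ln_gamma_series` is an illustrative name), the first four terms displayed above already give $\ln\Gamma(x)$ to several digits for moderate $x$:

```python
import math

# Evaluate the first four terms of the convergent Stirling series for ln Gamma(x)
# and compare with math.lgamma. Coefficients 1/12, 1/12, 59/360, 29/60 are read
# off the series above.
def ln_gamma_series(x):
    s = x * math.log(x) - x + 0.5 * math.log(2 * math.pi / x)
    prod = 1.0
    for n, c in enumerate([1/12, 1/12, 59/360, 29/60], start=1):
        prod *= (x + n)  # running product (x+1)(x+2)...(x+n)
        s += c / prod
    return s

assert abs(ln_gamma_series(10.0) - math.lgamma(10.0)) < 1e-4
```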

The approximation

<math display=block>\Gamma(z) \approx \sqrt{\frac{2 \pi}{z}} \left(\frac{z}{e} \sqrt{z \sinh\frac{1}{z} + \frac{1}{810z^6} } \right)^z</math>

and its equivalent form

<math display=block>2\ln\Gamma(z) \approx \ln(2\pi) - \ln z + z \left(2\ln z + \ln\left(z\sinh\frac{1}{z} + \frac{1}{810z^6}\right) - 2\right)</math>

can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.
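
The claimed accuracy is easy to test against a library implementation; this sketch compares the Windschitl form with Python's `math.gamma`:

```python
import math

# Windschitl's approximation to Gamma(z), checked against math.gamma.
def gamma_windschitl(z):
    return math.sqrt(2 * math.pi / z) * (
        (z / math.e) * math.sqrt(z * math.sinh(1 / z) + 1 / (810 * z**6))
    ) ** z

# Claimed good to more than 8 decimal digits for Re(z) > 8.
for z in (9.0, 10.0, 15.5):
    assert abs(gamma_windschitl(z) / math.gamma(z) - 1) < 1e-8
```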

Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:

<math display=block>\Gamma(z) \approx \sqrt{\frac{2\pi}{z} } \left(\frac{1}{e} \left(z + \frac{1}{12z - \frac{1}{10z}}\right)\right)^z,</math>

or equivalently,

<math display=block> \ln\Gamma(z) \approx \tfrac{1}{2} \left(\ln(2\pi) - \ln z\right) + z\left(\ln\left(z + \frac{1}{12z - \frac{1}{10z}}\right) - 1\right). </math>
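
Nemes's formula is likewise straightforward to verify numerically (function name illustrative; the tolerance below allows for roughly 8 significant digits at moderate $z$):

```python
import math

# Nemes's approximation to Gamma(z), checked against math.gamma.
def gamma_nemes(z):
    return math.sqrt(2 * math.pi / z) * (
        (z + 1 / (12 * z - 1 / (10 * z))) / math.e
    ) ** z

for z in (9.0, 10.0, 20.0):
    assert abs(gamma_nemes(z) / math.gamma(z) - 1) < 1e-7
```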

An alternative approximation for the gamma function stated by Srinivasa Ramanujan (Ramanujan 1988) is

<math display=block>\Gamma(1+x) \approx \sqrt{\pi} \left(\frac{x}{e}\right)^x \left( 8x^3 + 4x^2 + x + \frac{1}{30} \right)^{\frac{1}{6}}</math>

for x ≥ 0. The equivalent approximation for ln n! has an asymptotic error of 1/(1400n<sup>3</sup>) and is given by

<math display=block>\ln n! \approx n\ln n - n + \tfrac{1}{6}\ln(8n^3 + 4n^2 + n + \tfrac{1}{30}) + \tfrac{1}{2}\ln\pi .</math>
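
This sketch checks Ramanujan's ln n! formula against `math.lgamma`, using a loose multiple of the stated asymptotic error $1/(1400n^3)$ as the tolerance:

```python
import math

# Ramanujan's approximation for ln(n!); asymptotic error ~ 1/(1400*n^3).
def ln_factorial_ramanujan(n):
    return (n * math.log(n) - n
            + math.log(8 * n**3 + 4 * n**2 + n + 1 / 30) / 6
            + math.log(math.pi) / 2)

for n in (5, 10, 50):
    err = abs(ln_factorial_ramanujan(n) - math.lgamma(n + 1))
    assert err < 1 / (100 * n**3)  # well within a loose multiple of 1/(1400*n^3)
```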

The approximation may be made precise by giving paired upper and lower bounds; one such inequality is

<math display=block> \sqrt{\pi} \left(\frac{x}{e}\right)^x \left( 8x^3 + 4x^2 + x + \frac{1}{100} \right)^{1/6} < \Gamma(1+x) < \sqrt{\pi} \left(\frac{x}{e}\right)^x \left( 8x^3 + 4x^2 + x + \frac{1}{30} \right)^{1/6}.</math>

In computer science, especially in the context of randomized algorithms, it is common to generate random bit vectors that are powers of two in length. Many algorithms producing and consuming these bit vectors are sensitive to the population count of the bit vectors generated, or to the Manhattan distance between two such vectors. Often of particular interest is the density of "fair" vectors, where the population count of an n-bit vector is exactly $n/2$. This amounts to the probability that an iterated coin toss over many trials leads to a tie game.

Stirling's approximation to ${n \choose n/2}$, the central and maximal binomial coefficient of the binomial distribution, simplifies especially nicely when $n$ takes the form $4^k$ for an integer $k$. Here we are interested in how much the density of the central population count is diminished relative to $2^n$, with the final form expressed as an attenuation in decibels:

<math display=block>\begin{align}

\log_2 {n \choose n/2} - n & = -k - \frac{\log_2(\pi)-1}{2} + O\left(\frac{1}{n}\right)\\

& \approx -k - 0.3257481 \\

& \approx -k -\frac13 \\

& \approx \mathbf {3k+1} ~~ \mathrm{dB}~(\text{attenuation})

\end{align}</math>

This simple approximation exhibits surprising accuracy: 

<math display=block>\begin{align}

10\log_{10}(2^{-1024} {1024 \choose 512}) &\approx -16.033159 

~~~\begin{cases}

k &= 5 \\

n = 4^k &= 1024 \\

3 k + 1 &= \mathbf {16} \\

\end{cases} \\

10\log_{10}(2^{-1048576} {1048576 \choose 524288}) &\approx -31.083600

~~~\begin{cases}

k &= 10 \\

n = 4^k &= 1048576 \\

3 k + 1 &= \mathbf {31} \\

\end{cases}

\end{align}</math>
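
Both worked examples can be reproduced with a few lines of Python, using `math.lgamma` to evaluate the huge binomial coefficients in log space:

```python
import math

# 10*log10(C(n, n/2) / 2^n), computed via lgamma to avoid huge integers.
def central_db(n):
    ln_ratio = math.lgamma(n + 1) - 2 * math.lgamma(n / 2 + 1) - n * math.log(2)
    return 10 * ln_ratio / math.log(10)

# For n = 4^k the attenuation is close to 3k+1 dB.
for k in (5, 10):
    db = central_db(4**k)
    assert abs(db + (3 * k + 1)) < 0.1
```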

The attenuation in binary orders of magnitude is recovered from the value in dB by dividing by $10\log_{10} 2 \approx 3.0103 \approx 3$.

As a direct fractional estimate: 

<math display=block>\begin{align}

{n \choose n/2}/2^n & = 2^{\frac{1-\log_2(\pi)}{2}-k} \left(1 + O\left(\frac{1}{n}\right)\right) \\

& \approx \sqrt{\frac{2}{\pi}} ~ 2^{-k} \\

& \approx 0.7978846 ~ 2^{-k} \\

& \approx \mathbf {\frac{4}{5} 2^{-k}}

\end{align}</math>

Once again, both examples exhibit accuracy easily besting 1%: 

<math display=block>\begin{align}

{256 \choose 128} 2^{-256} &\approx 20.072619^{-1} 

~~~\begin{cases}

k &= 4 \\

n = 4^k &= 256 \\

\frac{4}{5} \times \frac{1}{2^4} &= \mathbf {20}^{-1} \\

\end{cases} \\

{1048576 \choose 524288} 2^{-1048576} &\approx 1283.3940^{-1}

~~~\begin{cases}

k &= 10 \\

n = 4^k &= 1048576 \\

\frac{4}{5} \times \frac{1}{2^{10}} &= \mathbf {1280}^{-1} 

\end{cases}

\end{align}</math>
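
The fractional estimate is just as easy to confirm with exact integer arithmetic (Python's big integers keep the binomial coefficient exact before the final division):

```python
import math

# Check C(n, n/2)/2^n ≈ (4/5) * 2^(-k) for n = 4^k, to within 1%.
for k in (4, 5):
    n = 4**k
    ratio = math.comb(n, n // 2) / 2**n
    estimate = 0.8 * 2**-k
    assert abs(ratio / estimate - 1) < 0.01
```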

Interpreted as an iterated coin toss, a session involving slightly over a million coin flips (a binary million) has one chance in roughly 1300 of ending in a draw.

Both of these approximations (one in log space, the other in linear space) are simple enough for many software developers to obtain the estimate mentally, with exceptional accuracy by the standards of mental estimates. 

The binomial distribution closely approximates the normal distribution for large $n$, so these estimates based on Stirling's approximation also relate to the peak value of the probability mass function for large $n$ and $p = 0.5$, as specified for the following distribution: $ \mathcal{N}(np,np(1-p))$.

The formula was first discovered by Abraham de Moivre in the form

<math display=block>n! \sim [{\rm constant}] \cdot n^{n+\frac12} e^{-n}.</math>

De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely $\sqrt{2\pi} $.