Dataset columns (value-length ranges as reported by the dataset viewer):
prompt: string, 12 to 1.27k characters
context: string, 2.29k to 64.4k characters
A: string, 1 to 145 characters
B: string, 1 to 129 characters
C: string, 3 to 138 characters
D: string, 1 to 158 characters
E: string, 1 to 143 characters
answer: string, 5 classes
Calculate the uncertainty $\Delta L_z$ for the hydrogen-atom stationary state: $2 p_z$.
H_x+H_p \approx 0.69 + 0.53 = 1.22 >\ln\left(\frac{e}{2}\right)-\ln 1 \approx 0.31 ==Uncertainty relation with three angular momentum components== For a particle of spin-j the following uncertainty relation holds \sigma_{J_x}^2+\sigma_{J_y}^2+\sigma_{J_z}^2\ge j, where J_l are angular momentum components. H_p = -\sum_{j=-\infty}^\infty \operatorname P[p_j] \ln \operatorname P[p_j] = -\operatorname P[p_0] \ln \operatorname P[p_0]-2 \cdot \sum_{j=1}^{\infty} \operatorname P[p_j] \ln \operatorname P[p_j] \approx 0.53 The entropic uncertainty is indeed larger than the limiting value. Entropic uncertainty of the normal distribution We demonstrate this method on the ground state of the QHO, which as discussed above saturates the usual uncertainty based on standard deviations. Under the above definition, the entropic uncertainty relation is H_x + H_p > \ln\left(\frac{e}{2}\right)-\ln\left(\frac{\delta x \delta p}{h} \right). Its value is The number in parenthesis denotes the uncertainty of the last digits. ==Definition and value== The Bohr radius is defined asDavid J. Griffiths, Introduction to Quantum Mechanics, Prentice- Hall, 1995, p. 137. a_0 = \frac{4 \pi \varepsilon_0 \hbar^2}{e^2 m_{\text{e}}} = \frac{\varepsilon_0 h^2}{\pi e^2 m_{\text{e}}} = \frac{\hbar}{m_{\text{e}} c \alpha} , where * \varepsilon_0 is the permittivity of free space, * \hbar is the reduced Planck constant, * m_{\text{e}} is the mass of an electron, * e is the elementary charge, * c is the speed of light in vacuum, and * \alpha is the fine-structure constant. Thus, we need to calculate the bound of the Robertson–Schrödinger uncertainty for the mixed components of the quantum state rather than for the quantum state, and compute an average of their square roots. Since |\psi(x)|^2 is a probability density function for position, we calculate its standard deviation. Normal distribution example We demonstrate this method first on the ground state of the QHO, which as discussed above saturates the usual uncertainty based on standard deviations. \psi(x)=\left(\frac{m \omega}{\pi \hbar}\right)^{1/4} \exp{\left( -\frac{m \omega x^2}{2\hbar}\right)} The probability of lying within one of these bins can be expressed in terms of the error function. \begin{align} \operatorname P[x_j] &= \sqrt{\frac{m \omega}{\pi \hbar}} \int_{(j-1/2)\delta x}^{(j+1/2)\delta x} \exp\left( -\frac{m \omega x^2}{\hbar}\right) \, dx \\\ &= \sqrt{\frac{1}{\pi}} \int_{(j-1/2)\delta x\sqrt{m \omega / \hbar}}^{(j+1/2)\delta x\sqrt{m \omega / \hbar}} e^{u^2} \, du \\\ &= \frac{1}{2} \left[ \operatorname{erf} \left( \left(j+\frac{1}{2}\right)\delta x \cdot \sqrt{\frac{m \omega}{\hbar}}\right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right)\delta x \cdot \sqrt{\frac{m \omega}{\hbar}}\right) \right] \end{align} The momentum probabilities are completely analogous. \operatorname P[p_j] = \frac{1}{2} \left[ \operatorname{erf} \left( \left(j+\frac{1}{2}\right)\delta p \cdot \frac{1}{\sqrt{\hbar m \omega}}\right)- \operatorname{erf} \left( \left(j-\frac{1}{2}\right)\delta x \cdot \frac{1}{\sqrt{\hbar m \omega}}\right) \right] For simplicity, we will set the resolutions to \delta x = \sqrt{\frac{h}{m \omega}} \delta p = \sqrt{h m \omega} so that the probabilities reduce to \operatorname P[x_j] = \operatorname P[p_j] = \frac{1}{2} \left[ \operatorname {erf} \left( \left(j+\frac{1}{2}\right) \sqrt{2\pi} \right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right) \sqrt{2\pi} \right) \right] The Shannon entropy can be evaluated numerically. 
\begin{align} H_x = H_p &= -\sum_{j=-\infty}^\infty \operatorname P[x_j] \ln \operatorname P[x_j] \\\ &= -\sum_{j=-\infty}^\infty \frac{1}{2} \left[ \operatorname {erf} \left( \left(j+\frac{1}{2}\right) \sqrt{2\pi} \right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right) \sqrt{2\pi} \right) \right] \ln \frac{1}{2} \left[ \operatorname {erf} \left( \left(j+\frac{1}{2}\right) \sqrt{2\pi} \right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right) \sqrt{2\pi} \right) \right] \\\ &\approx 0.3226 \end{align} The entropic uncertainty is indeed larger than the limiting value. 180px|thumb|right|Diagram of a helium atom, showing the electron probability density as shades of gray. This precision may be quantified by the standard deviations, \sigma_x=\sqrt{\langle \hat{x}^2 \rangle-\langle \hat{x}\rangle^2} \sigma_p=\sqrt{\langle \hat{p}^2 \rangle-\langle \hat{p}\rangle^2}. In his celebrated 1927 paper, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. When applying the BO approximation, two smaller, consecutive steps can be used: For a given position of the nuclei, the electronic Schrödinger equation is solved, while treating the nuclei as stationary (not "coupled" with the dynamics of the electrons). In the published 1927 paper, Heisenberg originally concluded that the uncertainty principle was ΔpΔq ≈ h using the full Planck constant.Werner Heisenberg, Encounters with Einstein and Other Essays on People, Places and Particles, Published October 21st 1989 by Princeton University Press, p.53.Kumar, Manjit. "On the energy- time uncertainty relation. Orbital uncertainty is related to several parameters used in the orbit determination process including the number of observations (measurements), the time spanned by those observations (observation arc), the quality of the observations (e.g. radar vs. optical), and the geometry of the observations. The matrix element in the numerator is : \langle\chi_{k'}| [P_{A\alpha}, H_\mathrm{e}] |\chi_k\rangle_{(\mathbf{r})} = iZ_A\sum_i \left\langle\chi_{k'}\left|\frac{(\mathbf{r}_{iA})_\alpha}{r_{iA}^3}\right|\chi_k\right\rangle_{(\mathbf{r})} \quad\text{with}\quad \mathbf{r}_{iA} \equiv \mathbf{r}_i - \mathbf{R}_A. The variances of x and p can be calculated explicitly: \sigma_x^2=\frac{L^2}{12}\left(1-\frac{6}{n^2\pi^2}\right) \sigma_p^2=\left(\frac{\hbar n\pi}{L}\right)^2. On the other hand, the standard deviation of the position is \sigma_x = \frac{x_0}{\sqrt{2}} \sqrt{1+\omega_0^2 t^2} such that the uncertainty product can only increase with time as \sigma_x(t) \sigma_p(t) = \frac{\hbar}{2} \sqrt{1+\omega_0^2 t^2} ==Additional uncertainty relations== ===Systematic and statistical errors=== The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation \sigma. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). Everett's Dissertation proven in 1975 by W. 
Beckner and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski. The second stronger uncertainty relation is given by \sigma_A^2 + \sigma_B^2 \ge \frac{1}{2}| \langle {\bar \Psi}_{A+B} \mid(A + B)\mid \Psi \rangle|^2 where | {\bar \Psi}_{A+B} \rangle is a state orthogonal to |\Psi \rangle . The product of the standard deviations is therefore \sigma_x \sigma_p = \frac{\hbar}{2} \sqrt{\frac{n^2\pi^2}{3}-2}.
A. 9.73   B. 0.064   C. 0.0   D. 8   E. 61
Answer: C
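As a quick check on the listed answer (option C): the $2p_z$ state is an eigenstate of $L_z$ with magnetic quantum number $m=0$, so both $\langle L_z\rangle$ and $\langle L_z^2\rangle$ vanish. A minimal Python sketch of that arithmetic (variable names are illustrative):

```python
import math

hbar = 1.0                      # work in units of hbar
m = 0                           # magnetic quantum number of 2p_z (l = 1, m = 0)
exp_Lz = m * hbar               # <L_z>   = m*hbar for an L_z eigenstate
exp_Lz2 = (m * hbar) ** 2       # <L_z^2> = (m*hbar)^2
print(math.sqrt(exp_Lz2 - exp_Lz ** 2))   # 0.0, matching option C
```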
5.8-5. If the distribution of $Y$ is $b(n, 0.25)$, give a lower bound for $P(|Y / n-0.25|<0.05)$ when (a) $n=100$.
If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. The lower bound is expressed in terms of the probabilities for pairs of events. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... thumb|Plot of S_n/n (red), its standard deviation 1/\sqrt{n} (blue) and its bound \sqrt{2\log\log n/n} given by LIL (green). right|thumb|300px| Probability mass function for Fisher's noncentral hypergeometric distribution for different values of the odds ratio ω. m1 = 80, m2 = 60, n = 100, ω = 0.01, ..., 1000 In probability theory and statistics, Fisher's noncentral hypergeometric distribution is a generalization of the hypergeometric distribution where sampling probabilities are modified by weight factors. In statistics, probable error defines the half-range of an interval about a central point for the distribution, such that half of the values from the distribution will lie within the interval and half outside.Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, OUP. For example, when n = 50 it takes about 225E(50) = 50(1 + 1/2 + 1/3 + ... + 1/50) = 224.9603, the expected number of trials to collect all 50 coupons. The probability function and a simple approximation to the mean are given to the right. Bound the desired probability using the Chebyshev inequality: :\operatorname{P}\left(|T- n H_n| \geq cn\right) \le \frac{\pi^2}{6c^2}. ===Tail estimates=== A stronger tail estimate for the upper tail be obtained as follows. Probability. Then : \begin{align} P\left [ {Z}_i^r \right ] = \left(1-\frac{1}{n}\right)^r \le e^{-r / n}. \end{align} Thus, for r = \beta n \log n, we have P\left [ {Z}_i^r \right ] \le e^{(-\beta n \log n ) / n} = n^{-\beta}. The probable error can also be expressed as a multiple of the standard deviation σ,Zwillinger, D.; Kokosa, S. (2000) CRC Standard Probability and Statistics Tables and Formulae, Chapman & Hall/CRC. Their odds ratio is given as : \omega = \frac{\omega_X}{\omega_Y} = \frac{\pi_X/(1-\pi_X)}{\pi_Y/(1-\pi_Y)} . It asks the following question: If each box of a brand of cereals contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? In probability theory, the Chung–Erdős inequality provides a lower bound on the probability that one out of many (possibly dependent) events occurs. 
The mathematical analysis of the problem reveals that the expected number of trials needed grows as \Theta(n\log(n)). The commonly used approximate value of 1.96 is therefore accurate to better than one part in 50,000, which is more than adequate for applied work. The approximation n\log n+\gamma n+1/2 for this expected number gives in this case 50\log 50+50\gamma+1/2 \approx 195.6011+28.8608+0.5\approx 224.9619. trials on average to collect all 50 coupons. ==Solution== ===Calculating the expectation=== Let time T be the number of draws needed to collect all n coupons, and let ti be the time to collect the i-th coupon after i − 1 coupons have been collected. The trial can be summarized and analyzed in terms of the following contingency table. responder non-responder Total X x . mX Y y . mY Total n .
A. 0.2   B. 6.3   C. 0.25   D. 0.8185   E. 0.5
Answer: C
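A minimal sketch of the Chebyshev calculation behind option C, using only the standard library; variable names are illustrative:

```python
p, eps, n = 0.25, 0.05, 100
var_phat = p * (1 - p) / n            # Var(Y/n) = p(1-p)/n = 0.001875
lower_bound = 1 - var_phat / eps**2   # Chebyshev: P(|Y/n - p| < eps) >= 1 - Var/eps^2
print(lower_bound)                    # 0.25, matching option C
```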
5.3-13. A device contains three components, each of which has a lifetime in hours with the pdf $$ f(x)=\frac{2 x}{10^2} e^{-(x / 10)^2}, \quad 0 < x < \infty . $$ The device fails with the failure of one of the components. Assuming independent lifetimes, what is the probability that the device fails in the first hour of its operation? HINT: $G(y)=P(Y \leq y)=1-P(Y>y)=1-P$ (all three $>y$ ).
Note that this is a conditional probability, where the condition is that no failure has occurred before time t. Although the failure rate, \lambda (t), is often thought of as the probability that a failure occurs in a specified interval given no failure before time t, it is not actually a probability because it can exceed 1. It is based on an exponential failure distribution (see failure rate for a full derivation). A continuous failure rate depends on the existence of a failure distribution, F(t), which is a cumulative distribution function that describes the probability of failure (at least) up to and including time t, :\operatorname{Pr}(T\le t)=F(t)=1-R(t),\quad t\ge 0. \\! where {T} is the failure time. It can be defined with the aid of the reliability function, also called the survival function, R(t)=1-F(t), the probability of no failure before time t. ::\lambda(t) = \frac{f(t)}{R(t)}, where f(t) is the time to (first) failure distribution (i.e. the failure density function). ::\lambda(t) = \frac{R(t_1)-R(t_2)}{(t_2-t_1) \cdot R(t_1)} = \frac{R(t)-R(t+\Delta t)}{\Delta t \cdot R(t)} \\! over a time interval \Delta t = (t_2-t_1) from t_1 (or t) to t_2. Solving the differential equation :h(t)=\frac{f(t)}{1-F(t)}=\frac{F'(t)}{1-F(t)} for F(t), it can be shown that :F(t) = 1 - \exp{\left(-\int_0^t h(t) dt \right)}. ==Decreasing failure rate== A decreasing failure rate (DFR) describes a phenomenon where the probability of an event in a fixed time interval in the future decreases over time. The Failures In Time (FIT) rate of a device is the number of failures that can be expected in one billion (109) device-hours of operation. The failure rate of a system usually depends on time, with the rate varying over the life cycle of the system. The pdf for the standard fatigue life distribution reduces to : f(x) = \frac{\sqrt{x}+\sqrt{\frac{1}{x}}}{2\gamma x}\phi\left(\frac{\sqrt{x}-\sqrt{\frac{1}{x}}}{\gamma}\right)\quad x > 0; \gamma >0 Since the general form of probability functions can be expressed in terms of the standard distribution, all of the subsequent formulas are given for the standard form of the function. ==Cumulative distribution function== The formula for the cumulative distribution function is : F(x) = \Phi\left(\frac{\sqrt{x} - \sqrt{\frac{1}{x}}}{\gamma}\right)\quad x > 0; \gamma > 0 where Φ is the cumulative distribution function of the standard normal distribution. ==Quantile function== The formula for the quantile function is : G(p) = \frac{1}{4}\left[\gamma\Phi^{-1}(p) + \sqrt{4+\left(\gamma\Phi^{-1}(p)\right)^2}\right]^2 where Φ −1 is the quantile function of the standard normal distribution. ==References== * * * * * * * ==External links== *Fatigue life distribution Category:Continuous distributions The results are as follows: Estimated failure rate is : \frac{6\text{ failures}}{7502\text{ hours}} = 0.0007998\, \frac{\text{failures}}{\text{hour}} = 799.8 \times 10^{-6}\, \frac{\text{failures}}{\text{hour}}, or 799.8 failures for every million hours of operation. ==See also== *Annualized failure rate *Burn-in *Failure *Failure mode *Failure modes, effects, and diagnostic analysis *Force of mortality *Frequency of exceedance *Reliability engineering *Reliability theory *Reliability theory of aging and longevity *Survival analysis *Weibull distribution ==References== ==Further reading== * * * *Federal Standard 1037C * * * * * * * *U.S. 
Department of Defense, (1991) Military Handbook, “Reliability Prediction of Electronic Equipment, MIL-HDBK-217F, 2 ==External links== *Bathtub curve issues , ASQC *Fault Tolerant Computing in Industrial Automation by Hubert Kirrmann, ABB Research Center, Switzerland Category:Actuarial science Category:Engineering failures Category:Reliability engineering Category:Survival analysis Category:Maintenance Category:Statistical ratios Category:Error measures Category:Rates Thus, for an exponential failure distribution, the hazard rate is a constant with respect to time (that is, the distribution is "memory-less"). The failure distribution function is the integral of the failure density function, f(t), :F(t)=\int_{0}^{t} f(\tau)\, d\tau. Failure rate is the frequency with which an engineered system or component fails, expressed in failures per unit of time. Many probability distributions can be used to model the failure distribution (see List of important probability distributions). The Birnbaum-Saunders distribution, also known as the fatigue life distribution, is a probability distribution used extensively in reliability applications to model failure times. Failures most commonly occur near the beginning and near the ending of the lifetime of the parts, resulting in the bathtub curve graph of failure rates. # Reliability of semiconductor devices may depend on assembly, use, environmental, and cooling conditions. X is then distributed normally with a mean of zero and a variance of α2 / 4. ==Probability density function== The general formula for the probability density function (pdf) is : f(x) = \frac{\sqrt{\frac{x-\mu}{\beta}}+\sqrt{\frac{\beta}{x-\mu}}}{2\gamma\left(x-\mu\right)}\phi\left(\frac{\sqrt{\frac{x-\mu}{\beta}}-\sqrt{\frac{\beta}{x-\mu}}}{\gamma}\right)\quad x > \mu; \gamma,\beta>0 where γ is the shape parameter, μ is the location parameter, β is the scale parameter, and \phi is the probability density function of the standard normal distribution. ==Standard fatigue life distribution== The case where μ = 0 and β = 1 is called the standard fatigue life distribution. Failure rates are often expressed in engineering notation as failures per million, or 10−6, especially for individual components, since their failure rates are often very low. Reliability of semiconductor devices can be summarized as follows: # Semiconductor devices are very sensitive to impurities and particles. For many devices, the wear-out failure point is measured by the number of cycles performed before the device fails, and can be discovered by cycle testing. The failure can occur invisibly inside the packaging and is measurable.
A. 0.1800   B. 41.40   C. 0.03   D. 109   E. 5.4
Answer: C
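A short sketch of the hinted calculation, using only the standard library: each component survives past time $y$ with probability $e^{-(y/10)^2}$, so for the device $G(y)=1-e^{-3(y/10)^2}$.

```python
import math

y = 1.0                                   # first hour of operation
G = 1 - math.exp(-3 * (y / 10) ** 2)      # G(y) = 1 - P(all three components > y)
print(round(G, 4))                        # 0.0296, i.e. option C (0.03)
```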
5.6-13. The tensile strength $X$ of paper, in pounds per square inch, has $\mu=30$ and $\sigma=3$. A random sample of size $n=100$ is taken from the distribution of tensile strengths. Compute the probability that the sample mean $\bar{X}$ is greater than 29.5 pounds per square inch.
thumb|300px|Probability density of stress S (red, top) and resistance R (blue, top), and of equality (m = R - S = 0, black, bottom). thumb|300px|Distribution of stress S and strength R: all the (R, S) situations have a probability density (grey level surface). The "ISO 534:2011, Paper and board — Determination of thickness, density and specific volume" indicates that the paper density is expressed in grams per cubic centimeter (g/cm3). ==See also== * Grammage * Density ** Area density ** Linear density * ==References== ==External links== * Paper Weight – Conversion Chart * Understanding Paper Weights * Understanding paper weight (Staples, Inc.) * M-weight Calculator * Paper Weight Calculator Category:Paper Category:Printing Consequently, : \Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95\, and we have a theoretical (stochastic) 95% confidence interval for μ. If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. thumb|Weighing scale to determine paper weight Paper density is a paper product's mass per unit volume. thumb|upright=1.3|Each row of points is a sample from the same normal distribution. :Human hair strength varies by ethnicity and chemical treatments. == Typical properties of annealed elements == Typical properties for annealed elementsA.M. Howatson, P. G. Lund, and J. D. Todd, Engineering Tables and Data, p. 41 Element Young's modulus (GPa) Yield strength (MPa) Ultimate strength (MPa) Silicon 107 5000–9000 Tungsten 411 550 550–620 Iron 211 80–100 350 Titanium 120 100–225 246–370 Copper 130 117 210 Tantalum 186 180 200 Tin 47 9–14 15–200 Zinc 85–105 200–400 200–400 Nickel 170 140–350 140–195 Silver 83 170 Gold 79 100 Aluminium 70 15–20 40–50 Lead 16 12 ==See also== *Flexural strength *Strength of materials *Tensile structure *Toughness *Failure *Tension (physics) *Young's modulus ==References== ==Further reading== *Giancoli, Douglas, Physics for Scientists & Engineers Third Edition (2000). Bond paper is a high-quality durable writing paper similar to bank paper but having a weight greater than 50 g/m2. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. The ultimate tensile strength is a common engineering parameter to design members made of brittle material because such materials have no yield point. ==Testing== thumb|Round bar specimen after tensile stress testing Typically, the testing involves taking a small sample with a fixed cross-sectional area, and then pulling it with a tensometer at a constant strain (change in gauge length divided by initial gauge length) rate until the sample breaks. Then, denoting c as the 97.5th percentile of this distribution, : \Pr(-c\le T \le c)=0.95 Note that "97.5th" and "0.95" are correct in the preceding expressions. For a large number of independent identically distributed random variables \ X_1, ..., X_n\ , with finite variance, the average \ \overline{X}_n\ approximately has a normal distribution, no matter what the distribution of the \ X_i\ is, with the approximation roughly improving in proportion to \ \sqrt{n\ }. == Example == Suppose {X1, …, Xn} is an independent sample from a normally distributed population with unknown parameters mean μ and variance σ2. 
Suppose we wanted to calculate a 95% confidence interval for μ. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... They are tabulated for common materials such as alloys, composite materials, ceramics, plastics, and wood. == Definition == The ultimate tensile strength of a material is an intensive property; therefore its value does not depend on the size of the test specimen. Tensile strength is defined as a stress, which is measured as force per unit area. Environmental stresses have a distribution with a mean \left(\mu_x\right) and a standard deviation \left(s_x\right) and component strengths have a distribution with a mean \left(\mu_y\right) and a standard deviation \left(s_y\right). This important relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers.E.J. Pavlina and C.J. Van Tyne, "Correlation of Yield Strength and Tensile Strength with Hardness for Steels", Journal of Materials Engineering and Performance, 17:6 (December 2008) This practical correlation helps quality assurance in metalworking industries to extend well beyond the laboratory and universal testing machines. ==Typical tensile strengths== Typical tensile strengths of some materials Material Yield strength (MPa) Ultimate tensile strength (MPa) Density (g/cm3) Steel, structural ASTM A36 steel 250 400–550 7.8 Steel, 1090 mild 247 841 7.58 Chromium-vanadium steel AISI 6150 620 940 7.8 Steel, 2800 Maraging steel 2617 2693 8.00 Steel, AerMet 340 2160 2430 7.86 Steel, Sandvik Sanicro 36Mo logging cable precision wire 1758 2070 8.00 Steel, AISI 4130, water quenched 855 °C (1570 °F), 480 °C (900 °F) temper 951 1110 7.85 Steel, API 5L X65 448 531 7.8 Steel, high strength alloy ASTM A514 690 760 7.8 Acrylic, clear cast sheet (PMMA) IAPD Typical Properties of Acrylics 72 87strictly speaking this figure is the flexural strength (or modulus of rupture), which is a more appropriate measure for brittle materials than "ultimate strength." The ultimate tensile strength is usually found by performing a tensile test and recording the engineering stress versus strain. The density depends on the manufacturing method, and the lowest value is 0.037 or 0.55 (solid). The density can be calculated by dividing the grammage of paper (in grams per square metre or "gsm") by its caliper (usually in micrometres, occasionally in mils).
A. 0.6247   B. 0.166666666   C. -1.78   D. 0.9522   E. 6.3
Answer: D
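A minimal check of option D via the central limit theorem; this sketch assumes SciPy is available and the variable names are illustrative:

```python
from scipy.stats import norm

mu, sigma, n = 30, 3, 100
se = sigma / n ** 0.5                     # standard error of the sample mean = 0.3
p = 1 - norm.cdf((29.5 - mu) / se)        # P(Xbar > 29.5) = P(Z > -1.67)
print(round(p, 4))                        # ~0.9522, matching option D
```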
5.6-3. Let $\bar{X}$ be the mean of a random sample of size 36 from an exponential distribution with mean 3. Approximate $P(2.5 \leq \bar{X} \leq 4)$.
The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. In mathematical notation, these facts can be expressed as follows, where is the probability function, is an observation from a normally distributed random variable, (mu) is the mean of the distribution, and (sigma) is its standard deviation: \begin{align} \Pr(\mu-1\sigma \le X \le \mu+1\sigma) & \approx 68.27\% \\\ \Pr(\mu-2\sigma \le X \le \mu+2\sigma) & \approx 95.45\% \\\ \Pr(\mu-3\sigma \le X \le \mu+3\sigma) & \approx 99.73\% \end{align} The usefulness of this heuristic especially depends on the question under consideration. To compute the probability that an observation is within two standard deviations of the mean (small differences due to rounding): \Pr(\mu-2\sigma \le X \le \mu+2\sigma) = \Phi(2) - \Phi(-2) \approx 0.9772 - (1 - 0.9772) \approx 0.9545 This is related to confidence interval as used in statistics: \bar{X} \pm 2\frac{\sigma}{\sqrt{n}} is approximately a 95% confidence interval when \bar{X} is the average of a sample of size n. ==Normality tests== The "68–95–99.7 rule" is often used to quickly get a rough probability estimate of something, given its standard deviation, if the population is assumed to be normal. The assumed mean is the centre of the range from 174 to 177 which is 175.5. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... There are other rapid calculation methods which are more suited for computers which also ensure more accurate results than the obvious methods. ==Example== First: The mean of the following numbers is sought: : 219, 223, 226, 228, 231, 234, 235, 236, 240, 241, 244, 247, 249, 255, 262 Suppose we start with a plausible initial guess that the mean is about 240. Therefore, that is what we need to add to the assumed mean to get the correct mean: : correct mean = 240 − 2 = 238. ==Method== The method depends on estimating the mean and rounding to an easy value to calculate with. The average of these 15 deviations from the assumed mean is therefore −30/15 = −2\. We only need to calculate each integral for the cases n = 1,2,3. \begin{align} &\Pr(\mu -1\sigma \leq X \leq \mu + 1\sigma) = \frac{1}{\sqrt{2\pi}} \int_{-1}^{1} e^{-\frac{u^2}{2}}du \approx 0.6827 \\\ &\Pr(\mu -2\sigma \leq X \leq \mu + 2\sigma) =\frac{1}{\sqrt{2\pi}}\int_{-2}^{2} e^{-\frac{u^2}{2}}du \approx 0.9545 \\\ &\Pr(\mu -3\sigma \leq X \leq \mu + 3\sigma) = \frac{1}{\sqrt{2\pi}}\int_{-3}^{3} e^{-\frac{u^2}{2}}du \approx 0.9973. 
\end{align} ==Cumulative distribution function== These numerical values "68%, 95%, 99.7%" come from the cumulative distribution function of the normal distribution. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. In the empirical sciences, the so-called three-sigma rule of thumb (or 3 rule) expresses a conventional heuristic that nearly all values are taken to lie within three standard deviations of the mean, and thus it is empirically useful to treat 99.7% probability as near certainty.This usage of "three-sigma rule" entered common usage in the 2000s, e.g. cited in * * In the social sciences, a result may be considered "significant" if its confidence level is of the order of a two-sigma effect (95%), while in particle physics, there is a convention of a five-sigma effect (99.99994% confidence) being required to qualify as a discovery. In statistics, the 68–95–99.7 rule, also known as the empirical rule, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively. For a data set with assumed mean x0 suppose: :d_i=x_i-x_0 \, :A = \sum_{i=1}^N d_i \, :B = \sum_{i=1}^N d_i^2 \, :D = \frac{A}{N} \, Then :\overline{x} = x_0 + D \, :\sigma = \sqrt{\frac{B - N D^2}{N}} \, or for a sample standard deviation using Bessel's correction: :\sigma = \sqrt{\frac{ B - N D^2}{N-1}} \, ==Example using class ranges== Where there are a large number of samples a quick reasonable estimate of the mean and standard deviation can be got by grouping the samples into classes using equal size ranges. The probable error can also be expressed as a multiple of the standard deviation σ,Zwillinger, D.; Kokosa, S. (2000) CRC Standard Probability and Statistics Tables and Formulae, Chapman & Hall/CRC. In statistical hypothesis testing, the error exponent of a hypothesis testing procedure is the rate at which the probabilities of Type I and Type II decay exponentially with the size of the sample used in the test. Then the deviations from this "assumed" mean are the following: :−21, −17, −14, −12, −9, −6, −5, −4, 0, 1, 4, 7, 9, 15, 22 In adding these up, one finds that: : 22 and −21 almost cancel, leaving +1, : 15 and −17 almost cancel, leaving −2, : 9 and −9 cancel, : 7 + 4 cancels −6 − 5, and so on. Some people even use the value of 2 in the place of 1.96, reporting a 95.4% confidence interval as a 95% confidence interval. A weaker three-sigma rule can be derived from Chebyshev's inequality, stating that even for non-normally distributed variables, at least 88.8% of cases should fall within properly calculated three-sigma intervals. The standard deviation is estimated as :CS \sqrt{\frac{B-\frac{A^2}{N}}{N-1}}=5.57 ==References== Category:Means In statistics the assumed mean is a method for calculating the arithmetic mean and standard deviation of a data set.
A. 0.70710678   B. 0.8185   C. 1.4   D. 0.9974   E. 1.61
Answer: B
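A sketch of the normal approximation behind option B (an exponential distribution with mean 3 also has standard deviation 3); assumes SciPy, and the variable names are illustrative:

```python
from scipy.stats import norm

mu, sigma, n = 3, 3, 36
se = sigma / n ** 0.5                                     # 0.5
p = norm.cdf((4 - mu) / se) - norm.cdf((2.5 - mu) / se)   # Phi(2) - Phi(-1)
print(round(p, 4))                                        # ~0.8186; table value 0.8185 (option B)
```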
5.3-9. Let $X_1, X_2$ be a random sample of size $n=2$ from a distribution with pdf $f(x)=3 x^2, 0 < x < 1$. Determine (a) $P\left(\max X_i < 3 / 4\right)=P\left(X_1<3 / 4, X_2<3 / 4\right)$
Thus if one has a sample \\{X_1,\dots,X_n\\}, and one picks another observation X_{n+1}, then this has 1/(n+1) probability of being the largest value seen so far, 1/(n+1) probability of being the smallest value seen so far, and thus the other (n-1)/(n+1) of the time, X_{n+1} falls between the sample maximum and sample minimum of \\{X_1,\dots,X_n\\}. * Generalized extreme value distribution, possible limit distributions of sample maximum (opposite question). Thus the sampling distribution of the quantile of the sample maximum is the graph x1/k from 0 to 1: the p-th to q-th quantile of the sample maximum m are the interval [p1/kN, q1/kN]. The minimum and the maximum value are the first and last order statistics (often denoted X(1) and X(n) respectively, for a sample size of n). In statistics, the sample maximum and sample minimum, also called the largest observation and smallest observation, are the values of the greatest and least elements of a sample. A related bound is Edelman's : P\left( \left| \sum_{ i = 1 }^n a_i X_i \right| \ge k \right) \le 2 \left( 1 - \Phi\left[ k - \frac{ 1.5 }{ k } \right] \right) = 2 B_{ Ed }( k ) , where Φ(x) is cumulative distribution function of the standard normal distribution. A derivation of the expected value and the variance of the sample maximum are shown in the page of the discrete uniform distribution. Ann Probab 22(4):1679–1706 : P( S_n \ge x ) \le \frac{ 2e^3 }{ 9 } P( Z \ge x ) The constant in the last inequality is approximately 4.4634. Eaton showed that : P\left( \left| \sum_{ i = 1 }^n a_i X_i \right| \ge k \right) \le 2 \inf_{ 0 \le c \le k } \int_c^\infty \left( \frac{ z - c }{ k - c } \right)^3 \phi( z ) \, dz = 2 B_E( k ) , where φ(x) is the probability density function of the standard normal distribution. A smooth maximum, for example, : g(x1, x2, …, xn) = log( exp(x1) + exp(x2) + … + exp(xn) ) is a good approximation of the sample maximum. ===Summary statistics=== The sample maximum and minimum are basic summary statistics, showing the most extreme observations, and are used in the five-number summary and a version of the seven-number summary and the associated box plot. ===Prediction interval=== The sample maximum and minimum provide a non- parametric prediction interval: in a sample from a population, or more generally an exchangeable sequence of random variables, each observation is equally likely to be the maximum or minimum. In probability theory, Eaton's inequality is a bound on the largest values of a linear combination of bounded random variables. * The range shrinks rapidly, reflecting the exponentially decaying probability that all observations in the sample will be significantly below the maximum. If the sample has outliers, they necessarily include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. Likewise, n = 39 gives a 95% prediction interval, and n = 199 gives a 99% prediction interval. ===Estimation=== Due to their sensitivity to outliers, the sample extrema cannot reliably be used as estimators unless data is clean – robust alternatives include the first and last deciles. This did not count as an official maximum, however, as the break was made on a non- templated table used during the event. Annals of Statistics 2(3) 609–614 ==Statement of the inequality== Let {Xi} be a set of real independent random variables, each with an expected value of zero and bounded above by 1 ( |Xi | ≤ 1, for 1 ≤ i ≤ n). 
* The confidence interval exhibits positive skew, as N can never be below the sample maximum, but can potentially be arbitrarily high above it. Inverting this yields the corresponding confidence interval for the population maximum of [m/q1/k, m/p1/k]. J Amer Statist Assoc 58: 13–30 MR144363 Let : S_n = a_i b_i + \cdots + a_n b_n ThenPinelis I (1994) Optimum bounds for the distributions of martingales in Banach spaces. If only the top endpoint is unknown, the sample maximum is a biased estimator for the population maximum, but the unbiased estimator \frac{k+1}{k}m - 1 (where m is the sample maximum and k is the sample size) is the UMVU estimator; see German tank problem for details. The largest sample serial number is m. In the statistical theory of estimation, the German tank problem consists of estimating the maximum of a discrete uniform distribution from sampling without replacement.
A. 234.4   B. -17   C. 0.178   D. 399   E. 15
Answer: C
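A one-line check of option C, using the fact that each observation has CDF $F(x)=x^3$ on $(0,1)$ and the two observations are independent:

```python
p = ((3 / 4) ** 3) ** 2      # P(max(X1, X2) < 3/4) = F(3/4)^2 = (27/64)^2
print(round(p, 3))           # 0.178, matching option C
```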
7.4-1. Let $X$ equal the tarsus length for a male grackle. Assume that the distribution of $X$ is $N(\mu, 4.84)$. Find the sample size $n$ that is needed so that we are $95 \%$ confident that the maximum error of the estimate of $\mu$ is 0.4.
Since the sample error can often be estimated beforehand as a function of the sample size, various methods of sample size determination are used to weigh the predicted accuracy of an estimator against the predicted cost of taking a larger sample. ===Bootstrapping and Standard Error=== As discussed, a sample statistic, such as an average or percentage, will generally be subject to sample-to-sample variation. The likely size of the sampling error can generally be reduced by taking a larger sample. ===Sample Size Determination=== The cost of increasing a sample size may be prohibitive in reality. thumb|350px|Estimation of distribution algorithm. thumb|right|A specimen sheet for the regular weight of Normal-Grotesk. Since sampling is almost always done to estimate population parameters that are unknown, by definition exact measurement of the sampling errors will not be possible; however they can often be estimated, either by general methods such as bootstrapping, or by specific methods incorporating some assumptions (or guesses) regarding the true population distribution and parameters thereof. ==Description== ===Sampling Error=== The sampling error is the error caused by observing a sample instead of the whole population. Franklin's gull (Leucophaeus pipixcan) is a small (length 12.6–14.2 in, 32–36 cm) gull. This is a source of genetic drift, as certain alleles become more or less common), and has been referred to as "sampling error", despite not being an "error" in the statistical sense. ==See also== * Margin of error * Propagation of uncertainty * Ratio estimator * Sampling (statistics) ==References== Category:Sampling (statistics) Category:Errors and residuals Category:Auditing terms By comparing many samples, or splitting a larger sample up into smaller ones (potentially with overlap), the spread of the resulting sample statistics can be used to estimate the standard error on the sample. ==In Genetics== The term "sampling error" has also been used in a related but fundamentally different sense in the field of genetics; for example in the bottleneck effect or founder effect, when natural disasters or migrations dramatically reduce the size of a population, resulting in a smaller population that may or may not fairly represent the original one. The difference between the sample statistic and population parameter is considered the sampling error.Sarndal, Swenson, and Wretman (1992), Model Assisted Survey Sampling, Springer-Verlag, For example, if one measures the height of a thousand individuals from a population of one million, the average height of the thousand is typically not the same as the average height of all one million people in the country. thumb|A circle of radius 5 centered at the origin has area 25, approximately 78.54, but it contains 81 integer points, so the error in estimating its area by counting grid points is approximately 2.46. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter. ===Effective Sampling=== In statistics, a truly random sample means selecting individuals from a population with an equivalent probability; in other words, picking individuals from a group without bias. For a circle with slightly smaller radius, the area is nearly the same, but the circle contains only 69 points, producing a larger error of approximately 9.54. 
In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. This number is approximated by the area of the circle, so the real problem is to accurately bound the error term describing how the number of points differs from the area. The Gauss circle problem concerns bounding this error more generally, as a function of the radius of the circle. - P. 288, nota 35. The sampling (following a normal distribution N) concentrates around the optimum as one goes along unwinding algorithm. Even in a perfectly non-biased sample, the sample error will still exist due to the remaining statistical component; consider that measuring only two or three individuals and taking the average would produce a wildly varying result each time. At each generation, \mu individuals are sampled and \lambda\leq \mu are selected. Measurements: * Length: 12.6-14.2 in (32-36 cm) * Weight: 8.1-10.6 oz (230-300 g) * Wingspan: 33.5-37.4 in (85-95 cm) Although the bird is uncommon on the coasts of North America, it occurs as a rare vagrant to northwest Europe, south and west Africa, Australia and Japan, with a single record from Eilat, Israel, in 2011 (Smith 2011), and a single record from Larnaca, Cyprus, July 2006. Without assuming the Riemann hypothesis, the best known upper bound is :V(r)=\frac{6}{\pi}r^2+O(r\exp(-c(\log r)^{3/5}(\log\log r^2)^{-1/5})) for a positive constant c. At the beginning of 2017 has been observed also in Southern Romania, southeast Europe. ===Behaviour=== They are omnivores like most gulls, and they will scavenge as well as seeking suitable small prey.
A. +11   B. 117   C. -22.1   D. -501   E. 260
Answer: B
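A sketch of the sample-size formula $n = (z_{0.025}\,\sigma/\varepsilon)^2$ behind option B; assumes SciPy for the normal quantile:

```python
import math
from scipy.stats import norm

sigma = math.sqrt(4.84)          # 2.2
eps = 0.4                        # maximum error of the estimate
z = norm.ppf(0.975)              # ~1.96 for 95% confidence
n = math.ceil((z * sigma / eps) ** 2)
print(n)                         # 117, matching option B
```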
5.4-17. In a study concerning a new treatment of a certain disease, two groups of 25 participants each were followed for five years. Those in one group took the old treatment and those in the other took the new treatment. The theoretical dropout rate for an individual was $50 \%$ in both groups over that 5-year period. Let $X$ be the number that dropped out in the first group and $Y$ the number in the second group. Assuming independence where needed, give the sum that equals the probability that $Y \geq X+2$. HINT: What is the distribution of $Y-X+25$?
* The event dropout rate estimates the percentage of high school students who left high school between the beginning of one school year and the beginning of the next without earning a high school diploma or its equivalent (e.g., a GED). * The status dropout rate reports the percentage of individuals in a given age range who are not in school and have not earned a high school diploma or equivalent credential. It is estimated 1.2 million students annually drop out of high school in the United States, where high school graduation rates rank 19th in the world.High School Dropouts - Do Something. In 2010 the dropout rates of 16- through 24-year- olds who are not enrolled in school and have not earned a high school credential were: 5.1% for white students, 8% for black students, 15.1% for Hispanic students, and 4.2% for Asian students. ===Academic risk factors=== Academic risk factors refer to the students' performance in school and are highly related to school level problems. "The Influence of Selected Academic, Demographic and Instructional Program Related Factors on High School Student Dropout Rates". Large schools, enrolling between 1,500 and 2,500 students, were found to have the largest proportion of students who dropped out, 12%. Event rates can be used to track annual changes in the dropout behavior of students in the U.S. school system. This percentage jumps to 38% in adolescents aged 15 to 17 years who also provided this reason for their disengagement with the education system. ==Dropout recovery== A "dropout recovery" initiative is any community, government, non-profit or business program in which students who have previously left school are sought out for the purpose of re-enrollment. Using this tool, assessing educational attainment and school attendance can calculate a dropout rate (Gilmore, 2010). A study by Battin- Pearson found that these two factors did not contribute significantly to dropout beyond what was explained by poor academic achievement. ==Motivation for dropping out== While the above factors certainly place a student at risk for dropout, they are not always the reason the student identifies as their motivation for dropping out. Although since 1990 dropout rates have gone down from 20% to a low of 9% in 2010, the rate does not seem to be dropping since this time (2010). A dropout is a momentary loss of signal in a communications system, usually caused by noise, propagation anomalies, or system malfunctions. There has been contention over the influence of ethnicity on dropout rates. Because of these factors, an average high school dropout will cost the government over $292,000. ==Measurement of the dropout rate== The U.S. Department of Education identifies four different rates to measure high school dropout and completion in the United States. Grade retention can increase the odds of dropping out by as much as 250 percent above those of similar students who were not retained.McNeil 2008 Students who drop out typically have a history of absenteeism, grade retention and academic trouble and are more disengaged from school life. The study also found that men still have higher dropout rates than women, and that students outside of major cities and in the northern territories also have a higher risk of dropping out. Allowing students to interact with support dogs and their owners allowed students to feel connected to their peers, school and school community (Binfet et al., 2016., & Binfet et al., 2018). 
==United Kingdom== In the United Kingdom, a dropout is anyone who leaves school, college or university without either completing their course of study or transferring to another educational institution. The United States Department of Education's measurement of the status dropout rate is the percentage of 16-24-year-olds who are not enrolled in school and have not earned a high school credential.NCES 2011 This rate is different from the event dropout rate and related measures of the status completion and average freshman completion rates.NCES 2009 The status high school dropout rate in 2009 was 8.1%. The average Canadian dropout earns $70 less per week than their peers with a high school diploma. Graduates (without post-secondary) earned an average of $621 per week, whereas dropout students earned an average of $551 (Gilmore, 2010). The United States Department of Education's measurement of the status dropout rate is the percentage of 16 to 24-year-olds who are not enrolled in school and have not earned a high school credential.NCES 2011 This rate is different from the event dropout rate and related measures of the status completion and average freshman completion rates.NCES 2009 The status high school dropout rate in 2009 was 8.1%. As such, this theory examines the relationship between family background and dropout rates.
A. 4.8   B. 0.3359   C. 54.394   D. 1590   E. 0.118
Answer: B
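Following the hint, $W = Y - X + 25$ is $b(50, 0.5)$ and $P(Y \geq X+2) = P(W \geq 27)$. A sketch of that sum, assuming SciPy:

```python
from scipy.stats import binom

p = 1 - binom.cdf(26, 50, 0.5)   # sum_{w=27}^{50} C(50, w) (1/2)^50
print(round(p, 4))               # ~0.3359, matching option B
```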
9.6-111. Let $X$ and $Y$ have a bivariate normal distribution with correlation coefficient $\rho$. To test $H_0: \rho=0$ against $H_1: \rho \neq 0$, a random sample of $n$ pairs of observations is selected. Suppose that the sample correlation coefficient is $r=0.68$. Using a significance level of $\alpha=0.05$, find the smallest value of the sample size $n$ so that $H_0$ is rejected.
For example, if we were expecting a population correlation between intelligence and job performance of around 0.50, a sample size of 20 will give us approximately 80% power ( = 0.05, two-tail) to reject the null hypothesis of zero correlation. Furthermore, assume that the null hypothesis will be rejected at the significance level of \alpha = 0.05\,. If rrb is calculated as above then the smaller of : (1+r_{rb})\frac{n_1n_0}{2} and : (1-r_{rb})\frac{n_1n_0}{2} is distributed as Mann–Whitney U with sample sizes n1 and n0 when the null hypothesis is true. == Notes == * MacCallum Robert C. et all Psychological Methods. 2002, Vol. 7, N°1, 49-40 ==References== ==External links== *Point Biserial Coefficient (Keith Calkins, 2005) Category:Correlation indicators It remains the case that very small values are relatively unlikely if the null-hypothesis is true, and that a significance test at level \alpha is obtained by rejecting the null-hypothesis if the significance level is less than or equal to \alpha. For a simple hypothesis, :\alpha = P(\text{test rejects } H_0 \mid H_0). The minimum (infimum) value of the power is equal to the confidence level of the test, \alpha, in this example 0.05. In the case of a composite null hypothesis, the size is the supremum over all data generating processes that satisfy the null hypotheses. :\alpha = \sup_{h\in H_0} P(\text{test rejects } H_0 \mid h). If the criterion is 0.05, the probability of the data implying an effect at least as large as the observed effect when the null hypothesis is true must be less than 0.05, for the null hypothesis of no effect to be rejected. If it is desirable to have enough power, say at least 0.90, to detect values of \theta > 1, the required sample size can be calculated approximately: B(1) \approx 1 - \Phi \left (1.64-\frac{\sqrt{n}}{\hat{\sigma}_D}\right) >0.90, from which it follows that \Phi \left( 1.64 - \frac{\sqrt{n}}{\hat{\sigma}_D} \right) < 0.10\,. It is possible to use this to test the null hypothesis of zero correlation in the population from which the sample was drawn. A little algebra shows that the usual formula for assessing the significance of a correlation coefficient, when applied to rpb, is the same as the formula for an unpaired t-test and so : r_{pb} \sqrt{ \frac{n_1+n_0-2}{1-r_{pb}^2}} follows Student's t-distribution with (n1+n0 − 2) degrees of freedom when the null hypothesis is true. However, in doing this study we are probably more interested in knowing whether the correlation is 0.30 or 0.60 or 0.50. If we set the significance level alpha to 0.05, and only reject the null hypothesis if the p-value is less than or equal to 0.05, then our hypothesis test will indeed have significance level (maximal type 1 error rate) 0.05. A test is said to have significance level \alpha if its size is less than or equal to \alpha . In the above example: * Null hypothesis (H0): The coin is fair, with Pr(heads) = 0.5 * Test statistic: Number of heads * Alpha level (designated threshold of significance): 0.05 * Observation O: 14 heads out of 20 flips; and * Two-tailed p-value of observation O given H0 = 2 × min(Pr(no. of heads ≥ 14 heads), Pr(no. of heads ≤ 14 heads)) = 2 × min(0.058, 0.978) = 2*0.058 = 0.115. Correlation (iii) is : r_{upb}=\frac{M_1-M_0-1}{\sqrt{\frac{n^2s_n^2}{n_1n_0}-2(M_1-M_0)+1}}. 
If the distribution of T is symmetric about zero, then p =\Pr(|T| \geq |t| \mid H_0) === Interpretations === ==== p-value as the statistic for performing significance tests ==== In a significance test, the null hypothesis H_0 is rejected if the p-value is less than or equal to a predefined threshold value \alpha, which is referred to as the alpha level or significance level. \alpha is not derived from the data, but rather is set by the researcher before examining the data. \alpha is commonly set to 0.05, though lower alpha levels are sometimes used. We can test the null hypothesis that the correlation is zero in the population. Some factors may be particular to a specific testing situation, but at a minimum, power nearly always depends on the following three factors: * the statistical significance criterion used in the test * the magnitude of the effect of interest in the population * the sample size used to detect the effect A significance criterion is a statement of how unlikely a positive result must be, if the null hypothesis of no effect is true, for the null hypothesis to be rejected. Hence, the null hypothesis is not rejected at the .05 level. The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. Thus, \text{power} = \Pr \big( \text{reject } H_0 \mid H_1 \text{ is true} \big).
A. 9   B. 5300   C. 2.19   D. 0.59   E. -11.2
Answer: A
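Under $H_0:\rho=0$, the statistic $r\sqrt{n-2}/\sqrt{1-r^2}$ has a $t(n-2)$ distribution, so the smallest rejecting $n$ can be found by scanning upward. A sketch assuming SciPy; variable names are illustrative:

```python
from scipy.stats import t

r, alpha, n = 0.68, 0.05, 4
# increase n until the observed statistic exceeds the two-sided critical value
while r * (n - 2) ** 0.5 / (1 - r ** 2) ** 0.5 < t.ppf(1 - alpha / 2, n - 2):
    n += 1
print(n)   # 9, matching option A
```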
7.3-5. In order to estimate the proportion, $p$, of a large class of college freshmen that had high school GPAs from 3.2 to 3.6, inclusive, a sample of $n=50$ students was taken. It was found that $y=9$ students fell into this interval. (a) Give a point estimate of $p$.
* The averaged freshman graduation rate estimates the proportion of public high school freshmen who graduate with a regular diploma four years after starting ninth grade. Large schools, enrolling between 1,500 and 2,500 students, were found to have the largest proportion of students who dropped out, 12%. The rate focuses on public high school students as opposed to all high school students or the general population and is designed to provide an estimate of on-time graduation from high school. "The Influence of Selected Academic, Demographic and Instructional Program Related Factors on High School Student Dropout Rates". "The Impact of High School Size on Math Achievement and Dropout Rate". More precisely, the tertiary enrollment rate is the percentage of total enrollment, regardless of age, in post-secondary institutions to the population of people within five years of the age at which students normally graduate high school. ==Rankings== 1 United States: 72.6% 2 Finland: 70.4% 3 Norway: 70% 3 Sweden: 70% 5 New Zealand: 69.2% 6 Russia: 64.1% 7 Australia: 63.3% 8 Latvia: 63.1% 9 Slovenia: 60.5% 10 Canada: 60% ==References== Category:Higher education Tertiary enrollment rates are an expression of the percentage of high school graduates that successfully enroll into university. This is a list of States and Union Territories of India ranked according to Gross Enrollment Ratio (GER) of students in Classes I to VIII (6–13 yrs). * The event dropout rate estimates the percentage of high school students who left high school between the beginning of one school year and the beginning of the next without earning a high school diploma or its equivalent (e.g., a GED). In 2010 the dropout rates of 16- through 24-year- olds who are not enrolled in school and have not earned a high school credential were: 5.1% for white students, 8% for black students, 15.1% for Hispanic students, and 4.2% for Asian students. ===Academic risk factors=== Academic risk factors refer to the students' performance in school and are highly related to school level problems. While in the U.S. highly competitive students have A grades, in Chile these same students tend to average 6,8 , 6,9 or 7,0, all of which are considered near perfect grades. Graduating students from high school who are not prepared for college, however, also generates problems, as the college dropout rate exceeds the high school rate. The list is compiled from the Statistics of School Education- 2010–11 Report by Ministry of HRD, Government of India. == List == Gross enrolment ratio (GER) is a statistical measure used in the education sector and by the UN in its Education Index to determine the number of students enrolled in school at several different grade levels (like elementary, middle school and high school), and examine it to analyze the ratio of the number of students who live in that country to those who qualify for the particular grade level. The grade point average (GPA) in Chile ranges from 1.0 up to 7.0 (with one decimal place). "Predictors of Early High School Dropout: A Test of Five Theories". “Exceptions to High School Dropout Predictions in a Low-Income Sample: Do Adults Make a Difference?” Thus, it provides a measure of the extent to which public high schools are graduating students within the expected period of four years. ==Notable dropouts== * Don Adams (1923–2005), actor; dropped out of DeWitt Clinton High SchoolSmith, Austin. 
Table of Chilean GPA GPA % Achievement Meaning Honours 6.0 - 7.0 83% - 100% Outstanding (7.0) Highest Honours 5.0 - 5.9 66% - 82% Good Honours 4.0 - 4.9 50% - 65% Sufficient Passed 3.0 - 3.9 33% - 49% Less than Sufficient Failed 2.0 - 2.9 16% - 32% Deficient Failed 1.0 - 1.9 0% - 15% Very Deficient Failed ==References== Chile Grading Grading An overall GPA in university degrees that ranges from 5.5 to 5.9 is uncommon and is considered a "very good" academic standing. Students who attended schools that offered Calculus or fewer courses below the level of Algebra 1 had a reduced risk of dropping out of school by 56%. The United States Department of Education's measurement of the status dropout rate is the percentage of 16 to 24-year-olds who are not enrolled in school and have not earned a high school credential.NCES 2011 This rate is different from the event dropout rate and related measures of the status completion and average freshman completion rates.NCES 2009 The status high school dropout rate in 2009 was 8.1%. This rate focuses on an overall age group as opposed to individuals in the U.S. school system, so it can be used to study general population issues.
0.011
0.1800
6.0
2.89
-4564.7
B
7.4-15. If $\bar{X}$ and $\bar{Y}$ are the respective means of two independent random samples of the same size $n$, find $n$ if we want $\bar{x}-\bar{y} \pm 4$ to be a $90 \%$ confidence interval for $\mu_X-\mu_Y$. Assume that the standard deviations are known to be $\sigma_X=15$ and $\sigma_Y=25$.
The mean value calculated from the sample, \bar{x}, will have an associated standard error on the mean, {\sigma}_\bar{x}, given by: :{\sigma}_\bar{x}\ = \frac{\sigma}{\sqrt{n}}. The following expressions can be used to calculate the upper and lower 95% confidence limits, where \bar{x} is equal to the sample mean, \operatorname{SE} is equal to the standard error for the sample mean, and 1.96 is the approximate value of the 97.5 percentile point of the normal distribution: :Upper 95% limit = \bar{x} + (\operatorname{SE}\times 1.96) , and :Lower 95% limit = \bar{x} - (\operatorname{SE}\times 1.96) . Consequently, : \Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95\, and we have a theoretical (stochastic) 95% confidence interval for μ. Therefore, the standard error of the mean is usually estimated by replacing \sigma with the sample standard deviation \sigma_{x} instead: :{\sigma}_\bar{x}\ \approx \frac{\sigma_{x}}{\sqrt{n}}. The variance of the mean is then :\operatorname{Var}(\bar{x}) = \operatorname{Var}\left(\frac{T}{n}\right) = \frac{1}{n^2}\operatorname{Var}(T) = \frac{1}{n^2}n\sigma^2 = \frac{\sigma^2}{n}. The standard error is, by definition, the standard deviation of \bar{x} which is simply the square root of the variance: :\sigma_{\bar{x}} = \sqrt{\frac{\sigma^2}{n}} = \frac{\sigma}{\sqrt{n}} . Suppose we wanted to calculate a 95% confidence interval for μ. The mean of these measurements \bar{x} is simply given by :\bar{x} = T/n . After observing the sample we find values for and s for S, from which we compute the confidence interval : \left[ \bar{x} - \frac{cs}{\sqrt{n}}, \bar{x} + \frac{cs}{\sqrt{n}} \right]. == Interpretation == Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following). The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. If the sampling distribution is normally distributed, the sample mean, the standard error, and the quantiles of the normal distribution can be used to calculate confidence intervals for the true population mean. For a large number of independent identically distributed random variables \ X_1, ..., X_n\ , with finite variance, the average \ \overline{X}_n\ approximately has a normal distribution, no matter what the distribution of the \ X_i\ is, with the approximation roughly improving in proportion to \ \sqrt{n\ }. == Example == Suppose {X1, …, Xn} is an independent sample from a normally distributed population with unknown parameters mean μ and variance σ2. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. Practically this tells us that when trying to estimate the value of a population mean, due to the factor 1/\sqrt{n}, reducing the error on the estimate by a factor of two requires acquiring four times as many observations in the sample; reducing it by a factor of ten requires a hundred times as many observations. === Estimate === The standard deviation \sigma of the population being sampled is seldom known. 
Hence the estimator of \operatorname{Var}(T) becomes nS^2_X + n\bar{X}^2, leading the following formula for standard error: :\operatorname{Standard~Error}(\bar{X})= \sqrt{\frac{S^2_X + \bar{X}^2}{n}} (since the standard deviation is the square root of the variance) ==Student approximation when σ value is unknown== In many practical applications, the true value of σ is unknown. As this is only an estimator for the true "standard error", it is common to see other notations here such as: :\widehat{\sigma}_{\bar{x}} \approx \frac{\sigma_{x}}{\sqrt{n}} or alternately {s}_\bar{x}\ \approx \frac{s}{\sqrt{n}}. Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. To estimate the standard error of a Student t-distribution it is sufficient to use the sample standard deviation "s" instead of σ, and we could use this value to calculate confidence intervals. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. Some people even use the value of 2 in the place of 1.96, reporting a 95.4% confidence interval as a 95% confidence interval. * The confidence interval can be expressed in terms of statistical significance, e.g.:
0.321
2
4.16
435
144
E
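A quick numerical check of 7.4-15 above; this is a sketch of the standard calculation rather than the textbook's worked solution, and it assumes the tabled critical value $z_{0.05} \approx 1.645$.

```python
import math

# 7.4-15: choose n so that xbar - ybar +/- 4 is a 90% confidence interval for mu_X - mu_Y.
sigma_x, sigma_y = 15.0, 25.0
epsilon = 4.0                 # desired half-width of the interval
z = 1.645                     # z_{0.05}, upper 5% point of N(0, 1) (table value)

# Half-width is z * sqrt((sigma_x**2 + sigma_y**2) / n); set it equal to epsilon and solve for n.
n_exact = z**2 * (sigma_x**2 + sigma_y**2) / epsilon**2
print(n_exact, math.ceil(n_exact))   # ~143.8, so n = 144
```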
7.4-7. For a public opinion poll for a close presidential election, let $p$ denote the proportion of voters who favor candidate $A$. How large a sample should be taken if we want the maximum error of the estimate of $p$ to be equal to (a) 0.03 with $95 \%$ confidence?
The values for \hat{p} = 0.68, n = 400, z^* = 1.96 can now be substituted into the formula for one- sample proportion in the Z-interval: \hat{p} \pm z^* \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \Rightarrow (0.68) \pm (1.96) \sqrt{\frac{(0.68)(1-0.68)}{(400)}} \Rightarrow 0.68 \pm 1.96 \sqrt{0.000544} \Rightarrow \bigl(0.63429,0.72571\bigr) Based on the conditions of inference and the formula for the one-sample proportion in the Z-interval, it can be concluded with a 95% confidence level that the percentage of the voter population in this democracy supporting candidate B is between 63.429% and 72.571%. === Value of the parameter in the confidence interval range === A commonly asked question in inferential statistics is whether the parameter is included within a confidence interval. To answer the political scientist's question, a one- sample proportion in the Z-interval with a confidence level of 95% can be constructed in order to determine the population proportion of eligible voters in this democracy that support candidate B. ==== Solution ==== It is known from the random sample that \hat{p} = \frac{272}{400} = 0.68 with sample size n = 400. Since the sample error can often be estimated beforehand as a function of the sample size, various methods of sample size determination are used to weigh the predicted accuracy of an estimator against the predicted cost of taking a larger sample. ===Bootstrapping and Standard Error=== As discussed, a sample statistic, such as an average or percentage, will generally be subject to sample-to-sample variation. :N \geq 10(400) \Rightarrow N \geq 4000 :The population size N for this democracy's voters can be assumed to be at least 4,000. Thus, p_{\min} < p < p_{\max}, where: :\frac{\Gamma(n+1)}{\Gamma(x )\Gamma(n-x+1)}\int_0^{ p_{\min}} t^{x-1}(1-t)^{n-x}dt = \frac{\alpha}{2} :\frac{\Gamma(n+1)}{\Gamma(x+1)\Gamma(n-x)}\int_0^{ p_{\max}} t^{x}(1-t)^{n-x-1}dt = 1-\frac{\alpha}{2} The binomial proportion confidence interval is then ( p_{\min}, p_{\max}), as follows from the relation between the Binomial distribution cumulative distribution function and the regularized incomplete beta function. Under this formulation, the confidence interval represents those values of the population parameter that would have large p-values if they were tested as a hypothesized population proportion. A population proportion can be estimated through the usage of a confidence interval known as a one-sample proportion in the Z-interval whose formula is given below: :\hat{p} \pm z^* \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} where \hat{p} is the sample proportion, n is the sample size, and z^* is the upper \frac{1-C}{2} critical value of the standard normal distribution for a level of confidence C. === Proof === In order to derive the formula for the one-sample proportion in the Z-interval, a sampling distribution of sample proportions needs to be taken into consideration. This can be verified mathematically with the following definition: #* Let n be the sample size of a given random sample and let \hat{p} be its sample proportion. * Since a random sample of 400 voters was obtained from the voting population, the condition for a simple random sample has been met. In other words, a binomial proportion confidence interval is an interval estimate of a success probability p when only the number of experiments n and the number of successes nS are known. 
This approximation is based on the central limit theorem and is unreliable when the sample size is small or the success probability is close to 0 or 1. The value of 72% is a sample proportion. Suppose the following probability is calculated: :P(-z^*<\frac{\hat{p}-P}{\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}}, where 0 and \pm z^* are the standard critical values. thumb|The sampling distribution of sample proportions is approximately normal when it satisfies the requirements of the Central Limit Theorem. A political scientist wants to determine what percentage of the voter population support candidate B. A random sample of 400 eligible voters in the democracy's voter population shows that 272 voters support candidate B. The likely size of the sampling error can generally be reduced by taking a larger sample. ===Sample Size Determination=== The cost of increasing a sample size may be prohibitive in reality. The mean of the sampling distribution of sample proportions is usually denoted as \mu_\hat{p} = P and its standard deviation is denoted as: :\sigma_\hat{p} = \sqrt{\frac{P(1-P)}{n}} Since the value of P is unknown, an unbiased statistic \hat{p} will be used for P. A 95% confidence interval for the proportion, for instance, will contain the true proportion 95% of the times that the procedure for constructing the confidence interval is employed. ==Normal approximation interval or Wald interval == A commonly used formula for a binomial confidence interval relies on approximating the distribution of error about a binomially-distributed observation, \hat p, with a normal distribution. Hence, the values of C fall between 0 and 1, exclusively. === Estimation of P using ranked set sampling === A more precise estimate of P can be obtained by choosing ranked set sampling instead of simple random sampling ==See also== *Binomial proportion confidence interval *Confidence interval *Prevalence *Statistical hypothesis testing *Statistical inference *Statistical parameter *Tolerance interval == References == Category:Ratios In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments (Bernoulli trials). The solution for estimates the upper and lower limits of the confidence interval for . Since the X_i are independent and each one has variance \text{Var}(X_i) = p(1-p), the sampling variance of the proportion therefore is:How to calculate the standard error of a proportion using weighted data? :\text{Var}(\hat p) = \sum_{i=1}^n \text{Var}(w_i X_i) = p(1-p)\sum_{i=1}^n w_i^2.
1068
-167
199.4
0.3359
0.082
A
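A sketch for 7.4-7(a) above; since no prior estimate of $p$ is given, it assumes the conservative choice $p^* = 1/2$ together with the tabled value $z_{0.025} \approx 1.96$.

```python
import math

# 7.4-7(a): sample size so the maximum error of the estimate of p is 0.03 with 95% confidence.
z = 1.96          # z_{0.025}, upper 2.5% point of N(0, 1) (table value)
eps = 0.03
p_star = 0.5      # conservative choice: p(1 - p) is largest at p = 1/2

n = math.ceil(z**2 * p_star * (1 - p_star) / eps**2)
print(n)          # 1068
```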
5.5-15. Let the distribution of $T$ be $t(17)$. Find (a) $t_{0.01}(17)$.
"Noncentral Student's t-Distribution." The noncentral t-distribution generalizes Student's t-distribution using a noncentrality parameter. When the scaling term is estimated based on the data, the test statistic—under certain conditions—follows a Student's t distribution. Once the t value and degrees of freedom are determined, a p-value can be found using a table of values from Student's t-distribution. However, in practice the distribution is rarely used, since tabulated values for T are hard to find. The t statistic is calculated as :t = \frac{\bar{X}_D - \mu_0}{s_D/\sqrt n} where \bar{X}_D and s_D are the average and standard deviation of the differences between all pairs. However, the central t-distribution can be used as an approximation to the noncentral t-distribution. The location/scale generalization of the central t-distribution is a different distribution from the noncentral t-distribution discussed in this article. In statistics, the t-statistic is the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error. Another is Hotelling's T statistic follows a T distribution. The test statistic is approximately equal to 1.959, which gives a two-tailed p-value of 0.07857. ==Related statistical tests== ===Alternatives to the t-test for location problems=== The t-test provides an exact test for the equality of the means of two i.i.d. normal populations with unknown, but equal, variances. T helper 17 cells (Th17) are a subset of pro-inflammatory T helper cells defined by their production of interleukin 17 (IL-17). * The t-test p-value for the difference in means, and the regression p-value for the slope, are both 0.00805. In each case, the formula for a test statistic that either exactly follows or closely approximates a t-distribution under the null hypothesis is given. The key property of the t statistic is that it is a pivotal quantity – while defined in terms of the sample mean, its sampling distribution does not depend on the population parameters, and thus it can be used regardless of what these may be. The t-distribution also appeared in a more general form as Pearson Type IV distribution in Karl Pearson's 1895 paper. The t-distribution also appeared in a more general form as Pearson Type IV distribution in Karl Pearson's 1895 paper. Usually, T is converted instead to an F statistic. Galaxy 17 is a communications satellite owned by Intelsat to be located at 91° West longitude, serving the North American market. The t-statistic is used in a t-test to determine whether to support or reject the null hypothesis. It is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. If the test procedure rejects the null hypothesis whenever |T|>t_{1-\alpha/2}\,\\!, where t_{1-\alpha/2}\,\\! is the upper α/2 quantile of the (central) Student's t-distribution for a pre-specified α ∈ (0, 1), then the power of this test is given by :1-F_{n-1,\sqrt{n}\theta/\sigma}(t_{1-\alpha/2})+F_{n-1,\sqrt{n}\theta/\sigma}(-t_{1-\alpha/2}) .
+2.9
1.1
2.567
1.88
0.139
C
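A one-line check of 5.5-15(a) above using SciPy's quantile function for the t distribution; the exercise itself only expects a table lookup.

```python
from scipy.stats import t

# 5.5-15(a): t_{0.01}(17) is the point with 1% of probability to its right,
# i.e. the 99th percentile of a t distribution with 17 degrees of freedom.
print(t.ppf(0.99, df=17))   # ~2.567
```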
5.5-1. Let $X_1, X_2, \ldots, X_{16}$ be a random sample from a normal distribution $N(77,25)$. Compute (a) $P(77<\bar{X}<79.5)$.
If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. To compute the probability that an observation is within two standard deviations of the mean (small differences due to rounding): \Pr(\mu-2\sigma \le X \le \mu+2\sigma) = \Phi(2) - \Phi(-2) \approx 0.9772 - (1 - 0.9772) \approx 0.9545 This is related to confidence interval as used in statistics: \bar{X} \pm 2\frac{\sigma}{\sqrt{n}} is approximately a 95% confidence interval when \bar{X} is the average of a sample of size n. ==Normality tests== The "68–95–99.7 rule" is often used to quickly get a rough probability estimate of something, given its standard deviation, if the population is assumed to be normal. Then, denoting c as the 97.5th percentile of this distribution, : \Pr(-c\le T \le c)=0.95 Note that "97.5th" and "0.95" are correct in the preceding expressions. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Consequently, : \Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95\, and we have a theoretical (stochastic) 95% confidence interval for μ. Let X_1, X_2, \dots, X_n denote a random sample of n independent observations from a population with overall expected value (average) \mu and finite variance, and denote the sample mean of that sample – itself a random variable – by \bar{X}_n. In mathematical notation, these facts can be expressed as follows, where is the probability function, is an observation from a normally distributed random variable, (mu) is the mean of the distribution, and (sigma) is its standard deviation: \begin{align} \Pr(\mu-1\sigma \le X \le \mu+1\sigma) & \approx 68.27\% \\\ \Pr(\mu-2\sigma \le X \le \mu+2\sigma) & \approx 95.45\% \\\ \Pr(\mu-3\sigma \le X \le \mu+3\sigma) & \approx 99.73\% \end{align} The usefulness of this heuristic especially depends on the question under consideration. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... Suppose we are interested in the sample average \bar{X}_n \equiv \frac{X_1 + \cdots + X_n}{n}. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. For large enough n, the distribution of \bar{X}_n gets arbitrarily close to the normal distribution with mean \mu and variance \sigma^2/n. After observing the sample we find values for and s for S, from which we compute the confidence interval : \left[ \bar{x} - \frac{cs}{\sqrt{n}}, \bar{x} + \frac{cs}{\sqrt{n}} \right]. 
== Interpretation == Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following). We only need to calculate each integral for the cases n = 1,2,3. \begin{align} &\Pr(\mu -1\sigma \leq X \leq \mu + 1\sigma) = \frac{1}{\sqrt{2\pi}} \int_{-1}^{1} e^{-\frac{u^2}{2}}du \approx 0.6827 \\\ &\Pr(\mu -2\sigma \leq X \leq \mu + 2\sigma) =\frac{1}{\sqrt{2\pi}}\int_{-2}^{2} e^{-\frac{u^2}{2}}du \approx 0.9545 \\\ &\Pr(\mu -3\sigma \leq X \leq \mu + 3\sigma) = \frac{1}{\sqrt{2\pi}}\int_{-3}^{3} e^{-\frac{u^2}{2}}du \approx 0.9973. \end{align} ==Cumulative distribution function== These numerical values "68%, 95%, 99.7%" come from the cumulative distribution function of the normal distribution. 77 (seventy-seven) is the natural number following 76 and preceding 78. If this procedure is performed many times, resulting in a collection of observed averages, the central limit theorem says that if the sample size was large enough, the probability distribution of these averages will closely approximate a normal distribution. In statistics, the 68–95–99.7 rule, also known as the empirical rule, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively. Then the limit as n\to\infty of the distribution of \frac{\bar{X}_n-\mu}{\sigma_{\bar{X}_n}}, where \sigma_{\bar{X}_n}=\frac{\sigma}{\sqrt{n}}, is the standard normal distribution. The number , whose typical value is close to but not greater than 1, is sometimes given in the form \ 1 - \alpha\ (or as a percentage \ 100%\cdot( 1 - \alpha )\ ), where \ \alpha\ is a small positive number, often 0.05 . thumb|upright=1.3|Each row of points is a sample from the same normal distribution. There is no single accepted name for this number; it is also commonly referred to as the "standard normal deviate", "normal score" or "Z score" for the 97.5 percentile point, the .975 point, or just its approximate value, 1.96. Suppose we wanted to calculate a 95% confidence interval for μ.
0.0526315789
0.66
0.1353
0.4772
-1.00
D
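A sketch for 5.5-1(a) above: the mean of 16 observations from $N(77, 25)$ is $N(77, 25/16)$, so the probability reduces to standard normal cdf values (written here with math.erf).

```python
from math import erf, sqrt

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# 5.5-1(a): Xbar is N(77, 25/16), so its standard deviation is 5/4.
mu, sd_xbar = 77.0, sqrt(25.0 / 16.0)
p = Phi((79.5 - mu) / sd_xbar) - Phi((77.0 - mu) / sd_xbar)
print(p)   # Phi(2) - Phi(0) ~ 0.4772
```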
5.4-19. A doorman at a hotel is trying to get three taxicabs for three different couples. The arrival of empty cabs has an exponential distribution with mean 2 minutes. Assuming independence, what is the probability that the doorman will get all three couples taken care of within 6 minutes?
* Exponential distribution is a special case of type 3 Pearson distribution. In probability theory and statistics, the exponential distribution or negative exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. Reliability theory and reliability engineering also make extensive use of the exponential distribution. The original Three Prisoners problem can be seen in this light: The warden in that problem still has these six cases, each with a probability of occurring. And for the group of 23 people, the probability of sharing is :p(23) \approx 1 - \left(\frac{364}{365}\right)^\binom{23}{2} = 1 - \left(\frac{364}{365}\right)^{253} \approx 0.500477 . ===Poisson approximation=== Applying the Poisson approximation for the binomial on the group of 23 people, :\operatorname{Poi}\left(\frac{\binom{23}{2}}{365}\right) =\operatorname{Poi}\left(\frac{253}{365}\right) \approx \operatorname{Poi}(0.6932) so :\Pr(X>0)=1-\Pr(X=0) \approx 1-e^{-0.6932} \approx 1-0.499998=0.500002. In queuing theory, the service times of agents in a system (e.g. how long it takes for a bank teller etc. to serve a customer) are often modeled as exponentially distributed variables. X =\sum_{i=1}^k \sum_{j=i+1}^k X_{ij} \begin{alignat}{3} E[X] & = \sum_{i=1}^k \sum_{j=i+1}^k E[X_{ij}]\\\ & = \binom{k}{2} \frac{1}{n}\\\ & = \frac{k(k-1)}{2n}\\\ \end{alignat} For , if , the expected number of people with the same birthday is ≈ 1.0356. (The arrival of customers for instance is also modeled by the Poisson distribution if the arrivals are independent and distributed identically.) This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions. ==Definitions== ===Probability density function=== The probability density function (pdf) of an exponential distribution is : f(x;\lambda) = \begin{cases} \lambda e^{ - \lambda x} & x \ge 0, \\\ 0 & x < 0\. \end{cases} Here λ > 0 is the parameter of the distribution, often called the rate parameter. This can be seen by considering the complementary cumulative distribution function: \begin{align} \Pr\left(T > s + t \mid T > s\right) &= \frac{\Pr\left(T > s + t \cap T > s\right)}{\Pr\left(T > s\right)} \\\\[4pt] &= \frac{\Pr\left(T > s + t \right)}{\Pr\left(T > s\right)} \\\\[4pt] &= \frac{e^{-\lambda(s + t)}}{e^{-\lambda s}} \\\\[4pt] &= e^{-\lambda t} \\\\[4pt] &= \Pr(T > t). \end{align} When T is interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, if T is conditioned on a failure to observe the event over some initial period of time s, the distribution of the remaining waiting time is the same as the original unconditional distribution. The exponential distribution is however not appropriate to model the overall lifetime of organisms or technical devices, because the "failure rates" here are not constant: more failures occur for very young and for very old systems. 
Similar caveats apply to the following examples which yield approximately exponentially distributed variables: * The time until a radioactive particle decays, or the time between clicks of a Geiger counter * The time it takes before your next telephone call * The time until default (on payment to company debt holders) in reduced-form credit risk modeling Exponential variables can also be used to model situations where certain events occur with a constant probability per unit length, such as the distance between mutations on a DNA strand, or between roadkills on a given road. The first few values are as follows: >50% probability of 3 people sharing a birthday - 88 people; >50% probability of 4 people sharing a birthday - 187 people . ===Probability of a shared birthday (collision)=== The birthday problem can be generalized as follows: :Given random integers drawn from a discrete uniform distribution with range , what is the probability that at least two numbers are the same? ( gives the usual birthday problem.) But if we focus on a time interval during which the rate is roughly constant, such as from 2 to 4 p.m. during work days, the exponential distribution can be used as a good approximate model for the time until the next phone call arrives. \\\ V_{t} &= n^{k} = 365^{23} \\\ P(A) &= \frac{V_{nr}}{V_{t}} \approx 0.492703 \\\ P(B) &= 1 - P(A) \approx 1 - 0.492703 \approx 0.507297 (50.7297%)\end{align} Another way the birthday problem can be solved is by asking for an approximate probability that in a group of people at least two have the same birthday. The exponential distribution is not the same as the class of exponential families of distributions. This implies that the expected number of people with a non-shared (unique) birthday is: : n \left( \frac{d-1}{d} \right)^{n-1} Similar formulas can be derived for the expected number of people who share with three, four, etc. other people. === Number of people until every birthday is achieved === The expected number of people needed until every birthday is achieved is called the Coupon collector's problem. Note that the vertical scale is logarithmic (each step down is 1020 times less likely). : 1 0.0% 5 2.7% 10 11.7% 20 41.1% 23 50.7% 30 70.6% 40 89.1% 50 97.0% 60 99.4% 70 99.9% 75 99.97% 100 % 200 % 300 (100 − )% 350 (100 − )% 365 (100 − )% ≥ 366 100% ==Approximations== thumb|right|upright=1.4|Graphs showing the approximate probabilities of at least two people sharing a birthday () and its complementary event () thumb|right|upright=1.4|A graph showing the accuracy of the approximation () The Taylor series expansion of the exponential function (the constant ) : e^x = 1 + x + \frac{x^2}{2!}+\cdots provides a first-order approximation for for |x| \ll 1: : e^x \approx 1 + x. Consequently, the desired probability is . In operating-rooms management, the distribution of surgery duration for a category of surgeries with no typical work-content (like in an emergency room, encompassing all types of surgeries). ===Prediction=== Having observed a sample of n data points from an unknown exponential distribution a common task is to use these samples to make predictions about future data from the same source. In probability theory and statistics, the normal-exponential-gamma distribution (sometimes called the NEG distribution) is a three-parameter family of continuous probability distributions.
1.60
1.88
1.86
2.25
0.5768
E
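A sketch for 5.4-19 above, assuming the intended reading is that the doorman waits for the third empty cab, so the waiting time is the sum of three independent exponential times with mean 2, a gamma (Erlang) variable with shape 3 and scale 2.

```python
from math import exp, factorial

# 5.4-19: time until the third empty cab is the sum of three independent
# exponential times with mean 2, i.e. gamma (Erlang) with shape k = 3, scale theta = 2.
k, theta, t = 3, 2.0, 6.0
r = t / theta                      # = 3
# Erlang cdf: P(W <= t) = 1 - sum_{j=0}^{k-1} e^{-r} r**j / j!
p = 1.0 - sum(exp(-r) * r**j / factorial(j) for j in range(k))
print(p)                           # ~0.5768
```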
7.3-9. Consider the following two groups of women: Group 1 consists of women who spend less than $\$ 500$ annually on clothes; Group 2 comprises women who spend over $\$ 1000$ annually on clothes. Let $p_1$ and $p_2$ equal the proportions of women in these two groups, respectively, who believe that clothes are too expensive. If 1009 out of a random sample of 1230 women from group 1 and 207 out of a random sample of 340 from group 2 believe that clothes are too expensive, (a) Give a point estimate of $p_1-p_2$.
The joint hypothesis problem is the problem that testing for market efficiency is difficult, or even impossible. The following estimate only replaces the population variances by the sample variances: :\hat u \approx \frac{(g_1 + g_2)^2}{g_1^2/(n_1-1) + g_2^2/(n_2-1)} \quad \text{ where } g_i = s_i^2/n_i. Many other methods of treating the problem have been proposed since, and the effect on the resulting confidence intervals have been investigated. ===Welch's approximate t solution=== A widely used method is that of B. L. Welch,Welch (1938, 1947) who, like Fisher, was at University College London. This p_i is proportional to some known quantity x_i so that p_i = \frac{x_i}{\sum_{i=1}^N x_i}.Skinner, Chris J. "Probability proportional to size (PPS) sampling." thumb|250px|Budget constraint, where A=\frac{m}{P_y} and B=\frac{m}{P_x} In economics, a budget constraint represents all the combinations of goods and services that a consumer may purchase given current prices within his or her given income. Since the sample error can often be estimated beforehand as a function of the sample size, various methods of sample size determination are used to weigh the predicted accuracy of an estimator against the predicted cost of taking a larger sample. ===Bootstrapping and Standard Error=== As discussed, a sample statistic, such as an average or percentage, will generally be subject to sample-to-sample variation. * Estimator of true probability (Frequentist approach). "On the theory of sampling from finite populations." *In statistics, the estimate of a proportion of a sample (denoted by p) has a standard error given by: :s_p = \sqrt{ \frac {p \, (1-p) } {n} } where n is the number of trials (which was denoted by N in the previous section). The likely size of the sampling error can generally be reduced by taking a larger sample. ===Sample Size Determination=== The cost of increasing a sample size may be prohibitive in reality. If consideration is restricted to classical statistical inference only, it is possible to seek solutions to the inference problem that are simple to apply in a practical sense, giving preference to this simplicity over any inaccuracy in the corresponding probability statements. In statistics, the question of checking whether a coin is fair is one whose importance lies, firstly, in providing a simple problem on which to illustrate basic ideas of statistical inference and, secondly, in providing a simple problem that can be used to compare various competing methods of statistical inference, including decision theory. In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. Sampling Techniques (3rd ed.). :E = \frac {Z}{ 2 \, \sqrt{n} } :E = \frac {Z}{ 2 \, \sqrt{ 10000 } } = \frac {Z}{ 200 } :E = 0.0050\, at 68.27% level of confidence (Z=1) :E = 0.0100\, at 95.45% level of confidence (Z=2) :E = 0.0165\, at 99.90% level of confidence (Z=3.3) 3\. The difference between the sample statistic and population parameter is considered the sampling error.Sarndal, Swenson, and Wretman (1992), Model Assisted Survey Sampling, Springer-Verlag, For example, if one measures the height of a thousand individuals from a population of one million, the average height of the thousand is typically not the same as the average height of all one million people in the country. 
For example, attempting to measure the average height of the entire human population of the Earth, but measuring a sample only from one country, could result in a large over- or under-estimation. (The TOTs are given by the price ratio Px/Py, where x is the exportable commodity and y is the importable). == Many goods == While low-level demonstrations of budget constraints are often limited to less than two good situations which provide easy graphical representation, it is possible to demonstrate the relationship between multiple goods through a budget constraint. In such a case, assuming there are n\, goods, called x_i\, for i=1,\dots,n\,, that the price of good x_i\, is denoted by p_i\,, and if \,W\, is the total amount that may be spent, then the budget constraint is: :\sum_{i=1}^np_ix_i\leq W. The pps sampling results in a fixed sample size n (as opposed to Poisson sampling which is similar but results in a random sample size with expectancy of n). In survey methodology, probability-proportional-to-size (pps) sampling is a sampling process where each element of the population (of size N) has some (independent) chance p_i to be selected to the sample when performing one draw. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter. ===Effective Sampling=== In statistics, a truly random sample means selecting individuals from a population with an equivalent probability; in other words, picking individuals from a group without bias.
4152
0.2115
-24.0
1.45
-1
B
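For 7.3-9(a) above, the point estimate is simply the difference of the two sample proportions; a two-line check.

```python
# 7.3-9(a): point estimate of p1 - p2 is the difference of the sample proportions.
p1_hat = 1009 / 1230
p2_hat = 207 / 340
print(p1_hat, p2_hat, p1_hat - p2_hat)   # ~0.8203, ~0.6088, ~0.2115
```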
5.6-9. In Example 5.6-4, compute $P(1.7 \leq Y \leq 3.2)$ with $n=4$ and compare your answer with the normal approximation of this probability.
If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Then, denoting c as the 97.5th percentile of this distribution, : \Pr(-c\le T \le c)=0.95 Note that "97.5th" and "0.95" are correct in the preceding expressions. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. Inverse Probability. There is no single accepted name for this number; it is also commonly referred to as the "standard normal deviate", "normal score" or "Z score" for the 97.5 percentile point, the .975 point, or just its approximate value, 1.96. For a large number of independent identically distributed random variables \ X_1, ..., X_n\ , with finite variance, the average \ \overline{X}_n\ approximately has a normal distribution, no matter what the distribution of the \ X_i\ is, with the approximation roughly improving in proportion to \ \sqrt{n\ }. == Example == Suppose {X1, …, Xn} is an independent sample from a normally distributed population with unknown parameters mean μ and variance σ2. right|thumb|Lines 10580–10594, columns 21–40, from A Million Random Digits with 100,000 Normal Deviates A Million Random Digits with 100,000 Normal Deviates is a random number book by the RAND Corporation, originally published in 1955. Probability. The probable error can also be expressed as a multiple of the standard deviation σ,Zwillinger, D.; Kokosa, S. (2000) CRC Standard Probability and Statistics Tables and Formulae, Chapman & Hall/CRC. Consequently, : \Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95\, and we have a theoretical (stochastic) 95% confidence interval for μ. In statistics, probable error defines the half-range of an interval about a central point for the distribution, such that half of the values from the distribution will lie within the interval and half outside.Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, OUP. thumb|upright=1.3|Each row of points is a sample from the same normal distribution. The upper bounds for C0 were subsequently lowered from the original estimate 7.59 due to to (considering recent results only) 0.9051 due to , 0.7975 due to , 0.7915 due to , 0.6379 and 0.5606 due to and . the best estimate is 0.5600 obtained by . ===Multidimensional version=== As with the multidimensional central limit theorem, there is a multidimensional version of the Berry–Esseen theorem.Bentkus, Vidmantas. 
The 95% probability relates to the reliability of the estimation procedure, not to a specific calculated interval. Thus, the probability that T will be between -c and +c is 95%. * A particular confidence level of 95% calculated from an experiment does not mean that there is a 95% probability of a sample parameter from a repeat of the experiment falling within this interval. == Counterexamples == Since confidence interval theory was proposed, a number of counter-examples to the theory have been developed to show how the interpretation of confidence intervals can be problematic, at least if one interprets them naïvely. === Confidence procedure for uniform location === Welch presented an example which clearly shows the difference between the theory of confidence intervals and other theories of interval estimation (including Fisher's fiducial intervals and objective Bayesian intervals). Welch showed that the first confidence procedure dominates the second, according to desiderata from confidence interval theory; for every \theta_1 \neq \theta, the probability that the first procedure contains \theta_1 is less than or equal to the probability that the second procedure contains \theta_1.
0.6749
435
1.2
4.5
15.1
A
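A sketch for 5.6-9 above. Example 5.6-4 is not reproduced in this record, so the code assumes $Y$ is the sum of $n = 4$ independent Uniform(0, 1) variables (the Irwin-Hall distribution); that assumption is flagged in the comments, though it does reproduce the listed value 0.6749.

```python
from math import comb, factorial, erf, sqrt

def irwin_hall_cdf(y, n):
    # cdf of the sum of n independent Uniform(0, 1) variables, for 0 <= y <= n
    return sum((-1)**k * comb(n, k) * (y - k)**n for k in range(int(y) + 1)) / factorial(n)

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Assumption: Y = X1 + ... + X4 with Xi iid Uniform(0, 1), since Example 5.6-4 is not shown here.
n = 4
exact = irwin_hall_cdf(3.2, n) - irwin_hall_cdf(1.7, n)          # ~0.6749
mu, sd = n * 0.5, sqrt(n / 12.0)                                 # mean 2, sd ~0.577
approx = Phi((3.2 - mu) / sd) - Phi((1.7 - mu) / sd)             # ~0.68
print(exact, approx)
```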
5.8-5. If the distribution of $Y$ is $b(n, 0.25)$, give a lower bound for $P(|Y / n-0.25|<0.05)$ when (c) $n=1000$.
If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. The lower bound is expressed in terms of the probabilities for pairs of events. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. thumb|Plot of S_n/n (red), its standard deviation 1/\sqrt{n} (blue) and its bound \sqrt{2\log\log n/n} given by LIL (green). From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. Then : \begin{align} P\left [ {Z}_i^r \right ] = \left(1-\frac{1}{n}\right)^r \le e^{-r / n}. \end{align} Thus, for r = \beta n \log n, we have P\left [ {Z}_i^r \right ] \le e^{(-\beta n \log n ) / n} = n^{-\beta}. right|thumb|300px| Probability mass function for Fisher's noncentral hypergeometric distribution for different values of the odds ratio ω. m1 = 80, m2 = 60, n = 100, ω = 0.01, ..., 1000 In probability theory and statistics, Fisher's noncentral hypergeometric distribution is a generalization of the hypergeometric distribution where sampling probabilities are modified by weight factors. Bound the desired probability using the Chebyshev inequality: :\operatorname{P}\left(|T- n H_n| \geq cn\right) \le \frac{\pi^2}{6c^2}. ===Tail estimates=== A stronger tail estimate for the upper tail be obtained as follows. The probability function and a simple approximation to the mean are given to the right. It asks the following question: If each box of a brand of cereals contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? For example, when n = 50 it takes about 225E(50) = 50(1 + 1/2 + 1/3 + ... + 1/50) = 224.9603, the expected number of trials to collect all 50 coupons. The mathematical analysis of the problem reveals that the expected number of trials needed grows as \Theta(n\log(n)). Probability. In probability theory, the Chung–Erdős inequality provides a lower bound on the probability that one out of many (possibly dependent) events occurs. In statistics, probable error defines the half-range of an interval about a central point for the distribution, such that half of the values from the distribution will lie within the interval and half outside.Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, OUP. The probable error can also be expressed as a multiple of the standard deviation σ,Zwillinger, D.; Kokosa, S. (2000) CRC Standard Probability and Statistics Tables and Formulae, Chapman & Hall/CRC. 
The calculation time for the probability function can be high when the sum in P0 has many terms. Using the Markov inequality to bound the desired probability: :\operatorname{P}(T \geq cn H_n) \le \frac{1}{c}. Their odds ratio is given as : \omega = \frac{\omega_X}{\omega_Y} = \frac{\pi_X/(1-\pi_X)}{\pi_Y/(1-\pi_Y)} . Then : \Pr\left( \limsup_n \frac{S_n}{\sqrt{n}} \geq M \right) \geqslant \limsup_n \Pr\left( \frac{S_n}{\sqrt{n}} \geq M \right) = \Pr\left( \mathcal{N}(0, 1) \geq M \right) > 0 so :\limsup_n \frac{S_n}{\sqrt{n}}=\infty \qquad \text{with probability 1.}
0.925
2.3
1.5377
-1.00
71
A
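A sketch of the Chebyshev bound used in 5.8-5(c) above: $P(|Y/n - p| < \varepsilon) \geq 1 - p(1-p)/(n\varepsilon^2)$.

```python
# 5.8-5(c): Chebyshev's inequality gives P(|Y/n - p| < eps) >= 1 - p(1 - p) / (n * eps**2).
p, eps, n = 0.25, 0.05, 1000
bound = 1.0 - p * (1.0 - p) / (n * eps**2)
print(bound)   # 0.925
```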
7.5-1. Let $Y_1 < Y_2 < Y_3 < Y_4 < Y_5 < Y_6$ be the order statistics of a random sample of size $n=6$ from a distribution of the continuous type having $(100 p)$ th percentile $\pi_p$. Compute (a) $P\left(Y_2 < \pi_{0.5} < Y_5\right)$.
If the sample values are :6, 9, 3, 8, the order statistics would be denoted :x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=8,\ \ x_{(4)}=9,\, where the subscript enclosed in parentheses indicates the th order statistic of the sample. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median. === Large sample sizes === For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by : U_{(\lceil np \rceil)} \sim AN\left(p,\frac{p(1-p)}{n}\right). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n). == See also == * Rankit * Box plot * BRS-inequality * Concomitant (statistics) * Fisher–Tippett distribution * Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables * Bernstein polynomial * L-estimator – linear combinations of order statistics * Rank-size distribution * Selection algorithm === Examples of order statistics === * Sample maximum and minimum * Quantile * Percentile * Decile * Quartile * Median == References == == External links == * Retrieved Feb 02,2005 * Retrieved Feb 02,2005 * C++ source Dynamic Order Statistics Category:Nonparametric statistics Category:Summary statistics Category:Permutations Similar remarks apply to all sample quantiles. == Probabilistic analysis == Given any random variables X1, X2..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf. When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution. == Notation and examples == For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability :\left[{6\choose 2}+{6\choose 3}+{6\choose 4}\right](1/2)^{6} = {25\over 32} \approx 78\%. Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. To find the probabilities of the k^\text{th} order statistics, three values are first needed, namely :p_1=P(Xx)=1-F(x). In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of U_{(n)}-U_{(1)}, i.e. maximum minus the minimum. 
The cumulative distribution function of the k^\text{th} order statistic can be computed by noting that : \begin{align} P(X_{(k)}\leq x)& =P(\text{there are at least }k\text{ observations less than or equal to }x) ,\\\ & =P(\text{there are at most }n-k\text{ observations greater than }x) ,\\\ & =\sum_{j=0}^{n-k}{n\choose j}p_3^j(p_1+p_2)^{n-j} . \end{align} Similarly, P(X_{(k)} is given by : \begin{align} P(X_{(k)}< x)& =P(\text{there are at least }k\text{ observations less than }x) ,\\\ & =P(\text{there are at most }n-k\text{ observations greater than or equal to }x) ,\\\ & =\sum_{j=0}^{n-k}{n\choose j}(p_2+p_3)^j(p_1)^{n-j} . \end{align} Note that the probability mass function of X_{(k)} is just the difference of these values, that is to say : \begin{align} P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}< x) ,\\\ &=\sum_{j=0}^{n-k}{n\choose j}\left(p_3^j(p_1+p_2)^{n-j}-(p_2+p_3)^j(p_1)^{n-j}\right) ,\\\ &=\sum_{j=0}^{n-k}{n\choose j}\left((1-F(x))^j(F(x))^{n-j}-(1-F(x)+f(x))^j(F(x)-f(x))^{n-j}\right). \end{align} == Computing order statistics == The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. In statistics, the percentile rank (PR) of a given score is the percentage of scores in its frequency distribution that are less than that score.Roscoe, J. T. (1975). With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. If the distribution is normally distributed, the percentile rank can be inferred from the standard score. ==See also== * Quantile * Percentile ==References== Category:Summary statistics The mean of this distribution is k / (n + 1). ==== The joint distribution of the order statistics of the uniform distribution ==== Similarly, for i < j, the joint probability density function of the two order statistics U(i) < U(j) can be shown to be :f_{U_{(i)},U_{(j)}}(u,v) = n!{u^{i-1}\over (i-1)!}{(v-u)^{j-i-1}\over(j-i-1)!}{(1-v)^{n-j}\over (n-j)!} which is (up to terms of higher order than O(du\,dv)) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals (0,u), (u,u+du), (u+du,v), (v,v+dv), (v+dv,1) respectively. However, we know from the preceding discussion that the probability that this interval actually contains the population median is :{6\choose 3}(1/2)^{6} = {5\over 16} \approx 31\%. One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. In other words, all n order statistics are needed from the n observations in a sample. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant: :f_{U_{(1)},U_{(2)},\ldots,U_{(n)}}(u_{1},u_{2},\ldots,u_{n}) = n!. The sample median may or may not be an order statistic, since there is a single middle value only when the number of observations is odd.
3.54
-11.2
240.0
0.7812
1.27
D
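For 7.5-1(a) above, the event $Y_2 < \pi_{0.5} < Y_5$ occurs exactly when the number of observations falling below the median is 2, 3, or 4, and that count is $b(6, 1/2)$; a short check.

```python
from math import comb

# 7.5-1(a): the number of sample values below the median pi_0.5 is b(6, 1/2);
# Y2 < pi_0.5 < Y5 happens exactly when that count is 2, 3, or 4.
p = sum(comb(6, k) for k in range(2, 5)) * 0.5**6
print(p)   # 50/64 = 0.78125
```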
5.2-13. Let $X_1, X_2$ be independent random variables representing lifetimes (in hours) of two key components of a device that fails when and only when both components fail. Say each $X_i$ has an exponential distribution with mean 1000. Let $Y_1=\min \left(X_1, X_2\right)$ and $Y_2=\max \left(X_1, X_2\right)$, so that the space of $Y_1, Y_2$ is $ 0< y_1 < y_2 < \infty $ (a) Find $G\left(y_1, y_2\right)=P\left(Y_1 \leq y_1, Y_2 \leq y_2\right)$.
The exponential-logarithmic model, together with its various properties, are studied by Tahmasbi and Rezaei (2008).Tahmasbi, R., Rezaei, S., (2008), "A two-parameter lifetime distribution with decreasing failure rate", Computational Statistics and Data Analysis, 52 (8), 3889-3901. The pdf for the standard fatigue life distribution reduces to : f(x) = \frac{\sqrt{x}+\sqrt{\frac{1}{x}}}{2\gamma x}\phi\left(\frac{\sqrt{x}-\sqrt{\frac{1}{x}}}{\gamma}\right)\quad x > 0; \gamma >0 Since the general form of probability functions can be expressed in terms of the standard distribution, all of the subsequent formulas are given for the standard form of the function. ==Cumulative distribution function== The formula for the cumulative distribution function is : F(x) = \Phi\left(\frac{\sqrt{x} - \sqrt{\frac{1}{x}}}{\gamma}\right)\quad x > 0; \gamma > 0 where Φ is the cumulative distribution function of the standard normal distribution. ==Quantile function== The formula for the quantile function is : G(p) = \frac{1}{4}\left[\gamma\Phi^{-1}(p) + \sqrt{4+\left(\gamma\Phi^{-1}(p)\right)^2}\right]^2 where Φ −1 is the quantile function of the standard normal distribution. ==References== * * * * * * * ==External links== *Fatigue life distribution Category:Continuous distributions Then there are seven nonempty subsets of { 1, ..., b } = { 1, 2, 3 }; hence seven different exponential random variables: : E_{\\{1\\}}, E_{\\{2\\}}, E_{\\{3\\}}, E_{\\{1,2\\}}, E_{\\{1,3\\}}, E_{\\{2,3\\}}, E_{\\{1,2,3\\}} Then we have: : \begin{align} T_1 & = \min\\{ E_{\\{1\\}}, E_{\\{1,2\\}}, E_{\\{1,3\\}}, E_{\\{1,2,3\\}} \\} \\\ T_2 & = \min\\{ E_{\\{2\\}}, E_{\\{1,2\\}}, E_{\\{2,3\\}}, E_{\\{1,2,3\\}} \\} \\\ T_3 & = \min\\{ E_{\\{3\\}}, E_{\\{1,3\\}}, E_{\\{2,3\\}}, E_{\\{1,2,3\\}} \\} \\\ \end{align} ==References== * Xu M, Xu S. The joint distribution of T=(T_1,\ldots,T_b) is called the Marshall–Olkin exponential distribution with parameters \\{\lambda _B,B\subset \\{1,2,\ldots,b\\}\\}. === Concrete example === Suppose b = 3\. In this situation, the energy distance is zero if and only if X and Y are identically distributed. Vilnius, 2009 If X is defined to be the random variable which is the minimum of N independent realisations from an exponential distribution with rate parameter β, and if N is a realisation from a logarithmic distribution (where the parameter p in the usual parameterisation is replaced by ), then X has the exponential- logarithmic distribution in the parameterisation used above. ==References== Category:Continuous distributions Category:Survival analysis X is then distributed normally with a mean of zero and a variance of α2 / 4. ==Probability density function== The general formula for the probability density function (pdf) is : f(x) = \frac{\sqrt{\frac{x-\mu}{\beta}}+\sqrt{\frac{\beta}{x-\mu}}}{2\gamma\left(x-\mu\right)}\phi\left(\frac{\sqrt{\frac{x-\mu}{\beta}}-\sqrt{\frac{\beta}{x-\mu}}}{\gamma}\right)\quad x > \mu; \gamma,\beta>0 where γ is the shape parameter, μ is the location parameter, β is the scale parameter, and \phi is the probability density function of the standard normal distribution. ==Standard fatigue life distribution== The case where μ = 0 and β = 1 is called the standard fatigue life distribution. 
thumb|Diagram showing queueing system equivalent of a hyperexponential distribution In probability theory, a hyperexponential distribution is a continuous probability distribution whose probability density function of the random variable X is given by : f_X(x) = \sum_{i=1}^n f_{Y_i}(x)\;p_i, where each Yi is an exponentially distributed random variable with rate parameter λi, and pi is the probability that X will take on the form of the exponential distribution with rate λi. {1-(1-p) e^{-\beta x}} | cdf = 1-\frac{\ln(1-(1-p) e^{-\beta x})}{\ln p} | mean = -\frac{\text{polylog}(2,1-p)}{\beta\ln p} | median = \frac{\ln(1+\sqrt{p})}{\beta} | mode = 0 | variance = -\frac{2 \text{polylog}(3,1-p)}{\beta^2\ln p} -\frac{ \text{polylog}^2(2,1-p)}{\beta^2\ln^2 p} | skewness = | kurtosis = | entropy = | mgf = -\frac{\beta(1-p)}{\ln p (\beta-t)} \text{hypergeom}_{2,1} ([1,\frac{\beta-t}{\beta}],[\frac{2\beta-t}{\beta}],1-p) | cf = | pgf = | fisher = }} In probability theory and statistics, the Exponential-Logarithmic (EL) distribution is a family of lifetime distributions with decreasing failure rate, defined on the interval [0, ∞). If T is the number of cycles to failure then the cumulative distribution function (cdf) of T is : P( T \le t ) = 1 - \Phi\left( \frac{ \omega - t \mu }{ \sigma \sqrt{ t } } \right) = \Phi\left( \frac{ t \mu - \omega }{ \sigma \sqrt{ t } } \right) = \Phi\left( \frac{ \mu \sqrt{ t } }{ \sigma } - \frac{ \omega }{ \sigma \sqrt{t} } \right) = \Phi\left( \frac{ \sqrt{ \mu \omega } }{ \sigma } \left[ \left( \frac{ t }{ \omega / \mu } \right)^{ 0.5 } - \left( \frac{ \omega / \mu }{ t } \right)^{ 0.5 } \right] \right) The more usual form of this distribution is: : F( x; \alpha, \beta ) = \Phi\left( \frac{ 1 }{ \alpha } \left[ \left( \frac{ x }{ \beta } \right)^{0.5} - \left( \frac{ \beta }{ x } \right)^{0.5} \right] \right) Here α is the shape parameter and β is the scale parameter. ==Properties== The Birnbaum–Saunders distribution is unimodal with a median of β. In applied statistics, the Marshall–Olkin exponential distribution is any member of a certain family of continuous multivariate probability distributions with positive-valued components. The Birnbaum-Saunders distribution, also known as the fatigue life distribution, is a probability distribution used extensively in reliability applications to model failure times. Category:Statistics articles needing expert attention Category:Continuous distributions Category:Exponentials Category:Exponential family distributions In general, the lifetime of a device is expected to exhibit decreasing failure rate (DFR) when its behavior over time is characterized by 'work-hardening' (in engineering terms) or 'immunity' (in biological terms). If X and Y are independent random vectors in Rd with cumulative distribution functions (cdf) F and G respectively, then the energy distance between the distributions F and G is defined to be the square root of : D^2(F, G) = 2\operatorname E\|X - Y\| - \operatorname E\|X - X'\| - \operatorname E\|Y - Y'\| \geq 0, where (X, X', Y, Y') are independent, the cdf of X and X' is F, the cdf of Y and Y' is G, \operatorname E is the expected value, and || . || denotes the length of a vector. Hence the mean and variance of the EL distribution are given, respectively, by :E(X)=-\frac{\operatorname{Li}_2(1-p)}{\beta\ln p}, :\operatorname{Var}(X)=-\frac{2 \operatorname{Li}_3(1-p)}{\beta^2\ln p}-\left(\frac{ \operatorname{Li}_2(1-p)}{\beta\ln p}\right)^2. 
=== The survival, hazard and mean residual life functions === thumb|300px|Hazard function The survival function (also known as the reliability function) and hazard function (also known as the failure rate function) of the EL distribution are given, respectively, by : s(x)=\frac{\ln(1-(1-p)e^{-\beta x})}{\ln p}, : h(x)=\frac{-\beta(1-p)e^{-\beta x}}{(1-(1-p)e^{-\beta x})\ln(1-(1-p)e^{-\beta x})}. The mean residual lifetime of the EL distribution is given by : m(x_0;p,\beta)=E(X-x_0|X\geq x_0;\beta,p)=-\frac{\operatorname{Li}_2(1-(1-p)e^{-\beta x_0})}{\beta \ln(1-(1-p)e^{-\beta x_0})} where \operatorname{Li}_2 is the dilogarithm function === Random number generation === Let U be a random variate from the standard uniform distribution. The EM iteration is given by : \beta^{(h+1)} = n \left( \sum_{i=1}^n\frac{x_i}{1-(1-p^{(h)})e^{-\beta^{(h)}x_i}} \right)^{-1}, : p^{(h+1)}=\frac{-n(1-p^{(h+1)})} { \ln( p^{(h+1)}) \sum_{i=1}^n \\{1-(1-p^{(h)})e^{-\beta^{(h)} x_i}\\}^{-1}}. ==Related distributions== The EL distribution has been generalized to form the Weibull-logarithmic distribution.Ciumara, Roxana; Preda, Vasile (2009) "The Weibull-logarithmic distribution in lifetime analysis and its properties". The power system reliability is the probability of a normal operation of the electrical grid at a given time. Energy distance and E-statistic were considered as N-distances and N-statistic in Zinger A.A., Kakosyan A.V., Klebanov L.B. Characterization of distributions by means of mean values of some statistics in connection with some probability metrics, Stability Problems for Stochastic Models. A class of Probability Metrics and its Statistical Applications, Statistics in Industry and Technology: Statistical Data Analysis, Yadolah Dodge, Ed. Energy distance is a statistical distance between probability distributions.
362880
0.5117
0.8
15
+10
B
5.4-5. Let $Z_1, Z_2, \ldots, Z_7$ be a random sample from the standard normal distribution $N(0,1)$. Let $W=Z_1^2+Z_2^2+$ $\cdots+Z_7^2$. Find $P(1.69 < W < 14.07)$
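Since $W$ is the sum of seven squared independent $N(0,1)$ variables, it has a chi-square distribution with 7 degrees of freedom, so the requested probability is the difference of two chi-square cdf values. A minimal check follows; scipy is used here purely for illustration, whereas the exercise itself expects a table lookup.

```python
from scipy.stats import chi2

# 1.69 and 14.07 are the 0.025 and 0.95 quantiles of chi-square with 7 df,
# so the probability should come out to roughly 0.95 - 0.025 = 0.925
prob = chi2.cdf(14.07, df=7) - chi2.cdf(1.69, df=7)
print(round(prob, 3))   # ≈ 0.925
```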
To find a negative value such as -0.83, one could use a cumulative table for negative z-values which yield a probability of 0.20327. If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... The probability distribution fZ(z) is given in this case by :f_Z(z)=\frac{1}{\sqrt{2 \pi}\sigma_+ }\exp\left(-\frac{z^2}{2\sigma_+^2}\right) where :\sigma_+ = \sqrt{\sigma_x^2+\sigma_y^2+2\rho\sigma_x \sigma_y}. Since probability tables cannot be printed for every normal distribution, as there are an infinite variety of normal distributions, it is common practice to convert a normal to a standard normal (known as a z-score) and then use the standard normal table to find probabilities. ==Normal and standard normal distribution== Normal distributions are symmetrical, bell-shaped distributions that are useful in describing real-world data. * What is the probability that a student scores an 82 or less? \begin{align} P(X \le 82) &= P \\!\\! \left(Z \le \frac{82 - 80}{5}\right) \\\ &= P(Z \le 0.40) \\\\[2pt] &= 0.15542 + 0.5 \\\\[2pt] &= 0.65542 \end{align} * What is the probability that a student scores a 90 or more? \begin{align} P(X \ge 90) &= P \\!\\! \left(Z \ge \frac{90 - 80}{5}\right) \\\ &= P(Z \ge 2.00) \\\\[2pt] &= 1 - P(Z \le 2.00) \\\\[2pt] &= 1 - (0.47725 + 0.5) \\\\[2pt] &= 0.02275 \end{align} * What is the probability that a student scores a 74 or less? \begin{align} P(X \le 74) &= P \\!\\! \left(Z \le \frac{74 - 80}{5}\right) \\\ &= P(Z \le - 1.20) \end{align} Since this table does not include negatives, the process involves the following additional step: \begin{align} \qquad \qquad \quad ={} & P(Z \ge 1.20) \\\\[2pt] ={} & 1 - (0.38493 + 0.5) \\\\[2pt] ={} & 0.11507 \end{align} * What is the probability that a student scores between 74 and 82? \begin{align} P(74 \le X \le 82) &= P(X \le 82) - P(X \le 74) \\\\[2pt] &= 0.65542 - 0.11507 \\\\[2pt] &= 0.54035 \end{align} * What is the probability that an average of three scores is 82 or less? \begin{align} P(X \le 82) &= P\left(Z \le \frac{82 - 80}{5/\sqrt{3}}\right) \\\ &= P(Z \le 0.69) \\\\[2pt] &= 0.2549 + 0.5 \\\\[2pt] &= 0.7549 \end{align} ==See also== * 68–95–99.7 rule * t-distribution table ==References== Category:Normal distribution Category:Mathematical tables In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. In probability theory, calculation of the sum of normally distributed random variables is an instance of the arithmetic of random variables, which can be quite complex based on the probability distributions of the random variables involved and their relationships. 
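The worked exam-score calculations above (scores treated as normal with mean 80 and standard deviation 5, as in the computations shown) can be reproduced from the standard normal cdf directly rather than from the printed table; a brief illustrative check with scipy:

```python
from math import sqrt
from scipy.stats import norm

print(norm.cdf((82 - 80) / 5))              # P(X <= 82)        ≈ 0.6554
print(1 - norm.cdf((90 - 80) / 5))          # P(X >= 90)        ≈ 0.0228
print(norm.cdf((74 - 80) / 5))              # P(X <= 74)        ≈ 0.1151
print(norm.cdf(0.4) - norm.cdf(-1.2))       # P(74 <= X <= 82)  ≈ 0.5404
print(norm.cdf((82 - 80) / (5 / sqrt(3))))  # mean of 3 scores  ≈ 0.756 (the table value
                                            # 0.7549 comes from rounding z to 0.69)
```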
This means that the sum of two independent normally distributed random variables is normal, with its mean being the sum of the two means, and its variance being the sum of the two variances (i.e., the square of the standard deviation is the sum of the squares of the standard deviations). The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. The standard normal distribution, represented by , is the normal distribution having a mean of 0 and a standard deviation of 1. ===Conversion=== If is a random variable from a normal distribution with mean and standard deviation , its Z-score may be calculated from by subtracting and dividing by the standard deviation: : Z = \frac{X - \mu}{\sigma } If \overline{X} is the mean of a sample of size from some population in which the mean is and the standard deviation is , the standard error is : Z = \frac{\overline{X} - \mu}{\sigma / \sqrt n} If \sum X is the total of a sample of size from some population in which the mean is and the standard deviation is , the expected total is and the standard error is : Z = \frac{\sum{X} - n\mu}{\sigma \sqrt{n}} ==Reading a Z table== ===Formatting / layout=== tables are typically composed as follows: * The label for rows contains the integer part and the first decimal place of . The 7TP (siedmiotonowy polski - 7-tonne Polish) was a Polish light tank of the Second World War. In statistics, a standard normal table, also called the unit normal table or Z table, is a mathematical table for the values of , the cumulative distribution function of the normal distribution. This is not to be confused with the sum of normal distributions which forms a mixture distribution. ==Independent random variables== Let X and Y be independent random variables that are normally distributed (and therefore also jointly so), then their sum is also normally distributed. i.e., if :X \sim N(\mu_X, \sigma_X^2) :Y \sim N(\mu_Y, \sigma_Y^2) :Z=X+Y, then :Z \sim N(\mu_X + \mu_Y, \sigma_X^2 + \sigma_Y^2). At the same time, one 7TP was captured by the Soviets during their invasion of Poland. This equates to the area of the distribution below . But since the normal distribution curve is symmetrical, probabilities for only positive values of are typically given. These probabilities are calculations of the area under the normal curve from the starting point (0 for cumulative from mean, negative infinity for cumulative and positive infinity for complementary cumulative) to . Example: To find 0.69, one would look down the rows to find 0.6 and then across the columns to 0.09 which would yield a probability of 0.25490 for a cumulative from mean table or 0.75490 from a cumulative table. * The values within the table are the probabilities corresponding to the table type. : \Phi(z) = \frac12\left[1 + \operatorname{erf}\left( \frac z {\sqrt 2} \right) \right] Note that for , one obtains (after multiplying by 2 to account for the interval) the results , characteristic of the 68–95–99.7 rule. ===Cumulative (less than Z)=== This table gives a probability that a statistic is less than (i.e. between negative infinity and ). 
z −0.00 −0.01 −0.02 −0.03 −0.04 −0.05 −0.06 −0.07 −0.08 −0.09 -4.0 0.00003 0.00003 0.00003 0.00003 0.00003 0.00003 0.00002 0.00002 0.00002 0.00002 -3.9 0.00005 0.00005 0.00004 0.00004 0.00004 0.00004 0.00004 0.00004 0.00003 0.00003 -3.8 0.00007 0.00007 0.00007 0.00006 0.00006 0.00006 0.00006 0.00005 0.00005 0.00005 -3.7 0.00011 0.00010 0.00010 0.00010 0.00009 0.00009 0.00008 0.00008 0.00008 0.00008 -3.6 0.00016 0.00015 0.00015 0.00014 0.00014 0.00013 0.00013 0.00012 0.00012 0.00011 -3.5 0.00023 0.00022 0.00022 0.00021 0.00020 0.00019 0.00019 0.00018 0.00017 0.00017 −3.4 0.00034 0.00032 0.00031 0.00030 0.00029 0.00028 0.00027 0.00026 0.00025 0.00024 −3.3 0.00048 0.00047 0.00045 0.00043 0.00042 0.00040 0.00039 0.00038 0.00036 0.00035 −3.2 0.00069 0.00066 0.00064 0.00062 0.00060 0.00058 0.00056 0.00054 0.00052 0.00050 −3.1 0.00097 0.00094 0.00090 0.00087 0.00084 0.00082 0.00079 0.00076 0.00074 0.00071 −3.0 0.00135 0.00131 0.00126 0.00122 0.00118 0.00114 0.00111 0.00107 0.00104 0.00100 −2.9 0.00187 0.00181 0.00175 0.00169 0.00164 0.00159 0.00154 0.00149 0.00144 0.00139 −2.8 0.00256 0.00248 0.00240 0.00233 0.00226 0.00219 0.00212 0.00205 0.00199 0.00193 −2.7 0.00347 0.00336 0.00326 0.00317 0.00307 0.00298 0.00289 0.00280 0.00272 0.00264 −2.6 0.00466 0.00453 0.00440 0.00427 0.00415 0.00402 0.00391 0.00379 0.00368 0.00357 −2.5 0.00621 0.00604 0.00587 0.00570 0.00554 0.00539 0.00523 0.00508 0.00494 0.00480 −2.4 0.00820 0.00798 0.00776 0.00755 0.00734 0.00714 0.00695 0.00676 0.00657 0.00639 −2.3 0.01072 0.01044 0.01017 0.00990 0.00964 0.00939 0.00914 0.00889 0.00866 0.00842 −2.2 0.01390 0.01355 0.01321 0.01287 0.01255 0.01222 0.01191 0.01160 0.01130 0.01101 −2.1 0.01786 0.01743 0.01700 0.01659 0.01618 0.01578 0.01539 0.01500 0.01463 0.01426 −2.0 0.02275 0.02222 0.02169 0.02118 0.02068 0.02018 0.01970 0.01923 0.01876 0.01831 −1.9 0.02872 0.02807 0.02743 0.02680 0.02619 0.02559 0.02500 0.02442 0.02385 0.02330 −1.8 0.03593 0.03515 0.03438 0.03362 0.03288 0.03216 0.03144 0.03074 0.03005 0.02938 −1.7 0.04457 0.04363 0.04272 0.04182 0.04093 0.04006 0.03920 0.03836 0.03754 0.03673 −1.6 0.05480 0.05370 0.05262 0.05155 0.05050 0.04947 0.04846 0.04746 0.04648 0.04551 −1.5 0.06681 0.06552 0.06426 0.06301 0.06178 0.06057 0.05938 0.05821 0.05705 0.05592 −1.4 0.08076 0.07927 0.07780 0.07636 0.07493 0.07353 0.07215 0.07078 0.06944 0.06811 −1.3 0.09680 0.09510 0.09342 0.09176 0.09012 0.08851 0.08692 0.08534 0.08379 0.08226 −1.2 0.11507 0.11314 0.11123 0.10935 0.10749 0.10565 0.10383 0.10204 0.10027 0.09853 −1.1 0.13567 0.13350 0.13136 0.12924 0.12714 0.12507 0.12302 0.12100 0.11900 0.11702 −1.0 0.15866 0.15625 0.15386 0.15151 0.14917 0.14686 0.14457 0.14231 0.14007 0.13786 −0.9 0.18406 0.18141 0.17879 0.17619 0.17361 0.17106 0.16853 0.16602 0.16354 0.16109 −0.8 0.21186 0.20897 0.20611 0.20327 0.20045 0.19766 0.19489 0.19215 0.18943 0.18673 −0.7 0.24196 0.23885 0.23576 0.23270 0.22965 0.22663 0.22363 0.22065 0.21770 0.21476 −0.6 0.27425 0.27093 0.26763 0.26435 0.26109 0.25785 0.25463 0.25143 0.24825 0.24510 −0.5 0.30854 0.30503 0.30153 0.29806 0.29460 0.29116 0.28774 0.28434 0.28096 0.27760 −0.4 0.34458 0.34090 0.33724 0.33360 0.32997 0.32636 0.32276 0.31918 0.31561 0.31207 −0.3 0.38209 0.37828 0.37448 0.37070 0.36693 0.36317 0.35942 0.35569 0.35197 0.34827 −0.2 0.42074 0.41683 0.41294 0.40905 0.40517 0.40129 0.39743 0.39358 0.38974 0.38591 −0.1 0.46017 0.45620 0.45224 0.44828 0.44433 0.44038 0.43644 0.43251 0.42858 0.42465 −0.0 0.50000 0.49601 0.49202 0.48803 0.48405 0.48006 0.47608 0.47210 0.46812 0.46414 z −0.00 
−0.01 −0.02 −0.03 −0.04 −0.05 −0.06 −0.07 −0.08 −0.09 z + 0.00 + 0.01 + 0.02 + 0.03 + 0.04 + 0.05 + 0.06 + 0.07 + 0.08 + 0.09 0.0 0.50000 0.50399 0.50798 0.51197 0.51595 0.51994 0.52392 0.52790 0.53188 0.53586 0.1 0.53983 0.54380 0.54776 0.55172 0.55567 0.55962 0.56360 0.56749 0.57142 0.57535 0.2 0.57926 0.58317 0.58706 0.59095 0.59483 0.59871 0.60257 0.60642 0.61026 0.61409 0.3 0.61791 0.62172 0.62552 0.62930 0.63307 0.63683 0.64058 0.64431 0.64803 0.65173 0.4 0.65542 0.65910 0.66276 0.66640 0.67003 0.67364 0.67724 0.68082 0.68439 0.68793 0.5 0.69146 0.69497 0.69847 0.70194 0.70540 0.70884 0.71226 0.71566 0.71904 0.72240 0.6 0.72575 0.72907 0.73237 0.73565 0.73891 0.74215 0.74537 0.74857 0.75175 0.75490 0.7 0.75804 0.76115 0.76424 0.76730 0.77035 0.77337 0.77637 0.77935 0.78230 0.78524 0.8 0.78814 0.79103 0.79389 0.79673 0.79955 0.80234 0.80511 0.80785 0.81057 0.81327 0.9 0.81594 0.81859 0.82121 0.82381 0.82639 0.82894 0.83147 0.83398 0.83646 0.83891 1.0 0.84134 0.84375 0.84614 0.84849 0.85083 0.85314 0.85543 0.85769 0.85993 0.86214 1.1 0.86433 0.86650 0.86864 0.87076 0.87286 0.87493 0.87698 0.87900 0.88100 0.88298 1.2 0.88493 0.88686 0.88877 0.89065 0.89251 0.89435 0.89617 0.89796 0.89973 0.90147 1.3 0.90320 0.90490 0.90658 0.90824 0.90988 0.91149 0.91308 0.91466 0.91621 0.91774 1.4 0.91924 0.92073 0.92220 0.92364 0.92507 0.92647 0.92785 0.92922 0.93056 0.93189 1.5 0.93319 0.93448 0.93574 0.93699 0.93822 0.93943 0.94062 0.94179 0.94295 0.94408 1.6 0.94520 0.94630 0.94738 0.94845 0.94950 0.95053 0.95154 0.95254 0.95352 0.95449 1.7 0.95543 0.95637 0.95728 0.95818 0.95907 0.95994 0.96080 0.96164 0.96246 0.96327 1.8 0.96407 0.96485 0.96562 0.96638 0.96712 0.96784 0.96856 0.96926 0.96995 0.97062 1.9 0.97128 0.97193 0.97257 0.97320 0.97381 0.97441 0.97500 0.97558 0.97615 0.97670 2.0 0.97725 0.97778 0.97831 0.97882 0.97932 0.97982 0.98030 0.98077 0.98124 0.98169 2.1 0.98214 0.98257 0.98300 0.98341 0.98382 0.98422 0.98461 0.98500 0.98537 0.98574 2.2 0.98610 0.98645 0.98679 0.98713 0.98745 0.98778 0.98809 0.98840 0.98870 0.98899 2.3 0.98928 0.98956 0.98983 0.99010 0.99036 0.99061 0.99086 0.99111 0.99134 0.99158 2.4 0.99180 0.99202 0.99224 0.99245 0.99266 0.99286 0.99305 0.99324 0.99343 0.99361 2.5 0.99379 0.99396 0.99413 0.99430 0.99446 0.99461 0.99477 0.99492 0.99506 0.99520 2.6 0.99534 0.99547 0.99560 0.99573 0.99585 0.99598 0.99609 0.99621 0.99632 0.99643 2.7 0.99653 0.99664 0.99674 0.99683 0.99693 0.99702 0.99711 0.99720 0.99728 0.99736 2.8 0.99744 0.99752 0.99760 0.99767 0.99774 0.99781 0.99788 0.99795 0.99801 0.99807 2.9 0.99813 0.99819 0.99825 0.99831 0.99836 0.99841 0.99846 0.99851 0.99856 0.99861 3.0 0.99865 0.99869 0.99874 0.99878 0.99882 0.99886 0.99889 0.99893 0.99896 0.99900 3.1 0.99903 0.99906 0.99910 0.99913 0.99916 0.99918 0.99921 0.99924 0.99926 0.99929 3.2 0.99931 0.99934 0.99936 0.99938 0.99940 0.99942 0.99944 0.99946 0.99948 0.99950 3.3 0.99952 0.99953 0.99955 0.99957 0.99958 0.99960 0.99961 0.99962 0.99964 0.99965 3.4 0.99966 0.99968 0.99969 0.99970 0.99971 0.99972 0.99973 0.99974 0.99975 0.99976 3.5 0.99977 0.99978 0.99978 0.99979 0.99980 0.99981 0.99981 0.99982 0.99983 0.99983 3.6 0.99984 0.99985 0.99985 0.99986 0.99986 0.99987 0.99987 0.99988 0.99988 0.99989 3.7 0.99989 0.99990 0.99990 0.99990 0.99991 0.99991 0.99992 0.99992 0.99992 0.99992 3.8 0.99993 0.99993 0.99993 0.99994 0.99994 0.99994 0.99994 0.99995 0.99995 0.99995 3.9 0.99995 0.99995 0.99996 0.99996 0.99996 0.99996 0.99996 0.99996 0.99997 0.99997 4.0 0.99997 0.99997 0.99997 0.99997 0.99997 0.99997 0.99998 0.99998 
0.99998 0.99998 z +0.00 +0.01 +0.02 +0.03 +0.04 +0.05 +0.06 +0.07 +0.08 +0.09 0.5 + each value in Cumulative from mean table ===Complementary cumulative=== This table gives a probability that a statistic is greater than . :f(z) = 1 - \Phi(z) z +0.00 +0.01 +0.02 +0.03 +0.04 +0.05 +0.06 +0.07 +0.08 +0.09 0.0 0.50000 0.49601 0.49202 0.48803 0.48405 0.48006 0.47608 0.47210 0.46812 0.46414 0.1 0.46017 0.45620 0.45224 0.44828 0.44433 0.44038 0.43640 0.43251 0.42858 0.42465 0.2 0.42074 0.41683 0.41294 0.40905 0.40517 0.40129 0.39743 0.39358 0.38974 0.38591 0.3 0.38209 0.37828 0.37448 0.37070 0.36693 0.36317 0.35942 0.35569 0.35197 0.34827 0.4 0.34458 0.34090 0.33724 0.33360 0.32997 0.32636 0.32276 0.31918 0.31561 0.31207 0.5 0.30854 0.30503 0.30153 0.29806 0.29460 0.29116 0.28774 0.28434 0.28096 0.27760 0.6 0.27425 0.27093 0.26763 0.26435 0.26109 0.25785 0.25463 0.25143 0.24825 0.24510 0.7 0.24196 0.23885 0.23576 0.23270 0.22965 0.22663 0.22363 0.22065 0.21770 0.21476 0.8 0.21186 0.20897 0.20611 0.20327 0.20045 0.19766 0.19489 0.19215 0.18943 0.18673 0.9 0.18406 0.18141 0.17879 0.17619 0.17361 0.17106 0.16853 0.16602 0.16354 0.16109 1.0 0.15866 0.15625 0.15386 0.15151 0.14917 0.14686 0.14457 0.14231 0.14007 0.13786 1.1 0.13567 0.13350 0.13136 0.12924 0.12714 0.12507 0.12302 0.12100 0.11900 0.11702 1.2 0.11507 0.11314 0.11123 0.10935 0.10749 0.10565 0.10383 0.10204 0.10027 0.09853 1.3 0.09680 0.09510 0.09342 0.09176 0.09012 0.08851 0.08692 0.08534 0.08379 0.08226 1.4 0.08076 0.07927 0.07780 0.07636 0.07493 0.07353 0.07215 0.07078 0.06944 0.06811 1.5 0.06681 0.06552 0.06426 0.06301 0.06178 0.06057 0.05938 0.05821 0.05705 0.05592 1.6 0.05480 0.05370 0.05262 0.05155 0.05050 0.04947 0.04846 0.04746 0.04648 0.04551 1.7 0.04457 0.04363 0.04272 0.04182 0.04093 0.04006 0.03920 0.03836 0.03754 0.03673 1.8 0.03593 0.03515 0.03438 0.03362 0.03288 0.03216 0.03144 0.03074 0.03005 0.02938 1.9 0.02872 0.02807 0.02743 0.02680 0.02619 0.02559 0.02500 0.02442 0.02385 0.02330 2.0 0.02275 0.02222 0.02169 0.02118 0.02068 0.02018 0.01970 0.01923 0.01876 0.01831 2.1 0.01786 0.01743 0.01700 0.01659 0.01618 0.01578 0.01539 0.01500 0.01463 0.01426 2.2 0.01390 0.01355 0.01321 0.01287 0.01255 0.01222 0.01191 0.01160 0.01130 0.01101 2.3 0.01072 0.01044 0.01017 0.00990 0.00964 0.00939 0.00914 0.00889 0.00866 0.00842 2.4 0.00820 0.00798 0.00776 0.00755 0.00734 0.00714 0.00695 0.00676 0.00657 0.00639 2.5 0.00621 0.00604 0.00587 0.00570 0.00554 0.00539 0.00523 0.00508 0.00494 0.00480 2.6 0.00466 0.00453 0.00440 0.00427 0.00415 0.00402 0.00391 0.00379 0.00368 0.00357 2.7 0.00347 0.00336 0.00326 0.00317 0.00307 0.00298 0.00289 0.00280 0.00272 0.00264 2.8 0.00256 0.00248 0.00240 0.00233 0.00226 0.00219 0.00212 0.00205 0.00199 0.00193 2.9 0.00187 0.00181 0.00175 0.00169 0.00164 0.00159 0.00154 0.00149 0.00144 0.00139 3.0 0.00135 0.00131 0.00126 0.00122 0.00118 0.00114 0.00111 0.00107 0.00104 0.00100 3.1 0.00097 0.00094 0.00090 0.00087 0.00084 0.00082 0.00079 0.00076 0.00074 0.00071 3.2 0.00069 0.00066 0.00064 0.00062 0.00060 0.00058 0.00056 0.00054 0.00052 0.00050 3.3 0.00048 0.00047 0.00045 0.00043 0.00042 0.00040 0.00039 0.00038 0.00036 0.00035 3.4 0.00034 0.00032 0.00031 0.00030 0.00029 0.00028 0.00027 0.00026 0.00025 0.00024 3.5 0.00023 0.00022 0.00022 0.00021 0.00020 0.00019 0.00019 0.00018 0.00017 0.00017 3.6 0.00016 0.00015 0.00015 0.00014 0.00014 0.00013 0.00013 0.00012 0.00012 0.00011 3.7 0.00011 0.00010 0.00010 0.00010 0.00009 0.00009 0.00008 0.00008 0.00008 0.00008 3.8 0.00007 0.00007 0.00007 0.00006 0.00006 0.00006 
0.00006 0.00005 0.00005 0.00005 3.9 0.00005 0.00005 0.00004 0.00004 0.00004 0.00004 0.00004 0.00004 0.00003 0.00003 4.0 0.00003 0.00003 0.00003 0.00003 0.00003 0.00003 0.00002 0.00002 0.00002 0.00002 0.5 − each value in Cumulative from mean (0 to Z) table This table gives a probability that a statistic is greater than Z, for large integer Z values. z +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 0 5.00000 E −1 1.58655 E −1 2.27501 E −2 1.34990 E −3 3.16712 E −5 2.86652 E −7 9.86588 E −10 1.27981 E −12 6.22096 E −16 1.12859 E −19 10 7.61985 E −24 1.91066 E −28 1.77648 E −33 6.11716 E −39 7.79354 E −45 3.67097 E −51 6.38875 E −58 4.10600 E −65 9.74095 E −73 8.52722 E −81 20 2.75362 E -89 3.27928 E -98 1.43989 E -107 2.33064 E -117 1.39039 E -127 3.05670 E -138 2.47606 E -149 7.38948 E -161 8.12387 E -173 3.28979 E -185 30 4.90671 E -198 2.69525 E -211 5.45208 E -225 4.06119 E -239 1.11390 E -253 1.12491 E -268 4.18262 E -284 5.72557 E -300 2.88543 E -316 5.35312 E -333 40 3.65589 E -350 9.19086 E -368 8.50515 E -386 2.89707 E -404 3.63224 E -423 1.67618 E -442 2.84699 E -462 1.77976 E -482 4.09484 E -503 3.46743 E -524 50 1.08060 E -545 1.23937 E -567 5.23127 E -590 8.12606 E -613 4.64529 E -636 9.77237 E -660 7.56547 E -684 2.15534 E -708 2.25962 E -733 8.71741 E -759 60 1.23757 E -784 6.46517 E -811 1.24283 E -837 8.79146 E -865 2.28836 E -892 2.19180 E -920 7.72476 E -949 1.00178 E -977 4.78041 E -1007 8.39374 E -1037 70 5.42304 E -1067 1.28921 E -1097 1.12771 E -1128 3.62960 E -1160 4.29841 E -1192 1.87302 E -1224 3.00302 E -1257 1.77155 E -1290 3.84530 E -1324 3.07102 E -1358 ==Examples of use== A professor's exam scores are approximately distributed normally with mean 80 and standard deviation 5. The 7.39 is a British drama television film that was broadcast in two parts on BBC One on 6 January and 7 January 2014.
4.09
0.0024
0.33333333
0.24995
0.925
E
5.4-19. A doorman at a hotel is trying to get three taxicabs for three different couples. The arrival of empty cabs has an exponential distribution with mean 2 minutes. Assuming independence, what is the probability that the doorman will get all three couples taken care of within 6 minutes?
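With independent exponential inter-arrival times of mean 2 minutes, the waiting time for the third cab is the sum of three such exponentials, i.e. a gamma (Erlang) random variable with shape 3 and scale 2; equivalently one can count Poisson arrivals with mean 6/2 = 3 over the six minutes. A small illustrative check of both routes (not part of the original exercise):

```python
import math
from scipy.stats import gamma

# P(T <= 6) where T ~ Gamma(shape = 3, scale = 2): waiting time for the 3rd cab
p_gamma = gamma.cdf(6, 3, scale=2)

# Same event via the Poisson count: at least 3 arrivals in 6 minutes, mean 3
p_poisson = 1 - sum(math.exp(-3) * 3**k / math.factorial(k) for k in range(3))

print(p_gamma, p_poisson)   # both ≈ 0.5768
```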
* Exponential distribution is a special case of type 3 Pearson distribution. In probability theory and statistics, the exponential distribution or negative exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. Reliability theory and reliability engineering also make extensive use of the exponential distribution. The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. The original Three Prisoners problem can be seen in this light: The warden in that problem still has these six cases, each with a probability of occurring. And for the group of 23 people, the probability of sharing is :p(23) \approx 1 - \left(\frac{364}{365}\right)^\binom{23}{2} = 1 - \left(\frac{364}{365}\right)^{253} \approx 0.500477 . ===Poisson approximation=== Applying the Poisson approximation for the binomial on the group of 23 people, :\operatorname{Poi}\left(\frac{\binom{23}{2}}{365}\right) =\operatorname{Poi}\left(\frac{253}{365}\right) \approx \operatorname{Poi}(0.6932) so :\Pr(X>0)=1-\Pr(X=0) \approx 1-e^{-0.6932} \approx 1-0.499998=0.500002. In queuing theory, the service times of agents in a system (e.g. how long it takes for a bank teller etc. to serve a customer) are often modeled as exponentially distributed variables. X =\sum_{i=1}^k \sum_{j=i+1}^k X_{ij} \begin{alignat}{3} E[X] & = \sum_{i=1}^k \sum_{j=i+1}^k E[X_{ij}]\\\ & = \binom{k}{2} \frac{1}{n}\\\ & = \frac{k(k-1)}{2n}\\\ \end{alignat} For , if , the expected number of people with the same birthday is ≈ 1.0356. Similar caveats apply to the following examples which yield approximately exponentially distributed variables: * The time until a radioactive particle decays, or the time between clicks of a Geiger counter * The time it takes before your next telephone call * The time until default (on payment to company debt holders) in reduced-form credit risk modeling Exponential variables can also be used to model situations where certain events occur with a constant probability per unit length, such as the distance between mutations on a DNA strand, or between roadkills on a given road. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions. ==Definitions== ===Probability density function=== The probability density function (pdf) of an exponential distribution is : f(x;\lambda) = \begin{cases} \lambda e^{ - \lambda x} & x \ge 0, \\\ 0 & x < 0\. \end{cases} Here λ > 0 is the parameter of the distribution, often called the rate parameter. (The arrival of customers for instance is also modeled by the Poisson distribution if the arrivals are independent and distributed identically.) The exponential distribution is however not appropriate to model the overall lifetime of organisms or technical devices, because the "failure rates" here are not constant: more failures occur for very young and for very old systems. 
This can be seen by considering the complementary cumulative distribution function: \begin{align} \Pr\left(T > s + t \mid T > s\right) &= \frac{\Pr\left(T > s + t \cap T > s\right)}{\Pr\left(T > s\right)} \\\\[4pt] &= \frac{\Pr\left(T > s + t \right)}{\Pr\left(T > s\right)} \\\\[4pt] &= \frac{e^{-\lambda(s + t)}}{e^{-\lambda s}} \\\\[4pt] &= e^{-\lambda t} \\\\[4pt] &= \Pr(T > t). \end{align} When T is interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, if T is conditioned on a failure to observe the event over some initial period of time s, the distribution of the remaining waiting time is the same as the original unconditional distribution. But if we focus on a time interval during which the rate is roughly constant, such as from 2 to 4 p.m. during work days, the exponential distribution can be used as a good approximate model for the time until the next phone call arrives. In operating-rooms management, the distribution of surgery duration for a category of surgeries with no typical work-content (like in an emergency room, encompassing all types of surgeries). ===Prediction=== Having observed a sample of n data points from an unknown exponential distribution a common task is to use these samples to make predictions about future data from the same source. The first few values are as follows: >50% probability of 3 people sharing a birthday - 88 people; >50% probability of 4 people sharing a birthday - 187 people . ===Probability of a shared birthday (collision)=== The birthday problem can be generalized as follows: :Given random integers drawn from a discrete uniform distribution with range , what is the probability that at least two numbers are the same? ( gives the usual birthday problem.) Note that the vertical scale is logarithmic (each step down is 1020 times less likely). : 1 0.0% 5 2.7% 10 11.7% 20 41.1% 23 50.7% 30 70.6% 40 89.1% 50 97.0% 60 99.4% 70 99.9% 75 99.97% 100 % 200 % 300 (100 − )% 350 (100 − )% 365 (100 − )% ≥ 366 100% ==Approximations== thumb|right|upright=1.4|Graphs showing the approximate probabilities of at least two people sharing a birthday () and its complementary event () thumb|right|upright=1.4|A graph showing the accuracy of the approximation () The Taylor series expansion of the exponential function (the constant ) : e^x = 1 + x + \frac{x^2}{2!}+\cdots provides a first-order approximation for for |x| \ll 1: : e^x \approx 1 + x. Consequently, the desired probability is . \\\ V_{t} &= n^{k} = 365^{23} \\\ P(A) &= \frac{V_{nr}}{V_{t}} \approx 0.492703 \\\ P(B) &= 1 - P(A) \approx 1 - 0.492703 \approx 0.507297 (50.7297%)\end{align} Another way the birthday problem can be solved is by asking for an approximate probability that in a group of people at least two have the same birthday. The exponential distribution is not the same as the class of exponential families of distributions. This implies that the expected number of people with a non-shared (unique) birthday is: : n \left( \frac{d-1}{d} \right)^{n-1} Similar formulas can be derived for the expected number of people who share with three, four, etc. other people. === Number of people until every birthday is achieved === The expected number of people needed until every birthday is achieved is called the Coupon collector's problem. In probability theory and statistics, the normal-exponential-gamma distribution (sometimes called the NEG distribution) is a three-parameter family of continuous probability distributions.
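The memoryless property derived above is easy to confirm numerically for any particular rate and time points; a tiny illustrative check with arbitrarily chosen values:

```python
import math

lam, s, t = 0.5, 2.0, 3.0                       # arbitrary rate and times

def survival(x):
    # P(T > x) for an exponential random variable with rate lam
    return math.exp(-lam * x)

conditional = survival(s + t) / survival(s)     # P(T > s + t | T > s)
print(conditional, survival(t))                 # both equal e^(-lam*t) ≈ 0.2231
```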
4.8
-0.041
167.0
5
0.15
D
5.3-1. Let $X_1$ and $X_2$ be independent Poisson random variables with respective means $\lambda_1=2$ and $\lambda_2=3$. Find (a) $P\left(X_1=3, X_2=5\right)$. HINT (this hint applies to the event $\left\{X_1+X_2=1\right\}$ rather than to part (a)): note that that event can occur if and only if $\left\{X_1=1, X_2=0\right\}$ or $\left\{X_1=0, X_2=1\right\}$.
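Because $X_1$ and $X_2$ are independent, the joint probability in part (a) is simply the product of the two Poisson pmf values; a quick illustrative check with scipy:

```python
from scipy.stats import poisson

# P(X1 = 3, X2 = 5) = P(X1 = 3) * P(X2 = 5) by independence
prob = poisson.pmf(3, mu=2) * poisson.pmf(5, mu=3)
print(round(prob, 4))   # ≈ 0.0182
```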
The multiple Poisson distribution, its characteristics and a variety of forms. Mixed poisson processes, volume 77. In probability theory, a compound Poisson distribution is the probability distribution of the sum of a number of independent identically-distributed random variables, where the number of terms to be added is itself a Poisson- distributed variable. It should not be confused with compound Poisson distribution or compound Poisson process. == Definition == A random variable X satisfies the mixed Poisson distribution with density (λ) if it has the probability distribution : \operatorname{P}(X=k) = \int_0^\infty \frac{\lambda^k}{k!}e^{-\lambda} \,\,\pi(\lambda)\,\mathrm d\lambda. \, Via the law of total cumulance it can be shown that, if the mean of the Poisson distribution λ = 1, the cumulants of Y are the same as the moments of X1. In this situation, the number of points at \textstyle x is a Poisson random variable with mean \textstyle \Lambda({x}). If the points belong to a homogeneous Poisson process with parameter \textstyle \lambda>0, then the probability of \textstyle n points existing in \textstyle B is given by: : \Pr \\{N(B)=n\\}=\frac{(\lambda|B|)^n}{n!} e^{-\lambda|B|} where \textstyle |B| denotes the area of \textstyle B. The two separate Poisson point processes formed respectively from the removed and kept points are stochastically independent of each other. It follows that \lambda is the expected number of arrivals that occur per unit of time. ====Key properties==== The previous definition has two important features shared by Poisson point processes in general: * the number of arrivals in each finite interval has a Poisson distribution; * the number of arrivals in disjoint intervals are independent random variables. In other words, for each point of the original Poisson process, there is an independent and identically distributed non-negative random variable, and then the compound Poisson process is formed from the sum of all the random variables corresponding to points of the Poisson process located in some region of the underlying mathematical space. The two properties are not logically independent; indeed, independence implies the Poisson distribution of point counts, but not the converse. ===Poisson distribution of point counts=== A Poisson point process is characterized via the Poisson distribution. If a point x is sampled from a countable n union of Poisson processes, then the probability that the point \textstyle x belongs to the jth Poisson process N_j is given by: : \Pr \\{x\in N_j\\}=\frac{\Lambda_j}{\sum_{i=1}^n\Lambda_i}. The result can be either a continuous or a discrete distribution. ==Definition== Suppose that :N\sim\operatorname{Poisson}(\lambda), i.e., N is a random variable whose distribution is a Poisson distribution with expected value λ, and that :X_1, X_2, X_3, \dots are identically distributed random variables that are mutually independent and also independent of N. For two real numbers \textstyle a and \textstyle b, where \textstyle a\leq b, denote by \textstyle N(a,b] the number points of an inhomogeneous Poisson process with intensity function \textstyle \lambda(t) occurring in the interval \textstyle (a,b]. The Poisson distribution is the probability distribution of a random variable N (called a Poisson random variable) such that the probability that \textstyle N equals \textstyle n is given by: : \Pr \\{N=n\\}=\frac{\Lambda^n}{n!} e^{-\Lambda} where n! denotes factorial and the parameter \Lambda determines the shape of the distribution. 
A mixed Poisson distribution is a univariate discrete probability distribution in stochastics. In probability theory and statistics, the Poisson binomial distribution is the discrete probability distribution of a sum of independent Bernoulli trials that are not necessarily identically distributed. The probability of \textstyle n points existing in the above interval \textstyle (a,b] is given by: : \Pr \\{N(a,b]=n\\}=\frac{[\Lambda(a,b)]^n}{n!} e^{-\Lambda(a,b)}. where the mean or intensity measure is: : \Lambda(a,b)=\int_a^b \lambda (t)\,\mathrm dt, which means that the random variable \textstyle N(a,b] is a Poisson random variable with mean \textstyle \operatorname E[N(a,b]] = \Lambda(a,b). When r = 1,2, DCP becomes Poisson distribution and Hermite distribution, respectively. If we denote the probabilities of the Poisson distribution by qλ(k), then : \operatorname{P}(X=k) = \int_0^\infty q_\lambda(k) \,\,\pi(\lambda)\,\mathrm d\lambda. == Properties == * The variance is always bigger than the expected value. In probability, statistics and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space with the essential feature that the points occur independently of one another. Then the probability distribution of the sum of N i.i.d. random variables :Y = \sum_{n=1}^N X_n is a compound Poisson distribution.
76
35.2
0.0182
0.38
0.064
C
5.9-1. Let $Y$ be the number of defectives in a box of 50 articles taken from the output of a machine. Each article is defective with probability 0.01 . Find the probability that $Y=0,1,2$, or 3 (a) By using the binomial distribution.
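For part (a) the count of defectives is binomial with $n = 50$ and $p = 0.01$, so the individual probabilities and their sum follow directly from the binomial pmf; a short illustrative check with scipy:

```python
from scipy.stats import binom

n, p = 50, 0.01
for k in range(4):
    print(k, round(binom.pmf(k, n, p), 4))   # P(Y = 0), ..., P(Y = 3)
print(round(binom.cdf(3, n, p), 4))          # P(Y <= 3) ≈ 0.9984
```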
In such a case, the probability distribution of the number of failures that appear will be a negative binomial distribution. Thus, the probability of failure, q, is given by :q = 1 - p = 1 - \tfrac{1}{2} = \tfrac{1}{2}. That number of successes is a negative-binomially distributed random variable. When counting the number of successes before the r-th failure, as in alternative formulation (3) above, the variance is rp/(1 − p)2. ===Relation to the binomial theorem=== Suppose Y is a random variable with a binomial distribution with parameters n and p. Let p be the probability of success in a Bernoulli trial, and q be the probability of failure. The following table describes four distributions related to the number of successes in a sequence of draws: With replacements No replacements Given number of draws binomial distribution hypergeometric distribution Given number of failures negative binomial distribution negative hypergeometric distribution ===(a,b,0) class of distributions=== The negative binomial, along with the Poisson and binomial distributions, is a member of the (a,b,0) class of distributions. * Beta-binomial distribution. In each trial the probability of success is p and of failure is 1-p. The number of successes before the third failure belongs to the infinite set { 0, 1, 2, 3, ... In probability theory and statistics, the negative binomial distribution is a discrete probability distribution that models the number of failures in a sequence of independent and identically distributed Bernoulli trials before a specified (non-random) number of successes (denoted r) occurs. A random variable corresponding to a binomial experiment is denoted by B(n,p), and is said to have a binomial distribution. In statistics, binomial regression is a regression analysis technique in which the response (often referred to as Y) has a binomial distribution: it is the number of successes in a series of independent Bernoulli trials, where each trial has probability of success . In other words, the negative binomial distribution is the probability distribution of the number of successes before the rth failure in a Bernoulli process, with probability p of successes on each trial. If r is a counting number, the coin tosses show that the count of successes before the rth failure follows a negative binomial distribution with parameters r and p. Because the coin is assumed to be fair, the probability of success is p = \tfrac{1}{2}. In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. Probability in the Engineering and Informational Sciences is an international journal published by Cambridge University Press. Then the random number of observed failures, X, follows the negative binomial (or Pascal) distribution: : X\sim\operatorname{NB}(r, p) ===Probability mass function=== The probability mass function of the negative binomial distribution is : f(k; r, p) \equiv \Pr(X = k) = \binom{k+r-1}{k} (1-p)^k p^r where r is the number of successes, k is the number of failures, and p is the probability of success on each trial. * When k = 2, the multinomial distribution is the binomial distribution. Find the probability that exactly two of the tosses result in heads. ===Solution=== For this experiment, let a heads be defined as a success and a tails as a failure. 
Thus, the expected number of failures would be this value, minus the successes: : E[\operatorname{NB}(r, p)] = \frac{r}{p} - r = \frac{r(1-p)}{p} ===Expectation of successes=== The expected total number of failures in a negative binomial distribution with parameters is r(1 − p)/p. In binomial regression, the probability of a success is related to explanatory variables: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to explanatory variables.
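The expectation formula above can be spot-checked against scipy's negative binomial, which (like the formulation counting failures before the r-th success, with success probability p) has mean r(1 − p)/p; the parameter values below are arbitrary and purely illustrative:

```python
from scipy.stats import nbinom

r, p = 5, 0.3                  # arbitrary number of successes and success probability
print(nbinom.mean(r, p))       # expected failures before the r-th success
print(r * (1 - p) / p)         # closed form r(1-p)/p from the text, ≈ 11.667
```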
-32
0.9984
1.07
0.648004372
0.3359
B
7.4-11. Some dentists were interested in studying the fusion of embryonic rat palates by a standard transplantation technique. When no treatment is used, the probability of fusion equals approximately 0.89 . The dentists would like to estimate $p$, the probability of fusion, when vitamin A is lacking. (a) How large a sample $n$ of rat embryos is needed for $y / n \pm 0.10$ to be a $95 \%$ confidence interval for $p$ ?
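Part (a) is a standard sample-size calculation: the 95% half-width $z\sqrt{p(1-p)/n}$ is set equal to 0.10 and solved for $n$. The sketch below assumes the untreated fusion rate 0.89 is used as the planning value for $p$ (using the conservative value 0.5 instead would give a larger $n$); it is an illustration, not necessarily the textbook's intended derivation.

```python
import math

z, eps, p_guess = 1.96, 0.10, 0.89   # planning value for p taken from the untreated rate

n = math.ceil(z**2 * p_guess * (1 - p_guess) / eps**2)
print(n)   # 38 embryos (1.96^2 * 0.89 * 0.11 / 0.01 ≈ 37.6, rounded up)
```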
Given n_S successes in n trials, define :\tilde{n} = n + z^2 and :\tilde{p} = \frac{1}{\tilde{n}}\left(n_S + \frac{z^2}{2}\right) Then, a confidence interval for p is given by : \tilde{p} \pm z \sqrt{\frac{\tilde{p}}{\tilde{n}}\left(1 - \tilde{p} \right)} where z = \Phi^{-1}\\!\left(1 - \frac{\alpha}{2}\\!\right) is the quantile of a standard normal distribution, as before (for example, a 95% confidence interval requires \alpha = 0.05, thereby producing z = 1.96). In other words, a binomial proportion confidence interval is an interval estimate of a success probability p when only the number of experiments n and the number of successes nS are known. By symmetry, one could expect for only successes (\hat p = 1), the interval is . ==Comparison and discussion== There are several research papers that compare these and other confidence intervals for the binomial proportion. Follow the examples below for guidance: Fusion range: * PFR cc (6m) 8Δ BI → 20Δ BO * PFR sc (1/3m) 16Δ BI → 45Δ BO c diplopia Break + recovery: * PFR sc (6m) -8/6Δ → +20/15Δ c diplopia * PFR cc (1/3m) -16/14Δ → +45/40Δ c diplopia Patient results should be compared to the normal values for prism fusional amplitudes to determine if the patient has any anomalies. The parameter a has to be estimated for the data set. ==Rule of three — for when no successes are observed== The rule of three is used to provide a simple way of stating an approximate 95% confidence interval for p, in the special case that no successes (\hat p = 0) have been observed.Steve Simon (2010) "Confidence interval with zero events", The Children's Mercy Hospital, Kansas City, Mo. (website: "Ask Professor Mean at Stats topics or Medical Research ) The interval is . This method may be used to estimate the variance of p but its use is problematic when p is close to 0 or 1\. ==ta transform== Let p be the proportion of successes. For a 95% confidence level, the error \alpha=1-0.95=0.05, so 1 - \tfrac \alpha 2=0.975 and z=1.96. Combining the two, and squaring out the radical, gives an equation that is quadratic in : : \left(\, \hat{p} - p \,\right)^{2} = z^{2}\cdot\frac{\,p\left(1-p\right)\,}{n} Transforming the relation into a standard-form quadratic equation for , treating \hat p and as known values from the sample (see prior section), and using the value of that corresponds to the desired confidence for the estimate of gives this: \left( 1 + \frac{\,z^2\,}{n} \right) p^2 + \left( - 2 {\hat p} - \frac{\,z^2\,}{n} \right) p + \biggl( {\hat p}^2 \biggr) = 0 ~, where all of the values in parentheses are known quantities. Under this formulation, the confidence interval represents those values of the population parameter that would have large p-values if they were tested as a hypothesized population proportion. Tooth fusion arises through union of two normally separated tooth germs, and depending upon the stage of development of the teeth at the time of union, it may be either complete or incomplete. A 95% confidence interval for the proportion, for instance, will contain the true proportion 95% of the times that the procedure for constructing the confidence interval is employed. ==Normal approximation interval or Wald interval == A commonly used formula for a binomial confidence interval relies on approximating the distribution of error about a binomially-distributed observation, \hat p, with a normal distribution. The solution for estimates the upper and lower limits of the confidence interval for . 
However, fusion can also be the union of a normal tooth bud to a supernumerary tooth germ. thumb|400px|Lawson criterion of important magnetic confinement fusion experiments The Lawson criterion is a figure of merit used in nuclear fusion research. In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments (Bernoulli trials). There are several formulas for a binomial confidence interval, but all of them rely on the assumption of a binomial distribution. Because we do not know p(1-p), we have to estimate it. The Clopper–Pearson interval can be written as : S_{\le} \cap S_{\ge} or equivalently, : \left( \inf S_{\ge}\,,\, \sup S_{\le} \right) with : S_{\le} := \left\\{ p \,\,\Big|\,\, P \left[ \operatorname{Bin}\left( n; p \right) \le x \right] > \frac{\alpha}{2} \right\\} \text{ and } S_{\ge} := \left\\{ p \,\,\Big|\,\, P \left[ \operatorname{Bin}\left( n; p \right) \ge x \right] > \frac{\alpha}{2} \right\\}, where 0 ≤ x ≤ n is the number of successes observed in the sample and Bin(n; p) is a binomial random variable with n trials and probability of success p. Thus, p_{\min} < p < p_{\max}, where: :\frac{\Gamma(n+1)}{\Gamma(x )\Gamma(n-x+1)}\int_0^{ p_{\min}} t^{x-1}(1-t)^{n-x}dt = \frac{\alpha}{2} :\frac{\Gamma(n+1)}{\Gamma(x+1)\Gamma(n-x)}\int_0^{ p_{\max}} t^{x}(1-t)^{n-x-1}dt = 1-\frac{\alpha}{2} The binomial proportion confidence interval is then ( p_{\min}, p_{\max}), as follows from the relation between the Binomial distribution cumulative distribution function and the regularized incomplete beta function. Using the normal approximation, the success probability p is estimated as : \hat p \pm z \sqrt{\frac{\hat p \left(1 - \hat p\right)}{n}}, or the equivalent : \frac{n_S}{n} \pm \frac{z}{n \sqrt{n}} \sqrt{n_S n_F}, where \hat p = n_S / n is the proportion of successes in a Bernoulli trial process, measured with n trials yielding n_S successes and n_F = n - n_S failures, and z is the 1 - \tfrac{\alpha}{2} quantile of a standard normal distribution (i.e., the probit) corresponding to the target error rate \alpha. When such is the case, the fusion power density is proportional to p2<σv>/T 2. Because the binomial distribution is a discrete probability distribution (i.e., not continuous) and difficult to calculate for large numbers of trials, a variety of approximations are used to calculate this confidence interval, all with their own tradeoffs in accuracy and computational intensity.
1.41
2.74
12.0
38
9.8
D
7.1-3. To determine the effect of $100 \%$ nitrate on the growth of pea plants, several specimens were planted and then watered with $100 \%$ nitrate every day. At the end of two weeks, the plants were measured. Here are data on seven of them: $$ \begin{array}{lllllll} 17.5 & 14.5 & 15.2 & 14.0 & 17.3 & 18.0 & 13.8 \end{array} $$ Assume that these data are a random sample from a normal distribution $N\left(\mu, \sigma^2\right)$. (a) Find the value of a point estimate of $\mu$.
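For part (a) the natural point estimate of $\mu$ is the sample mean of the seven measurements; a one-line illustrative computation:

```python
data = [17.5, 14.5, 15.2, 14.0, 17.3, 18.0, 13.8]   # the seven measurements
print(round(sum(data) / len(data), 3))              # sample mean ≈ 15.757
```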
The average of these 15 deviations from the assumed mean is therefore −30/15 = −2\. Suppose we wanted to calculate a 95% confidence interval for μ. The assumed mean is the centre of the range from 174 to 177 which is 175.5. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Then, denoting c as the 97.5th percentile of this distribution, : \Pr(-c\le T \le c)=0.95 Note that "97.5th" and "0.95" are correct in the preceding expressions. The standard deviation is estimated as :CS \sqrt{\frac{B-\frac{A^2}{N}}{N-1}}=5.57 ==References== Category:Means thumb|upright=1.3|Each row of points is a sample from the same normal distribution. thumb|Plot of the standard deviation line (SD line), dashed, and the regression line, solid, for a scatter diagram of 20 points. thumb|upright=1.3|right|Relationship of phosphate to nitrate uptake for photosynthesis in various regions of the ocean. This value is then subtracted from all the sample values. Therefore, that is what we need to add to the assumed mean to get the correct mean: : correct mean = 240 − 2 = 238. ==Method== The method depends on estimating the mean and rounding to an easy value to calculate with. The colored lines are 50% confidence intervals for the mean, μ. Observed numbers in ranges Range tally-count frequency class diff freq×diff freq×diff2 159—161 / 1 −5 −5 25 162—164 ~~////~~ / 6 −4 −24 96 165—167 ~~////~~ ~~////~~ 10 −3 −30 90 168—170 ~~////~~ ~~////~~ /// 13 −2 −26 52 171—173 ~~////~~ ~~////~~ ~~////~~ / 16 −1 −16 16 174—176 ~~////~~ ~~////~~ ~~////~~ ~~////~~ ~~////~~ 25 0 0 0 177—179 ~~////~~ ~~////~~ ~~////~~ / 16 1 16 16 180—182 ~~////~~ ~~////~~ / 11 2 22 44 183—185 0 3 0 0 186—188 // 2 4 8 32 Sum N = 100 A = −55 B = 371 The mean is then estimated to be :x_0 + CS \times \frac{A}{N} = 175.5+3\times -55 / 100 = 173.85 which is very close to the actual mean of 173.846. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... Then the deviations from this "assumed" mean are the following: :−21, −17, −14, −12, −9, −6, −5, −4, 0, 1, 4, 7, 9, 15, 22 In adding these up, one finds that: : 22 and −21 almost cancel, leaving +1, : 15 and −17 almost cancel, leaving −2, : 9 and −9 cancel, : 7 + 4 cancels −6 − 5, and so on. After observing the sample we find values for and s for S, from which we compute the confidence interval : \left[ \bar{x} - \frac{cs}{\sqrt{n}}, \bar{x} + \frac{cs}{\sqrt{n}} \right]. == Interpretation == Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following). (Section 9.5) Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ2; i.e., it is a pivotal quantity. 
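The grouped-data (assumed mean) calculation above, which gives a mean of 173.85 and an estimated standard deviation of 5.57, can be reproduced directly from the tallied frequencies and class deviations listed in the passage; a short illustrative script:

```python
import math

freqs = [1, 6, 10, 13, 16, 25, 16, 11, 0, 2]        # class frequencies from the tally table
devs = list(range(-5, 5))                           # class deviations d = -5, ..., 4
x0, width, N = 175.5, 3, sum(freqs)                 # assumed mean, class width, sample size

A = sum(f * d for f, d in zip(freqs, devs))         # A = -55
B = sum(f * d**2 for f, d in zip(freqs, devs))      # B = 371
mean = x0 + width * A / N                           # 175.5 + 3 * (-55) / 100 = 173.85
sd = width * math.sqrt((B - A**2 / N) / (N - 1))    # ≈ 5.57
print(mean, round(sd, 2))
```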
A confidence interval for the true mean can be constructed centered on the sample mean with a width which is a multiple of the square root of the sample variance. ====Likelihood theory==== Estimates can be constructed using the maximum likelihood principle, the likelihood theory for this provides two ways of constructing confidence intervals or confidence regions for the estimates. ====Estimating equations==== The estimation approach here can be considered as both a generalization of the method of moments and a generalization of the maximum likelihood approach. * The confidence interval can be expressed in terms of statistical significance, e.g.: If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. At the center of each interval is the sample mean, marked with a diamond.
20.2
15.757
8.8
420
4.0
B
5.5-7. Suppose that the distribution of the weight of a prepackaged "1-pound bag" of carrots is $N\left(1.18,0.07^2\right)$ and the distribution of the weight of a prepackaged "3-pound bag" of carrots is $N\left(3.22,0.09^2\right)$. Selecting bags at random, find the probability that the sum of three 1-pound bags exceeds the weight of one 3-pound bag. HINT: First determine the distribution of $Y$, the sum of the three, and then compute $P(Y>W)$, where $W$ is the weight of the 3-pound bag.
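Following the hint, $Y$ (the total of three independent 1-pound bags) is normal with mean $3(1.18)=3.54$ and variance $3(0.07^2)$, so $Y-W$ is normal with mean $3.54-3.22=0.32$ and variance $3(0.07^2)+0.09^2$, and the answer is a single normal tail probability. An illustrative check with scipy:

```python
from math import sqrt
from scipy.stats import norm

mean_diff = 3 * 1.18 - 3.22                # E[Y - W] = 0.32
sd_diff = sqrt(3 * 0.07**2 + 0.09**2)      # sqrt(0.0147 + 0.0081) ≈ 0.151
print(round(1 - norm.cdf(0, loc=mean_diff, scale=sd_diff), 4))   # P(Y > W) ≈ 0.983
```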
Even if P≠NP, the O(nW) complexity does not contradict the fact that the knapsack problem is NP-complete, since W, unlike n, is not polynomial in the length of the input to the problem. Baggett v. During the process of the running of this method, how do we get the weight w? Nevertheless, a simple modification allows us to solve this case: Assume for simplicity that all items individually fit in the sack (w_i \le W for all i). The target is to maximize the sum of the values of the items in the knapsack so that the sum of weights in each dimension d does not exceed W_d. Define value[n, W] Initialize all value[i, j] = -1 Define m:=(i,j) // Define function m so that it represents the maximum value we can get under the condition: use first i items, total weight limit is j { if i == 0 or j <= 0 then: value[i, j] = 0 return if (value[i-1, j] == -1) then: // m[i-1, j] has not been calculated, we have to call function m m(i-1, j) if w[i] > j then: // item cannot fit in the bag value[i, j] = value[i-1, j] else: if (value[i-1, j-w[i]] == -1) then: // m[i-1,j-w[i]] has not been calculated, we have to call function m m(i-1, j-w[i]) value[i, j] = max(value[i-1,j], value[i-1, j-w[i]] + v[i]) } Run m(n, W) For example, there are 10 different items and the weight limit is 67. For a given item i, suppose we could find a set of items J such that their total weight is less than the weight of i, and their total value is greater than the value of i. His version sorts the items in decreasing order of value per unit of weight, v_1/w_1\ge\cdots\ge v_n/w_n. Informally, the problem is to maximize the sum of the values of the items in the knapsack so that the sum of the weights is less than or equal to the knapsack's capacity. The knapsack problem is the following problem in combinatorial optimization: :Given a set of items, each with a weight and a value, determine which items to include in the collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. Given a set of n items numbered from 1 up to n, each with a weight w_i and a value v_i, along with a maximum weight capacity W, : maximize \sum_{i=1}^n v_i x_i : subject to \sum_{i=1}^n w_i x_i \leq W and x_i \in \\{0,1\\}. There are only i ways and the previous weights are w-w_1, w-w_2,..., w-w_i where there are total i kinds of different item (by saying different, we mean that the weight and the value are not completely the same). Here x_i represents the number of instances of item i to include in the knapsack. In the Bag is a 1956 American animated short comedy film produced by Walt Disney Productions, directed by Jack Hannah,“Bearly” a Star: A Tribute To Disney’s Humphrey the Bear-Cartoon Research and featuring park ranger J. Audubon Woodlore and his comedic foil Humphrey the Bear.BCDB.com This was the last Disney theatrical cartoon short subject distributed by RKO Radio Pictures.Amazon.com ==Plot== Tourists have departed Brownstone National Park where Humphrey lives, leaving trash everywhere, despite signs asking tourists not to litter the park. Where are the hard knapsack problems? Three Bags Full: A Sheep Detective Story (original German title: Glennkill: Ein Schafskrimi) is 2005 novel by Leonie Swann. 
It can be shown that the average performance converges to the optimal solution in distribution at the error rate n^{-1/2} ==== Fully polynomial time approximation scheme ==== The fully polynomial time approximation scheme (FPTAS) for the knapsack problem takes advantage of the fact that the reason the problem has no known polynomial time solutions is because the profits associated with the items are not restricted. Baggett is a surname. The ranger prepares to reward Humphrey with a dish full of cacciatore, but before Humphrey can take it, the geyser suddenly erupts, spouting the garbage everywhere, resulting in Humphrey having to start all over again at cleaning up the park.Internet Archive ==In the Bag song== The song featured in In The Bag was so popular that Disney released a version of it (with similar instrumentation and different vocals) as a single, "The Humphrey Hop". Weight distribution is the apportioning of weight within a vehicle, especially cars, airplanes, and trains. The bounded knapsack problem (BKP) removes the restriction that there is only one of each item, but restricts the number x_i of copies of each kind of item to a maximum non-negative integer value c: : maximize \sum_{i=1}^n v_i x_i : subject to \sum_{i=1}^n w_i x_i \leq W and x_i \in \\{0,1,2,\dots,c\\}. For this reason weight distribution varies with the vehicle's intended usage.
0.01961
210
22.0
0.9830
2500
D
5.3-7. The distributions of incomes in two cities follow the two Pareto-type pdfs $$ f(x)=\frac{2}{x^3}, 1 < x < \infty , \text { and } g(y)= \frac{3}{y^4} , \quad 1 < y < \infty, $$ respectively. Here one unit represents $\$ 20,000$. One person with income is selected at random from each city. Let $X$ and $Y$ be their respective incomes. Compute $P(X < Y)$.
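By independence, $P(X<Y)=\int_1^\infty g(y)\,P(X<y)\,dy$ with $P(X<y)=1-1/y^{2}$, which evaluates to $3(1/3-1/5)=2/5$. A quick numerical confirmation of that integral (illustrative only):

```python
from scipy.integrate import quad

# integrand: g(y) * P(X < y) = (3 / y^4) * (1 - 1 / y^2) for y > 1
prob, _ = quad(lambda y: 3 / y**4 * (1 - 1 / y**2), 1, float("inf"))
print(round(prob, 4))   # 0.4
```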
As applied to distribution of incomes, this means that the larger the value of the Pareto index θ the smaller the proportion of incomes many times as big as the smallest incomes. The family of Pareto distributions is parameterized by * a positive number κ that is the smallest value that a random variable with a Pareto distribution can take. For example, it may be observed that 45% of individuals in the sample have incomes below a = $35,000 per year, and 55% have incomes below b = $40,000 per year. Multivariate Pareto distributions have been defined for many of these types. ==Bivariate Pareto distributions== ===Bivariate Pareto distribution of the first kind=== Mardia (1962) defined a bivariate distribution with cumulative distribution function (CDF) given by : F(x_1, x_2) = 1 -\sum_{i=1}^2\left(\frac{x_i}{\theta_i}\right)^{-a}+ \left(\sum_{i=1}^2 \frac{x_i}{\theta_i} - 1\right)^{-a}, \qquad x_i > \theta_i > 0, i=1,2; a>0, and joint density function : f(x_1, x_2) = (a+1)a(\theta_1 \theta_2)^{a+1}(\theta_2x_1 + \theta_1x_2 - \theta_1 \theta_2)^{-(a+2)}, \qquad x_i \geq \theta_i>0, i=1,2; a>0. This distribution is called a multivariate Pareto distribution of type II by Arnold. This distribution is called a multivariate Pareto distribution of type II by Arnold. In statistics, a multivariate Pareto distribution is a multivariate extension of a univariate Pareto distribution. There are several different types of univariate Pareto distributions including Pareto Types I−IV and Feller−Pareto. (This definition is not equivalent to Mardia's bivariate Pareto distribution of the second kind.) There were huge differences between white and the other people, not only in wages, but also in the place they can enter and so on. == Development of income distribution as a stochastic process == It is difficult to create a realistic and not complicated theoretical model, because the forces determining the distribution of income (DoI) are varied and complex and they continuously interact and fluctuate. The marginal distributions are Pareto Type 1 with density functions : f(x_i)=a\theta_i^a x_i^{-(a+1)}, \qquad x_i \geq \theta_i>0, i=1,2. Pareto interpolation is a method of estimating the median and other properties of a population that follows a Pareto distribution. Income inequality is amount to which income is distributed unequally in a population., additional text. If the location and scale parameter are allowed to differ, the complementary CDF is : \overline{F}(x_1,x_2) = \left(1 + \sum_{i=1}^2 \frac{x_i-\mu_i}{\sigma_i} \right)^{-a}, \qquad x_i > \mu_i, i=1,2, which has Pareto Type II univariate marginal distributions. In a model by Champernowne, the author assumes that the income scale is divided into an enumerable infinity of income ranges, which have uniform proportionate distribution. Modern economists have also addressed issues of income distribution, but have focused more on the distribution of income across individuals and households. As applied to distribution of incomes, κ is the lowest income of any person in the population; and * a positive number θ the "Pareto index"; as this increases, the tail of the distribution gets thinner. The top 1% had a 71.9% of the overall shared income. == Household inequality == Household inequality is the extent to which income is distributed unequally among people living in a houses collectively in a population. 
If the location and scale parameter are allowed to differ, the complementary CDF is : \overline{F}(x_1,\dots,x_k) = \left(1 + \sum_{i=1}^k \frac{x_i-\mu_i}{\sigma_i} \right)^{-a}, \qquad x_i > \mu_i, \quad i=1,\dots,k, \qquad (3) which has marginal distributions of the same type (3) and Pareto Type II univariate marginal distributions. Category:Estimation methods Category:Income inequality metrics Category:Theory of probability distributions Category:Parametric statistics Category:Vilfredo Pareto While it is common to refer to pareto as "80/20" rule, under the assumption that, in all situations, 20% of causes determine 80% of problems, this ratio is merely a convenient rule of thumb and is not, nor should it be considered, an immutable law of nature. The marginal distributions and conditional distributions are of the same type (5); that is, they are multivariate Feller–Pareto distributions.
5.85
0.4
93.4
1.07
0.66666666666
B
7.3-3. Let $p$ equal the proportion of triathletes who suffered a training-related overuse injury during the past year. Out of 330 triathletes who responded to a survey, 167 indicated that they had suffered such an injury during the past year. (a) Use these data to give a point estimate of $p$.
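Part (a) is a single division: the point estimate of $p$ is the sample proportion $y/n$ with $y = 167$ injured out of $n = 330$ respondents, both counts taken from the question. A one-line check:

```python
# Point estimate of p from the counts stated in the question.
y, n = 167, 330
p_hat = y / n
print(round(p_hat, 4))  # approximately 0.5061
```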
A prospective cohort study of 76 runners followed for one year showed that 51 percent reported an injury. "A prospective cohort study of 300 runners followed for two years showed that 73 percent of women and 62 percent of men sustained an injury, with 56 percent of the injured runners sustaining more than one injury during the study period." Many of the common injuries that affect runners are chronic, developing over longer periods as the result of overuse. Because of this mechanism, stress fractures are common overuse injuries in athletes. "Over 60% of male injured runners and over 50% of female injured runners had increased their weekly running distance by >30% between consecutive weeks at least once in the 4 weeks prior to injury." However, this has not been proven and is still debated. == Overview == > "The causes of running injuries are so multifactorial and diverse, and > apparently vary greatly from individual to individual, that any preventive > measure proposed would probably be of help to only a small minority. Common overuse injuries include shin splints, stress fractures, Achilles tendinitis, Iliotibial band syndrome, Patellofemoral pain (runner's knee), and plantar fasciitis. In general, overuse injuries are the result of repetitive impact between the foot and the ground. The 2013 European Triathlon Championships was held in Alanya, Turkey from 14 June to 16 June 2013. ==Medallists== Elite Elite Elite Elite Elite Elite Elite Men 1:42:09 1:42:16 1:42:22 Women 1:55:43 1:55:45 1:55:53 Mixed Relay 1:32:05 1:32:25 1:32:29 Junior Junior Junior Junior Junior Junior Junior Men 0:52:40 0:52:59 0:53:05 Women 0:58:46 0:59:03 0:59:11 Mixed Relay 1:34:18 1:34:38 1:34:46 == Results == === Men's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course Rank # Triathlete Swimming Cycling Running Total time Difference 1 16:46 0:53:18 30:34 1:42:09 — 2 16:42 0:53:23 30:43 1:42:16 +00:07 3 17:04 0:54:46 29:05 1:42:22 +00:13 4 46 16:39 0:53:23 31:07 1:42:42 +00:33 5 7 16:37 0:54:46 31:16 1:42:48 +00:39 6 8 16:41 0:53:28 31:26 1:42:57 +00:48 7 4 16:44 0:53:27 30:07 1:43:29 +01:20 8 9 16:51 0:53:22 30:14 1:43:31 +01:22 9 22 16:46 0:55:11 32:05 1:43:43 +01:34 10 11 17:06 0:55:02 30:31 1:43:50 +01:41 11 5 17:09 0:53:19 30:34 1:43:54 +01:45 12 55 17:08 0:54:44 30:34 1:44:01 +01:52 13 29 17:14 0:54:40 30:50 1:44:16 +02:07 14 19 17:06 0:54:49 31:08 1:44:25 +02:16 15 12 17:02 0:54:40 31:18 1:44:34 +02:25 16 10 16:49 0:54:46 31:29 1:44:48 +02:39 17 23 17:08 0:54:47 31:38 1:44:59 +02:50 18 21 16:44 0:55:01 33:32 1:45:10 +03:01 19 31 16:50 0:54:41 32:06 1:45:27 +03:18 20 24 17:10 0:53:20 32:36 1:45:53 +03:44 21 41 17:05 0:54:59 32:43 1:46:03 +03:54 22 33 17:30 0:54:42 30:48 1:46:13 +04:04 23 14 17:42 0:54:46 31:00 1:46:24 +04:15 24 42 17:07 0:56:31 33:29 1:46:54 +04:45 25 26 16:44 0:56:19 35:44 1:47:17 +05:08 Source: Official results === Women's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * 
Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course Rank # Triathlete Swimming Cycling Running Total time Difference 9 18:09 1:02:11 33:53 1:55:43 — 22 18:17 1:02:07 33:55 1:55:45 +00:02 4 18:21 1:02:03 34:06 1:55:53 +00:10 4 11 18:27 1:02:07 34:17 1:56:03 +00:20 5 37 18:25 1:02:03 34:24 1:56:15 +00:32 6 10 18:16 1:01:56 34:55 1:56:39 +00:56 7 23 18:19 1:02:03 34:49 1:56:42 +00:59 8 20 18:22 1:02:03 35:05 1:56:55 +01:12 9 12 18:12 1:02:01 35:15 1:57:07 +01:24 10 14 19:03 1:02:03 33:56 1:57:09 +01:26 11 3 18:20 1:02:11 35:33 1:57:18 +01:35 12 6 18:20 1:02:47 35:37 1:57:26 +01:43 13 28 18:22 1:02:02 35:49 1:57:40 +01:57 14 17 18:13 1:02:02 36:01 1:57:50 +02:07 15 5 18:39 1:01:56 34:38 1:57:52 +02:09 16 21 18:19 1:02:10 36:07 1:57:59 +02:16 17 26 19:03 1:03:11 34:50 1:58:08 +02:25 18 24 18:07 1:02:04 36:28 1:58:19 +02:36 19 33 18:19 1:02:44 36:32 1:58:24 +02:41 20 1 19:07 1:02:14 35:24 1:58:40 +02:57 21 16 18:24 1:02:04 37:07 1:59:02 +03:19 22 15 18:48 1:02:39 35:59 1:59:19 +03:36 23 18 18:59 1:02:01 36:29 1:59:42 +03:59 24 19 19:06 1:02:55 34:37 1:59:47 +04:04 25 2 19:00 1:02:47 36:43 1:59:57 +04:14 Source: Official results == References == == External links == * Official page Category:European Triathlon Championships Category:Triathlon in Turkey Category:International sports competitions hosted by Turkey Category:Alanya The 2015 European Triathlon Championships was held in Geneva, Switzerland from 9 July to 12 July 2015. ==Medallists== Elite Elite Elite Elite Elite Elite Elite Men 1:52:55 1:53:13 1:53:16 Women 2:07:15 2:08:14 2:09:16 Mixed Relay Jeanne Lehair David Hauss Emmie Charayron Simon Viain 1:25:21 Jolanda Annen Andrea Salvisberg Nicola Spirig Sven Riederer 1:25:30 Jodie Stimpson Lucy Hall Thomas Bishop Matthew Sharp 1:25:31 Junior Junior Junior Junior Junior Junior Junior Men 0:57:41 0:57:42 0:57:42 Women 1:04:06 1:04:42 1:04:50 Mixed Relay Margot Garabedian Maxime Hueber-Moosbrugger Emilie Morier Léo Bergere 1:28:37 Lena Meißner Linus Stimmel Lisa Tertsch Lasse Lührs 1:28:59 Alberte Kjær Pedersen Daniel Bækkegård Anne Holm Emil Deleuran Hansen 1:29:14 == Results == === Men's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course 0 Rank # Triathlete Swimming Cycling Running Total time Difference 16 17:54 1:02:39 31:19 1:52:55 — 7 17:55 1:02:38 31:42 1:53:13 +00:18 24 18:06 1:02:25 31:43 1:53:16 +00:21 4 9 17:41 1:02:38 31:46 1:53:20 +00:25 5 18 18:08 1:02:25 31:42 1:53:22 +00:27 6 1 17:41 1:02:52 32:09 1:53:46 +00:51 7 23 17:54 1:02:26 32:29 1:54:00 +01:05 8 3 17:50 1:02:51 32:30 1:54:03 +01:08 9 5 28:08 1:02:38 00:00 1:54:07 +01:12 10 32 17:56 1:02:40 33:20 1:54:59 +02:04 11 41 18:10 0:00:00 33:30 1:55:08 +02:13 12 8 17:55 1:02:36 33:39 1:55:09 +02:14 13 50 17:49 1:02:24 33:55 1:55:27 +02:32 14 31 17:55 1:02:40 34:07 1:55:43 +02:48 15 15 17:53 1:02:42 34:15 1:55:49 +02:54 16 61 17:44 1:02:38 34:13 1:55:52 +02:57 17 21 18:46 1:02:40 32:23 1:56:42 +03:47 18 2 17:55 1:02:45 33:13 1:56:47 +03:52 19 36 17:56 1:04:35 35:11 1:56:52 +03:57 20 19 18:50 1:04:36 32:47 1:57:16 +04:21 21 14 18:58 1:02:34 33:03 
1:57:27 +04:32 22 26 17:51 1:04:34 33:20 1:57:40 +04:45 23 35 17:59 1:04:23 36:16 1:57:53 +04:58 24 34 19:01 1:05:13 33:40 1:58:05 +05:10 25 25 18:47 1:02:40 33:44 1:58:12 +05:17 Source: Official results === Women's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course 0 Rank # Triathlete Swimming Cycling Running Total time Difference 1 19:48 1:10:49 35:33 2:07:15 — 2 19:34 1:11:03 36:31 2:08:14 +00:59 7 19:46 1:10:51 37:31 2:09:16 +02:01 4 5 19:50 1:11:03 35:44 2:09:45 +02:30 5 9 18:48 1:10:51 38:14 2:09:59 +02:44 6 22 20:39 1:13:07 36:13 2:10:20 +03:05 7 30 19:42 1:11:48 38:40 2:10:26 +03:11 8 20 20:33 1:12:21 36:25 2:10:33 +03:18 9 6 19:34 1:10:54 37:54 2:10:50 +03:35 10 11 20:23 1:12:27 36:52 2:10:56 +03:41 11 37 18:50 1:12:18 39:11 2:11:00 +03:45 12 24 20:32 1:12:36 00:00 2:11:11 +03:56 13 17 20:28 1:11:45 37:04 2:11:14 +03:59 14 16 19:33 0:00:00 38:25 2:11:23 +04:08 15 8 18:55 1:12:33 39:35 2:11:28 +04:13 16 25 19:56 1:12:17 37:49 2:11:57 +04:42 17 35 20:27 1:11:42 38:20 2:12:24 +05:09 18 23 20:34 1:12:58 00:00 2:12:38 +05:23 19 36 20:44 1:12:30 35:40 2:12:49 +05:34 20 21 20:30 0:00:00 38:51 2:13:02 +05:47 21 12 19:44 1:15:07 40:01 2:13:04 +05:49 22 44 19:36 1:12:26 39:11 2:13:19 +06:04 23 39 19:52 1:12:09 39:15 2:13:24 +06:09 24 18 20:27 1:13:20 39:19 2:13:27 +06:12 25 15 19:53 1:13:06 40:13 2:14:24 +07:09 Source: Official results == References == == External links == * Official page Category:European Triathlon Championships Category:Triathlon in Switzerland Category:International sports competitions hosted by Switzerland Category:Geneva The 2017 European Triathlon Championships was held in Kitzbühel, Austria from 16 June to 18 June 2017. 
==Medallists== Elite Elite Elite Elite Elite Elite Elite Men 1:45:31 1:45:32 1:45:35 Women 1:57:50 1:58:05 1:58:31 Mixed Relay Anne Holm Andreas Schilling Sif Bendix Madsen Emil Deleuran Hansen 1:15:17 Cassandre Beaugrand Simon Viain Emilie Morier Raphael Montoya 1:15:24 Anastasia Gorbunova Dmitry Polyanskiy Anastasia Abrosimova Vladimir Turbaevskiy 1:15:32 Junior Junior Junior Junior Junior Junior Junior Men 53:39 53:40 53:40 Women 59:20 59:23 59:34 Mixed Relay Lili Mátyus Gergő Soós Dorka Putnóczki Csongor Lehmann 1:18:31 Daria Lushnikova Mikhail Antipov Ekaterina Matiukh Grigory Antipov 1:19:01 Bianca Bogen Moritz Horn Nina Eim Tim Siepmann 1:19:15 == Results == === Men's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course Rank # Triathlete Swimming Cycling Running Total time Difference 4 18:52 53:34 31:54 1:45:31 — 21 18:49 53:36 31:52 1:45:32 +0:01 5 18:57 53:29 31:58 1:45:35 +0:04 4 9 18:49 53:35 32:02 1:45:40 +0:09 5 10 18:44 53:42 32:12 1:45:47 +0:16 6 7 19:25 53:47 31:29 1:45:51 +0:20 7 6 19:11 54:03 31:32 1:45:54 +0:23 8 17 19:00 53:29 32:32 1:46:12 +0:81 9 16 19:21 53:46 32:01 1:46:16 +0:85 10 1 18:38 53:49 32:45 1:46:21 +0:90 11 11 19:23 53:47 32:12 1:46:33 +1:02 12 20 18:46 54:22 32:24 1:46:47 +1:16 13 18 19:00 53:26 33:18 1:46:50 +1:19 14 8 19:04 53:21 33:16 1:46:51 +1:20 15 37 19:02 53:20 33:12 1:46:51 +1:20 16 28 18:41 53:44 33:18 1:46:52 +1:21 17 3 18:39 53:44 33:16 1:46:52 +1:21 18 39 19:13 53:39 32:27 1:46:54 +1:23 19 14 19:23 53:48 33:04 1:47:20 +1:89 20 38 19:00 53:25 33:49 1:47:25 +1:94 21 40 19:16 47:00 33:01 1:47:27 +1:96 22 49 19:15 53:55 33:06 1:47:30 +1:99 23 15 19:10 54:00 33:34 1:47:50 +2:19 24 2 18:43 53:41 34:31 1:48:03 +2:72 25 25 18:42 53:44 34:43 1:48:15 +2:84 26 29 19:25 53:48 33:53 1:48:21 +2:90 27 33 19:20 53:49 34:07 1:48:23 +2:92 28 27 19:20 53:51 34:07 1:48:30 +2:99 29 43 19:14 53:54 34:15 1:48:35 +3:04 30 45 18:41 53:42 35:08 1:48:44 +3:13 31 32 19:17 53:56 34:40 1:48:58 +3:27 32 26 19:21 53:50 34:53 1:49:09 +3:78 33 23 18:45 53:41 36:04 1:49:44 +4:13 34 53 19:18 53:52 36:12 1:50:37 +5:06 35 50 18:54 53:31 37:26 1:51:07 +5:76 36 31 18:59 54:11 36:58 1:51:18 +5:87 37 22 19:32 57:38 33:10 1:51:30 +5:99 38 36 19:32 56:31 34:38 1:51:53 +6:22 39 48 19:08 54:02 37:41 1:52:03 +6:72 40 47 19:17 57:51 35:10 1:53:33 +8:02 41 41 19:30 57:39 36:05 1:54:22 +8:91 42 34 19:06 58:02 36:11 1:54:31 +9:00 43 19 20:55 56:11 36:58 1:55:14 +9:83 44 55 19:27 59:18 35:20 1:55:24 +9:93 45 51 20:55 56:11 37:26 1:55:44 +10:13 46 52 19:22 57:47 38:25 1:56:47 +11:16 47 54 19:31 59:18 37:55 1:57:54 +12:23 — 35 20:13 56:55 did not finish did not finish did not finish — 12 18:36 53:47 did not finish did not finish did not finish — 24 19:23 did not finish did not finish did not finish did not finish — 42 19:31 did not finish did not finish did not finish did not finish — 44 18:57 did not advance did not advance did not advance did not advance — 46 19:08 Lapped Lapped Lapped Lapped Source: Official results === Women's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete 
to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course Rank # Triathlete Swimming Cycling Running Total time Difference 8 19:09 1:00:11 37:14 1:57:50 15 19:49 59:34 37:27 1:58:05 +0:15 14 19:45 59:36 37:53 1:58:31 +0:41 4 3 20:53 1:00:44 35:51 1:58:41 +0:51 5 9 20:56 1:00:45 35:44 1:58:41 +0:51 6 6 20:18 1:01:21 35:55 1:58:47 +0:57 7 7 20:51 1:00:46 36:08 1:59:00 +1:10 8 10 19:50 1:01:51 36:29 1:59:24 +1:34 9 2 21:00 1:00:43 36:30 1:59:28 +1:38 10 23 20:05 1:03:13 34:59 1:59:37 +1:47 11 19 19:27 59:52 39:06 1:59:46 +1:56 12 11 21:00 1:00:39 37:15 2:00:13 +2:23 13 12 20:59 1:00:41 37:29 2:00:24 +2:34 14 22 20:57 1:00:43 37:42 2:00:37 +2:47 15 1 20:16 1:01:22 38:27 2:01:24 +3:34 16 17 Michelle Flipo 20:56 1:02:16 37:38 2:02:14 +4:24 17 27 20:48 1:00:47 39:34 2:02:28 +4:38 18 26 21:13 1:02:02 38:01 2:02:41 +4:51 19 18 21:02 1:02:13 38:37 2:03:09 +5:19 20 4 21:14 1:02:00 38:46 2:03:19 +5:29 21 24 21:01 1:00:39 42:08 2:05:07 +7:17 22 20 20:17 1:01:20 44:13 2:07:13 +9:23 16 20:17 1:01:26 did not finish did not finish did not finish did not finish 28 22:20 did not finish did not finish did not finish did not finish 21 21:13 did not finish did not finish did not finish did not finish 29 23:34 did not finish did not finish did not finish did not finish 25 21:12 Lapped Lapped Lapped Lapped 30 22:50 Lapped Lapped Lapped Lapped Source: Official results == References == == External links == * Official page Category:European Triathlon Championships Category:Triathlon in Austria Category:2017 in Austrian sport Category:June 2017 sports events in Europe Category:International sports competitions hosted by Austria Category:Kitzbühel Running injuries (or running-related injuries, RRI) affect about half of runners annually. The 2016 European Triathlon Championships was held in Lisbon, Portugal from 26 May to 29 May 2016. 
==Medallists== Elite Elite Elite Elite Elite Elite Elite Men 1:49:30 1:50:09 1:50:32 Women 2:04:03 2:04:19 2:04:24 Mixed Relay Lucy Hall Thomas Bishop India Lee Grant Sheldon 1:07:03 Mariya Shorets Igor Polyanskiy Alexandra Razarenova Dmitry Polyanskiy 1:07:08 Zsófia Kovács Tamás Tóth Margit Vanek Ákos Vanek 1:07:19 Junior Junior Junior Junior Junior Junior Junior Men 0:58:03 0:58:04 0:58:08 Women 1:02:42 1:02:54 1:03:14 Mixed Relay Sian Rainsley Samuel Dickinson Kate Waugh Alex Yee 1:07:48 Ines Santiago Moron Alberto Gonzalez Garcia Cecilia Santamaria Surroca Javier Lluch Perez 1:07:59 Lena Meißner Paul Weindl Lisa Tertsch Moritz Horn 1:08:00 == Results == === Men's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course Rank # Triathlete Swimming Cycling Running Total time Difference 9 16:55 0:59:45 31:25 1:49:30 — 6 16:51 0:59:51 32:02 1:50:09 +00:39 19 16:49 0:59:47 32:31 1:50:32 +01:02 4 8 16:58 0:59:51 32:32 1:50:37 +01:07 5 15 16:46 0:59:47 32:30 1:50:38 +01:08 6 10 17:13 0:59:39 31:00 1:50:39 +01:09 7 16 17:07 0:59:51 30:57 1:50:40 +01:10 8 7 16:45 1:01:03 32:40 1:50:48 +01:18 9 14 17:14 1:01:09 31:12 1:51:01 +01:31 10 52 16:58 0:59:57 31:44 1:51:05 +01:35 11 38 16:52 1:01:07 33:01 1:51:09 +01:39 12 4 16:44 1:00:55 33:07 1:51:12 +01:42 13 12 17:10 0:59:48 31:44 1:51:31 +02:01 14 24 17:14 0:59:53 00:00 1:51:46 +02:16 15 48 16:54 1:01:09 34:14 1:51:49 +02:19 16 45 17:18 0:00:00 32:00 1:51:50 +02:20 17 30 17:06 0:59:45 32:23 1:52:03 +02:33 18 26 17:05 1:01:04 32:19 1:52:05 +02:35 19 5 16:50 1:01:10 34:00 1:52:09 +02:39 20 42 17:19 1:01:09 32:32 1:52:14 +02:44 21 36 17:06 0:59:46 33:07 1:52:22 +02:52 22 29 17:09 1:00:55 32:46 1:52:24 +02:54 23 23 17:13 1:01:08 32:50 1:52:29 +02:59 24 33 17:07 1:01:05 33:40 1:52:31 +03:01 25 18 17:12 1:01:00 32:45 1:52:34 +03:04 Source: Official results === Women's === ;Key * # denotes the athlete's bib number for the event * Swimming denotes the time it took the athlete to complete the swimming leg * Cycling denotes the time it took the athlete to complete the cycling leg * Running denotes the time it took the athlete to complete the running leg * Difference denotes the time difference between the athlete and the event winner * Lapped denotes that the athlete was lapped and removed from the course 0 Rank # Triathlete Swimming Cycling Running Total time Difference 20 18:41 1:06:05 36:57 2:04:03 — 6 19:11 1:07:09 35:36 2:04:19 +00:16 9 18:53 1:07:43 35:38 2:04:24 +00:21 4 1 18:04 1:07:09 36:03 2:04:40 +00:37 5 14 19:26 1:07:43 36:03 2:04:45 +00:42 6 7 19:05 1:08:27 36:03 2:04:51 +00:48 7 25 18:28 1:07:02 36:13 2:05:04 +01:01 8 11 18:29 1:07:21 36:28 2:05:09 +01:06 9 3 19:24 1:08:03 36:41 2:05:23 +01:20 10 2 17:55 1:07:49 38:20 2:05:29 +01:26 11 16 18:28 1:07:05 36:55 2:05:43 +01:40 12 26 18:09 1:06:58 37:07 2:05:48 +01:45 13 17 19:08 1:07:57 37:07 2:05:53 +01:50 14 19 18:48 1:08:21 37:09 2:06:00 +01:57 15 8 18:03 1:07:21 37:12 2:06:02 +01:59 16 24 18:06 1:07:42 37:17 2:06:03 +02:00 17 30 18:05 1:08:27 38:05 2:06:49 +02:46 18 22 18:31 1:08:25 38:42 2:07:23 +03:20 19 15 18:07 1:08:19 39:17 2:08:05 +04:02 20 29 18:08 1:07:52 39:56 2:08:36 +04:33 21 27 18:40 1:08:24 40:35 2:09:11 
+05:08 22 32 18:03 1:08:07 40:57 2:09:39 +05:36 23 21 18:44 1:07:46 42:42 2:11:22 +07:19 24 34 18:32 1:08:21 43:10 2:12:01 +07:58 25 28 19:27 1:07:30 42:23 2:13:02 +08:59 Source: Official results == References == == External links == * Official page Category:European Triathlon Championships Category:Triathlon in Portugal Category:International sports competitions hosted by Portugal Category:Lisbon Injured runners were heavier. Some injuries are acute, caused by sudden overstress, such as side stitch, strains, and sprains. Instead of resulting from a single severe impact, stress fractures are the result of accumulated injury from repeated submaximal loading, such as running or jumping. In the 1984 Bern 16 km race questionnaire, runners who had no shoe brand preference and presumably changed brands frequently had significantly fewer running injuries. These findings suggest that focusing on proper running form, particularly when fatigued, could reduce the risk of running-related injuries. Pete Jacobs File:silver medal icon.svg 2011 File:gold medal icon.svg 2012 Sebastian Kienle File:bronze medal icon.svg 2013 File:gold medal icon.svg 2014 File:silver medal icon.svg 2016 Patrick Lange File:bronze medal icon.svg 2016 File:gold medal icon.svg 2017 He is the record holder for the Ironman World Championship James Lawrence Holds record for most triathlons completed in a single year Chris Lieto File:silver medal icon.svg 2009 Eneko Llanos (23) 2000 (20) 2004 File:silver medal icon.svg 2008 Chris McCormack File:gold medal icon.svg 2010 File:gold medal icon.svg 2007 File:silver medal icon.svg 2006 File:gold medal icon.svg 1997 File:gold medal icon.svg 1997 Javier Gomez File:silver medal icon.svg 2012 File:silver medal icon.svg 2007 File:gold medal icon.svg 2008 File:silver medal icon.svg 2009 File:gold medal icon.svg 2010 File:bronze medal icon.svg 2011 File:silver medal icon.svg 2012 File:gold medal icon.svg 2013 File:gold medal icon.svg 2014 File:gold medal icon.svg 2006 File:gold medal icon.svg 2007 File:gold medal icon.svg 2008 Andreas Raelert (12) 2000 (6) 2004 File:bronze medal icon.svg 2009 File:silver medal icon.svg 2010 File:bronze medal icon.svg 2011 File:silver medal icon.svg 2012 File:silver medal icon.svg 2015 Jan Rehula File:bronze medal icon.svg 2000 Sven Riederer File:bronze medal icon.svg 2004 (23) 2008 Marino Vanhoenacker File:bronze medal icon.svg 2010 Stephan Vuckovic File:silver medal icon.svg 2000 Simon Whitfield File:gold medal icon.svg 2000 (11) 2004 File:silver medal icon.svg 2008 (DNF) 2012 First man to win a gold medal at the Olympics ==Women== Name Country Olympics Ironman WTS WC Other Ref Kate Allen File:gold medal icon.svg 2004 (14) 2008 Erin Densham (22) 2008 File:bronze medal icon.svg 2012 Vanessa Fernandes (8) 2004 File:silver medal icon.svg 2008 Jude Flannery From 1991-96, won six US age group national championship and four world age-group triathlon championships. Injuries are more likely to occur in novice barefoot runners. 
Rutger Beke File:silver medal icon.svg 2008 Alistair Brownlee (12) 2008 File:gold medal icon.svg 2012 File:gold medal icon.svg 2016 File:gold medal icon.svg 2009 File:gold medal icon.svg 2011 Jonathan Brownlee File:bronze medal icon.svg 2012 File:silver medal icon.svg 2016 File:gold medal icon.svg 2012 File:silver medal icon.svg 2013 File:bronze medal icon.svg 2014 Hamish Carter (26) 2000 File:gold medal icon.svg 2004 Bevan Docherty File:silver medal icon.svg 2004 File:bronze medal icon.svg 2008 (12) 2012 Jan Frodeno File:gold medal icon.svg 2008 (6) 2012 File:bronze medal icon.svg 2014 File:gold medal icon.svg 2015 File:gold medal icon.svg 2016 Arthur Gilbert At 90 years of age, confirmed as the world's oldest competing triathlete in 2011. Extrinsic risk factors include deconditioning, hard surfaces, inadequate stretching and poor footwear. == Footwear == === Traditional running shoes === Study participants wearing running shoes with moderate lateral torsional stiffness "were 49% less likely to incur any type of lower extremity injury and 52% less likely to incur an overuse lower extremity injury than" participants wearing running shoes with minimal lateral torsional stiffness, both of which were statistically significant observations."
129
3.8
0.08
0.5061
-20
D
6.1-1. One characteristic of a car's storage console that is checked by the manufacturer is the time in seconds that it takes for the lower storage compartment door to open completely. A random sample of size $n=5$ yielded the following times: $\begin{array}{lllll}1.1 & 0.9 & 1.4 & 1.1 & 1.0\end{array}$ (a) Find the sample mean, $\bar{x}$.
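Part (a) only needs the five listed times; a quick check of the sample mean:

```python
# Sample mean of the five door-opening times (seconds) given in the question.
times = [1.1, 0.9, 1.4, 1.1, 1.0]
x_bar = sum(times) / len(times)
print(x_bar)  # 1.1
```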
Where \bar{x_i} and w_i are the mean and size of sample i respectively. The mean of is the residence time, : \bar{\tau}(y_0) = E[\tau(y_0)\mid y_0]. Similarly, the mean of a sample x_1,x_2,\ldots,x_n, usually denoted by \bar{x}, is the sum of the sampled values divided by the number of items in the sample. : \bar{x} = \frac{1}{n}\left (\sum_{i=1}^n{x_i}\right ) = \frac{x_1+x_2+\cdots +x_n}{n} For example, the arithmetic mean of five values: 4, 36, 45, 50, 75 is: :\frac{4+36+45+50+75}{5} = \frac{210}{5} = 42. ==== Geometric mean (GM) ==== The geometric mean is an average that is useful for sets of positive numbers, that are interpreted according to their product (as is the case with rates of growth) and not their sum (as is the case with the arithmetic mean): :\bar{x} = \left( \prod_{i=1}^n{x_i} \right )^\frac{1}{n} = \left(x_1 x_2 \cdots x_n \right)^\frac{1}{n} For example, the geometric mean of five values: 4, 36, 45, 50, 75 is: :(4 \times 36 \times 45 \times 50 \times 75)^\frac{1}{5} = \sqrt[5]{24\;300\;000} = 30. ==== Harmonic mean (HM) ==== The harmonic mean is an average which is useful for sets of numbers which are defined in relation to some unit, as in the case of speed (i.e., distance per unit of time): : \bar{x} = n \left ( \sum_{i=1}^n \frac{1}{x_i} \right ) ^{-1} For example, the harmonic mean of the five values: 4, 36, 45, 50, 75 is :\frac{5}{\tfrac{1}{4}+\tfrac{1}{36}+\tfrac{1}{45} + \tfrac{1}{50} + \tfrac{1}{75}} = \frac{5}{\;\tfrac{1}{3}\;} = 15. ==== Relationship between AM, GM, and HM ==== AM, GM, and HM satisfy these inequalities: : \mathrm{AM} \ge \mathrm{GM} \ge \mathrm{HM} \, Equality holds if all the elements of the given sample are equal. ===Statistical location=== thumb|100px|Geometric visualization of the mode, median and mean of an arbitrary probability density function. If the data set were based on a series of observations obtained by sampling from a statistical population, the arithmetic mean is the sample mean (\bar{x}) to distinguish it from the mean, or expected value, of the underlying distribution, the population mean (denoted \mu or \mu_x).Underhill, L.G.; Bradfield d. (1998) Introstat, Juta and Company Ltd. p. 181 Outside probability and statistics, a wide range of other notions of mean are often used in geometry and mathematical analysis; examples are given below. ==Types of means== ===Pythagorean means=== ==== Arithmetic mean (AM) ==== The arithmetic mean (or simply mean) of a list of numbers, is the sum of all of the numbers divided by the number of numbers. The mean of a probability distribution is the long-run average value of a random variable with that distribution. The mean of this time is the residence time, : \bar{\tau}(y_0) = \operatorname{E}[\tau(y_0)\mid y_0]. ===Logarithmic residence time=== The logarithmic residence time is a dimensionless variation of the residence time. It is simply the arithmetic mean after removing the lowest and the highest quarter of values. : \bar{x} = \frac{2}{n} \;\sum_{i = \frac{n}{4} + 1}^{\frac{3}{4}n}\\!\\! x_i assuming the values have been ordered, so is simply a specific example of a weighted mean for a specific set of weights. ===Mean of a function=== In some circumstances, mathematicians may calculate a mean of an infinite (or even an uncountable) set of values. For a discrete probability distribution, the mean is given by \textstyle \sum xP(x), where the sum is taken over all possible values of the random variable and P(x) is the probability mass function. 
The arithmetic mean of a set of numbers x1, x2, ..., xn is typically denoted using an overhead bar, \bar{x}. The mean sojourn time (or sometimes mean waiting time) for an object in a system is the amount of time an object is expected to spend in a system before leaving the system for good. == Calculation == Imagine you are standing in line to buy a ticket at the counter. For a continuous distribution, the mean is \textstyle \int_{-\infty}^{\infty} xf(x)\,dx, where f(x) is the probability density function. In all cases, including those in which the distribution is neither discrete nor continuous, the mean is the Lebesgue integral of the random variable with respect to its probability measure. For example, when n = 50 it takes about 225E(50) = 50(1 + 1/2 + 1/3 + ... + 1/50) = 224.9603, the expected number of trials to collect all 50 coupons. The approximation n\log n+\gamma n+1/2 for this expected number gives in this case 50\log 50+50\gamma+1/2 \approx 195.6011+28.8608+0.5\approx 224.9619. trials on average to collect all 50 coupons. ==Solution== ===Calculating the expectation=== Let time T be the number of draws needed to collect all n coupons, and let ti be the time to collect the i-th coupon after i − 1 coupons have been collected. thumb|400px|Graph of number of coupons, n vs the expected number of trials (i.e., time) needed to collect them all, E (T ) In probability theory, the coupon collector's problem describes "collect all coupons and win" contests. In statistics, the residence time is the average amount of time it takes for a random process to reach a certain boundary value, usually a boundary far from the mean. ==Definition== Suppose is a real, scalar stochastic process with initial value , mean and two critical values }, where and . The mathematical analysis of the problem reveals that the expected number of trials needed grows as \Theta(n\log(n)). The law of averages is the commonly held belief that a particular outcome or event will, over certain periods of time, occur at a frequency that is similar to its probability. A typical estimate for the sample variance from a set of sample values x_i uses a divisor of the number of values minus one, n-1, rather than n as in a simple quadratic mean, and this is still called the "mean square" (e.g. in analysis of variance): :s^2=\textstyle\frac{1}{n-1}\sum(x_i-\bar{x})^2 The second moment of a random variable, E(X^{2}) is also called the mean square. There are several kinds of mean in mathematics, especially in statistics. That is, the mean sojourn time of a subsystem is the total time a particle is expected to spend in the subsystem s before leaving the system S for good. In descriptive statistics, the mean may be confused with the median, mode or mid-range, as any of these may incorrectly be called an "average" (more formally, a measure of central tendency).
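The passage above works the arithmetic, geometric, and harmonic means of 4, 36, 45, 50, 75 out to 42, 30, and 15. A small sketch reproducing those three values and the AM ≥ GM ≥ HM ordering it states:

```python
import math

# Reproduce the AM/GM/HM example worked in the passage above.
xs = [4, 36, 45, 50, 75]
am = sum(xs) / len(xs)                 # arithmetic mean
gm = math.prod(xs) ** (1 / len(xs))    # geometric mean
hm = len(xs) / sum(1 / x for x in xs)  # harmonic mean
print(am, gm, hm)                      # approximately 42, 30, 15
print(am >= gm >= hm)                  # True
```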
24
0.1792
1.1
3.2
2
C
5.5-1. Let $X_1, X_2, \ldots, X_{16}$ be a random sample from a normal distribution $N(77,25)$. Compute (b) $P(74.2<\bar{X}<78.4)$.
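A sketch of part (b), assuming the usual sampling-distribution result that $\bar{X} \sim N(77, 25/16)$ for a sample of $n = 16$ from $N(77, 25)$; only the standard library is used, with the normal CDF written via the error function.

```python
import math

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, n = 77.0, 5.0, 16
se = sigma / math.sqrt(n)                          # 1.25
p = phi((78.4 - mu) / se) - phi((74.2 - mu) / se)  # P(74.2 < Xbar < 78.4)
print(round(p, 4))                                 # approximately 0.8561
```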
If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. To compute the probability that an observation is within two standard deviations of the mean (small differences due to rounding): \Pr(\mu-2\sigma \le X \le \mu+2\sigma) = \Phi(2) - \Phi(-2) \approx 0.9772 - (1 - 0.9772) \approx 0.9545 This is related to confidence interval as used in statistics: \bar{X} \pm 2\frac{\sigma}{\sqrt{n}} is approximately a 95% confidence interval when \bar{X} is the average of a sample of size n. ==Normality tests== The "68–95–99.7 rule" is often used to quickly get a rough probability estimate of something, given its standard deviation, if the population is assumed to be normal. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. Then, denoting c as the 97.5th percentile of this distribution, : \Pr(-c\le T \le c)=0.95 Note that "97.5th" and "0.95" are correct in the preceding expressions. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Consequently, : \Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95\, and we have a theoretical (stochastic) 95% confidence interval for μ. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... In mathematical notation, these facts can be expressed as follows, where is the probability function, is an observation from a normally distributed random variable, (mu) is the mean of the distribution, and (sigma) is its standard deviation: \begin{align} \Pr(\mu-1\sigma \le X \le \mu+1\sigma) & \approx 68.27\% \\\ \Pr(\mu-2\sigma \le X \le \mu+2\sigma) & \approx 95.45\% \\\ \Pr(\mu-3\sigma \le X \le \mu+3\sigma) & \approx 99.73\% \end{align} The usefulness of this heuristic especially depends on the question under consideration. We only need to calculate each integral for the cases n = 1,2,3. \begin{align} &\Pr(\mu -1\sigma \leq X \leq \mu + 1\sigma) = \frac{1}{\sqrt{2\pi}} \int_{-1}^{1} e^{-\frac{u^2}{2}}du \approx 0.6827 \\\ &\Pr(\mu -2\sigma \leq X \leq \mu + 2\sigma) =\frac{1}{\sqrt{2\pi}}\int_{-2}^{2} e^{-\frac{u^2}{2}}du \approx 0.9545 \\\ &\Pr(\mu -3\sigma \leq X \leq \mu + 3\sigma) = \frac{1}{\sqrt{2\pi}}\int_{-3}^{3} e^{-\frac{u^2}{2}}du \approx 0.9973. \end{align} ==Cumulative distribution function== These numerical values "68%, 95%, 99.7%" come from the cumulative distribution function of the normal distribution. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. After observing the sample we find values for and s for S, from which we compute the confidence interval : \left[ \bar{x} - \frac{cs}{\sqrt{n}}, \bar{x} + \frac{cs}{\sqrt{n}} \right]. 
== Interpretation == Various interpretations of a confidence interval can be given (taking the 95% confidence interval as an example in the following). thumb|upright=1.3|Each row of points is a sample from the same normal distribution. 77 (seventy-seven) is the natural number following 76 and preceding 78. Then the optimal 50% confidence procedure for \theta is : \bar{X} \pm \begin{cases} \dfrac{|X_1-X_2|}{2} & \text{if } |X_1-X_2| < 1/2 \\\\[8pt] \dfrac{1-|X_1-X_2|}{2} &\text{if } |X_1-X_2| \geq 1/2 . \end{cases} A fiducial or objective Bayesian argument can be used to derive the interval estimate : \bar{X} \pm \frac{1-|X_1-X_2|}{4}, which is also a 50% confidence procedure. In statistics, the 68–95–99.7 rule, also known as the empirical rule, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively. The probable error can also be expressed as a multiple of the standard deviation σ,Zwillinger, D.; Kokosa, S. (2000) CRC Standard Probability and Statistics Tables and Formulae, Chapman & Hall/CRC. Thus, the probability that T will be between -c and +c is 95%. Some people even use the value of 2 in the place of 1.96, reporting a 95.4% confidence interval as a 95% confidence interval. Suppose we wanted to calculate a 95% confidence interval for μ. * The confidence interval can be expressed in terms of statistical significance, e.g.: For a large number of independent identically distributed random variables \ X_1, ..., X_n\ , with finite variance, the average \ \overline{X}_n\ approximately has a normal distribution, no matter what the distribution of the \ X_i\ is, with the approximation roughly improving in proportion to \ \sqrt{n\ }. == Example == Suppose {X1, …, Xn} is an independent sample from a normally distributed population with unknown parameters mean μ and variance σ2. 74 (seventy-four) is the natural number following 73 and preceding 75. ==In mathematics== 74 is: * the twenty-first distinct semiprime and the eleventh of the form 2×q. * a palindromic number in bases 6 (2026) and 36 (2236). * a nontotient. * the number of collections of subsets of {1, 2, 3} that are closed under union and intersection. * φ(74) = φ(σ(74)).
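The passage above quotes the 68–95–99.7 percentages and the two-sided 95% point at roughly ±1.96 standard deviations; a small numeric check of both claims using the error function:

```python
import math

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability within k standard deviations of the mean, k = 1, 2, 3.
for k in (1, 2, 3):
    print(k, round(phi(k) - phi(-k), 4))  # approximately 0.6827, 0.9545, 0.9973
print(round(phi(1.96) - phi(-1.96), 4))   # approximately 0.95
```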
13.2
0.8561
226.0
432
-7.5
B
5.3-3. Let $X_1$ and $X_2$ be independent random variables with probability density functions $f_1\left(x_1\right)=2 x_1, 0 < x_1 <1 $, and $f_2 \left(x_2\right) = 4x_2^3$ , $0 < x_2 < 1 $, respectively. Compute (a) $P \left(0.5 < X_1 < 1\right.$ and $\left.0.4 < X_2 < 0.8\right)$.
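Because $X_1$ and $X_2$ are independent, the event in (a) factors into two one-dimensional probabilities, each obtained by integrating the given density. The sketch below just evaluates that product; it is a check of the method, not of any particular listed option.

```python
# P(0.5 < X1 < 1 and 0.4 < X2 < 0.8) = P(0.5 < X1 < 1) * P(0.4 < X2 < 0.8) under independence.
# From the densities in the question, F1(x) = x**2 and F2(x) = x**4 on (0, 1).
p1 = 1.0**2 - 0.5**2      # integral of 2*x1 over (0.5, 1)
p2 = 0.8**4 - 0.4**4      # integral of 4*x2**3 over (0.4, 0.8)
print(round(p1 * p2, 4))  # 0.75 * 0.384 = 0.288
```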
That means, If X1,X2,…,Xn are discrete random variables, then the marginal probability mass function should be p_{X_i}(k)=\sum p(x_1,x_2,\dots,x_{i-1},k,x_{i+1},\dots,x_n); if X1,X2,…,Xn are continuous random variables, then the marginal probability density function should be f_{X_i}(x_i)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1,x_2,\dots,x_n) dx_1 dx_2 \cdots dx_{i-1} dx_{i+1} \cdots dx_n . ==See also== * Compound probability distribution * Joint probability distribution * Marginal likelihood * Wasserstein metric * Conditional distribution ==References== ==Bibliography== * * Category:Theory of probability distributions Assuming that X and Y are discrete random variables, the joint distribution of X and Y can be described by listing all the possible values of p(xi,yj), as shown in Table.3. That is, for any two random variables X1, X2, both have the same probability distribution if and only if \varphi_{X_1}=\varphi_{X_2}. The Pearson type III distribution is a gamma distribution or chi-squared distribution. ==== The Pearson type V distribution ==== Defining new parameters: :\begin{align} C_1 &= \frac{b_1}{2 b_2}, \\\ \lambda &= \mu_1-\frac{a-C_1} {1-2 b_2}, \end{align} x-\lambda follows an \operatorname{InverseGamma}(\frac{1}{b_2}-1,\frac{a-C_1}{b_2}). Apply the substitution :x = a_1 + y (a_2 - a_1), where 0, which yields a solution in terms of y that is supported on the interval (0, 1): :p(y) \propto \left(\frac{a_1-a_2}{a_1}y\right)^{(-a_1+a) u} \left(\frac{a_2-a_1}{a_2}(1-y)\right)^{(a_2-a) u}. The diagram on the right shows which Pearson type a given concrete distribution (identified by a point (β1, β2)) belongs to. Since this density is only known up to a hidden constant of proportionality, that constant can be changed and the density written as follows: :p(x) \propto \left(1-\frac{x}{a_1}\right)^{- u (a_1-a)} \left(1-\frac{x}{a_2}\right)^{ u (a_2-a)}. ==== The Pearson type I distribution ==== The Pearson type I distribution (a generalization of the beta distribution) arises when the roots of the quadratic equation (2) are of opposite sign, that is, a_1 < 0 < a_2. thumb|280px|right|The characteristic function of a uniform U(–1,1) random variable. Recall that: * For discrete random variables, F(x,y) = P(X\leq x, Y\leq y) * For continuous random variables, F(x,y) = \int_{a}^{x} \int_{c}^{y} f(x',y') \, dy' dx' If X and Y jointly take values on [a, b] × [c, d] then :F_X(x)=F(x,d) and F_Y(y)=F(b,y) If d is ∞, then this becomes a limit F_X(x) = \lim_{y \to \infty} F(x,y). The marginal distributions are shown in red and blue. 300px|thumb|Diagram of the Pearson system, showing distributions of types I, III, VI, V, and IV in terms of β1 (squared skewness) and β2 (traditional kurtosis) The Pearson distribution is a family of continuous probability distributions. Conditional distribution: P(H\mid L) Red Yellow Green Not Hit 0.99 0.9 0.2 Hit 0.01 0.1 0.8 To find the joint probability distribution, more data is required. Several different analyses may be done, each treating a different subset of variables as the marginal distribution. == Definition == === Marginal probability mass function === Given a known joint distribution of two discrete random variables, say, and , the marginal distribution of either variable – for example – is the probability distribution of when the values of are not taken into consideration. 
(Modern treatments define kurtosis γ2 in terms of cumulants instead of moments, so that for a normal distribution we have γ2 = 0 and β2 = 3. One can often start with the next step below, if bounds of the form (5.1) are already available (which is the case for many distributions). ===An abstract approximation theorem=== We are now in a position to bound the left hand side of (3.1). Multiplying each column in the conditional distribution by the probability of that column occurring results in the joint probability distribution of H and L, given in the central 2×3 block of entries. Joint distribution: Red Yellow Green Marginal probability P(H) Not Hit 0.198 0.09 0.14 0.428 Hit 0.002 0.01 0.56 0.572 Total 0.2 0.1 0.7 1 The marginal probability P(H = Hit) is the sum 0.572 along the H = Hit row of this joint distribution table, as this is the probability of being hit when the lights are red OR yellow OR green. That is :f_X(x) = \int_{c}^{d} f(x,y) \, dy :f_Y(y) = \int_{a}^{b} f(x,y) \, dx where x\in[a,b], and y\in[c,d]. === Marginal cumulative distribution function === Finding the marginal cumulative distribution function from the joint cumulative distribution function is easy. One specific case is the sum of two independent random variables X1 and X2 in which case one has \varphi_{X_1+X_2}(t) = \varphi_{X_1}(t)\cdot\varphi_{X_2}(t). * Let X and Y be two random variables with characteristic functions \varphi_{X} and \varphi_{Y}. Stein's method is a general method in probability theory to obtain bounds on the distance between two probability distributions with respect to a probability metric. From Theorem A we obtain that : (7.1)\quad d_W(\mathcal{L}(W),N(0,1)) \leq \frac{5 E|X_1|^3}{n^{1/2}}.
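The passage above builds a joint table for being hit (H) versus the light colour (L) by multiplying each conditional column by that light's probability, then sums a row to get the marginal. A short sketch redoing that arithmetic with the numbers quoted there:

```python
# Rebuild the joint distribution of (Hit, Light) from the conditional table P(Hit | L)
# and the light probabilities quoted above, then marginalize over the light colour.
p_light = {"Red": 0.2, "Yellow": 0.1, "Green": 0.7}
p_hit_given_light = {"Red": 0.01, "Yellow": 0.1, "Green": 0.8}

joint_hit = {c: p_hit_given_light[c] * p_light[c] for c in p_light}
print(joint_hit)                          # approximately 0.002, 0.01, 0.56
print(round(sum(joint_hit.values()), 3))  # marginal P(Hit) = 0.572
```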
3.7
1.44
210.0
7
0.01961
B
5.8-1. If $X$ is a random variable with mean 33 and variance 16, use Chebyshev's inequality to find (a) A lower bound for $P(23 < X < 43)$.
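Chebyshev's inequality gives $P(|X-\mu| < k\sigma) \ge 1 - 1/k^2$. Here the interval $(23, 43)$ is $\mu \pm 10$ with $\mu = 33$ and $\sigma^2 = 16$, so the bound is $1 - 16/10^2$; a quick check:

```python
# Chebyshev lower bound for P(23 < X < 43) when E[X] = 33 and Var(X) = 16.
mu, var = 33.0, 16.0
eps = 10.0                       # half-width of the interval (23, 43) around the mean
lower_bound = 1.0 - var / eps**2
print(lower_bound)               # 0.84
```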
Chebyshev's inequality is more general, stating that a minimum of just 75% of values must lie within two standard deviations of the mean and 88.89% within three standard deviations for a broad range of different probability distributions. Chebyshev's inequality states that at most approximately 11.11% of the distribution will lie at least three standard deviations away from the mean. The additional fraction of 4/9 present in these tail bounds lead to better confidence intervals than Chebyshev's inequality. To improve the sharpness of the bounds provided by Chebyshev's inequality a number of methods have been developed; for a review see eg.Savage, I. Richard. The second of these inequalities with is the Chebyshev bound. The Chebyshev inequality for the distribution gives 95% and 99% confidence intervals of approximately ±4.472 standard deviations and ±10 standard deviations respectively. ====Samuelson's inequality==== Although Chebyshev's inequality is the best possible bound for an arbitrary distribution, this is not necessarily true for finite samples. However, the benefit of Chebyshev's inequality is that it can be applied more generally to get confidence bounds for ranges of standard deviations that do not depend on the number of samples. ====Semivariances==== An alternative method of obtaining sharper bounds is through the use of semivariances (partial variances). By comparison, Chebyshev's inequality states that all but a 1/N fraction of the sample will lie within standard deviations of the mean. *Grechuk et al. developed a general method for deriving the best possible bounds in Chebyshev's inequality for any family of distributions, and any deviation risk measure in place of standard deviation. This inequality is related to Jensen's inequality, Kantorovich's inequality, the Hermite–Hadamard inequality and Walter's conjecture. ===Other inequalities=== There are also a number of other inequalities associated with Chebyshev: *Chebyshev's sum inequality *Chebyshev–Markov–Stieltjes inequalities ==Notes== The Environmental Protection Agency has suggested best practices for the use of Chebyshev's inequality for estimating confidence intervals. ==See also== *Multidimensional Chebyshev's inequality *Concentration inequality – a summary of tail-bounds on random variables. In this setting we can state the following: :General version of Chebyshev's inequality. \forall k > 0: \quad \Pr\left( \|X - \mu\|_\alpha \ge k \sigma_\alpha \right) \le \frac{1}{ k^2 }. One way to prove Chebyshev's inequality is to apply Markov's inequality to the random variable with a = (kσ)2: : \Pr(|X - \mu| \geq k\sigma) = \Pr((X - \mu)^2 \geq k^2\sigma^2) \leq \frac{\mathbb{E}[(X - \mu)^2]}{k^2\sigma^2} = \frac{\sigma^2}{k^2\sigma^2} = \frac{1}{k^2}. The American Statistician, 56(3), pp.186-190 When bounding the event random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality. In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. The first provides a lower bound for the value of P(x). 
==Finite samples== === Univariate case === Saw et al extended Chebyshev's inequality to cases where the population mean and variance are not known and may not exist, but the sample mean and sample standard deviation from N samples are to be employed to bound the expected value of a new drawing from the same distribution. Chebyshev Inequalities with Law Invariant Deviation Measures, Probability in the Engineering and Informational Sciences, 24(1), 145-170. ==Related inequalities== Several other related inequalities are also known. ===Paley–Zygmund inequality=== The Paley–Zygmund inequality gives a lower bound on tail probabilities, as opposed to Chebyshev's inequality which gives an upper bound.Godwin H. J. (1964) Inequalities on distribution functions. For k ≥ 1, n > 4 and assuming that the nth moment exists, this bound is tighter than Chebyshev's inequality. Chebyshev's inequality can now be written : \Pr(x \le m - k \sigma) \le \frac { 1 } { k^2 } \frac { \sigma_-^2 } { \sigma^2 }. On the other hand, for two-sided tail bounds, Cantelli's inequality gives : \Pr(|X-\mathbb{E}[X]|\ge\lambda) = \Pr(X-\mathbb{E}[X]\ge\lambda) + \Pr(X-\mathbb{E}[X]\le-\lambda) \le \frac{2\sigma^2}{\sigma^2 + \lambda^2}, which is always worse than Chebyshev's inequality (when \lambda \geq \sigma; otherwise, both inequalities bound a probability by a value greater than one, and so are trivial). ==Proof== Let X be a real-valued random variable with finite variance \sigma^2 and expectation \mu, and define Y = X - \mathbb{E}[X] (so that \mathbb{E}[Y] = 0 and \operatorname{Var}(Y) = \sigma^2). In terms of the lower semivariance Chebyshev's inequality can be written : \Pr(x \le m - a \sigma_-) \le \frac { 1 } { a^2 }. The Chebyshev inequality has "higher moments versions" and "vector versions", and so does the Cantelli inequality. == Comparison to Chebyshev's inequality == For one-sided tail bounds, Cantelli's inequality is better, since Chebyshev's inequality can only get : \Pr(X - \mathbb{E}[X] \geq \lambda) \leq \Pr(|X-\mathbb{E}[X]|\ge\lambda) \le \frac{\sigma^2}{\lambda^2}. For n = 2 we obtain Chebyshev's inequality.
0.84
1.16
1.2
257
0.33333333
A
6.3-5. Let $Y_1 < Y_2 < \cdots < Y_8$ be the order statistics of eight independent observations from a continuous-type distribution with 70th percentile $\pi_{0.7}=27.3$. (a) Determine $P\left(Y_7<27.3\right)$.
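$Y_7 < \pi_{0.7}$ happens exactly when at least seven of the eight observations fall below the 70th percentile, so part (a) reduces to a binomial tail with $p = 0.7$; a short check:

```python
from math import comb

# P(Y7 < pi_0.7) = P(at least 7 of 8 observations fall below the 70th percentile).
n, p = 8, 0.7
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (7, 8))
print(round(prob, 4))   # approximately 0.2553
```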
If the sample values are :6, 9, 3, 8, the order statistics would be denoted :x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=8,\ \ x_{(4)}=9,\, where the subscript enclosed in parentheses indicates the ith order statistic of the sample. Similar remarks apply to all sample quantiles. == Probabilistic analysis == Given any random variables X1, X2..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf. To find the probabilities of the k^\text{th} order statistics, three values are first needed, namely :p_1=P(X<x),\ p_2=P(X=x),\ p_3=P(X>x)=1-F(x). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n). == See also == * Rankit * Box plot * BRS-inequality * Concomitant (statistics) * Fisher–Tippett distribution * Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables * Bernstein polynomial * L-estimator – linear combinations of order statistics * Rank-size distribution * Selection algorithm === Examples of order statistics === * Sample maximum and minimum * Quantile * Percentile * Decile * Quartile * Median == References == == External links == * Retrieved Feb 02,2005 * Retrieved Feb 02,2005 * C++ source Dynamic Order Statistics Category:Nonparametric statistics Category:Summary statistics Category:Permutations When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution. == Notation and examples == For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. In other words, all n order statistics are needed from the n observations in a sample. Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. In music from Western culture, a seventh is a musical interval encompassing seven staff positions (see Interval number for more details), and the major seventh is one of two commonly occurring sevenths. Note that the order statistics also satisfy U_{(i)}=F_X(X_{(i)}). Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median. === Large sample sizes === For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by : U_{(\lceil np \rceil)} \sim AN\left(p,\frac{p(1-p)}{n}\right). One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. The 7.39 is a British drama television film that was broadcast in two parts on BBC One on 6 January and 7 January 2014. In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value.
Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of U_{(n)}-U_{(1)}, i.e. maximum minus the minimum. In statistics, some Monte Carlo methods require independent observations in a sample to be drawn from a one-dimensional distribution in sorted order. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant: :f_{U_{(1)},U_{(2)},\ldots,U_{(n)}}(u_{1},u_{2},\ldots,u_{n}) = n!. The small major seventh is a ratio of 9:5,Royal Society (Great Britain) (1880, digitized Feb 26, 2008). The sample median may or may not be an order statistic, since there is a single middle value only when the number of observations is odd. On the other hand, when is even, and there are two middle values, X_{(m)} and X_{(m+1)}, and the sample median is some function of the two (usually the average) and hence not an order statistic. The probability density function of the order statistic U_{(k)} is equal to. :f_{U_{(k)}}(u)={n!\over (k-1)!(n-k)!}u^{k-1}(1-u)^{n-k} that is, the kth order statistic of the uniform distribution is a beta-distributed random variable.
92
257
27.0
0.2553
2.84367
D
6.3-5. Let $Y_1 < Y_2 < \cdots < Y_8$ be the order statistics of eight independent observations from a continuous-type distribution with 70th percentile $\pi_{0.7}=27.3$. (a) Determine $P\left(Y_7<27.3\right)$.
If the sample values are :6, 9, 3, 8, the order statistics would be denoted :x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=8,\ \ x_{(4)}=9,\, where the subscript enclosed in parentheses indicates the th order statistic of the sample. Similar remarks apply to all sample quantiles. == Probabilistic analysis == Given any random variables X1, X2..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order. To find the probabilities of the k^\text{th} order statistics, three values are first needed, namely :p_1=P(Xx)=1-F(x). We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf. In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n). == See also == * Rankit * Box plot * BRS-inequality * Concomitant (statistics) * Fisher–Tippett distribution * Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables * Bernstein polynomial * L-estimator – linear combinations of order statistics * Rank-size distribution * Selection algorithm === Examples of order statistics === * Sample maximum and minimum * Quantile * Percentile * Decile * Quartile * Median == References == == External links == * Retrieved Feb 02,2005 * Retrieved Feb 02,2005 * C++ source Dynamic Order Statistics Category:Nonparametric statistics Category:Summary statistics Category:Permutations When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution. == Notation and examples == For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles. In other words, all n order statistics are needed from the n observations in a sample. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. thumb|right|Major seventh In music from Western culture, a seventh is a musical interval encompassing seven staff positions (see Interval number for more details), and the major seventh is one of two commonly occurring sevenths. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median. === Large sample sizes === For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by : U_{(\lceil np \rceil)} \sim AN\left(p,\frac{p(1-p)}{n}\right). Note that the order statistics also satisfy U_{(i)}=F_X(X_{(i)}). In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. The 7.39 is a British drama television film that was broadcast in two parts on BBC One on 6 January and 7 January 2014. One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. 
Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of U_{(n)}-U_{(1)}, i.e. maximum minus the minimum. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant: :f_{U_{(1)},U_{(2)},\ldots,U_{(n)}}(u_{1},u_{2},\ldots,u_{n}) = n!. In statistics, some Monte Carlo methods require independent observations in a sample to be drawn from a one-dimensional distribution in sorted order. The sample median may or may not be an order statistic, since there is a single middle value only when the number of observations is odd. The small major seventh is a ratio of 9:5,Royal Society (Great Britain) (1880, digitized Feb 26, 2008). On the other hand, when is even, and there are two middle values, X_{(m)} and X_{(m+1)}, and the sample median is some function of the two (usually the average) and hence not an order statistic. The cumulative distribution function of the k^\text{th} order statistic can be computed by noting that : \begin{align} P(X_{(k)}\leq x)& =P(\text{there are at least }k\text{ observations less than or equal to }x) ,\\\ & =P(\text{there are at most }n-k\text{ observations greater than }x) ,\\\ & =\sum_{j=0}^{n-k}{n\choose j}p_3^j(p_1+p_2)^{n-j} . \end{align} Similarly, P(X_{(k)} is given by : \begin{align} P(X_{(k)}< x)& =P(\text{there are at least }k\text{ observations less than }x) ,\\\ & =P(\text{there are at most }n-k\text{ observations greater than or equal to }x) ,\\\ & =\sum_{j=0}^{n-k}{n\choose j}(p_2+p_3)^j(p_1)^{n-j} . \end{align} Note that the probability mass function of X_{(k)} is just the difference of these values, that is to say : \begin{align} P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}< x) ,\\\ &=\sum_{j=0}^{n-k}{n\choose j}\left(p_3^j(p_1+p_2)^{n-j}-(p_2+p_3)^j(p_1)^{n-j}\right) ,\\\ &=\sum_{j=0}^{n-k}{n\choose j}\left((1-F(x))^j(F(x))^{n-j}-(1-F(x)+f(x))^j(F(x)-f(x))^{n-j}\right). \end{align} == Computing order statistics == The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm.
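For a continuous distribution, $f(x) = 0$ and the order-statistic CDF quoted above specializes, at $x = \pi_{0.7}$ (so $F(x) = 0.7$), to the same binomial tail used for the question; a quick numeric check of that formula with $n = 8$ and $k = 7$:

```python
from math import comb

# P(X_(k) <= x) = sum_{j=0}^{n-k} C(n, j) * (1 - F(x))**j * F(x)**(n - j),
# evaluated here with n = 8, k = 7 and F(x) = 0.7 at the 70th percentile.
n, k, F = 8, 7, 0.7
prob = sum(comb(n, j) * (1 - F)**j * F**(n - j) for j in range(n - k + 1))
print(round(prob, 4))   # approximately 0.2553
```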
72
0.318
0.9731
0.2553
7
D
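The passage above gives the binomial-sum form of the CDF of the k-th order statistic of a uniform sample. As a quick numerical illustration (my own sketch, not part of the original dataset; it assumes numpy and scipy are installed, and the values n = 5, k = 3, x = 0.4 are arbitrary choices of mine):

import numpy as np
from scipy.stats import binom

n, k, x = 5, 3, 0.4
# P(X_(k) <= x) = P(at least k of the n Uniform(0,1) observations are <= x)
exact = 1 - binom.cdf(k - 1, n, x)
# Monte Carlo check against sorted Uniform(0,1) samples
rng = np.random.default_rng(0)
samples = np.sort(rng.uniform(size=(200_000, n)), axis=1)
approx = np.mean(samples[:, k - 1] <= x)
print(exact, approx)   # both close to 0.317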
5.4-21. Let $X$ and $Y$ be independent with distributions $N(5,16)$ and $N(6,9)$, respectively. Evaluate $P(X>Y)=$ $P(X-Y>0)$.
Y = 2 Y = 4 Y = 6 Y = 8 X = 1 0 0.1 0 0.1 X = 3 0 0 0.2 0 X = 5 0.3 0 0 0.15 X = 7 0 0 0.15 0 Solution: using the given table of probabilities for each potential range of X and Y, the joint cumulative distribution function may be constructed in tabular form: Y < 2 2 ≤ Y < 4 4 ≤ Y < 6 6 ≤ Y < 8 Y ≥ 8 X < 1 0 0 0 0 0 1 ≤ X < 3 0 0 0.1 0.1 0.2 3 ≤ X < 5 0 0 0.1 0.3 0.4 5 ≤ X < 7 0 0.3 0.4 0.6 0.85 X ≥ 7 0 0.3 0.4 0.75 1 ===Definition for more than two random variables=== For N random variables X_1,\ldots,X_N, the joint CDF F_{X_1,\ldots,X_N} is given by Interpreting the N random variables as a random vector \mathbf{X} = (X_1, \ldots, X_N)^T yields a shorter notation: F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \leq x_1,\ldots,X_N \leq x_N) ===Properties=== Every multivariate CDF is: # Monotonically non-decreasing for each of its variables, # Right-continuous in each of its variables, # 0\leq F_{X_1 \ldots X_n}(x_1,\ldots,x_n)\leq 1, # \lim_{x_1,\ldots,x_n \rightarrow+\infty}F_{X_1 \ldots X_n}(x_1,\ldots,x_n)=1 \text{ and } \lim_{x_i\rightarrow-\infty}F_{X_1 \ldots X_n}(x_1,\ldots,x_n)=0, \text{for all } i. Example of joint cumulative distribution function: For two continuous variables X and Y: \Pr(a < X < b \text{ and } c < Y < d) = \int_a^b \int_c^d f(x,y) \, dy \, dx; For two discrete random variables, it is beneficial to generate a table of probabilities and address the cumulative probability for each potential range of X and Y, and here is the example: given the joint probability mass function in tabular form, determine the joint cumulative distribution function. To see the distribution of Y conditional on X=70, one can first visualize the line X=70 in the X,Y plane, and then visualize the plane containing that line and perpendicular to the X,Y plane. The probability of success on each trial is 5/6. If the points in the joint probability distribution of X and Y that receive positive probability tend to fall along a line of positive (or negative) slope, ρXY is near +1 (or −1). Sum those probabilities: : f(5) = 0.07776 \, : f(6) = 0.15552 \, : f(7) = 0.18662 \, : f(8) = 0.17418 \, :\sum_{j=5}^8 f(j) = 0.59408. That number of successes is a negative-binomially distributed random variable. For discrete random variables this means P(Y=y|X=x) = P(Y=y) for all possible y and x with P(X=x)>0. The relation with the probability distribution of X given Y is given by: :f_{Y\mid X}(y \mid x)f_X(x) = f_{X,Y}(x, y) = f_{X|Y}(x \mid y)f_Y(y). The intersection of that plane with the joint normal density, once rescaled to give unit area under the intersection, is the relevant conditional density of Y. Y\mid X=70 \ \sim\ \mathcal{N}\left(\mu_1+\frac{\sigma_1}{\sigma_2}\rho( 70 - \mu_2),\, (1-\rho^2)\sigma_1^2\right). ==Relation to independence== Random variables X, Y are independent if and only if the conditional distribution of Y given X is, for all possible realizations of X, equal to the unconditional distribution of Y. The joint probability distribution is presented in the following table: A=Red A=Blue P(B) B=Red (2/3)(2/3)=4/9 (1/3)(2/3)=2/9 4/9+2/9=2/3 B=Blue (2/3)(1/3)=2/9 (1/3)(1/3)=1/9 2/9+1/9=1/3 P(A) 4/9+2/9=2/3 2/9+1/9=1/3 Each of the four inner cells shows the probability of a particular combination of results from the two draws; these probabilities are the joint distribution. The relation with the probability distribution of X given Y is: :P(Y=y \mid X=x) P(X=x) = P(\\{X=x\\} \cap \\{Y=y\\}) = P(X=x \mid Y=y)P(Y=y). 
===Example=== Consider the roll of a fair and let X=1 if the number is even (i.e., 2, 4, or 6) and X=0 otherwise. If the joint probability density function of random variable X and Y is f_{X,Y}(x,y) , the marginal probability density function of X and Y, which defines the marginal distribution, is given by: f_{X}(x)= \int f_{X,Y}(x,y) \; dy f_{Y}(y)= \int f_{X,Y}(x,y) \; dx where the first integral is over all points in the range of (X,Y) for which X=x and the second integral is over all points in the range of (X,Y) for which Y=y. ==Joint cumulative distribution function== For a pair of random variables X,Y, the joint cumulative distribution function (CDF) F_{XY} is given by where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x and that Y takes on a value less than or equal to y. Furthermore, if Bs+r is a random variable following the binomial distribution with parameters s + r and p, then : \begin{align} \Pr(Y_r \leq s) & {} = 1 - I_p(s+1, r) \\\\[5pt] & {} = 1 - I_{p}((s+r)-(r-1), (r-1)+1) \\\\[5pt] & {} = 1 - \Pr(B_{s+r} \leq r-1) \\\\[5pt] & {} = \Pr(B_{s+r} \geq r) \\\\[5pt] & {} = \Pr(\text{after } s+r \text{ trials, there are at least } r \text{ successes}). \end{align} In this sense, the negative binomial distribution is the "inverse" of the binomial distribution. The probability that X lies in the semi-closed interval (a,b], where a < b, is therefore In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. At each house, there is a 0.6 probability of selling one candy bar and a 0.4 probability of selling nothing. What's the probability of selling the last candy bar at the nth house? One must use the "mixed" joint density when finding the cumulative distribution of this binary outcome because the input variables (X,Y) were initially defined in such a way that one could not collectively assign it either a probability density function or a probability mass function. The joint probability mass function of A and B defines probabilities for each pair of outcomes. In probability theory and statistics, given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x of X as a parameter. Formally, f_{X,Y}(x,y) is the probability density function of (X,Y) with respect to the product measure on the respective supports of X and Y. Either of these two decompositions can then be used to recover the joint cumulative distribution function: : \begin{align} F_{X,Y}(x,y)&=\sum\limits_{t\le y}\int_{s=-\infty}^x f_{X,Y}(s,t)\;ds. \end{align} The definition generalizes to a mixture of arbitrary numbers of discrete and continuous random variables. ==Additional properties== ===Joint distribution for independent variables=== In general two random variables X and Y are independent if and only if the joint cumulative distribution function satisfies : F_{X,Y}(x,y) = F_X(x) \cdot F_Y(y) Two discrete random variables X and Y are independent if and only if the joint probability mass function satisfies : P(X = x \ \mbox{and} \ Y = y ) = P( X = x) \cdot P( Y = y) for all x and y. 
Then the CDF of X is given by F(k;n,p) = \Pr(X\leq k) = \sum _{i=0}^{\lfloor k\rfloor }{n \choose i} p^{i} (1-p)^{n-i} Here p is the probability of success and the function denotes the discrete probability distribution of the number of successes in a sequence of n independent experiments, and \lfloor k\rfloor is the "floor" under k, i.e. the greatest integer less than or equal to k. ==Derived functions== ===Complementary cumulative distribution function (tail distribution)=== Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level.
0.24995
-45
15.757
0.4207
0.9984
D
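As a numerical check of problem 5.4-21 above (illustrative only, not part of the original dataset): under independence, X - Y is normal with mean 5 - 6 = -1 and variance 16 + 9 = 25, reading N(mu, sigma^2) as mean and variance. A minimal Python sketch, assuming scipy is installed:

from scipy.stats import norm

mu = 5 - 6               # mean of X - Y
sd = (16 + 9) ** 0.5     # variances add under independence
print(norm.sf(0, loc=mu, scale=sd))   # ~0.4207, the recorded answer D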
7.4-5. A quality engineer wanted to be $98 \%$ confident that the maximum error of the estimate of the mean strength, $\mu$, of the left hinge on a vanity cover molded by a machine is 0.25 . A preliminary sample of size $n=32$ parts yielded a sample mean of $\bar{x}=35.68$ and a standard deviation of $s=1.723$. (a) How large a sample is required?
For example, if we are interested in estimating the amount by which a drug lowers a subject's blood pressure with a 95% confidence interval that is six units wide, and we know that the standard deviation of blood pressure in the population is 15, then the required sample size is \frac{4\times1.96^2\times15^2}{6^2} = 96.04, which would be rounded up to 97, because the obtained value is the minimum sample size, and sample sizes must be integers and must lie on or above the calculated minimum. ==Required sample sizes for hypothesis tests == A common problem faced by statisticians is calculating the sample size required to yield a certain power for a test, given a predetermined Type I error rate α. The mean value calculated from the sample, \bar{x}, will have an associated standard error on the mean, {\sigma}_\bar{x}, given by: :{\sigma}_\bar{x}\ = \frac{\sigma}{\sqrt{n}}. Generally, at a confidence level \gamma, a sample sized n of a population having expected standard deviation \sigma has a margin of error :MOE_\gamma = z_\gamma \times \sqrt{\frac{\sigma^2}{n}} where z_\gamma denotes the quantile (also, commonly, a z-score), and \sqrt{\frac{\sigma^2}{n}} is the standard error. == Standard deviation and standard error == We would expect the average of normally distributed values p_1,p_2,\ldots to have a standard deviation which somehow varies with n. Therefore, the standard error of the mean is usually estimated by replacing \sigma with the sample standard deviation \sigma_{x} instead: :{\sigma}_\bar{x}\ \approx \frac{\sigma_{x}}{\sqrt{n}}. The standard error is, by definition, the standard deviation of \bar{x} which is simply the square root of the variance: :\sigma_{\bar{x}} = \sqrt{\frac{\sigma^2}{n}} = \frac{\sigma}{\sqrt{n}} . Practically this tells us that when trying to estimate the value of a population mean, due to the factor 1/\sqrt{n}, reducing the error on the estimate by a factor of two requires acquiring four times as many observations in the sample; reducing it by a factor of ten requires a hundred times as many observations. === Estimate === The standard deviation \sigma of the population being sampled is seldom known. Since \max \sigma_P^2 = \max P(1-P) = 0.25 at p = 0.5, we can arbitrarily set p=\overline{p} = 0.5, calculate \sigma_{P}, \sigma_\overline{p}, and z_\gamma\sigma_\overline{p} to obtain the maximum margin of error for P at a given confidence level \gamma and sample size n, even before having actual results. Knowing that the value of the n is the minimum number of sample points needed to acquire the desired result, the number of respondents then must lie on or above the minimum. ===Estimation of a mean=== When estimating the population mean using an independent and identically distributed (iid) sample of size n, where each data value has variance σ2, the standard error of the sample mean is: :\frac{\sigma}{\sqrt{n}}. Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. The arrows show that the maximum margin error for a sample size of 1000 is ±3.1% at 95% confidence level, and ±4.1% at 99%. 
Using the central limit theorem to justify approximating the sample mean with a normal distribution yields a confidence interval of the form : \left(\bar x - \frac{Z\sigma}{\sqrt{n}}, \quad \bar x + \frac{Z\sigma}{\sqrt{n}} \right ) , :where Z is a standard Z-score for the desired level of confidence (1.96 for a 95% confidence interval). Consequently, : \Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95\, and we have a theoretical (stochastic) 95% confidence interval for μ. The following expressions can be used to calculate the upper and lower 95% confidence limits, where \bar{x} is equal to the sample mean, \operatorname{SE} is equal to the standard error for the sample mean, and 1.96 is the approximate value of the 97.5 percentile point of the normal distribution: :Upper 95% limit = \bar{x} + (\operatorname{SE}\times 1.96) , and :Lower 95% limit = \bar{x} - (\operatorname{SE}\times 1.96) . The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Therefore, the relationship between the standard error of the mean and the standard deviation is such that, for a given sample size, the standard error of the mean equals the standard deviation divided by the square root of the sample size. The standard deviation is estimated as :CS \sqrt{\frac{B-\frac{A^2}{N}}{N-1}}=5.57 ==References== Category:Means For the single result from our survey, we assume that p = \overline{p}, and that all subsequent results p_1,p_2,\ldots together would have a variance \sigma_{P}^2=P(1-P). : \text{Standard error} = \sigma_\overline{p} \approx \sqrt{\frac{\sigma_{P}^2}{n}} \approx \sqrt{\frac{p(1-p)}{n}} Note that p(1-p) corresponds to the variance of a Bernoulli distribution. == Maximum margin of error at different confidence levels == thumb|350pxFor a confidence level \gamma, there is a corresponding confidence interval about the mean \mu\plusmn z_\gamma\sigma, that is, the interval [\mu-z_\gamma\sigma,\mu+z_\gamma\sigma] within which values of P should fall with probability \gamma. If we wish to have a confidence interval that is W units total in width (W/2 being the margin of error on each side of the sample mean), we would solve : \frac{Z\sigma}{\sqrt{n}} = W/2 for n, yielding the sample size n = \frac{4Z^2\sigma^2}{W^2}. For example, if we are interested in estimating the proportion of the US population who supports a particular presidential candidate, and we want the width of 95% confidence interval to be at most 2 percentage points (0.02), then we would need a sample size of (1.96)2/ (0.022) = 9604. In particular, the standard error of a sample statistic (such as sample mean) is the actual or estimated standard deviation of the sample mean in the process by which it was generated. 
Standard errors provide simple measures of uncertainty in a value and are often used because: *in many cases, if the standard error of several individual quantities is known then the standard error of some function of the quantities can be easily calculated; *when the probability distribution of the value is known, it can be used to calculate an exact confidence interval; *when the probability distribution is unknown, Chebyshev's or the Vysochanskiï–Petunin inequalities can be used to calculate a conservative confidence interval; and * as the sample size tends to infinity the central limit theorem guarantees that the sampling distribution of the mean is asymptotically normal. ===Standard error of mean versus standard deviation=== In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. According to the 68-95-99.7 rule, we would expect that 95% of the results p_1,p_2,\ldots will fall within about two standard deviations (\plusmn2\sigma_{P}) either side of the true mean \overline{p}.
0.15
0.6296296296
257.0
0.3359
-1.49
C
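As a check of problem 7.4-5(a) above (my own sketch, not part of the original dataset): the required sample size follows the usual formula n = (z_{alpha/2} * s / epsilon)^2 with the tabled z_{0.01} = 2.326 for 98% confidence, s = 1.723, and maximum error epsilon = 0.25. In Python:

import math

z, s, eps = 2.326, 1.723, 0.25   # tabled z_{0.01}; scipy's norm.ppf(0.99) gives ~2.3263
n = (z * s / eps) ** 2
print(n, math.ceil(n))           # ~256.99, rounded up to 257 (the recorded answer C)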
5.2-5. Let the distribution of $W$ be $F(8,4)$. Find the following: (a) $F_{0.01}(8,4)$.
When filling out a Form W-4 an employee calculates the number of Form W-4 allowances to claim based on their expected tax filing situation for the year. There are typically three Tracy–Widom distributions, F_\beta, with \beta \in \\{1, 2, 4\\}. These distributions have been tabulated in to four significant digits for values of the argument in increments of 0.01; a statistical table for p-values was also given in this work. gave accurate and fast algorithms for the numerical evaluation of F_\beta and the density functions f_\beta(s)=dF_\beta/ds for \beta=1,2,4. :Note: the Wigner distribution function is abbreviated here as WD rather than WDF as used at Wigner distribution function A Modified Wigner distribution function is a variation of the Wigner distribution function (WD) with reduced or removed cross-terms. (I.e., withholding is calculated as if the employee earned this amount every payday on an annual basis.) == Filing == The W-4 Form includes a series of worksheets for calculating the number of allowances to claim. thumb|right|300px|Densities of Tracy–Widom distributions for β = 1, 2, 4 The Tracy–Widom distribution is a probability distribution from random matrix theory introduced by . The W-4 Form is usually not sent to the IRS; rather, the employer uses the form in order to calculate how much of an employee's salary is withheld. An alternative to the above method is to define the PDF parametrically as (W(p),1/w(p)), \ 0\le p \le 1. The W-4 form tells the employer the correct amount of federal tax to withhold from an employee's paycheck. == Motivation == The W-4 is based on the idea of "allowances"; the more allowances claimed, the less money the employer withholds for tax purposes. thumb|Form W-4, 2012 Form W-4 (officially, the "Employee's Withholding Allowance Certificate") is an Internal Revenue Service (IRS) tax form completed by an employee in the United States to indicate his or her tax situation (exemptions, status, etc.) to the employer. thumb|250px|right|Full width at half maximum In a distribution, full width at half maximum (FWHM) is the difference between the two values of the independent variable at which the dependent variable is equal to half of its maximum value. The distribution F_1 is of particular interest in multivariate statistics.. : W_x(t,f)=\int_{-\infty}^{\infty} C_x\left(t + \frac{\tau}{2}, t - \frac{\tau}{2}\right) \, e^{-2\pi i\tau f} \, d\tau . The definition of the Tracy–Widom distributions F_\beta may be extended to all \beta >0 (Slide 56 in , ). By WDF :\begin{align} W_x(t,f) &= \int_{-\infty}^{\infty}\delta\left(t + \frac{\tau}{2}\right)\delta\left(t - \frac{\tau}{2}\right) e^{-i2\pi\tau\,f}\,d\tau \\\ &= 4\int_{-\infty}^{\infty}\delta(2t + \tau)\delta(2t - \tau)e^{-i2\pi\tau f}\,d\tau \\\ &= 4\delta(4t)e^{i4\pi tf} \\\ &= \delta(t)e^{i4\pi tf} \\\ &= \delta(t). \end{align} The Wigner distribution function is best suited for time-frequency analysis when the input signal's phase is 2nd order or lower. In the latter case, this creates an oddity in that the employee will have one more exemption on the W-4 than on the 1040 tax return. There are specialized versions of the W-4 Form for other types of payment; for example, W-4P for pensions, and the voluntary W-4V for certain government payments such as unemployment compensation. 
== See also == * Form W-2 * Form W-9 * Form 1040 * Personal exemption * Tax withholding in the United States == References == ==External links== * IRS Form W-4 W-4 W-4 The corresponding area within this FWHM accounts to approximately 76%. For example, if x(t) = 1, then :W_x(t,f)=\int_{-\infty}^\infty e^{-i2\pi\tau\,f}\,d\tau=\delta(f). ===Sinusoidal input signal=== When the input signal is a sinusoidal function, its time-frequency distribution is a horizontal line parallel to the time axis, displaced from it by the sinusoidal signal's frequency. This can be set up as a probability density function, f(x), by solving for the unique p in the equation W(p)=x and returning 1/w(p). == See also == * Generalized Pareto distribution == References == == External links == * Discussion of the naming of the distribution on Stack Exchange :Note: this work is based on a NIST document that is in the public domain as a work of the U.S. federal government Category:Continuous distributions For those signals, WDF can exactly generate the time frequency distribution of the input signal. ===Boxcar function=== :x(t) = \begin{cases} 1 & |t|<1/2 \\\ 0 & \text{otherwise} \end{cases} \qquad , the rectangular function ⇒ : W_x(t,f) = \begin{cases} \frac{1}{\pi f}\sin (2\pi f\\{1 - 2|t|\\}) &|t|<1/2 \\\ 0 & \mbox{otherwise} \end{cases} ==Cross term property== The Wigner distribution function is not a linear transform. See and for experimental testing (and verifying) that the interface fluctuations of a growing droplet (or substrate) are described by the TW distribution F_2 (or F_1) as predicted by .
2.84367
0.042
0.0
14.80
3.51
D
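For problem 5.2-5(a) above, F_{0.01}(8,4) is the upper 1% point of the F(8,4) distribution, i.e. the value c with P(W > c) = 0.01. A minimal Python check (illustrative only, assuming scipy is installed):

from scipy.stats import f

c = f.ppf(0.99, 8, 4)   # dfn = 8, dfd = 4; upper 1% point
print(c)                # ~14.80, the recorded answer D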
5.8-5. If the distribution of $Y$ is $b(n, 0.25)$, give a lower bound for $P(|Y / n-0.25|<0.05)$ when (b) $n=500$.
If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. thumb|Plot of S_n/n (red), its standard deviation 1/\sqrt{n} (blue) and its bound \sqrt{2\log\log n/n} given by LIL (green). From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. In statistics, probable error defines the half-range of an interval about a central point for the distribution, such that half of the values from the distribution will lie within the interval and half outside.Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, OUP. Bound the desired probability using the Chebyshev inequality: :\operatorname{P}\left(|T- n H_n| \geq cn\right) \le \frac{\pi^2}{6c^2}. ===Tail estimates=== A stronger tail estimate for the upper tail be obtained as follows. Then : \begin{align} P\left [ {Z}_i^r \right ] = \left(1-\frac{1}{n}\right)^r \le e^{-r / n}. \end{align} Thus, for r = \beta n \log n, we have P\left [ {Z}_i^r \right ] \le e^{(-\beta n \log n ) / n} = n^{-\beta}. For example, when n = 50 it takes about 225E(50) = 50(1 + 1/2 + 1/3 + ... + 1/50) = 224.9603, the expected number of trials to collect all 50 coupons. The mathematical analysis of the problem reveals that the expected number of trials needed grows as \Theta(n\log(n)). The probable error can also be expressed as a multiple of the standard deviation σ,Zwillinger, D.; Kokosa, S. (2000) CRC Standard Probability and Statistics Tables and Formulae, Chapman & Hall/CRC. Probability. The commonly used approximate value of 1.96 is therefore accurate to better than one part in 50,000, which is more than adequate for applied work. It can be shown that these inequalities are the best possible and that further sharpening of the bounds requires that additional restrictions be placed on the distributions. ==See also== *Vysochanskiï–Petunin inequality, a similar result for the distance from the mean rather than the mode *Chebyshev's inequality, concerns distance from the mean without requiring unimodality * Concentration inequality – a summary of tail-bounds on random variables. ==References== * * * * Category:Probabilistic inequalities It asks the following question: If each box of a brand of cereals contains a coupon, and there are n different types of coupons, what is the probability that more than t boxes need to be bought to collect all n coupons? 
thumb|The gaussian correlation inequality states that probability of hitting both circle and rectangle with a dart is greater than or equal to the product of the individual probabilities of hitting the circle or the rectangle. Then : \Pr\left( \limsup_n \frac{S_n}{\sqrt{n}} \geq M \right) \geqslant \limsup_n \Pr\left( \frac{S_n}{\sqrt{n}} \geq M \right) = \Pr\left( \mathcal{N}(0, 1) \geq M \right) > 0 so :\limsup_n \frac{S_n}{\sqrt{n}}=\infty \qquad \text{with probability 1.} In probability theory, Gauss's inequality (or the Gauss inequality) gives an upper bound on the probability that a unimodal random variable lies more than any given distance from its mode. The approximation n\log n+\gamma n+1/2 for this expected number gives in this case 50\log 50+50\gamma+1/2 \approx 195.6011+28.8608+0.5\approx 224.9619. trials on average to collect all 50 coupons. ==Solution== ===Calculating the expectation=== Let time T be the number of draws needed to collect all n coupons, and let ti be the time to collect the i-th coupon after i − 1 coupons have been collected. By Kolmogorov's zero–one law, for any fixed M, the probability that the event \limsup_n \frac{S_n}{\sqrt{n}} \geq M occurs is 0 or 1. thumb|400px|Graph of number of coupons, n vs the expected number of trials (i.e., time) needed to collect them all, E (T ) In probability theory, the coupon collector's problem describes "collect all coupons and win" contests.
1.07
96.4365076099
17.4
58.2
0.85
E
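For problem 5.8-5(b) above, Y/n has mean 0.25 and variance p(1-p)/n, so Chebyshev's inequality gives the lower bound 1 - Var(Y/n)/epsilon^2 with epsilon = 0.05. A quick arithmetic check in Python (illustrative only, not part of the original dataset):

p, n, eps = 0.25, 500, 0.05
var_mean = p * (1 - p) / n        # variance of Y/n
print(1 - var_mean / eps ** 2)    # 0.85, the recorded answer E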
5.6-1. Let $\bar{X}$ be the mean of a random sample of size 12 from the uniform distribution on the interval $(0,1)$. Approximate $P(1 / 2 \leq \bar{X} \leq 2 / 3)$.
If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. The lower bound is very close to m, thus more informative is the asymmetric confidence interval from p = 5% to 100%; for k = 5 this yields 0.051/5 ≈ 0.55 and the interval [m, 1.82m]. For example, taking the symmetric 95% interval p = 2.5% and q = 97.5% for k = 5 yields 0.0251/5 ≈ 0.48, 0.9751/5 ≈ 0.995, so the confidence interval is approximately [1.005m, 2.08m]. An approximation can be given by replacing with , yielding: \hat\sigma = \sqrt{\frac{1}{N - 1.5} \sum_{i=1}^N \left(x_i - \bar{x}\right)^2}, The error in this approximation decays quadratically (as ), and it is suited for all but the smallest samples or highest precision: for the bias is equal to 1.3%, and for the bias is already less than 0.1%. For other distributions, the correct formula depends on the distribution, but a rule of thumb is to use the further refinement of the approximation: \hat\sigma = \sqrt{\frac{1}{N - 1.5 - \frac{1}{4}\gamma_2} \sum_{i=1}^N \left(x_i - \bar{x}\right)^2}, where denotes the population excess kurtosis. This means that most men (about 68%, assuming a normal distribution) have a height within 3 inches of the mean (67–73 inches)one standard deviationand almost all men (about 95%) have a height within 6 inches of the mean (64–76 inches)two standard deviations. The median in this example is 74.5, in close agreement with the frequentist formula. For various values of , the percentage of values expected to lie in and outside the symmetric interval, , are as follows: thumb|Percentage within(z) thumb|z(Percentage within) Confidence interval Proportion within Proportion without Proportion without Confidence interval Percentage Percentage Fraction 25% 75% 3 / 4 % % 1 / 66.6667% 33.3333% 1 / 3 68% 32% 1 / 3.125 1 % % 1 / 80% 20% 1 / 5 90% 10% 1 / 10 95% 5% 1 / 20 2 % % 1 / 99% 1% 1 / 100 3 % % 1 / 370.398 99.9% 0.1% 1 / 99.99% 0.01% 1 / 4 % % 1 / 99.999% 0.001% 1 / 1 / 6.8 / % % 1 / 5 % % 1 / % % 1 / % % 1 / % % 1 / % % 1 / % % 1 / % % 1 / 7 % 1 / ==Relationship between standard deviation and mean== The mean and the standard deviation of a set of data are descriptive statistics usually reported together. If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, , where is the arithmetic mean), about 95 percent are within two standard deviations (), and about 99.7 percent lie within three standard deviations (). The Pareto distribution with parameter \alpha \in (1,2] has a mean, but not a standard deviation (loosely speaking, the standard deviation is infinite). The ratio of uniforms is a method initially proposed by Kinderman and Monahan in 1977 for pseudo-random number sampling, that is, for drawing random samples from a statistical distribution. 
Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by: \sigma_\text{mean} = \frac{1}{\sqrt{N}}\sigma where is the number of observations in the sample used to estimate the mean. For a sample population , this is down to 0.88 × SD to 1.16 × SD. Distance from mean Minimum population \sqrt{2}\,\sigma 50% 2\sigma 75% 3\sigma 89% 4\sigma 94% 5\sigma 96% 6\sigma 97% k\sigma 1 - \frac{1}{k^2} \frac{1}{\sqrt{1 - \ell}}\, \sigma \ell ===Rules for normally distributed data=== The central limit theorem states that the distribution of an average of many independent, identically distributed random variables tends toward the famous bell-shaped normal distribution with a probability density function of f\left(x, \mu, \sigma^2\right) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2} where is the expected value of the random variables, equals their distribution's standard deviation divided by , and is the number of random variables. This has a variance : \operatorname{var}\left(\widehat{N}\right) = \frac{1}{k}\frac{(N-k)(N+1)}{(k+2)} \approx \frac{N^2}{k^2} \text{ for small samples } k \ll N, so the standard deviation is approximately N/k, the expected size of the gap between sorted observations in the sample. These are easily computed, based on the observation that the probability that k observations in the sample will fall in an interval covering p of the range (0 ≤ p ≤ 1) is pk (assuming in this section that draws are with replacement, to simplify computations; if draws are without replacement, this overstates the likelihood, and intervals will be overly conservative). An estimate of the standard deviation for data taken to be approximately normal follows from the heuristic that 95% of the area under the normal curve lies roughly two standard deviations to either side of the mean, so that, with 95% probability the total range of values represents four standard deviations so that . From the probability density function of the standard normal distribution, the exact value of z.975 is determined by : \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{z_{.975}} e^{-x^2/2} \, \mathrm{d}x = 0.975. == History == thumb|right|200px|Ronald Fisher The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. , Table 1 In 1970, the value truncated to 20 decimal places was calculated to be :1.95996 39845 40054 23552... Thus the sampling distribution of the quantile of the sample maximum is the graph x1/k from 0 to 1: the p-th to q-th quantile of the sample maximum m are the interval [p1/kN, q1/kN]. More generally, the (downward biased) 95% confidence interval is [m, m/0.051/k] = [m, m·201/k].
135.36
1.56
3.23
-1270
0.4772
E
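For problem 5.6-1 above, the central limit theorem gives X-bar approximately N(1/2, (1/12)/12), so the interval [1/2, 2/3] corresponds to 0 <= Z <= 2. A minimal Python check (illustrative only, assuming scipy is installed):

from scipy.stats import norm

mu = 0.5
sd = ((1 / 12) / 12) ** 0.5   # Var(X) = 1/12 for Uniform(0,1), sample size n = 12
p = norm.cdf(2 / 3, mu, sd) - norm.cdf(0.5, mu, sd)
print(p)                      # ~0.4772, the recorded answer E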
5.2-9. Determine the constant $c$ such that $f(x)= c x^3(1-x)^6$, $0 < x < 1$ is a pdf.
thumb|300px|As the degree of the Taylor polynomial rises, it approaches the correct function. C63 or C-63 may refer to: * Caldwell 63, a planetary nebula * Convention concerning Statistics of Wages and Hours of Work, 1938 of the International Labour Organization * JNR Class C63, a proposed Japanese steam locomotive * Lockheed C-63 Hudson, an American military transport aircraft * Mercedes-AMG C 63, a German automobile * Ruy Lopez, a chess opening This image shows and its Taylor approximations by polynomials of degree 1, 3, 5, 7, 9, 11, and 13 at . The molecular formula C6O6 (molar mass: 168.06 g/mol, exact mass: 167.9695 u) may refer to: * Cyclohexanehexone, also known as hexaketocyclohexane or triquinoyl * Ethylenetetracarboxylic dianhydride The molecular formula C6H6O3 may refer to: * Cyclohexanetriones * Hydroxymethylfurfural * Hydroxyquinol * Isomaltol * Levoglucosenone * Maltol * Phloroglucinol * Pyrogallol * Triacetic acid lactone The molecular formula C3F6O (molar mass: 166.02 g/mol, exact mass: 165.9853 u) may refer to: * Hexafluoroacetone (HFA) * Hexafluoropropylene oxide (HFPO) The molecular formula C3H2F6O (molar mass: 168.038 g/mol, exact mass: 168.0010 u) may refer to: * Desflurane * Hexafluoro-2-propanol (HFIP) (In addition, the series for converges for , and the series for converges for .) === Geometric series === The geometric series and its derivatives have Maclaurin series :\begin{align} \frac{1}{1-x} &= \sum^\infty_{n=0} x^n \\\ \frac{1}{(1-x)^2} &= \sum^\infty_{n=1} nx^{n-1}\\\ \frac{1}{(1-x)^3} &= \sum^\infty_{n=2} \frac{(n-1)n}{2} x^{n-2}. \end{align} All are convergent for |x| < 1. In order to compute a second-order Taylor series expansion around point of the function :f(x,y)=e^x\ln(1+y), one first computes all the necessary partial derivatives: :\begin{align} f_x &= e^x\ln(1+y) \\\\[6pt] f_y &= \frac{e^x}{1+y} \\\\[6pt] f_{xx} &= e^x\ln(1+y) \\\\[6pt] f_{yy} &= - \frac{e^x}{(1+y)^2} \\\\[6pt] f_{xy} &=f_{yx} = \frac{e^x}{1+y} . \end{align} Evaluating these derivatives at the origin gives the Taylor coefficients :\begin{align} f_x(0,0) &= 0 \\\ f_y(0,0) &=1 \\\ f_{xx}(0,0) &=0 \\\ f_{yy}(0,0) &=-1 \\\ f_{xy}(0,0) &=f_{yx}(0,0)=1. \end{align} Substituting these values in to the general formula :\begin{align} T(x,y) = &f;(a,b) +(x-a) f_x(a,b) +(y-b) f_y(a,b) \\\ &{}+\frac{1}{2!}\left( (x-a)^2f_{xx}(a,b) \+ 2(x-a)(y-b)f_{xy}(a,b) +(y-b)^2 f_{yy}(a,b) \right)+ \cdots \end{align} produces :\begin{align} T(x,y) &= 0 + 0(x-0) + 1(y-0) + \frac{1}{2}\big( 0(x-0)^2 + 2(x-0)(y-0) + (-1)(y-0)^2 \big) + \cdots \\\ &= y + xy - \tfrac12 y^2 + \cdots \end{align} Since is analytic in , we have :e^x\ln(1+y)= y + xy - \tfrac12 y^2 + \cdots, \qquad |y| < 1. == Comparison with Fourier series == The trigonometric Fourier series enables one to express a periodic function (or a function defined on a closed interval ) as an infinite sum of trigonometric functions (sines and cosines). In particular, for , the error is less than 0.000003. So, by substituting for , the Taylor series of at is :1 - (x-1) + (x-1)^2 - (x-1)^3 + \cdots. In 1691–1692, Isaac Newton wrote down an explicit statement of the Taylor and Maclaurin series in an unpublished version of his work De Quadratura Curvarum. For , Taylor polynomials of higher degree provide worse approximations. 300px|thumb|right|The Taylor approximations for (black). For most common functions, the function and the sum of its Taylor series are equal near this point. The error in this approximation is no more than . 
Collecting the terms up to fourth order yields : e^x =c_0 + c_1x + \left(c_2 - \frac{c_0}{2}\right)x^2 + \left(c_3 - \frac{c_1}{2}\right)x^3+\left(c_4-\frac{c_2}{2}+\frac{c_0}{4!}\right)x^4 + \cdots\\! However, is not the zero function, so does not equal its Taylor series around the origin. Taylor polynomials are approximations of a function, which become generally better as increases. By integrating the above Maclaurin series, we find the Maclaurin series of , where denotes the natural logarithm: :-x - \tfrac{1}{2}x^2 - \tfrac{1}{3}x^3 - \tfrac{1}{4}x^4 - \cdots. For these functions the Taylor series do not converge if is far from . This method uses the known Taylor expansion of the exponential function. #The (truncated) series can be used to compute function values numerically, (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm).
-1
14.44
16.0
840
1.7
D
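For problem 5.2-9 above, f is a Beta(4,7) density, so c = 1/B(4,7) = Gamma(11)/(Gamma(4)Gamma(7)) = 10!/(3!6!) = 840. A minimal Python check (illustrative only, assuming scipy is installed):

from scipy.special import beta
from scipy.integrate import quad

c = 1 / beta(4, 7)
total = quad(lambda x: c * x ** 3 * (1 - x) ** 6, 0, 1)[0]
print(c, total)   # 840.0 and ~1.0, confirming the recorded answer D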
5.3-15. Three drugs are being tested for use as the treatment of a certain disease. Let $p_1, p_2$, and $p_3$ represent the probabilities of success for the respective drugs. As three patients come in, each is given one of the drugs in a random order. After $n=10$ "triples" and assuming independence, compute the probability that the maximum number of successes with one of the drugs exceeds eight if, in fact, $p_1=p_2=p_3=0.7$
Let p be the probability of success in a Bernoulli trial, and q be the probability of failure. The problem was: Pepys initially thought that outcome C had the highest probability, but Newton correctly concluded that outcome A actually has the highest probability. ==Solution== The probabilities of outcomes A, B and C are: :P(A)=1-\left(\frac{5}{6}\right)^{6} = \frac{31031}{46656} \approx 0.6651\, , :P(B)=1-\sum_{x=0}^1\binom{12}{x}\left(\frac{1}{6}\right)^x\left(\frac{5}{6}\right)^{12-x} = \frac{1346704211}{2176782336} \approx 0.6187\, , :P(C)=1-\sum_{x=0}^2\binom{18}{x}\left(\frac{1}{6}\right)^x\left(\frac{5}{6}\right)^{18-x} = \frac{15166600495229}{25389989167104} \approx 0.5973\, . When multiple Bernoulli trials are performed, each with its own probability of success, these are sometimes referred to as Poisson trials.Rajeev Motwani and P. Raghavan. Then the probability of success and the probability of failure sum to one, since these are complementary events: "success" and "failure" are mutually exclusive and exhaustive. Alternatively, these can be stated in terms of odds: given probability p of success and q of failure, the odds for are p:q and the odds against are q:p. The probability of success (POS) is a statistics concept commonly used in the pharmaceutical industry including by health authorities to support decision making. Thus, the probability of failure, q, is given by :q = 1 - p = 1 - \tfrac{1}{2} = \tfrac{1}{2}. Triple therapy may refer to : * a first line therapy in Helicobacter pylori eradication protocols * any of the three drug treatments used in Management of HIV/AIDS * the combination of methotrexate, sulfasalazine, and hydroxychloroquine used to treat rheumatoid arthritis The probability of exactly k successes in the experiment B(n,p) is given by: :P(k)={n \choose k} p^k q^{n-k} where {n \choose k} is a binomial coefficient. Statistics & Probability Letters 83 (5), 1472-1478. If u_1, u_2 , n, k are positive natural numbers, and u_1 \le u_2, k \le n, p \in [0, 1] then P(r = u_1 k ; u_1 n, p) \ge P(r = u_2 k ; u_2 n, p). ==References== Category:Factorial and binomial topics Category:Probability problems Category:Isaac Newton Category:Mathematical problems The first criterion ensures that the probability of success is large. Find the probability that exactly two of the tosses result in heads. ===Solution=== For this experiment, let a heads be defined as a success and a tails as a failure. Closely related to a Bernoulli trial is a binomial experiment, which consists of a fixed number n of statistically independent Bernoulli trials, each with a probability of success p, and counts the number of successes. In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. However, it is a very important method for counts when the appropriate order of magnitude is unknown a priori and sampling is necessarily destructive. 
==See also== *Dilution assay == External links == *A downloadable MPN calculator to take your data and get estimates *A five-replicate MPN table *Details of practical implementation, but not theory *US FDA article on MPN method *Information on the MPN method and ballast water treatment *Downloadable EXCEL program for the determination of the Most Probable Numbers (MPN), their standard deviations, confidence bounds and rarity values according to Jarvis, B., Wilrich, C., and P.-T. Wilrich: Reconsideration of the derivation of Most Probable Numbers, their standard deviations, confidence bounds and rarity values. Finding the optimal design is equivalent to finding the solution to the following equations: # mCPOS=c1 # lCPOS=c2 == See also == * Credible interval * Posterior probability * Interim analysis == References == Category:Pharmaceutical statistics He imagined that B and C toss their dice in groups of six, and said that A was most favorable because it required a 6 in only one toss, while B and C required a 6 in each of their tosses. Using the equation above, the probability of exactly two tosses out of four total tosses resulting in a heads is given by: :\begin{align} P(2) &= {4 \choose 2} p^{2} q^{4-2} \\\ &= 6 \times \left(\tfrac{1}{2}\right)^2 \times \left(\tfrac{1}{2}\right)^2 \\\ &= \dfrac {3}{8}. \end{align} ==See also== *Bernoulli scheme *Bernoulli sampling *Bernoulli distribution *Binomial distribution *Binomial coefficient *Binomial proportion confidence interval *Poisson sampling *Sampling design *Coin flipping *Jacob Bernoulli *Fisher's exact test *Boschloo's test ==References== ==External links== * * Category:Discrete distributions Category:Coin flipping Category:Experiment (probability theory) Because the coin is assumed to be fair, the probability of success is p = \tfrac{1}{2}. If k, n_1, n_2 are positive natural numbers, and n_1 < n_2, then P(r \ge k ; k n_1, \frac{1}{n_1}) > P(r \ge k ; k n_2, \frac{1}{n_2}). (from Varagnolo, Pillonetto and Schenato (2013)):D. Varagnolo, L. Schenato, G. Pillonetto, 2013. As n grows, P(n) decreases monotonically towards an asymptotic limit of 1/2. ==Example in R == The solution outlined above can be implemented in R as follows: for (s in 1:3) { # looking for s = 1, 2 or 3 sixes n = 6*s # ... in n = 6, 12 or 18 dice q = pbinom(s-1, n, 1/6) # q = Prob( ~~==Newton's explanation== Although Newton correctly calculated the odds of each bet, he provided a separate intuitive explanation to Pepys.
0.082
+93.4
4.86
0.0384
30
D
5.2- II. Evaluate $$ \int_0^{0.4} \frac{\Gamma(7)}{\Gamma(4) \Gamma(3)} y^3(1-y)^2 d y $$ (a) Using integration.
Thus * \Gamma_2(a)=\pi^{1/2}\Gamma(a)\Gamma(a-1/2) * \Gamma_3(a)=\pi^{3/2}\Gamma(a)\Gamma(a-1/2)\Gamma(a-1) and so on. Note that \Gamma_1(a) reduces to the ordinary gamma function. Introduction to the Gamma Function * S. Finch. The other one, more useful to obtain a numerical result is: : \Gamma_p(a)= \pi^{p(p-1)/4}\prod_{j=1}^p \Gamma(a+(1-j)/2). Numerically, :\Gamma\left(\tfrac13\right) \approx 2.678\,938\,534\,707\,747\,6337 :\Gamma\left(\tfrac14\right) \approx 3.625\,609\,908\,221\,908\,3119 :\Gamma\left(\tfrac15\right) \approx 4.590\,843\,711\,998\,803\,0532 :\Gamma\left(\tfrac16\right) \approx 5.566\,316\,001\,780\,235\,2043 :\Gamma\left(\tfrac17\right) \approx 6.548\,062\,940\,247\,824\,4377 :\Gamma\left(\tfrac18\right) \approx 7.533\,941\,598\,797\,611\,9047 . The integral is usually taken along a contour which is a deformation of the imaginary axis passing to the right of all poles of factors of the form Γ(a + s) and to the left of all poles of factors of the form Γ(a − s). ==Hypergeometric series== The hypergeometric function is given as a Barnes integral by :{}_2F_1(a,b;c;z) =\frac{\Gamma(c)}{\Gamma(a)\Gamma(b)} \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} \frac{\Gamma(a+s)\Gamma(b+s)\Gamma(-s)}{\Gamma(c+s)}(-z)^s\,ds, see also . Integrating the reciprocal gamma function along the positive real axis also gives the Fransén–Robinson constant. The second Barnes lemma states :\frac{1}{2\pi i} \int_{-i\infty}^{i\infty} \frac{\Gamma(a+s)\Gamma(b+s)\Gamma(c+s)\Gamma(1-d-s)\Gamma(-s)}{\Gamma(e+s)}ds :=\frac{\Gamma(a)\Gamma(b)\Gamma(c)\Gamma(1-d+a)\Gamma(1-d+b)\Gamma(1-d+c)}{\Gamma(e-a)\Gamma(e-b)\Gamma(e-c)} where e = a + b + c − d + 1\. The gamma function is an important special function in mathematics. The following two representations for were given by I. Mező :\sqrt{\frac{\pi\sqrt{e^\pi}}{2}}\frac{1}{\Gamma^2\left(\frac34\right)}=i\sum_{k=-\infty}^\infty e^{\pi(k-2k^2)}\theta_1\left(\frac{i\pi}{2}(2k-1),e^{-\pi}\right), and :\sqrt{\frac{\pi}{2}}\frac{1}{\Gamma^2\left(\frac34\right)}=\sum_{k=-\infty}^\infty\frac{\theta_4(ik\pi,e^{-\pi})}{e^{2\pi k^2}}, where and are two of the Jacobi theta functions. 
== Products == Some product identities include: : \prod_{r=1}^2 \Gamma\left(\tfrac{r}{3}\right) = \frac{2\pi}{\sqrt 3} \approx 3.627\,598\,728\,468\,435\,7012 : \prod_{r=1}^3 \Gamma\left(\tfrac{r}{4}\right) = \sqrt{2\pi^3} \approx 7.874\,804\,972\,861\,209\,8721 : \prod_{r=1}^4 \Gamma\left(\tfrac{r}{5}\right) = \frac{4\pi^2}{\sqrt 5} \approx 17.655\,285\,081\,493\,524\,2483 : \prod_{r=1}^5 \Gamma\left(\tfrac{r}{6}\right) = 4\sqrt{\frac{\pi^5}3} \approx 40.399\,319\,122\,003\,790\,0785 : \prod_{r=1}^6 \Gamma\left(\tfrac{r}{7}\right) = \frac{8\pi^3}{\sqrt 7} \approx 93.754\,168\,203\,582\,503\,7970 : \prod_{r=1}^7 \Gamma\left(\tfrac{r}{8}\right) = 4\sqrt{\pi^7} \approx 219.828\,778\,016\,957\,263\,6207 In general: : \prod_{r=1}^n \Gamma\left(\tfrac{r}{n+1}\right) = \sqrt{\frac{(2\pi)^n}{n+1}} From those products can be deduced other values, for example, from the former equations for \prod_{r=1}^3 \Gamma\left(\tfrac{r}{4}\right) , \Gamma\left(\tfrac{1}{4}\right) and \Gamma\left(\tfrac{2}{4}\right) , can be deduced: \Gamma\left(\tfrac{3}{4}\right) =\left(\tfrac{\pi} {2}\right) ^{\tfrac{1}{4}} {\operatorname{AGM}\left(\sqrt 2, 1\right)}^{\tfrac{1}{2}} Other rational relations include :\frac{\Gamma\left(\tfrac15\right)\Gamma\left(\tfrac{4}{15}\right)}{\Gamma\left(\tfrac13\right)\Gamma\left(\tfrac{2}{15}\right)} = \frac{\sqrt{2}\,\sqrt[20]{3}}{\sqrt[6]{5}\,\sqrt[4]{5-\frac{7}{\sqrt 5}+\sqrt{6-\frac{6}{\sqrt 5}}}} :\frac{\Gamma\left(\tfrac{1}{20}\right)\Gamma\left(\tfrac{9}{20}\right)}{\Gamma\left(\tfrac{3}{20}\right)\Gamma\left(\tfrac{7}{20}\right)} = \frac{\sqrt[4]{5}\left(1+\sqrt{5}\right)}{2} :\frac{\Gamma\left(\frac{1}{5}\right)^2}{\Gamma\left(\frac{1}{10}\right)\Gamma\left(\frac{3}{10}\right)} = \frac{\sqrt{1+\sqrt{5}}}{2^{\tfrac{7}{10}}\sqrt[4]{5}} and many more relations for where the denominator d divides 24 or 60.Raimundas Vidūnas, Expressions for Values of the Gamma Function Gamma quotients with algebraic values must be "poised" in the sense that the sum of arguments is the same (modulo 1) for the denominator and the numerator. Given proper convergence conditions, one can relate more general Barnes' integrals and generalized hypergeometric functions pFq in a similar way . ==Barnes lemmas== The first Barnes lemma states :\frac{1}{2\pi i} \int_{-i\infty}^{i\infty} \Gamma(a+s)\Gamma(b+s)\Gamma(c-s)\Gamma(d-s)ds =\frac{\Gamma(a+c)\Gamma(a+d)\Gamma(b+c)\Gamma(b+d)}{\Gamma(a+b+c+d)}. Euler Gamma Function Constants * * * * * Category:Gamma and related functions Category:Mathematical constants In mathematics and mathematical physics, Slater integrals are certain integrals of products of three spherical harmonics. In particular, where AGM() is the arithmetic–geometric mean, we have :\Gamma\left(\tfrac13\right) = \frac{2^\frac{7}{9}\cdot \pi^\frac23}{3^\frac{1}{12}\cdot \operatorname{AGM}\left(2,\sqrt{2+\sqrt{3}}\right)^\frac13} :\Gamma\left(\tfrac14\right) = \sqrt \frac{(2 \pi)^\frac32}{\operatorname{AGM}\left(\sqrt 2, 1\right)} :\Gamma\left(\tfrac16\right) = \frac{2^\frac{14}{9}\cdot 3^\frac13\cdot \pi^\frac56}{\operatorname{AGM}\left(1+\sqrt{3},\sqrt{8}\right)^\frac23}. For non-positive integers, the gamma function is not defined. Beta integral may refer to: *beta function *Barnes beta integral It may also be given in terms of the Barnes -function: :\Gamma(i) = \frac{G(1+i)}{G(i)} = e^{-\log G(i)+ \log G(1+i)}. Category:Gamma and related functions In mathematics, the multivariate gamma function Γp is a generalization of the gamma function. 
The gamma function with other complex arguments returns \Gamma(1 + i) = i\Gamma(i) \approx 0.498 - 0.155i, \Gamma(1 - i) = -i\Gamma(-i) \approx 0.498 + 0.155i, \Gamma(\tfrac12 + \tfrac12 i) \approx 0.818\,163\,9995 - 0.763\,313\,8287\, i, \Gamma(\tfrac12 - \tfrac12 i) \approx 0.818\,163\,9995 + 0.763\,313\,8287\, i, \Gamma(5 + 3i) \approx 0.016\,041\,8827 - 9.433\,293\,2898\, i, and \Gamma(5 - 3i) \approx 0.016\,041\,8827 + 9.433\,293\,2898\, i. The gamma function has a local minimum on the positive real axis at x_{\min} = 1.461\,632\,144\,968\,362\,341\,262\ldots with the value \Gamma\left(x_{\min}\right) = 0.885\,603\,194\,410\,888\ldots. In mathematics, a Barnes integral or Mellin-Barnes integral is a contour integral involving a product of gamma functions. Curiously enough, \Gamma(i) appears in the integral evaluation \int_0^{\pi/2}\{\cot(x)\}\,dx = 1-\frac{\pi}{2}+\frac{i}{2}\log\left(\frac{\pi}{\sinh(\pi)\Gamma(i)^2}\right) (reference: the webpage of István Mező).
3857
0.36
-0.029
3.2
0.1792
E
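For problem 5.2-II(a) above, Gamma(7)/(Gamma(4)Gamma(3)) = 60 and the integrand is the Beta(4,3) density, so the integral is the Beta(4,3) CDF at 0.4. A minimal Python check (illustrative only, assuming scipy is installed):

from scipy.stats import beta
from scipy.integrate import quad

direct = quad(lambda y: 60 * y ** 3 * (1 - y) ** 2, 0, 0.4)[0]
print(direct, beta.cdf(0.4, 4, 3))   # ~0.1792 both ways, the recorded answer E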
5.6-7. Let $X$ equal the maximal oxygen intake of a human on a treadmill, where the measurements are in milliliters of oxygen per minute per kilogram of weight. Assume that, for a particular population, the mean of $X$ is $\mu=$ 54.030 and the standard deviation is $\sigma=5.8$. Let $\bar{X}$ be the sample mean of a random sample of size $n=47$. Find $P(52.761 \leq \bar{X} \leq 54.453)$, approximately.
Consequently, : \Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95\, and we have a theoretical (stochastic) 95% confidence interval for μ. If X has a standard normal distribution, i.e. X ~ N(0,1), : \mathrm{P}(X > 1.96) \approx 0.025, \, : \mathrm{P}(X < 1.96) \approx 0.975, \, and as the normal distribution is symmetric, : \mathrm{P}(-1.96 < X < 1.96) \approx 0.95. For a finite population with equal probabilities at all points, we have \sqrt{\frac{1}{N}\sum_{i=1}^N\left(x_i - \bar{x}\right)^2} = \sqrt{\frac{1}{N}\left(\sum_{i=1}^N x_i^2\right) - {\bar{x}}^2} = \sqrt{\left(\frac{1}{N}\sum_{i=1}^N x_i^2\right) - \left(\frac{1}{N} \sum_{i=1}^{N} x_i\right)^2}, which means that the standard deviation is equal to the square root of the difference between the average of the squares of the values and the square of the average value. For a large number of independent identically distributed random variables \ X_1, ..., X_n\ , with finite variance, the average \ \overline{X}_n\ approximately has a normal distribution, no matter what the distribution of the \ X_i\ is, with the approximation roughly improving in proportion to \ \sqrt{n\ }. == Example == Suppose {X1, …, Xn} is an independent sample from a normally distributed population with unknown parameters mean μ and variance σ2. Assuming statistical independence of the values in the sample, the standard deviation of the mean is related to the standard deviation of the distribution by: \sigma_\text{mean} = \frac{1}{\sqrt{N}}\sigma where is the number of observations in the sample used to estimate the mean. Three standard deviations account for 99.73% of the sample population being studied, assuming the distribution is normal or bell-shaped (see the 68–95–99.7 rule, or the empirical rule, for more information). ==Definition of population values== Let μ be the expected value (the average) of random variable with density : \mu \equiv \operatorname{E}[X] = \int_{-\infty}^{+\infty} x f(x) \, \mathrm dx The standard deviation of is defined as \sigma \equiv \sqrt{\operatorname E\left[(X - \mu)^2\right]} = \sqrt{ \int_{-\infty}^{+\infty} (x-\mu)^2 f(x) \, \mathrm dx }, which can be shown to equal \sqrt{\operatorname E\left[X^2\right] - (\operatorname E[X])^2}. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. An estimate of the standard deviation for data taken to be approximately normal follows from the heuristic that 95% of the area under the normal curve lies roughly two standard deviations to either side of the mean, so that, with 95% probability the total range of values represents four standard deviations so that . This means that most men (about 68%, assuming a normal distribution) have a height within 3 inches of the mean (67–73 inches)one standard deviationand almost all men (about 95%) have a height within 6 inches of the mean (64–76 inches)two standard deviations. The sample standard deviation can be computed as: s(X) = \sqrt{\frac{N}{N-1}} \sqrt{\operatorname E\left[(X - \operatorname E[X])^2\right]}. The standard deviation is estimated as :CS \sqrt{\frac{B-\frac{A^2}{N}}{N-1}}=5.57 ==References== Category:Means The average of these 15 deviations from the assumed mean is therefore −30/15 = −2\. 
If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, , where is the arithmetic mean), about 95 percent are within two standard deviations (), and about 99.7 percent lie within three standard deviations (). For various values of , the percentage of values expected to lie in and outside the symmetric interval, , are as follows: thumb|Percentage within(z) thumb|z(Percentage within) Confidence interval Proportion within Proportion without Proportion without Confidence interval Percentage Percentage Fraction 25% 75% 3 / 4 % % 1 / 66.6667% 33.3333% 1 / 3 68% 32% 1 / 3.125 1 % % 1 / 80% 20% 1 / 5 90% 10% 1 / 10 95% 5% 1 / 20 2 % % 1 / 99% 1% 1 / 100 3 % % 1 / 370.398 99.9% 0.1% 1 / 99.99% 0.01% 1 / 4 % % 1 / 99.999% 0.001% 1 / 1 / 6.8 / % % 1 / 5 % % 1 / % % 1 / % % 1 / % % 1 / % % 1 / % % 1 / % % 1 / 7 % 1 / ==Relationship between standard deviation and mean== The mean and the standard deviation of a set of data are descriptive statistics usually reported together. Suppose we wanted to calculate a 95% confidence interval for μ. In this case, the standard deviation will be \sigma = \sqrt{\sum_{i=1}^N p_i(x_i - \mu)^2},\text{ where } \mu = \sum_{i=1}^N p_i x_i. ===Continuous random variable=== The standard deviation of a continuous real-valued random variable with probability density function is \sigma = \sqrt{\int_\mathbf{X} (x - \mu)^2\, p(x)\, \mathrm dx},\text{ where } \mu = \int_\mathbf{X} x\, p(x)\, \mathrm dx, and where the integrals are definite integrals taken for ranging over the set of possible values of the random variable . It has a mean of 1007 meters, and a standard deviation of 5 meters. For other distributions, the correct formula depends on the distribution, but a rule of thumb is to use the further refinement of the approximation: \hat\sigma = \sqrt{\frac{1}{N - 1.5 - \frac{1}{4}\gamma_2} \sum_{i=1}^N \left(x_i - \bar{x}\right)^2}, where denotes the population excess kurtosis. An approximation can be given by replacing with , yielding: \hat\sigma = \sqrt{\frac{1}{N - 1.5} \sum_{i=1}^N \left(x_i - \bar{x}\right)^2}, The error in this approximation decays quadratically (as ), and it is suited for all but the smallest samples or highest precision: for the bias is equal to 1.3%, and for the bias is already less than 0.1%. The result is that a 95% CI of the SD runs from 0.45 × SD to 31.9 × SD; the factors here are as follows: \Pr\left(q_\frac{\alpha}{2} < k \frac{s^2}{\sigma^2} < q_{1 - \frac{\alpha}{2}}\right) = 1 - \alpha, where q_p is the -th quantile of the chi-square distribution with degrees of freedom, and is the confidence level. Similarly for sample standard deviation, s = \sqrt{\frac{Ns_2 - s_1^2}{N(N - 1)}}. The proportion that is less than or equal to a number, , is given by the cumulative distribution function: \text{Proportion} \le x = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x - \mu}{\sigma\sqrt{2}}\right)\right] = \frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{z}{\sqrt{2}}\right)\right].
0.6247
34
-131.1
1.51
13.45
A
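For problem 5.6-7 above, the central limit theorem gives X-bar approximately N(54.030, 5.8^2/47), so the limits correspond to z = -1.5 and z = 0.5 and the tabled probability is 0.6915 - 0.0668 = 0.6247. A minimal Python check (illustrative only, assuming scipy is installed):

from scipy.stats import norm

mu, sd = 54.030, 5.8 / 47 ** 0.5
p = norm.cdf(54.453, mu, sd) - norm.cdf(52.761, mu, sd)
print(p)   # ~0.6246, agreeing with the recorded answer A (0.6247 from tables)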
5.3-19. Two components operate in parallel in a device, so the device fails when and only when both components fail. The lifetimes, $X_1$ and $X_2$, of the respective components are independent and identically distributed with an exponential distribution with $\theta=2$. The cost of operating the device is $Z=2 Y_1+Y_2$, where $Y_1=\min \left(X_1, X_2\right)$ and $Y_2=\max \left(X_1, X_2\right)$. Compute $E(Z)$.
It is based on an exponential failure distribution (see failure rate for a full derivation). Inputs to this process include unit and system failure rates. This failure rate changes throughout the life of the product. :* Provide necessary input to unit and system-level life cycle cost analyses. In semiconductor devices, problems in the device package may cause failures due to contamination, mechanical stress of the device, or open or short circuits. This degradation drastically limits the overall operating life of a relay or contactor to a range of perhaps 100,000 operations, a level representing 1% or less than the mechanical life expectancy of the same device. ==Semiconductor failures== Many failures result in generation of hot electrons. Life cycle cost studies determine the cost of a product over its entire life. Another important factor in estimating a NPPs lifetime cost derives from its capacity factor. A CMU 2007 study showed an estimated 3% mean AFR over 1–5 years based on replacement logs for a large sample of drives.. ==See also== * Failure rate * Frequency of exceedance == References == Category:Engineering failures Category:Rates Electronic components have a wide range of failure modes. Member of Optical Society of America, IEEE, "Automated Reliability Prediction, SR-332, Issue 3", January 2011; "Automated Reliability Prediction (ARPP), FD-ARPP-01, Issue 11", January 2011 Every product has a failure rate, λ which is the number of units failing per unit time. It is necessary to know how often different parts of the system are going to fail even for redundant components. If this part of the sample is the only option and is weaker than the bond itself, the sample will fail before the bond. ==See also== * Reliability (semiconductor) ==References== ==Further reading== *Herfst, R.W., Steeneken, P.G., Schmitz, J., Time and voltage dependence of dielectric charging in RF MEMS capacitive switches, (2007) Annual Proceedings – Reliability Physics (Symposium), art. no. 4227667, pp. 417–421. ==External links== * http://www.esda.org - ESD Association Category:Semiconductor device defects Category:Engineering failures This leaves a product with a useful life period during which failures occur randomly i.e., λ is constant, and finally a wear-out period, usually beyond the products useful life, where λ is increasing. == Definition of reliability == A practical definition of reliability is “the probability that a piece of equipment operating under specified conditions shall perform satisfactorily for a given period of time”. Gallium arsenide monolithic microwave integrated circuits can have these failures:Chapter 4. Gaudenzio Meneghesso from the University of Padova, Padova, Italy was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2013 for contributions to the reliability physics of compound semiconductors devices. ==References== Category:Fellow Members of the IEEE Category:Living people Category:Year of birth missing (living people) Category:Place of birth missing (living people) Analysis of the statistical properties of failures can give guidance in designs to establish a given level of reliability. Annualized failure rate (AFR) gives the estimated probability that a device or component will fail during a full year of use. In semiconductor devices, parasitic structures, irrelevant for normal operation, become important in the context of failures; they can be both a source and protection against failure. 
EDF has said its third-generation Flamanville 3 project will be delayed until 2018, due to "both structural and economic reasons," and the project's total cost had climbed to EUR 11 billion by 2012 (EDF raises French EPR reactor cost to over $11 billion, Reuters, Dec 3, 2012). During the ‘useful life period’ assuming a constant failure rate, MTBF is the inverse of the failure rate and the terms can be used interchangeably. == Importance of reliability prediction == Reliability predictions: :* Help assess the effect of product reliability on the maintenance activity and on the quantity of spare units required for acceptable field performance of any particular system. Parametric failures occur at intermediate discharge voltages and occur more often, with latent failures the most common.
0.000226
1.7
5275.0
+3.03
5
E
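For this exercise the answer can be obtained analytically: for i.i.d. exponential lifetimes with mean $\theta=2$, $E(Y_1)=\theta/2=1$ and $E(Y_2)=3\theta/2=3$, so $E(Z)=2\cdot 1+3=5$. The following is a minimal Python sketch, not part of the original exercise, that checks this by Monte Carlo simulation; all names are illustrative.

```python
import random

theta = 2.0          # exponential mean of each component lifetime
n = 1_000_000        # number of simulated devices

total = 0.0
for _ in range(n):
    x1 = random.expovariate(1 / theta)   # lifetime of component 1
    x2 = random.expovariate(1 / theta)   # lifetime of component 2
    y1, y2 = min(x1, x2), max(x1, x2)
    total += 2 * y1 + y2                 # operating cost Z = 2*Y1 + Y2

print(total / n)   # should be close to 5
```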
5.8-1. If $X$ is a random variable with mean 33 and variance 16, use Chebyshev's inequality to find (b) an upper bound for $P(|X-33| \geq 14)$.
Chebyshev's inequality states that at most approximately 11.11% of the distribution will lie at least three standard deviations away from the mean. Chebyshev's inequality is more general, stating that a minimum of just 75% of values must lie within two standard deviations of the mean and 88.89% within three standard deviations for a broad range of different probability distributions. To improve the sharpness of the bounds provided by Chebyshev's inequality a number of methods have been developed; for a review see eg.Savage, I. Richard. The second of these inequalities with is the Chebyshev bound. In this setting we can state the following: :General version of Chebyshev's inequality. \forall k > 0: \quad \Pr\left( \|X - \mu\|_\alpha \ge k \sigma_\alpha \right) \le \frac{1}{ k^2 }. However, the benefit of Chebyshev's inequality is that it can be applied more generally to get confidence bounds for ranges of standard deviations that do not depend on the number of samples. ====Semivariances==== An alternative method of obtaining sharper bounds is through the use of semivariances (partial variances). *Grechuk et al. developed a general method for deriving the best possible bounds in Chebyshev's inequality for any family of distributions, and any deviation risk measure in place of standard deviation. : P( | X - \mu | \ge k \sigma ) \le \frac{ 4 }{ 3k^2 } - \frac13 \quad \text{if} \quad k \le \sqrt{8/3}. The bounds are sharp for the following example: for any k ≥ 1, : X = \begin{cases} -1, & \text{with probability }\frac{1}{2k^2} \\\ 0, & \text{with probability }1 - \frac{1}{k^2} \\\ 1, & \text{with probability }\frac{1}{2k^2} \end{cases} For this distribution, the mean μ = 0 and the standard deviation σ = , so : \Pr(|X-\mu| \ge k\sigma) = \Pr(|X| \ge 1) = \frac{1}{k^2}. One way to prove Chebyshev's inequality is to apply Markov's inequality to the random variable with a = (kσ)2: : \Pr(|X - \mu| \geq k\sigma) = \Pr((X - \mu)^2 \geq k^2\sigma^2) \leq \frac{\mathbb{E}[(X - \mu)^2]}{k^2\sigma^2} = \frac{\sigma^2}{k^2\sigma^2} = \frac{1}{k^2}. By comparison, Chebyshev's inequality states that all but a 1/N fraction of the sample will lie within standard deviations of the mean. For k ≥ 1, n > 4 and assuming that the nth moment exists, this bound is tighter than Chebyshev's inequality. Chebyshev's inequality can now be written : \Pr(x \le m - k \sigma) \le \frac { 1 } { k^2 } \frac { \sigma_-^2 } { \sigma^2 }. The additional fraction of 4/9 present in these tail bounds lead to better confidence intervals than Chebyshev's inequality. The Chebyshev inequality for the distribution gives 95% and 99% confidence intervals of approximately ±4.472 standard deviations and ±10 standard deviations respectively. ====Samuelson's inequality==== Although Chebyshev's inequality is the best possible bound for an arbitrary distribution, this is not necessarily true for finite samples. * If 1\le r\le \sqrt{8/3}, the bound is tight when X=r with probability \frac{4}{3r^2}-\frac{1}{3} and is otherwise distributed uniformly in the interval \left[-\frac{r}{2},r\right]. === Specialization to mean and variance === If X has mean \mu and finite, non-zero variance \sigma^2, then taking \alpha=\mu and r=\lambda \sigma gives that for any \lambda > \sqrt{\frac{8}{3}} = 1.63299..., :\operatorname{Pr}(\left|X-\mu\right|\geq \lambda\sigma)\leq\frac{4}{9\lambda^2}. === Proof Sketch === For a relatively elementary proof see.Pukelsheim, F., 1994. The first provides a lower bound for the value of P(x). 
==Finite samples== === Univariate case === Saw et al extended Chebyshev's inequality to cases where the population mean and variance are not known and may not exist, but the sample mean and sample standard deviation from N samples are to be employed to bound the expected value of a new drawing from the same distribution. If we put : \sigma_u^2 = \max(\sigma_-^2, \sigma_+^2) , Chebyshev's inequality can be written : \Pr(| x \le m - k \sigma |) \le \frac 1 {k^2} \frac { \sigma_u^2 } { \sigma^2 } . If X is a unimodal distribution with mean μ and variance σ2, then the inequality states that : P( | X - \mu | \ge k \sigma ) \le \frac{ 4 }{ 9k^2 } \quad \text{if} \quad k \ge \sqrt{8/3} = 1.633. In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. In terms of the lower semivariance Chebyshev's inequality can be written : \Pr(x \le m - a \sigma_-) \le \frac { 1 } { a^2 }. The American Statistician, 56(3), pp.186-190 When bounding the event random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality.
0.444444444444444
135.36
'-0.1'
0.082
-45
D
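The bound follows directly from Chebyshev's inequality with $\sigma^2=16$ and deviation $14$: $P(|X-33|\geq 14)\leq 16/14^2 \approx 0.082$. A one-line Python check, offered only as an illustration of the arithmetic:

```python
sigma_sq = 16            # variance of X
d = 14                   # deviation from the mean
print(sigma_sq / d**2)   # Chebyshev upper bound, about 0.0816
```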
5.5-9. Suppose that the length of life in hours (say, $X$ ) of a light bulb manufactured by company $A$ is $N(800,14400)$ and the length of life in hours (say, $Y$ ) of a light bulb manufactured by company $B$ is $N(850,2500)$. One bulb is randomly selected from each company and is burned until "death." (a) Find the probability that the length of life of the bulb from company $A$ exceeds the length of life of the bulb from company $B$ by at least 15 hours.
Additionally, the consistency of the power delivery to the bulb and how little it fluctuates prevents the bulb's filaments from being damaged by dirty power (brown-outs cause damage to electrical systems). ==Other long-lasting light bulbs== ===Second=== The second-longest-lasting light bulb is in Fort Worth, Texas. The Livermore-Pleasanton Fire Department plans to house and maintain the bulb for the rest of its life, regardless of length. While it might seem astonishing that so many longest-lasting light bulbs have been so infrequently turned off, this is the precise reason for their longevity. The bulb has been on ever since, and may in fact have the longest continuous service in the world with other bulbs having interruptions in operation during their existence. === Fourth === The Fourth-longest-lasting light bulb was above the back door of Gasnick Supply, a New York City hardware store on Second Avenue, between 52nd and 53rd Streets. The Mangum Light Bulb burned out on Friday, December 13, 2019. ===Sixth=== The sixth-longest-lasting light bulb was in a washroom at the Martin & Newby Electrical Shop in Ipswich, England. The Centennial Light is the world's longest-lasting light bulb, burning since 1901, and almost never turned off. Another reason for the longevity of bulbs is the size, quality and material of the filament. This indicates that the broken bulb must be one of the last three (B). The store, as well as the entire half-block on which it stood, was razed in 2003. ===Fifth=== The fifth-longest-lasting light bulb was located in a fire house in Mangum, Oklahoma. Due to its longevity, the bulb has been noted by The Guinness Book of World Records,. This is a list of the longest-lasting incandescent light bulbs. ==Longest- lasting light bulb== The world's longest-lasting light bulb is the Centennial Light located at 4550 East Avenue, Livermore, California. Research continued with inoculated canning pack studies that were published by the NCA in 1968. ==Mathematical formulas== Thermal death time can be determined one of two ways: 1) by using graphs or 2) by using mathematical formulas. ===Graphical method=== This is usually expressed in minutes at the temperature of . It burned out in 2001.Martin & Newby Bulb ===Seventh=== The seventh-longest- lasting light bulb is located in the Cinema Napoleón in Río Chico, Miranda, Venezuela. Thermal death time is how long it takes to kill a specific bacterium at a specific temperature. It is titled "A Million Hours of Service". thumb|right|130px|The pendant light at Fire Station #6 in which the bulb is installed.|alt=A photo of the pendant light at Fire Station #6 in which the bulb is installed.In 1976, the fire department moved to Fire Station #6 with the bulb; the bulb socket's cord was severed for fear that unscrewing the bulb could damage it. The bulb, known as the Eternal Light, was credited as being the longest-lasting bulb in the 1970 edition of the Guinness Book of World Records, two years before the discovery of the Livermore bulb.Livermore's Centennial Light Guinness Book of World Records The bulb was originally at the Byers Opera House, and was installed by a stage-hand, Barry Burke, on , above the backstage door. thumb|400px|An illustration of the lightbulb problem, where one is searching for a broken bulb among six lightbulbs. The objective is to find the broken bulb using the smallest number of tests (where a test is when some of the bulbs are connected to a power supply). 
The wagon is now part of a museum, and the light bulb is in use several times per week. ===Third=== The third longest lasting light bulb began operation in 1929-30 when BC Electric's Ruskin Generating Station (British Columbia Canada) commenced service. Dunstan contacted the Guinness Book of World Records, Ripley's Believe It or Not, and General Electric, who all confirmed it as the longest-lasting bulb known in existence. The bulb is cared for by the Centennial Light Bulb Committee, a partnership of the Livermore-Pleasanton Fire Department, Livermore Heritage Guild, Lawrence Livermore National Laboratories, and Sandia National Laboratories. Bulb customer numbers Date Customers (approx.) Source January 2017 15,000 August 2017 100,000 January 2018 200,000 300,000 August 2018 670,000 January 2019 870,000 March 2019 1,130,000 November 2021 1,700,000 ==References== Category:2022 mergers and acquisitions Category:Electric power companies of the United Kingdom Category:Utilities of the United Kingdom Category:Companies based in London Category:British companies established in 2015 Category:Companies that have entered administration in the United Kingdom
0.0547
27.211
435.0
7
0.3085
E
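Here $X-Y$ is normal with mean $800-850=-50$ and variance $14400+2500=16900$, so $P(X-Y\geq 15)=P(Z\geq 65/130)=P(Z\geq 0.5)\approx 0.3085$. A small Python check (an illustrative sketch, not part of the original exercise) using the standard library's normal distribution:

```python
from statistics import NormalDist

# X - Y is normal with mean -50 and standard deviation sqrt(16900) = 130
diff = NormalDist(mu=-50, sigma=16900 ** 0.5)
print(1 - diff.cdf(15))   # about 0.3085
```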
An urn contains 10 red and 10 white balls. The balls are drawn from the urn at random, one at a time. Find the probability that the fourth white ball is the fourth ball drawn if the sampling is done with replacement.
Assume that an urn contains m_1 red balls and m_2 white balls, totalling N = m_1 + m_2 balls. n balls are drawn at random from the urn one by one without replacement. The probability that the red ball is not taken in the third draw, under the condition that it was not taken in the first two draws, is 998/1998 ≈ . While black balls are set aside after a draw (non-replacement), white balls are returned to the urn after a draw (replacement). The probability that the red ball is not taken in the second draw, under the condition that it was not taken in the first draw, is 999/1999 ≈ . The probability that a particular ball is taken in a particular draw depends not only on its own weight, but also on the total weight of the competing balls that remain in the urn at that moment. The probability that the red ball is not taken in the first draw is 1000/2000 = . This is referred to as "drawing without replacement", by opposition to "drawing with replacement". * multivariate hypergeometric distribution: the balls are not returned to the urn once extracted, but with balls of more than two colors. * geometric distribution: number of draws before the first successful (correctly colored) draw. When the mutator is drawn it is replaced along with an additional ball of an entirely new colour. * hypergeometric distribution: the balls are not returned to the urn once extracted. See Pólya urn model. * binomial distribution: the distribution of the number of successful draws (trials), i.e. extraction of white balls, given n draws with replacement in an urn with black and white balls. One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.Urn Model: Simple Definition, Examples and Applications — The basic urn model Possible questions that can be answered in this model are: * Can I infer the proportion of white and black balls from n observations? One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. * Mixed replacement/non-replacement: the urn contains black and white balls. (A variation both on the first and the second question) ==Examples of urn problems== * beta- binomial distribution: as above, except that every time a ball is observed, an additional ball of the same color is added to the urn. Continuing in this way, we can calculate that the probability of not taking the red ball in n draws is approximately 2−n as long as n is small compared to N. We want to calculate the probability that the red ball is not taken. * Pólya urn: each time a ball of a particular colour is drawn, it is replaced along with an additional ball of the same colour. In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. * Occupancy problem: the distribution of the number of occupied urns after the random assignment of k balls into n urns, related to the coupon collector's problem and birthday problem. thumb|Two urns containing white and red balls. The probability that the second ball picked is red depends on whether the first ball was red or white. Here the draws are independent and the probabilities are therefore not multiplied together. 
* The probability of taking a particular item at a particular draw is equal to its fraction of the total "weight" of all items that have not yet been taken at that moment.
0.3359
0.0625
0.0
1.39
524
B
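With replacement, every draw is white with probability $10/20 = 1/2$ independently, and "the fourth white ball is the fourth ball drawn" means the first four draws are all white, giving $(1/2)^4 = 0.0625$. A trivial Python check, included only as an illustration:

```python
p_white = 10 / 20      # probability of white on any single draw (with replacement)
print(p_white ** 4)    # first four draws all white -> 0.0625
```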
If $P(A)=0.8$, $P(B)=0.5$, and $P(A \cup B)=0.9$, what is $P(A \cap B)$?
* This results in P(A \mid B) = P(A \cap B)/P(B) whenever P(B) > 0 and 0 otherwise. This can also be understood as the fraction of probability B that intersects with A, or the ratio of the probabilities of both events happening to the "given" one happening (how many times A occurs rather than not assuming B has occurred): P(A \mid B) = \frac{P(A \cap B)}{P(B)}. We have P(A\mid B)=\tfrac{P(A \cap B)}{P(B)} = \tfrac{3/36}{10/36}=\tfrac{3}{10}, as seen in the table. == Use in inference == In statistical inference, the conditional probability is an update of the probability of an event based on new information. The conditional probability can be found by the quotient of the probability of the joint intersection of events and (P(A \cap B))—the probability at which A and B occur together, although not necessarily occurring at the same time—and the probability of : :P(A \mid B) = \frac{P(A \cap B)}{P(B)}. * Without the knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A) * The probability of A knowing that event B has or will have occurred, will be the probability of A \cap B relative to P(B), the probability that B has occurred. In this event, the event B can be analyzed by a conditional probability with respect to A. We denote the quantity \frac{P(A \cap B)}{P(B)} as P(A\mid B) and call it the "conditional probability of given ." The technique is wrong because the eight events whose probabilities got added are not mutually exclusive. Moreover, this "multiplication rule" can be practically useful in computing the probability of A \cap B and introduces a symmetry with the summation axiom for Poincaré Formula: :P(A \cup B) = P(A) + P(B) - P(A \cap B) :Thus the equations can be combined to find a new representation of the : : P(A \cap B)= P(A) + P(B) - P(A \cup B) = P(A \mid B)P(B) : P(A \cup B)= {P(A) + P(B) - P(A \mid B){P(B)}} ==== As the probability of a conditional event ==== Conditional probability can be defined as the probability of a conditional event A_B. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}. For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. This shows that P(A|B) P(B) = P(B|A) P(A) i.e. P(A|B) = . It can be shown that :P(A_B)= \frac{P(A \cap B)}{P(B)} which meets the Kolmogorov definition of conditional probability. === Conditioning on an event of probability zero === If P(B)=0 , then according to the definition, P(A \mid B) is undefined. For a value in and an event , the conditional probability is given by P(A \mid X=x) . These probabilities are linked through the law of total probability: :P(A) = \sum_n P(A \cap B_n) = \sum_n P(A\mid B_n)P(B_n). where the events (B_n) form a countable partition of \Omega. In probability theory, the complement of any event A is the event [not A], i.e. the event that A does not occur.Robert R. Johnson, Patricia J. Kuby: Elementary Statistics. It may be tempting to say that : Pr(["1" on 1st trial] or ["1" on second trial] or ... or ["1" on 8th trial]) := Pr("1" on 1st trial) + Pr("1" on second trial) + ... + P("1" on 8th trial) := 1/6 + 1/6 + ... + 1/6 := 8/6 := 1.3333... Equivalently, the probabilities of an event and its complement must always total to 1. Therefore, the probability of an event's complement must be unity minus the probability of the event. That is, for an event A, :P(A^c) = 1 - P(A). 
That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A). Similar reasoning can be used to show that P(Ā|B) = etc.
+107
479
0.9
41.40
10.7598
C
Suppose that the alleles for eye color for a certain male fruit fly are $(R, W)$ and the alleles for eye color for the mating female fruit fly are $(R, W)$, where $R$ and $W$ represent red and white, respectively. Their offspring receive one allele for eye color from each parent. Assume that each of the four possible outcomes has equal probability. If an offspring ends up with either two white alleles or one red and one white allele for eye color, its eyes will look white. Given that an offspring's eyes look white, what is the conditional probability that it has two white alleles for eye color?
When assessing phenotype from this, "3" of the offspring have "Brown" eyes and only one offspring has "green" eyes. (3 are "B?" thumb|Punnett squares for each combination of parents' colour vision status giving probabilities of their offsprings' status, each cell having 25% probability in theory. However, when they crossed a red-eyed male with a white-eyed female, the male offspring had white eyes while the female offspring had red eyes. The probability of an individual offspring's having the genotype BB is 25%, Bb is 50%, and bb is 25%. As every individual has a 50% chance of passing on an allele to the next generation, the formula depends on 0.5 raised to the power of however many generations separate the individual from the common ancestor of its parents, on both the father's side and mother's side. These tables can be used to examine the genotypical outcome probabilities of the offspring of a single trait (allele), or when crossing multiple traits from the parents. Via principles of dominant and recessive alleles, they could then (perhaps after cross-breeding the offspring as well) make an inference as to which sex chromosome contains the gene Z, if either in fact did. ==Reciprocal cross in practice== Given that the trait of interest is either autosomal or sex-linked and follows by either complete dominance or incomplete dominance, a reciprocal cross following two generations will determine the mode of inheritance of the trait. ===White-eye mutation in Drosophila melanogaster=== Sex linkage was first reported by Doncaster and Raynor in 1906Doncaster L and Raynor GH (1906). The ratio 9:3:3:1 is the expected outcome when crossing two double-heterozygous parents with unlinked genes. Mutant Male x Wild- type Female ( X(mut)Y x X(wt)X(wt) ) X (wt) X (wt) X (mut) X (mut) X (wt) Red eye Female X (mut) X (wt) Red eye Female Y X (wt) Y Red eye Male X (wt) Y Red eye Male As shown in Table 1, the male offspring are white-eyed and the female offspring are red-eyed. The Punnett square works, however, only if the genes are independent of each other, which means that having a particular allele of gene "A" does not alter the probability of possessing an allele of gene "B". In this example, both parents have the genotype Bb. A represents the dominant allele for color (yellow), while a represents the recessive allele (green). He found that a white-eyed male crossed with a red-eyed female produced only red-eyed offspring. and 1 is "bb") B b B BB Bb b Bb bb The way in which the B and b alleles interact with each other to affect the appearance of the offspring depends on how the gene products (proteins) interact (see Mendelian inheritance). The reason was that the white eye allele is sex-linked (more specifically, on the X chromosome) and recessive. RA Ra rA ra RA RRAA RRAa RrAA RrAa Ra RRAa RRaa RrAa Rraa rA RrAA RrAa rrAA rrAa ra RrAa Rraa rrAa rraa Since dominant traits mask recessive traits (assuming no epistasis), there are nine combinations that have the phenotype round yellow, three that are round green, three that are wrinkled yellow, and one that is wrinkled green. The female offspring are carrying the mutant white-eye allele X(mut), but do not express it phenotypically because it is recessive. Next, they would cross an A-trait female with a Z-trait male and observe the offspring. As stated above, the phenotypic ratio is expected to be 9:3:3:1 if crossing unlinked genes from two double-heterozygotes. 
In genetics, a gametic phase represents the original allelic combinations that a diploid individual inherits from both parents. As shown in Table 2, all offspring are Red-eyed. The diagram is used by biologists to determine the probability of an offspring having a particular genotype.
273
0.25
0.36
0.33333333
7
D
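The Punnett square for $(R,W)\times(R,W)$ gives the four equally likely genotypes $RR$, $RW$, $WR$, $WW$; white-looking eyes correspond to the three outcomes containing a $W$, so $P(WW \mid \text{white-looking}) = (1/4)/(3/4) = 1/3$. A short Python enumeration, included only as an illustrative check:

```python
from itertools import product

# one allele from each (R, W) parent; the four outcomes are equally likely
offspring = list(product("RW", "RW"))
white_looking = [g for g in offspring if "W" in g]        # RW, WR, or WW
both_white = [g for g in white_looking if g == ("W", "W")]
print(len(both_white) / len(white_looking))               # 1/3 ~ 0.3333
```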
Consider the trial on which a 3 is first observed in successive rolls of a six-sided die. Let $A$ be the event that 3 is observed on the first trial. Let $B$ be the event that at least two trials are required to observe a 3. Assuming that each side has probability $1 / 6$, find (a) $P(A)$.
So the likelihood of B beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {1 \over 2} \right) = {1 \over 2} Die C beats D two-thirds of the time but beats B only one-third of the time. So the likelihood of A beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {4 \over 9} \right) = {13 \over 27} Similarly, die B beats C two-thirds of the time but beats A only one-third of the time. With the second set of dice, die C′ will win with the lowest probability () and dice A′ and B′ will each win with a probability of . ==Variations== ===Efron's dice=== Efron's dice are a set of four intransitive dice invented by Bradley Efron. thumb|320px|Representation of Efron's dice The four dice A, B, C, D have the following numbers on their six faces: * A: 4, 4, 4, 4, 0, 0 * B: 3, 3, 3, 3, 3, 3 * C: 6, 6, 2, 2, 2, 2 * D: 5, 5, 5, 1, 1, 1 ====Probabilities==== Each die is beaten by the previous die in the list, with a probability of : :P(A>B) = P(B>C) = P(C>D) = P(D>A) = {2 \over 3} B's value is constant; A beats it on rolls because four of its six faces are higher. P(C>D) can be calculated by summing conditional probabilities for two events: * C rolls 6 (probability ); wins regardless of D (probability 1) * C rolls 2 (probability ); wins only if D rolls 1 (probability ) The total probability of win for C is therefore :\left( {1 \over 3}\times1 \right) + \left( {2 \over 3}\times{1 \over 2} \right) = {2 \over 3} With a similar calculation, the probability of D winning over A is :\left( {1 \over 2}\times1 \right) + \left( {1 \over 2}\times{1 \over 3} \right) = {2 \over 3} ====Best overall die==== The four dice have unequal probabilities of beating a die chosen at random from the remaining three: As proven above, die A beats B two-thirds of the time but beats D only one-third of the time. Consider a set of three dice, III, IV and V such that * die III has sides 1, 2, 5, 6, 7, 9 * die IV has sides 1, 3, 4, 5, 8, 9 * die V has sides 2, 3, 4, 6, 7, 8 Then: * the probability that III rolls a higher number than IV is * the probability that IV rolls a higher number than V is * the probability that V rolls a higher number than III is === Three-dice set with minimal alterations to standard dice === The following intransitive dice have only a few differences compared to 1 through 6 standard dice: * as with standard dice, the total number of pips is always 21 * as with standard dice, the sides only carry pip numbers between 1 and 6 * faces with the same number of pips occur a maximum of twice per dice * only two sides on each die have numbers different from standard dice: ** A: 1, 1, 3, 5, 5, 6 ** B: 2, 3, 3, 4, 4, 5 ** C: 1, 2, 2, 4, 6, 6 Like Miwin’s set, the probability of A winning versus B (or B vs. C, C vs. So the likelihood of C beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {5 \over 9} \right) = {14 \over 27} Finally, die D beats A two-thirds of the time but beats C only one-third of the time. The probability of die D beating B is (only when D rolls 5). He imagined that B and C toss their dice in groups of six, and said that A was most favorable because it required a 6 in only one toss, while B and C required a 6 in each of their tosses. The probability of die B beating D is (only when D rolls 1). Die 1 1 4 Die 2 2 3 === Three players === An optimal and permutation- fair solution for 3 six-sided dice was found by Robert Ford in 2010. 
Player 1 chooses die A Player 2 chooses die C Player 1 chooses die B Player 2 chooses die A Player 1 chooses die C Player 2 chooses die B 2 4 9 1 6 8 3 5 7 3 C A A 2 A B B 1 C C C 5 C C A 4 A B B 6 B B C 7 C C A 9 A A A 8 B B B == Comment regarding the equivalency of intransitive dice == Though the three intransitive dice A, B, C (first set of dice) * A: 2, 2, 6, 6, 7, 7 * B: 1, 1, 5, 5, 9, 9 * C: 3, 3, 4, 4, 8, 8 P(A > B) = P(B > C) = P(C > A) = and the three intransitive dice A′, B′, C′ (second set of dice) * A′: 2, 2, 4, 4, 9, 9 * B′: 1, 1, 6, 6, 8, 8 * C′: 3, 3, 5, 5, 7, 7 P(A′ > B′) = P(B′ > C′) = P(C′ > A′) = win against each other with equal probability they are not equivalent. ;Set 2: * A: 3, 3, 3, 6 * B: 2, 2, 5, 5 * C: 1, 4, 4, 4 P(A > B) = P(B > C) = , P(C > A) = 9/16 == Intransitive 12-sided dice == In analogy to the intransitive six-sided dice, there are also dodecahedra which serve as intransitive twelve-sided dice. Consequently, for arbitrarily chosen two dice there is a third one that beats both of them. The following tables show all possible outcomes for all three pairs of dice. So the likelihood of D beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {1 \over 2} \right) = {1 \over 2} Therefore, the best overall die is C with a probability of winning of 0.5185. With the first set of dice, die B will win with the highest probability () and dice A and C will each win with a probability of . The probability of die C beating A is . This explanation assumes that a group does not produce more than one 6, so it does not actually correspond to the original problem. ==Generalizations== A natural generalization of the problem is to consider n non-necessarily fair dice, with p the probability that each die will select the 6 face when thrown (notice that actually the number of faces of the dice and which face should be selected are irrelevant). The probability that A rolls a higher number than B, the probability that B rolls higher than C, and the probability that C rolls higher than A are all , so this set of dice is intransitive. With adjacent pairs, one die's probability of winning is 2/3. When thrown or rolled, the die comes to rest showing a random integer from one to six on its upper surface, with each value being equally likely. The probability of die A beating C is (A must roll 4 and C must roll 2).
35.91
2.19
0.166666666
2
0.4908
C
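$P(A)$ is simply the probability that the first roll shows a 3, namely $1/6 \approx 0.1667$. A minimal Python simulation of this single-roll event, given only as an illustration:

```python
import random

n = 600_000
hits = sum(1 for _ in range(n) if random.randint(1, 6) == 3)  # 3 on the first roll
print(hits / n)   # about 1/6 = 0.1667
```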
An urn contains four balls numbered 1 through 4 . The balls are selected one at a time without replacement. A match occurs if the ball numbered $m$ is the $m$ th ball selected. Let the event $A_i$ denote a match on the $i$ th draw, $i=1,2,3,4$. Extend this exercise so that there are $n$ balls in the urn. What is the limit of this probability as $n$ increases without bound?
The probability that a particular ball is taken in a particular draw depends not only on its own weight, but also on the total weight of the competing balls that remain in the urn at that moment. Assume that an urn contains m_1 red balls and m_2 white balls, totalling N = m_1 + m_2 balls. n balls are drawn at random from the urn one by one without replacement. Continuing in this way, we can calculate that the probability of not taking the red ball in n draws is approximately 2−n as long as n is small compared to N. * Occupancy problem: the distribution of the number of occupied urns after the random assignment of k balls into n urns, related to the coupon collector's problem and birthday problem. In other words, the probability of not taking a very heavy ball in n draws falls almost exponentially with n in Wallenius' model. Hence, the number of total balls in the urn grows. The probability that the red ball is not taken in the first draw is 1000/2000 = . This is referred to as "drawing without replacement", by opposition to "drawing with replacement". * multivariate hypergeometric distribution: the balls are not returned to the urn once extracted, but with balls of more than two colors. * geometric distribution: number of draws before the first successful (correctly colored) draw. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. The probability that the red ball is not taken in the second draw, under the condition that it was not taken in the first draw, is 999/1999 ≈ . One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.Urn Model: Simple Definition, Examples and Applications — The basic urn model Possible questions that can be answered in this model are: * Can I infer the proportion of white and black balls from n observations? While black balls are set aside after a draw (non-replacement), white balls are returned to the urn after a draw (replacement). And the weight of the competing balls depends on the outcomes of all preceding draws. (A variation both on the first and the second question) ==Examples of urn problems== * beta- binomial distribution: as above, except that every time a ball is observed, an additional ball of the same color is added to the urn. The probability that the red ball is not taken in the third draw, under the condition that it was not taken in the first two draws, is 998/1998 ≈ . See Pólya urn model. * binomial distribution: the distribution of the number of successful draws (trials), i.e. extraction of white balls, given n draws with replacement in an urn with black and white balls. In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. When the mutator is drawn it is replaced along with an additional ball of an entirely new colour. * hypergeometric distribution: the balls are not returned to the urn once extracted. What is the distribution of the number of black balls drawn after m draws? * multinomial distribution: there are balls of more than two colors. Hence, the number of total marbles in the urn decreases. The probability of not taking the heavy red ball in Fisher's model is approximately 1/(n + 1). 
* The probability of taking a particular item at a particular draw is equal to its fraction of the total "weight" of all items that have not yet been taken at that moment.
0.5
425
773.0
1
0.6321205588
E
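By inclusion–exclusion, the probability of at least one match with $n$ balls is $1-\sum_{k=0}^{n}(-1)^k/k!$, which tends to $1-e^{-1}\approx 0.6321$ as $n\to\infty$. A short Python sketch (illustrative only, not part of the original exercise) that evaluates the formula for a few $n$ and compares with the limit:

```python
from math import factorial, e

def p_at_least_one_match(n):
    # inclusion-exclusion: P(at least one match) = 1 - sum_{k=0}^{n} (-1)^k / k!
    return 1 - sum((-1) ** k / factorial(k) for k in range(n + 1))

for n in (4, 10, 50):
    print(n, p_at_least_one_match(n))
print("limit:", 1 - 1 / e)   # 0.6321205588...
```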
Of a group of patients having injuries, $28 \%$ visit both a physical therapist and a chiropractor and $8 \%$ visit neither. Say that the probability of visiting a physical therapist exceeds the probability of visiting a chiropractor by $16 \%$. What is the probability of a randomly selected person from this group visiting a physical therapist?
Finally, the principle of conditional probability implies that is equal to the product of these individual probabilities: The terms of equation () can be collected to arrive at: Evaluating equation () gives Therefore, (50.7297%). Further results showed that psychology students and women did better on the task than casino visitors/personnel or men, but were less confident about their estimates. ===Reverse problem=== The reverse problem is to find, for a fixed probability , the greatest for which the probability is smaller than the given , or the smallest for which the probability is greater than the given . Consequently, the desired probability is . The first few values are as follows: >50% probability of 3 people sharing a birthday - 88 people; >50% probability of 4 people sharing a birthday - 187 people . ===Probability of a shared birthday (collision)=== The birthday problem can be generalized as follows: :Given random integers drawn from a discrete uniform distribution with range , what is the probability that at least two numbers are the same? ( gives the usual birthday problem.) In the standard case of , substituting gives about 6.1%, which is less than 1 chance in 16. The following table shows the probability for some other values of (for this table, the existence of leap years is ignored, and each birthday is assumed to be equally likely): thumb|right|upright=1.4|The probability that no two people share a birthday in a group of people. This is a list of people in the chiropractic profession, comprising chiropractors and other people who have been notably connected with the profession. thumb|upright=1.3|The computed probability of at least two people sharing a birthday versus the number of people In probability theory, the birthday problem asks for the probability that, in a set of randomly chosen people, at least two will share a birthday. Then, because and are the only two possibilities and are also mutually exclusive, Here is the calculation of for 23 people. The answer is 20—if there is a prize for first match, the best position in line is 20th. ===Same birthday as you=== thumb|right|upright=1.4|Comparing = probability of a birthday match with = probability of matching your birthday In the birthday problem, neither of the two people is chosen in advance. For example, the usual 50% probability value is realized for both a 32-member group of 16 men and 16 women and a 49-member group of 43 women and 6 men. ==Other birthday problems== ===First match=== A related question is, as people enter a room one at a time, which one is most likely to be the first to have the same birthday as someone already in the room? Where the event is the probability of finding a group of 23 people with at least two people sharing same birthday, . is the ratio of the total number of birthdays, V_{nr}, without repetitions and order matters (e.g. for a group of 2 people, mm/dd birthday format, one possible outcome is \left \\{ \left \\{01/02,05/20\right \\},\left \\{05/20,01/02\right \\},\left \\{10/02,08/04\right\\},...\right \\} divided by the total number of birthdays with repetition and order matters, V_{t}, as it is the total space of outcomes from the experiment (e.g. 2 people, one possible outcome is \left \\{ \left \\{01/02,01/02\right \\},\left \\{10/02,08/04\right \\},...\right \\}. This number is significantly higher than : the reason is that it is likely that there are some birthday matches among the other people in the room. 
=== Number of people with a shared birthday === For any one person in a group of n people the probability that he or she shares his birthday with someone else is q(n-1;d) , as explained above. *List Chiropractors Category:Chiropractic And for the group of 23 people, the probability of sharing is :p(23) \approx 1 - \left(\frac{364}{365}\right)^\binom{23}{2} = 1 - \left(\frac{364}{365}\right)^{253} \approx 0.500477 . ===Poisson approximation=== Applying the Poisson approximation for the binomial on the group of 23 people, :\operatorname{Poi}\left(\frac{\binom{23}{2}}{365}\right) =\operatorname{Poi}\left(\frac{253}{365}\right) \approx \operatorname{Poi}(0.6932) so :\Pr(X>0)=1-\Pr(X=0) \approx 1-e^{-0.6932} \approx 1-0.499998=0.500002. Probability in the Engineering and Informational Sciences is an international journal published by Cambridge University Press. Applied probability is the application of probability theory to statistical problems and other scientific and engineering domains. ==Scope== Much research involving probability is done under the auspices of applied probability. In short can be multiplied by itself times, which gives us :\bar p(n) \approx \left(\frac{364}{365}\right)^\binom{n}{2}. Therefore, its probability is : p(n) = 1 - \bar p(n). The formula :n(d)=\left\lceil \sqrt{2d\ln2}+\frac{3-2\ln2}{6}+\frac{9-4(\ln2)^2}{72\sqrt{2d\ln2}}-\frac{2(\ln2)^2}{135d}\right\rceil holds for all , and it is conjectured that this formula holds for all . ===More than two people sharing a birthday=== It is possible to extend the problem to ask how many people in a group are necessary for there to be a greater than 50% probability that at least 3, 4, 5, etc. of the group share the same birthday. Note that the vertical scale is logarithmic (each step down is 1020 times less likely). : 1 0.0% 5 2.7% 10 11.7% 20 41.1% 23 50.7% 30 70.6% 40 89.1% 50 97.0% 60 99.4% 70 99.9% 75 99.97% 100 % 200 % 300 (100 − )% 350 (100 − )% 365 (100 − )% ≥ 366 100% ==Approximations== thumb|right|upright=1.4|Graphs showing the approximate probabilities of at least two people sharing a birthday () and its complementary event () thumb|right|upright=1.4|A graph showing the accuracy of the approximation () The Taylor series expansion of the exponential function (the constant ) : e^x = 1 + x + \frac{x^2}{2!}+\cdots provides a first-order approximation for for |x| \ll 1: : e^x \approx 1 + x. The birthday problem has been generalized to consider an arbitrary number of types.M. C. Wendl (2003) Collision Probability Between Sets of Random Variables, Statistics and Probability Letters 64(3), 249–254.
35.64
0.68
8.87
0.166666666
0.1353
B
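Writing $p=P(\text{PT})$ and $c=P(\text{C})$: $P(\text{PT}\cup\text{C})=1-0.08=0.92$, so $p+c-0.28=0.92$ and $p-c=0.16$, giving $p=0.68$. A tiny Python check of this arithmetic, included only as an illustration:

```python
# p = P(physical therapist), c = P(chiropractor)
# p + c - 0.28 = 1 - 0.08   and   p - c = 0.16
s = 1 - 0.08 + 0.28    # p + c = 1.20
d = 0.16               # p - c
p = (s + d) / 2
print(p)               # 0.68
```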
A doctor is concerned about the relationship between blood pressure and irregular heartbeats. Among her patients, she classifies blood pressures as high, normal, or low and heartbeats as regular or irregular and finds that (a) 16\% have high blood pressure; (b) 19\% have low blood pressure; (c) $17 \%$ have an irregular heartbeat; (d) of those with an irregular heartbeat, $35 \%$ have high blood pressure; and (e) of those with normal blood pressure, $11 \%$ have an irregular heartbeat. What percentage of her patients have a regular heartbeat and low blood pressure?
Clinicians consider a pulse pressure of 60 mmHg to likely be associated with diseases, with a pulse pressure of 50 mmHg or more increasing the risk of cardiovascular disease. == Calculation == Pulse pressure is calculated as the difference between the systolic blood pressure and the diastolic blood pressure. It is measured by right heart catheterization or may be estimated by transthoracic echocardiography Normal pulmonary artery pressure is between 8mmHg -20 mm Hg at rest. : e.g. normal: 15mmHg - 8mmHg = 7mmHg : high: 25mmHg - 10mmHg = 15mmHg ==Values and variation== ===Low (narrow) pulse pressure === A pulse pressure is considered abnormally low if it is less than 25% of the systolic value. Many studies further indicate a J-shaped relationship between blood pressure and mortality, whereby both very high and very low levels are associated with notable increases in mortality. However, pulse pressure has usually been found to be a stronger independent predictor of cardiovascular events, especially in older populations, than has systolic, diastolic, or mean arterial pressure. The systemic pulse pressure is approximately proportional to stroke volume, or the amount of blood ejected from the left ventricle during systole (pump action) and inversely proportional to the compliance (similar to elasticity) of the aorta. * Systemic pulse pressure (usually measured at upper arm artery) = Psystolic \- Pdiastolic :e.g. normal 120mmHg - 80mmHg = 40mmHg : low: 107mmHg - 80mmHg = 27mmHg : high: 160mmHg - 80mmHg = 80mmHg * Pulmonary pulse pressure is normally much lower than systemic blood pressure due to the higher compliance of the pulmonary system compared to the arterial circulation. Pulse pressure is the difference between systolic and diastolic blood pressure. Proportion can be written as \frac{a}{b}=\frac{c}{d}, where ratios are expressed as fractions. Readings greater than or equal to 130/80 mm Hg are considered hypertension by ACC/AHA and if greater than or equal to 140/90 mm Hg by ESC/ESH. Clonidine (decrease of 6.3 mm Hg), diltiazem (decrease of 5.5 mm Hg), and prazosin (decrease of 5.0 mm Hg) were intermediate. == See also == * Mean arterial pressure * Cold pressor test * Hypertension * Prehypertension * Antihypertensive * Patent ductus arteriosus == References == Category:Medical signs Category:Cardiovascular physiology Heart is a biweekly peer-reviewed medical journal covering all areas of cardiovascular medicine and surgery. Thus, blood pressures above normal can go undiagnosed for a long period of time. ==Causes== Elevated blood pressure develops gradually over many years usually without a specific identifiable cause. Normal pulse pressure is around 40 mmHg. If the aorta becomes rigid because of disorders, such as arteriosclerosis or atherosclerosis, the pulse pressure would be high due to less compliance of the aorta. The ACC/AHA define elevated blood pressure as readings with a systolic pressure from 120 to 129 mm Hg and a diastolic pressure under 80 mm Hg, and the European Society of Cardiology and European Society of Hypertension (ESC/ESH) define "high normal blood pressure" as readings with a systolic pressure from 130 to 139 mm Hg and a diastolic pressure 85-89 mm Hg. 
On the other hand, the National Heart, Lung, and Blood Institute suggests that people with prehypertension are at a higher risk for developing hypertension, or high blood pressure, compared to people with normal blood pressure.National Heart, Lung and Blood Institute<> A 2014 meta-analysis concluded that prehypertension increases the risk of stroke, and that even low-range prehypertension significantly increases stroke risk and a 2019 meta-analysis found elevated blood pressure increases the risk of heart attack by 86% and stroke by 66%. ==Epidemiology== Data from the 1999 and 2000 National Health and Nutrition Examination Survey (NHANES III) estimated that the prevalence of prehypertension among adults in the United States was approximately 31 percent and decreased to 28 percent in the 2011–2012 National Health and Nutrition Examination Survey. If the usual resting pulse pressure is consistently greater than 100 mmHg, potential factors are stiffness of the major arteries, aortic regurgitation (a leak in the aortic valve), or arteriovenous malformation, among others. Such a proportion is known as geometrical proportion, not to be confused with arithmetical proportion and harmonic proportion. ==Properties of proportions== * Fundamental rule of proportion. The prevalence was higher among men than women. === Risk factors === A primary risk factor for prehypertension is being overweight. This suggests that interventions that lower diastolic pressure without also lowering systolic pressure (and thus lowering pulse pressure) could actually be counterproductive. The most common cause of a low (narrow) pulse pressure is a drop in left ventricular stroke volume. High blood pressure that develops over time without a specific cause is considered benign or essential hypertension.
3.51
72
2.3
1.7
15.1
E
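The pieces combine as follows: $P(N)=1-0.16-0.19=0.65$, $P(H\cap I)=0.35\cdot 0.17$, $P(N\cap I)=0.11\cdot 0.65$, so $P(L\cap I)=0.17-0.0595-0.0715=0.039$ and $P(L\cap R)=0.19-0.039=0.151$, i.e. 15.1%. A short Python sketch of the same bookkeeping, offered only as an illustration:

```python
p_high, p_low = 0.16, 0.19
p_normal = 1 - p_high - p_low          # 0.65
p_irreg = 0.17
p_high_and_irreg = 0.35 * p_irreg      # P(high | irregular) * P(irregular)
p_normal_and_irreg = 0.11 * p_normal   # P(irregular | normal) * P(normal)
p_low_and_irreg = p_irreg - p_high_and_irreg - p_normal_and_irreg
p_low_and_regular = p_low - p_low_and_irreg
print(p_low_and_regular)               # about 0.151 -> 15.1%
```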
Roll a fair six-sided die three times. Let $A_1=$ $\{1$ or 2 on the first roll $\}, A_2=\{3$ or 4 on the second roll $\}$, and $A_3=\{5$ or 6 on the third roll $\}$. It is given that $P\left(A_i\right)=1 / 3, i=1,2,3 ; P\left(A_i \cap A_j\right)=(1 / 3)^2, i \neq j$; and $P\left(A_1 \cap A_2 \cap A_3\right)=(1 / 3)^3$. Use Theorem 1.1-6 to find $P\left(A_1 \cup A_2 \cup A_3\right)$.
Consider a set of three dice, III, IV and V such that * die III has sides 1, 2, 5, 6, 7, 9 * die IV has sides 1, 3, 4, 5, 8, 9 * die V has sides 2, 3, 4, 6, 7, 8 Then: * the probability that III rolls a higher number than IV is * the probability that IV rolls a higher number than V is * the probability that V rolls a higher number than III is === Three-dice set with minimal alterations to standard dice === The following intransitive dice have only a few differences compared to 1 through 6 standard dice: * as with standard dice, the total number of pips is always 21 * as with standard dice, the sides only carry pip numbers between 1 and 6 * faces with the same number of pips occur a maximum of twice per dice * only two sides on each die have numbers different from standard dice: ** A: 1, 1, 3, 5, 5, 6 ** B: 2, 3, 3, 4, 4, 5 ** C: 1, 2, 2, 4, 6, 6 Like Miwin’s set, the probability of A winning versus B (or B vs. C, C vs. P(C>D) can be calculated by summing conditional probabilities for two events: * C rolls 6 (probability ); wins regardless of D (probability 1) * C rolls 2 (probability ); wins only if D rolls 1 (probability ) The total probability of win for C is therefore :\left( {1 \over 3}\times1 \right) + \left( {2 \over 3}\times{1 \over 2} \right) = {2 \over 3} With a similar calculation, the probability of D winning over A is :\left( {1 \over 2}\times1 \right) + \left( {1 \over 2}\times{1 \over 3} \right) = {2 \over 3} ====Best overall die==== The four dice have unequal probabilities of beating a die chosen at random from the remaining three: As proven above, die A beats B two-thirds of the time but beats D only one-third of the time. The following tables show all possible outcomes for all three pairs of dice. With the second set of dice, die C′ will win with the lowest probability () and dice A′ and B′ will each win with a probability of . ==Variations== ===Efron's dice=== Efron's dice are a set of four intransitive dice invented by Bradley Efron. thumb|320px|Representation of Efron's dice The four dice A, B, C, D have the following numbers on their six faces: * A: 4, 4, 4, 4, 0, 0 * B: 3, 3, 3, 3, 3, 3 * C: 6, 6, 2, 2, 2, 2 * D: 5, 5, 5, 1, 1, 1 ====Probabilities==== Each die is beaten by the previous die in the list, with a probability of : :P(A>B) = P(B>C) = P(C>D) = P(D>A) = {2 \over 3} B's value is constant; A beats it on rolls because four of its six faces are higher. Player 1 chooses die A Player 2 chooses die C Player 1 chooses die B Player 2 chooses die A Player 1 chooses die C Player 2 chooses die B 2 4 9 1 6 8 3 5 7 3 C A A 2 A B B 1 C C C 5 C C A 4 A B B 6 B B C 7 C C A 9 A A A 8 B B B == Comment regarding the equivalency of intransitive dice == Though the three intransitive dice A, B, C (first set of dice) * A: 2, 2, 6, 6, 7, 7 * B: 1, 1, 5, 5, 9, 9 * C: 3, 3, 4, 4, 8, 8 P(A > B) = P(B > C) = P(C > A) = and the three intransitive dice A′, B′, C′ (second set of dice) * A′: 2, 2, 4, 4, 9, 9 * B′: 1, 1, 6, 6, 8, 8 * C′: 3, 3, 5, 5, 7, 7 P(A′ > B′) = P(B′ > C′) = P(C′ > A′) = win against each other with equal probability they are not equivalent. Rolling the three dice of a set and always using the highest score for evaluation will show a different winning pattern for the two sets of dice. Consequently, for arbitrarily chosen two dice there is a third one that beats both of them. The 2003 A3 Champions Cup was first edition of A3 Champions Cup. Consider the following set of dice. With adjacent pairs, one die's probability of winning is 2/3. 
So the likelihood of B beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {1 \over 2} \right) = {1 \over 2} Die C beats D two-thirds of the time but beats B only one-third of the time. ;Set 2: * A: 3, 3, 3, 6 * B: 2, 2, 5, 5 * C: 1, 4, 4, 4 P(A > B) = P(B > C) = , P(C > A) = 9/16 == Intransitive 12-sided dice == In analogy to the intransitive six-sided dice, there are also dodecahedra which serve as intransitive twelve-sided dice. The 2005 A3 Champions Cup was third edition of A3 Champions Cup. The 2004 A3 Champions Cup was second edition of A3 Champions Cup. So the likelihood of A beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {4 \over 9} \right) = {13 \over 27} Similarly, die B beats C two-thirds of the time but beats A only one-third of the time. A set of dice is intransitive (or nontransitive) if it contains three dice, A, B, and C, with the property that A rolls higher than B more than half the time, and B rolls higher than C more than half the time, but it is not true that A rolls higher than C more than half the time. So the likelihood of D beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {1 \over 2} \right) = {1 \over 2} Therefore, the best overall die is C with a probability of winning of 0.5185. Consequently, whatever dice the two opponents choose, the third player can always find one of the remaining dice that beats them both (as long as the player is then allowed to choose between the one-die option and the two-die option): : Sets chosen by opponents Winning set of dice Type Number A B E 1 A C E 2 A D C 2 A E D 1 B C A 1 B D A 2 B E D 2 C D B 1 C E B 2 D E C 1 There are two major issues with this set, however. So the likelihood of C beating any other randomly selected die is: :{1 \over 3}\times \left( {2 \over 3} + {1 \over 3} + {5 \over 9} \right) = {14 \over 27} Finally, die D beats A two-thirds of the time but beats C only one-third of the time. With the first set of dice, die B will win with the highest probability () and dice A and C will each win with a probability of . Awarded by Mathematical Association of America * Timothy Gowers' project on intransitive dice * Category:Probability theory paradoxes Category:Dice The probability of die C beating A is .
-0.40864
0.6296296296
'-3.141592'
2.6
3
B
Let $A$ and $B$ be independent events with $P(A)=1/4$ and $P(B)=2/3$. Compute $P(A \cap B)$.
* This results in P(A \mid B) = P(A \cap B)/P(B) whenever P(B) > 0 and 0 otherwise. In this event, the event B can be analyzed by a conditional probability with respect to A. This can also be understood as the fraction of probability B that intersects with A, or the ratio of the probabilities of both events happening to the "given" one happening (how many times A occurs rather than not assuming B has occurred): P(A \mid B) = \frac{P(A \cap B)}{P(B)}. The conditional probability can be found by the quotient of the probability of the joint intersection of events and (P(A \cap B))—the probability at which A and B occur together, although not necessarily occurring at the same time—and the probability of : :P(A \mid B) = \frac{P(A \cap B)}{P(B)}. * Without the knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A) * The probability of A knowing that event B has or will have occurred, will be the probability of A \cap B relative to P(B), the probability that B has occurred. We have P(A\mid B)=\tfrac{P(A \cap B)}{P(B)} = \tfrac{3/36}{10/36}=\tfrac{3}{10}, as seen in the table. == Use in inference == In statistical inference, the conditional probability is an update of the probability of an event based on new information. We denote the quantity \frac{P(A \cap B)}{P(B)} as P(A\mid B) and call it the "conditional probability of given ." For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. It can be shown that :P(A_B)= \frac{P(A \cap B)}{P(B)} which meets the Kolmogorov definition of conditional probability. === Conditioning on an event of probability zero === If P(B)=0 , then according to the definition, P(A \mid B) is undefined. Substituting 1 and 2 into 3 to select α: :\begin{align} 1 &= \sum_{\omega \in \Omega} {P(\omega \mid B)} \\\ &= \sum_{\omega \in B} {P(\omega\mid B)} + \cancelto{0}{\sum_{\omega otin B} P(\omega\mid B)} \\\ &= \alpha \sum_{\omega \in B} {P(\omega)} \\\\[5pt] &= \alpha \cdot P(B) \\\\[5pt] \Rightarrow \alpha &= \frac{1}{P(B)} \end{align} So the new probability distribution is #\omega \in B: P(\omega\mid B) = \frac{P(\omega)}{P(B)} #\omega otin B: P(\omega\mid B) = 0 Now for a general event A, :\begin{align} P(A\mid B) &= \sum_{\omega \in A \cap B} {P(\omega \mid B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega\mid B)} \\\ &= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\\\[5pt] &= \frac{P(A \cap B)}{P(B)} \end{align} == See also == * Bayes' theorem * Bayesian epistemology * Borel–Kolmogorov paradox * Chain rule (probability) * Class membership probabilities * Conditional independence * Conditional probability distribution * Conditioning (probability) * Joint probability distribution * Monty Hall problem * Pairwise independent distribution * Posterior probability * Regular conditional probability == References == ==External links== * *Visual explanation of conditional probability Category:Mathematical fallacies Category:Statistical ratios All events that are not in B will have null probability in the new distribution. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}. These probabilities are linked through the law of total probability: :P(A) = \sum_n P(A \cap B_n) = \sum_n P(A\mid B_n)P(B_n). where the events (B_n) form a countable partition of \Omega. 
This particular method relies on event B occurring with some sort of relationship with another event A. It can be interpreted as "the probability of B occurring multiplied by the probability of A occurring, provided that B has occurred, is equal to the probability of the A and B occurrences together, although not necessarily occurring at the same time". Generally, there is only one event B such that A and B are both mutually exclusive and exhaustive; that event is the complement of A. In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. That is, for an event A, :P(A^c) = 1 - P(A). Moreover, this "multiplication rule" can be practically useful in computing the probability of A \cap B and introduces a symmetry with the summation axiom for Poincaré Formula: :P(A \cup B) = P(A) + P(B) - P(A \cap B) :Thus the equations can be combined to find a new representation of the : : P(A \cap B)= P(A) + P(B) - P(A \cup B) = P(A \mid B)P(B) : P(A \cup B)= {P(A) + P(B) - P(A \mid B){P(B)}} ==== As the probability of a conditional event ==== Conditional probability can be defined as the probability of a conditional event A_B. If statistically independent If mutually exclusive P(A\mid B)= P(A) 0 P(B\mid A)= P(B) 0 P(A \cap B)= P(A) P(B) 0 In fact, mutually exclusive events cannot be statistically independent (unless both of them are impossible), since knowing that one occurs gives information about the other (in particular, that the latter will certainly not occur). == Common fallacies == :These fallacies should not be confused with Robert K. Shope's 1978 "conditional fallacy", which deals with counterfactual examples that beg the question. === Assuming conditional probability is of similar size to its inverse === thumb|450x450px|A geometric visualization of Bayes' theorem. The former is required by the axioms of probability, and the latter stems from the fact that the new probability measure has to be the analog of P in which the probability of B is one - and every event that is not in B, therefore, has a null probability. This shows that P(A|B) P(B) = P(B|A) P(A) i.e. P(A|B) = .
0.396
0.166666666
102.0
7.00
8.3147
B
How many four-letter code words are possible using the letters in IOWA if the letters may not be repeated?
The state of Iowa is covered by five area codes. None of the Iowa codes are expected to need relief in the immediate future. * 319: Cedar Rapids, Waterloo, Iowa City, and Cedar Falls (original area code created in 1947) * 515: Des Moines, Ames, West Des Moines, Urbandale and Fort Dodge (original area code created in 1947) * 563: Davenport, Dubuque, Bettendorf, Clinton, Muscatine (split from 319 in 2001) * 641: Mason City, Marshalltown, Ottumwa, Tama (split from 515 in 2000) * 712: Sioux City, Council Bluffs (original area code created in 1947) ==See also== *State of Iowa Area codes Iowa thumb|right|The area codes of Kentucky: 270 and 364 in light green. The Code of Iowa contains the statutory laws of the U.S. state of Iowa. 734 is an area code in the North American Numbering Plan. "Splitting the area code in two avoids ten digit dialing but requires changing all current area code 270 numbers within the new area code to 364. It is republished in full every odd year, and is supplemented in even years. ==External reference== *Iowa Code online at Iowa General Assembly. Such codes are half the size of two-part codes but are more vulnerable since an attacker who recovers some code word meanings can often infer the meaning of nearby code words. thumb|Page 187 of the State Department 1899 code book, a one part code with a choice of code word or numeric ciphertext. Area code 270 is a telephone area code in the North American Numbering Plan (NANP) for the Commonwealth of Kentucky's western and south central counties. On June 13, 2007, the PSC announced that the new area code will be 364, but also announced that the previously announced implementation would be delayed in favor of number conservation measures including expanded number pooling. Planning for the introduction of a second area code for the region, area code 364, was assigned in 2007. These subsequent delays of the implementation of the 270 / 364 area code split were due to further use of number conservation measures, including mandatory and expanded number pooling, as well as a weakened economy and a reduced usage of telephone numbers dedicated for use by computer and fax modems. Surprisingly, Kentucky's two most urbanized area codes, 502 (Louisville) and 859 (serving Lexington and Northern Kentucky), were not expected to exhaust until 2017 at the earliest, even though they have fewer numbers than 270. Under the NANPA proposal, existing 270 numbers would be retained by customers, but 10-digit dialing for local calls would be required across western Kentucky. Codebook come in two forms, one-part or two-part: * In one part codes, the plain text words and phrases and the corresponding code words are in the same alphabetical order. *Iowa Online Law Reference Category:Iowa statutes Iowa The distribution and physical security of codebooks presents a special difficulty in the use of codes, compared to the secret information used in ciphers, the key, which is typically much shorter. The JN-25 code used in World War II used a code book of 30,000 code groups superencrypted with 30,000 random additives. ABC Codes are five-digit alpha codes (e.g., AAAAA) used by licensed and non- licensed healthcare practitioners to supplement medical codes (e.g. CPT and HCPCS II) on standard electronic (e.g. American National Standards Institute, Accredited Standards Committee X12 N 837P healthcare claims and on standard paper claims (e.g., CMS 1500 Form) to describe services, remedies and/or supply items provided and/or used during patient visits. 
Numbers of the new area code were made available for assignment on March 3, 2014. This area had been historically served by area code 313, which today only applies to Detroit and its closest suburbs. ==See also== * List of Michigan area codes ==External links== * Map of Michigan area codes at North American Numbering Plan Administration's website * List of exchanges from AreaCodeDownload.com, 734 Area Code Category:Telecommunications-related introductions in 1997 734 734
773
0.5
24.0
0.333333333333333
269
C
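The marked answer (option C, 24) is $4! = 24$, since a four-letter code word with no repeated letters is a permutation of the four distinct letters of IOWA. A minimal Python check (the brute-force enumeration is only an illustration, not part of the original exercise):

```python
from itertools import permutations

letters = "IOWA"
# Four-letter code words with no repeated letters are exactly the
# permutations of the four distinct letters.
codes = list(permutations(letters, 4))
print(len(codes))  # 24 = 4!
```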
A boy found a bicycle lock for which the combination was unknown. The correct combination is a four-digit number, $d_1 d_2 d_3 d_4$, where $d_i$, $i=1,2,3,4$, is selected from $1, 2, 3, 4, 5, 6, 7$, and $8$. How many different lock combinations are possible with such a lock?
A combination lock is a type of locking device in which a sequence of symbols, usually numbers, is used to open the lock. The number of wheels in the mechanism determines the number of specific dial positions that must be entered to open the lock, so a three- sequence combination is required for a three-wheel lock. In 1978 a combination lock which could be set by the user to a sequence of his own choosing was invented by Andrew Elliot Rae. If the arrangement of numbers is fixed, it is easy to determine the lock sequence by viewing several successful accesses. Many combination locks have three wheels, but the lock may be equipped with additional wheels, each with a drive pin and fly, in a similar manner. The first commercially viable single-dial combination lock was patented on 1 February 1910 by John Junkunc, owner of American Lock Company. == Types == ===Multiple-dial locks=== One of the simplest types of combination lock, often seen in low-security bicycle locks and in briefcases, uses several rotating discs with notches cut into them. The other side of the lock, or the other end of the cable, has a pin with several protruding teeth. frame|When the toothed pin is inserted and the discs are rotated to an incorrect combination, the inner faces of the discs block the pin from being extracted. thumb|right|250px|A simple combination lock. ==History== The earliest known combination lock was excavated in a Roman period tomb on the Kerameikos, Athens. In this case, the combination is 9-2-4. frame|The discs are mounted on one side of the lock, which may in turn be attached to the end of a chain or cable. Unlike ordinary padlocks, combination locks do not use keys. frame|Exploded view of the rotating discs. This leads to some limitations on what combinations are possible. Types range from inexpensive three-digit luggage locks to high-security safes. Nearly all safes made after World War II have relock triggers in their combination locks. ==Manufacturers== *ABUS *Master Lock *Sargent and Greenleaf *Wordlock *Dudley *Conair *Kaba Mas *CJSJ ==See also== * Electronic lock * Password * Immobiliser * Keycard ==References== ==External links== * How Combination Locks Work HowStuffWorks.com Category:Locks (security device) Category:Locksmithing de:Schloss (Technik)#Zahlenschloss Wheels may be made of radiotransparent materials such as Nylon, Lexan, or Delrin to prevent the use of X-ray imaging to determine the wheel position and required combination. == See also == * Combination lock * Safes ==References== ==External links== * How rotary combination locks work, HowStuffWorks * * Locraker - Automatic combination lock cracker, Neil Fraser, 13 March 2002 - rotary combination lock cracking machine * - contains a detailed description, with photographs, of rotary combination locks and their security concerns Category:Locks (security device) The original Three Prisoners problem can be seen in this light: The warden in that problem still has these six cases, each with a probability of occurring. This type of locking mechanism consists of a single dial which must be rotated left and right in a certain combination in order to open the lock. ==Design and operation== thumb|right|upright=1.5|Internal mechanism of a rotary combination lock with a retractable bolt. There is a variation of the traditional dial based combination lock wherein the "secret" is encoded in an electronic microcontroller. A rotary combination lock is a lock commonly used to secure safes and as an unkeyed padlock mechanism. 
When the notches in the discs align with the teeth on the pin, the lock can be opened. thumb|right|The component parts of a Stoplock combination padlock. ===Single-dial locks=== The rotary combination locks found on padlocks, lockers, or safes may use a single dial which interacts with several parallel discs or cams. The remaining numbers can be arranged in (100-l)! ways. US Patents regarding combination padlocks by J.B. Gray in 1841Permutation padlock. In the case of eight prisoners, this cycle- following strategy is successful if and only if the length of the longest cycle of the permutation is at most 4. This in turn allows the owner to set a custom combination. ===Additional security=== Some rotary combination locks include internal relockers or relocking devices that separately lock the shackle or bolt when an attack is detected, including mechanical levers that respond to attempts to dislodge the locking mechanism ("punching"), thermal (fusible) links that melt in response to a cutting attempt, or tempered glass that breaks in response to a drilling attempt.
+93.4
1.44
4096.0
+116.0
−2
C
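The marked answer (option C, 4096) follows from the multiplication principle: each of the four digits independently takes one of 8 values, giving $8^4 = 4096$. A minimal Python check (brute-force enumeration, added only as an illustration):

```python
from itertools import product

digits = range(1, 9)  # each d_i is chosen from 1..8, repetition allowed
combinations = list(product(digits, repeat=4))
print(len(combinations))  # 4096 = 8**4
```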
An urn contains eight red and seven blue balls. A second urn contains an unknown number of red balls and nine blue balls. A ball is drawn from each urn at random, and the probability of getting two balls of the same color is $151 / 300$. How many red balls are in the second urn?
Assume that an urn contains m_1 red balls and m_2 white balls, totalling N = m_1 + m_2 balls. n balls are drawn at random from the urn one by one without replacement. The probability that the red ball is not taken in the second draw, under the condition that it was not taken in the first draw, is 999/1999 ≈ . (A variation both on the first and the second question) ==Examples of urn problems== * beta- binomial distribution: as above, except that every time a ball is observed, an additional ball of the same color is added to the urn. The probability that the second ball picked is red depends on whether the first ball was red or white. In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. The probability that a particular ball is taken in a particular draw depends not only on its own weight, but also on the total weight of the competing balls that remain in the urn at that moment. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. The probability that the red ball is not taken in the third draw, under the condition that it was not taken in the first two draws, is 998/1998 ≈ . One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.Urn Model: Simple Definition, Examples and Applications — The basic urn model Possible questions that can be answered in this model are: * Can I infer the proportion of white and black balls from n observations? The probability that the red ball is not taken in the first draw is 1000/2000 = . When the mutator is drawn it is replaced along with an additional ball of an entirely new colour. * hypergeometric distribution: the balls are not returned to the urn once extracted. This is referred to as "drawing without replacement", by opposition to "drawing with replacement". * multivariate hypergeometric distribution: the balls are not returned to the urn once extracted, but with balls of more than two colors. * geometric distribution: number of draws before the first successful (correctly colored) draw. thumb|Two urns containing white and red balls. * Pólya urn: each time a ball of a particular colour is drawn, it is replaced along with an additional ball of the same colour. We want to calculate the probability that the red ball is not taken. The probability that the first ball picked is red is equal to the weight fraction of red balls: : p_1 = \frac{m_1 \omega_1}{m_1 \omega_1 + m_2 \omega_2}. What is the distribution of the number of black balls drawn after m draws? * multinomial distribution: there are balls of more than two colors. While black balls are set aside after a draw (non-replacement), white balls are returned to the urn after a draw (replacement). Continuing in this way, we can calculate that the probability of not taking the red ball in n draws is approximately 2−n as long as n is small compared to N. Here the draws are independent and the probabilities are therefore not multiplied together. See Pólya urn model. * binomial distribution: the distribution of the number of successful draws (trials), i.e. extraction of white balls, given n draws with replacement in an urn with black and white balls. * Mixed replacement/non-replacement: the urn contains black and white balls.
11
258.14
48.6
1.91
0.14
A
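With $r$ red balls in the second urn, $P(\text{same color}) = \frac{8}{15}\cdot\frac{r}{r+9} + \frac{7}{15}\cdot\frac{9}{r+9} = \frac{151}{300}$, which gives $r = 11$ (option A). A minimal Python sketch that finds this by an exact-fraction search rather than by the algebraic solution (the search bound of 100 is an arbitrary assumption):

```python
from fractions import Fraction

target = Fraction(151, 300)
for r in range(1, 100):  # candidate number of red balls in the second urn
    p_same = Fraction(8, 15) * Fraction(r, r + 9) + Fraction(7, 15) * Fraction(9, r + 9)
    if p_same == target:
        print(r)  # 11
```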
A typical roulette wheel used in a casino has 38 slots that are numbered $1,2,3, \ldots, 36,0,00$, respectively. The 0 and 00 slots are colored green. Half of the remaining slots are red and half are black. Also, half of the integers between 1 and 36 inclusive are odd, half are even, and 0 and 00 are defined to be neither odd nor even. A ball is rolled around the wheel and ends up in one of the slots; we assume that each slot has equal probability of $1 / 38$, and we are interested in the number of the slot into which the ball falls. Let $A=\{0,00\}$. Give the value of $P(A)$.
The payout given by the casino for a win is based on the roulette wheel having 36 outcomes, and the payout for a bet is given by \frac{36}{p}. It is worth noting that the odds for the player in American roulette are even worse, as the bet profitability is at worst -\frac{3}{38}r \approx -0.0789r, and never better than -\frac{r}{19} \approx -0.0526r. ===Simplified mathematical model=== For a roulette wheel with n green numbers and 36 other unique numbers, the chance of the ball landing on a given number is \frac{1}{(36+n)}. By law, the game must use cards and not slots on the roulette wheel to pick the winning number. ==Roulette wheel number sequence== The pockets of the roulette wheel are numbered from 0 to 36. In the United Kingdom, the farthest outside bets (low/high, red/black, even/odd) result in the player losing only half of their bet if a zero comes up. ==Bet odds table== The expected value of a $1 bet (except for the special case of Top line bets), for American and European roulette, can be calculated as :\mathrm{expected value} = \frac{1}{n} (36 - n)= \frac{36}{n} - 1, where n is the number of pockets in the wheel. Since this roulette has 37 cells with equal odds of hitting, this is a final model of field probability (\Omega, 2^\Omega, \mathbb{P}), where \Omega = \\{0, \ldots, 36\\}, \mathbb{P}(A) = \frac{|A|}{37} for all A \in 2^\Omega. Therefore, SD for Roulette even-money bet is equal to 2b\sqrt{npq}, where b is the flat bet per round, n is the number of rounds, p=18/38, and q=20/38. The roulette wheel. The ball eventually loses momentum, passes through an area of deflectors, and falls onto the wheel and into one of thirty-seven (single-zero, French or European style roulette) or thirty-eight (double-zero, American style roulette) or thirty-nine (triple- zero, "Sands Roulette") colored and numbered pockets on the wheel. The expected value is: :−1 × + 35 × = −0.0526 (5.26% house edge) For European roulette, a single number wins and loses : :−1 × + 35 × = −0.0270 (2.70% house edge) For triple-zero wheels, a single number wins and loses : :−1 × + 35 × = −0.0769 (7.69% house edge) ==Mathematical model== As an example, the European roulette model, that is, roulette with only one zero, can be examined. All roulette tables operated by a casino have the same basic mechanics: * There is a balanced mechanical wheel with colored pockets separated by identical vanes and the wheel which spins freely on a supporting post. Pocket number order on the roulette wheel adheres to the following clockwise sequence in most casinos: ;Single- zero wheel : 0-32-15-19-4-21-2-25-17-34-6-27-13-36-11-30-8-23-10-5-24-16-33-1-20-14-31-9-22-18-29-7-28-12-35-3-26 ;Double-zero wheel : 0-28-9-26-30-11-7-20-32-17-5-22-34-15-3-24-36-13-1-00-27-10-25-29-12-8-19-31-18-6-21-33-16-4-23-35-14-2 ;Triple-zero wheel : 0-000-00-32-15-19-4-21-2-25-17-34-6-27-13-36-11-30-8-23-10-5-24-16-33-1-20-14-31-9-22-18-29-7-28-12-35-3-26 ==Roulette table layout== thumbnail|upright|French style layout, French single zero wheel The cloth-covered betting area on a roulette table is known as the layout. The ball landed on "Red 7" and Revell walked away with $270,600. 
==See also== *Bauernroulette *Boule *Eudaemons *Monte Carlo Paradox *Russian roulette *Straperlo *The Gambler, a novel written by Fyodor Dostoevsky inspired by his addiction to roulette *Le multicolore; a game similar to roulette ==Notes== ==External links== Category:Gambling games Category:Roulette and wheel games Category:French inventions The sum of all the numbers on the roulette wheel (from 0 to 36) is 666, which is the "Number of the Beast".The last term in a sequence of partial sums composed of either sequence is 666, the "beast number". ==Rules of play against a casino== thumb|left|220px|Roulette with red 12 as the winner Roulette players have a variety of betting options. In some casinos, a player may bet full complete for less than the table straight-up maximum, for example, "number 17 full complete by $25" would cost $1000, that is 40 chips each at $25 value. ==Betting strategies and tactics== Over the years, many people have tried to beat the casino, and turn roulette—a game designed to turn a profit for the house—into one on which the player expects to win. The standard deviation for the even-money Roulette bet is one of the lowest out of all casinos games. Take the coin toss for example, the chances of heads and tails are equal, 50% each, if a player bets $10 on the coin landing heads up and they win, the casino pays them $10. Players can continue to place bets as the ball spins around the wheel until the dealer announces "no more bets" or "rien ne va plus". thumb|220px|Croupier's rake pushing chips across a roulette layout When a winning number and color is determined by the roulette wheel, the dealer will place a marker, also known as a dolly, on that winning number on the roulette table layout. In the early 1990s, Gonzalo Garcia-Pelayo believed that casino roulette wheels were not perfectly random, and that by recording the results and analysing them with a computer, he could gain an edge on the house by predicting that certain numbers were more likely to occur next than the 1-in-36 odds offered by the house suggested. According to Hoyle "the single 0, the double 0, and eagle are never bars; but when the ball falls into either of them, the banker sweeps every thing upon the table, except what may happen to be bet on either one of them, when he pays twenty-seven for one, which is the amount paid for all sums bet upon any single figure". thumb|left|250px|1800s engraving of the French roulette In the 19th century, roulette spread all over Europe and the US, becoming one of the most famous and most popular casino games. The initial bet is returned in addition to the mentioned payout: it can be easily demonstrated that this payout formula would lead to a zero expected value of profit if there were only 36 numbers (that is, the casino would break even). Therefore, the VI for the even-money American Roulette bet is \sqrt{18/38\cdot20/38}\approx0.499. Here, the profit margin for the roulette owner is equal to approximately 2.7%.
5.4
1.3
5.5
0.0526315789
2.3613
D
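With 38 equally likely slots and $A=\{0, 00\}$, $P(A) = 2/38 = 1/19 \approx 0.0526$ (option D). A minimal Python check that builds the sample space explicitly (only an illustration of the equally-likely model stated in the problem):

```python
from fractions import Fraction

slots = [str(n) for n in range(1, 37)] + ["0", "00"]  # 38 equally likely slots
A = {"0", "00"}
p_A = Fraction(sum(1 for s in slots if s in A), len(slots))
print(p_A, float(p_A))  # 1/19 ≈ 0.0526315789
```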
In the gambling game "craps," a pair of dice is rolled and the outcome of the experiment is the sum of the points on the up sides of the six-sided dice. The bettor wins on the first roll if the sum is 7 or 11. The bettor loses on the first roll if the sum is 2, 3, or 12. If the sum is 4, 5, 6, 8, 9, or 10, that number is called the bettor's "point." Once the point is established, the rule is as follows: If the bettor rolls a 7 before the point, the bettor loses; but if the point is rolled before a 7, the bettor wins. Find the probability that the bettor wins on the first roll. That is, find the probability of rolling a 7 or 11, $P(7$ or $11)$.
Take the coin toss for example, the chances of heads and tails are equal, 50% each, if a player bets $10 on the coin landing heads up and they win, the casino pays them $10. Note: Expert bet settlers before the introduction of bet-settling software would have invariably used an algebraic- type method together with a simple calculator to determine the return on a bet (see below). ==Algebraic interpretation== If a, b, c, d... represent the decimal odds, i.e. (fractional odds + 1), then an 'odds multiplier' OM can be calculated algebraically by multiplying the expressions (a + 1), (b + 1), (c + 1), ... etc. together in the required manner and adding or subtracting additional components. If they lose, all the $10 is lost to the casino, in this case, the casino advantage is zero (the casino is certainly not stupid enough to open this game); but if they win, the casino only pays them $9, if they lose, all the $10 is lost to the casino. Of course, the casino can't win exactly 53 cents; this figure is the average casino profit from each player if it had millions of players each betting 10 rounds at $1 per round. The place part of each-way bets is calculated separately from the win part; the method is identical but the odds are reduced by whatever the place factor is for the particular event (see Accumulator below for detailed example). In games such as Blackjack or Spanish 21, the final bet may be several times the original bet, if the player doubles or splits. When gambles are selected through a choice process – when people indicate which gamble they prefer from a set of gambles (e.g., win/lose, over/under) – people tend to prefer to bet on the outcome that is more likely to occur. In algebraic terms the OM for the Yankee bet is given by: :OM = (a + 1)(b + 1)(c + 1)(d + 1) − 1 − (a + b + c + d) In the days before software became available for use by bookmakers and those settling bets in Licensed Betting Offices (LBOs) this method was virtually de rigueur for saving time and avoiding the multiple repetitious calculations necessary in settling bets of the full cover type. ==Settling other types of winning bets== Up and down :Returns (£20 single at 7-2 ATC £20 single at 15-8) = £20 × 7/2 + £20 × (15/8 + 1) = £127.50 :Returns (£20 single at 15-8 ATC £20 single at 7-2) = £20 × 15/8 + £20 × (7/2 + 1) = £127.50 :Total returns = £255.00 :Note: This is the same as two £20 single bets at twice the odds; i.e. £20 singles at 7-1 and 15-4 and is the preferred manual way of calculating the bet. Spread betting allows gamblers to wagering on the outcome of an event where the pay-off is based on the accuracy of the wager, rather than a simple "win or lose" outcome. Casinos do not have in-house expertise in this field, so they outsource their requirements to experts in the gaming analysis field. === Bingo probability === The probability of winning a game of Bingo (ignoring simultaneous winners, making wins mutually exclusive) may be calculated as: : P(Win)=1-P(Loss) since winning and losing are mutually exclusive. Gambling (also known as betting or gaming) is the wagering of something of value ("the stakes") on a random event with the intent of winning something else of value, where instances of strategy are discounted. A Round Robin with 1 winner is calculated as two Up and Down bets with one winner in each. 
For example, if a game is played by wagering on the number that would result from the roll of one die, true odds would be 5 times the amount wagered since there is a 1/6 probability of any single number appearing. The mathematics of gambling is a collection of probability applications encountered in games of chance and can get included in game theory. Furthermore, if we flat bet at 10 units per round instead of 1 unit, the range of possible outcomes increases 10 fold. If a player bets $1 on red, his chance of winning $1 is therefore 18/38 and his chance of losing $1 (or winning -$1) is 20/38. Thus, an each-way Lucky 63 on six horses with three winners and a further two placed horses is settled as a win Patent and a place Lucky 31. ==Algebraic interpretation== Returns on any bet may be considered to be calculated as 'stake unit' × 'odds multiplier'. Gambling is not luck, but a contest of intellect, strategy, and yield. All bets are taken as 'win' bets unless 'each-way' is specifically stated. A Round Robin with 2 winners is calculated as a double plus one Up and Down bet with 2 winners plus two Up and Down bets with 1 winner in each. * Carnival Games such as The Razzle or Hanky Pank * Coin-tossing games such as Head and Tail, Two-up* * Confidence tricks such as Three-card Monte or the Shell game * Dice-based games, such as Backgammon, Liar's dice, Passe-dix, Hazard, Threes, Pig, or Mexico (or Perudo); *Although coin tossing is not usually played in a casino, it has been known to be an official gambling game in some Australian casinos ===Fixed-odds betting=== Fixed-odds betting and Parimutuel betting frequently occur at many types of sporting events, and political elections. The end-of-the-day betting effect is a cognitive bias reflected in the tendency for bettors to take gambles with higher risk and higher reward at the end of their betting session to try to make up for losses.
0.5117
0
0.6247
11
0.22222222
E
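Of the 36 equally likely ordered outcomes of two fair dice, six sum to 7 and two sum to 11, so $P(7 \text{ or } 11) = 8/36 = 2/9 \approx 0.2222$ (option E). A minimal Python enumeration (illustrative only):

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # 36 ordered rolls of two dice
wins = [o for o in outcomes if sum(o) in (7, 11)]
print(Fraction(len(wins), len(outcomes)))  # 2/9 ≈ 0.2222
```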
Given that $P(A \cup B)=0.76$ and $P\left(A \cup B^{\prime}\right)=0.87$, find $P(A)$.
That is, for an event A, :P(A^c) = 1 - P(A). 1983 Emperor's Cup Final was the 63rd final of the Emperor's Cup competition. It may be tempting to say that : Pr(["1" on 1st trial] or ["1" on second trial] or ... or ["1" on 8th trial]) := Pr("1" on 1st trial) + Pr("1" on second trial) + ... + P("1" on 8th trial) := 1/6 + 1/6 + ... + 1/6 := 8/6 := 1.3333... Generally, there is only one event B such that A and B are both mutually exclusive and exhaustive; that event is the complement of A. The technique is wrong because the eight events whose probabilities got added are not mutually exclusive. The 1983–84 Gold Cup was the 65th edition of the Gold Cup, a cup competition in Northern Irish football. Therefore, the probability of an event's complement must be unity minus the probability of the event. The 1983–84 Ulster Cup was the 36th edition of the Ulster Cup, a cup competition in Northern Irish football. In probability theory, the complement of any event A is the event [not A], i.e. the event that A does not occur.Robert R. Johnson, Patricia J. Kuby: Elementary Statistics. The 1983–84 Irish Cup was the 104th edition of the Irish Cup, Northern Ireland's premier football knock-out cup competition. The 1884–85 Belfast Charity Cup was the 2nd edition of the Belfast Charity Cup, a cup competition in Irish football. Equivalently, the probabilities of an event and its complement must always total to 1. Cengage Learning 2007, , p. 229 () The event A and its complement [not A] are mutually exclusive and exhaustive. This result cannot be right because a probability cannot be more than 1. One may resolve this overlap by the principle of inclusion- exclusion, or, in this case, by simply finding the probability of the complementary event and subtracting it from 1, thus: : Pr(at least one "1") = 1 − Pr(no "1"s) := 1 − Pr([no "1" on 1st trial] and [no "1" on 2nd trial] and ... and [no "1" on 8th trial]) := 1 − Pr(no "1" on 1st trial) × Pr(no "1" on 2nd trial) × ... × Pr(no "1" on 8th trial) := 1 −(5/6) × (5/6) × ... × (5/6) := 1 − (5/6)8 := 0.7674... ==See also== *Logical complement *Exclusive disjunction *Binomial probability ==References== ==External links== *Complementary events - (free) page from probability book of McGraw-Hill Category:Experiment (probability theory) The complement of an event A is usually denoted as A′, Ac, egA or . For two events to be complements, they must be collectively exhaustive, together filling the entire sample space. Oldpark won the tournament for the 1st time, defeating Cliftonville 1–0 in the final. ==Results== ===Semi- finals=== |} ====Replay==== |} ===Final=== ==References== ==External links== * Northern Ireland - List of Belfast Charity Cup Winners Category:1884–85 in Irish association football What is the probability that one sees a "1" at least once? Given an event, the event and its complementary event define a Bernoulli trial: did the event occur or not? This does not, however, mean that any two events whose probabilities total to 1 are each other's complements; complementary events must also fulfill the condition of mutual exclusivity. ===Example of the utility of this concept=== Suppose one throws an ordinary six-sided die eight times. Ballymena United won their fourth Irish Cup, defeating Carrick Rangers 4–1 in the final. 
==Results== ===First round=== |} ====Replays==== |} ===Second round=== |} ====Replays==== |} ====Second replay==== |} ===Quarter-finals=== |} ===Semi-finals=== |} ===Final=== ==References== 1983–84 Category:1983–84 domestic association football cups Category:1983–84 in Northern Ireland association football Category:1983 in Northern Ireland sport Category:1984 in Northern Ireland sport
-87.8
0.63
205.0
2.84367
0.72
B
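The marked answer (option B, 0.63) follows from a short identity, added here as a worked step: since $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ and $P(A \cup B^{\prime}) = 1 - P(B) + P(A \cap B)$, adding the two gives $P(A \cup B) + P(A \cup B^{\prime}) = P(A) + 1$, so $P(A) = 0.76 + 0.87 - 1 = 0.63$.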
Three students $(S)$ and six faculty members $(F)$ are on a panel discussing a new college policy. In how many different ways can the nine participants be lined up at a table in the front of the auditorium?
Table of Six () is a political conference established by the Republican People's Party, Good Party, Felicity Party, Democrat Party, Democracy and Progress Party and Future Party, with the first meeting held on 12 February 2022. In enumerative geometry, Steiner's conic problem is the problem of finding the number of smooth conics tangent to five given conics in the plane in general position. Table topics are topics on various subjects that are discussed by a group of people around a table. There will be a table topic master for each meeting, who will prepare questions beforehand and ask the participants questions one by one for which they are called upon to answer. So the conics tangent to 5 given conics correspond to the intersection points of 5 degree 6 hypersurfaces, and by Bézout's theorem the number of intersection points of 5 generic degree 6 hypersurfaces is 65 = 7776, which was Steiner's incorrect solution. If the five conics have the properties that *there is no line such that every one of the 5 conics is either tangent to it or passes through one of two fixed points on it (otherwise there is a "double line with 2 marked points" tangent to all 5 conics) *no three of the conics pass through any point (otherwise there is a "double line with 2 marked points" tangent to all 5 conics passing through this triple intersection point) *no two of the conics are tangent *no three of the five conics are tangent to a line *a pair of lines each tangent to two of the conics do not intersect on the fifth conic (otherwise this pair is a degenerate conic tangent to all 5 conics) then the total number of conics C tangent to all 5 (counted with multiplicities) is 3264. The problem is named after Jakob Steiner who first posed it and who gave an incorrect solution in 1848. ==History== claimed that the number of conics tangent to 5 given conics in general position is 7776 = 65, but later realized this was wrong. Many personality or public speaking clubs like the 'Toastmasters' have a separate session in their meetings known as a table topic session. Student Number 1 was panned by critics and flopped at the box office. ==Cast== ==Production== Shooting was commenced at Chennai, for a fifteen-days schedule, after which the unit moved to Russia to shoot two songs. The Table of Six was originally an independent entity from the Nation Alliance. In particular if C intersects each of the five conics in exactly 3 points (one double point of tangency and two others) then the multiplicity is 1, and if this condition always holds then there are exactly 3264 conics tangent to the 5 given conics. Some chapters of Toastmasters also host Table Topics contests. ==See also== * TableTopics ==References== Category:Public speaking However the number of conics is not (6H)5 but (6H−2E)5 because the strict transform of the hypersurface of conics tangent to a given conic is 6H−2E. On 21 January 2023, Table of Six defined itself as the "Nation Alliance" for the first time after its 11th meeting. As practiced by Toastmasters International, the topics to be discussed are written on pieces of paper which are placed in a box in the middle of a table. Five Guys Walk into a Bar... has received a largely positive response from critics since its release. The participants pick up one paper each and start talking about the topic written on the paper. Graphs and Combinatorics (ISSN 0911-0119, abbreviated Graphs Combin.) is a peer-reviewed academic journal in graph theory, combinatorics, and discrete geometry published by Springer Japan. 
In the text of the memorandum, lowering the electoral threshold to 3%, treasury aid to the parties that received at least 1% of the votes, ending the omnibus law practice, removing the veto power of the president and extending his term of office to 7 years, recognising the authority to issue a no-confidence question on the government, human rights and human rights in the education curriculum. Steiner observed that the conics tangent to a given conic form a degree 6 hypersurface in CP5. Five Guys Walk into a Bar... is a comprehensive four-disc retrospective of the British rock group Faces released in 2004, collecting sixty-seven tracks from among the group's four studio albums, assorted rare single A and B-sides, BBC sessions, rehearsal tapes and one track from a promotional flexi-disc, "Dishevelment Blues" - a deliberately-sloppy studio romp, captured during the sessions for their Ooh La La album, which was never actually intended for official release. On 3 March 2023, Good Party leader Meral Akşener announced that she took the decision to withdraw from the Table of Six and said her party would not support main opposition Republican People's Party leader Kemal Kılıçdaroğlu as the joint candidate in the 2023 Turkish presidential election.
-0.55
0.00539
1.4
+116.0
-0.10
B
How many different license plates are possible if a state uses two letters followed by a four-digit integer (leading zeros are permissible and the letters and digits can be repeated)?
In New Zealand, vehicle registration plates (usually called number plates) contain up to six alphanumeric characters, depending on the type of vehicle and the date of registration. The vehicle registration plates of Cyprus are composed of three letters and three digits (e.g. ABC 123). Greek vehicle registration plates are composed of three letters and four digits per plate (e.g. `ΑΑΑ–1000`) printed in black on a white background. From 1964 until March 2001 these number plates had two letters followed by one to four numbers (format LLnnnn), the sequence having started with AA1 and continuing through to ZZ9989 chronologically (for example, XE3782 would have been issued in 1998). Each of the 49 states of the United States of America plus several of its territories and the District of Columbia issued individual passenger license plates for the year 1959. ==Passenger baseplates== Image Region Design Slogan Serial format Serials issued Notes 150px Alabama Embossed blue lettering, heart logo and rims on white base. The letters represent the district (prefecture) that issues the plates while the numbers range from 1000 to 9999. None 123-456 1 to approximately 999-999 150px Washington Embossed white numbers on green plate; "54 WASHINGTON" embossed in white block letters at bottom. Number plates had to be changed before the end of January 2019. 1973 - 2018 150px 150px 2018 - today 150px ===Special plates=== Taxis Taxi plates show the prefix T, followed by two letters and three digits, formerly one letter only. Since 2013, some numbers have been reissued on white plates. 2013 - today Composed of five numerals and the prefix P, on white plates. The final one or two letters in the sequence changes in Greek alphabetical order after 8,999 issued plates. Similar plates but of square size with numbers ranging from 1 to 999 are issued for motorcycles which exceed 50 cc in engine size. If the visibility of a regular number plate is obstructed, for example by a bike rack mounted to a car's trailer hitch, a supplementary plate with the same registration number must be obtained and affixed to the obstruction (or the vehicle) such that it will be visible from the same direction as the regular number plate would have been. == Standard numbering sequences == thumb|right|A vehicle registration plate of New Zealand in the optional 'Europlate' style === Cars and heavy vehicles === * 1964–1987: AAnnnn * 1987–2001: AAnnnn * 2001–present: AAAnnn Private cars, taxis, and heavier road vehicles in New Zealand have number plates with up to six characters. Hence these so-called "six-figure plates" can still occasionally (as of 2018) be spotted on a few old vehicles. ===1973–1985=== In 1972, they became lettered and the system was `LL–NNNN` while trucks used `L–NNNN`. In some later instances issuers coded plates to the area of registration, such as in 1966 with the allocation of plates beginning with "CE" to the Manawatu-Wanganui region, in 1974–1976 with the allocation of plates beginning with "HB" to the Hawke's Bay region, in May 1989 with the allocation of plates beginning with "OG" to Wellington region, and in July 2000 with the allocation of plates beginning with "ZI" to Auckland region. === Motorcycles and tractors === These vehicles use one of several five-character systems. Some early numbers were printed on remaining yellow plates. === Commercial trucks === 1973 - 1990 1990 - 2003 2003 - 2013 2013 - today Composed of three letters and three numerals, on yellow plates. 
None 1-12345 10-1234 County-coded ==Non-passenger plates== Image (standard) Region Type Design and slogan Serial format Serials issued Notes 150px ==See also== *Antique vehicle registration *Electronic license plate *Motor vehicle registration *Vehicle license ==References== ==External links== Category:1959 in the United States 1959 Although plate character/number combinations can contain "spaces", they do not form part of the unique identification and are typically not stored (for example, in Police computer-systems). Cypriot National Guard plates are composed of an old version of the Greek flag followed by the letters ΕΦ and up to five digits. ==Northern Cyprus== thumb|upright|Northern Cyprus plate thumb|upright|Northern Cyprus rear plate thumb|Taxi number plate (T) thumb|upright=0.5|Trailer number plate (R) thumb|Rental car number plate thumb|Police number plate ===Style and numbering=== Northern Cyprus civilian number plates still use the old format (1973–1990) of Cyprus number plates (AB 123). White front plates were omitted after 2013. thumb|Temporary number plates === Temporary / Visitors === 1973 - 2003 Up to four numerals followed by the letter V, followed by two numerals indicating the year of registration. None 1-1234 1-A123 12-1234 12-A123 Coded by county (1 or 10 prefix). 1957 base plates revalidated for 1959 with red tabs. 150px Tennessee Embossed yellow numbers on black state- shaped plate with border line; "TENN. 54" embossed in yellow block letters centered at bottom. None AA1 to WW999 Letters I, Q and U not used, and X, Y and Z used only on replacement plates. 150px South Carolina Embossed white numbers on blue plate; "SOUTH CAROLINA 59" embossed in white block letters at top. Export plates, from 1973 until 1990, showed the letter E followed by four numerals.
0.333333333333333
0.03
6760000.0
0.16
1.27
C
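By the multiplication principle the count is $26^2 \times 10^4 = 6{,}760{,}000$ (option C). A one-line Python check (plain arithmetic; enumerating the full product space is unnecessary):

```python
print(26**2 * 10**4)  # 6760000: two letters, then any four-digit string 0000-9999
```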
Let $A$ and $B$ be independent events with $P(A)=0.7$ and $P(B)=0.2$. Compute $P(A \cap B)$.
In this event, the event B can be analyzed by a conditional probability with respect to A. * This results in P(A \mid B) = P(A \cap B)/P(B) whenever P(B) > 0 and 0 otherwise. * Without the knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A) * The probability of A knowing that event B has or will have occurred, will be the probability of A \cap B relative to P(B), the probability that B has occurred. This can also be understood as the fraction of probability B that intersects with A, or the ratio of the probabilities of both events happening to the "given" one happening (how many times A occurs rather than not assuming B has occurred): P(A \mid B) = \frac{P(A \cap B)}{P(B)}. The conditional probability can be found by the quotient of the probability of the joint intersection of events and (P(A \cap B))—the probability at which A and B occur together, although not necessarily occurring at the same time—and the probability of : :P(A \mid B) = \frac{P(A \cap B)}{P(B)}. We have P(A\mid B)=\tfrac{P(A \cap B)}{P(B)} = \tfrac{3/36}{10/36}=\tfrac{3}{10}, as seen in the table. == Use in inference == In statistical inference, the conditional probability is an update of the probability of an event based on new information. For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. We denote the quantity \frac{P(A \cap B)}{P(B)} as P(A\mid B) and call it the "conditional probability of given ." All events that are not in B will have null probability in the new distribution. These probabilities are linked through the law of total probability: :P(A) = \sum_n P(A \cap B_n) = \sum_n P(A\mid B_n)P(B_n). where the events (B_n) form a countable partition of \Omega. This particular method relies on event B occurring with some sort of relationship with another event A. It can be shown that :P(A_B)= \frac{P(A \cap B)}{P(B)} which meets the Kolmogorov definition of conditional probability. === Conditioning on an event of probability zero === If P(B)=0 , then according to the definition, P(A \mid B) is undefined. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}. It can be interpreted as "the probability of B occurring multiplied by the probability of A occurring, provided that B has occurred, is equal to the probability of the A and B occurrences together, although not necessarily occurring at the same time". P(\text{dot received}) = P(\text{dot received } \cap \text{ dot sent}) + P(\text{dot received } \cap \text{ dash sent}) P(\text{dot received}) = P(\text{dot received } \mid \text{ dot sent})P(\text{dot sent}) + P(\text{dot received } \mid \text{ dash sent})P(\text{dash sent}) P(\text{dot received}) = \frac{9}{10}\times\frac{3}{7} + \frac{1}{10}\times\frac{4}{7} = \frac{31}{70} Now, P(\text{dot sent } \mid \text{ dot received}) can be calculated: P(\text{dot sent } \mid \text{ dot received}) = P(\text{dot received } \mid \text{ dot sent}) \frac{P(\text{dot sent})}{P(\text{dot received})} = \frac{9}{10}\times \frac{\frac{3}{7}}{\frac{31}{70}} = \frac{27}{31} == Statistical independence == Events A and B are defined to be statistically independent if the probability of the intersection of A and B is equal to the product of the probabilities of A and B: :P(A \cap B) = P(A) P(B). The technique is wrong because the eight events whose probabilities got added are not mutually exclusive. 
Substituting 1 and 2 into 3 to select α: :\begin{align} 1 &= \sum_{\omega \in \Omega} {P(\omega \mid B)} \\\ &= \sum_{\omega \in B} {P(\omega\mid B)} + \cancelto{0}{\sum_{\omega otin B} P(\omega\mid B)} \\\ &= \alpha \sum_{\omega \in B} {P(\omega)} \\\\[5pt] &= \alpha \cdot P(B) \\\\[5pt] \Rightarrow \alpha &= \frac{1}{P(B)} \end{align} So the new probability distribution is #\omega \in B: P(\omega\mid B) = \frac{P(\omega)}{P(B)} #\omega otin B: P(\omega\mid B) = 0 Now for a general event A, :\begin{align} P(A\mid B) &= \sum_{\omega \in A \cap B} {P(\omega \mid B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega\mid B)} \\\ &= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\\\[5pt] &= \frac{P(A \cap B)}{P(B)} \end{align} == See also == * Bayes' theorem * Bayesian epistemology * Borel–Kolmogorov paradox * Chain rule (probability) * Class membership probabilities * Conditional independence * Conditional probability distribution * Conditioning (probability) * Joint probability distribution * Monty Hall problem * Pairwise independent distribution * Posterior probability * Regular conditional probability == References == ==External links== * *Visual explanation of conditional probability Category:Mathematical fallacies Category:Statistical ratios This shows that P(A|B) P(B) = P(B|A) P(A) i.e. P(A|B) = . That is, for an event A, :P(A^c) = 1 - P(A). In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. Generally, there is only one event B such that A and B are both mutually exclusive and exhaustive; that event is the complement of A. Moreover, this "multiplication rule" can be practically useful in computing the probability of A \cap B and introduces a symmetry with the summation axiom for Poincaré Formula: :P(A \cup B) = P(A) + P(B) - P(A \cap B) :Thus the equations can be combined to find a new representation of the : : P(A \cap B)= P(A) + P(B) - P(A \cup B) = P(A \mid B)P(B) : P(A \cup B)= {P(A) + P(B) - P(A \mid B){P(B)}} ==== As the probability of a conditional event ==== Conditional probability can be defined as the probability of a conditional event A_B.
0.323
0.14
11.0
322
1.44
B
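As a worked step for the marked answer (option B): independence means the joint probability factors, so $P(A \cap B) = P(A)\,P(B) = 0.7 \times 0.2 = 0.14$.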
Suppose that $A, B$, and $C$ are mutually independent events and that $P(A)=0.5$, $P(B)=0.8$, and $P(C)=0.9$. Find the probability that all three events occur.
Event \operatorname{P}(s_1)=1/4 \operatorname{P}(s_2)=1/4 \operatorname{P}(s_3)=1/4 \operatorname{P}(s_4)=1/4 Probability of event A 0 1 0 1 \tfrac{1}{2} B 0 0 1 1 \tfrac{1}{2} C 0 1 1 1 \tfrac{3}{4} and so Event s_1 s_2 s_3 s_4 Probability of event A \cap B 0 0 0 1 \tfrac{1}{4} A \cap C 0 1 0 1 \tfrac{1}{2} B \cap C 0 0 1 1 \tfrac{1}{2} A \cap B \cap C 0 0 0 1 \tfrac{1}{4} In this example, C occurs if and only if at least one of A, B occurs. Unconditionally (that is, without reference to C), A and B are independent of each other because \operatorname{P}(A)—the sum of the probabilities associated with a 1 in row A—is \tfrac{1}{2}, while \operatorname{P}(A\mid B) = \operatorname{P}(A \text{ and } B) / \operatorname{P}(B) = \tfrac{1/4}{1/2} = \tfrac{1}{2} = \operatorname{P}(A). Since in the presence of C the probability of A is affected by the presence or absence of B, A and B are mutually dependent conditional on C. == See also == * * * == References == Category:Independence (probability theory) When A and B are mutually exclusive, .Stats: Probability Rules. But conditional on C having occurred (the last three columns in the table), we have \operatorname{P}(A \mid C) = \operatorname{P}(A \text{ and } C) / \operatorname{P}(C) = \tfrac{1/2}{3/4} = \tfrac{2}{3} while \operatorname{P}(A \mid C \text{ and } B) = \operatorname{P}(A \text{ and } C \text{ and } B) / \operatorname{P}(C \text{ and } B) = \tfrac{1/4}{1/2} = \tfrac{1}{2} < \operatorname{P}(A \mid C). The probability of one or both events occurring is denoted P(A ∪ B) and in general, it equals P(A) + P(B) – P(A ∩ B). In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs.Introduction to Artificial Intelligence by Sebastian Thrun and Peter Norvig, 2011 "Unit 3: Conditional Dependence"Introduction to learning Bayesian Networks from Data by Dirk Husmeier "Introduction to Learning Bayesian Networks from Data -Dirk Husmeier" For example, if A and B are two events that individually increase the probability of a third event C, and do not directly affect each other, then initially (when it has not been observed whether or not the event C occurs)Conditional Independence in Statistical theory "Conditional Independence in Statistical Theory", A. P. Dawid" Probabilistic independence on Britannica "Probability->Applications of conditional probability->independence (equation 7) " \operatorname{P}(A \mid B) = \operatorname{P}(A) \quad \text{ and } \quad \operatorname{P}(B \mid A) = \operatorname{P}(B) (A \text{ and } B are independent). If event B occurs then the probability of occurrence of the event A will decrease because its positive relation to C is less necessary as an explanation for the occurrence of C (similarly, event A occurring will decrease the probability of occurrence of B). The probability that at least one of the events will occur is equal to one.Scott Bierman. Conditional dependence of A and B given C is the logical negation of conditional independence ((A \perp\\!\\!\\!\perp B) \mid C). In probability theory and logic, a set of events is jointly or collectively exhaustive if at least one of the events must occur. In consequence, mutually exclusive events have the property: P(A ∩ B) = 0.intmath.com; Mutually Exclusive Events. In logic and probability theory, two events (or propositions) are mutually exclusive or disjoint if they cannot both occur at the same time. 
In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. The probabilities of the individual events (red, and club) are multiplied rather than added. Hence, now the two events A and B are conditionally negatively dependent on each other because the probability of occurrence of each is negatively dependent on whether the other occurs. Obviously, we get the following probabilities :\mathbb P(A_1) = \frac 4{52}, \qquad \mathbb P(A_2 \mid A_1) = \frac 3{51}, \qquad \mathbb P(A_3 \mid A_1 \cap A_2) = \frac 2{50}, \qquad \mathbb P(A_4 \mid A_1 \cap A_2 \cap A_3) = \frac 1{49}. We haveIntroduction to Artificial Intelligence by Sebastian Thrun and Peter Norvig, 2011 "Unit 3: Explaining Away" \operatorname{P}(A \mid C \text{ and } B) < \operatorname{P}(A \mid C). When heads occurs, tails can't occur, or p (heads and tails) = 0, so the outcomes are also mutually exclusive. In the case of flipping a coin, flipping a head and flipping a tail are also mutually exclusive events. Another example of events being collectively exhaustive and mutually exclusive at same time are, event "even" (2,4 or 6) and event "odd" (1,3 or 5) in a random experiment of rolling a six-sided die. The events 1 and 6 are mutually exclusive but not collectively exhaustive.
0.6749
10.4
0.36
4
5275
C
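As a worked step for the marked answer (option C): mutual independence lets the joint probability factor over all three events, so $P(A \cap B \cap C) = P(A)\,P(B)\,P(C) = 0.5 \times 0.8 \times 0.9 = 0.36$.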
A poker hand is defined as drawing 5 cards at random without replacement from a deck of 52 playing cards. Find the probability of four of a kind (four cards of equal face value and one card of a different value).
*The Probability of drawing a given hand is calculated by dividing the number of ways of drawing the hand (Frequency) by the total number of 5-card hands (the sample space; {52 \choose 5} = 2,598,960). In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands. == History == Probability and gambling have been ideas since long before the invention of poker. The number of distinct 5-card poker hands that are possible from 7 cards is 4,824. The probability is calculated based on {52 \choose 5} = 2,598,960, the total number of 5-card combinations. To this day, many gamblers still rely on the basic concepts of probability theory in order to make informed decisions while gambling. ==Frequencies== ===5-card poker hands=== In straight poker and five- card draw, where there are no hole cards, players are simply dealt five cards from a deck of 52. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. Perhaps surprisingly, this is fewer than the number of 5-card poker hands from 5 cards, as some 5-card hands are impossible with 7 cards (e.g. 7-high and 8-high). ===5-card lowball poker hands=== Some variants of poker, called lowball, use a low hand to determine the winning hand. There are 7,462 distinct poker hands. ===7-card poker hands=== In some popular variations of poker such as Texas hold 'em, the most widespread poker variant overall,https://www.casinodaniabeach.com/most-popular-types-of-poker/ a player uses the best five-card poker hand out of seven cards. Hand The five cards (or less) dealt on the screen are known as a hand. ==See also== *Casino comps *Draw poker *Gambling *Gambling mathematics *Problem gambling *Video blackjack *Video Lottery Terminal ==References== ==External links== * Category:Arcade video games Various payout variations are possible, depending on the casino, resulting in a house edge ranging from 1.98% to 6.15%.Wizard of Odds: Four Card Poker ==Rank of hands== The possible four-card hands are (from best to worst): *Four of a kind *Straight flush *Three of a kind *Flush *Straight *Two pair *One pair *High card ==References== ==External links== *ShuffleMaster flash demo of the game Category:Gambling games Category:Poker variants The total number of distinct 7-card hands is {52 \choose 7} = 133{,}784{,}560. The Total line also needs adjusting. ==See also== * Binomial coefficient * Combination * Combinatorial game theory * Effective hand strength algorithm * Event (probability theory) * Game complexity * Gaming mathematics * Odds * Permutation * Probability * Sample space * Set theory ==References== ==External links== * Brian Alspach's mathematics and poker page * MathWorld: Poker * Poker probabilities including conditional calculations * Numerous poker probability tables * 5, 6, and 7 card poker probabilities * Hold'em poker probabilities All the other hand combinations in video poker are the same as in table poker, including such hands as two pair, three of a kind, straight (a sequence of 5 cards of consecutive value), flush (any 5 cards of the same suit), full house (a pair and a three of a kind), four of a kind (four cards of the same value), straight flush (5 consecutive cards of the same suit) and royal flush (a Ten, a Jack, a Queen, a King and an Ace of the same suit). 
frameless|right Four Card Poker is a casino card game similar to Three Card Poker, invented by Roger Snow and owned by Shuffle Master.ShuffleMaster: Four Card Poker ==Description of play== The player can place an ante bet or an "Aces Up" bet or both. A four flush (also flush draw) is a poker draw or non-standard poker hand that is one card short of being a full flush. Video poker is a casino game based on five-card draw poker. The probability is calculated based on {52 \choose 7} = 133,784,560, the total number of 7-card combinations. Four of a kind may refer to: *Four of a kind (poker), a type of poker hand *Four of a Kind (card game), a patience or solitaire *Four of a Kind (TV series), an American reality series about quadruplets *Four of a Kind (film), an Australian feature film *4 of a Kind, the fourth album by American thrash band D.R.I. For example, there are 4 different ways to draw a royal flush (one for each suit), so the probability is , or one in 649,740. The number of distinct poker hands is even smaller. This pejorative term originated in the 19th century when bluffing poker players misrepresented that they had a flush—a poker hand with five cards all of one suit—when they only had four cards of one suit. Flush A five-card hand that contains cards of the same suit.
0.00024
-1.49
0.00131
1.51
-167
A
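The standard count gives $13\binom{4}{4}\cdot 48$ favorable hands out of $\binom{52}{5}$, i.e. $624/2{,}598{,}960 \approx 0.00024$ (option A). A minimal Python check of this closed form:

```python
from math import comb

hands = comb(52, 5)                     # all 5-card hands: 2,598,960
four_of_a_kind = 13 * comb(4, 4) * 48   # pick the rank, take all four, then any fifth card
print(four_of_a_kind / hands)           # ≈ 0.00024
```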
Three students $(S)$ and six faculty members $(F)$ are on a panel discussing a new college policy. In how many different ways can the nine participants be lined up at a table in the front of the auditorium?
Table of Six () is a political conference established by the Republican People's Party, Good Party, Felicity Party, Democrat Party, Democracy and Progress Party and Future Party, with the first meeting held on 12 February 2022. thumb|Table seating arrangement A seating plan is a diagram or a set of written or spoken instructions that determines where people should take their seats. Table topics are topics on various subjects that are discussed by a group of people around a table. In this case, it is customary to arrange the host and hostess at the opposite sides of the table, and alternate male and female guests throughout. In enumerative geometry, Steiner's conic problem is the problem of finding the number of smooth conics tangent to five given conics in the plane in general position. There will be a table topic master for each meeting, who will prepare questions beforehand and ask the participants questions one by one for which they are called upon to answer. A seating plan is of crucial importance for musical ensembles or orchestras, where every type of instrument is allocated a specific section. == See also == * Seating assignment * Seating capacity * Table setting * Kids' table == References == ==External links == * Category:Etiquette Category:Diagrams Plan If the five conics have the properties that *there is no line such that every one of the 5 conics is either tangent to it or passes through one of two fixed points on it (otherwise there is a "double line with 2 marked points" tangent to all 5 conics) *no three of the conics pass through any point (otherwise there is a "double line with 2 marked points" tangent to all 5 conics passing through this triple intersection point) *no two of the conics are tangent *no three of the five conics are tangent to a line *a pair of lines each tangent to two of the conics do not intersect on the fifth conic (otherwise this pair is a degenerate conic tangent to all 5 conics) then the total number of conics C tangent to all 5 (counted with multiplicities) is 3264. So the conics tangent to 5 given conics correspond to the intersection points of 5 degree 6 hypersurfaces, and by Bézout's theorem the number of intersection points of 5 generic degree 6 hypersurfaces is 65 = 7776, which was Steiner's incorrect solution. Many personality or public speaking clubs like the 'Toastmasters' have a separate session in their meetings known as a table topic session. The problem is named after Jakob Steiner who first posed it and who gave an incorrect solution in 1848. ==History== claimed that the number of conics tangent to 5 given conics in general position is 7776 = 65, but later realized this was wrong. Honored guests (moms, dads, and in-laws) are placed to the host's and hostess's right and then left." Student Number 1 was panned by critics and flopped at the box office. ==Cast== ==Production== Shooting was commenced at Chennai, for a fifteen-days schedule, after which the unit moved to Russia to shoot two songs. The Table of Six was originally an independent entity from the Nation Alliance. Some chapters of Toastmasters also host Table Topics contests. ==See also== * TableTopics ==References== Category:Public speaking In particular if C intersects each of the five conics in exactly 3 points (one double point of tangency and two others) then the multiplicity is 1, and if this condition always holds then there are exactly 3264 conics tangent to the 5 given conics. 
However the number of conics is not (6H)5 but (6H−2E)5 because the strict transform of the hypersurface of conics tangent to a given conic is 6H−2E. In the United States according to Peggy Post, "tradition dictates that when everyone is seated together, the host and hostess sit at either end of the table. On 21 January 2023, Table of Six defined itself as the "Nation Alliance" for the first time after its 11th meeting. As practiced by Toastmasters International, the topics to be discussed are written on pieces of paper which are placed in a box in the middle of a table. In the text of the memorandum, lowering the electoral threshold to 3%, treasury aid to the parties that received at least 1% of the votes, ending the omnibus law practice, removing the veto power of the president and extending his term of office to 7 years, recognising the authority to issue a no-confidence question on the government, human rights and human rights in the education curriculum. The participants pick up one paper each and start talking about the topic written on the paper.
22
432.07
0.00539
1.01
362880
E
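The count asked for above is a straight permutation of nine distinct people. A minimal Python check (standard library only):

```python
import math

# Nine distinct panelists can be lined up in 9! different orders.
print(math.factorial(9))   # 362880, matching option E
```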
Each of the 12 students in a class is given a fair 12 -sided die. In addition, each student is numbered from 1 to 12 . If the students roll their dice, what is the probability that there is at least one "match" (e.g., student 4 rolls a 4)?
Numbers on each die Die 1 1 5 10 11 13 17 Die 2 3 4 7 12 15 16 Die 3 2 6 8 9 14 18 === Four players === An optimal and permutation-fair solution for 4 twelve-sided dice was found by Robert Ford in 2010. When thrown or rolled, the die comes to rest showing a random integer from one to six on its upper surface, with each value being equally likely. Die 1 1 4 Die 2 2 3 === Three players === An optimal and permutation- fair solution for 3 six-sided dice was found by Robert Ford in 2010. If r is the total number of dice selecting the 6 face, then P(r \ge k ; n, p) is the probability of having at least k correct selections when throwing exactly n dice. Many players collect or acquire a large number of mixed and unmatching dice. The d20 System includes a four-sided tetrahedral die among other dice with 6, 8, 10, 12 and 20 faces. Dice using both the numerals 6 and 9, which are reciprocally symmetric through rotation, typically distinguish them with a dot or underline. ====Common variations==== Dice are often sold in sets, matching in color, of six different shapes. Numbers on each die Die 1 1 8 11 14 19 22 27 30 35 38 41 48 Die 2 2 7 10 15 18 23 26 31 34 39 42 47 Die 3 3 6 12 13 17 24 25 32 36 37 43 46 Die 4 4 5 9 16 20 21 28 29 33 40 44 45 === Five players === Several candidates exist for a set of 5 dice, but none is known to be optimal. == See also == * Intransitive dice ==References== ==External links== * Go First Dice - Numberphile Category:Dice Some dice, such as those with 10 sides, are usually numbered sequentially beginning with 0, in which case the opposite faces will add to one less than the number of faces. Note the older hand-inked green 12-sided die (showing an 11), manufactured before pre-inked dice were common. Normally, the faces on a die will be placed so opposite faces will add up to one more than the number of faces. These are six-sided dice with sides numbered `2, 3, 3, 4, 4, 5`, which have the same arithmetic mean as a standard die (3.5 for a single die, 7 for a pair of dice), but have a narrower range of possible values (2 through 5 for one, 4 through 10 for a pair). Optimal results have been proven by exhaustion for up to 4 dice. ==Configurations== === Two players === The two player case is somewhat trivial. On some four- sided dice, each face features multiple numbers, with the same number printed near each vertex on all sides. The sum of the numbers on opposite faces is 21 if numbered 1–20. ====Rarer variations==== upright=2.9|thumb|Dice collection: D2–D22, D24, D26, D28, D30, D36, D48, D60 and D100. If several dice of the same type are to be rolled, this is indicated by a leading number specifying the number of dice. Due to circumstances or character skill, the initial roll may have a number added to or subtracted from the final result, or have the player roll extra or fewer dice. This explanation assumes that a group does not produce more than one 6, so it does not actually correspond to the original problem. ==Generalizations== A natural generalization of the problem is to consider n non-necessarily fair dice, with p the probability that each die will select the 6 face when thrown (notice that actually the number of faces of the dice and which face should be selected are irrelevant). Another configuration places only one number on each face, and the rolled number is taken from the downward face. 
==References== Category:Dice In general, if P(n) is the probability of throwing at least n sixes with 6n dice, then: :P(n)=1-\sum_{x=0}^{n-1}\binom{6n}{x}\left(\frac{1}{6}\right)^x\left(\frac{5}{6}\right)^{6n-x}\, . "Uniform fair dice" are dice where all faces have equal probability of outcome due to the symmetry of the die as it is face-transitive. Configurations where all die have the same number of sides are presented here, but alternative configurations might instead choose mismatched dice to minimize the number of sides, or minimize the largest number of sides on a single die.
273
0.648004372
0.00024
+11
0.178
B
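The matching probability above follows from the complement rule: no student matches their own number with probability (11/12)^12. A short sketch confirming the value of option B:

```python
# Complement rule: P(no match) = (11/12)**12, so P(at least one match) = 1 - (11/12)**12.
p_match = 1 - (11 / 12) ** 12
print(p_match)             # ≈ 0.648004372, matching option B
```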
The World Series in baseball continues until either the American League team or the National League team wins four games. How many different orders are possible (e.g., ANNAAA means the American League team wins in six games) if the series goes four games?
In Major League Baseball (MLB), a game seven can occur in the World Series or in a League Championship Series (LCS), which are contested as best-of-seven series. From 2003 to 2010, the AL and NL had each won the World Series four times, but none of them had gone the full seven games. In baseball, a series refers to two or more consecutive games played between the same two teams. During the Major League Baseball Postseason, there are four wild card series (two in each League), each of which are a best-of-3 series. The World Series has been contested 118 times as of 2022, with the AL winning 67 and the NL winning 51. == Precursors to the modern World Series (1857–1902) == === The original World Series === Until the formation of the American Association in 1882 as a second major league, the National Association of Professional Base Ball Players (1871–1875) and then the National League (founded 1876) represented the top level of organized baseball in the United States. These 16 franchises, all of which are still in existence, have each won at least two World Series titles. The National League Championship Series (NLCS) and American League Championship Series (ALCS), since the expansion to best-of-seven, are always played in a 2–3–2 format: Games 1, 2, 6, and 7 are played in the stadium of the team that has home-field advantage, and Games 3, 4, and 5 are played in the stadium of the team that does not. === 1970s === ==== 1971: World Series at night ==== Night games were played in the major leagues beginning with the Cincinnati Reds in 1935, but the World Series remained a strictly daytime event for years thereafter. The two division winners within each league played each other in a best-of-five League Championship Series to determine who would advance to the World Series. Since then, the 2011, 2014, 2016, 2017, and 2019 World Series have gone the full seven games. When the first modern World Series was played in 1903, there were eight teams in each league. This is known in baseball as a road trip, and a team can be on the road for up to 20 games, or 4-5 series. This is the only time in World Series history in which three teams have won consecutive series in succession. A game seven cannot occur in earlier rounds of the MLB postseason, as Division Series and Wild Card rounds use shorter series. ==Key== (#) Extra innings (the number indicates the number of extra innings played) † Indicates the team that won a game seven after coming back from an 0–3 series deficit § Indicates the team that lost a game seven after coming back from an 0–3 series deficit ∞ Indicates a game seven that was played at a neutral site Road* Indicates a game seven that was won by the road team Year (X) Indicates the number of games seven played in that year's postseason (from 1985 on) Each year is linked to an article about that particular Major League Baseball season Team (#) Indicates team and the number of game sevens played by that team at that point ==All-time game sevens== Year Playoff round Date Venue Winner Result Loser Ref. World Series Bennett Park Pittsburgh Pirates (1) 8–0 Detroit Tigers (1) World Series Fenway Park Boston Red Sox (1) 3–2 (10) New York Giants (1) World Series Griffith Stadium Washington Senators (1) 4–3 (12) New York Giants (2) World Series Forbes Field Pittsburgh Pirates (2) 9–7 Washington Senators (2) World Series Yankee Stadium (I) St. Louis Cardinals (1) 3–2 New York Yankees (1) World Series Sportsman's Park (III) St. 
Louis Cardinals (2) 4–2 Philadelphia Athletics (1) World Series Navin Field St. Louis Cardinals (3) 11–0 Detroit Tigers (2) World Series Crosley Field Cincinnati Reds (1) 2–1 Detroit Tigers (3) World Series Wrigley Field Detroit Tigers (4) 9–3 Chicago Cubs (1) World Series Sportsman's Park (III) St. Louis Cardinals (4) 4–3 Boston Red Sox (2) World Series Yankee Stadium (I) New York Yankees (2) 5–2 Brooklyn Dodgers (1) World Series Ebbets Field New York Yankees (3) 4–2 Brooklyn Dodgers (2) World Series Yankee Stadium (I) Brooklyn Dodgers (3) 2–0 New York Yankees (4) World Series Ebbets Field New York Yankees (5) 9–0 Brooklyn Dodgers (4) World Series Yankee Stadium (I) Milwaukee Braves (1) 5–0 New York Yankees (6) World Series Milwaukee County Stadium New York Yankees (7) 6–2 Milwaukee Braves (2) World Series Forbes Field Pittsburgh Pirates (3) 10–9 New York Yankees (8) World Series Candlestick Park New York Yankees (9) 1–0 San Francisco Giants (3) World Series Busch Stadium (I) St. Louis Cardinals (5) 7–5 New York Yankees (10) World Series Metropolitan Stadium Los Angeles Dodgers (5) 2–0 Minnesota Twins (3) World Series Fenway Park St. Louis Cardinals (6) 7–2 Boston Red Sox (3) World Series Busch Stadium (II) Detroit Tigers (5) 4–1 St. Louis Cardinals (7) World Series Memorial Stadium Pittsburgh Pirates (4) 2–1 Baltimore Orioles (1) World Series Riverfront Stadium Oakland Athletics (2) 3–2^ Cincinnati Reds (2) World Series Oakland–Alameda County Coliseum Oakland Athletics (3) 5–2 New York Mets (1) World Series Fenway Park Cincinnati Reds (3) 4–3 Boston Red Sox (4) World Series Memorial Stadium Pittsburgh Pirates (5) 4–1 Baltimore Orioles (2) World Series Busch Stadium (II) St. Louis Cardinals (8) 6–3 Milwaukee Brewers (1) ALCS Exhibition Stadium Kansas City Royals (1) 6–2 Toronto Blue Jays (1) World Series Royals Stadium Kansas City Royals (2) 11–0 St. Louis Cardinals (9) ALCS Fenway Park Boston Red Sox (5) 8–1 California Angels (1) World Series Shea Stadium New York Mets (2) 8–5 Boston Red Sox (6) NLCS Busch Stadium (II) St. Louis Cardinals (10) 6–0 San Francisco Giants (4) World Series Hubert H. Humphrey Metrodome Minnesota Twins (4) 4–2 St. Louis Cardinals (11) NLCS Dodger Stadium Los Angeles Dodgers (6) 6–0 New York Mets (3) NLCS Three Rivers Stadium Atlanta Braves (3) 4–0 Pittsburgh Pirates (6) World Series Hubert H. Humphrey Metrodome Minnesota Twins (5) 1–0 (10) Atlanta Braves (4) NLCS Atlanta–Fulton County Stadium Atlanta Braves (5) 3–2 Pittsburgh Pirates (7) NLCS Atlanta–Fulton County Stadium Atlanta Braves (6) 15–0 St. Louis Cardinals (12) World Series Pro Player Stadium Florida Marlins (1) 3–2 (11) Cleveland Indians (1) World Series Bank One Ballpark Arizona Diamondbacks (1) 3–2 New York Yankees (11) World Series Edison International Field Anaheim Angels (2) 4–1 San Francisco Giants (5) NLCS Wrigley Field Florida Marlins (2) 9–6 Chicago Cubs (2) ALCS Yankee Stadium (I) New York Yankees (12) 6–5 (11) Boston Red Sox (7) ALCS Yankee Stadium (I) Boston Red Sox (8)† 10–3 New York Yankees (13) NLCS Busch Stadium (II) St. Louis Cardinals (13) 5–2 Houston Astros (1) NLCS Shea Stadium St. Louis Cardinals (14) 3–1 New York Mets (4) ALCS Fenway Park Boston Red Sox (9) 11–2 Cleveland Indians (2) ALCS Tropicana Field Tampa Bay Rays (1) 3–1 Boston Red Sox (10) World Series Busch Stadium St. Louis Cardinals (15) 6–2 Texas Rangers (1) NLCS AT&T; Park San Francisco Giants (6) 9–0 St. 
Louis Cardinals (16) World Series Kaufmann Stadium San Francisco Giants (7) 3–2 Kansas City Royals (2) World Series Progressive Field Chicago Cubs (3) 8–7 (10) Cleveland Indians (3) ALCS Minute Maid Park Houston Astros (2) 4–0 New York Yankees (14) World Series Dodger Stadium Houston Astros (3) 5–1 Los Angeles Dodgers (7) NLCS Miller Park Los Angeles Dodgers (8) 5–1 Milwaukee Brewers (2) World Series Minute Maid Park Washington Nationals (1) 6–2 Houston Astros (4) ALCS Petco Park∞ Tampa Bay Rays (2) 4–2 Houston Astros (5)§ NLCS Globe Life Field∞ Los Angeles Dodgers (9) 4–3 Atlanta Braves (7) ==All-time standings== Team Games played Wins Losses Win–loss % St. Louis Cardinals 16 11 5 New York Yankees 14 6 8 Boston Red Sox 10 4 6 Brooklyn / Los Angeles Dodgers 9 5 4 Pittsburgh Pirates 7 5 2 Milwaukee / Atlanta Braves 7 4 3 New York / San Francisco Giants 7 2 5 Washington Senators / Minnesota Twins 5 3 2 Detroit Tigers 5 2 3 Houston Astros 5 2 3 New York Mets 4 1 3 Cincinnati Reds 3 2 1 Kansas City Royals 3 2 1 Chicago Cubs 3 1 2 Philadelphia / Oakland Athletics 3 1 2 Cleveland Indians / Guardians 3 0 3 Florida Marlins 2 2 0 Tampa Bay Rays 2 2 0 California / Anaheim Angels 2 1 1 Milwaukee Brewers 2 0 2 Arizona Diamondbacks 1 1 0 Washington Nationals 1 1 0 Texas Rangers 1 0 1 Toronto Blue Jays 1 0 1 Note: Five teams have never played a game seven: Philadelphia Phillies, Chicago White Sox, San Diego Padres, Seattle Mariners, Colorado Rockies. ESPN selected it as the "Greatest of All Time" in their "World Series 100th Anniversary" countdown, with five of its games being decided by a single run, four games decided in the final at-bat and three games going into extra innings. The remainder of the Postseason consists of the League Division Series, which is a best-of-5 series, and the League Championship Series, which is a best-of-7 series, followed by the World Series, a best-of-7 series to determine the Major League Baseball Champion. There are only two other occasions when a team has won at least three consecutive World Series: 1972 to 1974 by the Oakland Athletics, and 1998 to 2000 by the Yankees. ==== 1947–1964: New York City teams dominate World Series play ==== In an 18-year span from 1947 to 1964, except for 1948 and 1959, the World Series was played in New York City, featuring at least one of the three teams located in New York at the time. Source: MLB.com ;Notes American League (AL) teams have won 67 of the 118 World Series played (56.8%). Starting in 1976, the DH rule was used in the World Series held in even-numbered years. The World Series is the annual championship series of Major League Baseball (MLB) in the United States and Canada. Since then each league has conducted a League Championship Series (ALCS and NLCS) preceding the World Series to determine which teams will advance, while those series have been preceded in turn by Division Series (ALDS and NLDS) since 1995, and Wild Card games or series in each league since 2012. Historically and currently, professional baseball season revolves around a schedule of series, each typically lasting three or four games. The "series" schedule gives its name to the MLB championship series, the World Series.
2
16
26.9
1.6
4.8
A
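Only a sweep by either league ends the series in four games, so a brute-force enumeration is trivial; the sketch below is just a sanity check of option A:

```python
from itertools import product

# A four-game series means one team wins every game, so only two orders exist.
orders = ["".join(s) for s in product("AN", repeat=4) if len(set(s)) == 1]
print(orders, len(orders))  # ['AAAA', 'NNNN'] 2, matching option A
```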
Draw one card at random from a standard deck of cards. The sample space $S$ is the collection of the 52 cards. Assume that the probability set function assigns $1 / 52$ to each of the 52 outcomes. Let $$ \begin{aligned} A & =\{x: x \text { is a jack, queen, or king }\}, \\ B & =\{x: x \text { is a } 9,10, \text { or jack and } x \text { is red }\}, \\ C & =\{x: x \text { is a club }\}, \\ D & =\{x: x \text { is a diamond, a heart, or a spade }\} . \end{aligned} $$ Find $P(A)$
The queen of spades (Q) is one of 52 playing cards in a standard deck: the queen of the suit of spades (). 52 pickup or 52-card pickup is a humorous prank which consists only of picking up a scattered deck of playing cards. The player first turns the first pile up and looks for either an ace, a ten, a king, a queen, and a jack, cards which comprise a royal flush. Royal Flush is a solitaire card game which is played with a deck of 52 playing cards. First, the cards are dealt into nine columns in such a way that the first column contains nine cards, the second having eight cards, the third seven, and so on until the ninth column has a single card. King Albert is a patience or card solitaire using a deck of 52 playing cards of the open packer type. Royal Marriage is a patience or solitaire game using a deck of 52 playing cards. In Pinochle, the queen of spades and the jack of diamonds combine for a unique two-card meld known as a "pinochle". The remaining fifty cards are shuffled and placed on the top of the King to form the stock. Another version of the prank can be played where one player declares "52-card pick up" and is then granted power to throw each of the 52 cards individually at any of the opponents.. ==As a game== By introducing additional rules, the task of picking up the cards can be made into a solitaire game. In Old Maid and several games of the Hearts family, it serves as a single, powerful card in the deck. ==Roles by game== In the Hearts family of card games, the queen of spades is usually considered an unlucky card; it is the eponym of the Black Maria and Black Lady variants of Hearts. The game is won when the cards of the royal flush are the only ones left in the pile and are arranged in any order. They hold up the deck and take the cards one by one off the bottom as the other player(s) call out "smoke" ... "smoke" ... "smoke" ... and, with the first red card, "fire!" If there are any other cards sandwiched among the royal flush cards, the game is lost. ==Variation== To give the game some variation, Lee and Packard suggests the player to try other poker hands such as four-of- a-kinds, full houses, or straight flushes.Sloane Lee & Gabriel Packard, 100 Best Solitaire Games, The player can simply look for a specific hand or look for certain cards to include in their hand while playing the game. ==See also== * List of solitaire games * Glossary of solitaire terms ==References== ==External sources== Category:Single-deck patience card games Category:Closed non-builders Also, the suit of the first card found determines the suit of the entire royal flush. The game is won when the King and Queen are brought together -- that is, when only one or two cards remain in between them, which can then be discarded. ==Variations== Royal Marriage is possible to play in-hand, rather than on a surface such as a table. The other player must then pick them up.. ==Variations== Genuine card games sometimes end in 52 pickup. The dealer has a pack of cards, they then show the teams a card in the pack, e.g. two of spades. In the seven card stud poker variant known as "The Bitch", a face-up deal of the queen of spades results in the deal being abandoned, all cards being shuffled and a new deal starting with only those players who had not already folded when the queen of spades was dealt. Cards are played from the bottom of the deck onto the Queen, and fanned out to show all cards that could possibly affect play. 
It may be advantageous to retain as many cards of the Queen's suit as possible, as these may be easily eliminated by the King or Queen at any time, but may be helpful in eliminating other cards. The discarded cards are set aside.
0.72
29.36
2.57
0.4772
0.2307692308
E
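Event A contains the 12 face cards (jacks, queens, and kings), so P(A) = 12/52. A one-line check with exact fractions:

```python
from fractions import Fraction

# Event A: jack, queen, or king -> 3 ranks x 4 suits = 12 of the 52 cards.
p_A = Fraction(12, 52)
print(p_A, float(p_A))      # 3/13 ≈ 0.2307692308, matching option E
```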
An urn contains four colored balls: two orange and two blue. Two balls are selected at random without replacement, and you are told that at least one of them is orange. What is the probability that the other ball is also orange?
Assume that an urn contains m_1 red balls and m_2 white balls, totalling N = m_1 + m_2 balls. n balls are drawn at random from the urn one by one without replacement. The probability that the second ball picked is red depends on whether the first ball was red or white. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. (A variation both on the first and the second question) ==Examples of urn problems== * beta- binomial distribution: as above, except that every time a ball is observed, an additional ball of the same color is added to the urn. When the mutator is drawn it is replaced along with an additional ball of an entirely new colour. * hypergeometric distribution: the balls are not returned to the urn once extracted. * Mixed replacement/non-replacement: the urn contains black and white balls. We want to calculate the probability that the red ball is not taken. The probability that the first ball picked is red is equal to the weight fraction of red balls: : p_1 = \frac{m_1 \omega_1}{m_1 \omega_1 + m_2 \omega_2}. This is referred to as "drawing without replacement", by opposition to "drawing with replacement". * multivariate hypergeometric distribution: the balls are not returned to the urn once extracted, but with balls of more than two colors. * geometric distribution: number of draws before the first successful (correctly colored) draw. The probability that a particular ball is taken in a particular draw depends not only on its own weight, but also on the total weight of the competing balls that remain in the urn at that moment. The probability that the red ball is not taken in the second draw, under the condition that it was not taken in the first draw, is 999/1999 ≈ . The probability that the red ball is not taken in the third draw, under the condition that it was not taken in the first two draws, is 998/1998 ≈ . One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.Urn Model: Simple Definition, Examples and Applications — The basic urn model Possible questions that can be answered in this model are: * Can I infer the proportion of white and black balls from n observations? In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. thumb|Two urns containing white and red balls. The probability that the red ball is not taken in the first draw is 1000/2000 = . While black balls are set aside after a draw (non-replacement), white balls are returned to the urn after a draw (replacement). Continuing in this way, we can calculate that the probability of not taking the red ball in n draws is approximately 2−n as long as n is small compared to N. * Pólya urn: each time a ball of a particular colour is drawn, it is replaced along with an additional ball of the same colour. What is the distribution of the number of black balls drawn after m draws? * multinomial distribution: there are balls of more than two colors. * Occupancy problem: the distribution of the number of occupied urns after the random assignment of k balls into n urns, related to the coupon collector's problem and birthday problem. 
Various generalizations to this distribution exist for cases where the picking of colored balls is biased so that balls of one color are more likely to be picked than balls of another color.
26.9
3920.70763168
0.2
35.2
2
C
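Because the four balls are distinguishable, the conditional probability can be checked by enumerating the six equally likely unordered pairs; the ball labels below are illustrative names, not part of the original problem:

```python
from itertools import combinations

balls = ["O1", "O2", "B1", "B2"]             # two orange, two blue
pairs = list(combinations(balls, 2))          # 6 equally likely unordered draws
with_orange = [p for p in pairs if any(b.startswith("O") for b in p)]
both_orange = [p for p in with_orange if all(b.startswith("O") for b in p)]
print(len(both_orange) / len(with_orange))    # 0.2, matching option C
```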
Bowl $B_1$ contains two white chips, bowl $B_2$ contains two red chips, bowl $B_3$ contains two white and two red chips, and bowl $B_4$ contains three white chips and one red chip. The probabilities of selecting bowl $B_1, B_2, B_3$, or $B_4$ are $1 / 2,1 / 4,1 / 8$, and $1 / 8$, respectively. A bowl is selected using these probabilities and a chip is then drawn at random. Find $P(W)$, the probability of drawing a white chip.
thumb|Colorized photo of Chips. Eventually, Blumenthal developed the three-stage cooking process known as triple-cooked chips, which he identifies as "the first recipe I could call my own". The Sunday Times described triple-cooked chips as Blumenthal's most influential culinary innovation, which had given the chip "a whole new lease of life". ==History== Blumenthal said he was "obsessed with the idea of the perfect chip",Blumenthal, In Search of Perfection and described how, from 1992 onwards, he worked on a method for making "chips with a glass-like crust and a soft, fluffy centre". Triple-cooked chips are a type of chips developed by the English chef Heston Blumenthal. Bremner Wafers are made by the Bremner Biscuit Company. The Bowl of Baal is a 1975 science fiction novel by Robert Ames Bennet. Four color cards () is a game of the rummy family of card games, with a relatively long history in southern China. Chips served as a sentry dog for the Roosevelt-Churchill conference in 1943. The result is what Blumenthal calls "chips with a glass-like crust and a soft, fluffy centre". As of February 5, 2006, there are 8 variations of the original Bremner Wafer: * Original Wafers: suitable for fine wines as described above * Sesame Wafers: "an excellent complement to cheese, pate, smoked fish or any spread" * Cracked Wheat Wafers: for topping and spreads * Low Sodium Wafers * Caraway Wafers: designed for strong flavors such as Swiss Cheese * Crackers Made with Pure Sunflower Oil: for appetizers * Oyster Crackers Made with Pure Sunflower Oil: best suited for adding to soups. ==See also== * List of crackers ==References== ==External links== *Bremner Biscuit Company website Category:Brand name crackers Blumenthal describes moisture as the "enemy" of crisp chips. ;Meld: A group of one to four cards with specific matching conditions. Chips was a German Shepherd-Collie-Malamute mix owned by Edward J. Wren of Pleasantville, New York. In 2014, the London Fire Brigade attributed an increase in chip pan fires to the increased popularity of "posh chips", including triple-cooked chips. ==Preparation== ===Blumenthal's technique=== Previously, the traditional practice for cooking chips was a two-stage process, in which chipped potatoes were fried in oil first at a relatively low temperature to soften them and then at a higher temperature to crisp up the outside. Second, the cracks that develop in the chips provide places for oil to collect and harden during frying, making them crunchy.Blumenthal, Heston Blumenthal at Home Third, thoroughly drying out the chips drives off moisture that would otherwise keep the crust from becoming crisp. Bloomsbury. ==Further reading== * * ==External links== * Triple-Cooked Chips. The chips are first simmered, then cooled and drained using a sous-vide technique or by freezing; deep fried at and cooled again; and finally deep-fried again at . The dealer starts by taking 4 tiles initially from one of the walls, then the players proceed in anti-clockwise order by taking 4 tiles each from where the previous player left off; after each player has 20 tiles, the dealer takes one more tile as they must discard a tile to start play. Chips (1940–1946) was a trained sentry dog for United States Army, and reputedly the most decorated war dog from World War II. On July 10, 1943, Chips and his handler were pinned down on the beach by an Italian machine-gun team. 
The second of the three stages is frying the chips at for approximately 5 minutes, after which they are cooled once more in a freezer or sous-vide machine before the third and final stage: frying at for approximately 7 minutes until crunchy and golden. Blumenthal began work on the recipe in 1993, and eventually developed the three-stage cooking process.
0.65625
2.74
0.5
35
5
A
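P(W) is a law-of-total-probability sum over the four bowls. A short exact-arithmetic check of option A:

```python
from fractions import Fraction

# (probability of selecting the bowl, probability of white given that bowl)
bowls = [
    (Fraction(1, 2), Fraction(2, 2)),  # B1: two white
    (Fraction(1, 4), Fraction(0, 2)),  # B2: two red
    (Fraction(1, 8), Fraction(2, 4)),  # B3: two white, two red
    (Fraction(1, 8), Fraction(3, 4)),  # B4: three white, one red
]
p_white = sum(p_bowl * p_w for p_bowl, p_w in bowls)
print(p_white, float(p_white))         # 21/32 = 0.65625, matching option A
```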
Divide a line segment into two parts by selecting a point at random. Use your intuition to assign a probability to the event that the longer segment is at least two times longer than the shorter segment.
To approximate the mean line segment length of a given shape, two points are randomly chosen in its interior and the distance is measured. In geometry, the mean line segment length is the average length of a line segment connecting two points chosen uniformly at random in a given shape. While the question may seem simple, it has a fairly complicated answer; the exact value for this is \frac{2 + \sqrt{2} + 5 \ln (1 + \sqrt{2})}{15}. == Formal definition == The mean line segment length for an n-dimensional shape S may formally be defined as the expected Euclidean distance ||⋅|| between two random points x and y, : \mathbb E[\|x-y\|]=\frac1{\lambda(S)^2}\int_S \int_S \|x-y\| \,d\lambda(x) \,d\lambda(y) where λ is the n-dimensional Lebesgue measure. The length of a line segment is given by the Euclidean distance between its endpoints. thumb|historical image – create a line segment (1699) In geometry, a line segment is a part of a straight line that is bounded by two distinct end points, and contains every point on the line that is between its endpoints. Length of line. For the two- dimensional case, this is defined using the distance formula for two points (x1, y1) and (x2, y2) : \frac1{\lambda(S)^2}\iint_S \iint_S \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} \,dx_1 \,dy_1 \,dx_2 \,dy_2. == Approximation methods == Since computing the mean line segment length involves calculating multidimensional integrals, various methods for numerical integration can be used to approximate this value for any shape. Thus, the line segment can be expressed as a convex combination of the segment's two end points. A new approach to line and line segment clipping in homogeneous coordinates, The Visual Computer, ISSN 0178-2789, Vol. 21, No. 11, pp. 905–914, Springer Verlag, 2005. More generally, when both of the segment's end points are vertices of a polygon or polyhedron, the line segment is either an edge (of that polygon or polyhedron) if they are adjacent vertices, or a diagonal. In other words, it is the expected Euclidean distance between two random points, where each point in the shape is equally likely to be chosen. However, in the random case, with high probability the longest edge has length approximately \sqrt{\frac{\log n}{\pi n}}, longer than the average by a non-constant factor. There are two common algorithms for line clipping: Cohen–Sutherland and Liang–Barsky. Line sampling is a method used in reliability engineering to compute small (i.e., rare event) failure probabilities encountered in engineering systems. At the same time, 100% of respondents selected either one of these quantities as being the least desirable. == Calculation methods == There are a few methods to calculate line length to fit the intended average count of characters that such lines should contain based on the factors listed above. The line or line segment p can be computed from points r1, r2 given in homogeneous coordinates directly using the cross product as :p = r1 × r2 = (x1, y1, w1) × (x2, y2, w2) or as :p = r1 × r2 = (x1, y1, 1) × (x2, y2, 1). ==See also== * Clipping (computer graphics) ==References== Category:Clipping (computer graphics) The global probability of failure is the mean of the probability of failure on the lines: : \tilde{p}_f = \frac{1}{N_L} \sum_{i=1}^{N_L} p_f^{(i)} where N_L is the total number of lines used in the analysis and the p_f^{(i)} are the partial probabilities of failure estimated along all the lines. 
In computer graphics, line clipping is the process of removing (clipping) lines or portions of lines outside an area of interest (a viewport or view volume). Even for simple shapes such as a square or a triangle, solving for the exact value of their mean line segment lengths can be difficult because their closed-form expressions can get quite complicated. In geometry, a line segment is often denoted using a line above the symbols for the two endpoints (such as ). More generally than above, the concept of a line segment can be defined in an ordered geometry. A Euclidean minimum spanning tree of a finite set of points in the Euclidean plane or higher-dimensional Euclidean space connects the points by a system of line segments with the points as endpoints, minimizing the total length of the segments.
26.9
0.66666666666
167.0
-191.2
2.3
B
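The intuition can be made precise: if the break point x is uniform on a unit segment, the longer piece is at least twice the shorter exactly when x ≤ 1/3 or x ≥ 2/3, giving probability 2/3. A quick Monte Carlo sketch (the seed and trial count are arbitrary choices):

```python
import random

random.seed(0)                      # arbitrary seed for reproducibility
trials = 1_000_000
hits = 0
for _ in range(trials):
    x = random.random()             # uniform break point on a unit segment
    shorter, longer = min(x, 1 - x), max(x, 1 - x)
    if longer >= 2 * shorter:
        hits += 1
print(hits / trials)                # ≈ 0.667, i.e. about 2/3 (option B)
```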
In a state lottery, four digits are drawn at random one at a time with replacement from 0 to 9. Suppose that you win if any permutation of your selected integers is drawn. Give the probability of winning if you select $6,7,8,9$.
A six-number lottery game is a form of lottery in which six numbers are drawn from a larger pool (for example, 6 out of 44). If the six numbers on a ticket match the numbers drawn by the lottery, the ticket holder is a jackpot winner—regardless of the order of the numbers. Pick 3 draws a 3 digit number and Pick 4 draws a 4 digit number. A lottery is a form of gambling which involves selling numbered tickets and giving prizes to the holders of numbers drawn at random. Lottery mathematics is used to calculate probabilities of winning or losing a lottery game. That is, if a ticket has the numbers 1, 2, 3, 4, 5, and 6, it wins as long as all the numbers 1 through 6 are drawn, no matter what order they come out in. The chance of winning can be demonstrated as follows: The first number drawn has a 1 in 49 chance of matching. The elements of a lottery correspond to the probabilities that each of the states of nature will occur, e.g. (Rain:.70, No Rain:.30).Mas-Colell, Andreu, Michael Whinston and Jerry Green (1995). For example, in the 6 from 49 lottery, given 10 powerball numbers, then the odds of getting a score of 3 and the powerball would be 1 in 56.66 × 10, or 566.6 (the probability would be divided by 10, to give an exact value of \frac{8815}{4994220}). The Missouri Lottery is the state-run lottery in Missouri. This can be written in a general form for all lotteries as: {K\choose B}{N-K\choose K-B}\over {N\choose K} where N is the number of balls in lottery, K is the number of balls in a single ticket, and B is the number of matching balls for a winning ticket. Winning the top prize, usually a progressive jackpot, requires a player to match all six regular numbers drawn; the order in which they are drawn is irrelevant. In the 5-from-90 lotto, the minimum number of tickets that can guarantee a ticket with at least 2 matches is 100. == Information theoretic results == As a discrete probability space, the probability of any particular lottery outcome is atomic, meaning it is greater than zero. If the player wagered an additional $1, they were eligible to win up to $25,000 in the Topper drawing, which was drawn by random number generator. ===Raffle=== The California Lottery offered two raffles; March 17, 2007 and one on January 1, 2008. Using X representing winning the 6-of-49 lottery, the Shannon entropy of 6-of-49 above is \begin{align} \Eta(X) &= -p\log(p) - q\log(q) = -\tfrac{1}{13,983,816}\log\\!{\tfrac{1}{13,983,816}} \- \tfrac{13,983,815}{13,983,816}\log\\!{\tfrac{13,983,815}{13,983,816}} \\\ & \approx 1.80065 \times 10^{-6} \text{ shannons.} \end{align} ==References== ==External links== * Euler's Analysis of the Genoese Lottery – Convergence (August 2010), Mathematical Association of America * Lottery Mathematics – INFAROM Publishing * 13,983,816 and the Lottery – YouTube video with James Clewett, Numberphile, March 2012 Mathematics Category:Combinatorics Category:Gambling mathematics A free ticket with 2 sets of numbers qualifying for the next Lotto draw is won by matching three numbers. Suppose the probabilities for lottery A are (Cured: .90, Uncured: .00, Dead: .10), and for lottery B are (Cured: .50, Uncured: .50, Dead: .00). That is to buy at least one lottery ticket for every possible number combination. There are two draws every day, televised at 1:29pm and 6:59pm.Televised Draw Results, California State Lottery ====Daily 4==== A "pick 4" type game premiered on May 19, 2008. 
This yields a final formula of {n\choose k}={49\choose 6}=\frac{49}{6} \times \frac{48}{5} \times \frac{47}{4} \times \frac{46}{3} \times \frac{45}{2} \times \frac{44}{1}. A seventh ball is often drawn as a reserve ball; in the past it offered only a second chance to get 5+1 numbers correct with 6 numbers played. ==Odds of getting other possibilities in choosing 6 from 49== One must divide the number of combinations producing the given result by the total number of possible combinations (for example, {49\choose 6} = 13,983,816). The prizes are smaller than other lottery games, but there are better odds (averaging 1:5). In expected utility theory, a lottery is a discrete distribution of probability on a set of states of nature.
0.0024
-0.10
0.983
0.0245
313
A
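Any of the 4! orderings of the digits 6, 7, 8, 9 wins, out of 10^4 equally likely ordered draws. Both the closed form and a brute-force count are shown below as a check of option A:

```python
from itertools import product
from math import factorial

# Closed form: the 4! orderings of (6, 7, 8, 9) out of 10**4 ordered draws.
print(factorial(4) / 10**4)                       # 0.0024

# Brute-force check over all 10,000 possible draws.
wins = sum(1 for draw in product(range(10), repeat=4) if sorted(draw) == [6, 7, 8, 9])
print(wins / 10**4)                               # 0.0024, matching option A
```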
Extend Example 1.4-6 to an $n$-sided die. That is, suppose that a fair $n$-sided die is rolled $n$ independent times. A match occurs if side $i$ is observed on the $i$ th trial, $i=1,2, \ldots, n$. Find the limit of this probability as $n$ increases without bound.
More precisely, if E denotes the event in question, p its probability of occurrence, and Nn(E) the number of times E occurs in the first n trials, then with probability one,An Analytic Technique to Prove Borel's Strong Law of Large Numbers Wen, L. Am Math Month 1991 \frac{N_n(E)}{n}\to p\text{ as }n\to\infty. Therefore, while \lim_{n\to\infty} \sum_{i=1}^n \frac{X_i} n = \overline{X} other formulas that look similar are not verified, such as the raw deviation from "theoretical results": \sum_{i=1}^n X_i - n\times\overline{X} not only does it not converge toward zero as n increases, but it tends to increase in absolute value as n increases. ==Examples== For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. What this means is that the probability that, as the number of trials n goes to infinity, the average of the observations converges to the expected value, is equal to one. Section XIII.7 that if this probability is written as p(n,k) then : \lim_{n\rightarrow \infty} p(n,k) \alpha_k^{n+1}=\beta_k where αk is the smallest positive real root of :x^{k+1}=2^{k+1}(x-1) and :\beta_k={2-\alpha_k \over k+1-k\alpha_k}. ==Values of the constants== k \alpha_k \beta_k 1 2 2 2 1.23606797... 1.44721359... 3 1.08737802... 1.23683983... 4 1.03758012... 1.13268577... According to the law, the average of the results obtained from a large number of trials should be close to the expected value and tends to become closer to the expected value as more trials are performed. This result is useful to derive consistency of a large class of estimators (see Extremum estimator). ===Borel's law of large numbers=== Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. It does not converge in probability toward zero (or any other value) as n goes to infinity. The American Statistician, 56(3), pp.186-190 When bounding the event random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality. The result is Kolmogorov's inequality with an extra factor of 27 on the right-hand side: : \Pr \Bigl( \max_{1 \leq k \leq n} | S_k | \geq \alpha \Bigr) \leq \frac{27}{\alpha^2} \operatorname{var} (S_n). ==References== * (Theorem 22.5) * Category:Probabilistic inequalities Category:Statistical inequalities In particular, the proportion of heads after n flips will almost surely converge to as n approaches infinity. By Kolmogorov's zero–one law, for any fixed M, the probability that the event \limsup_n \frac{S_n}{\sqrt{n}} \geq M occurs is 0 or 1. Therefore, the expected value of the average of the rolls is: \frac{1+2+3+4+5+6}{6} = 3.5 According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) will approach 3.5, with the precision increasing as more dice are rolled. It says that: ::\Pr\left[|S_n-E_n| > t \right] \leq 2\exp\left[ - \frac{V_n}{C^2} h\left(\frac{C t}{V_n} \right)\right], where h(u) = (1+u)\log(1+u)-u 5\. 
It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. The law then states that this converges in probability to zero.) Thus, although the absolute value of the quantity S_n/\sqrt{2n\log\log n} is less than any predefined ε > 0 with probability approaching one, it will nevertheless almost surely be greater than ε infinitely often; in fact, the quantity will be visiting the neighborhoods of any point in the interval (-1,1) almost surely. thumb|Exhibition of Limit Theorems and their interrelationship ==Generalizations and variants== The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) random variables with zero mean and bounded increment dates back to Khinchin and Kolmogorov in the 1920s. There are two versions of the law of large numbers — the weak and the strong — and they both state that the sums Sn, scaled by n−1, converge to zero, respectively in probability and almost surely: : \frac{S_n}{n} \ \xrightarrow{p}\ 0, \qquad \frac{S_n}{n} \ \xrightarrow{a.s.} 0, \qquad \text{as}\ \ n\to\infty. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy). In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. Let Sk denote the partial sum :S_k = X_1 + \cdots + X_k.\, Then :\Pr \Bigl( \max_{1 \leq k \leq n} | S_k | \geq 3 \alpha \Bigr) \leq 3 \max_{1 \leq k \leq n} \Pr \bigl( | S_k | \geq \alpha \bigr). ==Remark== Suppose that the random variables Xk have common expected value zero. In probability theory, Etemadi's inequality is a so-called "maximal inequality", an inequality that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound. In probability theory, Cantelli's inequality (also called the Chebyshev- Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds."Tail and Concentration Inequalities" by Hung Q. Ngo"Concentration-of-measure inequalities" by Gábor Lugosi The inequality states that, for \lambda > 0, : \Pr(X-\mathbb{E}[X]\ge\lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2}, where :X is a real-valued random variable, :\Pr is the probability measure, :\mathbb{E}[X] is the expected value of X, :\sigma^2 is the variance of X. Applying the Cantelli inequality to -X gives a bound on the lower tail, : \Pr(X-\mathbb{E}[X]\le -\lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2}.
38
19.4
24.4
-1.32
0.6321205588
E
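The probability of at least one match is 1 − (1 − 1/n)^n, which tends to 1 − e⁻¹ as n grows. A short numerical illustration of the limit:

```python
import math

# P(at least one match) = 1 - (1 - 1/n)**n for a fair n-sided die rolled n times.
for n in (6, 100, 10_000, 1_000_000):
    print(n, 1 - (1 - 1 / n) ** n)
print("limit:", 1 - math.exp(-1))   # 0.6321205588..., matching option E
```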
Calculating the maximum wavelength capable of photoejection A photon of radiation of wavelength $305 \mathrm{~nm}$ ejects an electron from a metal with a kinetic energy of $1.77 \mathrm{eV}$. Calculate the maximum wavelength of radiation capable of ejecting an electron from the metal.
However, because short-wavelength photons carry more energy per photon, the maximum amount of photosynthesis per incident unit of energy is at a longer wavelength, around 650 nm (deep red). A Lyman-Werner photon is an ultraviolet photon with a photon energy in the range of 11.2 to 13.6 eV, corresponding to the energy range in which the Lyman and Werner absorption bands of molecular hydrogen (H2) are found. For a black-body light source at 5800 K, such as the sun is approximately, a fraction 0.368 of its total emitted radiation is emitted as PAR. Using the expression above, the optimal efficiency or second law efficiency for the conversion of radiation to work in the PAR region (from \lambda_1 = 400 nm to \lambda_2 = 700 nm), for a blackbody at T = 5800 K and an organism at T_0 = 300 K is determined as: : \eta^{ex}_\text{PAR}(T) = \frac{\int_{\lambda_1}^{\lambda_2} Ex(\lambda,T)d\lambda}{\int_{0}^\infty L(\lambda, T)d\lambda} = 0.337563 about 8.3% lower than the value considered until now, as a direct consequence of the fact that the organisms which are using solar radiation are also emitting radiation as a consequence of their own temperature. Ultraviolet astronomy is the observation of electromagnetic radiation at ultraviolet wavelengths between approximately 10 and 320 nanometres; shorter wavelengths--higher energy photons--are studied by X-ray astronomy and gamma- ray astronomy. The red curve in the graph shows that photons around 610 nm (orange-red) have the highest amount of photosynthesis per photon. This can be done by photoexcitation (PE), where the electron absorbs a photon and gains all its energy or by collisional excitation (CE), where the electron receives energy from a collision with another, energetic electron. These wavelengths correspond to photon energies of down to . The wavelengths in the solar spectrum range from approximately 0.3-2.0 μm. Ultra-high-energy gamma rays are gamma rays with photon energies higher than 100 TeV (0.1 PeV). In a 18 May 2021 press release, China's Large High Altitude Air Shower Observatory (LHAASO) reported the detection of a dozen ultra-high- energy gamma rays with energies exceeding 1 peta-electron-volt (quadrillion electron-volts or PeV), including one at 1.4 PeV, the highest energy photon ever observed. thumb|upright=1.25|Photosynthetically active radiation (PAR) spans the visible light portion of the electromagnetic spectrum from 400 to 700 nanometers. thumb|420x420px|A schematic of electron excitation, showing excitation by photon (left) and by particle collision (right) Electron excitation is the transfer of a bound electron to a more energetic, but still bound state. Spectral irradiance of wavelengths in the solar spectrum. The following table shows the conversion factors from watts for black-body spectra that are truncated to the range 400-700 nm. 
The quantities in the table are calculated as :\eta_v(T) = \frac{\int_{\lambda_1}^{\lambda_2} B(\lambda, T)\, 683 \mathrm{~[lm/W]}\, y(\lambda)\,d\lambda}{\int_{\lambda_1}^{\lambda_2} B(\lambda, T)\,d\lambda}, :\eta_{\mathrm{photon}}(T) = \frac{\int_{\lambda_1}^{\lambda_2} B(\lambda, T)\,\frac{\lambda}{hcN_\text{A}} \,d\lambda}{\int_{\lambda_1}^{\lambda_2} B(\lambda, T)\,d\lambda}, :\eta_{\mathrm{PAR}}(T) = \frac{\int_{\lambda_1}^{\lambda_2} B(\lambda, T)\,d\lambda}{\int_0^{\infty} B(\lambda, T)\,d\lambda}, where B(\lambda,T) is the black-body spectrum according to Planck's law, y is the standard luminosity function, \lambda_1,\lambda_2 represent the wavelength range (400–700 nm) of PAR, and N_\text{A} is the Avogadro constant. == Second law PAR efficiency == Besides the amount of radiation reaching a plant in the PAR region of the spectrum, it is also important to consider the quality of such radiation. A photon in this energy range, with a frequency that coincides with that of one of the lines in the Lyman or Werner bands, can be absorbed by H2, placing the molecule in an excited electronic state. Class energy (TeV) energy (eV) energy (μJ) frequency (YHz) wavelength (am) comparison properties 10−12 1 1.602 × 10−13 2.418 × 10−12 1.2398 × 1012 near infrared photon (for comparison) 0.1 1 × 1011 0.01602 24.2 12 Z boson Very- high-energy gamma rays 1 1 × 1012 0.1602 242 1.2 flying mosquito produces Cherenkov light 10 1 × 1013 1.602 2.42 × 103 0.12 air shower reaches ground 100 1 × 1014 16.02 2.42 × 104 0.012 ping pong ball falling off a bat causes nitrogen to fluoresce Ultra-high-energy gamma rays 1000 1 × 1015 160.2 2.42 × 10 1.2 × 10−3 10 000 TeV 1 × 1016 1602 2.42 × 106 1.2 × 10−4 potential energy of golf ball on a tee 100 000 1 × 1017 1.602 × 104 2.42 × 107 1.2 × 10−5 1 000 000 1 × 1018 1.602 × 105 2.42 × 108 1.2 × 10−6 10 000 000 1 × 1019 1.602 × 106 2.42 × 109 1.2 × 10−7 air rifle shot 1.22091 × 1028 1.95611 × 109 1.855 × 1019 1.61623 × 10−17 explosion of a car tank full of gasoline Planck energy ==References== ==External links== *Search for Galactic PeV Gamma Rays with the IceCube Neutrino Observatory *Air shower detection of diffuse PeV gamma-rays Category:Gamma rays In the Earth's magnetic field, a 1021 eV photon is expected to interact about 5000 km above the earth's surface. Thus, in order for a rectifying antenna to be an efficient electromagnetic collector in the solar spectrum, it needs to be on the order of hundreds of nm in size. right|250px|thumb| Figure 3. Photons at shorter wavelengths tend to be so energetic that they can be damaging to cells and tissues, but are mostly filtered out by the ozone layer in the stratosphere. *Photo-dissociation fragments carry away some of the photon energy as kinetic energy, heating the gas.
-3.8
418
0.375
2
540
E
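The threshold wavelength follows from the photoelectric relation Φ = hc/λ − E_k and λ_max = hc/Φ. The sketch below uses hc ≈ 1239.84 eV·nm, a standard tabulated value that is not quoted in the problem:

```python
# Work function = photon energy - kinetic energy; lambda_max = hc / work function.
hc_eV_nm = 1239.84                   # assumed tabulated value of hc in eV*nm
photon_wavelength_nm = 305.0
kinetic_energy_eV = 1.77

work_function_eV = hc_eV_nm / photon_wavelength_nm - kinetic_energy_eV
lambda_max_nm = hc_eV_nm / work_function_eV
print(round(lambda_max_nm))          # ≈ 540 nm, matching option E
```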
Estimate the molar volume of $\mathrm{CO}_2$ at $500 \mathrm{~K}$ and 100 atm by treating it as a van der Waals gas.
Hence the molar van der Waals volume, which only counts the volume occupied by the atoms or molecules, is usually about times smaller than the molar volume for a gas at standard temperature and pressure. ==Table of van der Waals radii== == Methods of determination == Van der Waals radii may be determined from the mechanical properties of gases (the original method), from the critical point, from measurements of atomic spacing between pairs of unbonded atoms in crystals or from measurements of electrical or optical properties (the polarizability and the molar refractivity). The complete Van der Waals equation is therefore: :\left(P+a\frac1{V_m^2}\right)(V_m-b)=R T For n moles of gas, it can also be written as: :\left(P+a \frac{n^2}{V^2}\right)(V-n b)=n R T When the molar volume Vm is large, b becomes negligible in comparison with Vm, a/Vm2 becomes negligible with respect to P, and the Van der Waals equation reduces to the ideal gas law, PVm=RT. The van der Waals volume is given by V_{\rm w} = \frac{\pi V_{\rm m}}{N_{\rm A}\sqrt{18}} where the factor of π/√18 arises from the packing of spheres: V = = 23.0 Å, corresponding to a van der Waals radius r = 1.76 Å. === Molar refractivity === The molar refractivity of a gas is related to its refractive index by the Lorentz–Lorenz equation: A = \frac{R T (n^2 - 1)}{3p} The refractive index of helium n = at 0 °C and 101.325 kPa,Kaye & Laby Tables, Refractive index of gases. which corresponds to a molar refractivity A = . The van der Waals constant b volume can be used to calculate the van der Waals volume of an atom or molecule with experimental data derived from measurements on gases. Gas d (Å) b (cmmol) V (Å) r (Å) Hydrogen 0.74611 26.61 44.19 2.02 Nitrogen 1.0975 39.13 64.98 2.25 Oxygen 1.208 31.83 52.86 2.06 Chlorine 1.988 56.22 93.36 2.39 van der Waals radii r in Å (or in 100 picometers) calculated from the van der Waals constants of some diatomic gases. The van der Waals volume of an atom or molecule may also be determined by experimental measurements on gases, notably from the van der Waals constant b, the polarizability α, or the molar refractivity A. The van der Waals volume may be calculated if the van der Waals radii (and, for molecules, the inter-atomic distances, and angles) are known. D-166. b = 23.7 cm/mol. Helium is a monatomic gas, and each mole of helium contains atoms (the Avogadro constant, N): V_{\rm w} = {b\over{N_{\rm A}}} Therefore, the van der Waals volume of a single atom V = 39.36 Å, which corresponds to r = 2.11 Å (≈ 200 picometers). Indeed, there is no reason to assume that the van der Waals radius is a fixed property of the atom in all circumstances: rather, it tends to vary with the particular chemical environment of the atom in any given case. === Van der Waals equation of state === The van der Waals equation of state is the simplest and best-known modification of the ideal gas law to account for the behaviour of real gases: \left (p + a\left (\frac{n}{\tilde{V}}\right )^2\right ) (\tilde{V} - nb) = nRT, where is pressure, is the number of moles of the gas in question and and depend on the particular gas, \tilde{V} is the volume, is the specific gas constant on a unit mole basis and the absolute temperature; is a correction for intermolecular forces and corrects for finite atomic or molecular sizes; the value of equals the van der Waals volume per mole of the gas. The following table lists the Van der Waals constants (from the Van der Waals equation) for a number of common gases and volatile liquids. 
To find the van der Waals volume of a single atom or molecule, it is necessary to divide by the Avogadro constant N. These various methods give values for the van der Waals radius which are similar (1–2 Å, 100–200 pm) but not identical. An isotherm of the Van der Waals fluid taken at T r = 0.90 is also shown where the intersections of the isotherm with the loci illustrate the construct's requirement that the two areas (red and blue, shown) are equal. == Other parameters, forms and applications == ===Other thermodynamic parameters=== We reiterate that the extensive volume V is related to the volume per particle v=V/N where N = nNA is the number of particles in the system. The Van der Waals equation may be solved for VG and VL as functions of the temperature and the vapor pressure pV. The Van der Waals equation includes intermolecular interaction by adding to the observed pressure P in the equation of state a term of the form a /V_m^2, where a is a constant whose value depends on the gas. The molar van der Waals volume should not be confused with the molar volume of the substance. The density of solid helium at 1.1 K and 66 atm is , corresponding to a molar volume V = . However, near the phase transitions between gas and liquid, in the range of p, V, and T where the liquid phase and the gas phase are in equilibrium, the Van der Waals equation fails to accurately model observed experimental behavior. Values of d and b from Weast (1981). van der Waals radii r in Å (or in 100 picometers) calculated from the van der Waals constants of some diatomic gases.
0.4772
2.6
3.2
0.14
0.366
E
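One convenient way to solve the van der Waals equation for the molar volume is fixed-point iteration starting from the ideal-gas estimate. The constants a and b for CO2 below are assumed literature values (tables differ slightly), not data taken from the problem:

```python
R = 0.082057            # gas constant, dm^3 atm K^-1 mol^-1
T, p = 500.0, 100.0     # temperature (K) and pressure (atm)
a, b = 3.610, 0.0429    # assumed CO2 constants: atm dm^6 mol^-2 and dm^3 mol^-1

V = R * T / p           # ideal-gas starting guess (~0.41 dm^3/mol)
for _ in range(50):     # fixed-point iteration: V = b + RT / (p + a/V^2)
    V = b + R * T / (p + a / V**2)
print(round(V, 3))      # ≈ 0.366 dm^3/mol, matching option E
```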
The single electron in a certain excited state of a hydrogenic $\mathrm{He}^{+}$ion $(Z=2)$ is described by the wavefunction $R_{3,2}(r) \times$ $Y_{2,-1}(\theta, \phi)$. What is the energy of its electron?
An electron in the spherically symmetric Coulomb potential has potential energy: :U_\text{C} = -\dfrac{e^2}{4\pi\varepsilon_0r}. Additional terms in the potential energy expression for a Rydberg state, on top of the hydrogenic Coulomb potential energy require the introduction of a quantum defect, δl, into the expression for the binding energy: :E_\text{B} = -\frac{\rm Ry}{(n-\delta_l)^2}. === Electron wavefunctions === The long lifetimes of Rydberg states with high orbital angular momentum can be explained in terms of the overlapping of wavefunctions. This energy is assumed to equal the electron's rest energy, defined by special relativity (E = mc2). The three exceptions to the definition of a Rydberg atom as an atom with a hydrogenic potential, have an alternative, quantum mechanical description that can be characterized by the additional term(s) in the atomic Hamiltonian: *If a second electron is excited into a state ni, energetically close to the state of the outer electron no, then its wavefunction becomes almost as large as the first (a double Rydberg state). For an atom in a multiple Rydberg state, the additional term, Uee, includes a summation of each pair of highly excited electrons: :U_{ee} = \dfrac{e^2}{4\pi\varepsilon_0}\sum_{i < j}\dfrac{1}{|\mathbf{r}_i - \mathbf{r}_j|}. An atom in a Rydberg state has a valence electron in a large orbit far from the ion core; in such an orbit, the outermost electron feels an almost hydrogenic, Coulomb potential, UC from a compact ion core consisting of a nucleus with Z protons and the lower electron shells filled with Z-1 electrons. In hydrogen the binding energy is given by: : E_\text{B} = -\frac{\rm Ry}{n^2}, where Ry = 13.6 eV is the Rydberg constant. The electron (symbol e) is on the left. In the Bohr model of the hydrogen atom, the electron transition from energy level n = 3 to n = 2 results in the emission of an H-alpha photon. There are three notable exceptions that can be characterized by the additional term added to the potential energy: *An atom may have two (or more) electrons in highly excited states with comparable orbital radii. In atomic physics, a two-electron atom or helium-like ion is a quantum mechanical system consisting of one nucleus with a charge of Ze and just two electrons. The wavefunction is a function of the two electron's positions: \psi = \psi(\mathbf{r}_1,\mathbf{r}_2) There is no closed form solution for this equation. ==Spectrum== The optical spectrum of the two electron atom has two systems of lines. The helium hydride ion or hydridohelium(1+) ion or helonium is a cation (positively charged ion) with chemical formula HeH+. In this case, the electron-electron interaction gives rise to a significant deviation from the hydrogen potential. E\psi = -\hbar^2\left[\frac{1}{2\mu}\left( \nabla_1^2 + \nabla_2^2 \right) + \frac{1}{M} \nabla_1 \cdot \nabla_2\right] \psi + \frac{e^2}{4\pi\varepsilon_0}\left[ \frac{1}{r_{12}} -Z\left( \frac{1}{r_1} + \frac{1}{r_2} \right) \right] \psi where r1 is the position of one electron (r1 = |r1| is its magnitude), r2 is the position of the other electron (r2 = |r2| is the magnitude), r12 = |r12| is the magnitude of the separation between them given by |\mathbf{r}_{12}| = |\mathbf{r}_2 - \mathbf{r}_1 | μ is the two-body reduced mass of an electron with respect to the nucleus of mass M \mu = \frac{m_e M}{m_e+M} and Z is the atomic number for the element (not a quantum number). Stark - Coulomb potential for a Rydberg atom in a static electric field.
The word electron is a combination of the words electric and ion ("electron, n.2"). *If the valence electron has very low angular momentum (interpreted classically as an extremely eccentric elliptical orbit), then it may pass close enough to polarise the ion core, giving rise to a 1/r^4 core polarization term in the potential. An electron-electron repulsion term must be included in the atomic Hamiltonian. A Rydberg atom is an excited atom with one or more electrons that have a very high principal quantum number, n. The electron's mass is approximately 1/1836 that of the proton. It consists of a helium atom bonded to a hydrogen atom, with one electron removed.
2.5151
1.60
6.9
30
-6.04697
E
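A quick numerical check of the answer above (a sketch only; the rounded Rydberg energy in eV and the variable names are mine, not from the source):

# Hydrogenic energy E_n = -Z^2 * (hcR_H) / n^2 for He+ (Z = 2); R_{3,2} Y_{2,-1} implies n = 3
RYDBERG_EV = 13.6057        # hcR_H expressed in eV (rounded, assumed value)
Z, n = 2, 3
E = -Z**2 * RYDBERG_EV / n**2
print(E)                    # ~ -6.047 eV, matching option E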
Calculate the typical wavelength of neutrons after reaching thermal equilibrium with their surroundings at $373 \mathrm{~K}$. For simplicity, assume that the particles are travelling in one dimension.
This can also be expressed using the reduced Planck constant \hbar= \frac{h}{2\pi} as \lambda_{\mathrm{th}} = {\sqrt{\frac{2\pi\hbar^2}{ mk_{\mathrm B}T}}} . ==Massless particles== For massless (or highly relativistic) particles, the thermal wavelength is defined as \lambda_{\mathrm{th}}= \frac{hc}{2 \pi^{1/3} k_{\mathrm B} T} = \frac{\pi^{2/3}\hbar c}{ k_{\mathrm B} T} , where c is the speed of light. If is the number of dimensions, and the relationship between energy () and momentum () is given by E=ap^s (with and being constants), then the thermal wavelength is defined as \lambda_{\mathrm{th}}=\frac{h}{\sqrt{\pi}}\left(\frac{a}{k_{\mathrm B}T}\right)^{1/s} \left[\frac{\Gamma(n/2+1)}{\Gamma(n/s+1)}\right]^{1/n} , where is the Gamma function. Such is the case for molecular or atomic gases at room temperature, and for thermal neutrons produced by a neutron source. ==Massive particles== For massive, non-interacting particles, the thermal de Broglie wavelength can be derived from the calculation of the partition function. Introduction of the Theory of Thermal Neutron Scattering. https://books.google.com/books?id=KUVD8KJt7_0C&dq;=thermal- neutron+reactor&pg;=PR9 thus scattering neutrons by nuclear forces, some nuclides are scattered large. In physics, the thermal de Broglie wavelength (\lambda_{\mathrm{th}}, sometimes also denoted by \Lambda) is roughly the average de Broglie wavelength of particles in an ideal gas at the specified temperature. The neutron fluence is defined as the neutron flux integrated over a certain time period, so its usual unit is cm−2 (neutrons per centimeter squared). The critical temperature is the transition point between these two regimes, and at this critical temperature, the thermal wavelength will be approximately equal to the interparticle distance. Earth atmospheric neutron flux, apparently from thunderstorms, can reach levels of 3·10−2 to 9·10+1 cm−2 s−1. A thermal-neutron reactor is a nuclear reactor that uses slow or thermal neutrons. The measured quantity is the difference in the number of gamma rays emitted within a solid angle between the two neutron spin states. The higher the neutron flux the greater the chance of a nuclear reaction occurring as there are more neutrons going through an area per unit time. === Reactor vessel wall neutron fluence === A reactor vessel of a typical nuclear power plant (PWR) endures in 40 years (32 full reactor years) of operation approximately 6.5×1019 cm−2 (E > 1 MeV) of neutron fluence.Nuclear Power Plant Borssele Reactor Pressure Vessel Safety Assessment, p. 29, 5.6 Neutron Fluence Calculation. A neutron may pass by a nucleus with a probability determined by the nuclear interaction distance, or be absorbed, or undergo scattering that may be either coherent or incoherent. Equivalently, it can be defined as the number of neutrons travelling through a small sphere of radius R in a time interval, divided by \pi R^2 (the cross section of the sphere) and by the time interval. Hence, \lambda_{\rm th} = \frac{h}{\sqrt{2\pi m k_{\mathrm B} T}} , where h is the Planck constant, is the mass of a gas particle, k_{\mathrm B} is the Boltzmann constant, and is the temperature of the gas. For example, when observing the long-wavelength spectrum of black body radiation, the classical Rayleigh–Jeans law can be applied, but when the observed wavelengths approach the thermal wavelength of the photons in the black body radiator, the quantum Planck's law must be used. 
==General definition== A general definition of the thermal wavelength for an ideal gas of particles having an arbitrary power-law relationship between energy and momentum (dispersion relationship), in any number of dimensions, can be introduced. The polarization of the incoming neutron beam is alternated rapidly to study the spin correlation of the direction of the emitted gamma ray. TIme-dependent neutronics and temperatures (TINTE) is a two-group diffusion code for the study of nuclear and thermal behavior of high temperature reactors. ("Thermal" does not mean hot in an absolute sense, but means in thermal equilibrium with the medium it is interacting with, the reactor's fuel, moderator and structure, which is much lower energy than the fast neutrons initially produced by fission.) This explains why the 1-D derivation above agrees with the 3-D case. ==Examples== Some examples of the thermal de Broglie wavelength at 298 K are given below. As with the thermal wavelength for massive particles, this is of the order of the average wavelength of the particles in the gas and defines a critical point at which quantum effects begin to dominate. The usual unit is cm−2s−1 (neutrons per centimeter squared per second). NPDGamma is an ongoing effort to measure the parity-violating asymmetry in polarized cold neutron capture on parahydrogen. :\vec n + p \to d + \gamma Polarized neutrons of energies 2 meV – 15 meV are incident on a liquid parahydrogen target.
-45
7.27
17.0
2598960
226
E
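A sketch of the arithmetic behind the answer above, assuming the one-dimensional estimate λ = h/(m k_B T)^(1/2) that the 226 pm option implies; the constant values are rounded and the names are mine:

# 1-D thermal wavelength estimate for a neutron at 373 K
h, kB, m_n = 6.626e-34, 1.381e-23, 1.675e-27   # SI values, rounded
T = 373.0
lam = h / (m_n * kB * T) ** 0.5
print(lam)                  # ~2.26e-10 m, i.e. about 226 pm (option E)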
Using the perfect gas equation, calculate the pressure in kilopascals exerted by $1.25 \mathrm{~g}$ of nitrogen gas in a flask of volume $250 \mathrm{~cm}^3$ at $20^{\circ} \mathrm{C}$.
The equation was developed by Martin Hans Christian Knudsen (1871–1949), a Danish physicist who taught and conducted research at the Technical University of Denmark. ==Cylindrical tube== For a cylindrical tube, the Knudsen equation is: :q = \frac16 \sqrt{2 \pi} \Delta P \frac{d^3}{ l \sqrt{\rho_1}}, where: Quantity Description q volume flow rate at unit pressure (volume×pressure/time) ΔP pressure drop from the beginning of the tube to the end d diameter of the tube l length of the tube ρ1 ratio of density and pressure For nitrogen (or air) at room temperature, the conductivity C (in liters per second) of a tube can be calculated from this equation: :\frac{C}{\mathrm{L}/\mathrm{s}} \approx 12 \, \frac{d^3/\mathrm{cm}^3}{{l/\mathrm{cm}}} == References == Category:Fluid dynamics In fluid dynamics, the Knudsen equation is used to describe how gas flows through a tube in free molecular flow. thumb|upright=1.75|An illustration of Dalton's law using the gases of air at sea level. A newton is equal to 1 kg⋅m/s2, and a kilogram-force is 9.80665 N,The NIST Guide for the use of the International System of Units, National Institute of Standards and Technology, 18 Oct 2011 meaning that 1 kgf/cm2 equals 98.0665 kilopascals (kPa). For a fixed number of moles of gas n, a thermally perfect gas * is in thermodynamic equilibrium * is not chemically reacting * has internal energy U, enthalpy H, and constant volume / constant pressure heat capacities C_V, C_P that are solely functions of temperature and not of pressure P or volume V, i.e., U = U(T), H = H(T), dU = C_V (T) dT, dH = C_P (T) dT. Dalton's law (also called Dalton's law of partial pressures) states that in a mixture of non-reacting gases, the total pressure exerted is equal to the sum of the partial pressures of the individual gases. A Knudsen gas is a gas in a state of such low density that the average distance travelled by the gas molecules between collisions (mean free path) is greater than the diameter of the receptacle that contains it. If the diameter of the receptacle is less than 68nm, the Knudsen number would greater than 1, and this sample of air would be considered a Knudsen gas. In particular, the short average distances between molecules increases intermolecular forces between gas molecules enough to substantially change the pressure exerted by them, an effect not included in the ideal gas model. ==See also== * * * * * * * * * ==References== Category:Gas laws Category:Physical chemistry Category:Engineering thermodynamics de:Partialdruck#Dalton-Gesetz et:Daltoni seadus Dalton's law is related to the ideal gas laws. ==Formula== Mathematically, the pressure of a mixture of non-reactive gases can be defined as the summation: p_\text{total} = \sum_{i=1}^n p_i = p_1+p_2+p_3+\cdots+p_n where p1, p2, ..., pn represent the partial pressures of each component. p_{i} = p_\text{total} x_i where xi is the mole fraction of the ith component in the total mixture of n components . ==Volume-based concentration== The relationship below provides a way to determine the volume- based concentration of any individual gaseous component p_i = p_\text{total} c_i where ci is the concentration of component i. In physics and engineering, a perfect gas is a theoretical gas model that differs from real gases in specific ways that makes certain calculations easier to handle. Note the "square" instead of 2. 
( means "oil" in Swedish) A kilogram-force per centimetre square (kgf/cm2), often just kilogram per square centimetre (kg/cm2), or kilopond per centimetre square (kp/cm2) is a deprecated unit of pressure using metric units. It would not be a Knudsen gas if the diameter of the receptacle is greater than 68nm. ==References== == See also == * Free streaming * Kinetic theory Category:Gases Category:Phases of matter In all perfect gas models, intermolecular forces are neglected. In some older publications, kilogram-force per square centimetre is abbreviated ksc instead of kg/cm2. : 1 at = 98.0665 kPa 1 at ≈ standard atmospheres ==Ambiguity of at== The symbol "at" clashes with that of the katal (symbol: "kat"), the SI unit of catalytic activity; a kilotechnical atmosphere would have the symbol "kat", indistinguishable from the symbol for the katal. When 10^{-1}<\rm{Kn}<10, the flow regime of the gas is transitional flow. It is not a part of the International System of Units (SI), the modern metric system. 1 kgf/cm2 equals 98.0665 kPa (kilopascals). There are more collisions between the gas molecules and the receptacle walls (shown in red) compared to collisions between gas molecules (shown in blue). == Knudsen number == For a Knudsen gas, the Knudsen number must be greater than 1. Pressure piling describes phenomena related to combustion of gases in a tube or long vessel. Nomenclature 1 Nomenclature 2 Heat capacity at constant V, C_V, or constant P, C_P Ideal-gas law PV = nRT and C_P - C_V = nR Calorically perfect Perfect Thermally perfect Semi-perfect Ideal Imperfect Imperfect, or non-ideal === Thermally and calorically perfect gas === Along with the definition of a perfect gas, there are also two more simplifications that can be made although various textbooks either omit or combine the following simplifications into a general "perfect gas" definition. Dalton's law is not strictly followed by real gases, with the deviation increasing with pressure. However, the idea of a perfect gas model is often invoked as a combination of the ideal gas equation of state with specific additional assumptions regarding the variation (or nonvariation) of the heat capacity with temperature. == Perfect gas nomenclature == The terms perfect gas and ideal gas are sometimes used interchangeably, depending on the particular field of physics and engineering.
15.425
25.6773
435.0
6.6
9.8
C
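The pressure can be checked directly from p = nRT/V; a minimal sketch, with rounded constants and an assumed molar mass for N2:

# Perfect gas law for 1.25 g of N2 in 250 cm^3 at 20 degrees C
R, M = 8.314, 28.02         # J K^-1 mol^-1; g mol^-1 (assumed molar mass of N2)
m, T, V = 1.25, 293.15, 250e-6
p = (m / M) * R * T / V     # pressure in Pa
print(p / 1e3)              # ~435 kPa (option C)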
Determine the energies and degeneracies of the lowest four energy levels of an ${ }^1 \mathrm{H}^{35} \mathrm{Cl}$ molecule freely rotating in three dimensions. What is the frequency of the transition between the lowest two rotational levels? The moment of inertia of an ${ }^1 \mathrm{H}^{35} \mathrm{Cl}$ molecule is $2.6422 \times 10^{-47} \mathrm{~kg} \mathrm{~m}^2$.
For example, the rotational energy levels for linear molecules (in the rigid-rotor approximation) are :E_\text{rot} = hc BJ(J + 1). The particular pattern of energy levels (and, hence, of transitions in the rotational spectrum) for a molecule is determined by its symmetry. Thus, for linear molecules the energy levels are described by a single moment of inertia and a single quantum number, J, which defines the magnitude of the rotational angular momentum. For a linear molecule, analysis of the rotational spectrum provides values for the rotational constantThis article uses the molecular spectroscopist's convention of expressing the rotational constant B in cm−1. Fitting the spectra to the theoretical expressions gives numerical values of the angular moments of inertia from which very precise values of molecular bond lengths and angles can be derived in favorable cases. Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. Analytical expressions can be derived for the fourth category, asymmetric top, for rotational levels up to J=3, but higher energy levels need to be determined using numerical methods. Thus, by completing a Deslandres table it is easy to assign the correct vibrational quantum numbers v and v' for the transition, allowing important molecular properties to be calculated, such as the dissociation energy. == References == * Category:Spectroscopy The rotational energies are derived theoretically by considering the molecules to be rigid rotors and then applying extra terms to account for centrifugal distortion, fine structure, hyperfine structure and Coriolis coupling. Therefore, there should be three rotational diffusion constants - the eigenvalues of the rotational diffusion tensor - resulting in five rotational time constants. In the absence of an external electrical field, the rotational energy of a symmetric top is a function of only J and K and, in the rigid rotor approximation, the energy of each rotational state is given by : F\left( J,K \right) = B J \left( J+1 \right) + \left( A - B \right) K^2 \qquad J = 0, 1, 2, \ldots \quad \mbox{and}\quad K = +J, \ldots, 0, \ldots, -J where B = {h\over{8\pi^2cI_B}} and A = {h\over{8\pi^2cI_A}} for a prolate symmetric top molecule or A = {h\over{8\pi^2cI_C}} for an oblate molecule. When this is not possible, as with most asymmetric tops, all that can be done is to fit the spectra to three moments of inertia calculated from an assumed molecular structure. In a linear molecule the moment of inertia about an axis perpendicular to the molecular axis is unique, that is, I_B = I_C, I_A=0 , so : B = {h \over{8\pi^2cI_B}}= {h \over{8\pi^2cI_C}} For a diatomic molecule : I=\frac{m_1m_2}{m_1 +m_2}d^2 where m1 and m2 are the masses of the atoms and d is the distance between them. To account for this a centrifugal distortion correction term is added to the rotational energy levels of the diatomic molecule. The third quantum number, K is associated with rotation about the principal rotation axis of the molecule. For rotational spectroscopy, molecules are classified according to symmetry into a spherical top, linear and symmetric top; analytical expressions can be derived for the rotational energy terms of these molecules. The second factor is the degeneracy of the rotational state, which is equal to . 
Under the rigid rotor model, the rotational energy levels, F(J), of the molecule can be expressed as, : F\left( J \right) = B J \left( J+1 \right) \qquad J = 0,1,2,... where B is the rotational constant of the molecule and is related to the moment of inertia of the molecule. For any molecule, there are three moments of inertia: I_A, I_B and I_C about three mutually orthogonal axes A, B, and C with the origin at the center of mass of the system. This spectrum is also interesting because it shows clear evidence of Coriolis coupling in the asymmetric structure of the band. ===Linear molecules=== right|thumb|300px|Energy levels and line positions calculated in the rigid rotor approximation The rigid rotor is a good starting point from which to construct a model of a rotating molecule. However, since only integer values of J are allowed, the maximum line intensity is observed for a neighboring integer J. :J = \sqrt{\frac{kT}{2hcB}} - \frac{1}{2} The diagram at the right shows an intensity pattern roughly corresponding to the spectrum above it. ====Centrifugal distortion==== When a molecule rotates, the centrifugal force pulls the atoms apart. In this approximation, the vibration-rotation wavenumbers of transitions are :\tilde u = \tilde u_\text{vib} + BJ(J + 1) - B'J'(J' + 1), where B and B' are rotational constants for the upper and lower vibrational state respectively, while J and J' are the rotational quantum numbers of the upper and lower levels.
311875200
0.3359
12.0
635.7
0.9522
D
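A sketch of the rigid-rotor arithmetic for the answer above: the levels are E_J = (ħ²/2I)J(J+1) with degeneracy 2J+1, so the J = 0 → 1 gap corresponds to a frequency ħ/(2πI). Constant values are rounded and variable names are mine:

# J = 0 -> 1 transition frequency for 1H35Cl from the given moment of inertia
import math
hbar = 1.0546e-34
I = 2.6422e-47              # kg m^2, as given in the question
nu = hbar / (2 * math.pi * I)
print(nu / 1e9)             # ~635 GHz (option D, read in GHz)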
Using the Planck distribution, compare the energy output of a black-body radiator (such as an incandescent lamp) at two different wavelengths by calculating the ratio of the energy output at $450 \mathrm{~nm}$ (blue light) to that at $700 \mathrm{~nm}$ (red light) at $298 \mathrm{~K}$.
Only 25 percent of the energy in the black-body spectrum is associated with wavelengths shorter than the value given by the peak-wavelength version of Wien's law. thumb|upright=1.45|Planck blackbody spectrum parameterized by wavelength, fractional bandwidth (log wavelength or log frequency), and frequency, for a temperature of 6000 K. Notice that for a given temperature, different parameterizations imply different maximal wavelengths. The formula is given, where E is the radiant heat emitted from a unit of area per unit time, T is the absolute temperature, and is the Stefan–Boltzmann constant. ==Equations== ===Planck's law of black-body radiation=== Planck's law states that :B_ u(T) = \frac{2h u^3}{c^2}\frac{1}{e^{h u/kT} - 1}, where :B_{ u}(T) is the spectral radiance (the power per unit solid angle and per unit of area normal to the propagation) density of frequency u radiation per unit frequency at thermal equilibrium at temperature T. Units: power / [area × solid angle × frequency]. :h is the Planck constant; :c is the speed of light in vacuum; :k is the Boltzmann constant; : u is the frequency of the electromagnetic radiation; :T is the absolute temperature of the body. Meanwhile, the average energy of a photon from a blackbody isE = \left[\frac{\pi^4}{30\ \zeta(3)}\right] k_\mathrm{B}T \approx 2.701\ k_\mathrm{B}T,where \zeta is the Riemann zeta function. ===Approximations=== In the limit of low frequencies (i.e. long wavelengths), Planck's law becomes the Rayleigh–Jeans law B_ u(T) \approx \frac{2 u^2 }{c^2} k_\mathrm{B} T or B_\lambda(T) \approx \frac{2c}{\lambda^4} k_\mathrm{B} T The radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe. The relative spectral power distribution (SPD) of a Planckian radiator follows Planck's law, and depends on the second radiation constant, c_2=hc/k. Through Planck's law the temperature spectrum of a black body is proportionally related to the frequency of light and one may substitute the temperature (T) for the frequency in this equation. The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator. ==Different forms== Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The emitted energy flux density or irradiance B_ u(T,E), is related to the photon flux density b_ u(T,E) through :B_ u(T,E) = Eb_ u(T,E) ===Wien's displacement law=== Wien's displacement law shows how the spectrum of black-body radiation at any temperature is related to the spectrum at any other temperature. The Planckian locus is determined by substituting into the above equations the black body spectral radiant exitance, which is given by Planck's law: :M(\lambda,T) =\frac{c_{1}}{\lambda^5}\frac{1}{\exp\left(\frac{c_2}{{\lambda}T}\right)-1} where: :c1 = 2hc2 is the first radiation constant :c2 = hc/k is the second radiation constant and: :M is the black body spectral radiant exitance (power per unit area per unit wavelength: watt per square meter per meter (W/m3)) :T is the temperature of the black body :h is Planck's constant :c is the speed of light :k is Boltzmann's constant This will give the Planckian locus in CIE XYZ color space. 
Although the spectra of such lights are not accurately described by the black-body radiation curve, a color temperature (the correlated color temperature) is quoted for which black-body radiation would most closely match the subjective color of that source. Commonly a wavelength parameterization is used and in that case the black body spectral radiance (power per emitting area per solid angle) is: :u_{\lambda}(\lambda,T) = {2 h c^2\over \lambda^5}{1\over e^{h c/\lambda kT}-1}. They recommend that the Planck spectrum be plotted as a “spectral energy density per fractional bandwidth distribution,” using a logarithmic scale for the wavelength or frequency. ==See also== * Wien approximation * Emissivity * Sakuma–Hattori equation * Stefan–Boltzmann law * Thermometer * Ultraviolet catastrophe ==References== ==Further reading== * * ==External links== * Eric Weisstein's World of Physics Category:Statistical mechanics Category:Foundational quantum physics Category:Light Category:1893 in science Category:1893 in Germany Then for a perfectly black body, the wavelength- specific ratio of emissive power to absorption ratio is again just , with the dimensions of power. For a black body much bigger than the wavelength, the light energy absorbed at any wavelength λ per unit time is strictly proportional to the black-body curve. The theory even predicted that all bodies would emit most of their energy in the ultraviolet range, clearly contradicted by the experimental data which showed a different peak wavelength at different temperatures (see also Wien's law). thumb|303px|As the temperature increases, the peak of the emitted black-body radiation curve moves to higher intensities and shorter wavelengths. From the Planck constant h and the Boltzmann constant k, Wien's constant b can be obtained. ==Peak differs according to parameterization== Constants for different parameterizations of Wien's law Parameterized by x_\mathrm{peak} b (μm⋅K) Wavelength, \lambda 2898 \log\lambda or \log u 3670 Frequency, u 5099 Other characterizations of spectrum Parameterized by x b (μm⋅K) Mean photon energy 5327 10% percentile 2195 25% percentile 2898 50% percentile 4107 70% percentile 5590 90% percentile 9376 The results in the tables above summarize results from other sections of this article. Additionally, for a given temperature the radiance consisting of all photons between two wavelengths must be the same regardless of which distribution you use. In the limit of high frequencies (i.e. small wavelengths) Planck's law tends to the Wien approximation: B_ u(T) \approx \frac{2 h u^3}{c^2} e^{-\frac{h u}{k_\mathrm{B}T}} or B_\lambda(T) \approx \frac{2 h c^2}{\lambda^5} e^{-\frac{hc}{\lambda k_\mathrm{B} T}}. ===Percentiles=== Percentile (μm·K) 0.01% 910 0.0632 0.1% 1110 0.0771 1% 1448 0.1006 10% 2195 0.1526 20% 2676 0.1860 25.0% 2898 0.2014 30% 3119 0.2168 40% 3582 0.2490 41.8% 3670 0.2551 50% 4107 0.2855 60% 4745 0.3298 64.6% 5099 0.3544 70% 5590 0.3885 80% 6864 0.4771 90% 9376 0.6517 99% 22884 1.5905 99.9% 51613 3.5873 99.99% 113374 7.8799 Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. According to Kirchhoff's law of thermal radiation, this entails that, for every frequency , at thermodynamic equilibrium at temperature , one has , so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. 
In physics, Planck's law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T, when there is no net flow of matter or energy between the body and its environment. Planck's law accurately describes black-body radiation. UV-B lamps are lamps that emit a spectrum of ultraviolet light with wavelengths ranging from 290–320 nanometers. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorptivity is again just , with the dimensions of power.
358800
7.42
0.14
2.10
0.11
D
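A quick check of the ratio using the Planck distribution (a sketch; rho is my own helper name and the constant values are rounded):

# Ratio of Planck spectral densities rho(450 nm) / rho(700 nm) at 298 K
import math
h, c, kB, T = 6.626e-34, 2.998e8, 1.381e-23, 298.0

def rho(lam):
    # Planck distribution up to a wavelength-independent constant factor
    return lam ** -5 / (math.exp(h * c / (lam * kB * T)) - 1.0)

print(rho(450e-9) / rho(700e-9))   # ~2.1e-16 (option D)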
Lead has $T_{\mathrm{c}}=7.19 \mathrm{~K}$ and $\mathcal{H}_{\mathrm{c}}(0)=63.9 \mathrm{kA} \mathrm{m}^{-1}$. At what temperature does lead become superconducting in a magnetic field of $20 \mathrm{kA} \mathrm{m}^{-1}$ ?
At that temperature even the weakest external magnetic field will destroy the superconducting state, so the strength of the critical field is zero. In 2007, the same group published results suggesting a superconducting transition temperature of 260 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H2S, although high pressures of approximately 90 gigapascals were required. In 2020, a room-temperature superconductor (critical temperature 288 K) made from hydrogen, carbon and sulfur under pressures of around 270 gigapascals was described in a paper in Nature. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. This material has critical temperature of 10 kelvins and can superconduct at up to about 15 teslas. Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. Cambridge University Press, Cambridge From about 1993, the highest-temperature superconductor known was a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ) with Tc = 133–138 K. Changes in either temperature or magnetic flux density can cause the phase transition between normal and superconducting states.High Temperature Superconductivity, Jeffrey W. Lynn Editor, Springer-Verlag (1990) The highest temperature under which the superconducting state is seen is known as the critical temperature. For a given temperature, the critical field refers to the maximum magnetic field strength below which a material remains superconducting. A room-temperature superconductor is a material that is capable of exhibiting superconductivity at operating temperatures above , that is, temperatures that can be reached and easily maintained in an everyday environment. , the material with the highest claimed superconducting temperature is an extremely pressurized carbonaceous sulfur hydride with a critical transition temperature of +15 °C at 267 GPa. One exception to this rule is the iron pnictide group of superconductors which display behaviour and properties typical of high-temperature superconductors, yet some of the group have critical temperatures below 30 K. === By material === thumb|"Top: Periodic table of superconducting elemental solids and their experimental critical temperature (T). Low temperature superconductors refer to materials with a critical temperature below 30 K, and are cooled mainly by liquid helium (Tc > 4.2 K). High-temperature superconductivity was discovered in the 1980s. In 2019, the material with the highest accepted superconducting temperature was highly pressurized lanthanum decahydride (), whose transition temperature is approximately . In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K. Later, other substances with superconductivity at temperatures up to 30 K were found. 
For a type-I superconductor the discontinuity in heat capacity seen at the superconducting transition is generally related to the slope of the critical field (H_\text{c}) at the critical temperature (T_\text{c}):Superconductivity of Metals and Alloys, P. G. de Gennes, Addison-Wesley (1989) :C_\text{super} - C_\text{normal} = {T \over 4 \pi} \left(\frac{dH_\text{c}}{dT}\right)^2_{T=T_\text{c}} There is also a direct relation between the critical field and the critical current – the maximum electric current density that a given superconducting material can carry, before switching into the normal state. The upper critical field (at 0 K) can also be estimated from the coherence length () using the Ginzburg–Landau expression: .Introduction to Solid State Physics, Charles Kittel, John Wiley and Sons, Inc. ==Lower critical field== The lower critical field is the magnetic flux density at which the magnetic flux starts to penetrate a type-II superconductor. ==References== Category:Superconductivity Several hundred metals, compounds, alloys and ceramics possess the property of superconductivity at low temperatures. The results were strongly supported by Monte Carlo computer simulations. === Meissner effect === When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected.
41
6.0
91.7
2.6
61
B
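The answer above follows from inverting the parabolic critical-field relation H_c(T) = H_c(0)[1 - (T/T_c)^2]; a minimal check with the given numbers:

# Temperature at which the critical field falls to the applied field
Tc, Hc0, H = 7.19, 63.9, 20.0     # K, kA/m, kA/m
T = Tc * (1 - H / Hc0) ** 0.5
print(T)                          # ~6.0 K (option B)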
When an electric discharge is passed through gaseous hydrogen, the $\mathrm{H}_2$ molecules are dissociated and energetically excited $\mathrm{H}$ atoms are produced. If the electron in an excited $\mathrm{H}$ atom makes a transition from $n=2$ to $n=1$, calculate the wavenumber of the corresponding line in the emission spectrum.
Therefore, each wavelength of the emission lines corresponds to an electron dropping from a certain energy level (greater than 1) to the first energy level. == See also == * Bohr model * H-alpha * Hydrogen spectral series * K-alpha * Lyman-alpha line * Lyman continuum photon * Moseley's law * Rydberg formula * Balmer series ==References== Category:Emission spectroscopy Category:Hydrogen physics The emission spectrum of atomic hydrogen has been divided into a number of spectral series, with wavelengths given by the Rydberg formula. thumb|In the Bohr model of the hydrogen atom, the electron transition from energy level n = 3 to n = 2 results in the emission of an H-alpha photon. This equation is valid for all hydrogen-like species, i.e. atoms having only a single electron, and the particular case of hydrogen spectral lines is given by Z=1. ==Series== ===Lyman series ( = 1)=== In the Bohr model, the Lyman series includes the lines emitted by transitions of the electron from an outer orbit of quantum number n > 1 to the 1st orbit of quantum number n' = 1. It is emitted when the atomic electron transitions from an n = 2 orbital to the ground state (n = 1), where n is the principal quantum number. In physics and chemistry, the Lyman series is a hydrogen spectral series of transitions and resulting ultraviolet emission lines of the hydrogen atom as an electron goes from n ≥ 2 to n = 1 (where n is the principal quantum number), the lowest energy level of the electron. It is the first spectral line in the Balmer series and is emitted when an electron falls from a hydrogen atom's third- to second-lowest energy level. A photon in this energy range, with a frequency that coincides with that of one of the lines in the Lyman or Werner bands, can be absorbed by H2, placing the molecule in an excited electronic state. Therefore the motion of the electron in the process of photon absorption or emission is always accompanied by motion of the nucleus, and, because the mass of the nucleus is always finite, the energy spectra of hydrogen-like atoms must depend on the nuclear mass. ==Rydberg formula== The energy differences between levels in the Bohr model, and hence the wavelengths of emitted or absorbed photons, is given by the Rydberg formula: : {1 \over \lambda} = Z^2 R_\infty \left( {1 \over {n'}^2} - {1 \over n^2} \right) where : is the atomic number, : (often written n_1) is the principal quantum number of the lower energy level, : (or n_2) is the principal quantum number of the upper energy level, and : R_\infty is the Rydberg constant. ( for hydrogen and for heavy metals). These observed spectral lines are due to the electron making transitions between two energy levels in an atom. Here is an illustration of the first series of hydrogen emission lines: Historically, explaining the nature of the hydrogen spectrum was a considerable problem in physics. thumb|A hydrogen atom with proton and electron spins aligned (top) undergoes a flip of the electron spin, resulting in emission of a photon with a 21 cm wavelength (bottom) The hydrogen line, 21 centimeter line, or H I line is a spectral line that is created by a change in the energy state of solitary, electrically neutral hydrogen atoms. Also in . , vacuum (nm) 2 121.57 3 102.57 4 97.254 5 94.974 6 93.780 ∞ 91.175 Source: ===Balmer series ( = 2)=== 757px|thumb|center|The four visible hydrogen emission spectrum lines in the Balmer series. 
A Lyman-Werner photon is an ultraviolet photon with a photon energy in the range of 11.2 to 13.6 eV, corresponding to the energy range in which the Lyman and Werner absorption bands of molecular hydrogen (H2) are found. For the Lyman series the naming convention is: *n = 2 to n = 1 is called Lyman- alpha, *n = 3 to n = 1 is called Lyman-beta, etc. H-alpha has a wavelength of 656.281 nm, is visible in the red part of the electromagnetic spectrum, and is the easiest way for astronomers to trace the ionized hydrogen content of gas clouds. Approximately half the time, this cascade will include the n = 3 to n = 2 transition and the atom will emit H-alpha light. Replacing the energy in the above formula with the expression for the energy in the hydrogen atom where the initial energy corresponds to energy level n and the final energy corresponds to energy level m, : \frac{1}{\lambda} = \frac{E_\text{i} - E_\text{f}}{12398.4\,\text{eV Å}} = R_\text{H} \left(\frac{1}{m^2} - \frac{1}{n^2} \right) Where RH is the same Rydberg constant for hydrogen from Rydberg's long known formula. Spectral emission occurs when an electron transitions, or jumps, from a higher energy state to a lower energy state. H-alpha (Hα) is a deep-red visible spectral line of the hydrogen atom with a wavelength of 656.28 nm in air and 656.46 nm in vacuum. For example, the line is called "Lyman-alpha" (Ly-α), while the line is called "Paschen-delta" (Pa-δ). thumb|Energy level diagram of electrons in hydrogen atom There are emission lines from hydrogen that fall outside of these series, such as the 21 cm line. This two-step photodissociation process, known as the Solomon process, is one of the main mechanisms by which molecular hydrogen is destroyed in the interstellar medium. thumb|Electronic and vibrational levels of the hydrogen molecule In reference to the figure shown, Lyman-Werner photons are emitted as described below: *A hydrogen molecule can absorb a far- ultraviolet photon (11.2 eV < energy of the photon < 13.6 eV) and make a transition from the ground electronic state X to excited state B (Lyman) or C (Werner). In a laboratory setting, the hydrogen line parameters have been more precisely measured as: : λ = 21.106114054160(30) cm : ν = 1420405751.768(2) Hz in a vacuum.
4.4
62.2
82258.0
0.05882352941
226
C
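A one-line check of the Lyman-series wavenumber from the Rydberg formula; the rounded value of R_H is an assumption:

# Wavenumber of the n = 2 -> 1 (Lyman-alpha) emission line
R_H = 109677.0                    # cm^-1 (rounded)
nu_tilde = R_H * (1 / 1**2 - 1 / 2**2)
print(nu_tilde)                   # ~82258 cm^-1 (option C)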
Calculate the shielding constant for the proton in a free $\mathrm{H}$ atom.
Any one of these constants can be written in terms of any of the others using the fine-structure constant \alpha : :r_{\mathrm{e}} = \alpha \frac{\lambda_{\mathrm{e}}}{2\pi} = \alpha^2 a_0. ==Hydrogen atom and similar systems== The Bohr radius including the effect of reduced mass in the hydrogen atom is given by : \ a_0^* \ = \frac{m_\text{e}}{\mu}a_0 , where \mu = m_\text{e} m_\text{p} / (m_\text{e} + m_\text{p}) is the reduced mass of the electron–proton system (with m_\text{p} being the mass of proton). The shielding constant for each group is formed as the sum of the following contributions: #An amount of 0.35 from each other electron within the same group except for the [1s] group, where the other electron contributes only 0.30. This value is based on measurements involving a proton and an electron (namely, electron scattering measurements and complex calculation involving scattering cross section based on Rosenbluth equation for momentum-transfer cross section), and studies of the atomic energy levels of hydrogen and deuterium. In January 2013, an updated value for the charge radius of a proton——was published. The constant is expressed for either hydrogen as R_\text{H}, or at the limit of infinite nuclear mass as R_\infty. A resolution came in 2019, when two different studies, using different techniques involving the Lamb shift of the electron in hydrogen, and electron–proton scattering, found the radius of the proton to be 0.833 fm, with an uncertainty of ±0.010 fm, and 0.831 fm. The constant first arose as an empirical fitting parameter in the Rydberg formula for the hydrogen spectral series, but Niels Bohr later showed that its value could be calculated from more fundamental constants according to his model of the atom. The radius of the proton is linked to the form factor and momentum-transfer cross section. Consistent with the spectroscopy method, this produces a proton radius of about . ===2010 experiment=== In 2010, Pohl et al. published the results of an experiment relying on muonic hydrogen as opposed to normal hydrogen. H-alpha (Hα) is a deep-red visible spectral line of the hydrogen atom with a wavelength of 656.28 nm in air and 656.46 nm in vacuum. The internationally accepted value of a proton's charge radius is . By measuring the energy required to excite hydrogen atoms from the 2S to the 2P state, the Rydberg constant could be calculated, and from this the proton radius inferred. For hydrogen, whose nucleus consists only of one proton, this indirectly measures the proton charge radius. Revised values of screening constants based on computations of atomic structure by the Hartree–Fock method were obtained by Enrico Clementi et al. in the 1960s. ==Rules== Firstly, the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together. :[1s] [2s,2p] [3s,3p] [3d] [4s,4p] [4d] [4f] [5s, 5p] [5d] etc. The result is again ~5% smaller than the previously-accepted proton radius. The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol "H") is a lone proton. this opinion is not yet universally held. ==Problem== Prior to 2010, the proton charge radius was measured using one of two methods: one relying on spectroscopy, and one relying on nuclear scattering. ===Spectroscopy method=== The spectroscopy method uses the energy levels of electrons orbiting the nucleus. 
His personal assumption is that past measurements have misgauged the Rydberg constant and that the current official proton size is inaccurate. ===Quantum chromodynamic calculation=== In a paper by Belushkin et al. (2007), including different constraints and perturbative quantum chromodynamics, a smaller proton radius than the then-accepted 0.877 femtometres was predicted. ===Proton radius extrapolation=== Papers from 2016 suggested that the problem was with the extrapolations that had typically been used to extract the proton radius from the electron scattering data though these explanation would require that there was also a problem with the atomic Lamb shift measurements. ===Data analysis method=== In one of the attempts to resolve the puzzle without new physics, Alarcón et al. (2018) of Jefferson Lab have proposed that a different technique to fit the experimental scattering data, in a theoretically as well as analytically justified manner, produces a proton charge radius from the existing electron scattering data that is consistent with the muonic hydrogen measurement. The 2014 CODATA adjustment slightly reduced the recommended value for the proton radius (computed using electron measurements only) to , but this leaves the discrepancy at σ. The screening constant, and subsequently the shielded (or effective) nuclear charge for each electron is deduced as: : \begin{matrix} 4s &: s = 0.35 \times 1& \+ &0.85 \times 14 &+& 1.00 \times 10 &=& 22.25 &\Rightarrow& Z_{\mathrm{eff}}(4s) = 26.00 - 22.25 = 3.75\\\ 3d &: s = 0.35 \times 5& & &+& 1.00 \times 18 &=& 19.75 &\Rightarrow& Z_{\mathrm{eff}}(3d)= 26.00 - 19.75 =6.25\\\ 3s,3p &: s = 0.35 \times 7& \+ &0.85 \times 8 &+& 1.00 \times 2 &=& 11.25 &\Rightarrow& Z_{\mathrm{eff}}(3s,3p)= 26.00 - 11.25 =14.75\\\ 2s,2p &: s = 0.35 \times 7& \+ &0.85 \times 2 & & &=& 4.15 &\Rightarrow& Z_{\mathrm{eff}}(2s,2p)= 26.00 - 4.15 =21.85\\\ 1s &: s = 0.30 \times 1& & & & &=& 0.30 &\Rightarrow& Z_{\mathrm{eff}}(1s)= 26.00 - 0.30 =25.70 \end{matrix} Note that the effective nuclear charge is calculated by subtracting the screening constant from the atomic number, 26. ==Motivation== The rules were developed by John C. Slater in an attempt to construct simple analytic expressions for the atomic orbital of any electron in an atom. The Rydberg constant for hydrogen may be calculated from the reduced mass of the electron: : R_\text{H} = R_\infty \frac{ m_\text{e} m_\text{p} }{ m_\text{e}+m_\text{p} } \approx 1.09678 \times 10^7 \text{ m}^{-1} , where * m_\text{e} is the mass of the electron, * m_\text{p} is the mass of the nucleus (a proton). === Rydberg unit of energy === The Rydberg unit of energy is equivalent to joules and electronvolts in the following manner: :1 \ \text{Ry} \equiv h c R_\infty = \frac{m_\text{e} e^4}{8 \varepsilon_{0}^{2} h^2} = \frac{e^2}{8 \pi \varepsilon_{0} a_0} = 2.179\;872\;361\;1035(42) \times 10^{-18}\ \text{J} \ = 13.605\;693\;122\;994(26)\ \text{eV}. === Rydberg frequency === :c R_\infty = 3.289\;841\;960\;2508(64) \times 10^{15}\ \text{Hz} . === Rydberg wavelength === :\frac 1 {R_\infty} = 9.112\;670\;505\;824(17) \times 10^{-8}\ \text{m}. The result is a protonated atom, which is a chemical compound of hydrogen.
1.91
1.1
5275.0
1.775
1.7
D
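A sketch of the calculation behind the answer above, assuming the Lamb diamagnetic-shielding expression σ = (μ0 e²/12π m_e)⟨1/r⟩ with ⟨1/r⟩ = 1/a0 for the 1s ground state; the formula choice and rounded constants are my assumptions about the intended method:

# Lamb diamagnetic shielding for the electron at the proton of a free H atom
import math
e, m_e, a0 = 1.602e-19, 9.109e-31, 5.292e-11
mu0 = 4e-7 * math.pi
sigma = mu0 * e**2 / (12 * math.pi * m_e) * (1 / a0)
print(sigma)                      # ~1.78e-5, i.e. option D read as 1.775e-5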
An insurance company sells several types of insurance policies, including auto policies and homeowner policies. Let $A_1$ be those people with an auto policy only, $A_2$ those people with a homeowner policy only, and $A_3$ those people with both an auto and homeowner policy (but no other policies). For a person randomly selected from the company's policy holders, suppose that $P\left(A_1\right)=0.3, P\left(A_2\right)=0.2$, and $P\left(A_3\right)=0.2$. Further, let $B$ be the event that the person will renew at least one of these policies. Say from past experience that we assign the conditional probabilities $P\left(B \mid A_1\right)=0.6, P\left(B \mid A_2\right)=0.7$, and $P\left(B \mid A_3\right)=0.8$. Given that the person selected at random has an auto or homeowner policy, what is the conditional probability that the person will renew at least one of those policies?
This shows that P(A|B) P(B) = P(B|A) P(A) i.e. P(A|B) = . * This results in P(A \mid B) = P(A \cap B)/P(B) whenever P(B) > 0 and 0 otherwise. In this event, the event B can be analyzed by a conditional probability with respect to A. Substituting 1 and 2 into 3 to select α: :\begin{align} 1 &= \sum_{\omega \in \Omega} {P(\omega \mid B)} \\\ &= \sum_{\omega \in B} {P(\omega\mid B)} + \cancelto{0}{\sum_{\omega otin B} P(\omega\mid B)} \\\ &= \alpha \sum_{\omega \in B} {P(\omega)} \\\\[5pt] &= \alpha \cdot P(B) \\\\[5pt] \Rightarrow \alpha &= \frac{1}{P(B)} \end{align} So the new probability distribution is #\omega \in B: P(\omega\mid B) = \frac{P(\omega)}{P(B)} #\omega otin B: P(\omega\mid B) = 0 Now for a general event A, :\begin{align} P(A\mid B) &= \sum_{\omega \in A \cap B} {P(\omega \mid B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega\mid B)} \\\ &= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\\\[5pt] &= \frac{P(A \cap B)}{P(B)} \end{align} == See also == * Bayes' theorem * Bayesian epistemology * Borel–Kolmogorov paradox * Chain rule (probability) * Class membership probabilities * Conditional independence * Conditional probability distribution * Conditioning (probability) * Joint probability distribution * Monty Hall problem * Pairwise independent distribution * Posterior probability * Regular conditional probability == References == ==External links== * *Visual explanation of conditional probability Category:Mathematical fallacies Category:Statistical ratios For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. The conditional probability can be found by the quotient of the probability of the joint intersection of events and (P(A \cap B))—the probability at which A and B occur together, although not necessarily occurring at the same time—and the probability of : :P(A \mid B) = \frac{P(A \cap B)}{P(B)}. We have P(A\mid B)=\tfrac{P(A \cap B)}{P(B)} = \tfrac{3/36}{10/36}=\tfrac{3}{10}, as seen in the table. == Use in inference == In statistical inference, the conditional probability is an update of the probability of an event based on new information. The relationship between P(A|B) and P(B|A) is given by Bayes' theorem: :\begin{align} P(B\mid A) &= \frac{P(A\mid B) P(B)}{P(A)}\\\ \Leftrightarrow \frac{P(B\mid A)}{P(A\mid B)} &= \frac{P(B)}{P(A)} \end{align} That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B). === Assuming marginal and conditional probabilities are of similar size === In general, it cannot be assumed that P(A) ≈ P(A|B). Unconditionally (that is, without reference to C), A and B are independent of each other because \operatorname{P}(A)—the sum of the probabilities associated with a 1 in row A—is \tfrac{1}{2}, while \operatorname{P}(A\mid B) = \operatorname{P}(A \text{ and } B) / \operatorname{P}(B) = \tfrac{1/4}{1/2} = \tfrac{1}{2} = \operatorname{P}(A). That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A). Thus, the conditional probability P(D1 = 2 | D1+D2 ≤ 5) = = 0.3: : Table 3 \+ + D2 D2 D2 D2 D2 D2 \+ + 1 2 3 4 5 6 D1 1 2 3 4 5 6 7 D1 2 3 4 5 6 7 8 D1 3 4 5 6 7 8 9 D1 4 5 6 7 8 9 10 D1 5 6 7 8 9 10 11 D1 6 7 8 9 10 11 12 Here, in the earlier notation for the definition of conditional probability, the conditioning event B is that D1 + D2 ≤ 5, and the event A is D1 = 2\. 
It can be shown that :P(A_B)= \frac{P(A \cap B)}{P(B)} which meets the Kolmogorov definition of conditional probability. === Conditioning on an event of probability zero === If P(B)=0 , then according to the definition, P(A \mid B) is undefined. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}. In probability theory, regular conditional probability is a concept that formalizes the notion of conditioning on the outcome of a random variable. Since in the presence of C the probability of A is affected by the presence or absence of B, A and B are mutually dependent conditional on C. == See also == * * * == References == Category:Independence (probability theory) For example, the conditional probability that someone unwell (sick) is coughing might be 75%, in which case we would have that = 5% and = 75 %. The existence of regular conditional probabilities: necessary and sufficient conditions. From the law of total probability, its expected value is equal to the unconditional probability of . === Partial conditional probability === The partial conditional probability P(A\mid B_1 \equiv b_1, \ldots, B_m \equiv b_m) is about the probability of event A given that each of the condition events B_i has occurred to a degree b_i (degree of belief, degree of experience) that might be different from 100%. The reverse, insufficient adjustment from the prior probability is conservatism. == Formal derivation == Formally, P(A | B) is defined as the probability of A according to a new probability function on the sample space, such that outcomes not in B have probability 0 and that it is consistent with all original probability measures.George Casella and Roger L. Berger (1990), Statistical Inference, Duxbury Press, (p. 18 et seq.)Grinstead and Snell's Introduction to Probability, p. 134 Let Ω be a discrete sample space with elementary events {ω}, and let P be the probability measure with respect to the σ-algebra of Ω. In probability theory, conditional probability is a measure of the probability of an event occurring, given that another event (by assumption, presumption, assertion or evidence) has already occurred. Applying the law of total probability, we have: : \begin{align} P(A) & = P(A\mid B_X) \cdot P(B_X) + P(A\mid B_Y) \cdot P(B_Y) \\\\[4pt] & = {99 \over 100} \cdot {6 \over 10} + {95 \over 100} \cdot {4 \over 10} = {{594 + 380} \over 1000} = {974 \over 1000} \end{align} where * P(B_X)={6 \over 10} is the probability that the purchased bulb was manufactured by factory X; * P(B_Y)={4 \over 10} is the probability that the purchased bulb was manufactured by factory Y; * P(A\mid B_X)={99 \over 100} is the probability that a bulb manufactured by X will work for over 5000 hours; * P(A\mid B_Y)={95 \over 100} is the probability that a bulb manufactured by Y will work for over 5000 hours. In general, it cannot be assumed that P(A|B) ≈ P(B|A).
0.686
7
27.0
2
-21.2
A
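The conditional probability follows from the law of total probability over the three disjoint policy groups; a minimal check:

# P(B | A1 or A2 or A3) = sum P(Ai) P(B|Ai) / sum P(Ai)
P_A = [0.3, 0.2, 0.2]
P_B_given_A = [0.6, 0.7, 0.8]
joint = sum(pa * pb for pa, pb in zip(P_A, P_B_given_A))   # P(B and (A1 or A2 or A3))
print(joint / sum(P_A))                                    # 0.48 / 0.7 ~ 0.686 (option A)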
What is the number of possible 13-card hands (in bridge) that can be selected from a deck of 52 playing cards?
Each player is dealt thirteen cards from a standard 52-card deck. The total number of distinct 7-card hands is {52 \choose 7} = 133{,}784{,}560. The probability is calculated based on {52 \choose 7} = 133,784,560, the total number of 7-card combinations. Eliminating identical hands that ignore relative suit values leaves 6,009,159 distinct 7-card hands. The number of distinct 5-card poker hands that are possible from 7 cards is 4,824. Note that all cards are dealt face up Fourteen Out (also known as Fourteen Off, Fourteen Puzzle, Take Fourteen, or just Fourteen) is a Patience card game played with a deck of 52 playing cards. There are 7,462 distinct poker hands. ===7-card poker hands=== In some popular variations of poker such as Texas hold 'em, the most widespread poker variant overall,https://www.casinodaniabeach.com/most-popular-types-of-poker/ a player uses the best five-card poker hand out of seven cards. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. thumb|left|180px| thumb|left|180px| In duplicate bridge, a board is an item of equipment that holds one deal, or one deck of 52 cards distributed in four hands of 13 cards each. Contract bridge, or simply bridge, is a trick-taking card game using a standard 52-card deck. *The Probability of drawing a given hand is calculated by dividing the number of ways of drawing the hand (Frequency) by the total number of 5-card hands (the sample space; {52 \choose 5} = 2,598,960). The probability is calculated based on {52 \choose 5} = 2,598,960, the total number of 5-card combinations. The diagram is typical of that used to illustrate a deal of 52 cards in four hands in the game of contract bridge.Bridge Writing Style Guide by Richard Pavlicek Each hand is designated by a point on the compass and so North–South are partners against East–West. So eliminating identical hands that ignore relative suit values, there are only 134,459 distinct hands. Perhaps surprisingly, this is fewer than the number of 5-card poker hands from 5 cards, as some 5-card hands are impossible with 7 cards (e.g. 7-high and 8-high). ===5-card lowball poker hands=== Some variants of poker, called lowball, use a low hand to determine the winning hand. # Player shuffle – before the start of play, each table receives a number of boards each containing 13 cards in each of its four pockets. The Total line also needs adjusting. ===7-card lowball poker hands=== In some variants of poker a player uses the best five-card low hand selected from seven cards. The frequencies are calculated in a manner similar to that shown for 5-card hands,https://www.pokerstrategy.com/strategy/various-poker/texas-holdem- probabilities/ except additional complications arise due to the extra two cards in the 7-card poker hand. The name refers to the goal of each turn to make pairs that add up to 14."Take Fourteen" (p.80) in The Little Book of Solitaire, Running Press, 2002. ==Rules== The cards are dealt face up into twelve columns, from left to right. The number of distinct poker hands is even smaller. The director is summoned if any player does not have exactly thirteen cards. The Total line also needs adjusting. 
10.065778
635013559600
2.9
1.61
48
B
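The count is the binomial coefficient C(52, 13); a one-line check:

# Number of distinct 13-card bridge hands from a 52-card deck
from math import comb
print(comb(52, 13))               # 635013559600 (option B)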
What is the number of ways of selecting a president, a vice president, a secretary, and a treasurer in a club consisting of 10 persons?
This is a list of fellows of the Royal Society elected in 1909."Fellows of the Royal Society", Royal Society. This is a list of fellows of the Royal Society elected in 1907."Fellows of the Royal Society", Royal Society. This is a list of fellows of the Royal Society elected in 1910."Fellows of the Royal Society", Royal Society. This is a list of fellows of the Royal Society elected in 1903."Fellows of the Royal Society", Royal Society. This is a list of fellows of the Royal Society elected in 1908."Fellows of the Royal Society", Royal Society. This is a list of fellows of the Royal Society elected in 1904."Fellows of the Royal Society", Royal Society. "Fellowship from 1660 onwards" (xlsx file on Google Docs via the Royal Society) ==Fellows== *Edward Charles Cyril Baly (1871–1948) *Sir Thomas Barlow (1845–1945) *Ernest William Barnes (1874–1953) *Francis Arthur Bather (1863–1934) *Sir Robert Abbott Hadfield (1858–1940) *Sir Alfred Daniel Hall (1864–1942) *Sir Arthur Harden (1865–1940) *Alfred John Jukes-Browne (1851–1914) *Sir John Graham Kerr (1869–1957) *William James Lewis (1847–1926) *John Alexander McClelland (1870–1920) *William McFadden Orr (1866–1934) *Alfred Barton Rendle (1865–1938) *James Lorrain Smith (1862–1931) *James Thomas Wilson (1861–1945) ==Foreign members== *George Ellery Hale (1868–1938) *Hugo Kronecker (1839–1914) *Charles Emile Picard (1856–1941) *Santiago Ramon y Cajal (1852–1934) ==References== 1909 Category:1909 in the United Kingdom Category:1909 in science "Fellowship from 1660 onwards" (xlsx file on Google Docs via the Royal Society) ==Fellows== *Charles Jasper Joly (1864–1906) *Hugh Marshall (1868–1913) *Donald Alexander Smith Baron Strathcona and Mount Royal (1820–1914) *Thomas Gregor Brodie (1866–1916) *Alexander Muirhead (1848–1920) *Sir James Johnston Dobbie (1852–1924) *Sir Arthur Everett Shipley (1861–1927) *Harold William Taylor Wager (1862–1929) *Alfred Cardew Dixon (1865–1936) *George Henry Falkiner Nuttall (1862–1937) *Edward Meyrick (1854–1938) *Sir Sidney Gerald Burrard (1860–1943) *William Whitehead Watts (1860–1947) *Sir Thomas Henry Holland (1868–1947) *Sir Gilbert Thomas Walker (1868–1958) *Morris William Travers (1872–1961) ==References== 1904 Category:1904 in the United Kingdom Category:1904 in science "Fellowship from 1660 onwards" (xlsx file on Google Docs via the Royal Society) ==Fellows== *Frank Dawson Adams (1859–1942) *Sir Hugh Kerr Anderson (1865–1928) *Sir William Blaxland Benham (1860–1950) *Sir William Henry Bragg (1862–1942) *Archibald Campbell Campbell, 1st Baron Blythswood (1835–1908) *Frederick Daniel Chattaway (1860–1944) *Arthur William Crossley (1869–1927) *Arthur Robertson Cushny (1866–1926) *William Duddell (1872–1917) *Frederick William Gamble (1869–1926) *Sir Joseph Ernest Petavel (1873–1936) *Henry Cabourn Pocklington (1870–1952) *Henry Nicholas Ridley (1855–1956) *Sir Grafton Elliot Smith (1871–1937) *William Henry Young (1863–1942) ==Foreign members== *Ivan Petrovich Pavlov (1849–1936) *Edward Charles Pickering (1846–1919) *Magnus Gustaf Retzius (1842–1919) *Augusto Righi (1850–1920) ==References== 1907 Category:1907 in the United Kingdom Category:1907 in science "Fellowship from 1660 onwards" (xlsx file on Google Docs via the Royal Society) ==Fellows== *August Friedrich Leopold Weismann (1834–1914) *Paul Ehrlich (1854–1915) *Henry George Plimmer (1856–1918) *Bertram Hopkinson (1874–1918) *John Allen Harker (1870–1923) *Sir William Boog Leishman (1865–1926) *Gilbert Charles Bourne (1861–1933) *Frederick Augustus Dixey 
(1855–1935) *Sir Archibald Edward Garrod (1857–1936) *Louis Napoleon George Filon (1875–1937) *Arthur Philemon Coleman (1852–1939) *Alfred Fowler (1868–1940) *Arthur Lapworth (1872–1941) *Sir Joseph Barcroft (1872–1947) *Godfrey Harold Hardy (1877–1947) *John Theodore Hewitt (1868–1954) *Frederick Soddy (1877–1956) ==Foreign members== # Svante August Arrhenius (1859-1927) ForMemRS # Jean-Baptiste Édouard Bornet (1828-1911) ForMemRS # Vito Volterra (1860-1940) ForMemRS # August Friedrich Leopold Weismann (1834-1914) ForMemRS ==References== 1910 Category:1910 in the United Kingdom Category:1910 in science "Fellowship from 1660 onwards" (xlsx file on Google Docs via the Royal Society) ==Fellows== *Thomas William Bridge (1848–1909) *John Edward Stead (1851–1923) *Johnson Symington (1851–1924) *Sir William Maddock Bayliss (1860–1924) *Sir Horace Darwin (1851–1928) *Sir Aubrey Strahan (1852–1928) *William Philip Hiern (1839–1929) *Henry Reginald Arnulph Mallock (1851–1933) *Sir David Orme Masson (1858–1937) *Arthur George Perkin (1861–1937) *Ernest Rutherford Baron Rutherford of Nelson (1871–1937) *Ralph Allen Sampson (1866–1939) *Alfred North Whitehead (1861–1947) *Sydney Arthur Monckton Copeman (1862–1947) *Sir John Sealy Edward Townsend (1868–1957) ==References== 1903 Category:1903 in the United Kingdom Category:1903 in science "Fellowship from 1660 onwards" (xlsx file on Google Docs via the Royal Society) ==Fellows== #Antoine Henri Becquerel (1852–1908) #David James Hamilton (1849–1909) #Silas Weir Mitchell (1829–1914) #Friedrich Robert Helmert (1843–1917) #William Gowland (1842–1922) #William Halse Rivers Rivers (1864–1922) #Charles Immanuel Forsyth Major (1843–1923) #Arthur Dendy (1865–1925) #H. H. Asquith (1852–1928) #Shibasaburo Kitasato (1852–1931) #Sir Dugald Clerk #Otto Stapf #William Barlow #Edmund Neville Nevill #Herbrand Russell, 11th Duke of Bedford #Sir Jocelyn Field Thorpe #Randal Thomas Mowbray Rawdon Berkeley #John Stanley Gardiner (1872–1946) #Henry Horatio Dixon #John Hilton Grace #Bertrand Russell ==References== 1908 Category:1908 in the United Kingdom Category:1908 in science
29.36
62.8318530718
1.6
5040
4.86
D
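The marked answer follows from the multiplication principle: the four offices are distinct, so there are 10 · 9 · 8 · 7 = 5040 ordered selections. A minimal Python check (illustrative sketch only, standard library):

    from math import perm
    # ordered choice of president, vice president, secretary, treasurer from 10 members
    print(perm(10, 4))  # 5040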
At a county fair carnival game there are 25 balloons on a board, of which 10 balloons are yellow, 8 are red, and 7 are green. A player throws darts at the balloons to win a prize and randomly hits one of them. Given that the first balloon hit is yellow, what is the probability that the next balloon hit is also yellow?
The Topic International Darts League was a darts tournament held at the Triavium in Nijmegen, Netherlands. The festival began with approximately 15 balloons and to date has grown to about 30 balloons. The 2009 PartyPoker.com Grand Slam of Darts was the third staging of the darts tournament, the Grand Slam of Darts organised by the Professional Darts Corporation. thumb|various hot air balloons during the festival The Warren County Farmers' Fair Balloon Festival was started in 2001 and takes place during the week of the County Fair in Warren County, New Jersey. The 2003 Las Vegas Desert Classic was the second major Professional Darts Corporation Las Vegas Desert Classic darts tournament. The yellow-winged darter (Sympetrum flaveolum) is a dragonfly found in Europe and mid and northern China. The tournament was sponsored by PartyPoker.net, which has also sponsored other darts championships: the US Open, the Las Vegas Desert Classic and the German Darts Championship. ==References== ==External links== *Collated results of the 2008 European Championship Category:European Championship (darts) European Championship Darts Despite the presence of the PDC players in 2006 and 2007, the tournament was still a WDF/BDO ranking event, with all available points going only to the WDF/BDO players competing. ==International Darts League finals== Year Champion Each player's average score is based on the average for each 3-dart visit to the board (ie total points scored divided by darts thrown and multiplied by 3) Score Runner-up Prize money Prize money Prize money Sponsor Venue Year Champion Each player's average score is based on the average for each 3-dart visit to the board (ie total points scored divided by darts thrown and multiplied by 3) Score Runner-up Total Champion Runner-up Sponsor Venue 2003 Raymond van Barneveld (97.77) 8–5 Mervyn King (97.50) €134,000 €30,000 €15,000 Tempus Triavium, Nijmegen 2004 Raymond van Barneveld (101.64) 13–5 Tony David (95.04) €134,000 €30,000 €15,000 Tempus Triavium, Nijmegen 2005 Mervyn King (91.89) Tony O'Shea (91.74) €134,000 €30,000 €15,000 Tempus Triavium, Nijmegen 2006 Raymond van Barneveld (99.54) 13–5 Colin Lloyd (95.25) €134,000 €30,000 €15,000 Topic Triavium, Nijmegen 2007 Gary Anderson (95.85) 13–9 Mark Webster (94.54) €158,000 €30,000 €15,000 Topic Triavium, Nijmegen ==Sponsors== * 2003–2005 Tempus * 2006–2007 Topic ==References== ==External links== * International Darts League * IDL 2006 – A Review Category:2003 establishments in the Netherlands Category:2007 disestablishments in the Netherlands Category:Professional Darts Corporation tournaments Category:British Darts Organisation tournaments Category:Darts in the Netherlands Category:International sports competitions hosted by the Netherlands The 2008 PartyPoker.net European Championship was the inaugural edition of the Professional Darts Corporation tournament, which thereafter was promoted as the annual European Championship, matching top European players qualifying to play against the highest ranked players from the PDC Order of Merit. The event features some balloon races, including the typical hare and hound races, in addition to the Bicycle Balloon Race. The winner and the runner-up of the 2009 Championship League Darts would be invited, whilst it was announced that only the winner of the 2008 World Masters would be invited (though runner-up Scott Waites was invited anyway due to the withdrawal of Martin Adams). 
The case ended in failure on 21 February 2008, and the International Darts League was indefinitely postponed. An almost unmistakable darter, red-bodied in the male, with both sexes having large amounts of saffron-yellow colouration to the basal area of each wing, which is particularly noticeable on the hind-wings. The yellow-winged darter tends to make quite short flights when settled at a site, and frequently perches quite low down on vegetation. The future of the World Darts Trophy was also thrown into doubt as a result of the decision,IDL & WDT go to court Superstars of darts forum and both events were confirmed defunct by the failure of an appeal on April 29, 2008.IDL & WDT end Google translation from official web site ==Format== The format has changed slightly over the years – the 2006 competition had 8 round-robin groups of 4 players. Then the top 8 non-qualified players from the 2008 Players Championship Order of Merit after the October German Darts Trophy in Dinslaken, Germany joined them to make a field of 24. Played from 30 October–2 November 2008 at the Südbahnhof in Frankfurt, Germany, the inaugural tournament featured a field of 32 players and £200,000 in prize money, with a £50,000 winner's purse going to Phil Taylor.PDC website report - European Championship Details Confirmed from the Professional Darts Corporation obtained 12-08-2008 ==Format== First round — best of nine legs (by two legs) Second round — best of seventeen legs (ditto) Quarter-finals — best of seventeen legs (ditto) Semi-finals — best of twenty-one legs (ditto) Final — best of twenty-one legs (ditto) Each game had to be won by two clear legs, except that a game went to a sudden death leg if a further six legs did not separate the players; for example, a first round match played out to 7-7 is then decided with one sudden death leg. ==Prize money== A total of £200,000 was on offer to the players, divided based on the following performances: Position (no. of players) Position (no. of players) Prize money (Total: £200,000) Winner (1) £50,000 Runner-Up (1) £25,000 Semi-finalists (2) £12,500 Quarter-finalists (4) £8,500 Last 16 (second round) (8) £4,000 Last 32 (first round) (16) £2,000 Highest checkout (1) £2,000 ==Qualification== The top 16 players from the PDC Order of Merit after the 2008 Sky Poker World Grand Prix automatically qualified for the event. This was the second PDC darts tournament that ITV4 has broadcast, after the inaugural Grand Slam of Darts - after its rating success ITV chose to broadcast this event as well as the 2008 Grand Slam of Darts. It is the only major event that Phil Taylor has competed in at least once, but never won. ==End of event== Towards the end of 2007, the chairman of the PDC, Barry Hearn, announced that its players would not be competing in the 2008 International Darts League and World Darts Trophy events. The shootout occurred exactly one year to the day after a similar situation at the 2008 Grand Slam of Darts where Hamilton beat Alan Tabern. The yellow-winged darter has bred but is not established in the UK. Gary Anderson was the final champion, having claimed the title in 2007, when the tournament also became the first major event to witness two nine dart finishes.
3
57.2
0.375
0.323
22
C
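Once one yellow balloon has been hit, 9 of the remaining 24 balloons are yellow, so the conditional probability is 9/24 = 0.375. A short sketch of the same arithmetic (illustrative only, standard library):

    from fractions import Fraction
    # one yellow balloon is gone, so 9 of the remaining 24 are yellow
    p = Fraction(9, 24)
    print(p, float(p))  # 3/8 0.375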
What is the number of ordered samples of 5 cards that can be drawn without replacement from a standard deck of 52 playing cards?
The probability is calculated based on {52 \choose 5} = 2,598,960, the total number of 5-card combinations. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. *The Probability of drawing a given hand is calculated by dividing the number of ways of drawing the hand (Frequency) by the total number of 5-card hands (the sample space; {52 \choose 5} = 2,598,960). The deck is retrieved, and each player is dealt in turn from the deck the same number of cards they discarded so that each player again has five cards. Each player specifies how many of their cards they wish to replace and discards them. The probability is calculated based on {52 \choose 7} = 133,784,560, the total number of 7-card combinations. However, a rule used by many casinos is that a player is not allowed to draw five consecutive cards from the deck. The total number of distinct 7-card hands is {52 \choose 7} = 133{,}784{,}560. The number of distinct 5-card poker hands that are possible from 7 cards is 4,824. If the deck is depleted during the draw before all players have received their replacements, the last players can receive cards chosen randomly from among those discarded by previous players. This list arranges card games by the number of cards used. To this day, many gamblers still rely on the basic concepts of probability theory in order to make informed decisions while gambling. ==Frequencies== ===5-card poker hands=== In straight poker and five- card draw, where there are no hole cards, players are simply dealt five cards from a deck of 52. Another common house rule is that the bottom card of the deck is never given as a replacement, to avoid the possibility of someone who might have seen it during the deal using that information. For example, if the last player to draw wants three replacements but there are only two cards remaining in the deck, the dealer gives the player the one top card he can give, then shuffles together the bottom card of the deck, the burn card, and the earlier players' discards (but not the player's own discards), and finally deals two more replacements to the last player. ==Sample deal== 200px|right The sample deal is being played by four players as shown to the right with Alice dealing. 52 pickup or 52-card pickup is a humorous prank which consists only of picking up a scattered deck of playing cards. In this case, if a player wishes to replace all five of their cards, that player is given four of them in turn, the other players are given their draws, and then the dealer returns to that player to give the fifth replacement card; if no other player draws it is necessary to deal a burn card first. Five-card draw (also known as Cantrell draw) is a poker variant that is considered the simplest variant of poker, and is the basis for video poker. Perhaps surprisingly, this is fewer than the number of 5-card poker hands from 5 cards, as some 5-card hands are impossible with 7 cards (e.g. 7-high and 8-high). ===5-card lowball poker hands=== Some variants of poker, called lowball, use a low hand to determine the winning hand. With five players, the sixes are added to make a 36-card deck. Its "Total" represents the 95.4% of the time that a player can select a 5-card low hand without any pair. The other player must then pick them up.. ==Variations== Genuine card games sometimes end in 52 pickup. 
In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands. == History == Probability and gambling have been ideas since long before the invention of poker.
2.2
0.24995
4943.0
0.87
311875200
E
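An ordered sample of 5 cards drawn without replacement is a permutation: 52 · 51 · 50 · 49 · 48 = 311,875,200. A quick check (illustrative only, standard library):

    from math import perm
    # ordered draws of 5 cards from 52 without replacement
    print(perm(52, 5))  # 311875200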
A bowl contains seven blue chips and three red chips. Two chips are to be drawn successively at random and without replacement. We want to compute the probability that the first draw results in a red chip $(A)$ and the second draw results in a blue chip $(B)$.
The Sunday Times described triple-cooked chips as Blumenthal's most influential culinary innovation, which had given the chip "a whole new lease of life". ==History== Blumenthal said he was "obsessed with the idea of the perfect chip",Blumenthal, In Search of Perfection and described how, from 1992 onwards, he worked on a method for making "chips with a glass-like crust and a soft, fluffy centre". thumb|Colorized photo of Chips. The Bowl of Baal is a 1975 science fiction novel by Robert Ames Bennet. Eventually, Blumenthal developed the three-stage cooking process known as triple-cooked chips, which he identifies as "the first recipe I could call my own". Triple-cooked chips are a type of chips developed by the English chef Heston Blumenthal. 7 Colors (a.k.a. Filler) is a puzzle game, designed by Dmitry Pashkov. The result is what Blumenthal calls "chips with a glass-like crust and a soft, fluffy centre". The chips are first simmered, then cooled and drained using a sous-vide technique or by freezing; deep fried at and cooled again; and finally deep-fried again at . On July 10, 1943, Chips and his handler were pinned down on the beach by an Italian machine-gun team. In 2014, the London Fire Brigade attributed an increase in chip pan fires to the increased popularity of "posh chips", including triple-cooked chips. ==Preparation== ===Blumenthal's technique=== Previously, the traditional practice for cooking chips was a two-stage process, in which chipped potatoes were fried in oil first at a relatively low temperature to soften them and then at a higher temperature to crisp up the outside. thumb|A selection of Red Ribbon cakes on sale Red Ribbon Bakeshop, Inc. is a bakery chain based in the Philippines, which produces and distributes cakes and pastries. ==History== In 1979, Amalia Hizon Mercado, husband Renato Mercado, and their five children, Consuelo Tiutan, Teresita Moran, Renato Mercado, Ricky Mercado and Romy Mercado established Red Ribbon as a small cake shop along Timog Avenue in Quezon City. The second of the three stages is frying the chips at for approximately 5 minutes, after which they are cooled once more in a freezer or sous-vide machine before the third and final stage: frying at for approximately 7 minutes until crunchy and golden. Blumenthal describes moisture as the "enemy" of crisp chips. C.C. Moore eventually gifted Chips to the Wren family. Chips served as a sentry dog for the Roosevelt-Churchill conference in 1943. Bloomsbury. ==Further reading== * * ==External links== * Triple-Cooked Chips. Second, the cracks that develop in the chips provide places for oil to collect and harden during frying, making them crunchy.Blumenthal, Heston Blumenthal at Home Third, thoroughly drying out the chips drives off moisture that would otherwise keep the crust from becoming crisp. Blumenthal began work on the recipe in 1993, and eventually developed the three-stage cooking process. Chips (1940–1946) was a trained sentry dog for United States Army, and reputedly the most decorated war dog from World War II. Chips was a German Shepherd-Collie-Malamute mix owned by Edward J. Wren of Pleasantville, New York. Chips shipped out to the War Dog Training Center, Front Royal, Virginia, in 1942 for training as a sentry dog. "A single frying at a high temperature leads to a thin crust that can easily be rendered soggy by whatever moisture remains in the chip’s interior."
-383
5
0.23333333333
0.66666666666
313
C
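By the multiplication rule, P(A ∩ B) = P(A) P(B | A) = (3/10)(7/9) = 7/30 ≈ 0.2333. A short sketch (illustrative only, standard library):

    from fractions import Fraction
    # P(first red) * P(second blue given first red)
    p = Fraction(3, 10) * Fraction(7, 9)
    print(p, float(p))  # 7/30, about 0.2333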
From an ordinary deck of playing cards, cards are to be drawn successively at random and without replacement. What is the probability that the third spade appears on the sixth draw?
*The Probability of drawing a given hand is calculated by dividing the number of ways of drawing the hand (Frequency) by the total number of 5-card hands (the sample space; {52 \choose 5} = 2,598,960). Three Shuffles and a Draw is a solitaire game using one deck of playing cards. For example, the probability of drawing three of a kind is approximately 2.11%, while the probability of drawing a hand at least as good as three of a kind is about 2.87%. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. For example, there are 4 different ways to draw a royal flush (one for each suit), so the probability is , or one in 649,740. One would then expect to draw this hand about once in every 649,740 draws, or nearly 0.000154% of the time. The name "Three Shuffles and a Draw" comes from the fact that there are 3 shuffles (counting the original starting shuffle plus the 2 redeals, and then a draw, where you can free any one single buried card). Draw poker is any poker variant in which each player is dealt a complete hand before the first betting round, and then develops the hand for later rounds by replacing, or "drawing", cards. Then a third card is revealed, followed by a betting round, a fourth card, a betting round, and finally a showdown. As a bridge hand contains thirteen cards, only two hand patterns can be classified as three suiters: 4-4-4-1 and 5-4-4-0. right In the game of contract bridge a three suiter (or three-suited hand) denotes a hand containing at least four cards in three of the four suits. thumb|right|170px|Three of Cups from a deck of Italian cards Three of Cups is the third card on the suit of Cups. The object of the game is to move all of the cards to the Foundations. == Rules == Three Shuffles and a Draw has four foundations build up in suit from Ace to King, e.g. A♣, 2♣, 3♣, 4♣... The probability is calculated based on {52 \choose 5} = 2,598,960, the total number of 5-card combinations. In the card game contract bridge, Gambling 3NT is a special of an opening of 3NT. Finally, each player draws as in normal draw poker, followed by a fourth betting round and showdown. The first betting round is then played, followed by a draw in which each player replaces cards from their hand with an equal number, so that each player still has only four cards in hand. Before the first betting round, each player examines their hand, removes exactly three cards from it, then places them on the table to their left. For example, 3♣ 7♣ 8♣ Q♠ A♠ and 3♦ 7♣ 8♦ Q♥ A♥ are not identical hands when just ignoring suit assignments because one hand has three suits, while the other hand has only two—that difference could affect the relative value of each hand when there are more cards to come. If any player opens, the game continues as traditional five-card draw poker. For instance, with a royal flush, there are 4 ways to draw one, and 2,598,956 ways to draw something else, so the odds against drawing a royal flush are 2,598,956 : 4, or 649,739 : 1. It is the , and this makes all 6-spot cards wild.
313
0.064
122.0
19.4
0.123
B
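The third spade appears on the sixth draw when exactly two spades occur among the first five cards and a spade is drawn next: [C(13,2) C(39,3) / C(52,5)] · (11/47) ≈ 0.064. A quick numerical check (illustrative only, standard library):

    from math import comb
    # exactly two spades among the first five cards, then a spade on the sixth draw
    p = comb(13, 2) * comb(39, 3) / comb(52, 5) * 11 / 47
    print(round(p, 3))  # 0.064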
What is the probability of drawing three kings and two queens when drawing a five-card hand from a deck of 52 playing cards?
*The Probability of drawing a given hand is calculated by dividing the number of ways of drawing the hand (Frequency) by the total number of 5-card hands (the sample space; {52 \choose 5} = 2,598,960). The probability is calculated based on {52 \choose 5} = 2,598,960, the total number of 5-card combinations. The probability is calculated based on {52 \choose 7} = 133,784,560, the total number of 7-card combinations. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. The total number of distinct 7-card hands is {52 \choose 7} = 133{,}784{,}560. In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands. == History == Probability and gambling have been ideas since long before the invention of poker. The queen of spades (Q) is one of 52 playing cards in a standard deck: the queen of the suit of spades (). Probabilities are adjusted in the above table such that "5-high" is not listed, "6-high" has 781,824 distinct hands, and "King-high" has 21,457,920 distinct hands, respectively. The number of distinct 5-card poker hands that are possible from 7 cards is 4,824. Royal Marriage is a patience or solitaire game using a deck of 52 playing cards. The remaining fifty cards are shuffled and placed on the top of the King to form the stock. Perhaps surprisingly, this is fewer than the number of 5-card poker hands from 5 cards, as some 5-card hands are impossible with 7 cards (e.g. 7-high and 8-high). ===5-card lowball poker hands=== Some variants of poker, called lowball, use a low hand to determine the winning hand. To this day, many gamblers still rely on the basic concepts of probability theory in order to make informed decisions while gambling. ==Frequencies== ===5-card poker hands=== In straight poker and five- card draw, where there are no hole cards, players are simply dealt five cards from a deck of 52. Royal Cotillion is a solitaire card game which uses two decks of 52 playing cards each. Royal Flush is a solitaire card game which is played with a deck of 52 playing cards. For instance, with a royal flush, there are 4 ways to draw one, and 2,598,956 ways to draw something else, so the odds against drawing a royal flush are 2,598,956 : 4, or 649,739 : 1. : Hand Distinct hands Frequency Probability Cumulative Odds against 5-high 1 1,024 0.0394% 0.0394% 2,537.05 : 1 6-high 5 5,120 0.197% 0.236% 506.61 : 1 7-high 15 15,360 0.591% 0.827% 168.20 : 1 8-high 35 35,840 1.38% 2.21% 71.52 : 1 9-high 70 71,680 2.76% 4.96% 35.26 : 1 10-high 126 129,024 4.96% 9.93% 19.14 : 1 Jack-high 210 215,040 8.27% 18.2% 11.09 : 1 Queen-high 330 337,920 13.0% 31.2% 6.69 : 1 King-high 495 506,880 19.5% 50.7% 4.13 : 1 Total 1,287 1,317,888 50.7% 50.7% 0.97 : 1 As can be seen from the table, just over half the time a player gets a hand that has no pairs, threes- or fours-of-a-kind. (50.7%) If aces are not low, simply rotate the hand descriptions so that 6-high replaces 5-high for the best hand and ace-high replaces king-high as the worst hand. For example, there are 4 different ways to draw a royal flush (one for each suit), so the probability is , or one in 649,740. Three Shuffles and a Draw is a solitaire game using one deck of playing cards. Probabilities are adjusted in the above table such that "5-high" is not listed", "6-high" has one distinct hand, and "King-high" having 330 distinct hands, respectively. 
In this case, the deck is held face-down in one hand, with the King being uppermost face-down card and the Queen being held face-up above it. The game is won when the King and Queen are brought together -- that is, when only one or two cards remain in between them, which can then be discarded. ==Variations== Royal Marriage is possible to play in-hand, rather than on a surface such as a table.
0.0000092
35
0.323
0.6321205588
14.5115
A
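There are C(4,3) ways to choose the kings and C(4,2) ways to choose the queens out of C(52,5) equally likely hands, giving 24/2,598,960 ≈ 0.0000092. A short check (illustrative only, standard library):

    from math import comb
    # 3 of the 4 kings and 2 of the 4 queens, out of all 5-card hands
    p = comb(4, 3) * comb(4, 2) / comb(52, 5)
    print(p)  # about 9.2e-06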
In an orchid show, seven orchids are to be placed along one side of the greenhouse. There are four lavender orchids and three white orchids. How many ways are there to line up these orchids?
The Orchidoideae, or the orchidoid orchids, are a subfamily of the orchid family (Orchidaceae) that contains around 3630 species. Orchidales is an order of flowering plants. Genera Orchidacearum vol. 3: Orchidoideae part 2, Vanilloideae. Genera Orchidacearum 4. Genera Orchidacearum 5. Genera Orchidacearum 1. Genera Orchidacearum 3. This is a list of genera in the orchid family (Orchidaceae), originally according to The Families of Flowering Plants - L. Watson and M. J. Dallwitz. Genera Orchidacearum 2. This is a list of the orchids, sorted in alphabetical order, found in Metropolitan France. == A == * Anacamptis laxiflora * Anacamptis longicornu * Anacamptis morio * Anacamptis palustris == C == * Cephalanthera longifolia == D == * Dactylorhiza incarnata == E == * Epipactis phyllanthes == G == * Goodyera repens == O == * Ophrys aurelia * Ophrys catalaunica * Ophrys saratoi * Ophrys drumana * Orchis mascula == S == * Serapias lingua == References == France Phylogeny and Classification of the Orchid Family. She provided an English text, paintings, and drawings for the amateur reader, a mixture of impression and scientific illustration of the genera. ==Orchids of South Western Australia== Common name Genus No. species in southwest W.A. Remarks Babe-in-a-cradle Epiblema 1 Beard orchids Calochilus 6 Blue orchids Cyanicula 11 Bunny orchids Eriochilus 6 Donkey orchid Diuris ~36 Duck orchids Paracaleana 13 Elbow orchid Spiculaea 1 Enamel orchids Elythranthera 2 Fairy orchid Pheladenia 1 Fire orchids Pyrorchis 2 also Beak orchids Greenhoods Pterostylis ~90 Hammer orchids Drakaea 10 Hare orchid Leporella 1 Helmet orchids Corybas 4 Leafless orchid Praecoxanthus 1 Leek orchids Prasophyllum 25 Mignonette orchids Microtis 14 also Onion orchid Mosquito orchids Cyrtostylis 5 Potato orchids Gastrodia 1 also Bell orchid Pygmy orchid Corunastylis 1 Rabbit orchid Leptoceras 1 Rattle beaks Lyperanthus 1 Slipper orchids Cryptostylis 1 also Tongue orchid South African orchids Disa bracteata 1 introduced Spider orchids Caladenia 125 Sugar orchid Ericksonella 1 Sun orchids Thelymitra 37 Underground orchids Rhizanthella 1 * This table has its source as the Second Edition of Hoffman and Brown in 1992 ==References== thumb|Diuris plate III from West Australian Orchids, 1930 # ==Further reading== * * * * * * * == External links == * The Species Orchid Society of Western Australia (Inc) -- a gallery of orchids from Western Australia * Orchids from Western and South Australia * Terrestrial orchids of the south west western australia * Orchid Conservation Coalition List of orchids Western Australia Historically, the Orchidoideae have been partitioned into up to 6 tribes, including Orchideae, Diseae, Cranichideae, Chloraeeae, Diurideae, and Codonorchideae. Oxford Univ. Press == External links == *All recognized monocotiledons species (including Orchid family) - World Checklist of Selected Plant Families, Kew Botanic Garden - UK *Intergeneric orchid genus names (updated 11 Jan 2005) *List of orchid genera (updated 14 Jul 2004) *List of common names or *List of orchid hybrids - Royal Horticultural Society - UK *Orchid main page - eMonocot website Orchidaceae The first three orchids from Western Australia to be named were Caladenia menziesii (now Leptoceras menziesii), Caladenia flava, and Diuris longifolia. Dictionary of Orchid Names. This list is adapted regularly with the changes published in the Orchid Research Newsletter which is published twice a year by the Royal Botanic Gardens, Kew. 
This list is reflected on Wikispecies Orchidaceae and the new eMonocot website Orchidaceae Juss. Although mostly the order will consist of the orchids only (usually in one family only, but sometimes divided into more families, as in the Dahlgren system, see below), sometimes other families are added: ==Circumscription in the Takhtajan system== Takhtajan system: * order Orchidales *: family Orchidaceae ==Circumscription in the Cronquist system== Cronquist system (1981): * order Orchidales *: family Geosiridaceae *: family Burmanniaceae *: family Corsiaceae *: family Orchidaceae ==Circumscription in the Dahlgren system== Dahlgren system: * order Orchidales *: family Neuwiediaceae *: family Apostasiaceae *: family Cypripediaceae *: family Orchidaceae ==Circumscription in the Thorne system== Thorne system (1992): * order Orchidales *: family Orchidaceae ==APG system== The order is not recognized in the APG II system, which assigns the orchids to order Asparagales. ==See also== * Taxonomy of the orchid family Category:Monocots Category:Historically recognized angiosperm orders *Laeliopsis *Lanium *Lankesterella *Leaoa *Lecanorchis *Lemboglossum *Lemurella *Lemurorchis *Leochilus: smooth-lip orchid *Lepanthes: babyboot orchid *Lepanthopsis: tiny orchid *Lepidogyne *Leporella *Leptotes *Lesliea *Leucohyle *Ligeophila *Limodorum *Lindleyalis *Liparis: wide-lip orchid *Listrostachys *Lockhartia *Loefgrenianthus *Ludisia: jewel orchid *Lueddemannia *Luisia *Lycaste: bee orchid *Lycomormium *Lyperanthus *Lyroglossa ===M=== thumb|right|100px|Macodes lowii thumb|right|100px|Macodes petola thumb|right|100px|Maxillaria cucullata thumb|right|100px|Maxillaria picta thumb|right|100px|Mexicoa ghiesbrechtiana thumb|right|100px|Oncidium schroederianum *Macodes *Macradenia: long-gland orchid *Macroclinium *Macropodanthus *Malaxis: adder's mouth orchid *Malleola *Manniella *Margelliantha *Masdevallia *Mastigion *Maxillaria: tiger orchid, flame orchid *Mecopodum *Mediocalcar *Megalorchis *Megalotus *Megastylis *Meiracyllium *Meliorchis: extinct, 80-million-year-old orchid *Mendoncella *Mesadenella *Mesadenus: ladies'-tresses *Mesospinidium *Mexicoa *Microchilus *Microcoelia *Micropera *Microphytanthe *Microsaccus *Microtatorchis *Microterangis *Microthelys *Microtis *Miltonia Lindl.: pansy orchid *Miltoniopsis *Mischobulbum *Mixis *Mobilabium *Moerenhoutia *Monadenia *Monanthos *Monomeria *Monophyllorchis *Monosepalum *Mormodes *Mormolyca *Mycaranthes *Myoxanthus *Myrmechila D.L.Jones & M.A.Clem (2005) *Myrmechis *Myrmecophila *Myrosmodes *Mystacidium ===N=== *Nabaluia *Nageliella *Nematoceras *Neobathiea *Neobenthamia *Neobolusia *Neoclemensia *Neocogniauxia *Neodryas *Neoescobaria *Neofinetia *Neogardneria *Neogyna *Neomoorea *Neotinea *Neottia (including Listera) *Neowilliamsia *Nephelaphyllum *Nephrangis *Nervilia *Neuwiedia *Nidema: fairy orchid *Nigritella *Nitidobulbon *Nohawilliamsia *Nothodoritis *Nothostele *Notylia ===O=== thumb|right|100px|Oerstedella centropetalla thumb|right|100px|Ornithophora radicans *Oberonia *Oberonioides *Octarrhena *Octomeria *Odontochilus *Odontoglossum Kunth *Odontorrhynchus *Oeceoclades: monk orchid *Oeonia *Oeoniella *Oerstedella *Oestlundorchis *Olgasis *Oligochaetochilus *Oligophyton *Oliveriana *Omoea *Oncidium: dancing-lady orchid *Ophidion *Ophrys: ophrys *Orchipedum *Orchis: orchis *Oreorchis *Orestias *Orleanesia *Ornithidium *Ornithocephalus *Ornithochilus *Orthoceras *Osmoglossum *Ossiculum *Osyricera *Otochilus *Otoglossum *Otostylis *Oxystophyllum ===P=== 
thumb|right|100px|Phaius tankervilleae thumb|right|100px|Northern green orchid (Platanthera hyperborea) thumb|right|100px|Western prairie fringed orchid (Platanthera praeclara) thumb|right|100px|Polystachya pubescens thumb|right|100px|Prosthechea cochleata thumb|right|100px|Prosthechea garciana thumb|right|100px|Prosthechea radiata *Pabstia Garay *Pachites *Pachyphyllum *Pachyplectron *Pachystele *Pachystoma *Palmorchis *Panisea *Pantlingia *Paphinia *Paphiopedilum *Papilionanthe *Papillilabium *Paphiopedilum: Venus' slipper *Papperitzia *Papuaea *Paradisanthus *Paralophia P.J.Cribb & Hermans (2005) *Paraphalaenopsis *Parapteroceras *Pecteilis *Pedilochilus *Pedilonum *Pelatantheria *Pelexia: hachuela *Penkimia *Pennilabium *Peristeranthus *Peristeria *Peristylus *Pescatoria *Phaius: nun's-hood orchid *Phalaenopsis: moth orchid *Pheladenia *Pholidota *Phoringopsis *Phragmipedium *Phragmorchis *Phreatia *Phymatidium *Physoceras *Physogyne *Pilophyllum *Pinelia *Piperia: rein orchid *Pityphyllum *Platanthera: fringed orchid, bog orchid *Platantheroides *Platycoryne *Platyglottis *Platylepis *Platyrhiza *Platystele *Platythelys: jug orchid *Plectorrhiza *Plectrelminthus *Plectrophora *Pleione *Pleurothallis: bonnet orchid *Pleurothallopsis *Plexaure *Plocoglottis *Poaephyllum *Podangis *Podochilus *Pogonia: snake- mouth orchid *Pogoniopsis *Polycycnis *Polyotidium *Polyradicion: palmpolly *Polystachya *Pomatocalpa *Ponera *Ponerorchis *Ponthieva: shadow witch *Porpax *Porphyrodesme *Porphyroglottis *Porphyrostachys *Porroglossum *Porrorhachis *Potosia *Prasophyllum *Prescottia: Prescott orchid *Pristiglottis *Proctoria *Promenaea *Prosthechea *Pseudacoridium *Pseuderia *Pseudocentrum *Pseudocranichis *Pseudoeurystyles *Pseudogoodyera *Pseudolaelia *Pseudorchis *Pseudovanilla *Psilochilus: ragged-lip orchid *Psychilis: peacock orchid *Psychopsiella (sometimes included in Psychopsis) *Psychopsis: butterfly orchid *Psygmorchis *Pterichis *Pteroceras *Pteroglossa *Pteroglossaspis: giant orchid *Pterostemma *Pterostylis *Pterygodium *Pygmaeorchis *Pyrorchis ===Q=== *Quekettia *Quisqueya ===R=== thumb|right|100px|Rhyncholaelia glauca thumb|right|100px|Rhynchostele bictoniensis thumb|right|100px|Rhynchostele cordatum thumb|right|100px|Rossioglossum ampliatum *Rangaeris *Rauhiella *Raycadenco *Reichenbachanthus *Renanthera Lour. & Endl.: snail orchid *Comperia *Conchidium *Condylago Luer *Constantia *Corallorhiza (Haller) Chatelaine: coral root *Cordiglottis *Corunastylis *Coryanthes Hook.: bucket orchids *Corybas Salisb. 
*Grastidium *Greenwoodiella *Grobya *Grosourdya *Guarianthe Dressler & W.E.Higgins *Gunnarella *Gunnarorchis *Gymnadenia: fragrant orchid *Gymnadeniopsis *Gymnochilus *Gynoglottis ===H=== thumb|right|100px|Haraella retrocalla *Habenaria: bog orchid, false rein orchid *Hagsatera *Hammarbya *Hancockia *Hapalochilus *Hapalorchis *Haraella *Harrisella: airplant orchid *Hederorkis *Helcia *Helleriella: dotted orchid *Helonoma *Hemipilia *Herminium *Herpetophytum *Herpysma *Herschelianthe *Hetaeria *Heterotaxis *Heterozeuxine *Hexalectris: crested coralroot *Hexisea *Himantoglossum *Hintonella *Hippeophyllum *Hirtzia *Hispaniella *Hoehneella *Hoffmannseggella *Hofmeisterella *Holcoglossum *Holmesia *Holopogon *Holothrix *Homalopetalum *Horichia *Hormidium *Horvatia *Houlletia *Huntleya *Huttonaea *Hybochilus *Hydrorchis *Hygrochilus *Hylophila *Hymenorchis ===I=== *Imerinaea *Imerinorchis Szlach (2005) *Inobulbon *Ione *Ionopsis: violet orchid *Ipsea *Isabelia *Ischnocentrum *Ischnogyne *Isochilus: equal-lip orchid *Isotria: fiveleaf orchid *Ixyophora Dressler (2005) ===J=== *Jacquiniella: tufted orchid *Jejosephia *Jonesiopsis *Jostia *Jumellea ===K=== *Kalimpongia *Kaurorchis *Kefersteinia *Kegeliella *Kerigomnia *Kinetochilus *Kingidium *Kionophyton *Koellensteinia: grass-leaf orchid *Konantzia *Kraenzlinella *Kreodanthus *Kryptostoma *Kuhlhasseltia ===L=== thumb|right|100px|Leptotes bicolor thumb|right|100px|Ludisia discolor thumb|right|100px|Lycaste Cassiopeia (a cultivar) *Lacaena *Laelia Lindl.
+4.1
35
'-3.8'
2.00
0.3085
B
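With four indistinguishable lavender and three indistinguishable white orchids, the number of distinct lineups is 7!/(4! 3!) = 35. A quick check (illustrative only, standard library):

    from math import comb
    # positions for the 4 lavender orchids among the 7 spots
    print(comb(7, 4))  # 35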
If $P(A)=0.4, P(B)=0.5$, and $P(A \cap B)=0.3$, find $P(B \mid A)$.
* This results in P(A \mid B) = P(A \cap B)/P(B) whenever P(B) > 0 and 0 otherwise. The conditional probability can be found by the quotient of the probability of the joint intersection of events A and B (P(A \cap B))—the probability at which A and B occur together, although not necessarily occurring at the same time—and the probability of B: P(A \mid B) = \frac{P(A \cap B)}{P(B)}. This can also be understood as the fraction of probability B that intersects with A, or the ratio of the probabilities of both events happening to the "given" one happening (how many times A occurs rather than not assuming B has occurred): P(A \mid B) = \frac{P(A \cap B)}{P(B)}. We denote the quantity \frac{P(A \cap B)}{P(B)} as P(A\mid B) and call it the "conditional probability of A given B." It can be shown that :P(A_B)= \frac{P(A \cap B)}{P(B)} which meets the Kolmogorov definition of conditional probability. === Conditioning on an event of probability zero === If P(B)=0, then according to the definition, P(A \mid B) is undefined. We have P(A\mid B)=\tfrac{P(A \cap B)}{P(B)} = \tfrac{3/36}{10/36}=\tfrac{3}{10}, as seen in the table. == Use in inference == In statistical inference, the conditional probability is an update of the probability of an event based on new information. Therefore, it can be useful to reverse or convert a conditional probability using Bayes' theorem: P(A\mid B) = {{P(B\mid A) P(A)}\over{P(B)}}. * Without the knowledge of the occurrence of B, the information about the occurrence of A would simply be P(A) * The probability of A knowing that event B has or will have occurred, will be the probability of A \cap B relative to P(B), the probability that B has occurred. In this event, the event B can be analyzed by a conditional probability with respect to A. This shows that P(A|B) P(B) = P(B|A) P(A) i.e. P(A|B) = P(B|A) P(A) / P(B). If P(B) is not zero, then this is equivalent to the statement that :P(A\mid B) = P(A). For a value x in X and an event A, the conditional probability is given by P(A \mid X=x). More formally, P(A|B) is assumed to be approximately equal to P(B|A). ==Examples== ===Example 1=== Relative size Malignant Benign Total Test positive 0.8 (true positive) 9.9 (false positive) 10.7 Test negative 0.2 (false negative) 89.1 (true negative) 89.3 Total 1 99 100 In one study, physicians were asked to give the chances of malignancy with a 1% prior probability of occurring. In general, it cannot be assumed that P(A|B) ≈ P(B|A). For example, the conditional probability that someone unwell (sick) is coughing might be 75%, in which case we would have that P(Cough) = 5% and P(Cough | Sick) = 75%.
Substituting 1 and 2 into 3 to select α: :\begin{align} 1 &= \sum_{\omega \in \Omega} {P(\omega \mid B)} \\\ &= \sum_{\omega \in B} {P(\omega\mid B)} + \cancelto{0}{\sum_{\omega \notin B} P(\omega\mid B)} \\\ &= \alpha \sum_{\omega \in B} {P(\omega)} \\\\[5pt] &= \alpha \cdot P(B) \\\\[5pt] \Rightarrow \alpha &= \frac{1}{P(B)} \end{align} So the new probability distribution is #\omega \in B: P(\omega\mid B) = \frac{P(\omega)}{P(B)} #\omega \notin B: P(\omega\mid B) = 0 Now for a general event A, :\begin{align} P(A\mid B) &= \sum_{\omega \in A \cap B} {P(\omega \mid B)} + \cancelto{0}{\sum_{\omega \in A \cap B^c} P(\omega\mid B)} \\\ &= \sum_{\omega \in A \cap B} {\frac{P(\omega)}{P(B)}} \\\\[5pt] &= \frac{P(A \cap B)}{P(B)} \end{align} == See also == * Bayes' theorem * Bayesian epistemology * Borel–Kolmogorov paradox * Chain rule (probability) * Class membership probabilities * Conditional independence * Conditional probability distribution * Conditioning (probability) * Joint probability distribution * Monty Hall problem * Pairwise independent distribution * Posterior probability * Regular conditional probability == References == ==External links== * *Visual explanation of conditional probability Category:Mathematical fallacies Category:Statistical ratios For events in B, two conditions must be met: the probability of B is one and the relative magnitudes of the probabilities must be preserved. Similarly, if P(A) is not zero, then :P(B\mid A) = P(B) is also equivalent. Similar reasoning can be used to show that P(Ā|B) = etc. The relationship between P(A|B) and P(B|A) is given by Bayes' theorem: :\begin{align} P(B\mid A) &= \frac{P(A\mid B) P(B)}{P(A)}\\\ \Leftrightarrow \frac{P(B\mid A)}{P(A\mid B)} &= \frac{P(B)}{P(A)} \end{align} That is, P(A|B) ≈ P(B|A) only if P(B)/P(A) ≈ 1, or equivalently, P(A) ≈ P(B). === Assuming marginal and conditional probabilities are of similar size === In general, it cannot be assumed that P(A) ≈ P(A|B). It is tempting to define the undefined probability P(A \mid X=x) using this limit, but this cannot be done in a consistent manner. That is, P(A) is the probability of A before accounting for evidence E, and P(A|E) is the probability of A after having accounted for evidence E or after having updated P(A).
0.01961
7.0
311875200.0
0.02828
0.75
E
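By the definition of conditional probability, P(B | A) = P(A ∩ B)/P(A) = 0.3/0.4 = 0.75. A short sketch (illustrative only, standard library):

    from fractions import Fraction
    # P(B | A) = P(A and B) / P(A)
    p = Fraction(3, 10) / Fraction(4, 10)
    print(p, float(p))  # 3/4 0.75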
What is the number of possible 5-card hands (in 5-card poker) drawn from a deck of 52 playing cards?
The probability is calculated based on {52 \choose 5} = 2,598,960, the total number of 5-card combinations. The number of distinct 5-card poker hands that are possible from 7 cards is 4,824. *The Probability of drawing a given hand is calculated by dividing the number of ways of drawing the hand (Frequency) by the total number of 5-card hands (the sample space; {52 \choose 5} = 2,598,960). The total number of distinct 7-card hands is {52 \choose 7} = 133{,}784{,}560. In poker, the probability of each type of 5-card hand can be computed by calculating the proportion of hands of that type among all possible hands. == History == Probability and gambling have been ideas since long before the invention of poker. Perhaps surprisingly, this is fewer than the number of 5-card poker hands from 5 cards, as some 5-card hands are impossible with 7 cards (e.g. 7-high and 8-high). ===5-card lowball poker hands=== Some variants of poker, called lowball, use a low hand to determine the winning hand. The probability is calculated based on {52 \choose 7} = 133,784,560, the total number of 7-card combinations. There are 7,462 distinct poker hands. ===7-card poker hands=== In some popular variations of poker such as Texas hold 'em, the most widespread poker variant overall,https://www.casinodaniabeach.com/most-popular-types-of-poker/ a player uses the best five-card poker hand out of seven cards. The following chart enumerates the (absolute) frequency of each hand, given all combinations of five cards randomly drawn from a full deck of 52 without replacement. To this day, many gamblers still rely on the basic concepts of probability theory in order to make informed decisions while gambling. ==Frequencies== ===5-card poker hands=== In straight poker and five- card draw, where there are no hole cards, players are simply dealt five cards from a deck of 52. Hand The five cards (or less) dealt on the screen are known as a hand. ==See also== *Casino comps *Draw poker *Gambling *Gambling mathematics *Problem gambling *Video blackjack *Video Lottery Terminal ==References== ==External links== * Category:Arcade video games The Total line also needs adjusting. ===7-card lowball poker hands=== In some variants of poker a player uses the best five-card low hand selected from seven cards. Video poker is a casino game based on five-card draw poker. The Total line also needs adjusting. ==See also== * Binomial coefficient * Combination * Combinatorial game theory * Effective hand strength algorithm * Event (probability theory) * Game complexity * Gaming mathematics * Odds * Permutation * Probability * Sample space * Set theory ==References== ==External links== * Brian Alspach's mathematics and poker page * MathWorld: Poker * Poker probabilities including conditional calculations * Numerous poker probability tables * 5, 6, and 7 card poker probabilities * Hold'em poker probabilities The frequencies are calculated in a manner similar to that shown for 5-card hands,https://www.pokerstrategy.com/strategy/various-poker/texas-holdem- probabilities/ except additional complications arise due to the extra two cards in the 7-card poker hand. Note that all cards are dealt face up Fourteen Out (also known as Fourteen Off, Fourteen Puzzle, Take Fourteen, or just Fourteen) is a Patience card game played with a deck of 52 playing cards. This list of poker playing card nicknames has some nicknames for the playing cards in a 52-card deck, as used in poker. 
==Poker hand nicknames== The following sets of playing cards can be referred to by the corresponding names in card games that include sets of three or more cards, particularly 3 and 5 card draw, Texas Hold 'em and Omaha Hold 'em. The number of distinct poker hands is even smaller. Eliminating identical hands that ignore relative suit values leaves 6,009,159 distinct 7-card hands. Since poker is a game of incomplete information, the calculator is designed to evaluate the equity of ranges of hands that players can hold, instead of individual hands. The table does not extend to include five-card hands with at least one pair. (Wild cards substitute for any other card in the deck in order to make a better poker hand).
14.80
655
0.375
2598960
8.44
D
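The number of unordered 5-card hands is the binomial coefficient C(52,5) = 2,598,960, the same sample-space size quoted in the context above. A quick check (illustrative only, standard library):

    from math import comb
    # unordered 5-card hands from a 52-card deck
    print(comb(52, 5))  # 2598960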
A certain food service gives the following choices for dinner: $E_1$, soup or tomato juice; $E_2$, steak or shrimp; $E_3$, French fried potatoes, mashed potatoes, or a baked potato; $E_4$, corn or peas; $E_5$, jello, tossed salad, cottage cheese, or coleslaw; $E_6$, cake, cookies, pudding, brownie, vanilla ice cream, chocolate ice cream, or orange sherbet; $E_7$, coffee, tea, milk, or punch. How many different dinner selections are possible if one of the listed choices is made for each of $E_1, E_2, \ldots$, and $E_7$?
The establishment of restaurants and restaurant menus allowed customers to choose from a list of unseen dishes, which were produced to order according to the customer's selection. A combination meal can also comprise a meal in which separate dishes are selected by consumers from an entire menu, and can include à la carte selections that are combined on a plate. It usually includes several dishes to pick in a fixed list: an entrée (introductory course), a main course (a choice between up to four dishes), a cheese, a dessert, bread, and sometimes beverage (wine) and coffee all for a set price fixed for the year between €15 and €55. In a restaurant, the menu is a list of food and beverages offered to customers and the prices. Combination meals may be priced lower compared to ordering items separately, but this is not always the case. A meat and three meal is one where the customer picks one meat and three side dishes as a fixed-price offering. A fast food combination meal can contain over . A combination meal is also a meal in which the consumer orders items à la carte to create their own meal combination. Other types of restaurants, such as fast-casual restaurants also offer combination meals. A 2010 study published in the Journal of Public Policy & Marketing found that some consumers may order a combination meal even if no price discount is applied compared to the price of ordering items separately. The study found that this behavior is based upon consumers perceiving an inherent value in combination meals, and also suggested that the ease and convenience of ordering, such as ordering a meal by number, plays a role compared to ordering items separately. Combination meals may be priced lower compared to ordering the items separately, and this lower pricing may serve to entice consumers that are budget-minded. This has a fixed menu and often comes with side dishes such as pickled vegetables and miso soup. * A wine list * A liquor and mixed drinks menu * A beer list * A dessert menu (which may also include a list of tea and coffee options) Some restaurants use only text in their menus. thumb|An example of foods served as a fast food combination meal thumb|A combination meal with chicken curry, rice and beef curry thumb|A Spanish combination meal, consisting of a hamburger, French fries and a beer A combination meal, often referred as a combo-meal, is a type of meal that typically includes food items and a beverage. The variation in Chinese cuisine from different regions led caterers to create a list or menu for their patrons. Fast food restaurants will often prepare variations on items already available, but to have them all on the menu would create clutter. Boston Market and Cracker Barrel chains of restaurants offer a similar style of food selection. == See also == * Garbage Plate * List of restaurant terminology == References == === Sources === * * * * * Category:Cuisine of the Southern United States Category:Restaurants by type Category:Restaurant terminology Category:Culture of Nashville, Tennessee Category:Food combinations This way, all of the patrons can see all of the choices, and the restaurant does not have to provide printed menus. Similar concepts include the Hawaiian plate lunch, which features a variety of entrée choices with fixed side items of white rice and macaroni salad, and the southern Louisiana plate lunch, which features menu options that change daily. Salad buffet, bread and butter and beverage are included, and sometimes also a simple starter, like a soup. 
Most commonly, there is a choice of two or three dishes: a meat/fish/poultry dish, a vegetarian alternative, and a pasta.
1.51
49
4.5
2688
22
D
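A minimal sketch in Python of the multiplication principle behind this count; the seven option counts come straight from the problem statement, and the dictionary labels are only illustrative.

```python
# Multiplication principle: the number of distinct dinners is the product
# of the number of choices available at each of the seven stages E1..E7.
from math import prod

choices = {
    "E1 (soup or juice)": 2,
    "E2 (entree)": 2,
    "E3 (potato)": 3,
    "E4 (vegetable)": 2,
    "E5 (salad course)": 4,
    "E6 (dessert)": 7,
    "E7 (beverage)": 4,
}

total = prod(choices.values())
print(total)  # 2 * 2 * 3 * 2 * 4 * 7 * 4 = 2688, consistent with option D
```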
A rocket has a built-in redundant system. In this system, if component $K_1$ fails, it is bypassed and component $K_2$ is used. If component $K_2$ fails, it is bypassed and component $K_3$ is used. (An example of a system with these kinds of components is three computer systems.) Suppose that the probability of failure of any one component is 0.15, and assume that the failures of these components are mutually independent events. Let $A_i$ denote the event that component $K_i$ fails for $i=1,2,3$. What is the probability that the system does not fail?
This can occur when a single part fails, increasing the probability that other portions of the system fail. Cascading failures may occur when one part of the system fails. The probability of mission failure is $P_F = 1 - A_o$, where $P_F$ is the probability of mission failure and $A_o$ is the operational availability. Apart from human error, mission failure results from the following causes. Those failures will occasionally combine in unforeseeable ways, and if they induce further failures in an operating environment of tightly interrelated processes, the failures will spin out of control, defeating all interventions. Redundancy is a form of resilience that ensures system availability in the event of component failure. A system accident (or normal accident) is an "unanticipated interaction of multiple failures" in a complex system (Perrow 1999, p. 70). Physics of failure is a technique under the practice of reliability design that leverages the knowledge and understanding of the processes and mechanisms that induce failure to predict reliability and improve product performance. This is a concept which disagrees with that of system accident. Scott Sagan has multiple publications discussing the reliability of complex systems, especially regarding nuclear weapons. If a system has no redundancy, then the failure rate $\lambda$ is the inverse of the mean time between failures: $\lambda = \frac{1}{\mathrm{MTBF}}$. Systems with spare parts that are energized but that lack automatic fault bypass still incur downtime, because human action is required to restore operation after every failure. Software reliability is the probability of the software causing a system failure over some specified operating time. A system accident is one that requires many things to go wrong in a cascade. Another common technique is to calculate a safety margin for the system by computer simulation of possible failures, to establish safe operating levels below which none of the calculated scenarios is predicted to cause cascading failure, and to identify the parts of the network which are most likely to cause cascading failures. Such a failure may happen in many types of systems, including power transmission, computer networking, finance, transportation systems, organisms, the human body, and ecosystems. This failure process cascades through the elements of the system like a ripple on a pond and continues until substantially all of the elements in the system are compromised and/or the system becomes functionally disconnected from the source of its load. A cascading failure is a failure in a system of interconnected parts in which the failure of one or few parts leads to the failure of other parts, growing progressively as a result of positive feedback. They are often overworked or maintenance is deferred due to budget cuts, because managers know that the system will continue to operate without fixing the backup system (Perrow 1999). In 2012 Charles Perrow wrote, "A normal accident [system accident] is where everyone tries very hard to play safe, but unexpected interaction of two or more failures (because of interactive complexity), causes a cascade of failures (because of tight coupling)."
Owing to this coupling, interdependent networks are extremely sensitive to random failures, and in particular to targeted attacks, such that a failure of a small fraction of nodes in one network can trigger an iterative cascade of failures in several interdependent networks (Crucitti, Latora and Marchiori, "Model for cascading failures in complex networks," Physical Review E (Rapid Communications) 69, 045104 (2004)). One example of redundancy is data centre power generators that activate when the normal power source is unavailable. 1+1 redundancy typically offers the advantage of additional failover transparency in the event of component failure. Related work has used the algorithms for prognostic purposes (NASA Prognostic Center of Excellence) and integrated physics of failure predictions into system-level reliability calculations (McLeish, J.G., "Enhancing MIL-HDBK-217 reliability predictions with physics of failure methods," Reliability and Maintainability Symposium (RAMS), 2010 Proceedings, pp. 1-6, 25-28 Jan. 2010, http://www.dfrsolutions.com/uploads/publications/2010_01_RAMS_Paper.pdf). There are some limitations with the use of physics of failure in design assessments and reliability prediction. A cascade failure can affect large groups of people and systems.
0.15
-1.49
0.9966
1
1.07
C
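A short Python check of the reasoning, assuming (as the problem states) three components with independent failure probability 0.15 each; the variable names are mine.

```python
# The system fails only if K1, K2 and K3 all fail; by independence,
# P(A1 ∩ A2 ∩ A3) = 0.15**3, and the system survives otherwise.
p_component_fails = 0.15
n_components = 3

p_system_fails = p_component_fails ** n_components  # 0.003375
p_system_survives = 1 - p_system_fails              # 0.996625

print(round(p_system_survives, 4))  # 0.9966, consistent with option C
```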
Suppose that $P(A)=0.7, P(B)=0.3$, and $P(A \cap B)=0.2$. These probabilities are listed on the Venn diagram in Figure 1.3-1. Given that the outcome of the experiment belongs to $B$, what then is the probability of $A$?
Each node on the diagram represents an event and is associated with the probability of that event. The region included in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by $A \cap B$. Venn diagrams were introduced in 1880 by John Venn in a paper entitled "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" in the Philosophical Magazine and Journal of Science, about the different ways to represent propositions by diagrams. In probability theory, an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. The probability (with respect to some probability measure) that an event S occurs is the probability that S contains the outcome x of an experiment (that is, it is the probability that $x \in S$). The probability associated with a node is the chance of that event occurring after the parent event occurs. A Venn diagram consists of multiple overlapping closed curves, usually circles, each representing a set. A Venn diagram is a widely used diagram style that shows the logical relation between sets, popularized by John Venn (1834–1923) in the 1880s. A Venn diagram uses simple closed curves drawn on a plane to represent sets. Venn diagrams normally comprise overlapping circles. Since all events are sets, they are usually written as sets (for example, {1, 2, 3}), and represented graphically using Venn diagrams. Venn viewed his diagrams as a pedagogical tool, analogous to verification of physical concepts through experiment. Venn diagrams do not generally contain information on the relative or absolute sizes (cardinality) of sets. In probability theory, an outcome is a possible result of an experiment or trial. The book comes with a 3-page foldout of a seven-bit cylindrical Venn diagram. For instance, in a two-set Venn diagram, one circle may represent the group of all wooden objects, while the other circle may represent the set of all tables. Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely. In 1866, Venn published The Logic of Chance, a groundbreaking book which espoused the frequency theory of probability, arguing that probability should be determined by how often something is forecast to occur as opposed to "educated" assumptions. In Venn diagrams, the curves are overlapped in every possible way, showing all possible relations between the sets. John Venn, FRS, FSA (4 August 1834 – 4 April 1923) was an English mathematician, logician and philosopher noted for introducing Venn diagrams, which are used in logic, set theory, probability, statistics, and computer science. Venn did not use the term "Venn diagram" and referred to the concept as "Eulerian Circles".
These diagrams were devised while designing a stained-glass window in memory of Venn. Edwards–Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum, which were based around intersecting polygons with increasing numbers of sides.
0.66666666666
200
8.0
17.4
-2
A
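A one-line Python check of the conditional-probability definition $P(A \mid B) = P(A \cap B) / P(B)$ with the values given in the problem.

```python
# Conditional probability read off the Venn diagram: restrict to B,
# then ask what fraction of that probability also lies in A.
p_B = 0.3
p_A_and_B = 0.2

p_A_given_B = p_A_and_B / p_B
print(p_A_given_B)  # 0.666..., consistent with option A
```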