Suppose X is a quantitative variable defined on a population Π, and take a simple random sample of size n from Π.
(a) If we estimate the population mean μ_X (defined in Problem 5.4.14(a)) by the sample mean X̄ = n⁻¹ Σᵢ Xᵢ, prove that E(X̄) = μ_X. (Hint: What is the distribution of each Xᵢ?)
(b) Under the assumption that i.i.d. sampling makes sense, show that the variance of X̄ equals σ²_X / n, where σ²_X is defined in Problem 5.4.14(b).
5.4.18 Suppose we have a finite population Π and we know that |Π| = N. In addition, suppose we have a measurement variable X : Π → R¹. Based on a simple random sample of size n from Π, determine an estimator of the population total T = Σ_{π∈Π} X(π). (Hint: Use a function of X̄.)
5.4.19 Under i.i.d. sampling, prove that f̂_X(x) converges in probability to f_X(x) as n → ∞. (Hint: Write f̂_X(x) = n⁻¹ Σᵢ I_{x}(xᵢ) and use the weak law of large numbers.)
CHALLENGES
5.4.20 (Stratified sampling) Suppose that X is a quantitative variable defined on a population Π, and that we can partition Π into two subpopulations Π₁ and Π₂ such that a proportion p of the full population is in Π₁. Let f_{iX} denote the conditional population distribution of X on Π_i.
(a) Prove that f_X(x) = p f_{1X}(x) + (1 − p) f_{2X}(x).
(b) Establish that μ_X = p μ_{1X} + (1 − p) μ_{2X}, where μ_{iX} is the mean of X on Π_i.
(c) Establish that σ²_X = p σ²_{1X} + (1 − p) σ²_{2X} + p(1 − p)(μ_{1X} − μ_{2X})², where σ²_{iX} is the variance of X on Π_i.
(d) Suppose that it makes sense to assume i.i.d. sampling whenever we take a sample from either the full population or either of the subpopulations, i.e., whenever the sample sizes we are considering are small relative to the sizes of these populations. We implement stratified sampling by taking a simple random sample of size n_i from subpopulation Π_i. We then estimate μ_X by p X̄₁ + (1 − p) X̄₂, where X̄_i is the sample mean based on the sample from Π_i. Prove that
E(p X̄₁ + (1 − p) X̄₂) = μ_X and Var(p X̄₁ + (1 − p) X̄₂) = p² σ²_{1X}/n₁ + (1 − p)² σ²_{2X}/n₂.
(e) Under the assumptions of part (d), prove that Var(p X̄₁ + (1 − p) X̄₂) ≤ Var(X̄) when X̄ is based on a simple random sample of size n from the full population and n₁ = pn, n₂ = (1 − p)n. This is called proportional stratified sampling.
(f) Under what conditions is there no benefit to proportional stratified sampling? What do you conclude about situations in which stratified sampling will be most beneficial?
DISCUSSION TOPICS
5.4.21 Sometimes it is argued that it is possible for a skilled practitioner to pick a more accurate, representative sample of a population deterministically rather than by employing simple random sampling. This argument is based in part on the observation that with simple random sampling it is always possible to get a very unrepresentative sample through pure chance, and that this can be avoided by an expert. Comment on this assertion.
5.4.22 Suppose it is claimed that a quantitative measurement X defined on a finite population Π is approximately distributed according to a normal distribution with unknown mean and unknown variance. Explain fully what this claim means.

5.5 Some Basic Inferences

Now suppose we are in a situation involving a measurement X whose distribution is unknown, and we have obtained the data x₁, x₂, ..., xₙ, i.e., we have observed n values of X. Hopefully, these data were the result of simple random sampling, but perhaps they were collected as part of an observational study. Denote the associated unknown population relative frequency function, or an approximating density, by f_X and the population distribution function by F_X.

What we do now with the data depends on two things. First, we have to determine what we want to know about the underlying population distribution. Typically, our interest is in only a few characteristics of this distribution, such as the mean and variance.
Second, we have to use statistical theory to combine the data with the statistical model to make inferences about the characteristics of interest.

We now discuss some typical characteristics of interest and present some informal estimation methods for these characteristics, known as descriptive statistics. These are often used as a preliminary step before more formal inferences are drawn, and they are justified on simple intuitive grounds. They are called descriptive because they estimate quantities that describe features of the underlying distribution.

5.5.1 Descriptive Statistics

Statisticians often focus on various characteristics of distributions. We present some of these in the following examples.

EXAMPLE 5.5.1 Estimating Proportions and Cumulative Proportions
Often we want to make inferences about the value f_X(x) or the value F_X(x) for a specific x. Recall that f_X(x) is the proportion of population members whose X measurement equals x. In general, F_X(x) is the proportion of population members whose X measurement is less than or equal to x.

Now suppose we have a sample x₁, x₂, ..., xₙ from f_X. A natural estimate of f_X(x) is given by f̂_X(x), the proportion of sample values equal to x. A natural estimate of F_X(x) is given by
F̂_X(x) = n⁻¹ Σᵢ I_(−∞, x](xᵢ),
the proportion of sample values less than or equal to x, otherwise known as the empirical distribution function evaluated at x.

Suppose we obtained a particular sample of n = 10 data values. In this case, f̂_X(x) = 0.1 whenever x is a data value and is 0 otherwise. To compute F̂_X(x), we simply count how many sample values are less than or equal to x and divide by n. For example, F̂_X(0) = 2/10 = 0.2, and F̂_X(4) = 9/10 = 0.9.
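To make these estimates concrete, here is a minimal Python sketch of f̂_X and the empirical distribution function. The ten values in `sample` are hypothetical stand-ins (the data table from Example 5.5.1 is not reproduced here); they are chosen only so that the counts agree with the proportions quoted above.

```python
import numpy as np

def empirical_pmf(data, x):
    """Proportion of sample values exactly equal to x (estimate of f_X(x))."""
    data = np.asarray(data, dtype=float)
    return np.mean(data == x)

def empirical_cdf(data, x):
    """Proportion of sample values <= x (empirical distribution function)."""
    data = np.asarray(data, dtype=float)
    return np.mean(data <= x)

# Hypothetical sample of n = 10 values, standing in for the book's data table.
sample = [-1.0, -0.1, 0.2, 0.5, 1.5, 1.9, 2.2, 3.3, 4.0, 5.0]

print(empirical_pmf(sample, 0.2))   # 0.1, exactly one of the ten values equals 0.2
print(empirical_cdf(sample, 0.0))   # 0.2, two of the ten values are <= 0
print(empirical_cdf(sample, 4.0))   # 0.9, nine of the ten values are <= 4
```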
An important class of characteristics of the distribution of a quantitative variable X is given by the following definition.

Definition 5.5.1 For p ∈ [0, 1], the pth quantile (or 100pth percentile) x_p for the distribution with cdf F_X is defined to be the smallest number x_p satisfying p ≤ F_X(x_p).

For example, if your mark on a test placed you at the 90th percentile, then your mark equals x_0.9 and 90% of your fellow test takers achieved your mark or lower. Note that by the definition of the inverse cumulative distribution function (Definition 2.10.1), we can write
x_p = F_X⁻¹(p) = min{x : p ≤ F_X(x)}.
When F_X is strictly increasing and continuous, then x_p = F_X⁻¹(p) is the unique value satisfying
F_X(x_p) = p.    (5.5.1)

Figure 5.5.1 illustrates the situation in which there is a unique solution to (5.5.1). When F_X is not strictly increasing or continuous (as when X is discrete), then there may be more than one, or no, solutions to (5.5.1). Figure 5.5.2 illustrates the situation in which there is no solution to (5.5.1).

Figure 5.5.1: The pth quantile x_p when there is a unique solution to (5.5.1).

Figure 5.5.2: The pth quantile x_p determined by a cdf F_X when there is no solution to (5.5.1).

So, when X is a continuous measurement, a proportion p of the population have their X measurement less than or equal to x_p. As particular cases, x_0.5 = F_X⁻¹(0.5) is the median, while x_0.25 = F_X⁻¹(0.25) and x_0.75 = F_X⁻¹(0.75) are the first and third quartiles, respectively, of the distribution.

EXAMPLE 5.5.2 Estimating Quantiles
A natural estimate of a population quantile x_p is to use x̂_p = F̂_X⁻¹(p). Note, however, that F̂_X is not continuous, so there may not be a solution to (5.5.1) using F̂_X. Applying Definition 5.5.1, however, leads to the following estimate. First, order the observed sample values x₁, ..., xₙ to obtain the order statistics x_(1) ≤ ··· ≤ x_(n) (see Section 2.8.4). Then note that x_(i) is the (i/n)th quantile of the empirical distribution, because F̂_X(x_(i)) = i/n and F̂_X(x) < i/n whenever x < x_(i). In general, we have that the sample pth quantile is x̂_p = x_(i) whenever
(i − 1)/n < p ≤ i/n.    (5.5.2)

A number of modifications to this estimate are sometimes used. For example, if we find i such that (5.5.2) is satisfied and put
x̂_p = x_(i−1) + (np − i + 1)(x_(i) − x_(i−1)),    (5.5.3)
then x̂_p is the linear interpolation between x_(i−1) and x_(i). When n is even, this definition gives the sample median as x̂_0.5 = x_(n/2); a similar formula holds when n is odd (Problem 5.5.21). Also see Problem 5.5.22 for more discussion of (5.5.3). Quite often the sample median is defined to be
x̂_0.5 = x_((n+1)/2) when n is odd, and (x_(n/2) + x_(n/2+1))/2 when n is even,    (5.5.4)
namely, the middle value when n is odd and the average of the two middle values when n is even. For n large enough, all these definitions will yield similar answers. The use of any of these is permissible in an application.

Consider the data in Example 5.5.1. Sorting the data from smallest to largest gives the order statistics x_(1) ≤ ··· ≤ x_(10). Then, using (5.5.3), the sample median is given by x̂_0.5 = x_(5) = 1.5, while the sample quartiles are given by
x̂_0.25 = x_(2) + (10(0.25) − 2)(x_(3) − x_(2)) = 0.05
and
x̂_0.75 = x_(7) + (10(0.75) − 7)(x_(8) − x_(7)) = 2.2 + (0.5)(3.3 − 2.2) = 2.75.
So in this case, we estimate that 25% of the population under study has an X measurement less than 0.05, etc.
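The following sketch, again in Python and again using a hypothetical ten-point sample in place of the book's data, implements the order-statistic rule (5.5.2) together with the interpolation (5.5.3). The stand-in values are chosen so that the quartiles come out to the 0.05, 1.5, and 2.75 computed in Example 5.5.2.

```python
import numpy as np

def sample_quantile(data, p):
    """Sample p-th quantile: order-statistic rule (5.5.2) with the
    linear interpolation of (5.5.3) between adjacent order statistics."""
    x = np.sort(np.asarray(data, dtype=float))   # order statistics x_(1) <= ... <= x_(n)
    n = len(x)
    i = int(np.ceil(n * p))                      # smallest i with (i - 1)/n < p <= i/n
    if i <= 1:
        return x[0]
    lam = n * p - (i - 1)                        # interpolation weight in (0, 1]
    return x[i - 2] + lam * (x[i - 1] - x[i - 2])

# Hypothetical 10-point sample standing in for the data of Example 5.5.1,
# chosen so the quartiles match the values computed in Example 5.5.2.
sample = [-1.0, -0.1, 0.2, 0.5, 1.5, 1.9, 2.2, 3.3, 4.0, 5.0]

q1 = sample_quantile(sample, 0.25)   # 0.05
med = sample_quantile(sample, 0.50)  # 1.5
q3 = sample_quantile(sample, 0.75)   # 2.75
print(q1, med, q3, q3 - q1)          # IQR = 2.70
```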
EXAMPLE 5.5.3 Measuring Location and Scale of a Population Distribution
Often we are asked to make inferences about the value of the population mean
μ_X = (1/|Π|) Σ_{π∈Π} X(π)
and the population variance
σ²_X = (1/|Π|) Σ_{π∈Π} (X(π) − μ_X)²,
where Π is a finite population and X is a real-valued measurement defined on it. These are measures of the location and spread of the population distribution about the mean, respectively. Note that calculating a mean or variance makes sense only when X is a quantitative variable.

When X is discrete, we can also write μ_X = Σ_x x f_X(x), because f_X(x) equals the proportion of elements π ∈ Π with X(π) = x. In the continuous case, using an approximating density f_X, we can write μ_X = ∫ x f_X(x) dx. Similar formulas exist for the population variance of X (see Problem 5.4.14).

It will probably occur to you that a natural estimate of the population mean μ_X is given by the sample mean x̄ = n⁻¹ Σᵢ xᵢ. Also, a natural estimate of the population variance σ²_X is given by the sample variance
s² = (n − 1)⁻¹ Σᵢ (xᵢ − x̄)².    (5.5.5)
Later we will explain why we divided by n − 1 in (5.5.5) rather than n. Actually, it makes little difference which we use, for even modest values of n. The sample standard deviation is given by s, the positive square root of s². For the data in Example 5.5.1, we obtain x̄ = 1.73 and s = 2.097.

The population mean μ_X and population standard deviation σ_X serve as a pair, in which μ_X measures where the distribution is located on the real line and σ_X measures how much spread there is in the distribution about μ_X. Clearly, the greater the value of σ_X, the more variability there is in the distribution.

Alternatively, we could use the population median x_0.5 as a measure of location of the distribution and the population interquartile range x_0.75 − x_0.25 as a measure of the amount of variability in the distribution around the median. The median and interquartile range are the preferred choice to measure these aspects of the distribution whenever the distribution is skewed, i.e., not symmetrical. This is because the median is insensitive to very extreme values, while the mean is not. For example, house prices in an area are well known to exhibit a right-skewed distribution. A few houses selling for very high prices will not change the median price but could result in a big change in the mean price.

When we have a symmetric distribution, the mean and median will agree (provided the mean exists). The greater the skewness in a distribution, however, the greater will be the discrepancy between its mean and median. For example, in Figure 5.5.3 we have plotted the density of a χ²(4) distribution. This distribution is skewed to the right; its mean is 4 while its median is 3.3567.

Figure 5.5.3: The density f of a χ²(4) distribution.

We estimate the population interquartile range x_0.75 − x_0.25 by the sample interquartile range (IQR), given by IQR = x̂_0.75 − x̂_0.25. For the data in Example 5.5.1, we obtain the sample median to be x̂_0.5 = 1.5, while IQR = 2.75 − 0.05 = 2.70. If we change the largest value in the sample from x_(10) = 5.0 to x_(10) = 500.0, the sample median remains x̂_0.5 = 1.5, but note that the sample mean goes from 1.73 to 51.23!

5.5.2 Plotting Data

It is always a good idea to plot the data. For discrete quantitative variables, we can plot f̂_X, i.e., plot the sample proportions (relative frequencies). For continuous quantitative variables, we introduced the density histogram in Section 5.4.3. These plots give us some idea of the shape of the distribution from which we are sampling. For example, we can see if there is any evidence that the distribution is strongly skewed. We now consider another very useful plot for quantitative variables.

EXAMPLE 5.5.4 Boxplots and Outliers
Another useful plot for quantitative variables is known as a boxplot. For example, Figure 5.5.4 gives a boxplot for the data in Example 5.5.1. The line in the center of the box is the median. The line below the median is the first quartile, and the line above the median is the third quartile. The vertical lines from the quartiles are called whiskers, which run from the quartiles to the adjacent values. The adjacent values are given by the greatest value less than or equal to the upper limit (the third quartile plus 1.5 times the IQR) and by the least value greater than or equal to the lower limit (the first quartile minus 1.5 times the IQR). Values beyond the adjacent values, when these exist, are plotted with a *; in this case, there are none. If we changed x_(10) = 5.0 to x_(10) = 15.0, however, we would see this extreme value plotted as a *, as shown in Figure 5.5.5.

Figure 5.5.4: A boxplot of the data in Example 5.5.1.

Figure 5.5.5: A boxplot of the data in Example 5.5.1, changing x_(10) = 5.0 to x_(10) = 15.0.

Points outside the upper and lower limits, and thus plotted by a *, are commonly referred to as outliers. An outlier is a value that is extreme with respect to the rest of the observations. Sometimes outliers occur because a mistake has been made in collecting or recording the data, but they also occur simply because we are sampling from a long-tailed distribution. It is often difficult to ascertain which is the case in a particular application, but each such observation should be noted. We have seen in Example 5.5.3 that outliers can have a big impact on statistical analyses. Their effects should be recorded when reporting the results of a statistical analysis.
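As a rough check of these points, the sketch below computes the sample mean, sample variance, median, and IQR for the same hypothetical stand-in sample used earlier, shows how changing the largest value affects the mean but not the median, and applies the 1.5 × IQR boxplot rule from Example 5.5.4. The quantile method name assumes NumPy 1.22 or later; on this data it reproduces the interpolation rule (5.5.3).

```python
import numpy as np

def describe(data):
    """Sample mean, sample variance (n - 1 divisor, as in (5.5.5)), median, and IQR."""
    x = np.asarray(data, dtype=float)
    q1, med, q3 = np.quantile(x, [0.25, 0.5, 0.75], method="interpolated_inverted_cdf")
    return {"mean": x.mean(), "var": x.var(ddof=1), "median": med, "IQR": q3 - q1}

def boxplot_outliers(data):
    """Values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR], the rule used in Example 5.5.4."""
    x = np.asarray(data, dtype=float)
    q1, q3 = np.quantile(x, [0.25, 0.75], method="interpolated_inverted_cdf")
    iqr = q3 - q1
    return x[(x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)]

# Hypothetical 10-point sample standing in for the data of Example 5.5.1.
sample = [-1.0, -0.1, 0.2, 0.5, 1.5, 1.9, 2.2, 3.3, 4.0, 5.0]
print(describe(sample))                          # median 1.5, IQR 2.70

changed = sample[:-1] + [500.0]                  # change the largest value, as in the text
print(describe(changed)["median"])               # still 1.5: the median is insensitive
print(describe(changed)["mean"])                 # the mean is pulled up drastically

print(boxplot_outliers(sample[:-1] + [15.0]))    # [15.] is flagged once x_(10) becomes 15.0
```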
An outlier is a value that is extreme with respect to the rest of the observations. Sometimes outliers occur because a mistake has been made in collecting or recording the data, but they also occur simply because we are sampling from a long­tailed distribution. It is often difficult to ascertain which is the case in a particular application, but each such observation should be noted. We have seen in Example 5.5.3 that outliers can have a big impact on statistical analyses. Their effects should be recorded when reporting the results of a statistical analysis. For categorical variables, it is typical to plot the data in a bar chart, as described in the next example. EXAMPLE 5.5.5 Bar Charts For categorical variables, we code the values of the variable as equispaced numbers and then plot constant­width rectangles (the bars) over these values so that the height of the rectangle over a value equals the proportion of times that value is assumed. Such a plot is called a bar chart. Note that the values along the x­axis are only labels and not to be treated as numbers that we can do arithmetic on, etc. For example, suppose we take a simple random sample of 100 students and record their favorite avor of ice cream (from amongst four possibilities), obtaining the results given in the following table. Flavor Chocolate Vanilla Butterscotch Strawberry Count 42 28 22 8 Proportion 0.42 0.28 0.22 0.08 Coding Chocolate as 1, Vanilla as 2, Butterscotch as 3, and Strawberry as 4, Figure 5.5.6 presents a bar chart of these data. It is typical for the bars in these charts not to touch. Chapter 5: Statistical Inference 289 .4 0.3 0.2 0.1 1 2 3 4 Flavor Figure 5.5.6: A bar chart for the data of Example 5.5.5. 5.5.3 Types of Inferences Certainly quoting descriptive statistics and plotting the data are methods used by a sta­ tistician to try to learn something about the underlying population distribution. There are difficulties with this approach, however, as we have just chosen these methods based on intuition. Often it is not clear which descriptive statistics we should use. Further­ more, these data summaries make no use of the information we have about the true pop­ ulation distribution as expressed by the statistical model, namely, f X Taking account of this information leads us to develop a theory of statistical inference, i.e., to specify how we should combine the model information together with the data to make inferences about population quantities. We will do this in Chapters 6, 7, and 8, but first we discuss the types of inferences that are commonly used in applications. : f In Section 5.2, we discussed three types of inference in the context of a known probability model as specified by some density or probability function f We noted that we might want to do any of the following concerning an unobserved response value s. (i) Predict an unknown response value s via a prediction t. (ii) Construct a subset C of the sample space S that has a high probability of containing an unknown response value s. (iii) Assess whether or not s0 specified by f S is a plausible value from the probability distribution We refer to (i), (ii), and (iii) as inferences about the unobserved s The examples of Section 5.2 show that these are intuitively reasonable concepts. In an application, we do not know f ; we know only that f observe the data s. We are uncertain about which candidate f lently, which of the possible values of is correct. 
5.5.3 Types of Inferences

Certainly quoting descriptive statistics and plotting the data are methods used by a statistician to try to learn something about the underlying population distribution. There are difficulties with this approach, however, as we have just chosen these methods based on intuition. Often it is not clear which descriptive statistics we should use. Furthermore, these data summaries make no use of the information we have about the true population distribution as expressed by the statistical model, namely, {f_θ : θ ∈ Ω}. Taking account of this information leads us to develop a theory of statistical inference, i.e., to specify how we should combine the model information together with the data to make inferences about population quantities. We will do this in Chapters 6, 7, and 8, but first we discuss the types of inferences that are commonly used in applications.

In Section 5.2, we discussed three types of inference in the context of a known probability model as specified by some density or probability function f. We noted that we might want to do any of the following concerning an unobserved response value s.

(i) Predict an unknown response value s via a prediction t.
(ii) Construct a subset C of the sample space S that has a high probability of containing an unknown response value s.
(iii) Assess whether or not s₀ ∈ S is a plausible value from the probability distribution specified by f.

We refer to (i), (ii), and (iii) as inferences about the unobserved s. The examples of Section 5.2 show that these are intuitively reasonable concepts.

In an application, we do not know f; we know only that f ∈ {f_θ : θ ∈ Ω}, and we observe the data s. We are uncertain about which candidate f_θ is correct, or, equivalently, which of the possible values of θ is correct.

As mentioned in Section 5.5.1, our primary goal may be to determine not the true f_θ but some characteristic of the true distribution, such as its mean, its median, or the value of the true distribution function F at a specified value. We will denote this characteristic of interest by ψ(θ). For example, when the characteristic of interest is the mean of the true distribution of a continuous random variable, then
ψ(θ) = ∫ x f_θ(x) dx.
Alternatively, we might be interested in ψ(θ) = F_θ⁻¹(0.5), the median of the distribution of a random variable with distribution function given by F_θ.

Different values of θ lead to possibly different values for the characteristic ψ(θ). After observing the data s, we want to make inferences about what the correct value is. We will consider the three types of inference for ψ(θ).

(i) Choose an estimate T(s) of ψ(θ), referred to as the problem of estimation.
(ii) Construct a subset C(s) of the set of possible values for ψ(θ) that we believe contains the true value, referred to as the problem of credible region or confidence region construction.
(iii) Assess whether or not ψ₀ is a plausible value for ψ(θ) after having observed s, referred to as the problem of hypothesis assessment.

So estimates, credible or confidence regions, and hypothesis assessment are examples of types of inference. In particular, we want to construct estimates T(s) of ψ(θ), construct credible or confidence regions C(s) for ψ(θ), and assess the plausibility of a hypothesized value ψ₀ for ψ(θ). The problem of statistical inference entails determining how we should combine the information in the model {f_θ : θ ∈ Ω} and the data s to carry out these inferences about ψ(θ).

A very important statistical model for applications is the location-scale normal model introduced in Example 5.3.4. We illustrate some of the ideas discussed in this section via that model.

EXAMPLE 5.5.6 Application of the Location-Scale Normal Model
Suppose the following simple random sample of the heights (in inches) of 30 students has been collected.

64.9  64.9  61.6  61.4  64.3  64.0  66.3  62.5  61.5  64.3
63.1  64.2  65.1  65.0  66.8  64.4  65.8  66.4  59.8  63.4
65.8  63.6  61.9  71.4  66.5  66.6  67.8  65.0  60.9  66.3

The statistician believes that the distribution of heights in the population can be well approximated by a normal distribution with some unknown mean and variance, and she is unwilling to make any further assumptions about the true distribution. Accordingly, the statistical model is given by the family of N(μ, σ²) distributions, where θ = (μ, σ²) ∈ Ω = R¹ × (0, ∞) is unknown.

Does this statistical model make sense, i.e., is the assumption of normality appropriate for this situation? The density histogram (based on 12 equal-length intervals from 59.5 to 71.5) in Figure 5.5.7 looks very roughly normal, but the extreme observation in the right tail might be some grounds for concern. In any case, we proceed as if this assumption is reasonable. In Chapter 9, we will discuss more refined methods for assessing this assumption.

Figure 5.5.7: Density histogram of heights in Example 5.5.6.

Suppose we are interested in making inferences about the population mean height, namely, the characteristic of interest is ψ(θ) = μ. Alternatively, we might want to make inferences about the 90th percentile of this distribution, i.e., ψ(θ) = x_0.90 = μ + σ z_0.90, where z_0.90 is the 90th percentile of the N(0, 1) distribution (when X ~ N(μ, σ²), then P(X ≤ μ + σ z_0.90) = 0.90). So 90% of the population under study have height less than x_0.90, a value unknown to us because we do not know the value of (μ, σ²). Obviously, there are many other characteristics of the true distribution about which we might want to make inferences.

Just using our intuition, T(x₁, ..., xₙ) = x̄ seems like a sensible estimate of μ, and T(x₁, ..., xₙ) = x̄ + s z_0.90 seems like a sensible estimate of μ + σ z_0.90. To justify the choice of these estimates, we will need the theories developed in later chapters. In this case, we obtain x̄ = 64.517, and from (5.5.5) we compute s = 2.379. From Table D.2 we obtain z_0.90 = 1.2816, so that x̄ + s z_0.90 = 64.517 + 2.379(1.2816) = 67.566.
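The following short sketch (assuming NumPy and SciPy are available) reproduces these numbers from the height data listed above.

```python
import numpy as np
from scipy.stats import norm

# Heights (in inches) of the 30 sampled students from Example 5.5.6.
heights = np.array([64.9, 64.9, 61.6, 61.4, 64.3, 64.0, 66.3, 62.5, 61.5, 64.3,
                    63.1, 64.2, 65.1, 65.0, 66.8, 64.4, 65.8, 66.4, 59.8, 63.4,
                    65.8, 63.6, 61.9, 71.4, 66.5, 66.6, 67.8, 65.0, 60.9, 66.3])

xbar = heights.mean()            # estimate of mu, about 64.517
s = heights.std(ddof=1)          # sample standard deviation from (5.5.5), about 2.379
z90 = norm.ppf(0.90)             # 90th percentile of N(0, 1), about 1.2816

estimate_x90 = xbar + s * z90    # estimate of mu + sigma * z_0.90, about 67.57
print(xbar, s, estimate_x90)
```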
How accurate is the estimate x̄ of μ? A natural approach to answering this question is to construct a credible interval, based on the estimate, that we believe has a high probability of containing the true value of μ and is as short as possible. For example, the theory in Chapter 6 leads to using confidence intervals for μ of the form [x̄ − sc, x̄ + sc] for some choice of the constant c. Notice that x̄ is at the center of the interval. The theory in Chapter 6 will show that, in this case, choosing c = 0.3734 leads to what is known as a 0.95-confidence interval for μ. We then take the half-length of this interval, namely,
sc = 2.379(0.3734) = 0.888,
as a measure of the accuracy of the estimate x̄ = 64.517 of μ. In this case, we have enough information to say that we know the true value of μ to within one inch, at least with "confidence" equal to 0.95.

Finally, suppose we have a hypothesized value μ₀ for the population mean height. For example, we may believe that the mean height of the population of individuals under study is the same as the mean height of another population for which this quantity is known to equal μ₀ = 65. Then, based on the observed sample of heights, we want to assess whether or not the value μ₀ = 65 makes sense. If the sample mean height x̄ is far from μ₀, this would seem to be evidence against the hypothesized value. In Chapter 6, we will show that we can base our assessment on the value of
t = (x̄ − μ₀)/(s/√n) = (64.517 − 65)/(2.379/√30) = −1.112.
If the value of |t| is very large, then we will conclude that we have evidence against the hypothesized value μ₀ = 65. We have to prescribe what we mean by large here, and we will do this in Chapter 6. It turns out that t = −1.112 is a plausible value for t when the true value of μ equals 65, so we have no evidence against the hypothesis.

Summary of Section 5.5

Descriptive statistics represent informal statistical methods that are used to make inferences about the distribution of a variable X of interest, based on an observed sample from this distribution. These quantities summarize characteristics of the observed sample and can be thought of as estimates of the corresponding unknown population quantities. More formal methods are required to assess the error in these estimates or even to replace them with estimates having greater accuracy.

It is important to plot the data using relevant plots. These give us some idea of the shape of the population distribution from which we are sampling.
There are three main types of inference: estimates, credible or confidence inter­ vals, and hypothesis assessment. EXERCISES 5.5.1 Suppose the following data are obtained by recording X the number of cus­ tomers that arrive at an automatic banking machine during 15 successive one­minute time intervals and f X 4 (a) Record estimates of f X 0 (b) Record estimates of FX 0 FX 1 FX 2 FX 3 and FX 4 (c) Plot f X . (d) Record the mean and variance. f X 1 f X 2 Chapter 5: Statistical Inference 293 (e) Record the median and IQR and provide a boxplot. Using the rule prescribed in Example 5.5.4, decide whether there are any outliers. 5.5.2 Suppose the following sample of waiting times (in minutes) was obtained for customers in a queue at an automatic banking machine. 15 5 10 a) Record the empirical distribution function. (b) Plot f X . (c) Record the mean and variance. (d) Record the median and IQR and provide a boxplot. Using the rule given in Example 5.5.4, decide whether there are any outliers. 5.5.3 Suppose an experiment was conducted to see whether mosquitoes are attracted differentially to different colors. Three different colors of fabric were used and the number of mosquitoes landing on each piece was recorded over a 15­minute interval. The following data were obtained. Color 1 Color 2 Color 3 Number of landings 25 35 22 f X 2 and f X 3 where we use i for color i. (a) Record estimates of f X 1 (b) Does it make sense to estimate FX i ? Explain why or why not. (c) Plot a bar chart of these data. 5.5.4 A student is told that his score on a test was at the 90th percentile in the popula­ tion of all students who took the test. Explain exactly what this means. 5.5.5 Determine the empirical distribution function based on the sample given below Plot this function. Determine the sample median, the first and third quartiles, and the interquartile range. What is your estimate of F 1 ? 5.5.6 Consider the density histogram in Figure 5.5.8. If you were asked to record measures of location and spread for the data corresponding to this plot, what would you choose? Justify your answer. 5.5.7 Suppose that a statistical model is given by the family of N where inferences about the first quartile of the true distribution, then determine 2 5.5.8 Suppose that a statistical model is given by the family of N 0 distributions where If our interest is in making inferences about the third moment of the distribution, then determine 2 0 distributions If our interest is in making R1 is unknown, while R1 is unknown, while 2 0 is known. 2 0 is known. 294 Section 5.5: Some Basic Inferences 2 2 2 R1 R1 2 0 is known. R1 is unknown, while 2 0 distributions If our interest is in making 2 distributions R is unknown. If our interest is in making inferences 2 distributions R is unknown. If our interest is in making inferences 5.5.9 Suppose that a statistical model is given by the family of N where inferences about the distribution function evaluated at 3, then determine 5.5.10 Suppose that a statistical model is given by the family of N where about the first quartile of the true distribution, then determine 5.5.11 Suppose that a statistical model is given by the family of N where about the distribution function evaluated at 3, then determine 5.5.12 Suppose that a statistical model is given by the family of Bernoulli tions where that two independent observations from this model are the same, then determine distribu­ 5.5.13 Suppose that a statistical model is given by the family of Bernoulli tions where [0 1]. 
If our interest is in making inferences about the probability that in two independent observations from this model we obtain a 0 and a 1, then de­ termine ] dis­ 5.5.14 Suppose that a statistical model is given by the family of Uniform[0 tributions where If our interest is in making inferences about the coefficient of variation (see Exercise 5.3.5) of the true distribution, then determine distribu­ [0 1]. If our interest is in making inferences about the probability 0 2 What do you notice about this characteristic? 5.5.15 Suppose that a statistical model is given by the family of Gamma butions where variance of the true distribution, then determine distri­ If our interest is in making inferences about the 0 0 0.3 0.2 0.1 0.0 0 5 10 Figure 5.5.8: Density histogram for Exercise 5.5.6. COMPUTER EXERCISES 5.5.16 Do the following based on the data in Exercise 5.4.5. (a) Compute the order statistics for these data. (b) Calculate the empirical distribution function at the data points. Chapter 5: Statistical Inference 295 (c) Calculate the sample mean and the sample standard deviation. (d) Obtain the sample median and the sample interquartile range. (e) Based on the histograms obtained in Exercise 5.4.5, which set of descriptive statis­ tics do you feel are appropriate for measuring location and spread? (f) Suppose the first data value was recorded incorrectly as 13.9 rather than as 3.9. Repeat parts (c) and (d) using this data set and compare your answers with those previ­ ously obtained. Can you draw any general conclusions about these measures? Justify your reasoning. 5.5.17 Do the following based on the data in Example 5.5.6. (a) Compute the order statistics for these data. (b) Plot the empirical distribution function (only at the sample points). (c) Calculate the sample median and the sample interquartile range and obtain a box­ plot. Are there any outliers? (d) Based on the boxplot, which set of descriptive statistics do you feel is appropriate for measuring location and spread? (e) Suppose the first data value was recorded incorrectly as 84.9 rather than as 64.9. Repeat parts (c) and (d) using this data set and see whether any observations are deter­ mined to be outliers. 5.5.18 Generate a sample of 30 from an N 10 2 distribution and a sample of 1 from an N 30 2 distribution. Combine these together to make a single sample of 31 (a) Produce a boxplot of these data. (b) What do you notice about this plot? (c) Based on the boxplot, what characteristic do you think would be appropriate to measure the location and spread of the distribution? Explain why. 5.5.19 Generate a sample of 50 from a (a) Produce a boxplot of these data. (b) What do you notice about this plot? (c) Based on the boxplot, what characteristic do you think would be appropriate to measure the location and spread of the distribution? Explain why. 5.5.20 Generate a sample of 50 from an N 4 1 distribution. Suppose your interest is 4 and
in estimating the 90th percentile x0 9 of this distribution and we pretend that 2 1 distribution. 1 are unknown. (a) Compute an estimate of x0 9 based on the appropriate order statistic. (b) Compute an estimate based on the fact that x0 9 percentile of the N 0 1 distribution. (c) If you knew, or at least were willing to assume, that the sample came from a normal distribution, which of the estimates in parts (a) or (b) would you prefer? Explain why. z0 9 where z0 9 is the 90th PROBLEMS 5.5.21 Determine a formula for the sample median, based on interpolation (i.e., using (5.5.3)) when n is odd. (Hint: Use the least integer function or ceiling x smallest integer greater than or equal to x ) 296 Section 5.5: Some Basic Inferences 5.5.22 An alternative to the empirical distribution function is to define a distribution function F by F x and if x 1 if x if for i 1 F x i if x i x (a) Show that F x i for i (b) Prove that F is continuous on x 1 (c) Show that, for p p. F x p [1 n 1 1 n and is increasing from 0 to 1. and right continuous everywhere. the value x p defined in (5.5.3) is the solution to DISCUSSION TOPICS 5.5.23 Sometimes it is argued that statistics does not need a formal theory to prescribe inferences. Rather, statistical practice is better left to the skilled practitioner to decide what is a sensible approach in each problem. Comment on these statements. 5.5.24 How reasonable do you think it is for an investigator to assume that a random variable is normally distributed? Discuss the role of assumptions in scientific mod­ elling. Chapter 6 Likelihood Inference CHAPTER OUTLINE Section 1 The Likelihood Function Section 2 Maximum Likelihood Estimation Section 3 Section 4 Distribution­Free Methods Section 5 Inferences Based on the MLE Large Sample Behavior of the MLE (Advanced) In this chapter, we discuss some of the most basic approaches to inference. In essence, we want our inferences to depend only on the model P : and the data s. These methods are very minimal in the sense that they require few assumptions. While successful for certain problems, it seems that the additional structure of Chapter 7 or Chapter 8 is necessary in more involved situations. The likelihood function is one of the most basic concepts in statistical inference. Entire theories of inference have been constructed based on it. We discuss likeli­ hood methods in Sections 6.1, 6.2, 6.3, and 6.5. In Section 6.4, we introduce some distribution­free methods of inference. These are not really examples of likelihood methods, but they follow the same basic idea of having the inferences depend on as few assumptions as possible. 6.1 The Likelihood Function Likelihood inferences are based only on the data s and the model P : — the set of possible probability measures for the system under investigation From these ingredients we obtain the basic entity of likelihood inference, namely, the likelihood function. To motivate the definition of the likelihood function, suppose we have a statistical model in which each P is discrete, given by probability function f Having observed s consider the function L and taking values in R1, given by s defined on the parameter space L s f s 297 298 Section 6.1: The Likelihood Function We refer to L The value L we are fixing the data and varying the value of the parameter. s as the likelihood function determined by the model and the data. 
(θ | s) as the likelihood function determined by the model and the data. The value L(θ | s) is called the likelihood of θ. Note that for the likelihood function, we are fixing the data and varying the value of the parameter.

We see that f_θ(s) is just the probability of obtaining the data s when the true value of the parameter is θ. This imposes a belief ordering on Ω, namely, we believe in θ₁ as the true value of θ over θ₂ whenever f_θ₁(s) > f_θ₂(s). This is because the inequality says that the data are more likely under θ₁ than θ₂. We are indifferent between θ₁ and θ₂ whenever f_θ₁(s) = f_θ₂(s). Likelihood inference about θ is based on this ordering.

It is important to remember the correct interpretation of L(θ | s). The value L(θ | s) is the probability of s given that θ is the true value; it is not the probability of θ given that we have observed s. Also, it is possible that the value of L(θ | s) is very small for every value of θ. So it is not the actual value of the likelihood that is telling us how much support to give to a particular θ, but rather its value relative to the likelihoods of other possible parameter values.

EXAMPLE 6.1.1
Suppose S = {1, 2, ...} and that the statistical model is {P_θ : θ ∈ {1, 2}}, where P₁ is the uniform distribution on the integers {1, ..., 10³} and P₂ is the uniform distribution on {1, ..., 10⁶}. Further suppose that we observe s = 10. Then L(1 | 10) = 1/10³ and L(2 | 10) = 1/10⁶. Both values are quite small, but note that the likelihood supports θ = 1 a thousand times more than it supports θ = 2.

Accordingly, we are only interested in likelihood ratios L(θ₁ | s)/L(θ₂ | s), for θ₁, θ₂ ∈ Ω, when it comes to determining inferences for θ based on the likelihood function. This implies that any function that is a positive multiple of L(· | s), i.e., L*(· | s) = cL(· | s) for some fixed c > 0, can serve equally well as a likelihood function. We call two likelihoods equivalent if they are proportional in this way. In general, we refer to any positive multiple of L(· | s) as a likelihood function.

EXAMPLE 6.1.2
Suppose that a coin is tossed n = 10 times and that s = 4 heads are observed. With no knowledge whatsoever concerning the probability of getting a head on a single toss, the appropriate statistical model for the data is the Binomial(10, θ) model with θ ∈ Ω = [0, 1]. The likelihood function is given by
L(θ | 4) = (10 choose 4) θ⁴ (1 − θ)⁶,    (6.1.1)
which is plotted in Figure 6.1.1. This likelihood peaks at θ = 0.4 and takes the value 0.2508 there. We will examine uses of the likelihood to estimate the unknown θ and assess the accuracy of the estimate. Roughly speaking, however, this is based on where the likelihood takes its maximum and how much spread there is in the likelihood about its peak.

Figure 6.1.1: Likelihood function from the Binomial(10, θ) model when s = 4 is observed.

There is a range of approaches to obtaining inferences via the likelihood function. At one extreme is the likelihood principle.

Likelihood Principle: If two model and data combinations yield equivalent likelihood functions, then inferences about the unknown parameter must be the same.

This principle dictates that anything we want to say about the unknown value of θ must be based only on L(· | s). For many statisticians, this is viewed as a very severe proscription. Consider the following example.

EXAMPLE 6.1.3
Suppose a coin is tossed in independent tosses until four heads are obtained, and the number of tails observed until the fourth head is s = 6. Then s is distributed Negative-Binomial(4, θ), and the likelihood specified by the observed data is
L(θ | 6) = (9 choose 6) θ⁴ (1 − θ)⁶.
Note that this likelihood function is a positive multiple of (6.1.1).
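The equivalence of these two likelihoods can be checked numerically. The sketch below (assuming SciPy is available) evaluates both on a grid of θ values; note that SciPy's nbinom counts the number of failures before the rth success, which matches the description in Example 6.1.3.

```python
import numpy as np
from scipy.stats import binom, nbinom

theta = np.linspace(0.05, 0.95, 19)

# Likelihood from Example 6.1.2: Binomial(10, theta), observed s = 4 heads.
L_binom = binom.pmf(4, 10, theta)

# Likelihood from Example 6.1.3: Negative-Binomial(4, theta), observed s = 6 tails
# before the 4th head.
L_nbinom = nbinom.pmf(6, 4, theta)

# The ratio is constant in theta, so the two likelihoods are proportional,
# i.e., equivalent in the sense described above.
print(np.allclose(L_binom / L_nbinom, L_binom[0] / L_nbinom[0]))   # True
print(L_binom[np.argmax(L_binom)], theta[np.argmax(L_binom)])      # about 0.2508 at theta = 0.4
```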
So the likelihood principle asserts that these two model and data combinations must yield the same inferences about the unknown . In effect, the likelihood principle says we must ignore the fact that the data were obtained in entirely different ways. If, how­ ever, we take into account additional model features beyond the likelihood function, then it turns out that we can derive different inferences for the two situations. In partic­ ular, assessing a hypothesized value 0 can be carried out in different ways when the sampling method is taken into account. Many statisticians believe this additional information should be used when deriving inferences. 300 Section 6.1: The Likelihood Function As an example of an inference derived from a likelihood function, consider a set of the form C s : L s c for some c 0 The set C s is referred to as a likelihood region. It contains all those values for which their likelihood is at least c A likelihood region, for some c, seems C s , like a sensible set to quote as possibly containing the true value of then L C s and so is not as well­supported by the observed data as any value in C s . The size of C s can then be taken as a measure of how uncertain we are about the true value of s for every . For, if L s . We are left with the problem, however, of choosing a suitable value for c and, as Example 6.1.1 seems to indicate, the likelihood itself does not suggest a natural way to do this. In Section 6.3.2, we will discuss a method for choosing c that is based upon additional model properties beyond the likelihood function. So far in this section, we have assumed that our statistical models are comprised s of discrete distributions. The definition of the likelihood is quite natural, as L is simply the probability of s occurring when is the true value. This interpretation is clearly not directly available, however, when we have a continuous model because every data point has probability 0 of occurring. Imagine, however, that f 1 s f 2 s and that s R1 Then, assuming the continuity of every f at s we have P 1 V b a f 1 s dx P 2 V b a f 2 s dx for every interval V a b containing s that is small enough. We interpret this to mean that the probability of s occurring when 1 is true is greater than the probability of s occurring when 2 is true. So the data s support 1 more than 2 A similar interpretation applies when s 1 and V is a region containing s Rn for n s and interpret the ordering this imposes on the values of Therefore, in the continuous case, we again define the likelihood function by L f s exactly as we do in the discrete case.1 Again, two likelihoods will be considered equivalent if one is a positive multiple of the other. Now consider a very important example. EXAMPLE 6.1.4 Location Normal Model Suppose that x1 xn (i.i.d.) sample from an N 2 0 distribution where 0 is known. The likelihood function is given by 2 0 is an observed independently and identically distributed R1 is unknown and L x1 xn n i 1 f xi n i 1 2 2 0 1 2 exp 1 2 2 0 xi 2 1Note, however, that whenever we have a situation in which f 1 s f 2 s we could still have P 1 V 1 is supported more than 2 rather than these
two values having equal support, as implied by the likelihood. This phenomenon does not occur in the examples we discuss, so we will ignore it here. P 2 V for every V containing s and small enough. This implies that Chapter 6: Likelihood Inference 301 and clearly this simplifies to L x1 xn 2 2 2 0 2 0 n 2 exp n 2 exp xi 2 2 exp n 1 2 2 0 s2 An equivalent, simpler version of the likelihood function is then given by L x1 xn exp n 2 2 0 x 2 and we will use this version. For example, suppose n plotted in Figure 6.1.2. 25 2 0 1 and we observe x 3 3 This function is 1.0 L 0.8 0.6 0.4 0.2 0.0 2 3 4 5 theta Figure 6.1.2: Likelihood from a location normal model based on a sample of 25 with x 3 3. The likelihood peaks at there. The likelihood interval x 3 3 and the plotted function takes the value 1 C x : L x1 xn 0 5 3 0645 3 53548 contains all those values whose likelihood is at least 0.5 of the value of the likelihood at its peak. The location normal model is impractical for many applications, as it assumes that the variance is known, while the mean is unknown. For example, if we are interested in the distribution of heights in a population, it seems unlikely that we will know the population variance but not know the population mean. Still, it is an important statis­ tical model, as it is a context where inference methods can be developed fairly easily. 302 Section 6.1: The Likelihood Function The methodology developed for this situation is often used as a paradigm for inference methods in much more complicated models. The parameter need not be one­dimensional. The interpretation of the likelihood is still the same, but it is not possible to plot it — at least not when the dimension of is greater than 2. EXAMPLE 6.1.5 Multinomial Models In Example 2.8.5, we introduced multinomial distributions. These arise in applications when we have a categorical response variable s that can take a finite number k of values, say, 1 i i . 3 and we do not know the value of 1 2 3 In this k and P s Suppose, then, that k case, the parameter space is given by 1 2 3 : i 0 for i 1 2 3 and 1 2 3 1 Notice that it is really only two­dimensional, because as soon as we know the value of 1 and 2 we immediately know the value of the remaining any two of the i ’s say, parameter, as 2 This fact should always be remembered when we are 3 dealing with multinomial models. 1 1 Now suppose we observe a sample of n from this distribution, say, s1 sn . The likelihood function for this sample is given by L 1 2 3 s1 sn x1 1 x2 2 x3 3 (6.1.2) where xi is the number of i’s in the sample. Using the fact that we can treat positive multiples of the likelihood as being equiv­ alent, we see that the likelihood based on the observed counts x1 x2 x3 (since they arise from a Multinomial n 3 distribution) is given by 1 2 L 1 2 3 x1 x2 x3 x1 1 x2 2 x3 3 . This is identical to the likelihood (as functions of 2 and 3) for the original sam­ ple. It is certainly simpler to deal with the counts rather than the original sample. This is a very important phenomenon in statistics and is characterized by the concept of sufficiency, discussed in the next section. 1 6.1.1 Sufficient Statistics The equivalence for inference of positive multiples of the likelihood function leads to a useful equivalence amongst possible data values coming from the same model. 
For s2 for some example, suppose data values s1 and s2 are such that L c 0 From the point of view of likelihood, we are indifferent as to whether we obtained the data s1 or the data s2 as they lead to the same likelihood ratios. cL s1 This leads to the definition of a sufficient statistic. Chapter 6: Likelihood Inference 303 Definition 6.1.1 A function T defined on the sample space S is called a sufficient statistic for the model if, whenever T s1 T s2 then L s1 c s1 s2 L s2 for some constant c s1 s2 0 The terminology is motivated by the fact that we need only observe the value t for the function T as we can pick any value s T 1 t s : T s t and use the likelihood based on s All of these choices give the same likelihood ratios. Typically, T s will be of lower dimension than s so we can consider replacing s by T s as a data reduction which simplifies the analysis somewhat. We illustrate the computation of a sufficient statistic in a simple context. EXAMPLE 6.1.6 Suppose that S 1 2 3 4 given by the following table. a b and the two probability distributions are e.g., L a 2 1 4), so the Then L 0 1 data values in 2 3 4 all give the same likelihood ratios. Therefore, T : S given by T 1 1 is a sufficient statistic. The model T 4 has simplified a bit, as now the sample space for T has only two elements instead of four for the original model. 1 6 and L b 2 0 and T 2 T 3 The following result helps identify sufficient statistics. Theorem 6.1.1 (Factorization theorem) If the density (or probability function) for a model factors as f , where g and h are nonnegative, then T is a sufficient statistic. h s g T s s PROOF By hypothesis, it is clear that, when T s1 T s2 we have L s1 h s1 g T s1 h s1 g T s1 h s2 g T s2 h s2 g T s2 h s1 h s2 g T s2 because g T s1 h s2 g T s2 c s1 s2 L s2 Note that the name of this result is motivated by the fact that we have factored f as a product of two functions. The important point about a sufficient statistic T is that we are indifferent, at least when considering inferences about between observing the full data s or the value of T s . We will see in Chapter 9 that there is information in that is useful when we want to check assumptions. the data, beyond the value of T s 304 Section 6.1: The Likelihood Function Minimal Sufficient Statistics Given that a sufficient statistic makes a reduction in the data, without losing relevant information in the data for inferences about we look for a sufficient statistic that makes the greatest reduction. Such a statistic is called a minimal sufficient statistic. Definition 6.1.2 A sufficient statistic T for a model is a minimal sufficient statistic, whenever the value of T s can be calculated once we know the likelihood function L s . So a relevant likelihood function can always be obtained from the value of any suffi­ cient statistic T but if T is minimal sufficient as well, then we can also obtain the value of T from any likelihood function. It can be shown that a minimal sufficient statistic gives the greatest reduction of the data in the sense that, if T is minimal sufficient and h U Note that the definitions U is sufficient, then there is a function h such that T of sufficient statistic and minimal sufficient statistic depend on the model, i.e., different models can give rise to different sufficient and minimal sufficient statistics. While the idea of a minimal sufficient statistic is a bit subtle, it is usually quite simple to find one, as the following examples illustrate. 
EXAMPLE 6.1.7 Location Normal Model By the factorization theorem we see immediately, from the discussion in Example 6.1.4, that x is a sufficient statistic. Now any likelihood function for this model is a positive multiple of exp n 2 2 0 x 2 . Notice that any such function of its maximum, namely, at function for this model, and it is therefore a minimal sufficient statistic. is completely specified by the point where it takes x. So we have that x can be obtained from any likelihood EXAMPLE 6.1.8 Location­Scale Normal Model xn Suppose that x1 R1 and Examples 5.3.4 and 5.5.6. is a sample from an N 0 are unknown. Recall the discussion and application of this model in 2 distribution in which The parameter in this model is two­dimensional and is given by R1 0 Therefore, the likelihood function is given by 2 L x1 xn 2 2 n 2 exp n 2 exp xi 2 2 exp n 1 2 2 s2 . We see immediately, from the factorization theorem, that x s2 is a sufficient statistic. xn is maximized, as a func­ at x we have that Now, fixing 2, any positive multiple of L x. This is independent of x1 2. Fixing tion of , at L x 2 x1 xn 2 2 n 2 exp n 1 2 2 s2 Chapter 6: Likelihood Inference 305 is maximized, as a function of cause ln is a strictly increasing function. Now 2 at the same point as ln L x 2 x1 xn be­ ln L x 2 x 2 n 1 2 2 s2 ln 2 n 2 n 1 2 4 s2. 2 n 2 2 Setting this equal to 0 yields the solution 2 n 1 s2, n which is a 1–1 function of s2 So, given any likelihood function for this model, we can compute x s2 , which establishes that x s2 is a minimal sufficient statistic for the model. In fact, the likelihood is maximized at x 2 (Problem 6.1.22). EXAMPLE 6.1.9 Multinomial Models We saw in Example 6.1.5 that the likelihood function for a sample is given by (6.1.2). This makes clear that if two different samples have the same counts, then they have the same likelihood, so the counts x1 x2 x3 comprise a sufficient statistic. Now it turns out that this likelihood function is maximized by taking 1 2 3 x1 n x2 n x3 n So, given the likelihood, we can compute the counts (the sample size n is assumed known). Therefore, x1 x2 x3 is a minimal sufficient statistic. Summary of Section 6.1 The likelihood function for a model and data shows how the data support the various possible values of the parameter. It is not the actual value of the likeli­ hood that is important but the ratios of the likelihood at different values of the parameter. A sufficient statistic T for a model is any function of the data s such that once we know the value of T s s (up to a positive constant multiple). A minimal sufficient statistic T for a model is any sufficient statistic such that s for the model and data, then we can once we know a likelihood function L determine T s . then we can determine the likelihood function L EXERCISES 6.1.1 Suppose a sample of n individuals is being tested for the presence of an antibody in their blood and that the number with the antibody present is recorded. Record an appropriate statistical model for this situation when we assume that the responses from 306 Section 6.1: The Likelihood Function 30 345 , where individuals are independent. If we have a sample of 10 and record 3 positives,
graph a representative likelihood function. 6.1.2 Suppose that suicides occur in a population at a rate p per person year and that p is assumed completely unknown. If we model the number of suicides observed in a population with a total of N person years as Poisson N p , then record a representative likelihood function for p when we observe 22 suicides with N 6.1.3 Suppose that the lifelengths (in thousands of hours) of light bulbs are distributed Exponential 5 2 for a sample of 20 0 is unknown. If we observe x light bulbs, record a representative likelihood function. Why is it that we only need to observe the sample average to obtain a representative likelihood? 6.1.4 Suppose we take a sample of n 100 students from a university with over 50 000 students enrolled. We classify these students as either living on campus, living off campus with their parents, or living off campus independently. Suppose we observe the counts x1 x2 x3 34 44 22 . Determine the form of the likelihood function for the unknown proportions of students in the population that are in these categories. 6.1.5 Determine the constant that makes the likelihood functions in Examples 6.1.2 and 6.1.3 equal. 6.1.6 Suppose that x1 distribution, where xn is a sample from the Bernoulli [0 1] is unknown. Determine the likelihood function and a minimal sufficient sta­ tistic for this model. (Hint: Use the factorization theorem and maximize the logarithm of the likelihood function.) 6.1.7 Suppose x1 0 is unknown. Determine the likelihood function and a minimal sufficient statistic for this model. (Hint: the Factorization Theorem and maximization of the logarithm of the likelihood function.) 6.1.8 Suppose that a statistical model is comprised of two distributions given by the following table: xn is a sample from the Poisson distribution where f1 s f2 s (a) Plot the likelihood function for each possible data value s (b) Find a sufficient statistic that makes a reduction in the data. 6.1.9 Suppose a statistical model is given by f1 f2 , where fi is an N i 1 distribu­ tion. Compute the likelihood ratio L 1 0 L 2 0 and explain how you interpret this number. 6.1.10 Explain why a likelihood function can never take negative values. Can a likeli­ hood function be equal to 0 at a parameter value? 6.1.11 Suppose we have a statistical model : 1 true that 0 L 6.1.12 Suppose that x1 distribution, where [0 1] is unknown. Determine the likelihood function and a minimal sufficient sta­ tistic for this model. (Hint: Use the factorization theorem and maximize the logarithm of the likelihood.) xn is a sample from a Geometric [0 1] and we observe x0 Is it 1? Explain why or why not. x0 d f Chapter 6: Likelihood Inference 307 6.1.13 Suppose you are told that the likelihood of a particular parameter value is 109 Is it possible to interpret this number in any meaningful way? Explain why or why not. 6.1.14 Suppose one statistician records a likelihood function as 2 for [0 1] while another statistician records a likelihood function as 100 2 for [0 1] Explain why these likelihood functions are effectively the same. PROBLEMS 6.1.15 Show that T defined in Example 6.1.6 is a minimal sufficient statistic. (Hint: Show that once you know the likelihood function, you can determine which of the two possible values for T has occurred.) 6.1.16 Suppose that S 1 2 3 4 butions are given by the following table. a b c , where the three probability distri Determine a minimal sufficient statistic for this model. 
Is the minimal sufficient statis­ tic in Example 6.1.6 sufficient for this model? 6.1.17 Suppose that x1 is a sample from the N 2 0 distribution where xn R1 is unknown. Determine the form of likelihood intervals for this model. xn Rn is a sample from f , where 6.1.18 Suppose that x1 known. Show that the order statistics x 1 the model. 6.1.19 Determine a minimal sufficient statistic for a sample of n from the rate gamma model, i.e., is un­ comprise a sufficient statistic for x n f x x 0 1 exp x 0 0 0 0 and where 0 for x 6.1.20 Determine the form of a minimal sufficient statistic for a sample of size n from the Uniform[0 ] model where 0 is fixed. 0 2] model where 1 6.1.21 Determine the form of a minimal sufficient statistic for a sample of size n from the Uniform[ 1 2 6.1.22 For the location­scale normal model, establish that the point where the likeli­ 2 as defined in Example 6.1.8. (Hint: Show that hood is maximized is given by x 2 and then x , with respect to 2, is negative at the second derivative of ln L x argue that x 6.1.23 Suppose we have a sample of n from a Bernoulli [0 0 5]. Determine a minimal sufficient statistic for this model. (Hint: It is easy to establish the sufficiency of x, but this point will not maximize the likelihood when x 0 5, so x cannot be obtained from the likelihood by maximization, as in Exercise 6.1.6. In general, consider the second derivative of the log of the likelihood at any point 2 2 is the maximum.) distribution where 308 Section 6.2: Maximum Likelihood 0 0 5 and note that knowing the likelihood means that we can compute any of distri­ [0 1 3] is unknown. Determine the form of the likelihood function x2 is a minimal sufficient statistic where xi is the number of sample its derivatives at any values where these exist.) 6.1.24 Suppose we have a sample of n from the Multinomial 1 bution, where and show that x1 values corresponding to an observation in the ith category. (Hint: Problem 6.1.23.) 6.1.25 Suppose we observe s from a statistical model with two densities, f1 and f2 Show that the likelihood ratio T s is a minimal sufficient statistic. f1 s (Hint: Use the definition of sufficiency directly.) f2 s 1 2 3 CHALLENGES 6.1.26 Consider the location­scale gamma model, i.e., f x x 1 0 0 1 exp x 1 R1 0 is fixed. 0 and where 0 for x (a) Determine the minimal sufficient statistic for a sample of n when 0 1. (Hint: Determine where the likelihood is positive and calculate the partial derivative of the log of the likelihood with respect to .) 1. (Hint: (b) Determine the minimal sufficient statistic for a sample of n when 0 Use Problem 6.1.18, the partial derivative of the log of the likelihood with respect to , and determine where it is infinite.) DISCUSSION TOPICS 6.1.27 How important do you think it is for a statistician to try to quantify how much error there is in an inference drawn? For example, if an estimate is being quoted for some unknown quantity, is it important that the statistician give some indication about how accurate (or inaccurate) this inference is? 6.2 Maximum Likelihood Estimation In Section 6.1, we introduced the likelihood function L inferences about the unknown true value types of inferences discussed in Section 5.5.3 and start with estimation. 
s as a basis for making We now begin to consider the specific When we are interested in a point estimate of then a value s that maximizes L s is a sensible choice, as this value is the best supported by the data, i.e., L s s L s (6.2.1) for every Definition 6.2.1 We call likelihood estimator, and the value or MLE for short. : S satisfying (6.2.1) for every a maximum s is called a maximum likelihood estimate, Chapter 6: Likelihood Inference 309 s Notice that, if we use cL is also an MLE using this version of the likelihood. So we can use any version of the likelihood to calculate an MLE. s as the likelihood function, for fixed c 0, then EXAMPLE 6.2.1 Suppose the sample space is S model is given by the following table. 1 2 3 , the parameter space is 1 2 , and the s 1 s f1 s f2 Further suppose we observe s 1 So, for example, we could be presented with one of two bowls of chips containing these proportions of chips labeled 1, 2, and 3. We draw a chip, observe that it is labelled 1, and now want to make inferences about which bowl we have been presented with. In this case, the MLE is given by If we had instead observed s 3 1 1 2, then 1 since 0 3 L 1 1 0 1 2; if we had observed s 2 L 2 1 3 then Note that an MLE need not be unique. For example, in Example 6.2.1, if f2 was 0 3 then an MLE is as given there, 0 7 and f2 3 0 f2 2 defined by f2 1 3 but putting 2 also gives an MLE. The MLE has a very important invariance property. Suppose we reparameterize a defined on . By this we mean that, instead of labelling the model via a 1–1 function individual distributions in the model using For example, in Example 6.2.1, we could take a b So the model is now given by g : value : b so that for the unique and a new parameter space We have a new parameter . Nothing has changed about the probability distributions in the statistical model, a and where g , we use 1 such that 2 f only the way they are labelled. We then have the following result. Theorem 6.2.1 If 1–1 function defined on ization. s is an MLE for the original parameterization and, if is a is an MLE in the new parameter­ , then s s PROOF If we select the likelihood function for the new parameterization to be L and the likelihood for the original parameterization to be L g s s s then we have for every establishes the result. This implies that L s s L s for every and Theorem 6.2.1 shows that no matter how we parameterize the model, the MLE behaves in a consistent way under the reparameterization. This is an important property, and not all estimation procedures satisfy this. 310 Section 6.2: Maximum Likelihood 6.2.1 Computation of the MLE An important issue is the computation of MLEs. In Example 6.2.1, we were able to do this by simply examining the table giving the distributions. With more complicated models, this approach is not possible. In many situations, however, we can use the methods of calculus to compute s be a continuously differentiable function of so that we can use optimization methods from calculus s For this we require that f Rather than using the likelihood function, it is often convenient to use the log­ likelihood function. Definition 6.2.2 For likelihood function L s defined on , is given by l ln L s s , the log­likelihood function l s s for every Note that ln x is a 1–1 increa
sing function of x L can maximize l likelihood arises from the fact that, for a sample s1 likelihood function is given by s So we s instead when computing an MLE. The convenience of the log­ the 0 and this implies that L if and only if l s for every from f sn s s s : l L s1 sn whereas the log­likelihood is given by l s1 sn n i 1 f si n i 1 ln f si It is typically much easier to differentiate a sum than a product. Because we are going to be differentiating the log­likelihood, it is convenient to s of a model to is a give a name to this derivative. We define the score function S be the derivative of its log­likelihood function whenever this exists. So when one­dimensional real­valued parameter, then S s l s provided this partial derivative exists (see Appendix A.5 for a definition of partial deriv­ ative). We restrict our attention now to the situation in which is one­dimensional. To obtain the MLE, we must then solve the score equation S s 0 (6.2.2) for Of course, a solution to (6.2.2) is not necessarily an MLE, because such a point may be a local minimum or only a local maximum rather than a global maximum. To guarantee that a solution s is at least a local maximum, we must also check that S s 2l s 2 s 0 s (6.2.3) Chapter 6: Likelihood Inference 311 Then we must evaluate l maximum. s at each local maximum in order to determine the global Let us compute some MLEs using calculus. EXAMPLE 6.2.2 Location Normal Model Consider the likelihood function L x1 xn exp n 2 2 0 x 2 obtained in Example 6.1.4 for a sample x1 xn from the N R1 is unknown and 2 l and the score function is 0 is known. The log­likelihood function is then n 2 2 0 x1 xn x 2 2 0 model where S x1 xn The score equation is given by n 2 0 x n 2 0 x 0 Solving this for local maximum, we calculate gives the unique solution x1 xn x To check that this is a S x1 xn n 2 0 x which is negative, and thus indicates that x is a local maximum. Because we have only one local maximum, it is also the global maximum and we have indeed obtained the MLE. EXAMPLE 6.2.3 Exponential Model Suppose that a lifetime is known to be distributed Exponential 1 unknown. Then based on a sample x1 xn , the likelihood is given by where 0 is L x1 xn 1 n exp nx the log­likelihood is given by l x1 xn n ln nx and the score function is given by S x1 xn n nx 2 312 Section 6.2: Maximum Likelihood Solving the score equation gives x1 xn x and because x 0, S x1 xn n 2 nx 3 2 x x n x 2 0 so x is indeed the MLE. In both examples just considered, we were able to derive simple formulas for the MLE. This is not always possible. Consider the following example. EXAMPLE 6.2.4 Consider a population in which individuals are classified according to one of three types labelled 1, 2, and 3, respectively. Further suppose that the proportions of individuals falling in these categories are known to follow the law p1 2 where 2 p3 p2 1 [0 5 1 2] [0 0 618 03] is unknown. Here, pi denotes the proportion of individuals in the i th class Note that the requirement that 0 and the precise using the formula for the roots bound is obtained by solving of a quadratic. Relationships like this, amongst the proportions of the distribution of a categorical variable, often arise in genetics. For example, the categorical variable might serve to classify individuals into different genotypes. 
1 imposes the upper bound on 2 0 for 1 2 For a sample of n (where n is small relative to the size of the population so that we can assume observations are i.i.d.), the likelihood function is given by L x1 xn x1 2x2 1 2 x3 where xi denotes the sample count in the ith class. The log­likelihood function is then l s1 sn x1 2x2 ln x3 ln 1 2 , and the score function is S s1 sn x1 2x2 x3 1 1 2 2 . The score equation then leads to a solution being a root of the quadratic x1 2x2 1 x1 2x2 2x3 x3 2 2 x1 2 2 2x2 x3 x1 2x2 . Using the formula for the roots of a quadratic, we obtain 1 2x2 2x3 2 x1 x1 2x2 x3 5x 2 1 20x1x2 10x1x3 20x 2 2 20x2x3 x 2 3 Notice that the formula for the roots does not determine the MLE in a clear way. In fact, we cannot even tell if either of the roots lies in [0 1]! So there are four possible Chapter 6: Likelihood Inference 313 values for the MLE at this point — either of the roots or the boundary points 0 and 0 61803. We can resolve this easily in an application by simply numerically evaluating the 25 then the xn likelihood at the four points. For example, if x1 roots are 0 47847 We can see this graphically in the plot of the log­likelihood provided in Fig­ ure 6.2.1. 1 28616 and 0 47847 so it is immediate that the MLE is 5 and x3 70 x2 x1 0.4 0.45 0.5 0.55 0.6 theta theta ­90 ­95 ­100 ­105 ­110 ­115 ­120 lnL lnL Figure 6.2.1: The log­likelihood function in Example 6.2.4 when x1 x3 25. 70 x2 5 and In general, the score equation (6.2.2) must be solved numerically, using an iterative routine like Newton–Raphson. Example 6.2.4 demonstrates that we must be very care­ ful not to just accept a solution from such a procedure as the MLE, but to check that the fundamental defining property (6.2.1) is satisfied. We also have to be careful that the necessary smoothness conditions are satisfied so that calculus can be used. Consider the following example. EXAMPLE 6.2.5 Uniform[0 Suppose x1 known. Then the likelihood function is given by ] Model xn is a sample from the Uniform[0 ] model where 0 is un­ L x1 xn xi xi for i 1 for some i n n 0 n I[x n where x n is the largest order statistic from the sample. graphed this function when n occurs at x n ; we cannot obtain this value via differentiation, as L differentiable there. In Figure 6.2.2, we have 1 916 Notice that the maximum clearly xn is not 10 and x n x1 314 Section 6.2: Maximum Likelihood 0.0015 L 0.0010 0.0005 0.0000 0 1 2 3 4 5 theta Figure 6.2.2: Plot of the likelihood function in Example 6.2.5 when n x 10 1 916. 10 and The lesson of Examples 6.2.4 and 6.2.5 is that we have to be careful when com­ puting MLEs. We now look at an example of a two­dimensional problem in which the MLE can be obtained using one­dimensional methods. EXAMPLE 6.2.6 Location­Scale Normal Model Suppose that x1 xn is a sample from an N and R1 0 are unknown. The parameter in this model is two­dimensional, given by 2 2 distribution, where The likelihood function is then given by R1 0 L 2 x1 xn 2 2 n 2 exp n 2 2 x 2 exp n 1 2 2 s2 as shown in Example 6.1.8. The log­likelihood function is given by l 2 x1 xn n 2 ln 2 n 2 ln s2 (6.2.4) As discussed in Example 6.1.8, it is clear that, for fixed 2, (6.2.4) is maximized, as a 2, so this must be the first function of coordinate of the MLE. x. Note that this does not involve by Substituting x into (6.2.4), we obtain n 2 ln 2 n 2 ln 2 n 1 2 2 s2, (6.2.5) and the second coordinate of the MLE must be the value of Differentiating (6.2.5) with respect to 2 and setting this equal to 0 gives 2 that maximizes (6.2.5). 
n 2 2 n 2 1 2 2 s2 0 (6.2.6) Chapter 6: Likelihood Inference 315 Solving (6.2.6) for 2 leads to the solution n 2 1 s2 n 1 n n i 1 xi x 2 Differentiating (6.2.6) with respect to 2 and substituting in 2 we see that the second derivative is negative, hence 2 is a point where the maximum is attained. Therefore, we have shown that the MLE of 2 is given by x 1 n n i 1 xi x 2 In the following section we will show that this result can also be obtained using multi­ dimensional calculus. s So far we have talked about estimating only the full parameter for a model. What defined about estimating a general characteristic of interest ? Perhaps the obvious answer here is to use the estimate on the parameter space s where This is sometimes referred to as the plug­ s is an MLE of Notice, however, that the plug­in MLE is not necessarily a true MLE, in and that takes then Theorem 6.2.1 in MLE of the sense that we have a likelihood function for a model indexed by its maximum value at establishes that s is a true MLE but not otherwise. is a 1–1 function defined on for some function If s If is not 1–1, then we can often find a complementing function defined on so that is a 1–1 function of . Then, by Theorem 6.2.1, s s s s is the joint MLE, but perform badly, as it ignores the information in example illustrates this phenomenon. s is still not formally an MLE. Sometimes a plug­in MLE can An about the true value of s EXAMPLE 6.2.7 Sum of Squared Means Suppose that Xi N i 1 for i i completely unknown. So here, 2 n. 2 1 The log­likelihood function is given by to estimate 1 1 l x1 xn Clearly this is maximized by is given by n i 1 x 2 i . Now observe that x1 xn n and that these are independent with the Rn. Suppose we want n and 1 2 n i 1 x1 xi 2. i xn . So the plug­in MLE of Var Xi 2 i n , 316 Section 6.2: Maximum Likelihood where E g refers to the expectation of g s when s likely that n i 1 x 2 to use i f So when n is large, it is is far from the true value. An immediate improvement in this estimator is n instead. There have been various attempts to correct problems such as the one illustrated in Example 6.2.7. Typically, these involve modifying the likelihood in some way. We do not pursue this issue further in this text but we do advise caution when using plug­in by x and 2 by s2, they MLEs. Sometimes, as in Example 6.2.6, where we estimate seem appropriate; other times, as in Example 6.2.7, they do not. 6.2.2 The Multidimensional Case (Advanced) Rk is multidimensional, 1 1 The likelihood and log­likelihood are then defined just as before, but the We now consider the situation in which i.e., k score function is now given by provided all these partial derivatives exist. For the score equation, we get and we must solve this k­dimensional equation for k This is often much more difficult than in the one­dimensional case, and we typically have to resort to numerical methods. 1 A necessary and sufficient condition for to be a local maximum, when the log­likelihood has continuous second partial derivatives, is that the matrix of second partial derivatives of the log­likelihood, evaluated at k , must be negative de
finite (equivalently, all of its eigenvalues must be negative). We then must evaluate the likelihood at each of the local maxima obtained to determine the global maximum or MLE. 1 1 k We will not pursue the numerical computation of MLEs in the multidimensional case any further here, but we restrict our attention to a situation in which we carry out the calculations in closed form. EXAMPLE 6.2.8 Location­Scale Normal Model We determined the log­likelihood function for this model in (6.2.4). The score function is then Chapter 6: Likelihood Inference 317 S 2 x1 xn The score equation is S S x1 x1 2 xn xn s2 . s2 0 0 , and the first of these equations immediately implies that for into the second equation and solving for 2 leads to the solution x Substituting this value 2 n 1 s2 n 1 n n i 1 xi x 2 From Example 6.2.6, we know that this solution does indeed give the MLE. Summary of Section 6.2 that max­ that is best supported by the An MLE (maximum likelihood estimator) is a value of the parameter imizes the likelihood function. It is the value of model and data. We can often compute an MLE by using the methods of calculus. When ap­ plicable, this leads to solving the score equation for either explicitly or using numerical algorithms. Always be careful to check that these methods are ap­ plicable to the specific problem at hand. Furthermore, always check that any solution to the score equation is a maximum and indeed an absolute maximum. EXERCISES 6.2.1 Suppose that S tions are given by the following table. 1 2 3 4 a b , where the two probability distribu for each possible data value. Determine the MLE of 6.2.2 If x1 unknown, then determine the MLE of 6.2.3 If x1 unknown, then determine the MLE of . 2. xn is a sample from a Bernoulli xn is a sample from a Bernoulli distribution, where [0 1] is distribution, where [0 1] is 318 Section 6.2: Maximum Likelihood xn is a sample from a Poisson 6.2.4 If x1 unknown, then determine the MLE of 6.2.5 If x1 0 xn is a sample from a Gamma 0 is unknown, then determine the MLE of . distribution, where 0 is distribution, where 0 0 and . xn is the result of independent tosses of a coin where we 6.2.6 Suppose that x1 toss until the first head occurs and where the probability of a head on a single toss is 0 1]. Determine the MLE of . . ) xn distribution (see Problem is a sample from a Pareto xn is a sample from a Beta distribution (see Problem 2.4.19), xn is a sample from a Weibull distribution (see Problem 2.4.20), 0 is unknown, then determine the MLE of 0 is unknown, then determine the MLE of 0 is unknown, then determine the score equation for the MLE of 1 distribution (see Problem 2.4.24) is a . (Hint: Assume . xn is a sample from a Log­normal 0 is unknown, then determine the MLE of 6.2.7 If x1 where differentiable function of 6.2.8 If x1 where 6.2.9 If x1 where 6.2.10 If x1 2.6.17), where 6.2.11 Suppose you are measuring the volume of a cubic box in centimeters by taking repeated independent measurements of one of the sides. Suppose it is reasonable to as­ is unknown sume that a single measurement follows an N and 2 as 3.2 0 is known. Based on a sample of measurements, you obtain the MLE of cm. What is your estimate of the volume of the box? How do you justify this in terms of the likelihood function? 6.2.12 If x1 unknown and 0 is known, then determine the MLE of from the plug­in MLE of 6.2.13 Explain why it is not possible that the function 3 exp is a likelihood function. 
6.2.14 Suppose you are told that a likelihood function has local maxima at the points 2 2 4 6 and 9.2, as determined using calculus. Explain how you would determine 2 computed using the location­scale normal model? 5 3 2 for 0 is 2. How does this MLE differ is a sample from an N 0 2 0 distribution, where 2 distribution, where R1 xn 2 . the MLE. 6.2.15 If two functions of are equivalent versions of the likelihood when one is a positive multiple of the other, then when are two log­likelihood functions equivalent? 6.2.16 Suppose you are told that the likelihood of 2 is given by 1 4 Is this the probability that 2? Explain why or why not. at COMPUTER EXERCISES 2 2 2 R1 Numerically approximate the MLE by evaluating this function at 1000 1 2 2 3 exp 10 10] Also plot the likelihood function. 5 2 2 R1 Numerically approximate the MLE by evaluating this function at 1000 10 10] Also plot the likelihood function. Comment on the 1 2 2 3 exp 6.2.17 A likelihood function is given by exp for equispaced points in 6.2.18 A likelihood function is given by exp for equispaced points in form of likelihood intervals. Chapter 6: Likelihood Inference 319 PROBLEMS 1 , and 1 6.2.19 (Hardy–Weinberg law) The Hardy–Weinberg law in genetics says that the pro­ 2, respectively, portions of genotypes A A, Aa, and aa are 2, 2 [0 1] Suppose that in a sample of n from the population (small relative where to the size of the population), we observe x1 individuals of type A A, x2 individuals of type Aa and x3 individuals of type aa (a) What distribution do the counts X1 X2 X3 follow? (b) Record the likelihood function, the log­likelihood function, and the score function for (c) Record the form of the MLE for 6.2.20 If x1 known, determine the MLE of the probability content of the interval your answer. 6.2.21 If x1 known, determine the MLE of 6.2.22 Prove that, if statistic for the model, then 6.2.23 Suppose that X1 X2 X3 where s is the MLE for a model for response s and if T is a sufficient s is also the MLE for the model for T s . xn is a sample from an N 3 (see Example 6.1.5), R1 is un­ 1 . Justify is a sample from an N 1 distribution where 1 distribution where Multinomial n 0 is un­ xn and we observe X1 X2 X3 (a) Determine the MLE of 1 (b) What is the plug­in MLE of 6.2.24 If x1 x1 x2 x3 . 2 1 3 . 2 2 2 3? xn is a sample from a Uniform[ 1 2] distribution with 1 2 R2 : 1 2 determine the MLE of determine the maximum over 1 (Hint: You cannot use calculus. 2 . 1 when 2 is fixed, and then vary 2.) Instead, directly COMPUTER PROBLEMS 6.2.25 Suppose the proportion of left­handed individuals in a population is on a simple random sample of 20, you observe four left­handed individuals. (a) Assuming the sample size is small relative to the population size, plot the log­ likelihood function and determine the MLE. (b) If instead the population size is only 50, then plot the log­likelihood function and determine the MLE. (Hint: Remember that the number of left­handed individuals fol­ lows a hypergeometric distribution. This forces to be of the form i 50 for some integer i between 4 and 34. From a tabulation of the log­likelihood, you can obtain the MLE.) . Based 320 Section 6.3: Inferences Based on the MLE CHALLENGES 6.2.26 If x1 xn is a sample from a distribution with density f x 1 2 exp x for x . (Hint: You cannot use calculus. Instead, maximize the log­likelihood in each of the intervals R1 is unknown, then determine the MLE of R1 and where x 1 , [x 1 x 2 etc.). 
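The computer exercises and problems above (6.2.17, 6.2.18, and 6.2.25) ask for a numerical approximation to an MLE obtained by evaluating the likelihood over a grid of parameter values. As an illustration of that technique (not a solution to any particular exercise), here is a minimal Python sketch, assuming numpy is available, applied to the genetics model of Example 6.2.4, whose log-likelihood is $l(\theta) = (x_1 + 2x_2)\ln\theta + x_3\ln(1 - \theta - \theta^2)$; the counts (70, 5, 25) are those used for Figure 6.2.1, and the grid size is an arbitrary choice.

# Grid approximation of the MLE for Example 6.2.4 (category probabilities
# theta, theta^2, and 1 - theta - theta^2), with counts x = (70, 5, 25).
import numpy as np

x1, x2, x3 = 70, 5, 25
upper = (np.sqrt(5.0) - 1.0) / 2.0        # theta must satisfy 1 - theta - theta^2 >= 0

theta = np.linspace(1e-6, upper - 1e-6, 1000)
loglik = (x1 + 2 * x2) * np.log(theta) + x3 * np.log(1.0 - theta - theta**2)

print("approximate MLE:", round(theta[np.argmax(loglik)], 5))   # close to 0.47847

The same grid idea carries over directly to Exercises 6.2.17 and 6.2.18, with the grid taken over the interval specified there.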
DISCUSSION TOPICS

6.2.27 One approach to quantifying the uncertainty in an MLE $\hat{\theta}(s)$ is to report the MLE together with a likelihood interval $\{\theta : L(\theta \mid s) \geq c\, L(\hat{\theta}(s) \mid s)\}$ for some constant $c \in (0, 1)$. What problems do you see with this approach? In particular, how would you choose $c$?

6.3 Inferences Based on the MLE

In Table 6.3.1, we have recorded $n = 66$ measurements of the speed of light (passage time recorded as deviations from 24,800 nanoseconds, between two mirrors 7400 meters apart) made by A. A. Michelson and S. Newcomb in 1882.

28  22  36  26  28  28  26  24  32  30  27
24  33  21  36  32  31  25  24  25  28  36
27  32  34  30  25  26  26  25 -44  23  21
30  33  29  27  29  28  22  26  27  16  31
29  36  32  28  40  19  37  23  32  29  -2
24  25  27  24  16  29  20  28  27  39  23

Table 6.3.1: Speed of light measurements.

Figure 6.3.1 is a boxplot of these data with the variable labeled as $x$. Notice there are two outliers, at $x = -44$ and $x = -2$. We will presume there is something very special about these observations and discard them for the remainder of our discussion.

Figure 6.3.1: Boxplot of the data values in Table 6.3.1.
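For readers who want to check these numbers computationally, the following is a minimal Python sketch, assuming numpy is available (it is not part of the text), that summarizes the Table 6.3.1 data and flags the boxplot outliers using the usual 1.5 x IQR rule; the negative signs on -44 and -2 follow the outlier discussion above.

# Summaries and a boxplot-style outlier check for the data in Table 6.3.1.
import numpy as np

x = np.array([28, 22, 36, 26, 28, 28, 26, 24, 32, 30, 27,
              24, 33, 21, 36, 32, 31, 25, 24, 25, 28, 36,
              27, 32, 34, 30, 25, 26, 26, 25, -44, 23, 21,
              30, 33, 29, 27, 29, 28, 22, 26, 27, 16, 31,
              29, 36, 32, 28, 40, 19, 37, 23, 32, 29, -2,
              24, 25, 27, 24, 16, 29, 20, 28, 27, 39, 23], dtype=float)

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr       # standard boxplot fences
outliers = x[(x < low) | (x > high)]

print("n =", x.size)
print("mean =", round(x.mean(), 3), " sd =", round(x.std(ddof=1), 3))
print("outliers:", outliers)                      # the values -44 and -2 are flagged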
easure of the i.e., we are not restricting ourselves to pling distributions are about accuracy of a general estimator T s of plug­in MLEs, is the mean­squared error. 322 Section 6.3: Inferences Based on the MLE Definition 6.3.1 The mean­squared error (MSE) of the estimator T of is given by MSE T 2 for each E T R1 Clearly, the smaller MSE T is, the more concentrated the sampling distribution of T s is about the value Looking at MSE T as a function of is as an estimate of the true value of and thus the true value of MSE T squared error at the true value. Often gives us some idea of how reliable T s Because we do not know the true value of statisticians record an estimate of the mean­ MSE s T is used for this. In other words, we evaluate MSE T at accuracy of the estimate T s . s as a measure of the The following result gives an important identity for the MSE. Theorem 6.3.1 If that E T exists, then R1 and T is a real­valued function defined on S such MSE T Var T E T 2 (6.3.1) PROOF We have 2E T E T E T Var T E T 2 E T 2 because The second term in (6.3.1) is the square of the bias in the estimator T Definition 6.3.2 The bias in the estimator T of whenever E T exists. When the bias in an estimator T is 0 for every , we call T for every an unbiased estimator of , i.e., T is unbiased whenever E T is given by E T Note that when the bias in an estimator is 0, then the MSE is just the variance. Unbiasedness tells us that, in a sense, the sampling distribution of the estimator is centered on the true value. For unbiased estimators, MSE s T Var s T Chapter 6: Likelihood Inference 323 and Sd s T Var s T is an estimate of the standard deviation of T and is referred to as the standard error of the estimate T s . As a principle of good statistical practice, whenever we quote an estimate of a quantity, we should also provide its standard error — at least when we have an unbiased estimator, as this tells us something about the accuracy of the estimate. We consider some examples. EXAMPLE 6.3.1 Location Normal Model Consider the likelihood function L x1 xn exp n 2 2 0 x 2 obtained in Example 6.1.4 for a sample x1 xn from the N 2 0 model, where The MLE R1 is unknown and 2 0 0 is known. Suppose we want to estimate of was computed in Example 6.2.2 to be x In this case, we can determine the sampling distribution of the MLE exactly from N the results in Section 4.6. We have that X 2 0 n and so X is unbiased, and MSE X Var X 2 0 n which is independent of standard error of the estimate is given by So we do not need to estimate the MSE in this case The Sd X 0 n Note that the standard error decreases as the population variance the sample size n increases 2 0 decreases and as EXAMPLE 6.3.2 Bernoulli Model Suppose x1 unknown. Suppose we wish to estimate . The likelihood function is given by xn is a sample from a Bernoulli distribution where [0 1] is L x1 xn nx 1 n 1 x and the MLE of mances. We have E X of is x (Exercise 6.2.2), the proportion of successes in the n perfor­ [0 1] so the MLE is an unbiased estimator for every Therefore, MSE X Var X 1 n and the estimated MSE is MSE X x 1 x n 324 Section 6.3: Inferences Based on the MLE The standard error of the estimate x is then given by Sd X x 1 x n Note how this standard error is quite different from the standard error of x in Example 6.3.1. 
EXAMPLE 6.3.3 Application of the Bernoulli Model A polling organization is asked to estimate the proportion of households in the pop­ ulation in a specific district who will participate in a proposed recycling program by separating their garbage into various components. The pollsters decided to take a sam­ ple of n 1000 from the population of approximately 1.5 million households (we will say more on how to choose this number later). Each respondent will indicate either yes or no to a question concerning their par­ ticipation. Given that the sample size is small relative to the population size, we can [0 1] is the pro­ assume that we are sampling from a Bernoulli portion of individuals in the population who will respond yes. model where After conducting the sample, there were 790 respondents who replied yes and 210 is who responded no. Therefore, the MLE of and the standard error of the estimate is x 790 1000 0 79 x 1 x 1000 0 79 1 0 79 1000 0 01288 Notice that it is not entirely clear how we should interpret the value 0 01288 Does it mean our estimate 0 79 is highly accurate, modestly accurate, or not accurate at all? We will discuss this further in Section 6.3.2. EXAMPLE 6.3.4 Location­Scale Normal Model Suppose that x1 is a sample from an N xn and 2 R1 0 are unknown. The parameter in this model is given by 0 . Suppose that we want to estimate 2 2 distribution where i.e., just the first R1 2 coordinate of the full model parameter. In Example 6.1.8, we determined that the likelihood function is given by L 2 x1 xn 2 2 n 2 exp n 2 2 x 2 exp n 1 2 2 s2 In Example 6.2.6 we showed that the MLE of is n x n 1 s2 Furthermore, from Theorem 4.6.6, the sampling distribution of the MLE is given by 1 S2 X 2 n independent of n 2 n 1 . N 2 Chapter 6: Likelihood Inference 325 The plug­in MLE of is x This estimator is unbiased and has MSE X Var X 2 n Since 2 is unknown we estimate MSE X by MSE X n 1 n s2 n 1 s2 n n2 s2 n The value s2 n is commonly used instead of MSE X , because (Corollary 4.6.2) E S2 2 i.e., S2 is an unbiased estimator of error of the estimate x 2. The quantity s n is referred to as the standard EXAMPLE 6.3.5 Application of the Location­Scale Normal Model In Example 5.5.6, we have a sample of n calculated x obtained the estimate s 64 517 is s interpreting exactly what this number means in terms of the accuracy of the estimate. Therefore, the standard error of the estimate x 0 43434 As in Example 6.3.3, we are faced with 30 heights (in inches) of students. We . In addition, we 64 517 as our estimate of the mean population height 2 379 of 30 2 379 30 Consistency of Estimators Perhaps the most important property that any estimator T of a characteristic can have is that it be consistent. Broadly speaking, this means that as we increase the amount of data we collect, then the sequence of estimates should converge to the true value of . To see why this is a necessary property of any estimation procedure, consider the finite population sampling context discussed in Section 5.4.1. When the sample size is equal to the population size, then of course we have the full information and can compute exactly every characteristic of the distribution of any measurement defined on the population. So it would be an error to use an estimation procedure for a characteristic of interest that did not converge to the true value of the characteristic as we increase the sample size. Fortunately, we have already developed the necessary mathematics in Chapter 4 to define precisely what we mean by consistency. 
Definition 6.3.3 A sequence of of estimates T1 T2 if Tn probability) for estimates T1 T2 as n for every as n P is said to be consistent (almost surely) for for every is said to be consistent (in A sequence of a s if Tn Notice that Theorem 4.3.1 says that if the sequence is consistent almost surely, then it is also consistent in probability. Consider now a sample x1 f n i 1 xi be the nth sample average as an estimator of and let Tn E X which from a model n 1 xn : 326 Section 6.3: Inferences Based on the MLE we presume exists. The weak and strong laws of large numbers immediately give us the consistency of the sequence T1 T2 We see immediately that this gives for the consistency of some of the estimators discussed in this section. In fact, Theorem 6.5.2 gives the consistency of the MLE in very general circumstances. Furthermore, Accordingly, we the plug­in MLE will also be consistent under weak restrictions on can think of maximum likelihood estimation as doing the right thing in a problem at least from the point of view of consistency. More generally, we should always restrict our attention to statistical procedures that perform correctly as the amount of data increases. Increasing the amount of data means that we are acquiring more information and thus reducing our uncertainty so that in the limit we know everything. A statistical procedure that was inconsistent would be potentially misleading. 6.3.2 Confidence Intervals While the standard error seems like a reasonable quantity for measuring the accuracy , its interpretation is not entirely clear at this point. It turns out of an estimate of that this is intrinsically tied up with the idea of a confidence interval. Consider the construction of an interval C s l s u s based on the data s that we believe is likely to contain the true value of To do this, we have to specify the lower endpoint l s and upper endpoint u s for each data value s How should we do this? One approach is to specify a probability [0 1] and then require that random interval C have the confidence property, as specified in the following definition. Definition 6.3.4 An interval C s l s u s if P We refer to C s P l s as the confidence level of the interval. is a u s ­confidence interval for for every So C is a probability that an interval either covers a particular instance of a value of . ­confidence interval for is in the interval is at least equal to if, whenever we are sampling from P the For a given data set, such or it does not. So note that it is not correct to say that of containing the true ­confidence region has probability If we choose to be a value close to 1, then we are highly confident that the R1 (a very big true value of is in C s Of course, we can always take C s interval!), and we are then 100% confident that the interval contains the true value. But this tells us nothing we did not already know. So the idea is to try to make use of the information in the data to construct an interval such that we have a high confidence, say, 0 99 that it contains the true value and is not any longer than necessary. We then interpret the length of the interval as a measure of how
accurately the data allow us to know the true value of 0 95 or Chapter 6: Likelihood Inference 327 z­Confidence Intervals Consider the following example, which provides one approach to the construction of confidence intervals. EXAMPLE 6.3.6 Location Normal Model and z­Confidence Intervals Suppose we have a sample x1 unknown and 2 0 6.3.1. Suppose we want a confidence interval for R1 is 0 is known. The likelihood function is as specified in Example 2 0 model, where xn from the N The reasoning that underlies the likelihood function leads naturally to the following restriction for such a region: If 1 C x1 xn and L 2 x1 xn L 1 x1 xn then we should also have lihood because the model and the data support conclude that C x1 1 is a plausible value, so is xn is of the form Therefore, C x1 2 2 xn . This restriction is implied by the like­ 1 Thus, if we 2 at least as well as C x1 xn : L x1 xn k x1 xn for some k x1 xn i.e., C x1 xn is a likelihood interval for . Then C x1 xn : exp n 2 2 0 x 2 k x1 xn : n 2 2 0 : x x 2 2 ln k x1 2 2 0 n ln k x1 xn xn x k x1 xn 0 n x k x1 xn 0 n where k x1 xn xn We are now left to choose k or equivalently k , so that the interval C is a ­ Perhaps the simplest choice is to try to choose k so that 2 ln k x1 xn is constant and is such that the interval as short as possible. Because confidence interval for k x1 Z X 0 n N 0 1 we have P P 1 C x1 xn 6.3.2) (6.3.3) 328 Section 6.3: Inferences Based on the MLE for every equality in (6.3.3) whenever R1 where is the N 0 1 cumulative distribution function. We have 1 k 2 th quantile of the N 0 1 distribution. and so k This is the smallest constant k satisfying (6.3.3). 2 where z denotes the z 1 We have shown that the likelihood interval given by 6.3.4) ­confidence interval for is an exact given by (6.3.2), they are called z­confidence intervals. For example, if we take 0 95 then 1 dix D), we obtain z0 975 of the form 0 975 and, from a statistical package (or Table D.2 in Appen­ 1 96 Therefore, in repeated sampling, 95% of the intervals As these intervals are based on the z­statistic, 2 will contain the true value of . x 1 96 0 n x 1 96 0 n for each of N This is illustrated in Figure 6.3.3. Here we have plotted the upper and lower end­ points of the 0 95­confidence intervals for 25 samples of size n 10 generated from an N 0 1 distribution. The theory says that when N is large, 0 In the plot, approximately 95% of these intervals will contain the true value coverage means that the lower endpoint (denoted by ) must be below the horizontal line at 0 and that the upper endpoint (denoted by ) must be above this horizontal line. We see that only the fourth and twenty­third confidence intervals do not contain 0, so , this proportion will converge to 23 25 0.95. 92% of the intervals contain 0. As 1 0 6 12 sample 18 24 Figure 6.3.3: Plot of 0.95­confidence intervals for 25 samples of size n 0 (lower endpoint 10 from an N 0 1 distribution. ) for N upper endpoint Notice that interval (6.3.4) is symmetrical about x. Accordingly, the half­length of this interval, z 1 2 0 n Chapter 6: Likelihood Inference 329 is a measure of the accuracy of the estimate x. The half­length is often referred to as the margin of error. From the margin of error, we now see how to interpret the standard error; the stan­ For ex­ 0 9974), dard error controls the lengths of the confidence intervals for the unknown ample, we know that with probability approximately equal to 1 (actually the interval [x n] contains the true value of . 
3 0 Example 6.3.6 serves as a standard example for how confidence intervals are often constructed in statistics. Basically, the idea is that we take an estimate and then look at the intervals formed by taking symmetrical intervals around the estimate via multiples of its standard error. We illustrate this via some further examples. EXAMPLE 6.3.7 Bernoulli Model Suppose that x1 xn is a sample from a Bernoulli ­confidence interval for is unknown and we want a have that the MLE is x (see Exercise 6.2.2) and the standard error of this estimate is [0 1] . Following Example 6.3.2, we distribution where x 1 x n For this model, likelihood intervals take the form n 1 x C x1 xn : nx 1 k x1 xn xn Again restricting to constant k we see that to determine these for some k x1 intervals, we have to find the roots of equations of the form nx 1 n 1 x k x1 xn While numerical root­finding methods can handle this quite easily, this approach is not to give a very tractable when we want to find the appropriate value of k x1 xn ­confidence interval. To avoid these computational complexities, it is common to use an approximate likelihood and confidence interval based on the central limit theorem. The central limit theorem (see Example 4.4.9) implies that n X 1 D N 0 1 as n 4.4.2), shows that . Furthermore, a generalization of the central limit theorem (see Section Therefore, we have lim n P z 1 2 lim 330 and Section 6.3: Inferences Based on the MLE 6.3.5) is an approximate the interval in Example 6.3.6, except that the standard error has changed. For example, if we want an approximate 0.95­confidence interval for ­confidence interval for Notice that this takes the same form as in Example 6.3.3, then based on the observed x 0 79 we obtain 0 79 1 96 0 79 1 0 79 1000 [0 76475 0 81525] The margin of error in this case equals 0 025245 so we can conclude that we know the true proportion with reasonable accuracy based on our sample. Actually, it may be that this accuracy is not good enough or is even too good. We will discuss methods for ensuring that we achieve appropriate accuracy in Section 6.3.5. The ­confidence interval derived here for is one of many that you will see rec­ ­confidence ommended in the literature. Recall that (6.3.5) is only an approximate and n may need to be large for the approximation to be accurate. In interval for and could be far from other words, the true confidence level for (6.3.5) will not equal is near 0 or 1, then n may need that value if n is too small. In particular, if the true to be very large. In an actual application, we usually have some idea of a small range of possible values a population proportion can take. Accordingly, it is advisable to carry out some simulation studies to assess whether or not (6.3.5) is going to provide an acceptable approximation for in that range (see Computer Exercise 6.3.21). t­Confidence Intervals Now we consider confidence intervals for unrealistic assumption that we know the population variance. in an N 2 model when we drop the 0 xn is a sample from an N 0 are unknown. The parameter in this model is given by . Suppose we want to form confidence intervals for EXAMPLE 6.3.8 Location­Scale Normal Model and t­Confidence Intervals 2 distribution, where Suppose that x1 2 and 2 . R1 and 2, and so the reasoning we employed in Example 6.3.6 to determine the form of the confidence n as the stan­ interval is not directly applicable. In Example 6.3.4, we developed s dard error of the estimate x of . 
Accordingly, we restrict our attention to confidence intervals of the form The likelihood function in this case is a function of two variables, R1 C x1 xn x k s n x k s n for some constant k Chapter 6: Likelihood Inference 331 We then have where G n 1 is the distribution function of Now, by Theorem 4.6.66.3.6) independent of n 1 S2 2 2 n 1 . Therefore, by Definition 4.6.2, X T n n 1 S2 2 X S n t n 1 So if we take where t t 1 is the th quantile of the t k 1 2 n distributionconfidence interval for The quantiles of the t distributions are available is an exact from a statistical package (or Table D.4 in Appendix D). As these intervals are based on the t­statistic, given by (6.3.6), they are called t­confidence intervals. These confidence intervals for tend to be longer than those obtained in Example 5, n is a 0 97­confidence interval. When we replace s being unknown. When n 6.3.6, and this reects the greater uncertainty due to then it can be shown that x by the true value of then x 3s 3 n is a 0 9974­confidence interval. ks As already noted, the intervals x n are not likelihood intervals for So the justification for using these must be a little different from that given in Example 6.3.6. 2 , and it is not entirely In fact, the likelihood is defined for the full parameter clear how to extract inferences from it when our interest is in a marginal parameter like . There are a number of different attempts at resolving this issue. Here, however, we rely on the intuitive reasonableness of these intervals. In Chapter 7, we will see that these intervals also arise from another approach to inference, which reinforces our belief that the use of these intervals is appropriate. In Example 6.3.5, we have a sample of n 64 517 as our estimate of with standard error s calculated x Using software (or Table D.4), we obtain t0 975 29 interval for is given by 30 heights (in inches) of students. We 0 43434. 30 2 0452 So a 0 95­confidence [64 517 2 0452 0 43434 ] [63 629 65 405] 332 Section 6.3: Inferences Based on the MLE The margin of error is 0 888 so we are very confident that the estimate x within an inch of the true mean height. 64 517 is 6.3.3 Testing Hypotheses and P­Values As discussed in Section 5.5.3, another class of inference procedures is concerned with what we call hypothesis assessment. Suppose there is a theory, conjecture, or hypoth­ esis that specifies a value for a characteristic of interest 0 Often this hypothesis is written H0 : 0 and is referred to as the null hypothesis. say The word null is used because, as we will see in Chapter 10, the value specified in H0 is often associated with a treatment having no effect. For example, if we want to assess whether or not a proposed new drug does a better job of treating a particular condition than a standard treatment does, the null hypothesis will often be equivalent to the new drug providing no improvement. Of course, we have to show how this can be expressed in terms of some characteristic of an unknown distribution, and we will do so in C
hapter 10. The statistician is then charged with assessing whether or not the observed s is in ac­ cord with this hypothesis. So we wish to assess the evidence in s for 0 being true. A statistical procedure that does this can be referred to as a hypothesis assessment, a test of significance, or a test of hypothesis. Such a procedure involves measuring how surprising the observed s is when we assume H0 to be true. It is clear that s is surprising whenever s lies in a region of low probability for each of the distributions specified by the null hypothesis, i.e., for each of the distributions in the model for which 0 is true. If we decide that the data are surprising under H0, then this is evidence against H0 This assessment is carried out by calculating a probability, called a P­value, so that small values of the P­value indicate that s is surprising. It is important to always remember that while a P­value is a probability, this prob­ ability is a measure of surprise. Small values of the P­value indicate to us that a sur­ prising event has occurred if the null hypothesis H0 was true. A large P­value is not evidence that the null hypothesis is true. Moreover, a P­value is not the probability that the null hypothesis is true. The power of a hypothesis assessment method (see Section 6.3.6) also has a bearing on how we interpret a P­value. z­Tests We now illustrate the computation and use of P­values via several examples. EXAMPLE 6.3.9 Location Normal Model and the z­Test Suppose we have a sample x1 unknown and 2 0 unknown mean, say, H0 : sampling distribution of the MLE is given by X R1 is 0 is known, and we have a theory that specifies a value for the 0 Note that, by Corollary 4.6.1, when H0 is true, the 2 0 n . 2 0 model, where xn from the N So one method of assessing whether or not the hypothesis H0 makes sense is to compare the observed value x with this distribution. If x is in a region of low probabil­ 2 0 n distribution, then this is evidence that H0 is false. Because the ity for the N 0 2 0 n distribution is unimodal, the regions of low probability for density of the N 0 N 0 Chapter 6: Likelihood Inference 333 this distribution occur in its tails. The farther out in the tails x lies, the more surprising this will be when H0 is true, and thus the more evidence we will have against H0. In Figure 6.3.4, we have plotted a density of the MLE together with an observed value x that lies far in the right tail of the distribution. This would clearly be a surprising value from this distribution. So we want to measure how far out in the tails of the N 0 2 0 n distribution the value x is. We can do this by computing the probability of observing a value of x as far, or farther, away from the center of the distribution under H0 as x. The center of this distribution is given by 0. Because Z X 0 0 n N 0 1 (6.3.7) under H0 the P­value is then given by , where denotes the N 0 1 distribution function. If the P­value is small, then we have evidence that x is a surprising value because this tells us that x is out in a tail of 2 the N 0 0 n distribution. Because this P­value is based on the statistic Z defined in (6.3.7), this is referred to as the z­test procedure. density 1.2 1.0 0.8 0.6 0.4 0.2 0.0 1 2 3 4 5 MLE Figure 6.3.4: Plot of the density of the MLE in Example 6.3.9 when n 10 together with the observed value and EXAMPLE 6.3.10 Application of the z­Test We generated the following sample of n 10 from an N 26 4 distribution. 
29 0651 28 6592 27 3980 25 5546 23 4346 29 4477 26 3665 28 0979 23 4994 25 2850 334 Section 6.3: Inferences Based on the MLE Even though we know the true value of esis H0 : , let us suppose we do not and test the hypoth­ 25 To assess this, we compute (using a statistical package to evaluate ) the P­value 26 6808 25 2 2 6576 10 0 0078 which is quite small. For example, if the hypothesis H0 is correct, then, in repeated sampling, we would see data giving a value of x at least as surprising as what we have observed only 0 78% of the time. So we conclude that we have evidence against H0 being true, which, of course, is appropriate in this case. If you do not use a statistical package for the evaluation of then you will have to use Table D.2 of Appendix D to get an approximation. For example, rounding 2 6576 to 2 66, Table D.2 gives 0 9961 and the approximate P­value is 2 1 0 0078 In this case, the approximation is exact to four 0 9961 decimal places. 2 6576 2 66 EXAMPLE 6.3.11 Bernoulli Model Suppose that x1 [0 1] is unknown, and we want to test H0 : true, we have xn is a sample from a Bernoulli distribution, where 0 As in Example 6.3.7, when H0 is as n So we can test this hypothesis by computing the approximate P­value when n is large. As a specific example, suppose that a psychic claims the ability to predict the value of a randomly tossed fair coin. To test this, a coin was tossed 100 times and the psy­ chic’s guesses were recorded as successes or failures. A total of 54 successes were observed. If the psychic has no predictive ability, then we would expect the successes to occur randomly, just as heads occur when we toss the coin. Therefore, we want to test the null hypothesis that the probability 1 2. This is equivalent to saying that the psychic has no predictive ability. The MLE is 0.54 and the approximate P­value is given by of a success occurring is equal to 0 2 1 100 0 54 7881 0 4238 and we would appear to have no evidence that H0 is false, i.e., no reason to doubt that the psychic has no predictive ability. Often cutoff values like 0.05 or 0.01 are used to determine whether the results of a test are significant or not. For example, if the P­value is less than 0.05, then Chapter 6: Likelihood Inference 335 the results are said to be statistically significant at the 5% level. There is nothing sacrosanct about the 0.05 level, however, and different values can be used depending on the application. For example, if the result of concluding that we have evidence against H0 is that something very expensive or important will take place, then naturally we might demand that the cutoff value be much smaller than 0.05. When Is Statistical Significance Practically Significant? It is also important to point out here the difference between statistical significance and practical significance. Consider the situation in Example 6.3.9, when the true 0 that, practically speaking, they are value of indistinguishable. By the strong law of large numbers, we have that X 1 as n 1 is so close to and therefore 0 but a s is 1 X a s 0 n 0 This implies that 2 1 X a s 0 0 n 0 We conclude that, if we take a large enough sample size n we will inevitably conclude that 0 because the P­value of the z­test goes to 0. Of course, this is correct because the hypothesis is false. 0 as an estimate of In spite of this, we do not want to conclude that just because we have statistical sig­ nificance, the difference between the true value and 0 is of any practical importance. 
If we examine the observed absolute difference x 0 , however, we will not make this mistake. If this absolute difference is smaller than some threshold that we consider represents a practically significant difference, then even if the P­value leads us to conclude that difference exists, we might conclude that no difference of any importance exists. Of course, the value of is application dependent. 1 2 we might not care if the For example, in coin tossing, where we are testing coin is slightly unfair, say, 0 01 In testing the abilities of a psychic, as in Ex­ ample 6.3.11, however, we might take much lower, as any evidence of psychic powers would be an astounding finding. The issue of practical significance is something we should always be aware of when conducting a test of significance. 0 Hypothesis Assessment via Confidence Intervals ­confidence interval C s for Another approach to testing hypotheses is via confidence intervals. For example, if we then this seems like clear have a evidence against H0 : is close to 1. It turns out that in 0 at least when many problems, the approach to testing via confidence intervals is equivalent to using P­values with a specific cutoff for the P­value to determine statistical significance. We illustrate this equivalence using the z­test and z­confidence intervals. 0 C s and 336 Section 6.3: Inferences Based on the MLE EXAMPLE 6.3.12 An Equivalence Between z­Tests and z­Confidence Intervals We develop this equivalence by showing that obtaining a P­value less than 1 H0 : that 0 is equivalent to 0 not being in a ­confidence interval for for Observe 1 2 1 x 0 n if and only if This is true if and only if which holds if and only if This implies that the the P­value for the hypothesis H0 : ­confidence interval for 0 is greater than 1 . comprises those values 0 for which Therefore, the P­value, based on the z­statistic, for the null hypothesis H0 : if and only if 0 is not in the 0, will be smaller than 1 ­confidence interval derived in Example 6.3.6. For example, if we decide that for any P­values less 0 05 we will declare the results statistically significant, then we know does not 0 For the data of Example 6.3.10, a 0.95­confidence interval is given by 25 we have evidence against for than 1 the results will be significant whenever the 0.95­confidence interval for contain [25 441 27 920]. As this interval does not contain 0 the null hypothesis at the 0.05 level. We can apply the same reasoning for tests about when we are sampling from a model. For the data in Example 6.3.11, we obtain the 0.95­confidence Bernoulli interval x z0 975 x 1 x n 0 54 1 96 0 54 1 0 54 100 [0 44231 0 63769] which includes the value 0 of no predictive ability for the psychic at the 0.05 level. 0 5. So we have no evidence against the null hypothesis t­Tests We now consider an example pertaining to the important location­scale normal model. EXAMPLE 6.3.13 Location­Scale Normal Model and t­Tests Suppose that x1 and In Example 6.3.8, we obtained a 2 distribution, where 0 are unknown, and suppose we w
ant to test the null hypothesis H0 : xn is a sample from an N ­confidence interval for 0 This was based on the R1 Chapter 6: Likelihood Inference 337 t­statistic given by (6.3.6). So we base our test on this statistic also. In fact, it can be shown that the test we derive here is equivalent to using the confidence intervals to assess the hypothesis as described in Example 6.3.12. As in Example 6.3.8, we can prove that when the null hypothesis is true, then T X S 0 n (6.3.8) is distributed t n 1 . The t distributions are unimodal, with the mode at 0, and the regions of low probability are given by the tails. So we test, or assess, this hypothesis by computing the probability of observing a value as far or farther away from 0 as (6.3.8). Therefore, the P­value is given by is the distribution function of the t n where G 1 distribution. We then have evidence against H0 whenever this probability is small. This procedure is called the t­test. Again, it is a good idea to look at the difference x 0 , when we conclude that H0 is false, to determine whether or not the detected difference is of practical importance. Consider now the data in Example 6.3.10 and let us pretend that we do not know 2. Then we have x or the value of the t­statistic is 2 2050 so to test H0 : 26 6808 and s 4 8620 25 t x s 0 n 26 6808 2 2050 25 10 2 4105 From a statistics package (or Table D.4) we obtain t0 975 9 2 2622 so we have a statistically significant result at the 5% level and conclude that we have evidence 25 Using a statistical package, we can determine the precise value against H0 : of the P­value to be 0.039 in this case. One­Sided Tests to be a single value All the tests we have discussed so far in this section for a characteristic of interest have been two­sided tests. This means that the null hypothesis specified the value of 0 Sometimes, however, we want to test a null hypothesis 0 To carry out such tests, we use 0 or H0 : of the form H0 : the same test statistics as we have developed in the various examples here but compute the P­value in a way that reects the one­sided nature of the null. These are known as one­sided tests. We illustrate a one­sided test using the location normal model. EXAMPLE 6.3.14 One­Sided Tests Suppose we have a sample x1 unknown and 2 0 xn from the N 2 0 model, where R1 is 0 is known. Suppose further that it is hypothesized that H0 : 0 is true, and we wish to assess this after observing the data. 338 Section 6.3: Inferences Based on the MLE We will base our test on the z­statistic So Z is the sum of a random variable having an N 0 1 distribution and the constant n 0 0 which implies that Note that if and only if H0 is true. This implies that, when the null hypothesis is false, we will tend to see values of Z in the right tail of the N 0 1 distribution; when the null hypothesis is true, we will tend to see values of Z that are reasonable for the N 0 1 distribution, or in the left tail of this distribution. Accordingly, to test H0 we compute the P­value with Z Using the same reasoning, the P­value for the null hypothesis H0 : N 0 1 and conclude that we have evidence against H0 when this is small. 0 equals . For more discussion of one­sided tests and confidence intervals, see Problems 6.3.25 through 6.3.32. 6.3.4 Inferences for the Variance or In Sections 6.3.1, 6.3.2, and 6.3.3, we focused on inferences for the unknown mean of a distribution, e.g., when we are sampling from an N distribution and our interest is in respectively. 
In general, location parameters tend to play a much more important role in a statistical analysis than other characteristics of a distribution. There are logical reasons for this, discussed in Chapter 10, when we consider regression models. Sometimes we refer to a parameter such as $\sigma^2$ as a nuisance parameter, because our interest is in $\mu$. Note that the variance of a Bernoulli$(\theta)$ distribution is $\theta(1-\theta)$, so that inferences about $\theta$ are logically inferences about the variance too, i.e., there are no nuisance parameters. But sometimes we are primarily interested in making inferences about $\sigma^2$ in the $N(\mu, \sigma^2)$ distribution when it is unknown. For example, suppose that previous experience with a system under study indicates that the true value of the variance is well-approximated by $\sigma_0^2$, i.e., the true value does not differ from $\sigma_0^2$ by an amount having any practical significance. Now, based on the new sample, we may want to assess the hypothesis $H_0: \sigma^2 = \sigma_0^2$, i.e., we wonder whether or not the basic variability in the process has changed.

The discussion in Section 6.3.1 led to consideration of the standard error $s/\sqrt{n}$ as an estimate of the standard deviation $\sigma/\sqrt{n}$ of $\bar{x}$. In many ways $s^2$ seems like a very natural estimator of $\sigma^2$, even when we are not sampling from a normal distribution. The following example develops confidence intervals and P-values for $\sigma^2$.

EXAMPLE 6.3.15 Location-Scale Normal Model and Inferences for the Variance
Suppose that $x_1, \ldots, x_n$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $\mu \in R^1$ and $\sigma > 0$ are unknown, and we want to make inferences about the population variance $\sigma^2$. The plug-in MLE is given by
$$\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2,$$
which is the average of the squared deviations of the data values from $\bar{x}$. Often $s^2$ is recommended as the estimate because it has the unbiasedness property, and we will use it here. An expression can be determined for the standard error of this estimate but, as it is somewhat complicated, we will not pursue this further here.

We can form a $\gamma$-confidence interval for $\sigma^2$ using the fact that $(n-1)S^2/\sigma^2 \sim \chi^2(n-1)$ (Theorem 4.6.6). There are a number of possibilities for this interval, but one is to note that, letting $\chi^2_{\alpha}(n-1)$ denote the $\alpha$th quantile of the $\chi^2(n-1)$ distribution, then for every $\mu \in R^1$ and $\sigma^2 > 0$,
$$\gamma = P_{(\mu,\sigma^2)}\left( \chi^2_{(1-\gamma)/2}(n-1) \le \frac{(n-1)S^2}{\sigma^2} \le \chi^2_{(1+\gamma)/2}(n-1) \right) = P_{(\mu,\sigma^2)}\left( \frac{(n-1)S^2}{\chi^2_{(1+\gamma)/2}(n-1)} \le \sigma^2 \le \frac{(n-1)S^2}{\chi^2_{(1-\gamma)/2}(n-1)} \right).$$
So
$$\left[ \frac{(n-1)s^2}{\chi^2_{(1+\gamma)/2}(n-1)}, \; \frac{(n-1)s^2}{\chi^2_{(1-\gamma)/2}(n-1)} \right]$$
is an exact $\gamma$-confidence interval for $\sigma^2$. To test a hypothesis such as $H_0: \sigma^2 = \sigma_0^2$ at the $1-\gamma$ level, we need only see whether or not $\sigma_0^2$ is in the interval. If $\gamma_0$ is the smallest value of $\gamma$ such that $\sigma_0^2$ is in the interval, then $1 - \gamma_0$ is the P-value for this hypothesis assessment procedure.

For the data in Example 6.3.10, let us pretend that we do not know that $\sigma^2 = 4$. Here, $n = 10$ and $s^2 = 4.8620$. From a statistics package (or Table D.3 in Appendix D) we obtain $\chi^2_{0.025}(9) = 2.700$ and $\chi^2_{0.975}(9) = 19.023$. So a 0.95-confidence interval for $\sigma^2$ is given by
$$\left[ \frac{(n-1)s^2}{\chi^2_{(1+\gamma)/2}(n-1)}, \; \frac{(n-1)s^2}{\chi^2_{(1-\gamma)/2}(n-1)} \right] = \left[ \frac{9(4.8620)}{19.023}, \; \frac{9(4.8620)}{2.700} \right] = [2.3003, 16.207].$$
The length of the interval indicates that there is a reasonable degree of uncertainty concerning the true value of $\sigma^2$. We see, however, that a test of $H_0: \sigma^2 = 4$ would not reject this hypothesis at the 5% level, because the value 4 is in the 0.95-confidence interval.

6.3.5 Sample-Size Calculations: Confidence Intervals

Quite often a statistician is asked to determine the sample size $n$ to ensure that, with very high probability, the results of a statistical analysis will yield definitive results.
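As an aside before the sample-size examples, the interval computed in Example 6.3.15 takes only a few lines of code to reproduce. A minimal sketch (Python, with SciPy assumed to be available), using the values $n = 10$ and $s^2 = 4.8620$ from that example:

```python
from scipy import stats

n, s2, gamma = 10, 4.8620, 0.95       # values from Example 6.3.15
alpha = 1 - gamma

lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)   # 9(4.8620)/19.023
upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)       # 9(4.8620)/2.700

print(f"0.95-confidence interval for sigma^2: [{lower:.4f}, {upper:.3f}]")
# Approximately [2.30, 16.2]; since 4 lies inside, H0: sigma^2 = 4 is not
# rejected at the 5% level, as in Example 6.3.15.
```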
and For example, suppose we are going to take a sample of size n from a population want to estimate the population mean so that the estimate is within 0.5 of the true mean with probability at least 0.95. This means that we want the half­length, or margin of error, of the 0.95­confidence interval for the mean to be guaranteed to be less than 0.5. We consider such problems in the following examples. Note that in general, sample­ size calculations are the domain of experimental design, which we will discuss more extensively in Chapter 10. First, we consider the problem of selecting the sample size to ensure that a confi­ dence interval is shorter than some prescribed value. EXAMPLE 6.3.16 The Length of a Confidence Interval for a Mean Suppose we are in the situation described in Example 6.3.6, in which we have a sample 0 known. x1 Further suppose that the statistician is asked to determine n so that the margin of error for a is no greater than a prescribed value ­confidence interval for the population mean 0 This entails that n be chosen so that R1 unknown and 2 0 2 0 model, with from the N xn or, equivalently, so that For example, if n is 154. 2 0 10 0 95 and 0 5 then the smallest possible value for Now consider the situation described in Example 6.3.8, in which we have a sample 0 both unknown. In this 2 model with R1 and 2 x1 xn from the N case, we want n so that which entails s2 But note this also depends on the unobserved value of s so we cannot determine an appropriate value of n. Often, however, we can determine an upper bound on the population standard de­ viation, say, b For example, suppose we are measuring human heights in cen­ timeters. Then we have a pretty good idea of upper and lower bounds on the possible heights we will actually obtain. Therefore, with the normality assumption, the interval given by the population mean, plus or minus three standard deviations, must be con­ tained within the interval given by the upper and lower bounds. So dividing the length Chapter 6: Likelihood Inference 341 of this interval by 6 gives a plausible upper bound b for the value of when we have such an upper bound, we can expect that s conservatively Therefore, we take n to satisfy In any case, b at least if we choose b n b2 t 1 2 n 1 2 . Note that we need to evaluate t 1 1 for each n as well. It is wise to be fairly conservative in our choice of n in this case, i.e., do not choose the smallest possible value. 2 n EXAMPLE 6.3.17 The Length of a Confidence Interval for a Proportion Suppose we are in the situation described in Example 6.3.2, in which we have a sample [0 1] is unknown. The statistician x1 is required to specify the sample size n so that the margin of error of a ­confidence interval for So, from Example 6.3.7, we want n to satisfy xn from the Bernoulli model and is no greater than a prescribed value x 1 x z 1 2 (6.3.9) and this entails n x 1 x n z 1 2 2 . Because this also depends on the unobserved x, we cannot determine n. Note, however, that 0 1 4 for every x (plot this function) and that this upper bound is x achieved when x 1 2. Therefor
e, if we determine n so that x 1 n 1 4 z 1 2 2 , then we know that (6.3.9) is satisfied. For example, if possible value of n is 97; if 9604. 0 95 0 1 the smallest 0 01, the smallest possible value of n is 0 95 6.3.6 Sample­Size Calculations: Power Suppose the purpose of a study is to assess a specific hypothesis H0 : 0 and it is has been decided that the results will be declared statistically significant whenever Suppose that the statistician is asked to choose n so that the P­value is less than 0 at some specific the P­value obtained is smaller than for a specific 1 such that value of and call is not really complete, as it suppresses the power function of the test. The notation the dependence of n and the test procedure, but we will assume that on these are clear in a particular context. The problem the statistician is presented with can then be stated as: Find n so that 0 The probability that the P­value is less than is called the power of the test at We will denote this by with probability at least 1 0 1 0 The power function of a test is a measure of the sensitivity of the test to detect 0 05 0 01 etc.) so that departures from the null hypothesis. We choose small ( 342 Section 6.3: Inferences Based on the MLE we do not erroneously declare that we have evidence against the null hypothesis when the null hypothesis is in fact true. When is the probability that the test does the right thing and detects that H0 is false. 0 then For any test procedure, it is a good idea to examine its power function, perhaps , to see how good the test is at detecting departures. For it for several choices of can happen that we do not find any evidence against a null hypothesis when it is false because the sample size is too small. In such a case, the power will be small at values that represent practically significant departures from H0 To avoid this problem, we 1 that represents a practically significant departure from should always choose a value 1 0 and then determine n so that we reject H0 with high probability when We consider the computation and use of the power function in several examples. EXAMPLE 6.3.18 The Power Function in the Location Normal Model For the two­sided z­test in Example 6.3.9, we have 6.3.10) P P P 1 Notice that so 0 0 (put is symmetric about and we get the same value) 0 0 0 0 and 0 in the expression for Differentiating (6.3.10) with respect to n we obtain 6.3.11) is the density of the N 0 1 distribution. We can establish that (6.3.11) is is increasing in 0 for n (the solution may not be an integer) to where always nonnegative (see Challenge 6.3.34). This implies that n so we need only solve determine a suitable sample size (all larger values of n will give a larger power). 0 05 For example, when 0 0 1 we must 0 99 and 1 1 1 0 0 find n satisfying 1 n 0 1 1 96 n 0 1 1 96 0 99 (6.3.12) (Note that the symmetry of 0 0 1 here instead of 0 about 0 means we will get the same answer if we use 0 1 ) Tabulating (6.3.12) as a function of n using a Chapter 6: Likelihood Inference 343 statistical package determines that n bound. 
785 is the smallest value achieving the required Also observe that the derivative of (6.3.10) with respect to is given by 6.3.13) 0, negative when This is positive when 0 (see Challenge 6.3.35) From (6.3.10) we have that 0 and takes the value 0 when 1 as 0 and that it is increasing as takes its minimum value at These facts establish that we move away from 0 Therefore, once we have determined n so that the power is at satisfying least 1 we know that the power is at least 0 for all values of 0 at some 0 0 1 As an example of this, consider Figure 6.3.5, where we have plotted the power function when n 10 0 1 0 0 1 and 0 05 so that 10 1 96 10 1 96 Notice the symmetry about from 0. We obtain P­value for testing H0 : n this graph will rise even more steeply to 1 as we move away from 0. increases as moves away 1 2 the probability that the 0 will be less than 0 05 is 0 967. Of course, as we increase 0 0 967 so that when 0 and the fact that 1 2 r e w o p 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 ­5 Figure 6.3.5: Plot of the power function and 0 1 is assumed known. 0 mu for Example 6.3.18 when 5 0 05 0 0 Many statistical packages contain the power function as a built­in function for var­ ious tests. This is very convenient for examining the sensitivity of the test and deter­ mining sample sizes. EXAMPLE 6.3.19 The Power Function for For the two­sided test in Example 6.3.11, we have that the power function is given by in the Bernoulli Model P 2 1 n X 0 1 0 0 344 Section 6.3: Inferences Based on the MLE 1 Under the assumption that we choose n large enough so that X is approximately dis­ n the approximate calculation of this power function can be tributed N . We do not pursue approached as in Example 6.3.18, when we put this calculation further here but note that many statistical packages will evaluate as a built­in function. 1 0 EXAMPLE 6.3.20 The Power Function in the Location­Scale Normal Model For the two­sided t­test in Example 6.3.13, we have and 2 and then determine n so that 1 is the cumulative distribution function of the t n where G Notice that it is a function of both 1 distribution. and 2. In particular, we have to specify both 0 Many statistical packages n will have the calculation of this power function built­in so that an appropriate n can be determined using this. Alternatively, we can use Monte Carlo methods to approximate the distribution function of 2 X S 0 n when sampling from the N priate value. 2 for a variety of values of n to determine an appro­ Summary of Section 6.3 is the best­supported value of the parameter The MLE by the model and data. As such, it makes sense to base the derivation of inferences about some on the MLE. These inferences include estimates and their characteristic standard errors, confidence intervals, and the assessment of hypotheses via P­ values. An important aspect of the design of a sampling study is to decide on the size n of the sample to ensure that the results of the study produce sufficiently accurate results. Prescribing the half­lengths of confidence intervals (margins of error) or the power of a test are two techniques for doing this. EXERCISES 2 0 , where 6.3.1 Suppose measurements (in centimeters) are taken using an instrument. There is error in the measuring process and a measurement is assumed to be distributed 10) measure­ N ments 4.7, 5.5, 4.4, 3.3, 4.6, 5.3, 5.2, 4.8, 5.7, 5.3 were obtained, assess the hypothesis H0 : 5 by computing the relevant P­value. 
Also compute a 0.95­confidence interval for the unknown is the exact measurement and 2 0 0 5 If the (n Chapter 6: Likelihood Inference 345 2 0 24 0 5 Then assess 5 and compute a 0.95­confidence interval for 2 and 2 Determine a 0.99­confidence inter­ 6.3.2 Suppose in Exercise 6.3.1, we drop the assumption that the hypothesis H0 : 6.3.3 Marks on an exam in a statistics course are assumed to be normally distributed with unknown mean but with variance equal to 5. A sample of four students is selected, 60 by computing and their marks are 52, 63, 64, 84. Assess the hypothesis H0 : the relevant P­value and compute a 0.95­confidence interval for the unknown 6.3.4 Suppose in Exercise 6.3.3 that we drop the assumption that the population vari­ ance is 5. Assess the hypothesis H0 : 60 by computing the relevant P­value and compute a 0.95­confidence interval for the unknown 6.3.5 Suppose that in Exercise 6.3.3 we had observed only one mark and that it was 60 by computing the relevant P­value and compute 52. Assess the hypothesis H0 : a 0.95­confidence interval for the unknown Is it possible to compute a P­value and construct a 0.95­confidence interval for without the assumption that we know the population variance? Explain your answer and, if your answer is no, determine the minimum sample size n for which inference is possible without the assumption that the population variance is known. 6.3.6 Assume that the speed of light data in Table 6.3.1 is a sample from an N distribution for some unknown values of val for Assess the null hypothesis H0 : 6.3.7 A manufacturer wants to assess whether or not rods are being constructed appro­ priately, where the diameter of the rods is supposed to be 1 0 cm and the variation in the diameters is known to be distributed N 0 1 . The manufacturer is willing to tolerate a deviation of the population mean from this value of no more than 0 1 cm, i.e., if the 0 1 cm, then the manufacturing process is population mean is within the interval 1 0 500 rods is taken, and the average diameter performing correctly. A sample of n 0 083 cm2. Are these results 1 05 cm, with s2 of these rods is found to be x statistically significant? Are the results practically significant? Justify your answers. 6.3.8 A polling firm conducts a poll to determine what proportion of voters in a given 250 was taken population will vote in an upcoming election. A random sample of n from the population, and the proportion answering yes was 0.62. Assess the hypothesis H0 : 6.3.9 A coin was tossed n 0.51. Do we have evidence to conclude that the coin is unfair? 6.3.10 How many times must we toss a coin to ensure that a 0.95­confidence interval for the probability of heads on a single toss has length less than 0.1, 0.05, and 0 .01, respectively? 6.3.11 Suppose a possibly biased die is rolled 30 times and that the face containing two pips comes up 10 times. Do we have evidence to conclude that the die is biased? 6.3.12 Suppose a measurement on a population is assumed to be distributed N 2 R1 is unknown and that the size of the population is very large. A researcher where that is no longer than 1. What is wants to determine a 0.95­confidence interval for the minimum sample size that will guarantee this? 6.3.13 Suppose x1 (a) Show that xn is a sample from a Bernoulli x 2 x 0 65 and construct an approximate 0.90­confidence interval for 1000 times, and the proportion of heads observed was [0 1] unknown. nx 1 with xi ) (Hint: x 2 i n i 1 xi 3
46 Section 6.3: Inferences Based on the MLE 1 with Var X Bernoulli [0 1] unknown. Record the relationship 1 an unbiased estimator of 2 2 and that given by s2 in (5.5.5). then (b) If X between the plug­in estimate of 2 (see Problem 6.3.23), use the results in part (c) Since s2 is an unbiased estimator of (b) to determine the bias in the plug­in estimate. What happens to this bias as n ? 6.3.14 Suppose you are told that, based on some data, a 0 95­confidence interval for is given by 1 23 2 45 You are then asked if there is any evi­ a characteristic dence against the hypothesis H0 : 2 State your conclusion and justify your reasoning. 6.3.15 Suppose that x1 is a value from a Bernoulli (a) Is x1 an unbiased estimator of ? (b) Is x 2 2? is given by 5.3. Also a P­value 6.3.16 Suppose a plug­in MLE of a characteristic 5 and the value was 0 000132 If was computed to assess the hypothesis H0 : you are told that differences among values of less than 0 5 are of no importance as far as the application is concerned, then what do you conclude from these results? Suppose instead you were told that differences among values of less than 0 25 are of no importance as far as the application is concerned, then what do you conclude from these results? 6.3.17 A P­value was computed to assess the hypothesis H0 : 0 and the value 0 22 was obtained. The investigator says this is strong evidence that the hypothesis is correct. How do you respond? 1 and the value 6.3.18 A P­value was computed to assess the hypothesis H0 : 0 55 was obtained. You are told that differences in greater than 0 5 are considered to be practically significant but not otherwise. The investigator wants to know if enough data were collected to reliably detect a difference of this size or greater. How would you respond? COMPUTER EXERCISES 2 2 0 R1 is unknown and the size of the population is is given by 5. A researcher wants to that is no longer than 1. Determine a sample 6.3.19 Suppose a measurement on a population can be assumed to follow the N distribution, where very large. A very conservative upper bound on determine a 0.95­confidence interval for size that will guarantee this. (Hint: Start with a large sample approximation.) 2 , 6.3.20 Suppose a measurement on a population is assumed to be distributed N R1 is unknown and the size of the population is very large. A researcher where 0 and ensure that the probability is at wants to assess a null hypothesis H0 : least 0.80 that the P­value is less than 0.05 when 0 5 What is the minimum sample size that will guarantee this? (Hint: Tabulate the power as a function of the sample size n ) 6.3.21 Generate 103 samples of size n 5 from the Bernoulli 0 5 distribution. For each of these samples, calculate (6.3.5) with 0 95 and record the proportion of intervals that contain the true value. What do you notice? Repeat this simulation with n 20 What do you notice? 0 Chapter 6: Likelihood Inference 347 6.3.22 Generate 104 samples of size n these samples, calculate the interval x dard deviation, and compute the proportion of times this interval contains this simulation with n 5 from the N 0 1 distribution. For each of 5 where s is the sample stan­ . Repeat 10 and 100 and compare your results. 5 x s s PROBLEMS 2 1 xn and R1. 
whenever T2 is also an unbiased estimator of is a sample from a distribution with mean 1 s2 n, then determine the bias in this estimate 6.3.23 Suppose that x1 2 variance (a) Prove that s2 given by (5.5.5) is an unbiased estimator of 2 by n (b) If instead we estimate and what happens to it as n 6.3.24 Suppose we have two unbiased estimators T1 and T2 of (a) Show that T1 [0 1] (b) If T1 and T2 are also independent, e.g., determined from independent samples, then calculate Var (c) For the situation in part (b), determine the best choice of choice Var of T1 having a very large variance relative to T2? (d) Repeat parts (b) and (c), but now do not assume that T1 and T2 are independent, so Var 6.3.25 (One­sided confidence intervals for means) Suppose that x1 ple from an N pose we want to make inferences about the interval problem of finding an interval C x1 interval in the sense that for this T2 is smallest. What is the effect on this combined estimator xn is a sam­ 0 is known. Sup­ . Consider the that covers the T2 in terms of Var T1 and Var T2 T2 will also involve Cov T1 T2 So we want u such that for every , 2 0 distribution, where R1 is unknown and 2 with probability at least u x1 T1 T1 T1 xn xn 1 1 1 P u X1 Xn 0 k x xn xn u x1 u x1 using u x1 is unknown and 2 0 distribution, where xn , so Obtain an exact left­ n , i.e., find the if and only if xn is called a left­sided ­confidence interval for Note that C x1 sided ­confidence interval for k that gives this property xn is a sample from 6.3.26 (One­sided hypotheses for means ) Suppose that x1 2 0 is known. Suppose we want a N to assess the hypothesis H0 : 0. Under these circumstances, we say that the observed value x is surprising if x occurs in a region of low probability for every distribution in H0. Therefore, a sensible P­value for this problem is max H0 P X x . Show that this leads to the P­value 1 6.3.27 Determine the form of the power function associated with the hypothesis assess­ ment procedure of Problem 6.3.26, when we declare a test result as being statistically significant whenever the P­value is less than 6.3.28 Repeat Problems 6.3.25 and 6.3.26, but this time obtain a right­sided ­confidence interval for and assess the hypothesis H0 : n x 0 0 0. 348 Section 6.3: Inferences Based on the MLE 6.3.29 Repeat Problems 6.3.25 and 6.3.26, but this time do not assume the population variance is known. In particular, determine k so that u x1 n gives an exact left­sided and show that the P­value for testing H0 : ­confidence interval for 0 is given by k s xn .3.30 (One­sided confidence intervals for variances) Suppose that x1 2 distribution, where sample from the N we want a ­confidence interval of the form R1 0 2 is a is unknown, and xn C x1 xn 0 u x1 xn xn ks2 then determine k so that this interval is an exact 2 If u x1 for confidence interval. is a sample 6.3.31 (One­sided hypotheses for variances) Suppose that x1 2 is unknown, and we from the N 0 Argue that the sample variance s2 is 2 want to assess the hypothesis H0 : surprising if s2 is large and that, therefore, a sensible P­value for this problem is to compute max s2 Show that this leads to the P­value 2 distribution, where 2 R1 xn 0 ­ 2 H0 P S2 n 1 H 1 s2 2 0 n 1 2 n n 1 distribution. 
1 is the distribution function of the where H 6.3.32 Determine the form of the power function associated with the hypothesis as­ sessment procedure of Problem 6.3.31, for computing the probability that the P­value is less than 6.3.33 Repeat Exercise 6.3.7, but this time do not assume that the population variance is known. In this case, the manufacturer deems the process to be under control if the population standard deviation is less than or equal to 0.1 and the population mean is in the interval 1 0 0 1 cm. Use Problem 6.3.31 for the test concerning the population variance. CHALLENGES 6.3.34 Prove that (6.3.11) is always nonnegative. (Hint: Use the facts that metric about 0, increases to the left of 0, and decreases to the right of 0.) 6.3.35 Establish that (6.3.13) is positive when 0, negative when takes the value 0 when 0 is sym­ 0 and DISCUSSION TOPICS 6.3.36 Discuss the following statement: The accuracy of the results of a statistical analysis is so important that we should always take the largest possible sample size. Chapter 6: Likelihood Inference 349 6.3.37 Suppose we have a sequence of estimators T1 T2 as n for each might consider Tn a useful estimator of for Discuss under what circumstances you and Tn P 6.4 Distribution­Free Methods The likelihood methods we have been discussing all depend on the assumption that the . There is typically nothing that guarantees that true distribution lies in P : is correct. If the distribution we are sampling from is far the assumption P : different from any of the distributions in P : , then methods of inference that depend on this assumption, such as likelihood methods, can be very misleading. So it is important in any application to check that our assumptions make sense. We will discuss the topic of model checking in Chapter 9. Another approach to this problem is to take the model P : as large as possible, reecting the fact that we may have very little information about what the true distribution is like. For example, inferences based on the Bernoulli model with [0 1] really specify no information about the true distribution because this 0 1 . Infer­ model includes all the possible distributions on the sample space S ence methods that are suitable when P : is very large are sometimes called distribution­free, to reect the fact that very little information is specified in the model about the true distribution. For finite sample spaces, it is straightforward to adopt the distribution­free ap­ proach, as with the just cited Bernoulli model, but when the sample space is infinite, things are more complicated. In fact, sometimes it is very difficult to determine infer­ ences about characteristics of interest when the model is very big. Furthermore, if we have P : 1 P : then, when the smaller model contains the true distribution, methods based on the smaller model will make better use of the information in the data about the true value in . So there is a trade­off between taking too big a model and taking too precise a model. This is an issue that a statistician must always address. 1 than will methods using the bigger model P : We now consider some examples of distribution­free inferences. In some cases, the inferences have approximate sampling properties, while in other cases the inferences have exact sampling properties for very large models. 6.4.1 Method of Moments Suppose we take P : first l moments, and we want to make inferences about the moments to be the set of all distributions on R1 that ha
ve their E X i i 350 Section 6.4: Distribution­Free Methods 1 for i population moment l based on a sample x1 xn . The natural sample analog of the i is the ith sample moment mi 1 n n j 1 x i j which would seem to be a sensible estimator. In particular, we have that E Mi so mi is unbiased, and the weak and strong laws of large numbers establish that mi converges to i as n increases Furthermore, the central limit theorem establishes that i for every Mi i Var Mi D N 0 1 as n have that provided that Var Mi Now, because X1 Xn are i.i.d., we Var Mi 1 n2 n j 1 Var X i j 1 n Var 2i 2i 2 i so we have that Var Mi 2i 2 i by provided that i l 2 In this case, we can estimate s2 i n 1 n 1 j 1 xi j mi 2 as we can simply treat x i i 1 and variance is an unbiased estimate of Var Mi . So, as with inferences for the population mean based on the z­statistic, we have that x i n as a sample from a distribution with mean i Problem 6.3.23 establishes that s2 2 i 2i mi z 1 2 si n is an approximate we can test hypothesis H0 : the population mean using the z­statistic. ­confidence interval for i i whenever i l 2 and n is large. Also, i0 in exactly the same fashion, as we did this for Notice that the model P : is very large (all distributions on R1 having their first l 2 moments finite), and these approximate inferences are appropriate for every distribution in the model. A cautionary note is that estimation of moments becomes more difficult as the order of the moments rises. Very large sample sizes are required for the accurate estimation of high­order moments. The general method of moments principle allows us to make inference about char­ acteristics that are functions of moments. This takes the following form: Method of moments principle: A function moments is estimated by m1 mk 1 k of the first k l Chapter 6: Likelihood Inference 351 is continuously differentiable and nonzero at When it can be proved that M1 given by 1 and covariances of M1 topic further here but note that, in the case k the so­called delta theorem, which says that l 2, then Mk converges in distribution to a normal with mean k and variance given by an expression involving the variances Mk and the partial derivatives of We do not pursue this 2 these conditions lead to 1 and l and 6.4.1) provided that as n 0 see Approximation Theorems of Mathematical Statistics, by R. J. Sering (John Wiley & Sons, New York, 1980), for a proof of this result. This result provides approximate confidence intervals and tests for is continuously differentiable at 1 and 1 1 . EXAMPLE 6.4.1 Inference about a Characteristic Using the Method of Moments Suppose x1 ance xn is a sample from a distribution with unknown mean 2 and we want to construct a ­confidence interval for and vari­ 2 Then 1 2 3 so the delta theorem says that n 1 X 2 1 2 D 2s X 3 N 0 1 as n Therefore, 2 1 x 2 s nx 3 z 1 2 2 1 is an approximate Notice that if ­confidence interval for is not continuously differentiable at 0. So if you think the population mean could be 0, or even close to 0, this would not be an appropriate choice of confidence interval for 0 then this confidence interval is not valid because . 6.4.2 Bootstrapping Suppose that P : xn is a sample from some unknown distribution with cdf F . Then the empirical distribution function is the set of all distributions on R1 and that x1 n F x 1 n I x] xi , i 1 introduced in Section 5.4.1, is a natural estimator of the cdf F x . 
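A minimal sketch (Python, with NumPy assumed) of the empirical distribution function just defined, evaluated at a single point together with the standard error discussed in the next paragraph; the data here are simulated purely for illustration:

```python
import numpy as np

def ecdf(sample, x):
    """Empirical cdf: the proportion of sample values less than or equal to x."""
    return np.mean(np.asarray(sample) <= x)

rng = np.random.default_rng(0)
sample = rng.normal(size=25)          # illustrative data; any observed sample works

x0 = 0.5
Fhat = ecdf(sample, x0)
se = np.sqrt(Fhat * (1 - Fhat) / len(sample))   # standard error of F-hat(x0)
print(f"F-hat({x0}) = {Fhat:.2f}, standard error = {se:.3f}")
```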
We have ] Xi 1 n n i 1 F x F x for every numbers then establish the consistency of F x for F x as n so that F is unbiased for F The weak and strong laws of large Observing that 352 Section 6.4: Distribution­Free Methods the I that the standard error of F x is given by x] xi constitute a sample from the Bernoulli F x distribution, we have F x 1 F x n . These facts can be used to form approximate confidence intervals and test hypotheses for F x , just as in Examples 6.3.7 and 6.3.11. Observe that F x prescribes a distribution on the set x1 xn , e.g., if the sam­ ple values are distinct, this probability distribution puts mass 1 n on each xi . Note that it is easy to sample a value from F, as we just select a value from x1 xn where each point has probability 1 n of occurring. When the xi are not distinct, then this is changed in an obvious way, namely, xi has probability fi n, where fi is the number of times xi occurs in x1 xn. Suppose we are interested in estimating T F , where T is a function of the distribution F We use this notation to emphasize that corresponds to some characteristic of the distribution rather than just being an arbitrary mathematical function of For example, T F could be a moment of F a quantile of F etc. Now suppose we have an estimator x1 . Naturally, we are interested in the accuracy of that is being proposed for in­ , and we could xn ferences about choose to measure this by MSE E 2 Var . (6.4.2) Then, to assess the accuracy of our estimate When n is large, we expect F to be close to F , so a natural estimate of xn , we need to estimate (6.4.2). is T F i.e., simply compute the same characteristic of the empirical distribution. This is the approach adopted in Chapter 5 when we discussed descriptive statistics. Then we estimate the square of the bias in by x1 T F 2. (6.4.3) To estimate the variance of , we use VarF 1 nn 2 E F E 2 F n n i1 1 in 1 2 xi1 xin 1 nn n n i1 1 in 1 2 xi1 xin , (6.4.4) xn as i.i.d. random values with cdf given by F So to calculate i.e., we treat x1 an estimate of (6.4.2), we simply have to calculate VarF . This is rarely feasible, however, because the sums in (6.4.4) involve nn terms. For even very modest sample sizes, like n 10 this cannot be carried out, even on a computer. The solution to this problem is to approximate (6.4.4) by drawing m indepen­ for each of these samples to obtain dent samples of size n from F evaluating Chapter 6: Likelihood Inference 1 m and then using the sample variance VarF 353 (6.4.5) as the estimate. The m samples from F are referred to as bootstrap samples or re­ samples, and this technique is referred to as bootstrapping or resampling. Combining (6.4.3) and (6.4.5) gives an estimate of MSE i is called the bootstrap mean, and Furthermore, m 1 m i 1 VarF is the bootstrap standard error. Note that the bootstrap standard error is a valid estimate of the error in whenever has little or no bias. Consider the following example. EXAMPLE 6.4.2 The Sample Median as an Estimator of the Population Mean Suppose we want to estimate the location of a unimodal, symmetric distribution. While the sample mean might seem like the obvious choice for this, it turns out that for some distributions there are better estimators. This is because the distribution we are sam­ pling may have long tails, i.e., may produce extreme values that are far from the center of the distribution. This implies that the sample average itself could be highly inu­ enced by a few extreme observations and would thus be a poor estimate of the true mean. 
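In code, the bootstrap approximation (6.4.5) is a short loop: resample from the observed data with replacement, recompute the estimator each time, and take the sample variance of the results. A minimal sketch (Python, with NumPy assumed), written for the sample median; the data below are simulated stand-ins, and with the actual sample of this example in their place the same code carries out the computations described next.

```python
import numpy as np

def bootstrap_variance(sample, statistic, m, rng):
    """Approximate Var_Fhat(psi-hat), as in (6.4.5): draw m bootstrap samples
    and take the sample variance of the recomputed statistic."""
    sample = np.asarray(sample)
    values = np.empty(m)
    for i in range(m):
        resample = rng.choice(sample, size=len(sample), replace=True)
        values[i] = statistic(resample)
    return np.var(values, ddof=1)

rng = np.random.default_rng(1)
data = rng.standard_t(df=3, size=15)     # simulated stand-in for the n = 15 observations

var_boot = bootstrap_variance(data, np.median, m=10**4, rng=rng)
bias_sq = (np.median(data) - np.mean(data)) ** 2   # estimate (6.4.3) of the squared bias
print("bootstrap estimate of MSE:", bias_sq + var_boot)
```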
Not all estimators suffer from this defect. For example, if we are sampling from a symmetric distribution, then either the sample mean or the sample median could serve as an estimator of the population mean. But, as we have previously discussed, the sample median is not inuenced by extreme values, i.e., it does not change as we move the smallest (or largest) values away from the rest of the data, and this is not the case for the sample mean. A problem with working with the sample median x0 5 rather than the sample mean x is that the sampling distribution for x0 5 is typically more difficult to study than that of x. In this situation, bootstrapping becomes useful. If we are estimating the population mean T F by using the sample median (which is appropriate when we know the distribution we were sampling from is symmetric), then the estimate of the squared bias in the sample median is given by T F 2 x0 5 x 2 x0 5 and T F x (the mean of the empirical distribution is x). This because should be close to 0, or else our assumption of a symmetric distribution would seem to be incorrect. To calculate (6.4.5), we have to generate m samples of size n from x1 xn (with replacement) and calculate x0 5 for each sample. To illustrate, suppose we have a sample of size n 15 given by the following table 354 Section 6.4: Distribution­Free Methods Then, using the definition of x0 5 given by (5.5.4) (denoted x0 5 there), and x 2 000 2 087 The estimate of the squared bias (6.4.3) equals 7 569 m 103 samples of size n of the sample points and obtained 10 3, which is appropriately small. Using a statistical package, we generated 15 from the distribution that has probability 1 15 at each 2 000 2 087 2 VarF 0 770866 Based on m 104 samples, we obtained VarF 0 718612 and based on m 105 samples we obtained VarF 0 704928 Because these estimates appear to be stabilizing, we take this as our estimate. So in this case, the bootstrap estimate of the MSE of the sample median at the true value of is given by MSE 0 007569 0 704928 0 71250 Note that the estimated MSE of the sample average is given by s2 0 62410 so the sample mean and sample median appear to be providing similar accuracy in this problem. In Figure 6.4.1, we have plotted a density histogram of the sample medians obtained from the m 105 bootstrap samples. Note that the histogram is very skewed. See Appendix B for more details on how these computations were carried out. y t i s n e D 0.6 0.5 0.4 0.3 0.2 0.1 0.0 -5 -4 -3 -2 sample median -1 0 1 Figure 6.4.1: A density histogram of m 105 sample medians, each obtained from a bootstrap sample of size n 15 from the data in Example 6.4.2. Even with the very small sample size here, it was necessary to use the computer to carry out our calculations. To evaluate (6.4.4) exactly would have required computing Chapter 6: Likelihood Inference 355 the median of 1515 (roughly 4 4 using a computer. So the bootstrap is a very useful device. 1017) samples, which is clearly impossible even The validity of the bootstrapping te
chnique depends on having its first two mo­ must be appropriately restricted, but we can see that ments. So the family P : the technique is very general. Broadly speaking, it is not clear how to choose m Perhaps the most direct method is to implement bootstrapping for successively higher values of m and stop when we see that the results stabilize for several values. This is what we did in Example 6.4.2, but it must be acknowledged that this approach is not foolproof, as we could have a sample x1 xn such that the estimate (6.4.5) is very slowly convergent. Bootstrap Confidence Intervals Bootstrap methods have also been devised to obtain approximate vals for characteristics such as form the bootstrap t ­confidence inter­ T F One very simple method is to simply ­confidence interval t 1 2 n 1 VarF , where t 1 possibility is to compute a bootstrap percentile confidence interval given by 2th quantile of the t n 1 is the 1 2 n 1 distribution. Another 1 2 1 2 , where p denotes the pth empirical quantile of in the bootstrap sample of m It should be noted that to be applicable, these intervals require some conditions to hold. In particular, and the boot­ should be at least approximately unbiased for strap distribution should be approximately normal. Looking at the plot of the bootstrap distribution in Figure 6.4.1 we can see that the median does not have an approximately normal bootstrap distribution, so these intervals are not applicable with the median. Consider the following example. EXAMPLE 6.4.3 The 0.25­Trimmed Mean as an Estimator of the Population Mean One of the virtues of the sample median as an estimator of the population mean is that it is not affected by extreme values in the sample. On the other hand, the sample median discards all but one or two of the data values and so seems to be discarding a lot of information. Estimators known as trimmed means can be seen as an attempt at retaining the virtues of the median while at the same time not discarding too much information. Let x denote the greatest integer less than or equal to x R1 Definition 6.4.1 For [0 1] a sample ­trimmed mean is given by where x i is the ith­order statistic. 356 Section 6.4: Distribution­Free Methods ­trimmed mean, we toss out (approximately) n of the smallest Thus for a sample data values and n of the largest data values and calculate the average of the n 2 n of the data values remaining. We need the greatest integer function because in general, 0 and the sample n will not be an integer. Note that the sample mean arises with median arises with 0 5 For the data in Example 6.4.1 and 3 75, so we discard the three smallest and three largest observations leaving the nine data val­ 2 9 ues 3 9 0 2 The average of these nine 1 97778, which we note is close to both the sample median x0 25 values gives and the sample mean. 0 25, we have 0 25 15 Now suppose we use a 0.25­trimmed mean as an estimator of a population mean where we believe the population distribution is symmetric. Consider the data in Ex­ ample 6.4.1 and suppose we generated m 104 bootstrap samples. We have plotted a histogram of the 104 values of in Figure 6.4.2. Notice that it is very normal looking, so we feel justified in using the confidence intervals associated with the bootstrap. In this case, we obtained VarF 0 7380 so the bootstrap t 0 95­confidence interval for the mean is given by 2 14479 0 7380 percentile 0 95­confidence interval as shows that the two intervals are very similar. 
0 4 Sorting the bootstrap sample gives a bootstrap 0 5 which 0 488889 1 97778 3 36667 .6 0.5 0.4 0.3 0.2 0.1 0.0 -5.4 -4.5 -3.6 -2.7 .25-trimmed mean -1.8 -0.9 0.0 0.9 Figure 6.4.2: A density histogram of m 104 sample 0.25­trimmed means, each obtained from a bootstrap sample of size n 15 from the data in Example 6.4.3 More details about the bootstrap can be found in An Introduction to the Bootstrap, by B. Efron and R. J. Tibshirani (Chapman and Hall, New York, 1993). Chapter 6: Likelihood Inference 357 6.4.3 The Sign Statistic and Inferences about Quantiles is the set of all distributions on R1 such that the associated Suppose that P : distribution functions are continuous. Suppose we want to make inferences about a pth so that, when the distribution function quantile of P We denote this quantile by x p Note that continuity associated with P is denoted by F , we have p implies there is always a solution in x to p is the smallest solution. and that x p F x p F x Recall the definitions and discussion of estimation of these quantities in Example xn . For simplicity, let us restrict attention to the is n . In this case, we have that x p i n for some i x i 1 5.5.2 based on a sample x1 cases where p the natural estimate of x p. x p S n i 1 I Now consider assessing the evidence in the data concerning the hypothesis H0 : x0. For testing this hypothesis, we can use the sign test statistic, given by x0] xi . So S is the number of sample values less than or equal to x0 Notice that when H0 is true, I x0] x1 I x0] xn is a sample from the Bernoulli p distribution. This implies that, when H0 is true, S Binomial n p Therefore, we can test H0 by computing the observed value of S denoted So and seeing whether this value lies in a region of low probability for the Binomial n p dis­ tribution. Because the binomial distribution is unimodal, the regions of low probability correspond to the left and right tails of this distribution. See, for example, Figure 6.4.3, where we have plotted the probability function of a Binomial 20 0 7 distribution. The P­value is therefore obtained by computing the probability of the set i : n i pi 1 p n i n So pSo 1 p n So (6.4.6) using the Binomial n p probability distribution. This is a measure of how far out in the tails the observed value So is (see Figure 6.4.3). Notice that this P­value is com­ pletely independent of and is thus valid for the entire model. Tables of binomial probabilities (Table D.6 in Appendix D), or built­in functions available in most statis­ tical packages, can be used to calculate this P­value. 0.2 0.1 0.0 0 10 x 20 Figure 6.4.3: Plot of the Binomial 20 0 7 probability function. When n is large, we have that, under H0 np Z S np 1 D N 0 1 p 358 as n Section 6.4: Distribution­Free Methods Therefore, an approximate P­value is given by 2 1 So 0 5 np 1 np p (as in Example 6.3.11), where we have replaced So by So continuity (see Example 4.4.9 for discussion of the correction for continuity). 0 5 as a correction for A special case arises when p 1 2 i.e., when we are making inferences about . In this case, the distribution of S under H0 is an unknown population median x0 5 Binomial n 1 2 . Because the Binomial n 1 2 is unimodal and symmetrical about n 2 (6.4.6) becomes i : So n 2 i n 2 If we want a ­confidence interval for x0 5 , then we can use the equivalence between tests, which we always reject when the P­value is less than or equal to 1 , and ­confidence intervals (see Example 6.3.12). 
For this, let j be the smallest integer greater than n 2 satisfying 6.4.7) n 2 , we n 2 level and will not otherwise. This leads ­confidence interval, namely, the set of all those values x0 5 such that the null where P is the Binomial n 1 2 distribution. If S will reject H0 : x0 5 to the hypothesis H0 : x0 5 x0 5 is not rejected at the 1 level, equaling x0 at the 1 i : i j C x1 xn x0 : x0 : x0] xi n 2 j n 2 x0] xi j [x n j 1 x j (6.4.8) because, for example, n j n i 1 I x0] xi if and only if x0 x n j 1 EXAMPLE 6.4.4 Application of the Sign Test Suppose we have the following sample of size n variable X and we wish to test the hypothesis H0 : x0 5 0 10 from a continuous random 0 44 1 15 0 06 1 08 0 43 5 67 0 16 4 97 2 13 0 11 The boxplot in Figure 6.4.4 indicates that it is very unlikely that this sample came from a normal distribution, as there are two extreme observations. So it is appropriate to measure the location of the distribution of X by the median. Chapter 6: Likelihood Inference 359 5 x 0 ­5 Figure 6.4.4: Boxplot of the data in Example 6.4.4. In this case, the sample median (using (5.5.4)) is given by 0 11 0 43 2 0 27. The sign statistic for the null is given by S 10 i 1 I 0] xi 4 The P­value is given by 10 5 10 1 2 1 0 24609 0 75391, and we have no reason to reject the null hypothesis. Now suppose that we want a 0 95­confidence interval for the median. Using soft­ ware (or Table D.6), we calculate 10 5 10 3 10 1 10 10 10 1 2 1 2 1 2 0 24609 0 11719 9 7656 10 3 10 4 10 2 10 0 10 10 10 1 2 1 2 1 2 0 20508 4 3945 9 7656 10 2 10 4 We will use these values to compute the value of j in (6.4.7). We can use the symmetry of the Binomial 10 1 2 distribution about n 2 to com­ 10 we have that n 2 as follows. For j n 2 i j pute the values of P i : (6.4.7) equals P i : i 5 5 P 0 10 2 10 0 10 1 2 1 9531 10 3 and note that 1 9531 10 3 1 0 95 0 05 For j 9 we have that (6.4.7) equals 10 2 10 0 10 1 2 2 10 1 10 1 2 2 148 4 10 2 360 Section 6.4: Distribution­Free Methods which is also less than 0.05. For j 8 we have that (6.4.7) equals 10 1 2 2 10 0 0 10938 10 2 10 1 10 1 2 2 10 2 10 1 2 and this is greater than 0.05. Therefore, the appropriate value is j confidence interval for the median is given by [x 2 x 9 [ 0 16 1 15 . 9 and a 0.95­ There are many other distribution­free methods for a variety of statistical situations. While some of these are discussed in the problems, we leave a thorough study of such methods to further courses in statistics. Summary of Section 6.4 Distribution­free methods of statistical inference are appropriate methods when we feel we can make only very minimal assumptions about the distribution from which we are sampling. The method of moments, bootstrapping, and methods of inference based on the sign statistic are three distribution­free methods that are applicable in different circumstances. EXERCISES 6.4.1 Suppose we obtained the following sample from a distribution that we know has its first six moments. Determine an approximate 0 95­confidence interval for 3. 3 27 1 42 1 24 2
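The sign-test computations carried out below come directly from the Binomial(n, 1/2) distribution, so they are easy to automate. A minimal sketch (Python, with SciPy assumed to be available), using n = 10 and the observed value So = 4 of the sign statistic, i.e., the number of data values less than or equal to the hypothesized median 0:

```python
from scipy import stats

n, S0, gamma = 10, 4, 0.95     # sample size, observed sign statistic for H0: x_0.5 = 0
pmf = stats.binom(n, 0.5).pmf

def tail_prob(d):
    """P(|S - n/2| >= d) under H0, where S ~ Binomial(n, 1/2)."""
    return sum(pmf(i) for i in range(n + 1) if abs(i - n / 2) >= d)

p_value = tail_prob(abs(S0 - n / 2))
print(f"P-value = {p_value:.5f}")                    # 0.75391: no evidence against H0

# Smallest j > n/2 with tail_prob(j - n/2) <= 1 - gamma; the gamma-confidence
# interval for the median is then [x_(n-j+1), x_(j)].
j = next(j for j in range(n // 2 + 1, n + 1) if tail_prob(j - n / 2) <= 1 - gamma)
print("j =", j)                                      # j = 9, so the interval is [x_(2), x_(9)]
```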
75 3 97 2 25 3 47 1 48 4 97 8 00 0 09 7 45 3 26 0 15 6 20 3 74 4 12 3 64 4 88 4 55 where is the population mean and 6.4.2 Determine the method of moments estimator of the population variance. Is this estimator unbiased for the population variance? Justify your answer. 6.4.3 (Coefficient of variation) The coefficient of variation for a population measure­ is the ment with nonzero mean is given by population standard deviation. What is the method of moments estimate of the coeffi­ cient of variation? Prove that the coefficient of variation is invariant under rescalings of the distribution, i.e., under transformations of the form T x 0. It is this invariance that leads to the coefficient of variation being an appropriate mea­ sure of sampling variability in certain problems, as it is independent of the units we use for the measurement. 6.4.4 For the context described in Exercise 6.4.1, determine an approximate 0.95­ confidence interval for exp 6.4.5 Verify that the third moment of an N 2 distribution is given by cx for constant c 1 3 3 2 Because the normal distribution is specified by its first two moments, any characteristic of the normal distribution can be estimated by simply plugging in 3 Chapter 6: Likelihood Inference 361 and 2. Compare the method of moments estimator of the MLE estimates of with this plug­in MLE estimator, i.e., determine whether they are the same or not. 6.4.6 Suppose we have the sample data 1.48, 4.10, 2.02, 56.59, 2.98, 1.51, 76.49, 50.25, 43.52, 2.96. Consider this as a sample from a normal distribution with unknown mean and variance, and assess the hypothesis that the population median (which is the same as the mean in this case) is 3. Also carry out a sign test that the population median is 3 and compare the results. Plot a boxplot for these data. Does this support the assumption that we are sampling from a normal distribution? Which test do you think is more appropriate? Justify your answer. 6.4.7 Determine the empirical distribution function based on the sample given below. 3 1 06 1 42 0 00 0 98 1 28 0 44 1 02 0 38 0 40 0 58 1 35 2 13 1 36 0 24 2 05 0 03 0 35 1 34 1 06 1 29 3 distinct values given by 1, 2, and 3. Using the empirical cdf, determine the sample median, the first and third quartiles, and the interquartile range. What is your estimate of F 2 ? 6.4.8 Suppose you obtain the sample of n (a) Write down all possible bootstrap samples. (b) If you are bootstrapping the sample median, what are the possible values for the sample median for a bootstrap sample? (c) If you are bootstrapping the sample mean, what are the possible values for the sample mean for a bootstrap sample? (d) What do you conclude about the bootstrap distribution of the sample median com­ pared to the bootstrap distribution of the sample mean? 6.4.9 Explain why the central limit theorem justifies saying that the bootstrap distri­ bution of the sample mean is approximately normal when n and m are large. What result justifies the approximate normality of the bootstrap distribution of a function of the sample mean under certain conditions? 6.4.10 For the data in Exercise 6.4.1, determine an approximate 0.95­confidence inter­ val for the population median when we assume the distribution we are sampling from is symmetric with finite first and second moments. (Hint: Use large sample results.) 
6.4.11 Suppose you have a sample of n distinct values and are interested in the boot­ strap distribution of the sample range given by x n x 1 What is the maximum number of values that this statistic can take over all bootstrap samples? What are the largest and smallest values that the sample range can take in a bootstrap sample? Do you think the bootstrap distribution of the sample range will be approximately normal? Justify your answer. 6.4.12 Suppose you obtain the data 1 1 tinct bootstrap samples are there? 1 0 1 1 3 1 2 2, and 3 1. How many dis­ 362 Section 6.4: Distribution­Free Methods COMPUTER EXERCISES 103 and m 1000, use bootstrapping to estimate the 3 for the normal distribution, using the sample 1000 is a large enough sample for 6.4.13 For the data of Exercise 6.4.7, assess the hypothesis that the population median is 0. State a 0.95­confidence interval for the population median. What is the exact coverage probability of this interval? 6.4.14 For the data of Exercise 6.4.7, assess the hypothesis that the first quartile of the distribution we are sampling from is 1 0. 6.4.15 With a bootstrap sample size of m MSE of the plug­in MLE estimator of data in Exercise 6.4.1. Determine whether m accurate results. 6.4.16 For the data of Exercise 6.4.1, use the plug­in MLE to estimate the first quartile 2 distribution. Use bootstrapping to estimate the MSE of this estimate of an N 104 (use (5.5.3) to compute the first quartile of the empirical for m distribution). 6.4.17 For the data of Exercise 6.4.1, use the plug­in MLE to estimate F 3 for an 2 distribution. Use bootstrapping to estimate the MSE of this estimate for N m 103 and m 104. 6.4.18 For the data of Exercise 6.4.1, form a 0.95­confidence interval for that this is a sample from an N interval for a bootstrap percentile 0.95­confidence interval using m Compare the four intervals. 6.4.19 For the data of Exercise 6.4.1, use the plug­in MLE to estimate the first quintile, 2 distribution. Plot a density histogram estimate of the bootstrap i.e., x0 2 of an N 103 and compute a bootstrap t 0.95­confidence distribution of this estimator for m interval for x0 2, if you think it is appropriate. 3 of an 6.4.20 For the data of Exercise 6.4.1, use the plug­in MLE to estimate 2 distribution. Plot a density histogram estimate of the bootstrap distribu­ N tion of this estimator for m 103 and compute a bootstrap percentile 0.95­confidence interval for assuming 2 distribution. Also compute a 0.95­confidence based on the sign statistic, a bootstrap t 0.95­confidence interval, and 103 for the bootstrapping. 3 if you think it is appropriate. PROBLEMS 6.4.21 Prove that when x1 xn is a sample of distinct values from a distribution on R1 then the ith moment of the empirical distribution on R1 (i.e., the distribution with cdf given by F is mi xn is a sample from a distribution on R1. 
Determine the 6.4.22 Suppose that x1 general form of the i th moment of F i.e., in contrast to Problem 6.4.21, we are now allowing for several of the data values to be equal 6.4.23 (Variance stabilizing transformations) From the delta theorem, we have that 2 2 n when 0 and M1 is asymptotically normal with 2 n In some applications, it is important to choose the trans­ 1 i.e., M1 is asymptotically normal with mean is continuously differentiable, so that the asymptotic variance does not depend on the mean mean 1 and variance formation 1 and variance 1 1 Chapter 6: Likelihood Inference 1 2 2 is constant as 1 varies (note that 2 may change as transformations are known as variance stabilizing transformations. (a) If we are sampling from a Poisson variance stabilizing. (b) If we are sampling from a Bernoulli is variance stabilizing. (c) If we are sampling from a distribution on 0 to the square of its mean (like the Gamma ln x is variance stabilizing. distribution, show that 363 1 changes). Such x arcsin x whose variance is proportional distribution), then show that x distribution, then show that x x is CHALLENGES 6.4.24 Suppose that X has an absolutely continuous distribution on R1 with density f that is symmetrical about its median. Assuming that the median is 0, prove that X and sgn are independent, with X having density 2 f and sgn X uniformly distributed on 1 1 6.4.25 (Fisher signed deviation statistic) Suppose that x1 xn is a sample from an absolutely continuous distribution on R1 with density that is symmetrical about its median. Suppose we want to assess the hypothesis H0 : x0 5 x0 One possibility for this is to use the Fisher signed deviation test based on the sta­ n i 1 xi xn x0 x0 sgn xi tistic S . The observed value of S is given by So We then assess H0 by comparing So with the conditional distribution of S given the absolute deviations x1 x0 . If a value So occurs near the smallest or x0 under this conditional distribution, then we assert that largest possible value for S we have evidence against H0 We measure this by computing the P­value given by the conditional probability of obtaining a value as far, or farther, from the center of the conditional distribution of S using the conditional mean as the center. This is an ex­ ample of a randomization test, as the distribution for the test statistic is determined by randomly modifying the observed data (in this case, by randomly changing the signs of the deviations of the xi from x0). (a) Prove that So (b) Prove that the P­value described above does not depend on which distribution we are sampling from in the model. Prove that the conditional mean of S is 0 and the conditional distribution of S is symmetric about this value. (c) Use the Fisher signed deviation test statistic to assess the hypothesis H0 : x0 5 2 when the data are 2.2, 1.5, 3.4, 0.4, 5.3, 4.3, 2.1, with the results declared to be statistically significant if the P­value is less than or equal to 0.05. (Hint: Based on the results obtained in part (b), you need only compute probabilities for the extreme values of S .) x0 . n x 364 Section 6.5: Large Sample Behavior of the MLE (Advanced) (d) Show that using the Fisher signed deviation test statistic to assess the hypothesis H0 : x0 5 x0 is equivalent to the following randomized t­test statistic hypothesis assessment procedure. For this, we compute the conditional distribution of T X S x0 n when the Xi x0 are i.i.d. uniform on x0 are fixed and the sgn Xi 1 1 . 
Compare the observed value of the t­statistic with this distribution, as we x0 xi did for the Fisher signed deviation test statistic. (Hint: Show that n i 1 xi x 2 2 and that large absolute values of T correspond to large n i 1 xi n x absolute values of S ) x0 2 x0 6.5 Asymptotics for the MLE (Advanced) As we saw in Exampl
es 6.3.7 and 6.3.11, implementing exact sampling procedures based on the MLE can be difficult. In those examples, because the MLE was the sample average and we could use the central limit theorem, large sample theory allowed us to work out approximate procedures. In fact, there is some general large sample theory available for the MLE that allows us to obtain approximate sampling inferences. This is the content of this section. The results we develop are all for the case when is one­ dimensional. Similar results exist for the higher­dimensional problems, but we leave those to a later course. In Section 6.3, the basic issue was the need to measure the accuracy of the MLE. One approach is to plot the likelihood and examine how concentrated it is about its peak, with a more highly concentrated likelihood implying greater accuracy for the MLE. There are several problems with this. In particular, the appearance of the likeli­ hood will depend greatly on how we choose the scales for the axes. With appropriate choices, we can make a likelihood look as concentrated or as diffuse as we want. Also, is more than two­dimensional, we cannot even plot the likelihood. One solu­ when tion, when the likelihood is a smooth function of is to compute a numerical measure of how concentrated the log­likelihood is at its peak. The quantity typically used for this is called the observed Fisher information. Definition 6.5.1 The observed Fisher information is given by I s 2l s 2 s (6.5.1) where s is the MLE. The larger the observed Fisher information is, the more peaked the likelihood func­ tion is at its maximum value. We will show that the observed Fisher information is estimating a quantity of considerable importance in statistical inference. 365 (6.5.2) (6.5.3) (6.5.4) (6.5.5) Chapter 6: Likelihood Inference Suppose that response X is real­valued, satisfies the following regularity conditions: is real­valued, and the model f : 2 ln f x 2 exists for each x E S X ln f x f x dx 0 ln f x f x dx 0 and Note that we have 2 ln f x 2 f x dx f x ln f x f x so we can write (6.5.3) equivalently as f x dx 0 Also note that (6.5.4) can be written as 0 l x f x l dx 2 x f x dx 2l 2l x x 2 2 S2 x f x dx E 2l x 2 S2 X This together with (6.5.3) and (6.5.5), implies that we can write (6.5.4) equivalently as Var S X E S2 X E 2 2 l X . We give a name to the quantity on the left. Definition 6.5.2 The function I tion of the model. Var S X is called the Fisher informa­ Our developments above have proven the following result. Theorem 6.5.1 If (6.5.2) and (6.5.3) are satisfied, then E S addition, (6.5.4) and (6.5.5) are satisfied, then X 0 If, in I Var S X E 2l X 2 366 Section 6.5: Large Sample Behavior of the MLE (Advanced) Now we see why I is called the observed Fisher information, as it is a natural estimate of the Fisher information at the true value We note that there is another natural estimate of the Fisher information at the true value, given by I We call this the plug­in Fisher information. When we have a sample x1 xn from f then S x1 xn n ln f xi i 1 n i 1 ln f xi n i 1 S xi . So, if (6.5.3) holds for the basic model, then E S 0 and (6.5.3) also holds for the sampling model. Furthermore, if (6.5.4) holds for the basic model, then X1 Xn ln f Xi n i 1 E S2 Xi X1 Xn Var S X1 Xn which implies Var S X1 Xn E 2 2 l X1 Xn n I x1 xn because l xi . Therefore, (6.5.4) holds for the sampling model as well, and the Fisher information for the sampling model is given by the sam­ ple size times the Fisher information for the basic model. 
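These facts are easy to check numerically for the location normal model of Example 6.5.2, where the score is S(mu | x_1, ..., x_n) = n(xbar - mu)/sigma_0^2 and I(mu) = 1/sigma_0^2. The following is a minimal sketch (Python, with NumPy assumed; the particular values of n, mu, and sigma_0 are chosen only for illustration) that simulates the score at the true value and confirms that its mean is near 0 and its variance near nI(mu) = n/sigma_0^2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma0 = 25, 3.0, 2.0          # illustrative values only
reps = 10**5

# For the N(mu, sigma0^2) model, xbar ~ N(mu, sigma0^2 / n), so we can simulate
# the score S(mu | x_1, ..., x_n) = n (xbar - mu) / sigma0^2 directly from xbar.
xbars = rng.normal(mu, sigma0 / np.sqrt(n), size=reps)
scores = n * (xbars - mu) / sigma0**2

print("mean of score:    ", scores.mean())        # approximately 0
print("variance of score:", scores.var(ddof=1))   # approximately n / sigma0^2
print("n * I(mu):        ", n / sigma0**2)        # 6.25 here
```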
We have established the following result. n i 1 ln f Corollary 6.5.1 Under i.i.d. sampling from a model with Fisher information I the Fisher information for a sample of size n is given by n I . The conditions necessary for Theorem 6.5.1 to apply do not hold in general and have to be checked in each example. There are, however, many models where these conditions do hold. EXAMPLE 6.5.1 Nonexistence of the Fisher Information If X any x Indeed, if we ignored the lack of differentiability at 1 I[0 ] x which is not differentiable at x and wrote ] then f U [0 x x for f x 1 2 I[0 ] x then f x dx 1 2 I[0 ] x dx 1 0 So we cannot define the Fisher information for this model. Chapter 6: Likelihood Inference 367 EXAMPLE 6.5.2 Location Normal Suppose we have a sample x1 is unknown and 2 0 is known. We saw in Example 6.2.2 that xn from an N 2 0 distribution where R1 and therefore S x1 xn n 2 0 x 2 2 l x1 xn n 2 0 n I E 2 2 l X1 Xn n 2 0 We also determined in Example 6.2.2 that the MLE is given by Then the plug­in Fisher information is x1 xn x n I x n 2 0 while the observed Fisher information is I x1 xn 2l x1 2 xn n 2 0 x In this case, there is no need to estimate the Fisher information, but it is comforting that both of our estimates give the exact value. We now state, without proof, some theorems about the large sample behavior of the MLE under repeated sampling from the model. First, we have a result concerning the consistency of the MLE as an estimator of the true value of Theorem 6.5.2 Under regularity conditions (like those specified above) for the model , the MLE exists a.s. and as n a s : f PROOF See Approximation Theorems of Mathematical Statistics, by R. J. Sering (John Wiley & Sons, New York, 1980), for the proof of this result. We see that Theorem 6.5.2 serves as a kind of strong law for the MLE. It also turns out that when the sample size is large, the sampling distribution of the MLE is approx­ imately normal. Theorem 6.5.3 Under regularity conditions (like those specified above) for the model f : then n I 1 2 D N 0 1 as n PROOF See Approximation Theorems of Mathematical Statistics, by R. J. Sering (John Wiley & Sons, New York, 1980), for the proof of this result. 368 Section 6.5: Large Sample Behavior of the MLE (Advanced) We see that Theorem 6.5.3 serves as a kind of central limit theorem for the MLE. To make this result fully useful to us for inference, we need the following corollary to this theorem. Corollary 6.5.2 When I is a continuous function of then n I 1 2 D N 0 1 In Corollary 6.5.2, we have estimated the Fisher information I Fisher estimation I case, we instead estimate n I result such as Corollary 6.5.2 again holds in this case. by the plug­in . Often it is very difficult to evaluate the function I In such a xn A by the observed Fisher information I x1 From Corollary 6.5.2, we can devise large sample approximate inference methods based on the MLE. For example, the approximate standard error of the MLE is An approximate ­confidence interval is given by n I 1 2. n I 1 2z 1 2. Finally, if we want to assess the hypothesis H0 : the approximate P­value 0 we can do this by computing 2 1 n I 0 1 2 0 Notice that we are using Theorem 6.5.3 for the P­value, rather than Corollary 6.5.2, as, 1. So we when H0 is true, we know the asymptotic variance of the MLE is n I do not have to estimate this quantity. 0 When evaluating I is difficult, we can replace n I xn in the above expressions for the confidence interval and P­value. 
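As a concrete illustration of these approximate procedures, here is a minimal sketch for a Bernoulli$(\theta)$ sample, for which $n I(\theta) = n/(\theta(1-\theta))$. The data, the 0.95 level, and the hypothesized value $\theta_0 = 0.5$ are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm

# Approximate confidence interval and P-value based on the MLE for a
# Bernoulli(theta) sample (hypothetical data).
x = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1])
n, theta_hat = len(x), x.mean()                 # the MLE of theta

# approximate standard error of the MLE: (n I(theta_hat))^(-1/2)
se = np.sqrt(theta_hat * (1 - theta_hat) / n)

gamma = 0.95
z = norm.ppf((1 + gamma) / 2)
ci = (theta_hat - z * se, theta_hat + z * se)   # approximate gamma-confidence interval

# approximate P-value for H0: theta = theta_0, using the information at theta_0
theta_0 = 0.5
p_value = 2 * (1 - norm.cdf(abs(theta_hat - theta_0) * np.sqrt(n / (theta_0 * (1 - theta_0)))))
print(ci, p_value)
```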
We now see very clearly the sig­ nificance of the observed information. Of course, as we move from using n I to n I xn we expect that larger sample sizes n are needed to make the normality approximation accurate. We consider some examples. by I x1 to I x1 EXAMPLE 6.5.3 Location Normal Model Using the Fisher information derived in Example 6.5.2, the approximate interval based on the MLE is ­confidence n I 1 2z 1 2 x 0 n z 1 2 This is just the z­confidence interval derived in Example 6.3.6. Rather than being an ­confidence interval, the coverage is exact in this case. Similarly, the approximate approximate P­value corresponds to the z­test and the P­value is exact. Chapter 6: Likelihood Inference 369 EXAMPLE 6.5.4 Bernoulli Model Suppose that x1 [0 1] is unknown. The likelihood function is given by is a sample from a Bernoulli xn distribution, where L x1 xn nx 1 n 1 x , and the MLE of is x. The log­likelihood is l x1 xn nx ln n 1 x ln 1 , the score function is given by and S x1 xn nx n 1 1 x , S x1 xn nx 2 n 1 1 x 2 . Therefore, the Fisher information for the sample is n I E S X1 Xn , and the plug­in Fisher information is n I x n x 1 x Note that the plug­in Fisher information is the same as the observed Fisher information in this case. So an approximate ­confidence interval is given by n I 1 2z , which is precisely the interval obtained in Example 6.3.7 using large sample consider­ ations based on the central limit theorem. Similarly, we obtain the same P­value as in Example 6.3.11 when testing H0 : 0 EXAMPLE 6.5.5 Poisson Model Suppose that x1 unknown. The likelihood function is given by xn is a sample from a Poisson distribution, where 0 is The log­likelihood is L x1 xn nx e n l x1 xn nx ln n the score function is given by S x1 xn nx n 370 and Section 6.5: Large Sample Behavior of the MLE (Advanced) S x1 xn nx 2 From this we deduce that the MLE of is x. Therefore, the Fisher information for the sample is n I E S X1 Xn E n X 2 n and the plug­in Fisher information is n I x n x Note that the plug­in Fisher information is the same as the observed Fisher information in this case. So an approximate ­confidence interval is given by n I 1 2z 1 2 x z 1 2 x n Similarly, the approximate P­value for testing H0 : 0 is given by . Note that we have used the Fisher information evaluated at 0 for this test. Summary of Section 6.5 we can for the model. is the true value of the parameter, the MLE is consistent for Under regularity conditions on the statistical model with parameter define the Fisher information I Under regularity conditions on the statistical model, it can be proved that, when and the MLE and with variance is approximately normally distributed with mean given by given by n I can be estimated by plugging in the MLE or by The Fisher information I using the observed Fisher information. These estimates lead to practically useful inferences for in many problems. 1. EXERCISES 6.5.1 If x1 and 2 0 6.5.2 If x1 and 0 6.5.3 If x1 where xn is a
sample from an N 0 is unknown, determine the Fisher information xn is a sample from a Gamma 0 is unknown, determine the Fisher information 2 distribution, where 0 is known distribution, where 0 is known xn is a sample from a Pareto 0 is unknown, determine the Fisher information. distribution (see Exercise 6.2.9), Chapter 6: Likelihood Inference 371 6.5.4 Suppose the number of calls arriving at an answering service during a given hour of the day is Poisson is unknown. The number of calls actually received during this hour was recorded for 20 days and the following data were obtained. , where 0 9 10 7 8 12 11 12 5 16 13 9 5 13 5 13 9 9 8 9 10 Construct an approximate 0.95­confidence interval for Assess the hypothesis that this is a sample from a Poisson 11 distribution. If you are going to decide that the hypothesis is false when the P­value is less than 0.05, then compute an approximate power for this procedure when 6.5.5 Suppose the lifelengths in hours of lightbulbs from a manufacturing process are is unknown. A random , where known to be distributed Gamma 2 sample of 27 bulbs was taken and their lifelengths measured with the following data obtained. 10 0 336 87 2750 71 2199 44 710 64 2162 01 1856 47 2225 68 3524 23 2618 51 979 54 2159 18 1908 94 1397 96 292 99 1835 55 1385 36 2690 52 361 68 914 41 1548 48 1801 84 753 24 1016 16 1666 71 1196 42 1225 68 2422 53 Determine an approximate 0.90­confidence interval for 6.5.6 Repeat the analysis of Exercise 6.5.5, but this time assume that the lifelengths are distributed Gamma 1 6.5.7 Suppose that incomes (measured in thousands of dollars) above $20K can be 0 is unknown, for a particular population. A assumed to be Pareto sample of 20 is taken from the population and the following data obtained. . Comment on the differences in the two analyses. , where 21 265 20 857 21 090 20 047 20 019 32 509 21 622 20 693 20 109 23 182 21 199 20 035 20 084 20 038 22 054 20 190 20 488 20 456 20 066 20 302 Assess the hypothesis that xn is a sample from an Exponential Construct an approximate 0.95­confidence interval for the mean income in this population is $25K. 6.5.8 Suppose that x1 struct an approximate left­sided ­confidence interval for 6.5.9 Suppose that x1 struct an approximate left­sided ­confidence interval for 6.5.10 Suppose that x1 ution. Construct an approximate left­sided ­confidence interval for 6.3.25.) is a sample from a Geometric xn xn is a sample from a Negative­Binomial r distribution. Con­ (See Problem 6.3.25.) distribution. Con­ (See Problem 6.3.25.) distrib­ (See Problem PROBLEMS 6.5.11 In Exercise 6.5.1, verify that (6.5.2), (6.5.3), (6.5.4), and (6.5.5) are satisfied. 6.5.12 In Exercise 6.5.2, verify that (6.5.2), (6.5.3), (6.5.4), and (6.5.5) are satisfied. 372 Section 6.5: Large Sample Behavior of the MLE (Advanced) 6.5.13 In Exercise 6.5.3, verify that (6.5.2), (6.5.3), (6.5.4), and (6.5.5) are satisfied. 6.5.14 Suppose that sampling from the model (6.5.4), and (6.5.5). Prove that n 1 I 6.5.15 (MV) When f model the Fisher information matrix is defined by then, under appropriate regularity conditions for the satisfies (6.5.2), (6.5.3), as Multinomial 1 If X1 X2 X3 3 Fisher information for this model. Recall that from 1 6.5.16 (MV) Generalize Problem 6.5.15 to the case where 2 . 
2 1 3 (Example 6.1.5), then determine the 2 and so is determined 1 1 X1 Xk Multinomial 1 1 k 6.5.17 (MV) Using the definition of the Fisher information matrix in Exercise 6.5.15, 2 1 1 0 model, where determine the Fisher information for the Bivariate Normal 1 1 2 R1 are unknown. 6.5.18 (MV) Extending the definition in Exercise 6.5.15 to the three­dimensional case, 2 0 model determine the Fisher information for the Bivariate Normal where 0 are unknown. R1 and 2 1 2 2 1 2 CHALLENGES 6.5.19 Suppose that model has Fisher information I differentiable, then, putting : f If : : satisfies the regularity conditions and that its Fisher information at 1 1 satisfies (6.5.2), (6.5.3), (6.5.4), (6.5.5), and 1 are continuously R1 is 1–1, and , prove that the model given by g : is given and by I 2. DISCUSSION TOPICS 6.5.20 The method of moments inference methods discussed in Section 6.4.1 are es­ sentially large sample methods based on the central limit theorem. The large sample methods in Section 6.5 are based on the form of the likelihood function. Which meth­ ods do you think are more likely to be correct when we know very little about the form of the distribution from which we are sampling? In what sense will your choice be “more correct”? Chapter 7 Bayesian Inference CHAPTER OUTLINE Section 1 The Prior and Posterior Distributions Section 2 Section 3 Bayesian Computations Section 4 Choosing Priors Section 5 Further Proofs (Advanced) Inferences Based on the Posterior In Chapter 5, we introduced the basic concepts of inference. At the heart of the the­ ory of inference is the concept of the statistical model that describes the statistician’s uncertainty about how the observed data were produced. Chapter 6 dealt with the analysis of this uncertainty based on the model and the data alone. In some cases, this seemed quite successful, but we note that we only dealt with some of the simpler contexts there f : If we accept the principle that, to be amenable to analysis, all uncertainties need to be described by probabilities, then the prescription of a model alone is incomplete, as this does not tell us how to make probability statements about the unknown true value In this chapter, we complete the description so that all uncertainties are described of by probabilities. This leads to a probability distribution for and, in essence, we are in the situation of Section 5.2, with the parameter now playing the role of the unobserved response. This is the Bayesian approach to inference. Many statisticians prefer to develop statistical theory without the additional ingre­ dients necessary for a full probability description of the unknowns. In part, this is motivated by the desire to avoid the prescription of the additional model ingredients necessary for the Bayesian formulation. Of course, we would prefer to have our sta­ tistical analysis proceed based on the fewest and weakest model assumptions possible. For example, in Section 6.4, we introduced distribution­free methods. A price is paid for this weakening, however, and this typically manifests itself in ambiguities about how inference should proceed. The Bayesian formulation in essence removes the am­ biguity, but at the price of a more involved model. The Bayesian approach to inference is sometimes presented as antagonistic to meth­ ods that are based on repeated sampling properties (often referred to as frequentist 373 374 Section 7.1: The Prior and Posterior Distributions methods), as discussed, for example, in Chapter 6. 
The approach taken in this text, however, is that the Bayesian model arises naturally from the statistician assuming more ingredients for the model. It is up to the statistician to decide what ingredients can be justified and then use appropriate methods. We must be wary of all model assumptions, because using inappropriate ones may invalidate our inferences. Model checking will be taken up in Chapter 9. 7.1 The Prior and Posterior Distributions The Bayesian model for inference contains the statistical model for S and adds to this the prior probability measure data s the statistician’s beliefs about the true value of the parameter [0 1] and observing the data. For example, if a head on the toss of a coin, then the prior density that the statistician has some belief that the true value of formation is not very precise. f for the The prior describes a priori, i.e., before equals the probability of getting plotted in Figure 7.1.1 indicates is around 0.5. But this in­ : prior 1.5 1.0 0.5 0.0 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 theta Figure 7.1.1: A fairly diffuse prior on [0,1]. On the other hand, the prior density tician has very precise information about the true value of knows nothing about the true value of might be appropriate. plotted in Figure 7.1.2indicates that the statis­ In fact, if the statistician , then using the uniform distribution on [0 1] Chapter 7: Bayesian Inference 375 prior 10 8 6 4 2 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 theta Figure 7.1.2: A fairly precise prior on [0,1]. It is important to remember that the probabilities prescribed by the prior repre­ sent beliefs. They do not in general correspond to long­run frequencies, although they could in certain circumstances. A natural question to ask is: Where do these beliefs come from in an application? An easy answer is to say that they come from previous experience with the random system under investigation or perhaps with related sys­ tems. To be honest, however, this is rarely the case, and one has to admit that the prior, as well as the statistical model, is often a somewhat arbitrary construction used to drive the statistician’s investigations. This raises the issue as to whether or not the inferences derived have any relevance to the practical context, if the model ingredients suffer from this arbitrariness. This is where the concept of model checking comes into play, a topic we will discuss in Chapter 9. At this point, we will assume that all the ingredients make sense, but remember that in an application, these must be checked if the inferences taken are to be practically meaningful. We note that the ingredients of the Bayesian formulation for inference prescribe a and a set of conditional distributions . By the law of total probability (Theorems marginal distribution for for the data s given f 2.3.1 and 2.8.1), these ingredients specify a joint distribution for s namely, the prior namely, namely, : f s , denotes the probability or density function associated with where distribution is absolutely continuous, the marginal distribution for s is given by . When the prior m s f s d and is referred to as the prior predictive distribution of the data. When the prior di
stribution of $\theta$ is discrete, we replace (as usual) the integral by a sum.

If we did not observe any data, then the prior predictive distribution is the relevant distribution for making probability statements about the unknown value of $s$. Similarly, the prior $\pi$ is the relevant distribution to use in making probability statements about $\theta$, before we observe $s$. Inference about these unobserved quantities then proceeds as described in Section 5.2.

Recall now the principle of conditional probability; namely, $P(A)$ is replaced by $P(A \mid C)$ after we are told that $C$ is true. Therefore, after observing the data, the relevant distribution to use in making probability statements about $\theta$ is the conditional distribution of $\theta$ given $s$. We denote this conditional probability measure by $\Pi(\cdot \mid s)$ and refer to it as the posterior distribution of $\theta$. Note that the density (or probability function) of the posterior is obtained immediately by taking the joint density of $(\theta, s)$, namely $\pi(\theta) f_\theta(s)$, and dividing it by the marginal $m(s)$ of $s$.

Definition 7.1.1 The posterior distribution of $\theta$ is the conditional distribution of $\theta$, given $s$. The posterior density, or posterior probability function (whichever is relevant), is given by
$$\pi(\theta \mid s) = \frac{\pi(\theta) f_\theta(s)}{m(s)}. \qquad (7.1.1)$$

Sometimes this use of conditional probability is referred to as an application of Bayes' theorem (Theorem 1.5.2). This is because we can think of a value of $\theta$ being selected first according to $\pi$, and then $s$ is generated from $f_\theta$. We then want to make probability statements about the outcome of the first stage, having observed the outcome of the second stage. It is important to remember, however, that choosing to use the posterior distribution for probability statements about $\theta$ is an axiom, or principle, not a theorem.

We note that in (7.1.1) the prior predictive of the data $s$ plays the role of the inverse normalizing constant for the posterior density. By this we mean that the posterior density of $\theta$ is proportional to $\pi(\theta) f_\theta(s)$, as a function of $\theta$; to convert this into a proper density function, we need only divide by $m(s)$. In many examples, we do not need to compute the inverse normalizing constant. This is because we recognize the functional form of the posterior, as a function of $\theta$, from the expression $\pi(\theta) f_\theta(s)$, and so immediately deduce the posterior probability distribution of $\theta$. Also, there are Monte Carlo methods, such as those discussed in Chapter 4, that allow us to sample from $\pi(\cdot \mid s)$ without knowing $m(s)$ (also see Section 7.3).

We consider some applications of Bayesian inference.

EXAMPLE 7.1.1 Bernoulli Model
Suppose that we observe a sample $x_1, \ldots, x_n$ from the Bernoulli$(\theta)$ distribution with $\theta \in [0, 1]$ unknown. For the prior, we take $\pi$ to be equal to a Beta$(\alpha, \beta)$ density (see Problem 2.4.16). Then the posterior of $\theta$ is proportional to the likelihood
$$\prod_{i=1}^{n} \theta^{x_i}(1-\theta)^{1-x_i} = \theta^{n\bar{x}}(1-\theta)^{n(1-\bar{x})}$$
times the prior
$$B^{-1}(\alpha, \beta)\, \theta^{\alpha-1}(1-\theta)^{\beta-1}.$$
This product is proportional to
$$\theta^{n\bar{x}+\alpha-1}(1-\theta)^{n(1-\bar{x})+\beta-1}.$$
We recognize this as the unnormalized density of a Beta$(n\bar{x}+\alpha, n(1-\bar{x})+\beta)$ distribution. So in this example, we did not need to compute $m(x_1, \ldots, x_n)$ to obtain the posterior.

As a specific case, suppose that we observe $n\bar{x} = 10$ in a sample of $n = 40$ and $\alpha = \beta = 1$, i.e., we have a uniform prior on $\theta$. Then the posterior of $\theta$ is given by the Beta$(11, 31)$ distribution. We plot the posterior density in Figure 7.1.3 as well as the prior.

[Figure 7.1.3: Prior (dashed line) and posterior densities (solid line) in Example 7.1.1.]

The spread of the posterior distribution gives us some idea of the precision of any
Note how much information the data have probability statements we make about added, as reected in the graphs of the prior and posterior densities. EXAMPLE 7.1.2 Location Normal Model Suppose that x1 unknown and 2 xn is a sample from an N 0 is known. The likelihood function is then given by 2 0 distribution, where R1 is L x1 xn exp n 2 2 0 x 2 Suppose we take the prior distribution of 0 The posterior density of 0 and 2 choice of to be an N 0 is then proportional to 2 0 for some specified 378 Section 7.1: The Prior and Posterior Distributions exp 1 2 2 0 2 exp 0 n 2 2 0 x 2 exp exp exp exp exp exp nx nx 7.1.2) We immediately recognize this, as a function of of an as being proportional to the density distribution. Notice that the posterior mean is a weighted average of the prior mean 0 and the sample mean x, with weights and respectively. This implies that the posterior mean lies between the prior mean and the sample mean. Furthermore, the posterior variance is smaller than the variance of the sample mean. So if the information expressed by the prior is accurate, inferences about based on the posterior will be more accurate than those based on the sample mean alone. Note 2 0 is — the less inuence the that the more diffuse the prior is — namely, the larger 2 1 then the ratio of the prior has. For example, when n 1 0 posterior variance to the sample mean variance is 20 21 0 95 So there has been a 5% improvement due to the use of prior information. 20 and 2 0 Chapter 7: Bayesian Inference 379 For example, suppose that 0 1 2 Then the prior is an N 0 2 distribution, while the posterior is an 2 and that for n observe x 1 0 10 we 2 0 2 0 N 1 2 10 1 1 0 2 10 1 1 2 1 1 2 10 1 N 1 1429 9 523 8 10 2 distribution. These densities are plotted in Figure 7.1.4. Notice that the posterior is quite concentrated compared to the prior, so we have learned a lot from the data. 1.2 1.0 0.8 0.6 0.4 0.2 ­5 ­4 ­3 ­2 ­1 0 1 2 3 4 5 x Figure 7.1.4: Plot of the N 0 2 prior (dashed line) and the N 1 1429 9 523 8 posterior (solid line) in Example 7.1.2. 10 2 S EXAMPLE 7.1.3 Multinomial Model Suppose we have a categorical response s that takes k possible values, say, s 1 1 observing its label. k . For example, suppose we have a bowl containing chips labelled one of k. A proportion i of the chips are labelled i , and we randomly draw a chip, When the i are unknown, the statistical model is given by where and : 0 k i 1 i 1 k and 1 k 1 Note that the parameter space is really only k k 1 namely, once we have determined k 1 ­dimensional because, for example, the 1 of the i 1 k 1 remaining value is specified. Now suppose we observe a sample s1 sn from this model. Let the frequency (count) of the ith category in the sample be denoted by xi Then, from Example 2.8.5, we see that the likelihood is given by L 1 k s1 sn x1 1 x2 2 xk k 380 Section 7.1: The Prior and Posterior Distributions For the prior we assume that sity (see Problem 2.7.13) given by 1 k 1 Dirichlet 1 2 k with den7.1.3) k 1 (recall that for i are nonnega­ tive constants chosen by the statistician to reect her beliefs about the unknown value of 1 corresponds to a uniform k . The choice distribution, as then (7.1.3) is constant on k 1). The 1 1 1 1 2 k k . The posterior density of 1 k 1 is then proportional to x1 1 1 1 x2 2 2 1 k 1 xk k for 1 ution of k 1 . From (7.1.3), we immediately deduce that the posterior distrib­ k 1 is Dirichlet x1 1 x2 2 xk k . EXAMPLE 7.1.4 Location­Scale Normal Model xn is a sample from an N Suppose that x1 0 are unknown. 
The likelihood function is then given by and 2 distribution, where R1 L 2 x1 xn 2 2 n 2 exp n 2 2 x 2 exp n 1 2 2 s2 Suppose we put the following prior on 2 . First, we specify that 2 N 0 2 2 0 i.e., the conditional prior distribution of ance 2 0 2. Then we specify the marginal prior distribution of given 2 is normal with mean 0 and vari­ 2 as 1 2 Gamma 0 0 . (7.1.4) Sometimes (7.1.4) is referred to by saying that values 0 2 0 0 and 0 are selected by the statistician to reect his prior beliefs. 2 is distributed inverse Gamma. The From this, we can deduce (see Section 7.5 for the full derivation) that the posterior distribution of 2 is given by and where 2 x1 xn x1 xn Gamma nx 0 2 0 (7.1.5) (7.1.6) (7.1.7) Chapter 7: Bayesian Inference 381 and n 1 x 0 n 2 0 2 from the posterior, we can make use of the method of composition (see Problem 2.10.13) by first generating 2 using (7.1.6) and then using (7.1.5) to generate We will discuss this further in Section 7.3. To generate a value (7.1.8) s2 2 0 2 1 2 n x 1 Notice that as 0 the conditional posterior distribution of 2 n distribution because N x i.e., as the prior on and x 1 2 0 n becomes increasingly diffuse, 2 converges in distribution to an (7.1.9) (7.1.10) given x 1 1 n Furthermore, as distribution to a Gamma 0 and 0 0 n 2 n 0 the marginal posterior of 1 1 s2 2 distribution because 2 converges in x n 1 s2 2 (7.1.11) Actually, it does not really seem to make sense to let 2 0 in as the prior does not converge to a proper probability the prior distribution of distribution. The idea here, however, is that we think of taking 0 large and 0 small, so that the posterior inferences are approximately those obtained from the limiting posterior. There is still a need to choose 0 however, even in the diffuse case, as the limiting inferences are dependent on this quantity. and 0 0 Summary of Section 7.1 Bayesian inference adds the prior probability distribution to the sampling model for the data as an additional ingredient to be used in determining inferences about the unknown value of the parameter. Having observed the data, the principle of conditional probability leads to the posterior distribution of the parameter as the basis for inference. Inference about marginal parameters is handled by marginalizing the full poste­ rior. EXERCISES 7.1.1 Suppose that S for the response s is given by the following table. 1 2 3 1 2 and the class of probability distributions s 1 1/2 1/3 3/4 s 2 1/2 2/3 1/4 f1 s f2 s f3 s 382 Section 7.1: The Prior and Posterior Distributions If we use the prior given by the table 1 1/5 2 3 2/5 2/5 then determine the posterior distribution of 7.1.2 In Example 7.1.1, determine the posterior mean and variance of for each possible sample of size 2. . 0 10 x 7.1.3 In Example 7.1.2, what is the posterior probability that 1 when 2 1 n 0 probability of this event. 7.1.4 Suppose that x1 unknown. If we use the prior distribution for then
determine the posterior distribution of 7.1.5 Suppose that x1 xn is a sample from a Poisson 0 and 2 0 xn . is positive, given that 10? Compare this with the prior given by the Gamma distribution with 0 distribution, 0 unknown. If the prior distribution of is a sample from a Uniform[0 is Gamma ] distribution with then obtain the form of . the posterior density of 7.1.6 Find the posterior mean and variance of See Problems 3.2.16 and 3.3.20.) 7.1.7 Suppose we have a sample i in Example 7.1.3 when k 3. (Hint: 6 56 6 39 3 30 3 03 5 31 5 62 5 10 2 45 8 24 3 71 4 14 2 80 7 43 6 82 4 75 4 09 7 95 5 84 8 44 9 36 2 1 2 . 2 distribution and we determine that a prior specified by Gamma 1 1 is appropriate. Determine the posterior distribution from an N N 3 4 2 of 7.1.8 Suppose that the prior probability of posterior probability of being in A is 0 80 (a) Explain what effect the data have had on your beliefs concerning the true value of being in a set A is 0 25 and the 2 being in A (b) Explain why a posterior probability is more relevant to report than is a prior proba­ bility. 7.1.9 Suppose you toss a coin and put a Uniform[0 4 0 6] prior on , the probability of getting a head on a single toss. (a) If you toss the coin n times and obtain n heads, then determine the posterior density of (b) Suppose the true value of put any probability mass around (c) What do you conclude from part (b) about how you should choose a prior ? 7.1.10 Suppose that for statistical model is, in fact, 0 99. Will the posterior distribution of 0 99 for any sample of n? ever f : R1 , we assign the prior density , where Now suppose that we reparameterize the model via the function : R1 R1 is differentiable and strictly increasing. (a) Determine the prior density of (b) Show that m x is the same whether we parameterize the model by or by Chapter 7: Bayesian Inference 383 : , where f , which is uniform on 7.1.11 Suppose that for statistical model we assign the prior probability function are interested primarily in making inferences about (a) Determine the prior probability distribution of (b) A uniform prior distribution is sometimes used to express complete ignorance about the value of a parameter. Does complete ignorance about the value of a parameter imply complete ignorance about a function of a parameter? Explain. 7.1.12 Suppose that for statistical model 1 0 1 2 3 , 2 Now suppose we Is this distribution uniform? [0 1] , we assign the prior density [0 1] Now suppose we are interested primarily in making , which is uniform on f : 2 inferences about (a) Determine the prior density of (b) A uniform prior distribution is sometimes used to express complete ignorance about the value of a parameter. Does complete ignorance about the value of a parameter imply complete ignorance about a function of a parameter? Explain. 2 Is this distribution uniform? COMPUTER EXERCISES 20 and x 7.1.13 In Example 7.1.2, when 2 generate a sample of 104 (or as large as possible) from the posterior distribution of and estimate the posterior probability that the coefficient of variation is greater than 0 125 Estimate the error in your 0.125, i.e., the posterior probability that approximation. 7.1.14 In Example 7.1.2, when 2 generate a sample of 104 (or as large as possible) from the posterior distribution of and estimate the posterior expectation of the coefficient of variation 0 the error in your approximation. 7.1.15 In Example 7.1.1, plot the prior and posterior densities on the same graph and compare them when n 3. 
(Hint: Calculate the logarithm of the posterior density and then exponentiate this. You will need the log­ gamma function defined by ln 20 and x Estimate 3 and 30 x 0 73 1 n 8 2 for 0 ) 2 0 2 0 1 0 PROBLEMS 2 dis­ 7.1.16 Suppose the prior of a real­valued parameter tribution. Show that this distribution does not converge to a probability distribution as is given by the N 0 (Hint: Consider the limits of the distribution functions.) : is a sample from f xn . Show that if we observe a further sample xn 1 7.1.17 Suppose that x1 and that we have a prior xn m , then the posterior you obtain from using the posterior xn as a prior, and then condition­ ing on xn 1 and xn m This is the Bayesian updating property. conditioning on x1 7.1.18 In Example 7.1.1, determine m x . If you were asked to generate a value from this distribution, how would you do it? (Hint: For the generation part, use the theorem of total probability.) xn m , is the same as the posterior obtained using the prior xn xn 1 x1 384 Section 7.2: Inferences Based on the Posterior 7.1.19 Prove that the posterior distribution depends on the data only through the value of a sufficient statistic. COMPUTER PROBLEMS 8 2 1 2 1 generate a sample of 104 (or as large as is feasible) from the posterior 2. Estimate the error 2 over 7.1.20 For the data of Exercise 7.1.7, plot the prior and posterior densities of 0 10 on the same graph and compare them. (Hint: Evaluate the logarithms of the densities first and then plot the exponential of these values.) 7.1.21 In Example 7.1.4, when 0 and s2 distribution of in your approximation. 7.1.22 In Example 7.1.4, when 0 and s2 distribution of in your approximation. 8 2 1 2 1 generate a sample of 104 (or as large as is feasible) from the posterior . Estimate the error 2 and estimate the posterior expectation of 2 and estimate the posterior probability that 20 x 20 DISCUSSION TOPICS 7.1.23 One of the objections raised concerning Bayesian inference methodology is that it is subjective in nature. Comment on this and the role of subjectivity in scientific investigations. 7.1.24 Two statisticians are asked to analyze a data set x produced by a system under I , while study. Statistician I chooses to use a sampling model statistician II chooses to use a sampling model g : I I Comment on the fact that these ingredients can be completely different and so the subsequent analyses completely different. What is the relevance of this for the role of subjectivity in scientific analyses of data? and prior and prior f : 7.2 Inferences Based on the Posterior In Section 7.1, we determined the posterior distribution of as a fundamental object of Bayesian inference. In essence, the principle of conditional probability asserts that s contains all the relevant information in the sampling the posterior distribution model and the data s about the unknown true value of : While this is a major step forward, it does not completely tell us how to make the types of inferences we discussed in Section 5.5.3. the prior f In particular, we must specify how to compute estimates, credible regions, and carry out hypothesis assessment — which is what we will do in this section. It turns out that there are often several plausible ways of proceeding, but they all have the common characteristic that they are based on the posterior. In general, we are interested in specifying inferences about a real­valued charac­ . 
One of the great advantages of the Bayesian approach is that are determined in the same way as inferences about the full para­ replacing the full posterior. , but with the marginal posterior distribution for teristic of interest inferences about meter Chapter 7: Bayesian Inference 385 This situation can be compared with the likelihood methods of Chapter 6, where it is not always entirely clear how we should proceed to determine inferences about based upon the likelihood. Still, we have paid a price for this in requiring the addition of another model ingredient, namely, the prior. So we need to determine the posterior distribution of in general, even if we have a closed­form expression for distribution of is discrete, the posterior probability function of This can be a difficult task s . When the posterior is given by 0 s s . : 0 When the posterior distribution of is absolutely continuous, we can often find a complementing function is 1–1, and such that the methods of Section 2.9.2 can be applied. Then, denoting the inverse of this transforma­ tion by the methods of Section 2.9.2 show that the marginal posterior distribution of has density given by so that 7.2.1) where J denotes the Jacobian derivative of this transformation (see Problem 7.2.35). Evaluating (7.2.1) can be difficult, and we will generally avoid doing so here. An example illustrates how we can sometimes avoid directly implementing (7.2.1) and still obtain the marginal posterior distribution of . EXAMPLE 7.2.1 Location­Scale Normal Model Suppose that x1 xn is a sample from an N and distribution for R1 0 are unknown, and we use the prior given in Example 7.1.4. The posterior 2 distribution, where 2 is then given by (7.1.5) and (7.1.6). Suppose we are primarily interested in 2 We see immediately that 2 is prescribed by (7.1.6) and thus have no further work to 2. We can use the 2 the marginal posterior of do, unless we want a form for the marginal posterior density of methods of Section 2.6 for this (see Exercise 7.2.4). If we want the marginal posterior distribution of 2 , then things are not quite so simple because (7.1.5) only prescribes the conditional posterior distribution given 2 We can, however, avoid the necessity to implement (7.2.1). Note that of (7.1.5) implies that Z n 1 2 0 x 1 2 2 x1 xn N 0 1 x is given in (7.1.7). Because this distribution does not involve where terior distribution of Z is independent of the posterior distribution of Gamma the definition of the general chi­squared distribution) and so, from (7.1.6), Gamma , then Y 2 X 2 2 1 2 2 the pos­ Now if X (see Problem 4.6.16 for 2 x 2 x1 xn 2 2 0 n 386 Section 7.2: Inferences Based on the Posterior x is given in (7.1.8). Therefore (using Problem 4.6.14), as we are dividing an n random variable where N 0 1 variable by the square root of an independent divided by its degrees of freedom, we conclude that the posterior distribution of is t 2 0 n . Equivalently, we can say the posterior distribution of is the same as . By (7.1.9), (7.1.10), and (7.1.11), we have that the posterior where T distri
bution of t 2 0 converges to the distribution of x n 2 0 1 n s n T as and 0 0. 0 In other cases, we cannot avoid the use of (7.2.1) if we want the marginal posterior For example, suppose we are interested in the posterior distribution of the 0 from the parameter space) density of coefficient of variation (we exclude the line given by 2 1 2 1 1 2 Then a complementing function to is given by 2 1 2 and it can be shown (see Section 7.5) that J 2 1 2 If we let 1 x1 given , and the posterior density of xn and x1 xn denote the posterior densities of , respectively, then, from (7.2.1), the marginal density of is given by 2 0 1 1 2 1 x1 xn x1 xn 1 2 d (7.2.2) Without writing this out (see Problem 7.2.22), we note that we are left with a rather messy integral to evaluate. In some cases, integrals such as (7.2.2) can be evaluated in closed form; in other cases, they cannot. While it is convenient to have a closed form for a density, often this is not necessary, as we can use Monte Carlo methods to approximate posterior Chapter 7: Bayesian Inference 387 probabilities and expectations of interest. We will return to this in Section 7.3. We should always remember that our goal, in implementing Bayesian inference methods, is not to find the marginal posterior densities of quantities of interest, but rather to have a computational algorithm that allows us to implement our inferences. Under fairly weak conditions, it can be shown that the posterior distribution of converges, as the sample size increases, to a distribution degenerate at the true value. This is very satisfying, as it indicates that Bayesian inference methods are consistent. 7.2.1 Estimation Suppose now that we want to calculate an estimate of a characteristic of interest . We base this on the posterior distribution of this quantity. There are several different approaches to this problem. Perhaps the most natural estimate is to obtain the posterior density (or probability i.e., the point where the and use the posterior mode function when relevant) of posterior probability or density function of takes its maximum. In the discrete case, this is the value of with the greatest posterior probability; in the continuous case, it is the value that has the greatest amount of posterior probability in short intervals containing it. To calculate the posterior mode, we need to maximize s as a function of Note that it is equivalent to maximize m s s so that we do not need to compute the inverse normalizing constant to implement this. In fact, we can conveniently choose to maximize any function that is a 1–1 increasing function of s and get the same answer. In general, s may not have a unique mode, but typically there is only one. An alternative estimate is commonly used and has a natural interpretation. This is given by the posterior mean E s , is symmetrical about its whenever this exists. When the posterior distribution of mode, and the expectation exists, then the posterior expectation is the same as the posterior mode; otherwise, these estimates will be different. If we want the estimate to reect where the central mass of probability lies, then in cases where s is highly skewed, perhaps the mode is a better choice than the mean. We will see in Chapter 8, however, that there are other ways of justifying the posterior mean as an estimate. We now consider some examples. 
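Before turning to them, note that both estimates are straightforward to compute once the posterior has been identified. The following minimal sketch assumes the Beta posterior that arises in Example 7.1.1; the parameter values used are purely illustrative.

```python
from scipy.stats import beta
from scipy.optimize import minimize_scalar

# Posterior mean and mode under an assumed Beta(a, b) posterior (the form that
# arises in Example 7.1.1); the values a = 11, b = 31 are purely illustrative.
a, b = 11, 31

post_mean = a / (a + b)                  # E(theta | data)

# posterior mode found by numerically maximizing the log posterior density;
# for a Beta(a, b) with a, b > 1 this equals (a - 1) / (a + b - 2)
res = minimize_scalar(lambda t: -beta.logpdf(t, a, b),
                      bounds=(1e-6, 1 - 1e-6), method="bounded")
post_mode = res.x

print(post_mean, post_mode)              # approximately 0.262 and 0.25
```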
EXAMPLE 7.2.2 Bernoulli Model Suppose we observe a sample x1 [0 1] unknown and we place a Beta the posterior distribution of the characteristic of interest is xn from the Bernoulli distribution with to be Beta nx prior on . In Example 7.1.1, we determined . Let us suppose that n 1 x The posterior expectation of is given by 388 Section 7.2: Inferences Based on the Posterior xn n nx n n n 1 x nx nx 0 nx x1 1 0 nx nx nx n When we have a uniform prior, i.e., 1 , the posterior expectation is given by E x nx n 1 2 To determine the posterior mode, we need to maximize ln nx 1 1 n 1 x 1 nx 1 ln n 1 x 1 ln 1 This function has first derivative and second derivative nx 1 n 1 x 1 nx 1 n 1 x 2 1 2 1 1 Setting the first derivative equal to 0 and solving gives the solution nx n 1 2 1 we see that the second derivative is always negative, and so 1 Now, if 1 is the unique posterior mode. The restriction on the choice of that the prior has a mode in 0 1 rather than at 0 or 1 Note that when namely, when we put a uniform prior on same as the maximum likelihood estimate (MLE). The posterior is highly skewed whenever nx the posterior mode is and n 1 are far apart (plot Beta densities to see this). Thus, in such a case, we might consider the posterior mode as a more sensible estimate of . Note that when n is large, the mode and the mean will be very close together and in fact very close to the MLE x x 1 implies 1 1 x. This is the EXAMPLE 7.2.3 Location Normal Model Suppose that x1 unknown and 2 us suppose, that the characteristic of interest is xn is a sample from an N 0 is known, and we take the prior distribution on to be N 2 0 distribution, where R1 is 2 0 . Let Chapter 7: Bayesian Inference 389 In Example 7.1.2 we showed that the posterior distribution of is given by the distribution. Because this distribution is symmetric about its mode, and the mean exists, the posterior mode and mean agree and equal This is a weighted average of the prior mean and the sample mean and lies between these two values. When n is large, we see that this estimator is approximately equal to the sample mean x which we also know to be the MLE for this situation Furthermore, when we take the prior to be very diffuse, namely, when 2 0 is very large, then again this estimator is close to the sample mean. Also observe that the ratio of the sampling variance of x to the posterior variance of is is always greater than 1. The closer 2 0 is to 0, the larger this ratio is. Furthermore, as 2 0 0 the Bayesian estimate converges to 0 If we are pretty confident that the population mean is close to the prior mean 0 we will take 2 0 small so that the bias in the Bayesian estimate will be small and its variance will be much smaller than the sampling variance of x In such a situation, the Bayesian estimator improves on accuracy over the sample mean. Of course, if we are not very confident that is close to the prior mean 0 then we choose a large value for 2 0 and the Bayesian estimator is basically the MLE. EXAMPLE 7.2.4 Multinomial Model Suppose we have a sample s1 sn and we place a Dirichlet 1 2 k 1 is then distribution of 1 from the model discussed in Example 7.1.3 k 1 . The posterior k distribution on 1 Dirichlet x1 1 x2 2 xk k , where xi is the number of responses in the ith category. Now suppose we are interested in estimating a response is in the first category. 
the probability that It can be shown (see Problem 7.2.25) that, if 1 1 k 1 is distributed Dirichlet 1 2 k then i is distributed where distribution of i 2 1 1 is Dirichlet i i Beta i i k i This result implies that the marginal posterior Beta x1 1 x2 xk 2 k . 390 Section 7.2: Inferences Based on the Posterior Then, assuming that each i 1 and using the argument in Example 7.2.2 and x1 xk n, the marginal posterior mode of 1 is 1 n x1 2 When the prior is the uniform, namely, 1 1 n k k 1 then 1 1 1 x1 k 2 As in Example 7.2.2, we compute the posterior expectation to be E 1 x x1 1 n 1 k The posterior distribution is highly skewed whenever x1 1 and x2 xk 2 k are far apart. From Problem 7.2.26, we have that the plug­in MLE of 1 is x1 n When n is large, the Bayesian estimates are close to this value, so there is no conict between the estimates. Notice, however, that when the prior is uniform, then 1 k, hence the plug­in MLE and the Bayesian estimates will be quite different when k is large relative to n. In fact, the posterior mode will always be smaller than the plug­in MLE when k 0 This is a situation in which the Bayesian and frequentist approaches to inference differ. 2 and x1 k At this point, the decision about which estimate to use is left with the practitioner, as theory does not seem to provide a clear answer. We can be comforted by the fact that the estimates will not differ by much in many contexts of practical importance. EXAMPLE 7.2.5 Location­Scale Normal Model Suppose that x1 xn is a sample from an N and that the characteristic of interest is R1 0 are unknown, and we use the prior given in Example 7.1.4. Let us suppose 2 distribution, where 2 . In Example 7.2.1, we derived the marginal posterior distribution of to be the same as the distribution of where T t n 2 0 . This is a t n 2 0 distribution relocated to have its mode at x and rescaled by the factor So the marginal posterior mode of is x n 1 1 2 0 nx 0 2 0 Chapter 7: Bayesian Inference 391 , provided that n Because a t distribution is symmetric about its mode, this is also the posterior mean of 1 (see x is a 1 as a t Problem 4.6.16) This will always be the case as the sample size n weighted average of the prior mean 0 and the sample average x distribution has a mean only when 1 Again, 2 0 The marginal posterior mode and expectation can also be obtained for 2 These computations are left to the reader (see Exercise 7.2.4). 2 One issue that we have not yet addressed is how we will assess the accuracy of Bayesian estimates. Naturally, this is based on the posterior distribution and how con­ centrated it is about the estimate being used. In the case of the posterior mean, this means that we compute the posterior variance as a measure of spread for the posterior distribution of about its mean. For the posterior mode, we will discuss this issue further in Section 7.2.3. EXAMPLE 7.2.6 Posterior Variances In Example 7.2.2, the posterior variance of is given by (see Exercise 7.2.6) nx n x n 1 2 n 1 Notice that the posterior variance converges to 0 as n In Example 7.2.3, the posterior variance is given by 1 the posterior variance converges to 0 as 2 variance of x, as 0 0 and converges to 2 2 0 n 2 1. Notice that 0 0 n the sampling 2 0 In Example 7.2.4, the posterio
r variance of 1 is given by (see Exercise 7.2.7) x1 1 x2 xk Notice that the posterior variance converges to 0 as n In Example 7.2.5, the posterior variance of is given by (see Problem 7.2.28 provided n 2 0 2 because the variance of a t distribution is 2 when 2 (see Problem 4.6.16). Notice that the posterior variance goes to 0 as n 7.2.2 Credible Intervals A credible interval, for a real­valued parameter that we believe will contain the true value of we specify a probability and then find an interval C s satisfying is an interval C s [l s u s ] As with the sampling theory approach7.2.3) We then refer to C s as a ­credible interval for 392 Section 7.2: Inferences Based on the Posterior Naturally, we try to find a s is as possible, and such that C s is as short as possible. This leads to the as close to consideration of highest posterior density (HPD) intervals, which are of the form ­credible interval C s so that C s C s : s c , s is the marginal posterior density of where and where c is chosen as large as possible so that (7.2.3) is satisfied. In Figure 7.2.1, we have plotted an example of an HPD interval for a given value of c   | s) c [ l(s) ] u(s)  Figure 7.2.1: An HPD interval C s [l s u s ] : s c Clearly, C s contains the mode whenever c max length of an HPD interval as a measure of the accuracy of the mode of estimator of . The length of a 0 95­credible interval for purpose as the margin of error does with confidence intervals. Consider now some applications of the concept of credible interval. s . We can take the s as an will serve the same EXAMPLE 7.2.7 Location Normal Model Suppose that x1 unknown and 2 Example 7.1.2, we showed that the posterior distribution of 0 is known, and we take the prior distribution on xn is a sample from an N 2 0 distribution, where to be N 0 is given by the R1 is 2 0 . In distribution. Since this distribution is symmetric about its mode (also mean) est ­HPD interval is of the form , a short­ 1 2 c, 1 2 0 n 2 0 Chapter 7: Bayesian Inference 393 where c is such that Since x1 xn c x1 xn x1 xn we have function (cdf). This immediately implies that c given by c , where is the standard normal cumulative distribution ­HPD interval is 2 and the Note that as 2 0 interval converges to the interval namely, as the prior becomes increasingly diffuse, this x z 1 0 n 2 which is also the a diffuse normal prior, the Bayesian and frequentist approaches agree. ­confidence interval derived in Chapter 6 for this problem. So under EXAMPLE 7.2.8 Location­Scale Normal Model Suppose that x1 xn is a sample from an N and 7.2.1, we derived the marginal posterior distribution of R1 0 are unknown, and we use the prior given in Example 7.1.4. In Example 2 distribution, where to be the same as where T t 2 0 ­HPD interval is of the form n . Because this distribution is symmetric about its mode , 2 0 n n 394 Section 7.2: Inferences Based on the Posterior where c satisfies G2 0 n c G2 0 n c . x1 xn c 2 0 1 2 c x1 xn Here, G2 0 n is the t 2 0 n cdf, and therefore c t 1 2 2 0 n . Using (7.1.9), (7.1.10), and (7.1.11) we have that this interval converges to the interval 0 and 0 as interval we obtained for identical Note that this is a little different from the ­confidence in Example 6.3.8, but when 0 n is small, they are virtually In the examples we have considered so far, we could obtain closed­form expres­ sions for the HPD intervals. In general, this is not the case. 
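When a closed form is available, as in Example 7.2.7, the HPD interval is easy to compute directly from the posterior mean and standard deviation. A minimal sketch follows; the data, the known standard deviation, and the prior hyperparameters are assumptions made purely for illustration. Without a closed form, the cutoff defining the HPD interval has to be found by a numerical search instead.

```python
import numpy as np
from scipy.stats import norm

# gamma-HPD interval for mu in the location normal model of Example 7.2.7
# (hypothetical data and hyperparameters).
x = np.array([5.2, 4.8, 6.1, 5.5, 4.9, 5.7, 5.3, 5.0])
n, xbar = len(x), x.mean()
sigma0, mu0, tau0 = 1.0, 5.0, 2.0        # known sigma_0 and N(mu_0, tau_0^2) prior

post_prec = 1 / tau0**2 + n / sigma0**2  # posterior precision
post_mean = (mu0 / tau0**2 + n * xbar / sigma0**2) / post_prec
post_sd = post_prec ** -0.5

gamma = 0.95
z = norm.ppf((1 + gamma) / 2)
hpd = (post_mean - z * post_sd, post_mean + z * post_sd)  # symmetric posterior, so HPD = central interval
print(hpd)
```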
In such situations, we have to resort to numerical methods to obtain the HPD intervals, but we do not pursue this topic further here. l is a 1 ­credible interval for method of obtaining a There are other methods of deriving credible intervals. For example, a common r ] where 2 1 quantile for this distribution. Alternatively, we could form one­sided intervals. These credible intervals avoid the more extensive computations that may be needed for HPD intervals. 2 quantile for the posterior distribution of is to take the interval [ and r is a 1 l 7.2.3 Hypothesis Testing and Bayes Factors Suppose now that we want to assess the evidence in the observed data concerning the hypothesis H0 : 0 It seems clear how we should assess this, namely, compute the posterior probability 0 s . (7.2.4) If this is small, then conclude that we have evidence against H0 We will see further justification for this approach in Chapter 8. EXAMPLE 7.2.9 Suppose we want to assess the evidence concerning whether or not IA then we are assessing the hypothesis H0 : 1 and A If we let So in this case, we simply compute the posterior probability that A 1 s A s . Chapter 7: Bayesian Inference 395 There can be a problem, however, with using (7.2.4) to assess a hypothesis. For when the prior distribution of 0 for all data s. Therefore, we would always find evidence against H0 no matter what is observed, which does not make sense. In general, if the value 0 is assigned small prior probability, then it can happen that this value also has a small posterior probability no matter what data are observed. is absolutely continuous, then 0 s , then this is evidence that H0 is false. The value To avoid this problem, there is an alternative approach to hypothesis assessment that 0 is a surprising value for the posterior distribution is sometimes used. Recall that, if 0 is surprising whenever it of . A region of low occurs in a region of low probability for the posterior distribution of probability will correspond to a region where the posterior density s is relatively low. So, one possible method for assessing this is by computing the (Bayesian) P­value : s 0 s s . (7.2.5) s is unimodal, (7.2.5) corresponds to computing a tail probability. 0 is surprising, at least with respect to our Note that when If the probability (7.2.5) is small, then posterior beliefs. When we decide to reject H0 whenever the P­value is less than 1 then this approach is equivalent to computing a 0 is not in the region. whenever EXAMPLE 7.2.10 (Example 7.2.9 continued) Applying the P­value approach to this problem, we see that terior given by the Bernoulli has pos­ I A s is defined by distribution. Therefore, and rejecting H0 ­HPD region for A s Ac s and 0 s 1 Now 0 A s 1 so : Ac s Ac s . Therefore, (7.2.5) becomes : Ac s Ac s , so again we have evidence against H0 whenever A s is small. We see from Examples 7.2.9 and 7.2.10 that computing the P­value (7.2.5) is essen­ takes only two takes more than two values, however, and the tially equivalent to using (7.2.4), whenever the marginal parameter values. This is not the case whenever statistician has to decide which method is more appropriate in such a context. As previously noted, when the prior distribution of is absolutely continuous, then (7.2.4) is always 0, no matter what data are observed. As the following example illustrates, there is also a difficulty with using (7.2.5) in such a situation. 
EXAMPLE 7.2.11 Suppose that the posterior distribution of 1 and we want to assess H0 : 0 is Beta 2 1 , i.e., s 3 4 Then s 2 when 3 4 s if and 396 Section 7.2: Inferences Based on the Posterior only if 3 4 and (7.2.5) is given by 3 4 0 2 d 9 16 On the other hand, suppose we make a 1–1 transformation to 9 16 The posterior distribution of the hypothesis is now H0 : Since the posterior density of is constant, this implies that the posterior density at every possible value is less than or equal to the posterior density evaluated at 9 16. Therefore, (7.2.5) equals 1, and we would never find evidence against H0 using this parameterization is Beta 1 1 2 so that This example shows that our assessment of H0 via (7.2.5) depends on the parame­ terization used, which does not seem appropriate. The difficulty in using (7.2.5), as demonstrated in Example 7.2.11, only occurs with continuous posterior distributions. So, to avoid this problem, it is often recommended that the hypothesis to be tested always be assigned a positive prior probability. As demonstrated in Example 7.2.10, the approach via (7.2.5) is then essentially equivalent to using (7.2.4) to assess H0. In problems where it seems natural to use continuous priors, this is accomplished by to be a mixture of probability distributions, as discussed in Section taking the prior 2.5.4, namely, the prior distribution equals p 1 1 where and 1 0 2 is continuous at 1 and 0. Then 2 p 0 2, 0, i.e., 1 is degenerate at is the prior probability that H0 is true. The prior predictive for the data s is then given by m s pm1 s 1 p m2 s , i (see Problem 7.2.34) This im­ where mi is the prior predictive obtained via prior plies (see Problem 7.2.34) that the posterior probability measure for when using the prior is A s pm1 s 1 p m2 s pm1 s 1 A s 1 pm1 s p m2 s 1 p m2 s 2 A s (7.2.6) is the posterior measure obtained via the prior i s where mixture of the posterior probability measures abilities 1 s and 2 i . Note that this a s with mixture prob­ pm1 s 1 and p m2 s 1 pm1 s p m2 s 1 p m2 s . pm1 s Chapter 7: Bayesian Inference 397 s is degenerate at Now 1 must be degenerate at that point too) and 0 (if the prior is degenerate at a point then the posterior 2 s is continuous at 0 Therefore, 0 s pm1 s 1 , p m2 s pm1 s (7.2.7) and we use this probability to assess H0 The following example illustrates this approach. to be an N 0 xn is a sample from an N 2 0 distribution, where 0 is known, and we want to assess the hypothesis H0 : R1 0. As 2 0 distribution. Given 0 it seems reasonable to place the mode of 2 0 then reects 2 denote this prior probability EXAMPLE 7.2.12 Location Normal Model Suppose that x1 is unknown and 2 in Example 7.1.2, we will take the prior for that we are assessing whether or not the prior at the hypothesized value. The choice of the hyperparameter the degree of our prior belief that H0 is true. We let 2 is the N 0 measure, i.e., If we use 2 as our prior, then, as shown in Example 7.1.2, the posterior distribution is absolutely continuous. This implies that (7.2.4) is 0. So, following the preceding 2 obtained by mixing 1 and so 1 is p 1 1 degenerate at
p. As shown in Example 7.1.2, under 1 p 0. Then 2 the posterior distribution of of discussion, we consider instead the prior 2 0 probability measure. 2 with a probability measure while the posterior under evaluate (7.2.7), and we will do this in Example 7.2.13. 1 is the distribution degenerate at 0. We now need to Bayes Factors Bayes factors comprise another method of hypothesis assessment and are defined in terms of odds. Definition 7.2.1 In a probability model with sample space S and probability mea­ S is defined to be P A P Ac namely, the sure P the odds in favor of event A ratio of the probability of A to the probability of Ac Obviously, large values of the odds in favor of A indicate a strong belief that A is true. Odds represent another way of presenting probabilities that are convenient in certain contexts, e.g., horse racing. Bayes factors compare posterior odds with prior odds. Definition 7.2.2 The Bayes factor B FH0 in favor of the hypothesis H0 : 0 is defined, whenever the prior probability of H0 is not 0 or 1, to be the ratio of the posterior odds in favor of H0 to the prior odds in favor of H0 or B FH0 1 0 s 0 s 1 0 0 (7.2.8) 398 Section 7.2: Inferences Based on the Posterior So the Bayes factor in favor of H0 is measuring the degree to which the data have changed the odds in favor of the hypothesis. If B FH0 is small, then the data are provid­ ing evidence against H0 and evidence in favor of H0 when B FH0 is large. There is a relationship between the posterior probability of H0 being true and B FH0. From (7.2.8), we obtain 0 s r B FH0 1 r B FH0 , (7.2.9) where r 1 0 0 is the prior odds in favor of H0 So, when B FH0 is small, then small and conversely. 0 s is One reason for using Bayes factors to assess hypotheses is the following result. This establishes a connection with likelihood ratios. Theorem 7.2.1 If the prior 1 is a mixture 1, and we want to assess the hypothesis H0 : 2 AC p 1 1 p 2, where A then 1 A B FH0 m1 s m2 s where mi is the prior predictive of the data under i PROOF Recall that, if a prior concentrates all of its probability on a set, then the posterior concentrates all of its probability on this set, too. Then using (7.2.6), we have B FH0 A s 1 A s 1 A A pm1 s p 1 p m2 s 1 p m1 s m2 s Interestingly, Theorem 7.2.1 indicates that the Bayes factor is independent of p We note, however, that it is not immediately clear how to interpret the value of B FH0. In particular, how large does B FH0 have to be to provide strong evidence in favor of H0? One approach to this problem is to use (7.2.9), as this gives the posterior probability of H0, which is directly interpretable. So we can calibrate the Bayes factor. Note, however, that this requires the specification of p. EXAMPLE 7.2.13 Location Normal Model (Example 7.2.12 continued) We now compute the prior predictive under x1 xn given equals 2 We have that the joint density of 2 2 0 n 2 exp n 1 2 2 0 s2 exp n 2 2 0 x 2 Chapter 7: Bayesian Inference 399 and so m2 x1 xn 2 n 2 exp 2 0 s2 exp exp exp 1 n 2 2 0 s2 1 0 2 1 2 exp n 2 2 0 x 2 exp 1 2 2 0 2 0 d Then using (7.1.2), we have 1 0 2 1 2 exp 1 0 exp exp Therefore, m2 x1 xn n 2 2 0 1 n 2 0 nx 2 2 0 x 2 exp 7.2.10) 2 2 0 n 2 exp 1 n 2 2 0 s2 exp 1 2 2 0 2 0 nx exp Because is given by 1 is degenerate at 0 it is immediate that the prior predictive under 1 m1 x1 xn 2 2 0 n 2 exp 1 n 2 2 0 s2 exp n 2 2 0 x 2 0 Therefore, B FH0 equals divided by (7.2.10). 
exp n 2 2 0 x 2 0 400 Section 7.2: Inferences Based on the Posterior For example, suppose that 0 0 2 0 2 2 0 1 n 10 and x 0 2 Then exp n 2 2 0 x 2 0 exp 10 2 0 2 2 0 81873 while (7.2.10) equals 1 2 exp 1 2 1 2 0 21615 1 10 10 0 2 2 exp 10 0 2 2 2 10 1 2 1 2 So 0 81873 0 21615 which gives some evidence in favor of H0 : 1 2 so that we are completely indifferent between H0 being true and not being true, then r 0. If we suppose that p 1 and (7.2.9) gives 3 7878 B FH0 0 x1 xn 3 7878 1 3 7878 0 79114 indicating a large degree of support for H0. 7.2.4 Prediction Prediction problems arise when we have an unobserved response value t in a sample S Furthermore, we have the statistical model space T and observed response s P : for t given for s and the conditional statistical model Q s. We assume that both models have the same true value of The objective is to T of the unobserved value t based on the observed data construct a prediction t s s The value of t could be unknown simply because it represents a future outcome. s : If we denote the conditional density or probability function (whichever is relevant) s t s , the joint distribution of is given by of t by q q t s f s . Then, once we have observed s (assume here that the distributions of and t are ab­ solutely continuous; if not, we replace integrals by sums), the conditional density of t , given s is dt d Then the marginal posterior distribution of t known as the posterior predictive of t is Chapter 7: Bayesian Inference 401 Notice that the posterior predictive of t is obtained by averaging the conditional density of t given s with respect to the posterior distribution of Now that we have obtained the posterior predictive distribution of t we can use it to select an estimate of the unobserved value. Again, we could choose the posterior mode T tq t s dt as our prediction, whichever is t or the posterior expectation E t x deemed most relevant. EXAMPLE 7.2.14 Bernoulli Model Suppose we want to predict the next independent outcome Xn 1 having observed . Here, the future a sample x1 observation is independent of the observed data. The posterior predictive probability function of Xn 1 at t is then given by from the Bernoulli Beta and xn q t x1 1 xn nx nx nx n n 1 x n nx nx nx 1 1 n 1 x 1 d x nx which is the probability function of a Bernoulli nx n Using the posterior mode as the predictor, i.e., maximizing q t x1 distribution. xn for t leads to the prediction t 1 if nx n 0 otherwise. n 1 x n The posterior expectation predictor is given by E t x1 xn nx n Note that the posterior mode takes a value in 0 1 , and the future Xn 1 will be in this set, too. The posterior mean can be any value in [0 1]. EXAMPLE 7.2.15 Location Normal Model Suppose that x1 xn is a sample from an N 2 0 distribution, where R1 is unknown and 2 0 is known, and we use the prior given in Example 7.1.2. Suppose we want to predict a future observation Xn 1, but this time Xn 1 is from the 7.2.11) 402 Section 7.2: Inferences Based on the Posterior distribution. So, in this case, the future observation is not independent of the observed data, but it is independent of the parameter. A simple calculation (see Exercise 7.2.9) shows that (7.2.11) is the posterior predictive distribution of t and so we would predict t by x, as this is both the posterior mode and mean. We can also construct a s : for a future value t from the model q , where s Q s is the posterior predictive measure for t One approach to constructing C s is to apply the HPD concept to q t s . 
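As a computational aside (not part of the original text), the following minimal Python sketch shows how the posterior predictive of Example 7.2.14 and an HPD-style gamma-prediction region for X_{n+1} might be computed in the Bernoulli model. The sample values and the Beta(2, 2) hyperparameters are hypothetical, chosen only for illustration, and the region construction anticipates Example 7.2.16 below.

```python
# Posterior predictive of X_{n+1} in the Bernoulli(theta) model with a
# Beta(alpha, beta) prior (Example 7.2.14): X_{n+1} | x ~ Bernoulli(q1),
# where q1 = (n*xbar + alpha) / (n + alpha + beta).
alpha, beta = 2.0, 2.0                      # hypothetical prior hyperparameters
x = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]          # hypothetical observed sample
n, n_xbar = len(x), sum(x)

q1 = (n_xbar + alpha) / (n + alpha + beta)  # P(X_{n+1} = 1 | x)
q0 = 1.0 - q1                               # P(X_{n+1} = 0 | x)

# Point predictors: posterior predictive mode and posterior predictive mean.
mode = 1 if q1 >= q0 else 0
mean = q1

# HPD-style gamma-prediction region: take the most probable value first and
# add the other value only if it is needed to reach predictive content gamma.
gamma = 0.95
region = [mode] if max(q0, q1) >= gamma else [0, 1]

print(f"P(X_n+1 = 1 | x) = {q1:.3f}, mode = {mode}, mean = {mean:.3f}, region = {region}")
```

With these particular numbers, max(q0, q1) is well below 0.95, so the 0.95-prediction region is all of {0, 1}, which is the uninformative case discussed in Example 7.2.16.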
We illustrate this via several examples. A ­prediction region for t satisfies Q C s ­prediction region C s EXAMPLE 7.2.16 Bernoulli Model (Example 7.2.14 continued) Suppose we want a we derived the posterior predictive distribution of Xn 1 to be ­prediction region for a future value Xn 1. In Example 7.2.14, Bernoulli nx n Accordingly, a ­prediction region for t, derived via the HPD concept, is given by C x1 xn 0 1 if max 1 0 if if nx n max max nx n nx nx n n 1 x n We see that this predictive region contains just the mode or encompasses all possible values for Xn 1. In the latter case, this is not an informative inference. EXAMPLE 7.2.17 Location Normal Model (Example 7.2.15 continued) Suppose we want a ­prediction interval for a future observation Xn 1 from distribution. As this is also the posterior predictive distribution of Xn 1 and is sym­ metric about x a ­prediction interval for Xn 1 derived via the HPD concept, is given by Summary of Section 7.2 Based on the posterior distribution of a parameter, we can obtain estimates of the parameter (posterior modes or means), construct credible intervals for the parameter (HPD intervals), and assess hypotheses about the parameter (posterior probability of the hypothesis, Bayesian P­values, Bayes factors). Chapter 7: Bayesian Inference 403 A new type of inference was discussed in this section, namely, prediction prob­ lems where we are concerned with predicting an unobserved value from a sam­ pling model. EXERCISES m 1 in Example 7.2.4. 2. 2. (Hint: 2 to determine the mode.) in Example 7.2.2 is as given in Example 7.2.6. 1 in Example 7.2.4 is as given in Example 7.2.6. 7.2.1 For the model discussed in Example 7.1.1, derive the posterior mean of where m 0 7.2.2 For the model discussed in Example 7.1.2, determine the posterior distribution of the third quartile 0z0 75 Determine the posterior mode and the posterior expectation of 7.2.3 In Example 7.2.1, determine the posterior expectation and mode of 1 7.2.4 In Example 7.2.1, determine the posterior expectation and mode of You will need the posterior density of 7.2.5 Carry out the calculations to verify the posterior mode and posterior expectation of 7.2.6 Establish that the variance of the Prove that this goes to 0 as n 7.2.7 Establish that the variance of Prove that this goes to 0 as n 7.2.8 In Example 7.2.14, which of the two predictors derived there do you find more sensible? Why? 7.2.9 In Example 7.2.15, prove that the posterior predictive distribution for Xn 1 is as stated. (Hint: Write the posterior predictive distribution density as an expectation.) 7.2.10 Suppose that x1 distribution, where 0 . Determine the mode of posterior distribution of . 7.2.11 Suppose that x1 distribution 0 . Determine the mode of poste­ where rior distribution of a future independent observation Xn 1. Also determine the poste­ rior expectation of Xn 1 and posterior variance of Xn 1. (Hint: Problems 3.2.16 and 3.3.20.) 7.2.12 Suppose that in a population of students in a course with a large enrollment, the mark, out of 100, on
a final exam is approximately distributed N 9 The instructor places the prior N 65 1 on the unknown parameter. A sample of 10 marks is obtained as given below. is a sample from the Exponential Gamma . Also determine the posterior expectation and posterior variance of is a sample from the Exponential 0 is unknown and 0 is unknown and Gamma xn xn 0 0 46 68 34 86 75 56 77 73 53 64 (a) Determine the posterior mode and a 0.95­credible interval for interval tell you about the accuracy of the estimate? (b) Use the 0.95­credible interval for (c) Suppose we assign prior probability 0 5 to 0 5 1 compute the posterior probability of the null hypothesis. to test the hypothesis H0 : 1 is degenerate at 0 5 2, where 65 and 65. Using the mixture prior 2 is the N 65 1 distribution, . What does this 65. 404 Section 7.2: Inferences Based on the Posterior 0 is appropriate. 2 , where 2 Gamma 65 when using the mixture prior. 0 is known and 2 0 (d) Compute the Bayes factor in favor of H0 : 7.2.13 A manufacturer believes that a machine produces rods with lengths in centime­ ters distributed N 0 0 is unknown, and that the prior distribution 1 (a) Determine the posterior distribution of (b) Determine the posterior mean of 2 (c) Indicate how you would assess the hypothesis H0 : 0. 7.2.14 Consider the sampling model and prior in Exercise 7.1.1. (a) Suppose we want to estimate 1 Determine the based upon having observed s posterior mode and posterior mean. Which would you prefer in this situation? Explain why. (b) Determine a 0.8 HPD region for (c) Suppose instead interest was in 2 based on a sample x1 based on having observed s 1 Identify the prior distribution of 1 Determine Identify the posterior distribution of based on having observed s I 1 2 xn . 2. 2 1 P A a 0.5 HPD region for 7.2.15 For an event A, we have that P Ac (a) What is the relationship between the odds in favor of A and the odds in favor of Ac? (b) When A is a subset of the parameter space, what is the relationship between the Bayes factor in favor of A and the Bayes factor in favor of Ac? 7.2.16 Suppose you are told that the odds in favor of a subset A are 3 to 1. What is the probability of A? If the Bayes factor in favor of A is 10 and the prior probability of A is 1/2, then determine the posterior probability of A 7.2.17 Suppose data s is obtained. Two statisticians analyze these data using the same sampling model but different priors, and they are asked to assess a hypothesis H0 Both statisticians report a Bayes factor in favor of H0 equal to 100. Statistician I assigned prior probability 1/2 to H0 whereas statistician II assigned prior probability 1/4 to H0 Which statistician has the greatest posterior degree of belief in H0 being true? 7.2.18 You are told that a 0.95­credible interval, determined using the HPD criterion, for a quantity 3 3 2 6 If you are asked to assess the hypothesis H0 : 0 then what can you say about the Bayesian P­value? Explain your answer. 7.2.19 What is the range of possible values for a Bayes factor in favor of A Under what conditions will a Bayes factor in favor of A take its smallest value? is given by ? PROBLEMS 7.2.20 Suppose that x1 0 is unknown, and we have terior distribution of 7.2.21 Suppose that x1 xn is a sample from the Uniform[0 0 ] distribution, where 0 . Determine the mode of the pos­ Gamma . (Hint: The posterior is not differentiable at x n .) 0 1 is unknown, and we have Uniform[0 1]. 
Determine the form of the xn is a sample from the Uniform[0 ] distribution, where ­ credible interval for 7.2.22 In Example 7.2.1, write out the integral given in (7.2.2). based on the HPD concept. Chapter 7: Bayesian Inference 405 7.2.23 (MV) In Example 7.2.1, write out ithe integral that you would need to evaluate if you wanted to compute the posterior density of the third quartile of the population distribution, i.e., 7.2.24 Consider the location normal model discussed in Example 7.1.2 and the popu­ . lation coefficient of variation (a) Show that the posterior expectation of write the posterior expectation as does not exist. (Hint: Show that we can z0 75. 0 0 bz a 1 2 e z2 2 dz i1 i2 k 1 k 2 ik 1 k . Beta a b ) k 1 1 Dirichlet i . (Hint: Use parts (b) and (c).) k . (Hint: In the inte­ k 2 .) k . (Hint: Use part (a).) k . Prove that 0 and show that this integral does not exist by considering the behavior of is a permutation of 1 ik . (Hint: What is the Jacobian of this transformation?) k 1 Dirichlet k 1 make the transformation k 1 Beta ik where b the integrand at z (b) Determine the posterior density of (c) Show that you can determine the posterior mode of by evaluating the posterior density at two specific points (Hint: Proceed by maximizing the logarithm of the pos­ terior density using the methods of calculus.) 7.2.25 (MV) Suppose that (a) Prove that gral to integrate out (b) Prove that (c) Suppose i1 Dirichlet i1 (d) Prove that 1 is given by x1 n i.e., 7.2.26 (MV) In Example 7.2.4, show that the plug­in MLE of find the MLE of k and determine the first coordinate. (Hint: Show there is a unique solution to the score equations and then use the facts that the log­likelihood is bounded above and goes to 7.2.27 Compare the results obtained in Exercises 7.2.3 and 7.2.4. What do you con­ clude about the invariance properties of these estimation procedures? (Hint: Consider Theorem 6.2.1.) 7.2.28 In Example 7.2.5, establish that the posterior variance of ample 7.2.6. (Hint: Problem 4.6.16.) 7.2.29 In a prediction problem, as described in Section 7.2.4, derive the form of the prior predictive density for t when the joint density of (assume s and 7.2.30 In Example 7.2.16, derive the posterior predictive probability function of Xn 1 Xn 2 xn when X1 having observed x1 pendently and identically distributed (i.i.d.) Bernoulli 7.2.31 In Example 7.2.15, derive the posterior predictive distribution for Xn 1 having 2 observed x1 0 . (Hint: We can write Xn 1 N 0 1 is independent of the posterior distribution of ) Xn Xn 1 Xn 2 are inde­ Xn Xn 1 are i.i.d. N is as stated in Ex­ are real­valued) 0 Z , where Z xn when X1 is q t s f whenever s t 0 ) s 1 . i 406 Section 7.2: Inferences Based on the Posterior 7.2.32 For the context of Example 7.2.1, prove that the posterior predictive distribution of an additional future observation Xn 1 from the population distribution has the same distribution as . Xn t 2 0 N 0 1 independent of X1 (Hint: Note that we can write Xn 1 U , where where T and then reason as in Example 7.2.1.) U 7.2.33 In Example 7.2.1, determine the form of an exact ­prediction interval for an additional future observation Xn 1 from the population distribution, based on the HPD concept. (Hint: Use Problem 7.2.32.) 7.2.34 Suppose that space prior predictive for the data s is given by m s posterior probability measure is given by (7.2.6). R2. 7.2.35 (MV) Suppose that Assume that h satisfies the necessary conditions and establish (7.2.1). (Hint: Theorem 2.9.2.) 
2 are discrete probability distributions on the parameter 2, then the p 1 and the . Prove that when the prior p p m2 s R2 and h 1 is a mixture pm1 s 1 and 1 1 2 1 2 CHALLENGES 7.2.36 Another way to assess the null hypothesis H0 : P­value 0 is to compute the s 0 s 0 s (7.2.12) is the marginal prior density or probability function of We call (7.2.12) the where observed relative surprise ofH0. 0 s 0 is the true value of When (7.2.12) is small, 0 is a measure of how the data s have changed our a 0 is a surprising , as this indicates that the data have increased our belief more for other The quantity priori belief that value for values of (a) Prove that (7.2.12) is invariant under 1–1 continuously differentiable transforma­ tions of (b) Show that a value We call such a value a least relative suprise estimate of (c) Indicate how to use (7.2.12) to form a surprise region, for (d) Suppose that both continuous and positive at Show that B FA takes its values in an 0 Generalize this to the case where open subset of Rk This shows that we can think of the observed relative surprise as a way of calibrating Bayes factors. is real­valued with prior density 0 Let A 0 that makes (7.2.12) smallest, maximizes ­credible region, known as a and posterior density ­relative 0 as 0 s 0 s s 0 0 0 Chapter 7: Bayesian Inference 407 7.3 Bayesian Computations In virtually all the examples in this chapter so far, we have been able to work out the exact form of the posterior distributions and carry out a number of important com­ putations using these. It often occurs, however, that we cannot derive any convenient form for the posterior distribution. Furthermore, even when we can derive the posterior distribution, there computations might arise that cannot be carried out exactly — e.g., recall the discussion in Example 7.2.1 that led to the integral (7.2.2). These calculations involve evaluating complicated sums or integrals. Therefore, when we apply Bayesian inference in a practical example, we need to have available methods for approximating these quantities. The subject of approximating integrals is an extensive topic that we cannot deal with fully here.1 We will, however, introduce several approximation methods that arise very naturally in Bayesian inference problems. 7.3.1 Asymptotic Normality of the Posterior R1 is approx­ In many circumstances, it turns out that the posterior distribution of imately normally distributed. We can then use this to compute approximate credible , carry out hypothesis assessment, etc. One such re­ regions for the true value of xn is a sult says that, under conditions that we will not describe here, when x1 sample from f , then x1 x1 xn xn z x1 xn z as n where x1 xn is the posterior mode, and 2 x1 xn 2 ln L x1 xn 2 1 . x1 Note that this result is similar to Theorem 6.5.3 for the MLE. Actually, we can replace 2 x1 xn by the observed information is k­dimensional, there is a similar xn by the MLE and replace (see Section 6.5), and the result still holds. When but more complicated result. 7.3.2 Sampling from th
e Posterior Typically, there are many things we want to compute as part of implementing a Bayesian analysis. Many of these can be written as expectations with respect to the posterior dis­ tribution of For example, we might want to compute the posterior probability content of a subset A namely, 1See, for example, Approximating Integrals via Monte Carlo and Deterministic Methods, by M. Evans and T. Swartz (Oxford University Press, Oxford, 2000). A s E IA s . 408 Section 7.3: Bayesian Computations More generally, we want to be able to compute the posterior expectation of some arbi­ trary function , namely E s . (7.3.1) It would certainly be convenient if we could compute all these quantities exactly, but quite often we cannot. In fact, it is not really necessary that we evaluate (7.3.1) exactly. This is because we naturally expect any inference we make about the true value of the parameter to be subject (different data sets of the same size lead to different inferences) to sampling error. It is not necessary to carry out our computations to a much higher degree of precision than what sampling error contributes. For example, if the sampling error only allows us to know the value of a parameter to within only 0 1 units, then there is no point in computing an estimate to many more digits of accuracy. In light of this, many of the computational problems associated with implementing Bayesian inference are effectively solved if we can sample from the posterior for For when this is possible, we simply generate an i.i.d. sequence 1 the posterior distribution of and estimate (7.3.1) by 2 N from 1 N N i 1 i . We know then, from the strong law of large numbers (see Theorem 4.3.2), that E x as N a s Of course, for any given N the value of only approximates (7.3.1); we would like to know that we have chosen N large enough so that the approximation is appropriately accurate. When E 2 then the central limit theorem (see Theorem 4.4.3) tells us that s E s N D N 0 1 as N but we can estimate it by where 2 Var s . In general, we do not know the value of 2 , s2 1 N 1 N i 1 2 i is a quantitative variable, and by s2 when As shown in Section 4.4.2, in either case, s2 is a consistent estimate of Corollary 4.4.4, we have that when 1 I A for A 2 Then, by E s N s D N 0 1 as N . From this result we know that s 3 N 409 N Chapter 7: Bayesian Inference s so we can look at 3s is an approximate 100% confidence interval for E to determine whether or not N is large enough for the accuracy required. One caution concerning this approach to assessing error is that 3s N is itself so this could be misleading. A common subject to error, as s N for successively larger values recommendation then is to monitor the value of 3s of N and stop the sampling only when it is clear that the value of 3s N is small enough for the accuracy desired and appears to be declining appropriately. Even this approach, however, will not give a guaranteed bound on the accuracy of the computa­ tions, so it is necessary to be cautious. is an estimate of It is also important to remember that application of these results requires that For a bounded , this is always true, as any bounded random variable always has however, this must be checked — sometimes 2 a finite variance. For an unbounded this is very difficult to do. We consider an example where it is possible to exactly sample from the posterior. EXAMPLE 7.3.1 Location­Scale Normal Suppose that x1 and distribution for 2 developed there is R1 0 are unknown, and we use the prior given in Example 7.1.4. 
The posterior 2 distribution where is a sample from an N xn 2 x1 xn N x n 1 1 2 2 0 and 1 2 x1 xn Gamma 0 n 2 x , (7.3.2) (7.3.3) where x is given by (7.1.7) and x is given by (7.1.8). Most statistical packages have built­in generators for gamma distributions and for the normal distribution. Accordingly, it is very easy to generate a sample 2 N from this posterior. We simply generate a value for 1 N gamma distribution; then, given this value, we generate the value of fied normal distribution. 1 2 i from the specified i from the speci­ 2 1 of variation Suppose, then, that we want to derive the posterior distribution of the coefficient . To do this we generate N values from the joint posterior of for each of these. We then know 2 , using (7.3.2) and (7.3.3), and compute immediately that 1 N is a sample from the posterior distribution of As a specific numerical example, suppose that we observed the following sample x1 x15 11 6714 8 1631 1 9020 1 8957 1 8236 7 4899 2 1228 4 0362 4 9233 2 1286 6 8513 8 3223 1 0751 7 6461 7 9486 5 2 and s 3 3 Suppose further that the prior is specified by 0 4 Here, x 2 2 and 0 0 From (7.1.7), we have 1 2 0 x 15 1 2 1 4 2 15 5 2 5 161, 410 Section 7.3: Bayesian Computations and from (7.1.8), x 1 15 2 77 578 5 2 2 42 2 2 14 2 3 3 2 1 2 15 1 2 1 4 2 2 15 5 2 Therefore, we generate 1 2 x1 xn Gamma 9 5 77 578 followed by 2 x1 xn N 5 161 15 5 1 2 In Figure 7.3.1, we have plotted a sample of N See Appendix B for some code that can be used to generate from this joint distribution. 2 from this joint posterior. In Figure 7.3.2, we have plotted a density histogram of the 200 values of that arise from this sample. 200 values of 25 15 mu 6 7 Figure 7.3.1: A sample of 200 values of when n 5 2 s 15 x 3 3, 0 4 2 from the joint posterior in Example 7.3.1 2 0 2 and 0 1. 2 0 Chapter 7: Bayesian Inference 411 4 3 2 1 0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Figure 7.3.2: A density histogram of 200 values from the posterior distribution of Example 7.3.1. in 104 values. We can see from this that at N A sample of 200 is not very large, so we next generated a sample of N 103 values from the posterior distribution of A density histogram of these values is pro­ vided in Figure 7.3.3. In Figure 7.3.4, we have provided a density histogram based on 103, the basic shape of a sample of N the distribution has been obtained, although the right tail is not being very accurately 104, but note there are still some estimated. Things look better in the right tail for N extreme values quite disconnected from the main mass of values. As is characteristic of most distributions, we will need very large values of N to accurately estimate the tails. In any case, we have learned that this distribution is skewed to the right with a long right tail. 4 3 2 1 0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Figure 7.3.3: A density histogram of 1000 values from the posterior distribution of Example 7.3.1. in 412 Section 7.3: Bayesian Computations 4 3 2 1 0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Figure 7.3.4: A density histogram of N Example 7.3.1. 104 values from the posterior distribution of in Suppose we want to estimate 0 5 x1 xn E I 0 5 x1 xn . I Now 0 5 is bounded so its posterior variance exists. In the following table, we have recorded the estimates for each N together with the standard error based on each of the generated samples. We have included some code for computing these 104 estimates and their standard errors in Appendix B. 
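The Appendix B code itself is not reproduced here; the following Python sketch (numpy and the seed are assumptions on my part) carries out the same computation, generating from the joint posterior via the hyperparameters derived above, namely 1/σ² | x ~ Gamma(9.5, rate 77.578) and μ | σ², x ~ N(5.161, σ²/15.5), and then estimating the posterior probability that ψ = σ/μ is at least 0.5 together with its Monte Carlo standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Posterior hyperparameters from Example 7.3.1:
#   1/sigma^2 | x ~ Gamma(shape 9.5, rate 77.578)   (numpy uses scale = 1/rate)
#   mu | sigma^2, x ~ N(5.161, sigma^2 / 15.5)
shape, rate = 9.5, 77.578
mu_x, n_plus_inv_tau = 5.161, 15.5

precision = rng.gamma(shape, 1.0 / rate, size=N)        # draws of 1/sigma^2
sigma = 1.0 / np.sqrt(precision)                        # draws of sigma
mu = rng.normal(mu_x, sigma / np.sqrt(n_plus_inv_tau))  # draws of mu given sigma^2

psi = sigma / mu                                        # coefficient of variation

# Monte Carlo estimate of Pi(psi >= 0.5 | x) and its standard error.
indicator = (psi >= 0.5).astype(float)
estimate = indicator.mean()
std_error = indicator.std(ddof=1) / np.sqrt(N)
print(f"estimate = {estimate:.4f}, standard error = {std_error:.4f}")
```

With N = 10^4 draws this should agree, up to simulation error, with the estimate 0.289 and standard error 0.0045 reported in the table above.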
Based on the results from N it would appear that this posterior probability is in the interval 0 289 [0 2755 0 3025]. 3 0 0045 N 200 103 104 Estimate of 0 5 x1 xn 0 265 0 271 0 289 Standard Error 0 0312 0 0141 0 0045 This example also demonstrates an important point. It would be very easy for us to generated from its posterior distribution calculate the sample mean of the values of and then consider this as an estimate of the posterior mean of But Problem 7.2.24 suggests (see Problem 7.3.15) that this mean will not exist. Accordingly, a Monte Carlo estimate of this quantity does not make any sense! So we must always check first that any expectation we want to estimate exists, before we proceed with some estimation procedure. When we cannot sample directly from the posterior, then the methods of the fol­ lowing section are needed. Chapter 7: Bayesian Inference 413 7.3.3 Sampling from the Posterior Via Gibbs Sampling (Advanced) Sampling from the posterior, as described in Section 7.3.2, is very effective, when it can be implemented. Unfortunately, it is often difficult or even impossible to do this directly, as we did in Example 7.3.1. There are, however, a number of algorithms that allow us to approximately sample from the posterior. One of these, known as Gibbs sampling, is applicable in many statistical contexts. To describe this algorithm, suppose we want to generate samples from the joint Rk Further suppose that we can generate from each of distribution of Y1 the full conditional distributions Yi Y i Yk y i , where Y i Y1 Yi 1 Yi 1 Yk , namely, we can generate from the conditional distribution of Yi given the values of all the other coordinates. The Gibbs sampler then proceeds iteratively as follows. 1. Specify an initial value y1 0 yk 0 for Y1 Yk . 2. For N y1 N 0 generate Yi N from its conditional distribution given yi 1 N yi 1 N 1 yk N 1 for each i 1 k. For example, if k 3 we first specify y1 0 y2 0 y3 0 . Then we generate Y1 1 Y2 0 Y2 1 Y1 1 Y3 1 Y1 1 y2 0 Y3 0 y1 1 Y3 0 y1 1 Y2 1 to obtain Y1 1 Y2 1 Y3 1 . Next we generate Y1 2 Y2 1 Y2 2 Y1 2 Y3 2 Y1 2 y2 1 Y3 1 y1 2 Y3 1 y1 2 Y2 2 y3 0 y3 0 y2 1 y3 1 y3 1 y2 2 to obtain Y1 2 Y2 2 Y3 2 as it is never used. etc. Note that we actually did not need to specify Y1 0 converges in distribution to the joint distribution of Y1 It can then be shown (see Section 11.3) that, in fairly general circumstances, Y1 N Yk N So for large N , we have that the distribution of Y1 N Yk N is approximately from which we want to sample. So the same as the joint distribution of Y1 Gibbs sampling provides an approximate method for sampling from a distribution of interest. Yk as N Yk Furthermore, and this is the result that is most relevant for simulations, it can be shown that, under conditions, 1 N N i 1 Y1 i Yk i a s E Y1 Yk . 414 Section 7.3: Bayesian Computations Estimation of the variance of sample variance, because now the is different than in the i.i.d. case, where we used the Y1 i Yk i terms are not independent. There
are several approaches to estimating the variance of but perhaps the most commonly used is the technique of batching. For this we divide the sequence Y1 0 Yk 0 Y1 N Yk N into N m nonoverlapping sequential batches of size m (assuming here that N is divisi­ ble by m), calculate the mean in each batch obtaining 1 N m, and then estimate the variance of by s2 b N m , (7.3.4) where s2 b is the sample variance obtained from the batch means, i.e., s2 . It can be shown that Y1 i Yk i m are approximately independent for m large enough. Accordingly, we choose the batch size m large enough so that the batch means are approximately independent, but not so large as to leave very few degrees of freedom for the estimation of the variance. Under ideal conditions, and Y1 i m Yk i 1 N m is an i.i.d. sequence with sample mean 1 N m N m i 1 i , and, as usual, we estimate the variance of by (7.3.4). Sometimes even Gibbs sampling cannot be directly implemented because we can­ not obtain algorithms to generate from all the full conditionals. There are a variety of techniques for dealing with this, but in many statistical applications the technique of latent variables often works. For this, we search for some random variables, say Vl and such that we can apply V1 Gibbs sampling to the joint distribution of V1 Vl We illustrate Gibbs sampling via latent variables in the following example. Vl where each Yi is a function of V1 EXAMPLE 7.3.2 Location­Scale Student Suppose now that x1 Z , where Z t xn is a sample from a distribution that is of the form X (see Section 4.6.2 and Problem 4.6.14). If is 2 1 2 is the standard deviation of the distribution (see Problem 1 corresponds to corresponds to normal variation, while 2, then at some specified value to reect the fact that we are interested in modeling situations in which the variable under consideration has a distribution with longer tails than the normal distribution. Typically, this manifests itself in a histogram of the data with a roughly symmetric shape but exhibiting a few extreme values out in the tails, so a t distribution might be appropriate. the mean and 4.6.16). Note that Cauchy variation. We will fix Chapter 7: Bayesian Inference 415 Suppose we place the prior on 2 , given by Gamma 0 0 . The likelihood function is given by 2 N 0 2 0 2 and xi 2 1 2 , (7.3.5) hence the posterior density of 1 2 is proportional to 1 2 n 2 n 1 i 1 1 xi 2 1 2 1 2 exp exp 0 2 This distribution is not immediately recognizable, and it is not at all clear how to gen­ erate from it. It is natural, then, to see if we can implement Gibbs sampling. To do this directly, 2 and we need an algorithm to generate from the posterior of 2 given an algorithm to generate from the posterior of Unfortunately, neither of these conditional distributions is amenable to the techniques discussed in Section 2.10, so we cannot implement Gibbs sampling directly. given the value of Recall, however, that when V independent of Y N 2 2 Gamma then (Problem 4.6.14) 2 1 2 (see Problem 4.6.13) Z Y V t . Therefore, writing X Z Y V Y V we have that X V N 2 . 2 and suppose Xi Vi We now introduce the n latent or hidden variables V1 i Vn which are i.i.d. i . The Vi are considered latent be­ cause they are not really part of the problem formulation but have been added here for associated with convenience (as we shall see). Then, noting that there is a factor Xn Vn is the density of Xi Vi proportional to 1 2 i the joint density of the values X1 V1 N 2 i 1 2 n 2 n exp i 1 i 2 2 xi 2 i 2 1 2 exp i 2 . 
From the above argument, the marginal joint density of X1 out the Xn (after integrating i ’s) is proportional to (7.3.5), namely, a sample of n from the distribution 416 Section 7.3: Bayesian Computations specified by X we have that the joint density of Z , where Z t . With the same prior structure as before, X1 V1 Xn Vn 2 1 is proportional to 1 2 n 2 n i 1 1 2 exp i 2 2 xi 2 i 2 1 2 exp i 2 1 2 exp exp 0 2 . (7.3.6) In (7.3.6), treat x1 the conditional distributions of each of the variables V1 other variables. From (7.3.6), we have that the full conditional density of tional to xn as constants (we observed these values) and consider 2 given all the is propor­ Vn 1 exp 1 2 2 which is proportional to n i 1 i xi 2 1 2 0 2 0 , exp xi 0 2 0 . From this, we immediately deduce that x1 xn xi r 1 2 , n 0 2 0 where r 1 n n i i 1 From (7.3.6), we have that the conditional density of 1 1 1 2 0 2 is proportional to n 2 0 1 2 exp 1 2 n i 1 i xi , and we immediately deduce that 1 2 x1 Gamma xn xi 2 1 2 0 2 0 2 0 . Chapter 7: Bayesian Inference 417 Finally, the conditional density of Vi is proportional to 2 1 2 exp i 2 xi 2 2 1 2 , i and it is immediate that Vi x1 xn 1 i 1 i 1 n 2 Gamma 1 2 1 2 2 2 xi 2 1 . We can now easily generate from all these distributions and implement a Gibbs Vn we simply sampling algorithm. As we are not interested in the values of V1 discard these as we iterate. Let us now consider a specific computation using the same data and prior as in Example 7.3.1. The analysis of Example 7.3.1 assumed that the data were coming from a normal distribution, but now we are going to assume that the data are a sample from a 3 We again consider approximating the posterior distribution of the coefficient of variation t 3 distribution, i.e., We carry out the Gibbs sampling iteration in the order 1 n implies that we need starting values only for do not depend on the other starting value of to be s to obtain the sequence 1 2 N . j ) We take the starting value of 3 3 For each generated value of and 2 (the full conditionals of the 1 2 This i 5 2 and the to be x 2 , we calculate The values 1 2 N are not i.i.d. from the posterior of . The best we can say is that D m x1 xn x1 xn , where . Also, values suf­ as m xn . Thus, x1 ficiently far apart in the sequence, will be like i.i.d. values from one approach is to determine an appropriate value m and then extract m 3m 2m as an approximate i.i.d. sequence from the posterior. Often it is difficult to determine an appropriate value for m however. is the posterior density of In any case, it is known that, under fairly weak conditions x1 xn So we can use the whole sequence N and record a density 1 just as we did in Example 7.3.1. The value of the density histogram as N histogram for between two cut points will converge almost surely to the correct value as N However, we will have to take N larger when using the Gibbs sampling algorithm than with i.i.d. sampling, to achieve the same accuracy. For many examples, the effect of the deviation of the sequence from being i.i.d. is very small, so N will not have to be much larger. We always need to be cautious, however, and the general recommendation is to 2 418 Section 7.3: Bayesian Computations compute estimates for successively higher values of N only stopping when the results seem to have stabilized. In Figure 7.3.5, we have plotted the density histogram of the values that resulted from 104 iterations of the Gibbs sampler. 
In this case, plotting the density histogram of 104 resulted in only minor deviations from this plot. Note that this density looks very similar to that plotted in Example 7.3.1, but it is not quite so peaked and it has a shorter right tail. based upon N 104 and N 8 5 4 3 2 1 0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 Figure 7.3.5: A density histogram of N sampling in Example 7.3.2. 104 values of generated sequentially via Gibbs 0 5 x1 We can also estimate just as we did in Example 7.3.1, by recording the proportion of values in the sequence that are smaller than 0.5, i.e., 0 5 . In this case, we obtained the estimate 0 5441, which is quite different from the value obtained in Example 7.3.1. So using a t 3 distribution to describe the variation in the response has made a big difference in the results. , where A I A xn : Of course, we must also quantify how accurate we believe our estimate is. Using 10 we obtained the standard error of the estimate 0 5441 to be a batch size of m 20, the standard error of the mean 0 00639. When we took the batch size to be m is 0 00659; with a batch size of m 40 the standard error of the mean is 0 00668. So we feel quite confident that we are assessing the error in the estimate appropriately. is asymptotically normal so that in this case Again, under conditions, we have that we can assert that the interval 0 5441 3 0 0066 [0 5243 0 5639] contains the true value of xn with virtual certainty. 0 5 x1 See Appendix B for some code that was used to implement the Gibbs sampling algorithm described here. It is fair to say that the introduction of Gibbs sampling has resulted in a revolution in statistical applications due to the wide variety of previously intractable problems that it successfully handles. There are a number of modifications and closely related Chapter 7: Bayesian Inference 419 algorithms. We refer the interested reader to Chapter 11, where the general theory of what is called Markov chain Monte Carlo (MCMC) is discussed. Summary of Section 7.3 Implementation of Bayesian inference often requires the evaluation of compli­ cated integrals or sums. If, however, we can sample from the posterior of the parameter, this will often lead to sufficiently accurate approximations to these integrals or sums via Monte Carlo. It is often difficult to sample exactly from a posterior distribution of interest. In such circumstances, Gibbs sampling can prove to be an effective method for generating an approximate sample from this distribution. EXERCISES 7.3.1 Suppose we have the following sample from an N unknown. 2 distribution, where is If the prior on is Uniform 2 6 , determine an approximate 0.95­credible interval for based on the large sample results described in Section 7.3.1. ] and Uniform[0 1 prior, discussed in Example 7.2.2. 2 0 prior, discussed in Example 7.2.3. 7.3.2 Determine the form of the approximate 0.95­credible interval of Section 7.3.1, for the Bernoulli model with a Beta 7.3.3 Determine the form of the approximate 0.95­credible inte
rvals of Section 7.3.1, for the location­normal model with an N 0 7.3.4 Suppose that X Exponential 1 . Derive a crude Monte Carlo algorithm, based on generating from a gamma distribution, to generate a value from the conditional distribution x Generalize this to a sample of n from X the Uniform[0 1 ] distribution. When will this algortithm be inefficient in the sense that we need a lot of computation to generate a single value? Uniform[0 1]. Derive a crude Monte Carlo 7.3.5 Suppose that X algorithm, based on generating from a normal distribution, to generate from the con­ ditional distribution 1 distribution. When will this algortithm be inefficient in the sense that we need a lot of computation to generate a single value? 7.3.6 Suppose that X Uniform[0 1]. Derive a 0 5N crude Monte Carlo algorithm, based on generating from a mixure of normal distrib­ utions, to generate from the conditional distribution x Generalize this to a sample of n x Generalize this to a sample of n from the N 2 from the 0 5N 2 distribution. 2 and 1 and 0 5N 0 5N N X X 1 1 420 Section 7.3: Bayesian Computations COMPUTER EXERCISES 5 , where If a sample of n distribution, where 0 1 and assess the error in your estimate. z0 25, i.e., the population first quartile, using N 7.3.7 In the context of Example 7.3.1, construct a density histogram of the posterior 103 distribution of 104 and compare the results. Estimate the posterior mean of this distribution and N and assess the error in your approximation. (Hint: Modify the program in Appendix B.) 7.3.8 Suppose that a manufacturer takes a random sample of manufactured items and tests each item as to whether it is defective or not. The responses are felt to be i.i.d. is the probability that the item is defective. The manufacturer Bernoulli 100 items is taken and 5 places a Beta 0 5 10 distribution on defectives are observed, then, using a Monte Carlo sample with N 1000 estimate the posterior probability that 7.3.9 Suppose that lifelengths (in years) of a manufactured item are known to follow an Exponential Gamma 10 2 . Suppose that the lifelengths 4.3, 6.2, 8.4, 3.1, 6.0, 5.5, and 7.8 were observed. (a) Using a Monte Carlo sample of size N that (b) Using a Monte Carlo sample of size N function of 1 (c) Using a Monte Carlo sample of size N tion of 1 10 from a Pareto 2 distribution. Now pretend you 7.3.10 Generate a sample of n 0 is only know that you have a sample from a Pareto Using a Monte Carlo sample of size unknown, and place a Gamma 2 1 prior on N 1 based on the observed sample, and assess the accuracy of your approximation by quoting an interval that contains the exact value with virtual certainty. (Hint: Problem 2.10.15.) 104, approximate the posterior expectation of 1 ( x equals the greatest integer less than or equal to x). 103, approximate the posterior probability 103, approximate the posterior probability 103, approximate the posterior expecta­ and assess the error in your approximation. [3 6] and assess the error of your estimate. 0 is unknown and for the prior we take distribution, where PROBLEMS Xn is a sample from the model 7.3.11 Suppose X1 ularity conditions of Section 6.5 apply. Assume that the prior function of sample from f X1 (the latter assumption holds under very general conditions). and that the posterior mode when X1 and all the reg­ is a continuous Xn is a Xn a s : f (a) Using the fact that, if Yn prove that a s Y and g is a continuous function, then g Yn a s g Y , 2 ln L x1 xn 2 1 n a s I Xn is a sample from f . 
when X1 (b) Explain to what extent the large sample approximate methods of Section 7.3.1 de­ pend on the prior if the assumptions just described apply. Chapter 7: Bayesian Inference 421 1 1 2 2 y x 1 . 7.3.12 In Exercise 7.3.10, explain why the interval you constructed to contain the pos­ terior mean of 1 1 with virtual certainty may or may not contain the true value of 1 7.3.13 Suppose that X Y is distributed Bivariate Normal . Deter­ mine a Gibbs sampling algorithm to generate from this distribution. Assume that you have an algorithm for generating from univariate normal distributions. Is this the best way to sample from this distribution? (Hint: Problem 2.8.27.) 8x y for 7.3.14 Suppose that the joint density of X Y is given by f X Y x y 1 Fully describe a Gibbs sampling algorithm for this distribution. In 0 particular, indicate how you would generate all random variables. Can you design an algorithm to generate exactly from this distribution? 7.3.15 In Example 7.3.1, prove that the posterior mean of does not exist. (Hint: Use Problem 7.2.24 and the theorem of total expectation to split the integral into two parts, where one part has value 7.3.16 (Importance sampling based on the prior) Suppose we have an algorithm to generate from the prior. (a) Indicate how you could use this to approximate a posterior expectation using im­ portance sampling (see Problem 4.5.21). (b) What do you suppose is the major weakness is of this approach? and the other part has value ) COMPUTER PROBLEMS 7.3.17 In the context of Example 7.3.2, construct a density histogram of the posterior 104. Esti­ distribution of mate the posterior mean of this distribution and assess the error in your approximation. z0 25 i.e., the population first quartile, using N 7.4 Choosing Priors The issue of selecting a prior for a problem is an important one. Of course, the idea is that we choose a prior to reect our a priori beliefs about the true value of Because this will typically vary from statistician to statistician, this is often criticized as being too subjective for scientific studies. It should be remembered, however, that the sam­ pling model is also a subjective choice by the statistician. These choices are guided by the statistician’s judgment. What then justifies one choice of a statistical model or prior over another? f : In effect, when statisticians choose a prior and a model, they are prescribing a joint s . The only way to assess whether or not an appropriate choice distribution for was made is to check whether the observed s is reasonable given this choice If s is surprising, when compared to the distribution prescribed by the model and prior, then we have evidence against the statistician’s choices Methods designed to assess this are called model­checking procedures, and are discussed in Chapter 9. At this point, however, we should recognize the subjectivity that enters into statistical analyses, but take some comfort that we have a methodology for checking whether or not the choices made by the statistician make sense. 422 Section 7.4: Choosing Priors Often a statistician will consider a particular family of priors for a . In such a context the problem and try to select a suitable prior parameter is called a hyperparameter. Note that this family could be the set of all possible priors, so there is no restriction in this formulation. We now discuss some commonly used families and methods for selecting 0 : : : 0 7.4.1 Conjugate Priors Depending on the sampling model, the family may be conjugate. 
Definition 7.4.1 The family of priors : for the parameter of the model f : : is conjugate, if for all data s . S and all the posterior s for Conjugacy is usually a great convenience as we start with some choice the prior, and then we find the relevant for the posterior, often without much s computation. While conjugacy can be criticized as a mere mathematical convenience, it has to be acknowledged that many conjugate families offer sufficient variety to allow for the expression of a wide spectrum of prior beliefs. 0 EXAMPLE 7.4.1 Conjugate Families In Example 7.1.1, we have effectively shown that the family of all Beta distributions is conjugate for sampling from the Bernoulli model. In Example 7.1.2, it is shown that the family of normal priors is conjugate for sampling from the location normal model. In Example 7.1.3, it is shown that the family of Dirichlet distributions is conjugate for Multinomial models. In Example 7.1.4, it is shown that the family of priors specified there is conjugate for sampling from the location­scale normal model. Of course, using a conjugate family does not tell us how to select 0 Perhaps the most justifiable approach is to use prior elicitation. 7.4.2 Elicitation Elicitation involves explicitly using the statistician’s beliefs about the true value of that reects these beliefs. Typically, these involve the to select a prior in statistician asking questions of himself, or of experts in the application area, in such a way that the answers specify a prior from the family. : EXAMPLE 7.4.2 Location Normal Suppose we are sampling from an N known, and we restrict attention to the family N 0 priors for Thus, specifying two independent characteristics specifies a prior. 2 0 2 0 of 0 2 0 and there are two degrees of freedom in this family. unknown and 2 0 2 0 distribution with So here, R1 0 0 : Accordingly, we could ask an expert to specify two quantiles of his or her prior (see Exercise 7.4.10), as this specifies a prior in the family. For distribution for example, we might ask an expert to specify a number was as likely to be greater than as less than 0 so that 0 such that the true value of 0 is the median of the prior. Chapter 7: Bayesian Inference 423 We might also ask the expert to specify a value 0 such that there is 99% certainty that the true value of is less than 0 This of course is the 0.99­quantile of their prior. Alternatively, we could ask the expert to specify the center 0 of their prior dis­ 3 0 contains the true value of with 0 is the prior mean and 0 is the prior standard 0 tribution and for a constant virtual certainty. Clearly, in this case, deviation. 0 such that Elicitation is an important part of any Bayesian statistical analysis. If the experts used are truly knowledgeable about the application, then it seems intuitively clear that we will improve a statistical analysis by including such prior information. The process of elicitation can be somewhat involved, h
owever, for complicated problems. Furthermore, there are various considerations that need to be taken into ac­ count involving, prejudices and aws in the way we reason about probability outside of a mathematical formulation. See Garthwaite, Kadane and O’Hagan (2005), “Statisti­ cal methods for eliciting probability distributions”, Journal of the American Statistical Association (Vol. 100, No. 470, pp. 680–700), for a deeper discussion of these issues. 7.4.3 Empirical Bayes When the choice of 0 is based on the data s these methods are referred to as empirical Bayesian methods. Logically, such methods would seem to violate a basic principle of inference, namely, the principle of conditional probability. For when we compute using a prior based on s in general this is no longer the posterior distribution of the conditional distribution of given the data. While this is certainly an important concern, in many problems the application of empirical Bayes leads to inferences with satisfying properties. for the data s and then base the choice of For example, one empirical Bayesian method is to compute the prior predictive on these values. Note that the m s (as it is the density or probability prior predictive is like a likelihood function for function for the observed s), and so the methods of Chapter 6 apply for inference about s that maximizes m s . The required is typically multidimensional. We illustrate with a . For example, we could select the value of computations can be extensive, as simple example. EXAMPLE 7.4.3 Bernoulli Suppose we have a sample x1 plate putting a Beta 1/2 and the spread in this distribution is controlled by and the prior variance is 1 4 2 that choosing xn from a Bernoulli 0 as large leads to a very precise prior. Then we have that distribution and we contem­ 0 So the prior is symmetric about Since the prior mean is 1/2 we see 1 1 2 2] for some prior on 2 [ 2 m x1 xn 1 nx 0 nx 424 Section 7.4: Choosing Priors It is difficult to find the value of and plot m x1 can also be used. that maximizes this, but for real data we can tabulate xn to obtain this value. More advanced computational methods For example, suppose that n observed. In Figure 7.4.1 we have plotted the graph of m x1 We can see from this that the maximum occurs near 20 and we obtained nx 5 as the number of 1’s xn as a function of 2 More precisely, from a 2 3 is close to the maximum. Accordingly, we use tabulation we determine that the Beta 5 2 3 15 2 3 Beta 7 3 17 3 distribution for inferences about 0.000004 0.000003 0.000002 0.000001 .000000 0 5 10 lambda 15 20 Figure 7.4.1: Plot of m x1 xn in Example 7.4.3. There are many issues concerning empirical Bayes methods. This represents an active area of statistical research. 7.4.4 Hierarchical Bayes : in An alternative to choosing a prior for consists of putting yet another prior distribution , called a hyperprior, on . This approach is commonly called hi­ erarchical Bayes. The prior for d , so we basically becomes have in effect integrated out the hyperparameter. The problem then is how to choose . In essence, we have simply replaced the problem of choosing the prior the prior It is common, in applications using hierarchi­ on with choosing the hyperprior on cal Bayes, that default choices are made for although we could also make use of elicitation techniques We will discuss this further in Section 7.4.5. 
So in this situation, the posterior density of is equal to , Chapter 7: Bayesian Inference 425 f f s where m s m s s Note that the posterior density of posterior density of given Therefore, we can use d (assuming m s d d is continuous with prior density given by ). m s is the d and, for fixed m s while f s is m s Typically, however, we are not interested in s for inferences about the model parameter (e.g., m s for in­ estimation, credible regions, and hypothesis assessment) and m s and in fact it doesn’t ferences about really make sense to talk about the “true” value of corresponds to the distribution that actually produced the observed data s at least when the model as being generated from This also implies is correct, while we are not thinking of another distinction between is part of the likelihood function based on and how the data was generated, while The true value of For is not. EXAMPLE 7.4.4 Location­Scale Normal Suppose the situation is as is discussed in Example 7.1.4. In that case, both and 2 are part of the likelihood function and so are model parameters, while 0 and 0 are not, and so they are hyperparameters. To complete this specification as a 0 , a task we leave to a hierarchical model, we need to specify a prior higher­level course. 2 0 0 0 0 2 0 7.4.5 Improper Priors and Noninformativity One approach to choosing a prior, and to stop the chain of priors in a hierarchical Bayes approach, is to prescribe a noninformative prior based on ignorance. Such a prior is also referred to as a default prior or reference prior. The motivation is to specify a prior that puts as little information into the analysis as possible and in some sense characterizes ignorance. Surprisingly, in many contexts, statisticians have been led to choose noninformative priors that are improper, i.e., so they do not correspond to probability distributions. d The idea here is to give a rule such that, if a statistician has no prior beliefs about the value of a parameter or hyperparameter, then a prior is prescribed that reects this. In the hierarchical Bayes approach, one continues up the chain until the statistician declares ignorance, and a default prior completes the specification. Unfortunately, just how ignorance is to be expressed turns out to be a rather subtle issue. In many cases, the default priors turn out to be improper, i.e., the integral or sum of the prior over the whole parameter space equals so the prior is not a probability distribution The interpretation of an improper prior is not at all clear, and their use is somewhat controversial. Of course, s no longer has a joint probability distribution when we are using improper priors, and we cannot use the principle of conditional probability to justify basing our inferences on the posterior. e.g., d There have been numerous difficulties associated with the use of improper priors, which is perhaps not surprising. In particular, it is important to note that there is no to exist as a proper probability distribution reason in general for the posterior of when is improper. If an improper prior is being used, then we should always check to make sure the posterior is proper, as inferences will not make sense if we are using an improper posterior. 426 Section 7.4: Choosing Priors c c When using an improper prior for any c is proper; then the posteriors are identical (see Exercise 7.4.6). The following example illustrates the use of an improper prior. 
0 for the posterior under , it is completely equivalent to instead use the prior is proper if and only if the posterior under EXAMPLE 7.4.5 Location Normal Model with an Improper Prior Suppose that x1 R1 is unknown and 2 to the choice xn is a sample from an N 0 is known Many arguments for default priors in this context lead 1, which is clearly improper. 2 0 distribution, where Proceeding as in Example 7.1.2, namely, pretending that this is a proper proba­ bility density, we get that the posterior density of is proportional to exp n 2 2 0 x 2 . This immediately implies that the posterior distribution of that this is the same as the limiting posterior obtained in Example 7.1.2 as although the point of view is quite different. is N x 0 2 0 n . Note One commonly used method of selecting a default prior is to use, when it is avail­ 1 2 in the multidimen­ able, the prior given by I 1 2 sional case), where I is the Fisher information for the statistical model as defined in Section 6.5. This is referred to as Jeffreys’ prior. Note that Jeffreys’ prior is dependent on the model. R1 (and by det I when Jeffreys’ prior has an important invariance property. From Challenge 6.5.19, we have that, under some regularity conditions, if we make a 1–1 transformation of the real­valued parameter then the Fisher information of is given by via I 1 2 1 Therefore, the default Jeffreys’ prior for is I 1 2 1 1 (7.4.1) Now we see that, if we had started with the default prior I 1 2 change of variable to 2.6.3. A similar result can be obtained when and made the then this prior transforms to (7.4.1) by Theorems 2.6.2 and is multidimensional. for Jeffreys’ prior often turns out to be improper, as the next example illustrates. EXAMPLE 7.4.6 Location Normal (Example 7.4.5 continued) In this case, Jeffreys’ prior is given by 0 which gives the same posterior as in Example 7.4.5. Note that Jeffreys’ prior is effectively a constant and hence the prior of Example 7.4.5 is equivalent to Jeffreys’ prior. n Research into rules for determining noninformative priors and the consequences of using such priors is an active area in statistics. While the impropriety seems counterin­ tuitive, their usage often produces inferences with good properties. Chapter 7: Bayesian Inference 427 Summary of Section 7.4 To implement Bayesian inference, the statistician must choose a prior as well as the sampling model for the data. These choices must be checked if the inferences obtained are supposed to have practical validity. This topic is discussed in Chapter 9. Various techniques have been devised to allow for automatic selection of a prior. These include empirical Bayes methods, hierarchical Bayes, and the use of non­ informative priors to express ignorance. Noninformative priors are often improper. We must always check that an im­ proper prior leads to a proper posterior. EXERCISES 7.4.1 Prove that the family Gamma priors with respect to sampling from the model given by Pareto 0 is a conjugate family of distributions with 0 : 0. 7.4.2 Prove that the family : 1 0 of priors given by I[ 1 1 is a conjugate family of priors with respect to sampling from the model giv
en by the Uniform[0 7.4.3 Suppose that the statistical model is given by ] distributions with 0 and that we consider the family of priors given by . 1 x2 and hence the prior, which prior is selected here? based on the selected prior. and we observe the sample x1 1 x2 (a) If we use the maximum value of the prior predictive for the data to determine the value of (b) Determine the posterior of 7.4.4 For the situation described in Exercise 7.4.3, put a uniform prior on the hyperpa­ rameter 7.4.5 For the model for proportions described in Example 7.1.1, determine the prior predictive density. If n 1 1 or 5 5 would the prior predictive criterion select for further inferences about ? (Hint: Theorem of total probability.) 7 which of the priors given by and determine the posterior of 10 and nx 428 Section 7.4: Choosing Priors ] 1 is proper for c the posterior under 0 model and we want to is proper if and 7.4.6 Prove that when using an improper prior only if the posterior under c 0 and then the posteriors are identical. 7.4.7 Determine Jeffreys’ prior for the Bernoulli model and determine the posterior distribution of based on this prior. 7.4.8 Suppose we are sampling from a Uniform[0 use the improper prior (a) Does the posterior exist in this context? (b) Does Jeffreys’ prior exist in this context? 7.4.9 Suppose a student wants to put a prior on the mean grade out of 100 that their class will obtain on the next statistics exam. The student feels that a normal prior centered at 66 is appropriate and that the interval 40 92 should contain 99% of the marks. Fully identify the prior. 7.4.10 A lab has conducted many measurements in the past on water samples from a particular source to determine the existence of a certain contaminant. From their records, it was determined that 50% of the samples had contamination less than 5.3 parts per million, while 95% had contamination less than 7.3 parts per million. If a normal prior is going to be used for a future analysis, what prior do these data deter­ mine? 7.4.11 Suppose that a manufacturer wants to construct a 0.95­credible interval for the of an item sold by the company. A consulting engineer is 99% certain mean lifetime , that the mean lifetime is less than 50 months. If the prior on then determine based on this information. 7.4.12 Suppose the prior on a model parameter and 2 unable to do this for 1/ 2 0 0 are hyperparameters. The statistician is able to elicit a value for 0 Accordingly, the statistician puts a hyperprior on 2 2 0 0 but feels 0 given by (Hint: Write 0 Determine the prior on 0 1 for some value of is taken to be N 0 is an Exponential 2 0 , where Gamma 0 0z, where z N 0 1 ) COMPUTER EXERCISES 10 nx 7, and we are using a symmetric prior, i.e., 7.4.13 Consider the situation discussed in Exercise 7.4.5. (a) If we observe n plot in the range 0 20 (you will need a statistical the prior predictive as a function of package that provides evaluations of the gamma function for this). Does this graph clearly select a value for ? (b) If we observe n 10 nx range 0 20 . Compare this plot with that in part (a). 7.4.14 Reproduce the plot given in Example 7.4.3 and verify that the maximum occurs near 9, plot the prior predictive as a function of in the 2 3 PROBLEMS R1 7.4.15 Show that a distribution in the family N 0 pletely determined once we specify two quantiles of the distribution. 2 0 : 0 2 0 0 is com­ Chapter 7: Bayesian Inference 429 7.4.16 (Scale normal model) Consider the family of N 0 2 distributions, where 0 is known and 2 0 is unknown. 
Determine Jeffreys’ prior for this model. 2 . 7.4.17 Suppose that for the location­scale normal model described in Example 7.1.4, we use the prior formed by the Jeffreys’ prior for the location model (just a constant) times the Jeffreys’ prior for the scale normal model. Determine the posterior distribu­ tion of 7.4.18 Consider the location normal model described in Example 7.1.2. (a) Determine the prior predictive density m. (Hint: Write down the joint density of and do not worry about getting m into the sample and Use (7.1.2) to integrate out a recognizable form.) (b) How would you generate a value X1 (c) Are X1 0 Zi Xn mutually independent? Justify your answer. (Hint: Write Xi 0 Xn from this distribution? Zn are i.i.d. N 0 1 ) 0 Z , where Z Z1 2. De­ 7.4.19 Consider Example 7.3.2, but this time use the prior velop the Gibbs sampling algorithm for this situation. (Hint: Simply adjust each full conditional in Example 7.3.2 appropriately.) 1 2 COMPUTER PROBLEMS 7.4.20 Use the formulation described in Problem 7.4.17 and the data in the following table 2.6 3.0 4.2 4.0 3.1 4.1 5.2 3.2 3.7 2.2 3.8 3.4 5.6 4.5 1.8 2.9 5.3 4.7 4.0 5.2 generate a sample of size N of the posterior density of 104 from the posterior. Plot a density histogram estimate based on this sample. CHALLENGES 1 2 , the Fisher information matrix I 2 is defined in Prob­ 7.4.21 When 1 2. Determine Jef­ lem 6.5.15. The Jeffreys’ prior is then defined as det I freys’ prior for the location­scale normal model and compare this with the prior used in Problem 7.4.17. 2 1 1 DISCUSSION TOPICS 7.4.22 Using empirical Bayes methods to determine a prior violates the Bayesian prin­ ciple that all unknowns should be assigned probability distributions. Comment on this. Is the hierarchical Bayesian approach a solution to this problem? 430 Section 7.5: Further Proofs (Advanced) 7.5 Further Proofs (Advanced) Derivation of the Posterior Distribution for the Location­Scale Normal Model In Example 7.1.4, the likelihood function is given by L x1 xn 2 2 n 2 exp n 2 2 x 2 exp n 1 2 2 s2 The prior on 2 where 0 0 0 and 0 are fixed and known. 2 is given by 2 N 0 2 0 2 and 1 2 Gamma 0 0 , The posterior density of 2 is then proportional to the likelihood times the joint prior. Therefore, retaining only those parts of the likelihood and the prior that depend on and 2 the joint posterior density is proportional to 2 exp 2 0 n 1 2 2 s2 1 2 0 1 exp exp exp s2 exp 2 2 nx exp 1 2 2 n 0 n 2 1 exp exp exp exp 1 2 1 2 1 2 1 s2 1 2 n 2 1 2 nx 0 2 0 n 1 2 s2 2 nx 1 2 2 is given by 2 0 2 0 1 From this, we deduce that the posterior distribution of 2 x N n x 1 2 0 Chapter 7: Bayesian Inference 431 and where and 1 2 x Gamma nx s2 nx 1 s2 Derivation of J 0 for the Location­Scale Normal Here we have that 2 1 2 1 1 2 and We have that det and so det det J 0 2 1 2 1. Chapter 8 Optimal Inferences CHAPTER OUTLINE Section 1 Optimal Unbiased Estimation Section 2 Optimal Hypothesis Testing Section 3 Optimal Bayesian Inferences Section 4 Decision Theory (Advanced) Section 5 Further Proofs (Advanced) In Chapter 5, we introduced the basic ingredient of statistical inference — the statistical model. In Chapter 6, inference methods were developed based on the model alone via the likelihood function. In Chapter 7, we added the prior distribution on the model parameter, which led to the posterior distribution as the basis for deriving inference methods. With both the likelihood and the posterior, however, the inferences were derived largely based on intuition. 
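As a brief computational aside to the posterior derivation in Section 7.5 above, the following Python sketch simulates from that joint posterior. It is only an illustration: the data are simulated, the hyperparameters mu0, tau0_sq, alpha0, beta0 are hypothetical, and the update formulas are simply the conjugate normal-gamma ones in the form just derived.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sample from the location-scale normal model
x = rng.normal(4.0, 1.5, size=20)
n, xbar, s2 = x.size, x.mean(), x.var(ddof=1)

# Hypothetical hyperparameters for mu | sigma^2 ~ N(mu0, tau0_sq * sigma^2)
# and 1/sigma^2 ~ Gamma(alpha0, beta0)
mu0, tau0_sq, alpha0, beta0 = 3.0, 1.0, 2.0, 1.0

# Conjugate update, in the form obtained in Section 7.5
kappa_x = 1.0 / tau0_sq + n
mu_x    = (mu0 / tau0_sq + n * xbar) / kappa_x
alpha_x = alpha0 + n / 2.0
beta_x  = beta0 + 0.5 * (n - 1) * s2 + 0.5 * n * (xbar - mu0) ** 2 / (1.0 + n * tau0_sq)

# Sample the joint posterior: 1/sigma^2 | x ~ Gamma(alpha_x, rate = beta_x),
# then mu | sigma^2, x ~ N(mu_x, sigma^2 / kappa_x)
N = 10_000
precision = rng.gamma(alpha_x, 1.0 / beta_x, size=N)   # numpy's gamma takes a scale
sigma2 = 1.0 / precision
mu = rng.normal(mu_x, np.sqrt(sigma2 / kappa_x))

print("posterior mean of mu:", mu.mean(), "  posterior mean of sigma^2:", sigma2.mean())

A density histogram of the mu draws gives a Monte Carlo estimate of its posterior density, in the spirit of Computer Problem 7.4.20 (which, however, uses the Jeffreys-type prior of Problem 7.4.17 rather than the conjugate prior assumed here).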
For example, when we had a characteristic of interest , there was nothing in the theory in Chapters 6 and 7 that forced us to choose a particular estimator, confidence or credible interval, or testing procedure. A complete theory of statistical inference, however, would totally prescribe our inferences. One attempt to resolve this issue is to introduce a performance measure on infer­ ences and then choose an inference that does best with respect to this measure. For example, we might choose to measure the performance of estimators by their mean­ squared error (MSE) and then try to obtain an estimator that had the smallest possible MSE. This is the optimality approach to inference, and it has been quite successful in a number of problems. In this chapter, we will consider several successes for the optimality approach to deriving inferences. Sometimes the performance measure we use can be considered to be based on what is called a loss function. Loss functions form the basis for yet another approach to statistical inference called decision theory. While it is not always the case that a performance measure is based on a loss function, this holds in some of the most impor­ tant problems in statistical inference. Decision theory provides a general framework in which to discuss these problems. A brief introduction to decision theory is provided in Section 8.4 as an advanced topic. 433 434 Section 8.1: Optimal Unbiased Estimation 8.1 Optimal Unbiased Estimation for the statistical Suppose we want to estimate the real­valued characteristic If we have observed the data s an estimate is a value T s that the model . We refer to T as an estimator statistician hopes will be close to the true value of of . For a variety of reasons The error in the estimate is given by T s (mostly to do with mathematics) it is more convenient to consider the squared error T s 2. f : Of course, we would like this squared error to be as small as possible. Because this leads us to consider the distributions of the we do not know the true value of squared error, when s has distribution given by f . We would then like to choose the estimator T so that these distributions are as concentrated as possible about 0. A convenient measure of the concentration of these distributions about 0 is given by their means, or for each MSE T E T 2 (8.1.1) called the mean­squared error (recall Definition 6.3.1). An optimal estimator of is then a T that minimizes (8.1.1) for every In other words, T would be optimal if, for any other estimator T defined on S we have that MSE T MSE T for each Unfortunately, it can be shown that, except in very artificial circumstances, there is no such T so we need to modify our optimization problem. This modification takes the form of restricting the estimators T that we will enter­ tain as possible choices for the inference. Consider an estimator T such that E T does not exist or is infinite. It can then be
shown that (8.1.1) is infinite (see Challenge 8.1.26). So we will first restrict our search to those T for which E T is finite for every Further restrictions on the types of estimators that we consider make use of the following result (recall also Theorem 6.3.1). Theorem 8.1.1 If T is such that E T 2 is finite, then E T c 2 Var T E T c 2 This is minimized by taking c E T . PROOF We have that E T c 2 E T Var T E T E T 2 E T 2E T c 2 because E T not depend on c, the value of (8.1.2) is minimized by taking c 0. As 8.1.2) 0, and Var T does E T . Chapter 8: Optimal Inferences 435 8.1.1 The Rao–Blackwell Theorem and Rao–Blackwellization We will prove that, when we are looking for T to minimize (8.1.1), we can further restrict our attention to estimators T that depend on the data only through the value of a sufficient statistic. This simplifies our search, as sufficiency often results in a reduction of the dimension of the data (recall the discussion and examples in Section 6.1.1). First, however, we need the following property of sufficiency. Theorem 8.1.2 A statistic U is sufficient for a model if and only if the conditional distribution of the data s given U u is the same for every PROOF See Section 8.5 for the proof of this result. u can tell us nothing about the true value of The implication of this result is that information in the data s beyond the value of U s because this information comes from a distribution that does not depend on the parameter. Notice that Theorem 8.1.2 is a characterization of sufficiency, alternative to that provided in Section 6.1.1. Consider a simple example that illustrates the content of Theorem 8.1.2. EXAMPLE 8.1.1 Suppose that S 1 2 3 4 given by the following table. a b , where the two probability distributions are Then L U 2 L U 4 As we must have s the response s given U s the point 1) for both response s given U s similarly when following table. 0 1 , given by U 1 0 and L 4 , and so U : S 1 is a sufficient statistic. 1 when we observe U s 0 the conditional distribution of 0 is degenerate at 1 (i.e., all the probability mass is at a and a the conditional distribution of the 1 places 1/3 of its mass at each of the points in 2 3 4 and 1 the conditional distributions are as in the b When b So given Thus, we see that indeed the conditional distributions are independent of . We now combine Theorems 8.1.1 and 8.1.2 to show that we can restrict our at­ tention to estimators T that depend on the data only through the value of a sufficient statistic U . By Theorem 8.1.2 we can denote the conditional probability measure for u , i.e., this probability measure does not depend on s given U s . , as it is the same for every u, by P U For estimator T of , such that E T is finite for every put TU s equal to the conditional expectation of T given the value of U s namely, TU s E P U U s T , 436 Section 8.1: Optimal Unbiased Estimation i.e., TU is the average value of T when we average using P U that TU s1 P U U s2 TU s2 whenever U s1 ), and so TU depends on the data s only through the value of U s . U s2 (this is because P U U s Notice U s1 Theorem 8.1.3 (Rao–Blackwell) Suppose that U is a sufficient statistic and E T 2 is finite for every . Then MSE TU MSE T for every PROOF Let P U denote the marginal probability measure of U induced by P . By the theorem of total expectation (see Theorem 3.5.2), we have that MSE , where E P U u by Theorem 8.1.1, T 2 denotes the conditional MSE of T , given U u. Now E P U u T 2 VarP U u T E P U u T 2. 
(8.1.3) As both terms in (8.1.3) are nonnegative, and recalling the definition of TU we have MSE T E P U VarP U u T E P U TU s 2 . E P U TU s 2 Now TU s the theorem of total expectation, E P U u 2 TU s 2 (Theorem 3.5.4) and so, by E P U TU s E P TU TU s 2 MSE TU and the theorem is proved. Theorem 8.1.3 shows that we can always improve on (or at least make no worse) any estimator T that possesses a finite second moment, by replacing T s by the esti­ mate TU s . This process is sometimes referred to as the Rao­Blackwellization of an estimator. Notice that putting E E and c in Theorem 8.1.1 implies that MSE T Var T E T 2. (8.1.4) So the MSE of T can be decomposed as the sum of the variance of T plus the squared bias of T (this was also proved in Theorem 6.3.1). Theorem 8.1.1 has another important implication, for (8.1.4) is minimized by tak­ ing E T . This indicates that, on average, the estimator T comes closer (in terms of squared error) to E T than to any other value. So, if we are sampling from T s is a natural estimate of E T . Therefore, for a the distribution specified by general characteristic , it makes sense to restrict attention to estimators that have bias equal to 0. This leads to the following definition. Chapter 8: Optimal Inferences 437 Definition 8.1.1 An estimator T of is unbiased if E T for every Notice that, for unbiased estimators with finite second moment, (8.1.4) becomes MSE T Var T . Therefore, our search for an optimal estimator has become the search for an unbiased estimator with smallest variance. If such an estimator exists, we give it a special name. Definition 8.1.2 An unbiased estimator of with smallest variance for each is called a uniformly minimum variance unbiased (UMVU) estimator. It is important to note that the Rao–Blackwell theorem (Theorem 8.1.3) also ap­ plies to unbiased estimators. This is because the Rao–Blackwellization of an unbiased estimator yields an unbiased estimator, as the following result demonstrates. Theorem 8.1.4 (Rao–Blackwell for unbiased estimators) If T has finite second mo­ ment, is unbiased for for every and U is a sufficient statistic, then E TU Var T (so TU is also unbiased for ) and Var TU PROOF Using the theorem of total expectation (Theorem 3.5.2), we have E TU E P U TU . So TU is unbiased for plying Theorem 8.1.3 gives Var TU and MSE T Var T . Var T , MSE TU Var TU Ap­ There are many situations in which the theory of unbiased estimation leads to good estimators. However, the following example illustrates that in some problems, there are no unbiased estimators and hence the theory has some limitations. EXAMPLE 8.1.2 The Nonexistence of an Unbiased Estimator and we wish to find a Suppose that x1 UMVU estimator of , the odds in favor of a success occurring From Theorem 8.1.4, we can restrict our search to unbiased estimators T that are functions of the sufficient statistic nx. is a sample from the Bernoulli xn 1 Such a T satisfies E T n X 1 for every [0 1]. Recalling that n X Binomial n this implies that for every [0 1]. By the binomial theorem, we have 438 Section 8.1: Optimal Unbiased Estimation Substituting this into the preceding expression for of powers of leads to 1 and writing this in terms . (8.1.5) Now the left­hand side of (8.1.5) goes to polynomial in , which is bounded in [0 1] Therefore, an unbiased estimator of cannot exist. 1 but the right­hand side is a as If a characteristic has an unbiased estimator, then it is said to be U­estimable. 
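To make Rao-Blackwellization concrete, the following small simulation sketch (not part of the text; the parameter value, sample size, and replication count are hypothetical) starts from the crude unbiased estimator T = X1 of theta in a Bernoulli(theta) model and conditions on the sufficient statistic U = sum of the Xi, which gives TU = E(X1 | U) = X-bar. Both estimators are unbiased, but the conditioned version has much smaller variance, as Theorem 8.1.4 guarantees.

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.3, 10, 100_000   # hypothetical values

x = rng.binomial(1, theta, size=(reps, n))
T  = x[:, 0]          # unbiased for theta, but uses only the first observation
TU = x.mean(axis=1)   # its Rao-Blackwellization: E(X1 | sum of the Xi) = sample mean

print("means:    ", T.mean(), TU.mean())    # both approximately theta = 0.3
print("variances:", T.var(), TU.var())      # about 0.21 versus about 0.021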
It should be kept in mind, however, that just because a parameter is not U­estimable in Example 8.1.2, is a 1–1 does not mean that we cannot estimate it! For example, x (see Theorem 6.2.1); this seems is given by x so the MLE of function of like a sensible estimator, even if it is biased. 1 8.1.2 Completeness and the Lehmann–Scheffé Theorem In certain circumstances, if an unbiased estimator exists, and is a function of a sufficient statistic U then there is only one such estimator — so it must be UMVU. We need the concept of completeness to establish this. Definition 8.1.3 A statistic U is complete if any function h of U which satisfies E h U 0 with probability 1 for each , also satisfies h U s 1 for every (i.e., P s : h U s 0 for every ). 0 In probability theory, we treat two functions as equivalent if they differ only on a set having probability content 0, as the probability of the functions taking different values at an observed response value is 0. So in Definition 8.1.3, we need not distinguish between h and the constant 0. Therefore, a statistic U is complete if the only unbiased estimator of 0, based on U is given by 0 itself. We can now derive the following result. Theorem 8.1.5 (Lehmann–Scheffé) If U is a complete sufficient statistic, and if T depends on the data only through the value of U has finite second moment for every and is unbiased for then T is UMVU. PROOF Suppose that T is also an unbiased estimator of By Theorem 8.1.4 we can assume that T depends on the data only through the value of U Then there exist functions h and h such that T s and T s h U s h U s and . By the completeness of U , we have that h U which implies that T T with probability 1 for each h U with probability 1 for each This says based on U and so it must be there is essentially only one unbiased estimator for UMVU. Chapter 8: Optimal Inferences 439 The Rao–Blackwell theorem for unbiased estimators (Theorem 8.1.4), together with the Lehmann–Scheffé theorem, provide a method for obtaining a UMVU esti­ mator of . Suppose we can find an unbiased estimator T that has finite second If we also have a complete sufficient statistic U then by Theorem 8.1.4 moment. and depends on the data only through E P U U s TU s the value of U because TU s1 U s2 . Therefore, by Theorem 8.1.5, TU is UMVU for TU s2 whenever U s1 T is unbiased for . It is not necessary, in a given problem, that a complete sufficient statistic exist. In fact, it can be proved that the only candidate for this is a minimal sufficient statistic (recall the definition in Section 6.1.1). So in a given problem, we must obtain a minimal sufficient statistic and then determine whether or not it is complete. We illustrate this via an example. EXAMPLE 8.1.3 Location Normal Suppose that x1 is unknown and 2 0 sufficient statistic for this model. R1 xn is a sample from an N 0 is known. In Example 6.1.7, we showed that x is a minimal 2 0 distribution, where In fact, x is also complete for this model. The proof of this is a bit involved and is presented in Section 8.5.
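The next step of the example shows that x-bar is in fact UMVU for mu. A quick simulation sketch (hypothetical values of mu, sigma0, and n; not from the text) compares x-bar with another unbiased estimator of mu, the sample median, and displays the variance advantage that the UMVU property predicts.

import numpy as np

rng = np.random.default_rng(1)
mu, sigma0, n, reps = 5.0, 2.0, 25, 50_000   # hypothetical values

x = rng.normal(mu, sigma0, size=(reps, n))
xbar = x.mean(axis=1)        # function of the complete minimal sufficient statistic
med  = np.median(x, axis=1)  # also unbiased for mu by symmetry, but not a function of x-bar

print("biases:   ", xbar.mean() - mu, med.mean() - mu)   # both near 0
print("variances:", xbar.var(), med.var())
# Var(x-bar) = sigma0^2 / n = 0.16, while the median's variance is roughly
# (pi/2) * sigma0^2 / n, or about 0.25.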
Given that x is a complete, minimal sufficient statistic, this implies that T x is a UMVU estimator of its mean E T X whenever T has a finite second moment for and every E X 2 0z p is the UMVU estimator of E X R1 In particular, x is the UMVU estimator of 0z p (the pth quantile of the true distribution). Furthermore, x because E X 2 0 n 0z p 2 The arguments needed to show the completeness of a minimal sufficient statistic in a problem are often similar to the one required in Example 8.1.3 (see Challenge 8.1.27). Rather than pursue such technicalities here, we quote some important examples in which the minimal sufficient statistic is complete. EXAMPLE 8.1.4 Location­Scale Normal Suppose that x1 and by R1 0 are unknown. The parameter in this model is two­dimensional and is given 2 xn is a sample from an N 2 distribution, where R1 0 . We showed, in Example 6.1.8, that x s2 is a minimal sufficient statistic for this model. In fact, it can be shown that x s2 is a complete minimal sufficient statistic. Therefore, T x s2 is a UMVU estimator of E T X S2 whenever the second mo­ ment of T x s2 is finite for every In particular, x is the UMVU estimator of 2 and s2 is UMVU for 2 EXAMPLE 8.1.5 Distribution­Free Models Suppose that x1 statistical model comprises all continuous distributions on R1 xn is a sample from some continuous distribution on R1 The It can be shown that the order statistics x 1 sufficient statistic for this model. Therefore, T x 1 x n make up a complete minimal is UMVU for x n E T X 1 X n 440 whenever Section 8.1: Optimal Unbiased Estimation E T 2 X 1 X n for every continuous distribution. In particular, if T : Rn is the case. For example, if 8.1.6) R1 is bounded, then this the relative frequency of the event A in the sample, then T x 1 for is UMVU Now change the model assumption so that x1 is a sample from some continuous distribution on R1 that possesses its first m moments. Again, it can be shown that the order statistics make up a complete minimal sufficient statistic. There­ X n whenever (8.1.6) holds for fore, T x 1 2 every continuous distribution possessing its first m moments. For example, if m then this implies that T x 1 4 we x n have that s2 is UMVU for the population variance (see Exercise 8.1.2). x is UMVU for E X . When m is UMVU for E T X 1 x n xn 8.1.3 The Cramer–Rao Inequality (Advanced) There is a fundamental inequality that holds for the variance of an estimator T This is given by the Cramer–Rao inequality (sometimes called the information inequality). It is a corollary to the following inequality. Theorem 8.1.6 (Covariance inequality) Suppose T U : S R1 and E T 2 0 E U 2 for every Then Var T Cov T U 2 Var U for every Equality holds if and only if T s E T Cov T U Var U U s E U s with probability 1 for every related). (i.e., if and only if T s and U s are linearly PROOF This result follows immediately from the Cauchy–Schwartz inequality (The­ orem 3.6.3). Now suppose that is an open subinterval of R1 and we take U s S s ln f s (8.1.7) i.e., U is the score function. Assume that the conditions discussed in Section 6.5 hold, is so that E S and, Fisher’s information I 0 for all Var S s s Chapter 8: Optimal Inferences 441 finite. Then using we have Cov T U ln ln 8.1.8) in the discrete case, where we have assumed conditions like those discussed in Section 6.5, so we can pull the partial derivative through the sum. A similar argument gives the equality (8.1.8) in the continuous case as well. 
The covariance inequality, applied with U specified as in (8.1.7) and using (8.1.8), gives the following result. Corollary 8.1.1 (Cramer–Rao or information inequality) Under conditions, Var T E T 2 I 1 for every Equality holds if and only if T s E T E T I 1S s with probability 1 for every . The Cramer–Rao inequality provides a fundamental lower bound on the variance of an estimator T From (8.1.4), we know that the variance is a relevant measure of the accuracy of an estimator only when the estimator is unbiased, so we restate Corollary 8.1.1 for this case. Corollary 8.1.2 Under the conditions of Corollary 8.1.1, when T is an unbiased estimator of Var T for every Equality holds if and only if 2 I 1 T s I 1S s (8.1.9) with probability 1 for every . Notice that when then Corollary 8.1.2 says that the variance of the unbiased estimator T is bounded below by the reciprocal of the Fisher information. More generally, when is a 1–1, smooth transformation, we have (using Challenge 6.5.19) that the variance of an unbiased T is again bounded below by the reciprocal of 442 Section 8.1: Optimal Unbiased Estimation the Fisher information, but this time the model uses the parameterization in terms of . Corollary 8.1.2 has several interesting implications. First, if we obtain an unbiased estimator T with variance at the lower bound, then we know immediately that it is UMVU. Second, we know that any unbiased estimator that achieves the lower bound is of the form given in (8.1.9). Note that the right­hand side of (8.1.9) must be inde­ pendent of in order for this to be an estimator. If this is not the case, then there are no UMVU estimators whose variance achieves the lower bound. The following example demonstrates that there are cases in which UMVU estimators exist, but their variance does not achieve the lower bound. EXAMPLE 8.1.6 Poisson Model Suppose that x1 unknown. The log­likelihood is given by l nx function is given by S xn is a sample from the Poisson xn n Now x1 x1 xn and thus S x1 xn I E nx 2 nx 2 n distribution where 0 is n , so the score nx ln Suppose we are estimating I 1 n. Noting that x is unbiased for immediately that x is UMVU and achieves the lower bound. Then the Cramer–Rao lower bound is given by n we see and that Var X Now suppose that we are estimating lower bound equals e 2 n and I 1 S x1 xn e e e P 0 . The Cramer–Rao nx n e n 1 x , . So there does not exist a UMVU estimator for which is clearly not independent of that attains the lower bound. Does there exist a UMVU estimator for 1 then I 0 x1 . As it turns out, x is (for every n) a complete mini­ is an unbiased estimator of mal sufficient statistic for this model, so by the Lehmann–Scheffé theorem I 0 x1 is UMVU for Furthermore, I 0 X1 has variance ? Observe that when n P X1 0 1 P X1 0 e 1 e since I 0 X1 Bernoulli e This implies that e 1 e e 2 . In general, we have that 1 n n i 1 I 0 xi is an unbiased estimator of , but it is not a function of x. Thus we cannot apply the Lehmann–Scheffé theorem, but we can Rao–Blackwellize this estimator. Therefore, Chapter 8: Optimal Inferences 443 the UMVU estimator of is given by 1 n n i 1 E I 0 Xi X x . To determine this estimator in closed form, we reason as follows. The condi­ x, because n X is distributed Xn given X tional probability function of X1 Poisson n is x1 x1! xn xn! e n n nx nx ! 
e n 1 nx x1 xn x1 1 n xn , 1 n Xn given X i.e., X1 cordingly, the UMVU estimator is given by x is distributed Multinomial nx 1 n 1 n Ac­ E I 0 X1 X x P X1 0 X x because Xi X x Binomial nx 1 n for each i 1 nx 1 n 1 n Certainly, it is not at all obvious from the functional form that this estimator is unbiased, let alone UMVU. So this result can be viewed as a somewhat remarkable application of the theory. Recall now Theorems 6.5.2 and 6.5.3. The implications of these results, with some additional conditions, are that the MLE of and that the asymptotic variance of the MLE is at the information lower bound. This is often interpreted to mean that, with large samples, the MLE makes full use of the information about is asymptotically unbiased for contained in the data. Summary of Section 8.1 An estimator comes closest (using squared distance) on average to its mean (see Theorem 8.1.1), so we can restrict attention to unbiased estimators for quantities of interest. The Rao–Blackwell theorem says that we can restrict attention to functions of a sufficient statistic when looking for an estimator minimizing MSE. When a sufficient statistic is complete, then any function of that sufficient statis­ tic is UMVU for its mean. The Cramer–Rao lower bound gives a lower bound on the variance of an unbi­ ased estimator and a method for obtaining an estimator that has variance at this lower bound when such an estimator exists. 444 Section 8.1: Optimal Unbiased Estimation EXERCISES 8.1.1 Suppose that a statistical model is given by the two distributions in the following table 12 1 6 s 4 5 12 1 12 fa s fb s T 2 1 2 3 4 is defined by T 1 If T : 1 2 3 4 s otherwise, then prove that T is a sufficient statistic. Derive the conditional distributions of s given T s and show that these are independent of 8.1.2 Suppose that x1 2 Prove that s2 ance 8.1.3 Suppose that x1 R1 is unknown and 2 n i 1 xi xn is a sample from an N 0 is known. Determine a UMVU estimator of the second moment xn is a sample from a distribution with mean 2 0 distribution, where x 2 is unbiased for 1 and T s and vari­ 1 1 n 2 2 2 0 8.1.4 Suppose that x1 R1 is unknown and 2 xn is a sample from an N 2 0 distribution, where 0 is known. Determine a UMVU estimator of the first quartile 0z0 25. xn is a sample from an N 2 0 distribution, where 3 a UMVU estimator of anything? If so, what 8.1.5 Suppose that x1 R1 is unknown and 2 0 is known. Is 2x is it UMVU for? Justify your answer. 8.1.6 Suppose that x1 xn [0 1] is unknown. Determine a UMVU estimator of is a sample from a Bernoulli distribution, where (use the fact that a minimal sufficient statistic for this model is complete). 8.1.7 Suppose that x1 0 is known and xn is a sample from a Gamma distribution, where 0 is unknown. Using the fact that x is a complete sufficient 0 1. statistic (see Challenge 8.1.27), determine a UMVU estimator of 2 distribution, where 8.1.8 Suppose that x1 0 2 is a sufficient statistic is known and 2 for this problem. Using the fact that it is complete, determine a UMVU estimator for xn is a sample from an N 0 0 is unknown. Show that n i 1 xi 0 2. 8.1.9 Suppose a statistical model comprises all continuous distributions on R1. Based o
n a sample of n, determine a UMVU estimator of P 1 1 , where P is the true probability measure. Justify your answer. 8.1.10 Suppose a statistical model comprises all continuous distributions on R1 that have a finite second moment. Based on a sample of n, determine a UMVU estimator is the true mean. Justify your answer. (Hint: Find an unbiased esti­ of mator for n 2 Rao–Blackwellize this estimator for a sample of n, and then use the Lehmann–Scheffé theorem.) 2 when the 8.1.11 The estimator determined in Exercise 8.1.10 is also unbiased for statistical model comprises all continuous distributions on R1 that have a finite first moment. Is this estimator still UMVU for 2 where 2? Chapter 8: Optimal Inferences 445 PROBLEMS 8.1.12 Suppose that x1 ] distribution, where xn is a sample from a Uniform[0 0 is unknown. Show that x n is a sufficient statistic and determine its distribution. distribution, where xn , Using the fact that x n is complete, determine a UMVU estimator of 8.1.13 Suppose that x1 xn is a sample from a Bernoulli . [0 1] is unknown. Then determine the conditional distribution of x1 given the value of the sufficient statistic x. 8.1.14 Prove that L a 2 satisfies a L a1 1 a2 L a1 1 L a2 when a ranges in a subinterval of R1. Use this result together with Jensen’s inequality (Theorem 3.6.4) to prove the Rao–Blackwell theorem. 8.1.15 Prove that L a satisfies a L a1 1 a2 L a1 1 L a2 when a ranges in a subinterval of R1. Use this result together with Jensen’s inequality (Theorem 3.6.4) to prove the Rao–Blackwell theorem for absolute error. (Hint: First show that x 8.1.16 Suppose that x1 R1 2 distribution, where is unknown. Show that the optimal estimator (in the sense 2 is given by c 1 . is a sample from an N y for any x and y.) xn n n 0 1 x y 2 of minimizing the MSE), of the form cs2 for Determine the bias of this estimator and show that it goes to 0 as n 8.1.17 Prove that if a statistic T is complete for a model and U function h then U is also complete. 8.1.18 Suppose that x1 R1 0 2 xn 2 distribution, where is unknown. Derive a UMVU estimator of the standard devia­ is a sample from an N . h T for a 1–1 (Hint: Calculate the expected value of the sample standard deviation s.) xn 2 distribution, where is unknown. Derive a UMVU estimator of the first quartile is a sample from an N tion 8.1.19 Suppose that x1 R1 0 2 z0 25. (Hint: Problem 8.1.17.) 8.1.20 Suppose that x1 xn is a sample from an N 2 0 distribution, where 0 is known. Establish that x is a minimal 1 2 is unknown and 2 0 sufficient statistic for this model but that it is not complete. is a sample from an N 8.1.21 Suppose that x1 xn R1 is unknown and 2 2 0 distribution, where 0 is known. Determine the information lower bound, for an 2 0. Does 2 unbiased estimator, when we consider estimating the second moment the UMVU estimator in Exercise 8.1.3 attain the information lower bound? 8.1.22 Suppose that x1 xn is a sample from a Gamma distribution, where 0 is unknown. Determine the information lower bound for the 1 using unbiased estimators, and determine if the UMVU estimator 0 0 is known and estimation of obtained in Exercise 8.1.7 attains this. 8.1.23 Suppose that x1 x xn [0 1] and 1 for x x is a sample from the distribution with density f 0 is unknown. Determine the information lower 446 Section 8.2: Optimal Hypothesis Testing using unbiased estimators Does a UMVU estimator with vari­ bound for estimating ance at the lower bound exist for this problem? 
8.1.24 Suppose that a statistic T is a complete statistic based on some statistical model. A submodel is a statistical model that comprises only some of the distributions in the original model. Why is it not necessarily the case that T is complete for a submodel? 8.1.25 Suppose that a statistic T is a complete statistic based on some statistical model. If we construct a larger model that contains all the distributions in the original model and is such that any set that has probability content equal to 0 for every distribution in the original model also has probability content equal to 0 for every distribution in the larger model, then prove that T is complete for the larger model as well. CHALLENGES 8.1.26 If X is a random variable such that E X either does not exist or is infinite, then show that E X 8.1.27 Suppose that x1 for any constant c. xn is a sample from a Gamma distribution, where 0 is unknown. Show that x is a complete minimal sufficient c 2 0 0 is known and statistic. 8.2 Optimal Hypothesis Testing Suppose we want to assess a hypothesis about the real­valued characteristic the model f have specified a value for we have evidence against H0. for 0, where we . After observing data s, we want to assess whether or not . Typically, this will take the form H0 : : In Section 6.3.3, we discussed methods for assessing such a hypothesis based on the plug­in MLE for These involved computing a P­value as a measure of how surprising the data s are when the null hypothesis is assumed to be true. If s is sur­ 0 then we have evidence for which prising for each of the distributions f against H0 The development of such procedures was largely based on the intuitive justification for the likelihood function. 8.2.1 The Power Function of a Test Closely associated with a specific procedure for computing a P­value is the concept of a power function as defined in Section 6.3.6. For this, we specified a critical such that we declare the results of the test statistically significant whenever the value P­value is less than or equal to is then the probability of the P­value being less than or equal to when we are sampling from f The greater the value of 0 the better the procedure is at detecting departures from H0. The power function is thus a measure of the sensitivity of the testing procedure to detecting departures from H0 The power when Recall the following fundamental example. Chapter 8: Optimal Inferences 447 EXAMPLE 8.2.1 Location Normal Model Suppose we have a sample x1 unknown and 2 0 0 In Example 6.3.9, we showed that a sensible test for this problem is based on the z­ statistic 0 is known, and we want to assess the null hypothesis H0 : 2 0 model, where xn from the N R1 is z x 0 n 0 with Z N 0 1 under H0 The P­value is then given by where denotes the N 0 1 distribution function. In Example 6.3.18, we showed that, for critical value the power function of the z­test is given by P 2 1 1 0 0 n X 0 z1 0 n 2 X 0 0 n z1 2 P 0 0 n 1 2 because X N 2 0 n . We see that specifying a value for specifies a set of data values R x1 xn : x 0 0 n 1 2 such that the results of the test are determined to be statistically significant whenever x1 is 1–1 increasing, we can also write R as R Using the fact that xn R x1 x1 xn : xn : z1 2 Furthermore, the power function is given by P R and 0 P 0 R . 8.2.2 Type I and Type II Errors We now adopt a different point of view. We are going to look for tests that are optimal for testing the null hypothesis H0 : 0. 
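Before pursuing optimality, it may help to connect the power function of Example 8.2.1 with these two error probabilities numerically. The following is only a sketch (the values of mu0, sigma0, n, and the critical value alpha are hypothetical), using the two-sided z-test described above.

import numpy as np
from scipy.stats import norm

mu0, sigma0, n, alpha = 0.0, 1.0, 20, 0.05     # hypothetical values
z_crit = norm.ppf(1 - alpha / 2)               # cutoff for the two-sided z-test

def power(mu):
    """P(reject H0) when the true mean is mu, i.e., the power function."""
    shift = (mu - mu0) / (sigma0 / np.sqrt(n))
    return norm.cdf(-z_crit - shift) + 1 - norm.cdf(z_crit - shift)

print("type I error probability: ", power(mu0))       # equals alpha = 0.05
print("power at mu = 0.5:        ", power(0.5))       # about 0.61
print("type II error at mu = 0.5:", 1 - power(0.5))   # probability of accepting a false H0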
First, we will assume that, having observed the data s we will decide to either accept or reject H0 If we reject H0 then this is equivalent to accepting the alternative Ha : 0. Our performance measure for assessing testing procedures will then be the probability that the testing procedure makes an error. 448 Section 8.2: Optimal Hypothesis Testing There are two types of error. We can make a type I error — rejecting H0 when it is true — or make a type II error — accepting H0 when H0 is false. Note that if we reject H0 then this implies that we are accepting the alternative hypothesis Ha : 0 It turns out that, except in very artificial circumstances, there are no testing proce­ dures that simultaneously minimize the probabilities of making the two kinds of errors. Accordingly, we will place an upper bound called the critical value, on the proba­ bility of making a type I error. We then search among those tests whose probability of making a type I error is less than or equal to for a testing procedure that minimizes the probability of making a type II error. Sometimes hypothesis testing problems for real­valued parameters are distinguished 0 ver­ 0 0 are examples of one­sided problems. Notice, as being one­sided or two­sided. For example, if sus Ha : or H0 : however, that if we define 0 is a two­sided testing problem, while H0 : 0 versus Ha : is real­valued, then H0 : 0 versus Ha : I 0 , 0 versus Ha : 0 is equivalent to the problem H0 : 0 0. Similarly, if we define I , 0 0 versus Ha : 0 0. So the formulation we have adopted for testing problems about 0 is equivalent to the problem H0 : includes the one­sided problems as special cases. then H0 : versus Ha : then H0 : versus Ha : a general 8.2.3 Rejection Regions and Test Functions S before we One approach to specifying a testing procedure is to select a subset R observe s. We then reject H0 whenever s R The set R is referred to as a rejection region. Putting an upper bound on the probability of rejecting H0 when it is true leads to the following. R and accept H0 whenever s Definition 8.2.1 A rejection region R satisfying P R (8.2.1) whenever 0 is called a size rejection region for H0. So (8.2.1) expresses the bound on the probability of making a type I error. Among all size rejection regions R we want to find the one (if it exists) that will minimize the probability of making a type II error. This is equivalent to finding the size rejection region R that maximizes the probability of rejecting the null hypothesis when it is false. This probability can be expressed in terms of the power function of R and is given by P R whenever 0 To fully specify the optimality approach to testing hypotheses, we need one addi­ rejection region R is E IR tional ingredient. Observe that our search for an optimal size equivalent to finding the indicator function IR that satisfies P R Chapter 8: Optimal Inferences when 0 and maximizes E IR P R , when turns out that, in a number of problems, there is no such rejection region. 449 0 It On the other hand, there is often a solution to the mo
re general problem of finding a function : S [0 1] satisfying E , (8.2.2) when 0 and maximizes when 0 We have the following terminology. E , Definition 8.2.2 We call power function associated with the test function : S 0 it is called a size 0 it is called an exact size maximizes (UMP) size E test function. when . test function. [0 1] a test function and the satisfies (8.2.2) when when that 0 is called a uniformly most powerful If satisfies E If test function. A size test function E Note that P R . IR is a test function with power function given by E IR s s For observed data s we interpret 1 to mean that we reject H0 In general, we interpret 0 to mean that we accept H0 and interpret s to be the conditional probability that we reject H0 given the data s Operationally, this means that, after we random variable. If we get a 1 we reject s observe s we generate a Bernoulli H0 if we get a 0 we accept H0 Therefore, by the theorem of total expectation, E is the unconditional probability of rejecting H0. The randomization that occurs when 1 may seem somewhat counterintuitive, but it is forced on us by our search 0 s test, as we can increase power by doing this in certain problems. for a UMP size 8.2.4 The Neyman–Pearson Theorem For a testing problem specified by a null hypothesis H0 : value function 0 for H0 : of 0 is characterized (letting we want to find a UMP size test function ) by Note that a UMP size 0 and a critical test denote the power function when 0 and by 0 0 , when 0, for any other size test function Still, this optimization problem does not have a solution in general. In certain prob­ lems, however, an optimal solution can be found. The following result gives one such example. It is fundamental to the entire theory of optimal hypothesis testing. 450 Section 8.2: Optimal Hypothesis Testing Theorem 8.2.1 (Neyman–Pearson) Suppose that 0 Then an exact size test H0 : test function 0 exists of the form 0 1 and that we want to c0 c0 c0 (8.2.3) for some [0 1] and c0 0 This test is UMP size PROOF See Section 8.5 for the proof of this result. The following result can be established by a simple extension of the proof of the Neyman–Pearson theorem. Corollary 8.2.1 If possibly on the boundary B size unless the power of a UMP size is a UMP size s : f 1 s s test, then f 0 s test equals 1. 0 s everywhere except has exact c0 Furthermore, PROOF See Challenge 8.2.22. Notice the intuitive nature of the test given by the Neyman–Pearson theorem, for (8.2.3) indicates that we categorically reject H0 as being true when the likelihood ratio of 0 is greater than the constant c0 and we accept H0 when it is smaller. When the likelihood ratio equals c0, we randomly decide to reject H0 with probability test is basically unique, although there . Also, Corollary 8.2.1 says that a UMP size 1 versus are possibly different randomization strategies on the boundary. The proof of the Neyman–Pearson theorem reveals that c0 is the smallest real num­ ber such that and P 0 f 1 s f 0 s c0 (8.2.4 c0 c0 0 P 0 f 1 s f 0 s otherwise. c0 0 (8.2.5) We use (8.2.4) and (8.2.5) to calculate c0 and , and so determine the UMP size in a particular problem. test, Note that the test is nonrandomized whenever P 0 as 0, i.e., we categorically accept or reject H0 after seeing the data. This then always occurs whenever the distribution of f 1 s P 0. Interestingly, it can happen that the distribution of the ratio is not continuous even when the distribution of s is continuous (see Problem 8.2.17). 
f 0 s is continuous when s f 0 s f 1 s c0 Before considering some applications of the Neyman–Pearson theorem, we estab­ lish the analog of the Rao–Blackwell theorem for hypothesis testing problems. Given Chapter 8: Optimal Inferences 451 the value of the sufficient statistic U s measure for the response s by P U sure does not depend on ) For test function expectation of given the value of U s namely, u, we denote the conditional probability u (by Theorem 8.1.2, this probability mea­ put U s equal to the conditional U s E P U U s . Theorem 8.2.2 Suppose that U is a sufficient statistic and 0 Then U is a size for H0 : depends on the data only through the value of U Furthermore, same power function. test function for H0 : is a size test function 0 that and U have the and so U PROOF It is clear that U s1 depends on the data only through the value of U Now let P U denote the marginal probability measure of U induced by P . Then by the theorem of total expectation, we when E P U have E U s2 whenever U s1 U s2 E E P U E P U u 0, which implies that E U U when U . Now E 0, and E E U when 0 This result allows us to restrict our search for a UMP size that depend on the data only through the value of a sufficient statistic. test to those test functions We now consider some applications of the Neyman–Pearson theorem. The follow­ ing example shows that this result can lead to solutions to much more general problems than the simple case being addressed. EXAMPLE 8.2.2 Optimal Hypothesis Testing in the Location Normal Model 2 0 distribution, where Suppose that x1 1 and 2 0 versus Ha : 0 0 is known, and we want to test H0 : xn is a sample from an N 0 The likelihood function is given by 1. L x1 xn exp n 2 2 0 x 2 , and x is a sufficient statistic for this restricted model. By Theorem 8.2.2, we can restrict our attention to test functions that depend on the data through x Now X N 2 0 n so that f 1 x f 0 x exp exp exp exp 2x 1 2 1 x 2 2x 0 2 0 1 0 x exp n 2 2 0 2 1 2 0 452 Therefore, Section 8.2: Optimal Hypothesis Testing exp P 0 exp c0 c0 c0 0 X exp n 2 2 0 0 X c0 exp n 2 2 0 2 0 n ln c0 exp c0 2 0 1 1 0 0, where c0 n 0 2 0 n 1 0 ln c0 exp n 2 2 0 2 1 2 0 0 Using (8.2.4), when 1 0 we select c0 so that c0 z1 when 1 0 we select c0 so that c0 z These choices imply that P 0 f 1 X f 0 X c0 and, by (8.2.5), 0. So the UMP size test is nonrandomized. When 1 0 the test is given by 0 x 1 0 When 1 0 the test is given by z1 z1 0 n z 0 n 0 n z Notice that the test function in (8.2.6) does not depend on subsequent implication is that this test function is UMP size Ha : 1 for any 1 versus the alternative Ha : 0 This implies that 0 0 is UMP size (8.2.6) (8.2.7) for H0 : 1 in any way. The 0 versus 0 for H0 : Chapter 8: Optimal Inferences 453 Furthermore, we have 0 P X 0 1 0 0 n z1 0 n z1 P X 0 n 0 0 z1 n Note that this is increasing in H0 : H0 : Ha : Ha : UMP size 0 is a size is a size 0 versus Ha : test for H0 : 0 versus Ha : 0 From this, we conclude that for H0 : 0 Similarly (see Problem 8.2.12), it can be shown that for H0 : , which implies that 0 Observe that, if 0 then it is also a size 0 is UMP size 0 versus Ha : 0 test function for test function for 0 versus 0 versus 0 in (8.2.7) is We might wonder if a UMP size 0 Suppose that 0 versus Ha : test exists for the two­sided problem H0 : is a size UMP test for this problem. Then for H0 : 0 versus Ha : is also size 0. 
Using Corollary 8.2.1 and the preceding developments (which also shows that there does not exist a test of the form (8.2.3) having power equal to 1 for this problem), this implies that (the boundary B has probability 0 here). But for H0 : versus Ha : is also UMP size 0; thus, by the same reasoning, 0 0 But clearly 1 when 1 1 when 1 0 0 0 so there is no UMP size Intuitively, we would expect that the size test for the two­sided problem. x 1 0 test given by x 0 x 0 0 n 0 n z1 z1 2 2 (8.2.8) would be a good test to use, but it is not UMP size . It turns out, however, that the test when in (8.2.8) is UMP size among all tests satisfying and 0 0. Example 8.2.2 illustrated a hypothesis testing problem for which no UMP size test exists. Sometimes, however, by requiring that the test possess another very natural property, we can obtain an optimal test. Definition 8.2.3 A test that satisfies when 0 and when problem H0 : 0 is said to be an unbiased size test for the hypothesis testing 0 So (8.2.8) is a UMP unbiased size test. An unbiased test has the property that the probability of rejecting the null hypothesis, when the null hypothesis is false, is always greater than the probability of rejecting the null hypothesis, when the null hypothesis is true. This seems like a very reasonable property. In particular, it can be proved that test (Problem 8.2.14). We do not pursue any UMP size the theory of unbiased tests further in this text. is always an unbiased size We now consider an example which shows that we cannot dispense with the use of randomized tests. 454 Section 8.2: Optimal Hypothesis Testing EXAMPLE 8.2.3 Optimal Hypothesis Testing in the Bernoulli Model Suppose that x1 xn is a sample from a Bernoulli 0 versus Ha : distribution, where 1 where 1 0 1 , and we want to test H0 : 0 Then nx is a minimal sufficient statistic and, by Theorem 8.2.2, we can restrict our attention to test functions that depend on the data only through nx Now n X Binomial n so f 1 nx f 0 nx nx 1 1 nx 0 1 n nx n nx 1 0 nx 1 0 1 1 n nx . 1 0 Therefore c0 c0 ln 1 P 0 n X 1 1 1 ln c0 ln 1 1 0 1 0 c0 1 1 n 1 0 n ln c0 1 1 1 0 P 0 n X c0 because ln 1 1 1 0 1 0 0 as 1 is increasing in which implies 1 1 1 0 1 0 . Now, using (8.2.4), we choose c0 so that c0 is an integer satisfying P 0 n X c0 and P 0 n X c0 1 Because n X will not be able to achieve P 0 n X Binomial n 0 is a discrete distribution, we see that, in general, we exactly. So, using (8.2.5), c0 will not be equal to 0. Then P 0 n X c0 P 0 n X c0 0 nx 1 0 nx nx nx c0 c0 c0 Chapter 8: Optimal Inferences 455 is UMP size software (or Table D.6) for the binomial distribution to obtain c0 0 versus Ha : for H0 : 1 Note that we can use statistical For example, suppose n 6 and 0 of the Binomial 6 0 25 distribution function to three decimal places. 0 25 The following table gives the values x F x 0 0 178 1 0 534 2 0 831 3 0 962 4 0 995 5 1 000 6 1 000 Therefore, if 0 038 and P0 25 n X 0 05 we have that c0 0 831 1 2 3 because P0 25 n X 0 169 This implies that 3 1 0 962 0 05 0 962 1 0 962 0 012 So with this test, we reject H0 : greater than 3, accept H
0 : than 3, and when the number of 1’s equals 3, we randomly reject H0 : probability 0 012 (e.g., generate U Uniform[0 1] and reject whenever U 0 012 0 categorically if the number of successes is 0 categorically when the number of successes is less 0 with Notice that the test 0 does not involve 1 so indeed it is UMP size for H0 : 0 versus Ha : 0 Furthermore, using Problem 8.2.18, we have P n X c0 1 Because n k c0 c0 1 uc0 1 u n c0 1 du uc0 1 u n c0 1 du c0 1 is decreasing in Example 8.2.2, we conclude that we must have that P n X 0 is UMP size Similarly, we obtain a UMP size Example 8.2.2, there is no UMP size there is a UMP unbiased size test for H0 : test for H0 : test for this problem. c0 is increasing in Arguing as in for H0 : 0 0 As in 0 but 0 versus Ha : 0 versus Ha : 0 versus Ha : 8.2.5 Likelihood Ratio Tests (Advanced) In the examples considered so far, the Neyman–Pearson theorem has led to solutions to problems in which H0 or Ha are not just single values of the parameter, even though the theorem was only stated for the single­value case. We also noted, however, that this is not true in general (for example, the two­sided problems discussed in Examples 8.2.2 and 8.2.3). The method of generalized likelihood ratio tests for H0 : 0 has been developed to deal with the general case. This is motivated by the Neyman–Pearson 456 Section 8.2: Optimal Hypothesis Testing theorem, for observe that in (8.2.3), . Therefore, (8.2.3) can be thought of as being based on the ratio of the likelihood at 1 to the likelihood at 0 and we reject H0 : 0 when the likelihood gives much more support to 1 than to 0 The amount of the additional support required for rejection is determined by c0 The larger c0 is, the larger the likelihood L 1 s has to be relative to L 0 s before we reject H0 : Denote the overall MLE of and the MLE, when H0 by H0 s . So 0 s by we have for all when L s L H0 s s such that 0 The generalized likelihood ratio test then rejects H0 L s L H0 s s s (8.2.9) is large, as this indicates evidence against H0 being true. How do we determine when (8.2.9) is large enough to reject? Denoting the ob­ served data by s0 we do this by computing the P­values P L s L H0 s s s L s0 L H0 s0 s0 s0 (8.2.10) when H0. Small values of (8.2.10) are evidence against H0 Of course, when then it is not clear which value of (8.2.10) to 0 for more than one value of use. It can be shown, however, that under conditions such as those discussed in Section 6.5, if s corresponds to a sample of n values from a distribution, then 2 ln L s L H0 s s s D 2 dim dim H0 as n dimensions of these sets. This leads us to a test that rejects H0 whenever whenever the true value of is in H0 Here, dim and dim H0 are the 2 ln L s L H0 s s s (8.2.11) is greater than a particular quantile of the 2 dim For example, suppose that in a location­scale normal model, we are testing H0 : 1 and, 2 dim H0 0 2 for a size 0.05 test, we reject whenever (8.2.11) is greater than 0 95 1 . Note that, strictly speaking, likelihood ratio tests are not derived via optimality considerations. We will not discuss likelihood ratio tests further in this text. dim H0 distribution. 0 Then dim R1 H0 [0 [0 Chapter 8: Optimal Inferences 457 Summary of Section 8.2 In searching for an optimal hypothesis testing procedure, we place an upper bound on the probability of making a type I error (rejecting H0 when it is true) and search for a test that minimizes the probability of making a type II error (accepting H0 when it is false). 
The Neyman–Pearson theorem prescribes an optimal size Ha each specify a single value for the full parameter Sometimes the Neyman–Pearson theorem leads to solutions to hypothesis test­ ing problems when the null or alternative hypotheses allow for more than one possible value for but in general we must resort to likelihood ratio tests for such problems. test when H0 and . EXERCISES 8.2.1 Suppose that a statistical model is given by the two distributions in the following table 12 1 6 4 s 5 12 1 12 fa s fb s b What is a versus Ha : Uniform[0 1] and reject H0 whenever U Determine the UMP size 0.10 test for testing H0 : the power of this test? Repeat this with the size equal to 0.05. 8.2.2 Suppose for the hypothesis testing problem of Exercise 8.2.1, a statistician de­ cides to generate U 0 05. Show that this test has size 0.05. Explain why this is not a good choice of test and why the test derived in Exercise 8.2.1 is better. Provide numerical evidence for this. 8.2.3 Suppose an investigator knows that an industrial process yields a response vari­ able that follows an N 1 2 distribution. Some changes have been made in the indus­ trial process, and the investigator believes that these have possibly made a change in the mean of the response (not the variance), increasing its value. The investigator wants the probability of a type I error occurring to be less than 1%. Determine an appropriate testing procedure for this problem based on a sample of size 10. 8.2.4 Suppose you have a sample of 20 from an N 0.975­confidence interval for and use it to test H0 : 0 is not in the confidence interval. (a) What is the size of this test? (b) Determine the power function of this test. 8.2.5 Suppose you have a sample of size n where value is greater than 1. (a) What is the size of this test? (b) Determine the power function of this test. R1 You use a 8.2.6 Suppose you are testing a null hypothesis H0 : size 0.05 testing procedure and accept H0 You feel you have a fairly large sample, but ] distribution, 1 by rejecting H0 whenever the sampled 1 distribution. You form a 0 by rejecting H0 whenever 0 is unknown. You test H0 : 1 from a Uniform[0 0, where 458 Section 8.2: Optimal Hypothesis Testing 0 2, you obtain a value of 0 10 where 0 2 represents when you compute the power at the smallest difference from 0 that is of practical importance. Do you believe it makes sense to conclude that the null hypothesis is true? Justify your conclusion. 8.2.7 Suppose you want to test the null hypothesis H0 : 1 distribution, where n from an N the power at 2 of the optimal size 0.05 test, is equal to 0.99? 8.2.8 Suppose we have available two different test procedures in a problem and these have the same power function. Explain why, from the point of view of optimal hypoth­ esis testing theory, we should not care which test is used. 8.2.9 Suppose you have a UMP size 0 based on a sample of 0 2 How large does n have to be so that for testing the hypothesis H0 : test 0, where is real­valued. Explain how the graph of the power function of another size test that was not UMP would differ from the graph of the power function of COMPUTER EXERCISES 8.2.10 Suppose you have a coin and you want to test the hypothesis that the coin is is the probability of getting a head 1 2 where fair, i.e., you want to test H0 : on a single toss. You decide to reject H0 using the rejection region R 0 1 7 8 based on n 0 1 8 2 8 8.2.11 On the same graph, plot the power functions for the two­sided z­test of H0 : 10 tosses. 
Tabulate the power function for this procedure for 7 8 1 0 for samples of sizes n 1 4 10 20 and 100 based on 0 05 (a) What do you observe about these graphs? (b) Explain how these graphs demonstrate the unbiasedness of this test. PROBLEMS 0 in (8.2.7) is UMP size 8.2.12 Prove that 8.2.13 Prove that the test function function. What is the interpretation of this test function? 8.2.14 Using the test function in Problem 8.2.13, show that a UMP size UMP unbiased size 8.2.15 Suppose that x1 xn is a sample from a Gamma for H0 : for every s test. s 0 0 is unknown. Determine the UMP size 0 is known and 1, where 1 0 Is this test UMP size test is also a distribution, where test for testing H0 : 0 for H0 : 0 versus Ha : S is an exact size 0. test 0 versus Ha : 0? versus Ha : 8.2.16 Suppose that x1 0 is known and 2 2 0 versus Ha : 2 2 H0 : H0 : 8.2.17 Suppose that x1 2 0 versus Ha : 2 Ha : Ha : 1, where 0 0? xn is a sample from an N 0 0 is unknown. Determine the UMP size 2 distribution, where test for testing for 2 1 Is this test UMP size 2 1 where 2 0 2 2 0? xn is a sample from a Uniform[0 test for testing H0 : for H0 : ] distribution, where 0 versus 0 versus 1 Is this test function UMP size 0 is unknown. Determine the UMP size Chapter 8: Optimal Inferences 459 8.2.18 Suppose that F is the distribution function for the Binomial n Then prove that distribution yx 1 y n x 1 dy n 1 This establishes a relationship between the binomial probability 0 1 for x distribution and the beta function. (Hint: Integration by parts.) 8.2.19 Suppose that F is the distribution function for the Poisson prove that distribution. Then F x 1 x! yx e y dy . This establishes a relationship between the Poisson probability 0 1 for x distribution and the gamma function. (Hint: Integration by parts.) xn 8.2.20 Suppose that x1 is a sample from a Poisson distribution, where 1, 0? 0 versus Ha : 0 versus Ha : 2 distribution, where likelihood test for H0 : for H0 : 0 is unknown. Determine the UMP size 2 0 xn is a sample from an N 1 Is this test function UMP size where 0 (Hint: You will need the result of Problem 8.2.19.) 8.2.21 Suppose that x1 R1 ratio test for testing H0 : 8.2.22 (Optimal confidence intervals) Suppose that for model UMP size pose further that each size (a) Prove that only takes values in 0 1 , i.e., each 0 versus H0 : test function. test function for H0 : 0 0 0 is unknown. Derive the form of the exact size 0 for each possible value of f 0 : we have a 0 Sup­ is a nonrandomized . Conclude that C s is a 1 ­confidence set for ­confidence set for . , then prove that the test function defined satisfies for every (b) If C is a 1 by for H0 : is size (c) Suppose that for each value H0 : 0 versus H0 : 0. 0 the test function 0 0. Then prove that P C s is UMP size for testing is minimized, when probability (8.2.12) is the probability of C containing the false value 0 among all 1 ­confidence sets for (8.2.12) . The and a 460 Section 8.3: Optimal Bayesian Inferences 1 u
niformly most accurate (UMA) 1 ­confidence region that minimizes this probability when ­confidence region for . 0 is called a CHALLENGES 8.2.23 Prove Corollary 8.2.1 in the discrete case. 8.3 Optimal Bayesian Inferences with density We now add the prior probability measure . As we will see, this completes the specification of an optimality problem, as now there is always a solution. Solutions to Bayesian optimization problems are known as Bayes rules. In Section 8.1, the unrestricted optimization problem was to find the estimator T The Bayesian that minimizes MSE T for each T 2 of E version of this problem is to minimize E MSE T E E T 2 . (8.3.1) By the theorem of total expectation (Theorem 3.5.2), (8.3.1) is the expected value of the squared error T s s induced by the (the sampling model), and by the marginal dis­ conditional distribution for s given ). Again, by the theorem of total expectation, tribution for we can write this as 2 under the joint distribution on (the prior distribution of E MSE T E M E s T 2 , (8.3.2) where conditional distribution of measure for s (the marginal distribution of s). s denotes the posterior probability measure for given the data s (the given s), and M denotes the prior predictive probability We have the following result. Theorem 8.3.1 When (8.3.1) is finite, a Bayes rule is given by T s E namely, the posterior expectation of s . PROOF First, consider the expected posterior squared error E s T s 2 of an estimate T s . By Theorem 8.1.1 this is minimized by taking T s equal to (note that the “random” quantity here is T s E s ). Then we have just shown that Now suppose that T is any estimator of Chapter 8: Optimal Inference Methods 461 and thus, E MSE MSE T . Therefore, T minimizes (8.3.1) and is a Bayes rule. So we see that, under mild conditions, the optimal Bayesian estimation problem always has a solution and there is no need to restrict ourselves to unbiased estimators, etc. For the hypothesis testing problem H0 : 0 we want to find the test that minimizes the prior probability of making an error (type I or type II). function Such a is a Bayes rule. We have the following result. Theorem 8.3.2 A Bayes rule for the hypothesis testing problem H0 : is given by 0 0 s 1 0 otherwise. s 0 s 0 : PROOF Consider test function and let I of the set otherwise). Observe that which is an error when I 1; 1 having observed s which is an error when I denote the indicator function 0 and equals 0 s is the probability of rejecting H0 having observed s s is the probability of accepting H0 0 Therefore, given s and 0 1 when 0 (so I 0 0 0 the probability of making an error is By the theorem of total expectation, the prior probability of making an error (taking the expectation of e s under the joint distribution of s ) is E M E e s s (8.3.3) As in the proof of Theorem 8.3.1, if we can find each s then also minimizes (8.3.3) and is a Bayes rule. that minimizes E e s s for Using Theorem 3.5.4 to pull s through the conditional expectation, and the fact that E s I A A s for any event A then Because s [0 1] we have min 462 Section 8.3: Optimal Bayesian Inferences Therefore, the minimum value of E e s s is attained by s 0 s . Observe that Theorem 8.3.2 says that the Bayes rule rejects H0 whenever the pos­ terior probability of the null hypothesis is less than or equal to the posterior probability of the alternative. This is an intuitively satisfying result. The following problem does arise with this approach, however. We have s 0 max : 8.3.4) 0 0 (8.3.4) implies that When 0 for every s. 
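A quick numerical check of this phenomenon may be helpful (a sketch only; the Bernoulli model, the Beta prior component, and all numbers below are hypothetical, in the spirit of Example 7.2.13). Under a purely continuous prior the posterior probability of the point null is 0, while a prior that puts an atom of mass p0 at theta0 gives a nondegenerate answer for the rule of Theorem 8.3.2 to work with.

import numpy as np
from scipy.stats import binom
from scipy.special import betaln, gammaln

n, y, theta0 = 20, 15, 0.5     # hypothetical: y successes in n Bernoulli trials
a, b, p0 = 1.0, 1.0, 0.5       # Beta(1, 1) continuous component, prior mass p0 on H0

# Under a continuous prior alone, Pi(theta = theta0 | data) = 0 for every data set,
# so the rule of Theorem 8.3.2 would always reject H0.

# With the mixture prior p0 * (point mass at theta0) + (1 - p0) * Beta(a, b):
like_H0 = binom.pmf(y, n, theta0)                          # f(y | theta0)
log_m1 = (gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
          + betaln(a + y, b + n - y) - betaln(a, b))       # beta-binomial predictive
post_H0 = p0 * like_H0 / (p0 * like_H0 + (1 - p0) * np.exp(log_m1))
print("posterior probability of H0:", post_H0)             # about 0.24, so H0 is rejected here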
Therefore, using the Bayes rule, we would always reject H0 no matter what data s are obtained, which does not seem sensible. As discussed in Section 7.2.3, we have to be careful to make sure we use a prior that assigns positive mass to H0 if we are going to use the optimal Bayes approach to a hypothesis testing problem. s 0 Summary of Section 8.3 Optimal Bayesian procedures are obtained by minimizing the expected perfor­ mance measure using the posterior distribution. In estimation problems, when using squared error as the performance measure, the posterior mean is optimal. In hypothesis testing problems, when minimizing the probability of making an error as the performance measure, then computing the posterior probability of the null hypothesis and accepting H0 when this is greater than 1/2 is optimal. EXERCISES 8.3.1 Suppose that S following table. We place a uniform prior on 1 2 3 and want to estimate 1 2 , with data distributions given by the f1 s f2 s 2 when s 2 is observed. Using a Bayes rule, test the hypothesis H0 : 8.3.2 For the situation described in Exercise 8.3.1, determine the Bayes rule estimator of when using expected squared error as our performance measure for estimators. 2 8.3.3 Suppose that we have a sample x1 0 distribution, using expected where squared error as our performance measure for estimators If we use the prior distrib­ 2 ution 0 , then determine the Bayes rule for this problem. Determine the limiting Bayes rule as 2 0 is known, and we want to estimate is unknown and from an N xn N . 0 Chapter 8: Optimal Inference Methods 463 0 Gamma 0 is known, and xn from a Bernoulli xn is a sample from a Gamma , then determine a Bayes rule for this problem. is completely unknown, and we want to estimate distribution, using expected squared 8.3.4 Suppose that we observe a sample x1 where error as our performance measure for estimators. If we use the prior distribution Beta 8.3.5 Suppose that x1 distribution, where 0 0 , where 0 and 0 are known. If we want to estimate using expected squared error as our performance measure for estimators, then determine the Bayes rule. Use the weak (or strong) law of large numbers to determine what this estimator converges to as n 8.3.6 For the situation described in Exercise 8.3.5, determine the Bayes rule for esti­ 1 when using expected squared error as our performance measure for esti­ mating mators. 2 0 distribution, 8.3.7 Suppose that we have a sample x1 where 0 that minimizes the prior probability of making an error (type I or type II). If we use the prior distribution 0 1 is known (i.e., p0 N 0 p0 I 2 0 distribution), the prior is a mixture of a distribution degenerate at then determine the Bayes rule for this problem. Determine the limiting Bayes rule as xn 0 is known, and we want to find the test of H0 : is unknown and 2 2 0 , where p0 0 and an N 0 from an N 1 . 0 . (Hint: Make use of the computations in Example 7.2.13.) is unknown, and we want to find the test of H0 : 0 distribution, 8.3.8 Suppose that we have a sample x1 0 that minimizes the where prior probability of making an error (type I or type II). If we use the prior distribution 0 1 is known (i.e., the prior is a 0 and a uniform distribution), then determine p0 Uniform[0 1], where p0 from a Bernoulli p0 I xn 1 0 mixture of a distribution degenerate at the Bayes rule for this problem. PROBLEMS 1 on 2 , that we put a prior . 
If the model is denoted f and that we want to esti­ Suppose our performance measure for estimators is the probability of making , then obtain the form of 8.3.9 Suppose that mate an incorrect choice of the Bayes rule when data s are observed. 8.3.10 For the situation described in Exercise 8.3.1, use the Bayes rule obtained via the 2. What advantage does this estimate method of Problem 8.3.9 to estimate when s have over that obtained in Exercise 8.3.2? 8.3.11 Suppose that x1 R1 2 distribution where is a sample from an N using expected squared is unknown, and want to estimate error as our performance measure for estimators. Using the prior distribution given by xn 0 2 : and using 2 N 0 2 0 2 , 1 2 Gamma 0 0 where 0 2 0 0 and 0 are fixed and known, then determine the Bayes rule for . 464 Section 8.4: Decision Theory (Advanced) 8.3.12 (Model selection) Generalize Problem 8.3.9 to the case 1 k . CHALLENGES 8.3.13 In Section 7.2.4, we described the Bayesian prediction problem. Using the notation found there, suppose we wish to predict t If we assess the accuracy of a predictor by R1 using a predictor then determine the prior predictor that minimizes this quantity (assume all relevant expectations are finite). If we observe s0 then determine the best predictor. (Hint: Assume all the probability measures are discrete.) 8.4 Decision Theory (Advanced) To determine an optimal inference, we chose a performance measure and then at­ tempted to find an inference, of a given type, that has optimal performance with respect to this measure. For example, when considering estimates of a real­valued character­ istic of interest , we took the performance measure to be MSE and then searched for the estimator that minimizes this for each value of Decision theory is closely related to the optimal approach to deriving inferences, but it is a little more specialized. In the decision framework, we take the point of view that, in any statistical problem, the statistician is faced with making a decision, e.g., deciding on a particular value for . Furthermore, associated with a decision is the notion of a loss incurred whenever the decision is incorrect. A decision rule is a procedure, based on the observed data s that the statistician uses to select a decision. The decision problem is then to find a decision rule that minimizes the average loss incurred. There are a number of real­world contexts in which losses are an obvious part of the problem, e.g., the monetary losses associated with various insurance plans that an insurance company may consider offering. So the decision theory approach has many applications. It is clear in many practical problems, however, that losses (as well as performance measures) are somewhat arbitrary components of a statistical problem, often chosen simply for convenience. In such circumstances, the approaches to deriv­ ing inferences described in Chapters 6 and 7 are preferred by many statisticians. So the decision theory model for inference adds another ingredient to the sampling model (or to the samplin
g model and prior) to derive inferences — the loss function. To formalize this, we conceive of a set of possible actions or decisions that the statistician could take after observing the data s. This set of possible actions is denoted by and is called the action space. To connect these actions with the statistical model, there is the correct action to take is a correct action function A : we when do not know the correct action A so there is uncertainty involved in our decision. Consider a simple example. is the true value of the parameter. Of course, because we do not know such that A Chapter 8: Optimal Inference Methods 465 EXAMPLE 8.4.1 Suppose you are told that an urn containing 100 balls has either 50 white and 50 black balls or 60 white and 40 black balls. Five balls are drawn from the urn without replace­ ment and their colors are observed. The statistician’s job is to make a decision about the true proportion of white balls in the urn based on these data. The statistical model then comprises two distributions P1 P2 where, using para­ meter space 1 2 P1 is the Hypergeometric 100 50 5 distribution (see Exam­ ple 2.3.7) and P2 is the Hypergeometric 100 60 5 distribution. The action space is 0 6 The data are 0 5 0 6 , and A : is given by A 1 0 5 and A 2 given by the colors of the five balls drawn. We suppose now that there is also a loss or penalty L a incurred when we select and is true. If we select the correct action, then the loss is 0; it is greater action a than 0 otherwise. Definition 8.4.1 A loss function is a function L defined on in [0 0 if and only if a such that L A a and taking values Sometimes the loss can be an actual monetary loss. Actually, decision theory is a little more general than what we have just described, as we can allow for negative losses (gains or profits), but the restriction to nonnegative losses is suitable for purely statistical applications. In a specific problem, the statistician chooses a loss function that is believed to lead to reasonable statistical procedures. This choice is dependent on the particular application. Consider some examples. EXAMPLE 8.4.2 (Example 8.4.1 continued) Perhaps a sensible choice in this problem would be otherwise. Here we have decided that selecting a error than selecting a cally, then we could take 0 5 when it is not correct is a more serious 0 6 when it is not correct. If we want to treat errors symmetri.e., the losses are 1 or 0. EXAMPLE 8.4.3 Estimation as a Decision Problem Suppose we have a marginal parameter estimate T s after observing s and A Naturally, we want T s S. Here, the action space is For example, suppose x1 xn is a sample from an N 2 where this case, R1 R is unknown, and we want to estimate R1 and a possible estimator is the sample average T x1 : 2 distribution, In x xn 2 of interest, and we want to specify an 466 Section 8.4: Decision Theory (Advanced) There are many possible choices for the loss function. Perhaps a natural choice is to use L a a the absolute deviation between and a Alternatively, it is common to use L a a 2 (8.4.1) (8.4.2) the squared deviations between and a We refer to (8.4.2) as squared error loss. Notice that (8.4.2) is just the square of the Euclidean distance between and a It might seem more natural to actually use the distance (8.4.1) as the loss function. It turns out, however, that there are a number of mathematical conveniences that arise from using squared distance. 
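To make the comparison between the losses (8.4.1) and (8.4.2) concrete, the following sketch (not from the text; the function name, sample sizes, and Monte Carlo settings are illustrative choices) estimates the risk of the sample average under both loss functions by simulation. Under squared error loss the risk of the sample average is exactly σ²/n for every μ, so the simulated value should be close to that.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_of_sample_mean(mu, sigma=1.0, n=10, loss="squared", nsim=100_000):
    """Monte Carlo estimate of the risk of T(x) = x-bar under the chosen loss,
    for samples of size n from N(mu, sigma^2)."""
    samples = rng.normal(mu, sigma, size=(nsim, n))
    t = samples.mean(axis=1)                  # the estimator T(x) = x-bar
    if loss == "squared":
        losses = (t - mu) ** 2                # L(mu, a) = (mu - a)^2, as in (8.4.2)
    else:
        losses = np.abs(t - mu)               # L(mu, a) = |mu - a|, as in (8.4.1)
    return losses.mean()

# Under squared error loss the exact risk is sigma^2 / n = 0.1 here,
# regardless of mu; the absolute-error risk is somewhat larger.
print(risk_of_sample_mean(mu=2.0, sigma=1.0, n=10, loss="squared"))
print(risk_of_sample_mean(mu=2.0, sigma=1.0, n=10, loss="absolute"))
```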
EXAMPLE 8.4.4 Hypothesis Testing as a Decision Problem In this problem, we have a characteristic of interest sibility of the value written as H0 : as the null hypothesis and to Ha as the alternative hypothesis. and want to assess the plau­ 0 after viewing the data s In a hypothesis testing problem, this is 0. As in Section 8.2, we refer to H0 0 versus Ha : The purpose of a hypothesis testing procedure is to decide which of H0 or Ha is H0 Ha true based on the observed data s So in this problem, the action space is and the correct action function is A H0 Ha 0 0 1 . We write H0 An alternative, and useful, way of thinking of the two hypotheses is as subsets of values that make the null hypothesis values that make the null hypothesis false. true, and Ha Then, based on the data s we want to decide if the true value of is in Ha If H0 (or Ha) is composed of a single point, then it is called a simple hypothesis or a point hypothesis; otherwise, it is referred to as a composite hypothesis. 0 is the subset of all 0 as the subset of all is in H0 or if H c 2 For example, suppose that x1 R1 R 0 versus the alternative Ha : R For the same model, let c where H0 : Ha 0 xn is a sample from an N 2 distribution and we want to test the null hypothesis R and 0 Then H0 0 I 0] R 2 is the indicator function for the subset 1 i.e., versus the alternative Ha : 0 is equivalent to testing that the mean is less than or equal to 0 versus the alternative that it is greater than 0 This one­sided hypothesis testing problem is often denoted as H0 : 0] R Then testing H0 : 0 versus Ha : 0 There are a number of possible choices for the loss function, but the most com­ monly used is of the form L a 0 b c H0 a H0 a Ha a H0 or H0 Ha Ha a Ha Chapter 8: Optimal Inference Methods 467 If we reject H0 when H0 is true (a type I error), we incur a loss of c; if we accept H0 when H0 is false (a type II error), we incur a loss of b. When b c, we can take b 1 and produce the commonly used 0–1 loss function. c A statistician faced with a decision problem — i.e., a model, action space, correct action function, and loss function — must now select a rule for choosing an element of the action space when the data s are observed. A decision function is a procedure that specifies how an action is to be selected in the action space Definition 8.4.2 A nonrandomized decision function d is a function d : S So after observing s we decide that the appropriate action is d s Actually, we will allow our decision procedures to be a little more general than this, as we permit a random choice of an action after observing s. Definition 8.4.3 A decision function on the action space ) taken is in A for each s is such that is a probability measure s A is the probability that the action s S (so Operationally, after observing s a random mechanism with distribution specified by is used to select the action from the set of possible actions. Notice that if s s is a probability measure degenerate at the point d s (so s then Problem 8.4.8). 1) for each is equivalent to the nonrandomized decision function d and conversely (see s d s The use of randomized decision procedures may seem rather unnatural, but, as we will see, sometimes they are an essential ingredient of decision theory. In many estimation problems, the use of randomized procedures provides no advantage, but this is not the case in hypothesis testing problems. We let D denote the set of all decision functions for the specific problem of interest. 
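As a concrete illustration of Definitions 8.4.2 and 8.4.3, a randomized decision function for a finite problem can be stored as a table giving, for each possible data value s, a probability distribution over the action space. The sketch below is not from the text; the sample space, actions, and probabilities are invented for illustration. A nonrandomized decision function is the special case in which every row of the table is degenerate at a single action.

```python
import numpy as np

rng = np.random.default_rng(1)

# Action space and a randomized decision function delta: for each observed s,
# delta[s] is a probability distribution over the actions.
actions = ["a1", "a2"]
delta = {
    1: [0.25, 0.75],   # after observing s = 1, take a2 with probability 3/4
    2: [0.80, 0.20],
    3: [1.00, 0.00],   # degenerate row: equivalent to the nonrandomized d(3) = a1
}

def decide(s):
    """Select an action at random according to the distribution delta(s, .)."""
    return rng.choice(actions, p=delta[s])

print(decide(1))
```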
The decision problem is to choose a decision function to make the loss as small as possible. For a particular D. The selected will then be used to generate decisions in applications. We base this choice on how the perform with respect to the loss function. Intuitively, we various decision functions want to choose , because s a is a random quantity. Therefore, rather and a f than minimizing specific losses, we speak instead about minimizing some aspect of the Perhaps a reasonable choice is to minimize distribution of the losses for each the average loss. Accordingly, we define the risk function associated with D as the average loss incurred by The risk function plays a central role in determining an appropriate decision function for a problem. , the loss L s Definition 8.4.4 The risk function associated with decision function is given by R E E s L a (8.4.3) Notice that to calculate the risk function we first calculate the average of L a based on s fixed and a to s f . By the theorem of total expectation, this is the average loss. When Then we average this conditional average with respect is s s 468 Section 8.4: Decision Theory (Advanced) degenerate at d s for each s then (8.4.3) simplifies (see Problem 8.4.8) to R E L d s Consider the following examples. EXAMPLE 8.4.5 Suppose that S table. 1 2 3 1 2 , and the distributions are given by the following f1 s f2 s Further suppose that when A a but is 0 otherwise. and the loss function is given by L a 1 Now consider the decision function specified by the following table So when we observe s and choose the action a does the sensible thing and selects the decision a know unequivocally that 1 we randomly choose the action a 1 with probability 1/4 2 with probability 3/4, etc. Notice that this decision function 3 as we 1 when we observe s 1 in this case. We have so the risk function of is then given by R 1 and R 2 L 1 1 L 1 2 E1 E s 1 1 3 4 3 12 3 12 E2 0L 2 1 Chapter 8: Optimal Inference Methods 469 EXAMPLE 8.4.6 Estimation We will restrict our attention to nonrandomized decision functions and note that these are also called estimators. The risk function associated with estimator T and loss func­ tion (8.4.1) is given by RT E T and is called the mean absolute deviation (MAD). The risk function associated with the estimator T and loss function (8.4.2) is given by and is called the MSE. RT E T 2 We want to choose the estimator T to minimize RT Note that, when using (8.4.2), this decision problem is exactly the same as the optimal estimation problem discussed in Section 8.1. for every EXAMPLE 8.4.7 Hypothesis Testing for this problem, and a data value s We note that for a given decision function the distribution s Ha , which is the probability of rejecting H0 when s has been observed. This is because the probability measure is concentrated on two points, so we need only give its value at one of these to and observe that a completely specify it. We call decision function for this problem is also specified by a test function the test function associated with is characterized by s s s We have imm
ediately that E s L a 1 s L H0 s L Ha (8.4.4) Therefore, when using the 0–1 loss function, R E L 1 H0 s L E H0 L s E 1 s E s s L Ha L H0 Ha H0 Ha Recall that in Section 6.3.6, we introduced the power function associated with a hypothesis assessment procedure that rejected H0 whenever the P­value was smaller is the probability that than some prescribed value. The power function, evaluated at such a procedure rejects H0 when s is the conditional probability, given s that H0 is rejected, the theorem of total expectation implies that E is the true value. So in general, we refer to the function equals the unconditional probability that we reject H0 when is the true value. Because s E s as the power function of the decision procedure or, equivalently, as the power function of the test function Therefore, minimizing the risk function in this case is equivalent to choosing to minimize Ha Ac­ cordingly, this decision problem is exactly the same as the optimal inference problem discussed in Section 8.2. H0 and to maximize for every for every 470 Section 8.4: Decision Theory (Advanced) Once we have written down all the ingredients for a decision problem, it is then clear what form a solution to the problem will take. In particular, any decision function 0 that satisfies R 0 R and If for every two decision functions have the same risk functions, then, from the point of view of decision theory, they are equivalent. So it is conceivable that there might be more than one solution to a decision problem. D is an optimal decision function and is a solution. Actually, it turns out that an optimal decision function exists only in extremely unrealistic cases, namely, the data always tell us categorically what the correct decision is (see Problem 8.4.9). We do not really need statistical inference for such situations. For example, suppose we have two coins — coin A has two heads and coin B has two tails. As soon as we observe an outcome from a coin toss, we know exactly which coin was tossed and there is no need for statistical inference. Still, we can identify some decision rules that we do not want to use. For example, and if then naturally we strictly prefer 0 to D is such that there exists 0 D satisfying R 0 if there is at least one for every for which R 0 R R Definition 8.4.5 A decision function is said to be admissible if there is no 0 that is strictly preferred to it. A consequence of decision theory is that we should use only admissible decision functions. Still, there are many admissible decision functions and typically none is optimal. Furthermore, a procedure that is only admissible may be a very poor choice (see Challenge 8.4.11). There are several routes out of this impasse for decision theory. One approach is to use reduction principles. By this we mean that we look for an optimal decision D that is considered appropriate. So we then look for function in some subclass D0 D0 i.e., we look for an R a 0 D0 such that R 0 optimal decision function in D0. Consider the following example. for every and EXAMPLE 8.4.8 Size Tests for Hypothesis Testing Consider a hypothesis testing problem H0 versus Ha Recall that in Section 8.2, we H0 restricted attention to those test functions test function for this problem. So in this case, we are Such a restricting to the class D0 of all decision functions for this problem, which correspond to size is called a size test functions. 
that satisfy E for every In Section 8.2, we showed that sometimes there is an optimal D0 For example, when H0 and Ha are simple, the Neyman–Pearson theorem (Theorem 8.2.1) provides is optimal. We also showed in an optimal Section 8.2, however, that in general there is no optimal size and so test function there is no optimal D0 In this case, further reduction principles are necessary. defined by s Ha thus, s Another approach to selecting a valued characteristic of the risk function of on that. There are several possibilities. D is based on choosing one particular real­ and ordering the decision functions based Chapter 8: Optimal Inference Methods 471 One way is to introduce a prior into the problem and then look for the decision procedure D that has smallest prior risk r E R We then look for a rule that has prior risk equal to min D r (or inf D r ) This ap­ proach is called Bayesian decision theory. Definition 8.4.6 The quantity r is called the prior risk of , min D r is called the Bayes risk, and a rule with prior risk equal to the Bayes risk is called a Bayes rule. We derived Bayes rules for several problems in Section 8.3. Interestingly, Bayesian decision theory always effectively produces an answer to a decision problem. This is a very desirable property for any theory of statistics. Another way to order decision functions uses the maximum (or supremum) risk. So for a decision function we calculate max R (or sup the smallest, largest risk or the smallest, worst behavior. ) and then select a R D that minimizes this quantity. Such a has Definition 8.4.7 A decision function 0 satisfying max R 0 max R min D (8.1) is called a minimax decision function. Again, this approach will always effectively produce an answer to a decision problem (see Problem 8.4.10). Much more can be said about decision theory than this brief introduction to the basic concepts. Many interesting, general results have been established for the decision theoretic approach to statistical inference. Summary of Section 8.4 The decision theoretic approach to statistical inference introduces an action space and a loss function L s on . The prescribes a probability distribution using this distribution after observing s for this, the risk function R A decision function statistician generates a decision in The problem in decision theory is to select is used. The value R function Typically, no optimal decision function various re­ duction criteria are used to reduce the class of possible decision functions, or the decision functions are ordered using some real­valued characteristic of their risk functions, e.g., maximum risk or average risk with respect to some prior. is the average loss incurred when using the decision and the goal is to minimize risk. exists. So, to select a 472 Section 8.4: Decision Theory (Advanced) EXERCISES xn xn from a Bernoulli 10. distribution, where is completely unknown, and we want to estimate using squared error loss. Write out x x. Graph the risk function when n xn from a Poisson distribution, 8.4.1 Suppose we observe a sample x1 where using squared error loss. Write out all the ingredients of this decision problem. Calculate the risk function of the estimator T x1 8.4.2 Suppose we have a sample x1 is completely unknown, and we want to estimate all the ingredients of this decision problem. Consider the estimator T x1 and calculate its risk function. Graph the risk function when n 8.4.3 Suppose we have a sample x1 is unknown and 2 out all the ingredients of this decision problem. 
Consider the estimator T x1 2 x and calculate its risk function. Graph the risk function when n 0 distribution, 8.4.4 Suppose we observe a sample x1 where 1 2 is completely unknown, and we want to test the null hypothesis that versus the alternative that it is not equal to this quantity, and we use 0­1 loss. Write out all the ingredients of this decision problem. Suppose we reject the null hypothesis whenever we observe nx 1 n . Determine the form of the test function 0 1 n and its associated power function. Graph the power function when n 8.4.5 Consider the decision problem with sample space S space table. 1 2 3 4 , parameter a b , with the parameter indexing the distributions given in the following 0 is known, and we want to estimate 25 from a Bernoulli 2 0 distribution, where using squared error loss. Write xn from an N 10. 25. xn xn xn 2 fa s fb s a 1 when a Suppose that the action space by L (a) Calculate the risk function of the deterministic decision function given by d 1 d 2 (b) Is d in part (a) optimal? and is equal to 0 otherwise. a and d 4 with A and the loss function is given d 3 A b COMPUTER EXERCISES xn n 0 from a Poisson 8.4.6 Suppose we have a sample x1 is completely unknown, and we want to test the hypothesis that distribution, where 0 versus the alternative that 0 using the 0–1 loss function. Write out all the ingredients of this decision problem. Suppose we decide to reject the null hypothesis whenever 2 n 0 and randomly reject the null hypothesis with probability 1/2 nx when nx 2 n 0 Determine the form of the test function and its associated power function. Graph the power function when 0 8.4.7 Suppose we have a sample x1 5. 2 0 distribution, where from an N 0 is known, and we want to test the null hypothesis that the mean response is 0 versus the alternative that the mean response is not equal to 0 using the 0–1 loss function. Write out all the ingredients of this decision problem. Suppose is unknown and 2 1 and n n 0 xn Chapter 8: Optimal Inference Methods 473 n]. Determine the that we decide to reject whenever x form of the test function and its associated power function. Graph the power function when 0 3 and n 2 0 2 0 [ 0 10 0 n 0 0 PROBLEMS s d s E L prove that R degen­ and that gives a probability measure S is equivalent to specifying a function d : S 8.4.8 Prove that a decision function erate at d s for each s conversely. For such a 8.4.9 Suppose we have a decision problem and that each probability distribution in the model is discrete. (a) Prove that for which P s (b) Prove that if there exist such that A 1 concentrated on disjoint sets, then there is no optimal 8.4.10 If decision function minimax. has constant risk and is admissible, then prove that is optimal in D if and only if and P 1 P 2 are not is degenerate at A A 2 D for each s is 0 s 1 2 . CHALLENGES 8.4.11 Suppose we have a decision problem in which 0 0 Further assume that there is no optimal 0 for every implies that P C decision function (see Problem 8.4.9). Then prove that the nonrandomized decision function d give
n by d s A 0 is admissible. What does this result tell you about the concept of admissibility? is such that P 0 C DISCUSSION TOPICS 8.4.12 Comment on the following statement: A natural requirement for any theory of inference is that it produce an answer for every inference problem posed. Have we discussed any theories so far that you believe will satisfy this? 8.4.13 Decision theory produces a decision in a given problem. It says nothing about how likely it is that the decision is in error. Some statisticians argue that a valid ap­ proach to inference must include some quantification of our uncertainty concerning any statement we make about an unknown, as only then can a recipient judge the reliability of the inference. Comment on this. 8.5 Further Proofs (Advanced) Proof of Theorem 8.1.2 We want to show that a statistic U is sufficient for a model if and only if the conditional distribution of the data s given U u is the same for every We prove this in the discrete case so that f s P s . The general case re­ quires more mathematics, and we leave that to a further course. 474 Section 8.5: Further Proofs (Advanced) Let u be such that P U 1 u is the set of values of s such that U s 0 where U 1 u u We have s : U s u so U 1 u P s s1 U u P s s1 U u P U u (8.5.1) Whenever s1 U 1 u , P s s1 U u P s1 s : U s u P 0 independently of Therefore, P s s1 U u 0 independently of So let us suppose that s1 U 1 u Then P s s1 U u P s1 s : U s u P s1 f s1 If U is a sufficient statistic, the factorization theorem (Theorem 6.1.1) implies f h s g U s for some h and g. Therefore, since 8.5.1) equals s1 f s U 1 u f s s1 f s U 1 u c s s1 f s1 1 s U 1 u c s s1 where f f s s1 h s h s1 c s s1 . We conclude that (8.5.1) is independent of Conversely, if (8.5.1) is independent of then for s1 s2 U 1 u we have P U u P s s2 s2 U u . P s Thus where f s1 P s s1 P s P s s1 U u P s P s s1 U u s2 U u P s s1 U u P U u s2 s2 U u P s f s2 c s1 s2 f s2 , c s1 s2 P s P s s1 U u s2 U u . By the definition of sufficiency in Section 6.1.1, this establishes the sufficiency of U Chapter 8: Optimal Inference Methods 475 Establishing the Completeness of x in Example 8.1.3 Suppose that x1 is unknown and 2 0 sufficient statistic. R1 xn is a sample from an N 0 is known. In Example 6.1.7, we showed that x is a minimal 2 0 distribution, where Suppose that the function h is such that E h x 0 for every R1 Then defining h x max 0 h x and h x max 0 h x we have h x h x h x . Therefore, setting c E h X and c E h X , we must have and so c c c . Because h and h are nonnegative functions, we have that 0 and c 0 If c 0 then we have that h 0 with probability 1, because a non­ x negative function has mean 0 if and only if it is 0 with probability 1 (see Challenge 3.3.22). Then h 0 with probability 1. If c 0 with probability 1 also, and we conclude that h x 0 then h 0 for all x in a set A having positive probability 0 with probability 1, x 2 0 n distribution (otherwise h x x 0). This implies that c 0 for every 2 0 n distribution assigns positive probability to A as well (you 0 with respect to the N 0 which implies, as above, that c because every N can think of A as a subinterval of R1). 0 Now note that g x h x 1 2 0 exp nx 2 2 2 0 is nonnegative and is strictly positive on A. We can write c E h X h x 1 2 exp n 2 2 2 0 exp n x 0 2 0 g exp n x 2 2 2 0 dx x dx (8.5.2) Setting every 0 establishes that 0 g x dx because 0 c for Therefore, g g x x dx is a probability density of a distribution concentrated on A 0 . 
Fur­ thermore, using (8.5.2) and the definition of moment­generating function in Section 3.4, x : h x c exp n 2 2 2 0 g x dx (8.5.3) 476 Section 8.5: Further Proofs (Advanced) is the moment­generating function of this distribution evaluated at n Similarly, we define 2 0 so that g x h x 1 2 0 exp nx 2 2 2 0 g g x x dx is a probability density of a distribution concentrated on A x : h x 0 Also, c exp n 2 2 2 0 g x dx (8.5.4) is the moment­generating function of this distribution evaluated at n Because c c we have that (setting 0) 2 0. g x dx g x dx This implies that (8.5.3) equals (8.5.4) for every and so the moment­generating functions of these two distributions are the same everywhere. By Theorem 3.4.6, these distributions must be the same. But this is impossible, as the distribution given by g is concentrated on A whereas the distribution given by g is concentrated on A and A 0 and we are done. Accordingly, we conclude that we cannot have c A The Proof of Theorem 8.2.1 (the Neyman–Pearson Theorem) We want to prove that when exact size test function 0 exists of the form 0 1 and we want to test H0 : 0 then an c0 c0 c0 (8.5.5) for some [0 1] and c0 0 and this test is UMP size We develop the proof of this result in the discrete case. The proof in the more general context is similar. First, we note that s : 0 has P measure equal to 0 for both 1 Accordingly, without loss we can remove this set from the sample space and assume hereafter that f 0 s and f 1 s cannot be simultaneously 0. Therefore, the ratio f 1 s f 0 s is always defined. 0 and f 0 s f 1 s 1 Then setting c 1 Therefore, 0 and 0 is UMP size 1 in (8.5.5), we see that because no test can have power 0 s Suppose that 1 and so E 1 greater than 1 0 Chapter 8: Optimal Inference Methods 477 0 Setting c0 0 (if f 0 s Suppose that and only if f 0 s is the indicator function for the set A Further, any size 0 test function must be 0 on Ac to have E 0 0 s and so E 1 that 0 and 0 then in (8.5.5), we see that f 0 s 0 s 0 if and conversely). So 0 0 0 On A we have 0 0 and therefore E 0 0 Therefore, 1. Consider the distribution function of the likelihood 0 is UMP size Now assume that 0 ratio when 0, namely So 1 c is a nondecreasing function of c with 1 0 and 1 Let c0 be the smallest value of c such that 1 1 c (recall that 1 is right continuous because it is a distribution function). Then we have that 1 0 in a distribution function at a point equals the probability of the point) lim 0 c0 and (using the fact that the jump c0 1 1 1 1 c c0 P 0 f 1 s f 0 s c0 1 c0 c0 0 1 c0 c0 0 Using this value of c0 in (8.5.5), put c0 c0 c0 0 c0 0 c0 0 otherwise, and note that [0 1] Then we have E 0 0 P 0 f 1 s c0 f 0 s c0 c0 P 0 f 1 s f 0 s c0 so 0 has exact size Now suppose that is another size test and E 1 E 1 0 We partition the sample space as S S0 S1 S2 where S0 S1 S2 Note that S1 because f 1 s 0 as c0 implies s 0 s 1 Also 0 f 1 s f 0 s c0 1 which implies 0 s s 1 S2 because f 1 s s 0 as 0 f 0 s 1 c0 implies 0 s 0 which implies c0 0 s s s 478 Section 8.5: Further Proofs (Advanced) Therefore, 0 0 E 1 E 1 IS1 IS2 s 0 s s Now note that E 1 IS1 S1 c0 s S1 0 s s f 0 s c0 E 0 IS1 s 0 s s because that 0 s s 0 and f 1 s f 0 s c0 when s S1 Similarly, we have E 1 IS2 S2 c0 s S2 0 s s f 0 s c0 E 0 IS2 s 0 s s because 0 s f 0 s Combining these inequalities, we obtain 0 and f 1 s s c0 when s S2 0 E 1 0 c0 E 0 E 1 0 E 0 c0 E 0 c0 0 E 0 0 because E 0 among all size 0 Therefore, E 1 0 E 1 which proves that 0 is UMP tests. 
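For a finite sample space, the cutoff c0 and the randomization probability γ appearing in (8.5.5) can be computed directly from the two probability functions. The following sketch is not from the text; the helper name and the two-point model at the end are invented for illustration, and it assumes f_{θ0}(s) > 0 for every s, as arranged at the start of the proof. By construction the resulting test has exact size α.

```python
import numpy as np

def neyman_pearson_test(f0, f1, alpha):
    """Construct the size-alpha likelihood-ratio test phi_0 of (8.5.5) for a
    finite sample space.  f0, f1 are the probability functions under theta_0
    and theta_1; returns (c0, gamma, phi) with phi[s] = prob. of rejecting H0."""
    f0 = np.asarray(f0, float)
    f1 = np.asarray(f1, float)
    ratio = f1 / f0                          # assumes f0[s] > 0 for all s
    # Smallest attained value c0 with P_0(ratio > c0) <= alpha.
    for c0 in np.unique(ratio):
        if f0[ratio > c0].sum() <= alpha:
            break
    p_gt = f0[ratio > c0].sum()              # P_0(ratio > c0)
    p_eq = f0[ratio == c0].sum()             # P_0(ratio = c0)
    gamma = 0.0 if p_eq == 0 else (alpha - p_gt) / p_eq
    phi = np.where(ratio > c0, 1.0, np.where(ratio == c0, gamma, 0.0))
    return c0, gamma, phi

# A toy two-point model on S = {1, 2, 3}.
f0 = [0.5, 0.3, 0.2]
f1 = [0.2, 0.3, 0.5]
c0, gamma, phi = neyman_pearson_test(f0, f1, alpha=0.10)
print(c0, gamma, phi)        # exact size: (phi * np.array(f0)).sum() equals 0.10
```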
Chapter 9

Model Checking

CHAPTER OUTLINE
Section 1 Checking the Sampling Model
Section 2 Checking for Prior–Data Conflict
Section 3 The Problem with Multiple Checks

The statistical inference methods developed in Chapters 6 through 8 all depend on various assumptions. For example, in Chapter 6 we assumed that the data s were generated from a distribution in the statistical model {P_θ : θ ∈ Ω}. In Chapter 7, we also assumed that our uncertainty concerning the true value of the model parameter θ could be described by a prior probability distribution. As such, any inferences drawn are of questionable validity if these assumptions do not make sense in a particular application.

In fact, all statistical methodology is based on assumptions or choices made by the statistical analyst, and these must be checked if we want to feel confident that our inferences are relevant. We refer to the process of checking these assumptions as model checking, the topic of this chapter. Obviously, this is of enormous importance in applications of statistics, and good statistical practice demands that effective model checking be carried out. Methods range from fairly informal graphical methods to more elaborate hypothesis assessment, and we will discuss a number of these.

9.1 Checking the Sampling Model

Frequency-based inference methods start with a statistical model {f_θ : θ ∈ Ω} for the true distribution that generated the data s. This means we are assuming that the true distribution for the observed data is in this set. If this assumption is not true, then it seems reasonable to question the relevance of any subsequent inferences we make about θ. Except in relatively rare circumstances, we can never know categorically that a model is correct. The most we can hope for is that we can assess whether or not the observed data s could plausibly have arisen from the model.

If the observed data are surprising for each distribution in the model, then we have evidence that the model is incorrect. This leads us to think in terms of computing a P-value to check the correctness of the model. Of course, in this situation the null hypothesis is that the model is correct; the alternative is that the model could be any of the other possible models for the type of data we are dealing with.

We recall now our discussion of P-values in Chapter 6, where we distinguished between practical significance and statistical significance. It was noted that, while a P-value may indicate that a null hypothesis is false, in practical terms the deviation from the null hypothesis may be so small as to be immaterial for the application. When the sample size gets large, it is inevitable that any reasonable approach via P-values will detect such a deviation and indicate that the null hypothesis is false. This is also true when we are carrying out model checking using P-values. The resolution of this is to estimate, in some fashion, the size of the deviation of the model from correctness, and so deter-
mine whether or not the model will be adequate for the application. Even if we ultimately accept the use of the model, it is still valuable to know, however, that we have detected evidence of model incorrectness when this is the case. One P­value approach to model checking entails specifying a discrepancy statistic R1 that measures deviations from the model under consideration. Typically, D : S large values of D are meant to indicate that a deviation has occurred. The actual value D s is, of course, not necessarily an indication of this. The relevant issue is whether or not the observed value D s is surprising under the assumption that the model is cor­ rect. Therefore, we must assess whether or not D s lies in a region of low probability for the distribution of this quantity when the model is correct. For example, consider the density of a potential D statistic plotted in Figure 9.1.1. Here a value D s in the left tail (near 0), right tail (out past 15), or between the two modes (in the interval from about 7 to 9) all would indicate that the model is incorrect, because such values have a low probability of occurrence when the model is correct. 0.3 0.2 0.1 0.0 0 2 4 6 8 10 12 14 16 18 20 D Figure 9.1.1: Plot of a density for a discrepancy statistic D Chapter 9: Model Checking 481 The above discussion places the restriction that, when the model is correct, D must have a single distribution, i.e., the distribution cannot depend on . For many com­ monly used discrepancy statistics, this distribution is unimodal. A value in the right tail then indicates a lack of fit, or underfitting, by the model (the discrepancies are unnaturally large); a value in the left tail then indicates overfitting by the model (the discrepancies are unnaturally small). There are two general methods available for obtaining a single distribution for the computation of P­values. One method requires that D be ancillary. Definition 9.1.1 A statistic D whose distribution under the model does not depend upon P , then D s has the same distribution for every is called ancillary, i.e., if s . If D is ancillary, then it has a single distribution specified by the model. If D s is a surprising value for this distribution, then we have evidence against the model being true. It is not the case that any ancillary D will serve as a useful discrepancy statistic. For example, if D is a constant, then it is ancillary, but it is obviously not useful for model checking. So we have to be careful in choosing D. Quite often we can find useful ancillary statistics for a model by looking at resid­ uals. Loosely speaking, residuals are based on the information in the data that is left over after we have fit the model. If we have used all the relevant information in the data for fitting, then the residuals should contain no useful information for inference about the parameter . Example 9.1.1 will illustrate more clearly what we mean by residuals. Residuals play a major role in model checking. The second method works with any discrepancy statistic D. For this, we use the conditional distribution of D given the value of a sufficient statistic T . By Theorem 8.1.2, this conditional distribution is the same for every value of . If D s is a surpris­ ing value for this distribution, then we have evidence against the model being true. Sometimes the two approaches we have just described agree, but not always. Con­ sider some examples. EXAMPLE 9.1.1 Location Normal Suppose we assume that x1 R1 is unknown and 2 2 0 distribution, where 0 is known. 
We know that x is a minimal sufficient statistic for this problem (see Example 6.1.7). Also, x represents the fitting of the model to the data, as it is the estimate of the unknown parameter value xn is a sample from an N Now consider r r x1 xn r1 rn x1 x xn x as one possible definition of the residual. Note that we can reconstruct the original data from the values of x and r. It turns out that R with E Ri Xn X has a distribution that is independent of 0 and Cov Ri R j 1 n for every i X1 X i j 2 0 j and 0 otherwise). Moreover, R is independent of X and Ri (see Problems 9.1.19 and 9.1.20). j ( i j N 0 1 when i 2 1 n 0 1 482 Section 9.1: Checking the Sampling Model Accordingly, we have that r is ancillary and so is any discrepancy statistic D that depends on the data only through r . Furthermore, the conditional distribution of D R given X x is the same as the marginal distribution of D R because they are inde­ pendent. Therefore, the two approaches to obtaining a P­value agree here, whenever the discrepancy statistic depends on the data only through r By Theorem 4.6.6, we have that D R 1 2 0 n i 1 R2 i 1 2 0 n i 1 Xi 2 X is distributed value 2 n 1 , so this is a possible discrepancy statistic Therefore, the P­ P D D r (9.1.1) 2 n where D 1 , provides an assessment of whether or not the model is correct. Note that values of (9.1.1) near 0 or near 1 are both evidence against the model, as both indicate that D r is in a region of low probability when assuming the model is correct. A value near 0 indicates that D r is in the right tail, whereas a value near 1 indicates that D r is in the left tail. The necessity of examining the left tail of the distribution of D r as well as the right, is seen as follows. Consider the situation where we are in fact sampling from an 2 is much smaller than 2 0 In this case, we expect D r N 1 to be a value in the left tail, because E D R 2 distribution where n 2 There are obviously many other choices that could be made for the D statistic At present, there is not a theory that prescribes one choice over another. One caution should be noted, however. The choice of a statistic D cannot be based upon looking at the data first. Doing so invalidates the computation of the P­value as described above, as then we must condition on the data feature that led us to choose that particular D. 2 0 EXAMPLE 9.1.2 Location­Scale Normal Suppose we assume that x1 R1 0 2 xn is a sample from an N is unknown. We know that x s2 2 distribution, where is a minimal sufficient statistic for this model (Example 6.1.8). Consider r r x1 xn r1 rn x1 x xn x s s as one possible definition of the residual. Note that we can reconstruct the data from the values of x s2 and r . It turns out R has a distribution that is independent of 2 (and hence is an­ cillary — see Challenge 9.1.28) as well as independent of X S2 So again, the two approaches to obtaining a P­value agree here, as long as the discrepancy statistic de­ pends on the data only through r One possible discrepancy statistic is given by D r 1 n n ln i 1 r 2 i n 1 Chapter 9: Model Checking 483 To use this statistic for model checking, we need to obtain its distribution when the model is correct. Then we compare the observed value D r with this distribution, to see if it is surprising. We can do this via simulation. Because the distribution of D R is independent 2 , we can generate N samples of size n from the N 0 1 distribution (or of any other normal distribution) and calculate D R for each sample. 
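A minimal sketch of this simulation (not from the text; the discrepancy function passed in, the simulation size, and the heavy-tailed test data are illustrative choices) is the following. Because R is ancillary, samples from N(0, 1) stand in for any normal distribution, and the observed discrepancy is then located within the simulated values.

```python
import numpy as np

rng = np.random.default_rng(2)

def residuals(x):
    """The residuals of Example 9.1.2: r_i = (x_i - xbar) / s."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std(ddof=1)

def simulate_discrepancy(D, n, nsim=10_000):
    """Simulate the distribution of D(r) under the normal model; since r is
    ancillary, the samples can be drawn from N(0, 1)."""
    return np.array([D(residuals(rng.standard_normal(n))) for _ in range(nsim)])

def check(D, x, nsim=10_000):
    """Observed discrepancy and the proportion of simulated values at least as
    large (appropriate when only large D indicates model failure; small values
    can be examined in the same way for the left tail)."""
    x = np.asarray(x, float)
    d_obs = D(residuals(x))
    sims = simulate_discrepancy(D, len(x), nsim)
    return d_obs, float(np.mean(sims >= d_obs))

# Illustration with the kurtosis statistic (1/n) sum r_i^4 as D: data simulated
# from a heavy-tailed t(2) distribution should give a large observed value.
D_kurt = lambda r: np.mean(r ** 4)
x = rng.standard_t(2, size=25)
print(check(D_kurt, x))
```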
Then we look at histograms of the simulated values to see if D r , from the original sample, is a surprising value, i.e., if it lies in a region of low probability like a left or right tail. For example, suppose we observed the sample 2 08 0 28 2 01 1 37 40 08 4 93 Then, simulating 104 values from the distribution obtaining the value D r of D under the assumption of model correctness, we obtained the density histogram given in Figure 9.1.2. See Appendix B for some code used to carry out this simulation. The value D r 4 93 is out in the right tail and thus indicates that the sample is not from a normal distribution. In fact, only 0 0057 of the simulated values are larger, so this is definite evidence against the model being correct. y t i s n e d 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0.0 1 2 3 4 D 5 6 7 Figure 9.1.2: A density histogram for a simulation of 104 values of D in Example 9.1.2. n Obviously, there are other possible functions of r that we could use for model checking here. In particular, Dskew r i , the skewness statis­ n n i 1 r 4 i , the kurtosis statistic, are commonly used. tic, and Dkurtosis r The skewness statistic measures the symmetry in the data, while the kurtosis statistic measures the “peakedness” in the data. As just described, we can simulate the distribu­ tion of these statistics under the normality assumption and then compare the observed values with these distributions to see if we have any evidence against the model (see Computer Problem 9.1.27). The following examples present contexts in which the two approaches to computing a P­value for model checking are not the same. 484 Section 9.1: Checking the Sampling Model EXAMPLE 9.1.3 Location­Scale Cauchy Suppose we assume that x1 xn is a sample from the distribution given by 2 R1 t 1 and Z where Z is unknown. This time, x s2 is not a minimal sufficient statistic, but the statistic r defined in Example 9.1.2 is still ancillary (Challenge 9.1.28). We can again simulate values from the distribution of R (just generate samples from the t 1 distribution and compute r for each sample) to estimate P­values for any discrepancy statistic such as the D r statistics discussed in Example 9.1.2. 0 EXAMPLE 9.1.4 Fisher’s Exact Test Suppose we take a sample of n from a population of students and observe the values 2 indicating a1 b1 female) and bi is a categorical variable for part­time employment status (B 1 indicat­ 2 indicating unemployed). So each individual is being categorized ing employed, B into one of four categories, namely, an bn where ai is gender (A 1 indicating male, A Category 1, when A Category 2, when A Category 3, when A Category 4, when Suppose our model for this situation is that A and B are independent with P A 1 1 P B 1 1 where 1 Then letting Xi j denote the count for the category, where A gives that [0 1] and 1 [0 1] are completely unknown. j, Example 2.8.5 i B X11 X12 X21 X22 Multinomial n 1 1 1 2 2 1 2 2 As we will see in Chapt
er 10, this model is equivalent to saying that there is no rela­ tionship between gender and employment status. Denoting the observed cell counts by x11 x12 x21 x22 , the likelihood function is given by x11 1 1 x11 x12 1 x1 1 1 1 2 1 1 x12 x21 2 1 1 n x1 n x11 x12 x 1 1 1 x22 2 2 x11 x21 1 n x 1 1 1 1 n x11 x21 x11 x12 x11 x21 . Therefore, the MLE (Problem 9.1.14) is 1 1 x1 n x 1 n . where x1 x 1 given by Note that 1 is the proportion of males in the sample and 1 is the proportion of all employed in the sample. Because x1 x 1 determines the likelihood function and can be calculated from the likelihood function, we have that x1 x 1 is a minimal sufficient statistic. In this example, a natural definition of residual does not seem readily apparent. So we consider looking at the conditional distribution of the data, given the minimal Chapter 9: Model Checking 485 An Bn sufficient statistic. The conditional distribution of the sample A1 B1 given the values x1 x 1 the restrictions is the uniform distribution on the set of all samples where x11 x11 x21 x12 x21 x22 x1 x 1 n x11 x12 (9.1.2) are satisfied. Notice that, given x1 x 1 when we specify a value for x11. all the other values in (9.1.2) are determined It can be shown that the number of such samples is equal to (see Problem 9.1.21) n x1 n x 1 Now the number of samples with prescribed values for x1 x 1 and x11 i is given by n x1 x1 i n x 1 x1 i Therefore, the conditional probability function of x11 given x1 x 1 is P x11 i x1 x 1 n x1 x1 i n x1 n x1 x 1 i n x 1 x1 i n x1 x 1 i n x 1 This is the Hypergeometric n x 1 x1 probability function. So we have evidence against the model holding whenever x11 is out in the tails of this distribution. Assessing this requires a tabulation of this distribution or the use of a statistical package with the hypergeometric distribution function built in. As a simple numerical example, suppose that we took a sample of n obtaining x 1 the Hypergeometric 20 12 6 probability function is given by the following table. 12 unemployed, x1 6 males, and x11 20 students, 2 employed males. Then i p i 0 0 001 1 0 017 2 0 119 3 0 318 4 0 358 5 0 163 6 0 024 2 is equal The probability of getting a value as far, or farther, out in the tails than x11 to the probability of observing a value of x11 with probability of occurrence as small as or smaller than x11 2 This P­value equals 0 119 0 017 0 001 0 024 0 161 Therefore, we have no evidence against the model of independence between A and B Of course, the sample size is quite small here. There is another approach here to testing the independence of A and B. In particu­ lar, we could only assume the independence of the initial unclassified sample, and then we always have X11 X12 X21 X22 Multinomial n 11 12 21 22 486 Section 9.1: Checking the Sampling Model where the could then test for the independence of A and B We will discuss this in Section 10.2. i j comprise an unknown probability distribution. Given this model, we Another approach to model checking proceeds as follows. We enlarge the model to include more distributions and then test the null hypothesis that the true model is the submodel we initially started with. If we can apply the methods of Section 8.2 to come up with a uniformly most powerful (UMP) test of this null hypothesis, then we will have a check of departures from the model of interest — at least as expressed by the possible alternatives in the enlarged model. 
If the model passes such a check, however, we are still required to check the validity of the enlarged model. This can be viewed as a technique for generating relevant discrepancy statistics D. 9.1.1 Residual and Probability Plots There is another, more informal approach to checking model correctness that is often used when we have residuals available. These methods involve various plots of the residuals that should exhibit specific characteristics if the model is correct. While this approach lacks the rigor of the P­value approach, it is good at demonstrating gross deviations from model assumptions. We illustrate this via some examples. EXAMPLE 9.1.5 Location and Location­Scale Normal Models Using the residuals for the location normal model discussed in Example 9.1.1, we have 2 that E Ri 1 n We standardize these values so that they 0 1 also have variance 1, and so obtain the standardized residuals r1 rn given by 0 and Var Ri ri n 2 0 n 1 xi x . (9.1.3) The standardized residuals are distributed N 0 1 and, assuming that n is reasonably large, it can be shown that they are approximately independent. Accordingly, we can think of r1 rn as an approximate sample from the N 0 1 distribution. Therefore, a plot of the points i ri should not exhibit any discernible pattern. Furthermore, all the values in the y­direction should lie in unless of course 3 3 n is very large, in which case we might expect a few values outside this interval A discernible pattern, or several extreme values, can be taken as some evidence that the model assumption is not correct. Always keep in mind, however, that any observed pattern could have arisen simply from sampling variability when the true model is correct. Simulating a few of these residual plots (just generating several samples of n from the N 0 1 distribution and obtaining a residual plot for each sample) will give us some idea of whether or not the observed pattern is unusual. Figure 9.1.3 shows a plot of the standardized residuals (9.1.3) for a sample of 100 from the N 0 1 distribution. Figure 9.1.4 shows a plot of the standardized residuals for a sample of 100 from the distribution given by 3 1 2 Z where Z t 3 . Note that a t 3 distribution has mean 0 and variance equal to 3, so Var 3 1 2 Z 1 (Problem 4.6.16). Figure 9.1.5 shows the standardized residuals for a sample of 100 from an Exponential 1 distribution. Chapter 9: Model Checking 487 1 ­2 ­3 ­4 ­5 ­6 0 50 i 100 Figure 9.1.3: A plot of the standardized residuals for a sample of 100 from an N 0 1 distribution1 ­2 ­3 ­4 ­5 ­6 0 50 i 100 Figure 9.1.4: A plot of the standardized residuals for a sample of 100 from X where 1 ­2 ­3 ­4 ­5 ­6 0 50 i 100 Figure 9.1.5: A plot of the standardized residuals for a sample of 100 from an Exponential 1 distribution. 488 Section 9.1: Checking the Sampling Model Note that the distributions of the standardized residuals for all these samples have mean 0 and variance equal to 1. The difference in Figures 9.1.3 and 9.1.4 is due to the fact that the t distribution has much longer tails. This is reected in the fact that a few of the standardized residuals are outside 3 3 in Figure 9.1.4 but not in Figure 9.1.3. Even though the two distributions are quite different — e.g., the N 0 1 distribution has all of its moments whereas the 3 1 2 t 3 distribution has only two moments — the plots of the standardized residuals are otherwise very similar. The difference in Figures 9.1.3 and 9.1.5 is due to the asymmetry in the Exponential 1 distribution, as it is skewed to the right. 
Using the residuals for the location­scale normal model discussed in Example 9.1.2, we define the standardized residuals r1 rn by ri n s2 n 1 xi x . (9.1.4) Here, the unknown variance is estimated by s2. Again, it can be shown that when n is rn is an approximate sample from the N 0 1 distribution. So we large, then r1 and interpret the plot just as we described for the location normal plot the values i ri model. It is very common in statistical applications to assume some basic form for the dis­ tribution of the data, e.g., we might assume we are sampling from a normal distribution with some mean and variance. To assess such an assumption, the use of a probability plot has proven to be very useful. To illustrate, suppose that x1 2 distribution. Then it can be shown that when n is large, the expectation of the i­th order statistic satisfies xn is a sample from an N 1 i n If the data value x j corresponds to order statistic ), then we call the normal score of x j in the sample Then (9.1.5) indicates that if 1 i n , these should lie approximately on a line . We call such a plot a normal probability plot or normal we plot the points x i with intercept quantile plot. Similar plots can be obtained for other distributions. and slope (i.e., x i 1 (9.1.5) 1 EXAMPLE 9.1.6 Location­Scale Normal Suppose we want to assess whether or not the following data set can be considered a sample of size n 10 from some normal distribution. 2 00 0 28 0 47 3 33 1 66 8 17 1 18 4 15 6 43 1 77 The order statistics and associated normal scores for this sample are given in the fol­ lowing table 28 1 34 6 2 00 0 11 2 0 47 0 91 7 3 33 0 34 3 1 18 0 61 8 4 15 0 60 4 1 66 0 35 9 6 43 0 90 5 1 77 0 12 10 8 17 1 33 Chapter 9: Model Checking 489 The values x i 1 i n 1 are then plotted in Figure 9.1.6. There is some definite deviation from a straight line here, but note that it is difficult to tell whether this is unexpected in a sample of this size from a normal distribution. Again, simulating a few samples of the same size (say, from an N 0 1 distribution) and looking at their normal probability plots is recom­ mended. In this case, we conclude that the plot in Figure 9.1.6 looks reasonable Figure 9.1.6: Normal probability plot of the data in Example 9.1.6. We will see in Chapter 10 that the use of normal probability plots of standardized residuals is an important part of model checking for more complicated models. So, while they are not really needed here, we consider some of the characteristics of such plots when assessing whether or not a sample is from a location normal or location­ scale normal model. Assume that n is large so that we can consider the standardized residuals, given by (9.1.3) or (9.1.4) as an approximate sample from the N 0 1 distribution. Then a normal probability plot of the standardized residuals should be approximately linear, with y­intercept approximately equal to 0 and slope approximately equal to 1. If we get a substantial deviation from this, then we have evidence th
at the assumed model is incorrect. In Figure 9.1.7, we have plotted a normal probability plot of the standardized resid­ 25 from an N 0 1 distribution In Figure 9.1.8, we have uals for a sample of n 25 plotted a normal probability plot of the standardized residuals for a sample of n from the distribution given by X t 3 . Both distributions have mean 0 and variance 1, so the difference in the normal probability plots is due to other distributional differences. 3 1 2 Z where Z 490 Section 9.1: Checking the Sampling Model 1 ­2 ­2 ­1 0 1 2 Standardized residuals Figure 9.1.7: Normal probability plot of the standardized residuals of a sample of 25 from an N 0 1 distribution1 ­2 ­2 ­1 0 1 2 3 Standardized residuals Figure 9.1.8: Normal probability plot of the standardized residuals of a sample of 25 from X 3 1 2 Z where Z t 3 9.1.2 The Chi­Squared Goodness of Fit Test The chi­squared goodness of fit test has an important historical place in any discussion of assessing model correctness. We use this test to assess whether or not a categorical k , has a random variable W , which takes its values in the finite sample space 1 2 specified probability measure P, after having observed a sample n . When we have a random variable that is discrete and takes infinitely many values, then we partition the possible values into k categories and let W simply indicate which category has occurred. If we have a random variable that is quantitative, then we partition R1 into k subintervals and let W indicate in which interval the response occurred. In effect, we want to check whether or not a specific probability model, as given by P is correct for W based on an observed sample. 1 Chapter 9: Model Checking 491 Let X1 Xk be the observed counts or frequencies of 1 k respectively. If P is correct, then, from Example 2.8.5, X1 Xk Multinomial n p1 pk where pi that Xi P i . This implies that E Xi npi and Var Xi npi 1 pi (recall Binomial n pi ). From this, we deduce that Ri Xi npi 1 npi D pi N 0 1 (9.1.6) as n (see Example 4.4.9). For finite n the distribution of Ri when the model is correct, is dependent on P but the limiting distribution is not. Thus we can think of the Ri as standardized residuals when n is large. Therefore, it would seem that a reasonable discrepancy k i 1 R2 statistic is given by the sum of the squares of the standardized residuals with i approximately distributed 2 k The restriction x1 n holds, however, so xk 2 k . We do, however, the Ri are not independent and the limiting distribution is not have the following result, which provides a similar discrepancy statistic. Theorem 9.1.1 If X1 Xk Multinomial n p1 pk , then X 2 k i 1 1 pi R2 i k Xi 2 npi D i 1 npi 2 k 1 as n The proof of this result is a little too involved for this text, so see, for example, Theorem 17.2 of Asymptotic Statistics by A. W. van der Vaart (Cambridge University Press, Cambridge, 1998), which we will use here. We refer to X 2 as the chi­squared statistic. The process of assessing the correctness of the model by computing the P­value P X 2 1 and X 2 0 is the observed value of the chi­squared statistic, is referred to as the chi­squared goodness of fit test. Small P­values near 0 provide evidence of the incorrectness of the probability model. Small P­values indicate that some of the residuals are too large. 0 , where X 2 X 2 2 k Note that the ith term of the chi­squared statistic can be written as Xi 2 npi (number in the ith cell expected number in the ith cell)2 npi expected number in the ith cell . 
It is recommended, for example, in Statistical Methods, by G. Snedecor and W. Cochran (Iowa State Press, Ames, 6th ed., 1967), that grouping (combining cells) be employed to ensure that E(X_i) = np_i ≥ 1 for every i, as simulations have shown that this improves the accuracy of the approximation to the P-value.

We consider an important application.

EXAMPLE 9.1.7 Testing the Accuracy of a Random Number Generator
In effect, every Monte Carlo simulation can be considered to be a set of mathematical operations applied to a stream of numbers U_1, U_2, ... in [0, 1] that are supposed to be i.i.d. Uniform[0, 1]. Of course, they cannot satisfy this requirement exactly because they are generated according to some deterministic function. Typically, a function f : [0, 1]^m → [0, 1] is chosen and is applied iteratively to obtain the sequence. So we select U_1, ..., U_m as initial seed values and then U_{m+1} = f(U_1, ..., U_m), U_{m+2} = f(U_2, ..., U_{m+1}), etc. There are many possibilities for f, and a great deal of research and study have gone into selecting functions that will produce sequences that adequately mimic the properties of an i.i.d. Uniform[0, 1] sequence.

Of course, it is always possible that the underlying f used in a particular statistical package or other piece of software is very poor. In such a case, the results of the simulations can be grossly in error. How do we assess whether a particular f is good or not? One approach is to run a battery of statistical tests to see whether the sequence is behaving as we know an ideal sequence would. For example, if the sequence U_1, U_2, ... is i.i.d. Uniform[0, 1], then ⌈10U_1⌉, ⌈10U_2⌉, ... is i.i.d. Uniform{1, 2, ..., 10} (⌈x⌉ denotes the smallest integer greater than x, e.g., ⌈3.2⌉ = 4). So we can test the adequacy of the underlying function f by generating U_1, ..., U_n for large n, putting x_i = ⌈10U_i⌉, and then carrying out a chi-squared goodness of fit test with the 10 categories 1, ..., 10, each with cell probability equal to 1/10.

Doing this using a popular statistical package (with n = 10^4) gave the following table of counts x_i and standardized residuals r_i, as specified in (9.1.6).

 i     x_i      r_i
 1     993    −0.23333
 2    1044     1.46667
 3    1061     2.03333
 4    1021     0.70000
 5    1017     0.56667
 6     973    −0.90000
 7     975    −0.83333
 8     965    −1.16667
 9     996    −0.13333
10     955    −1.50000

All the standardized residuals look reasonable as possible values from an N(0, 1) distribution. Furthermore,

X² = (1 − 0.1){(−0.23333)² + (1.46667)² + (2.03333)² + (0.70000)² + (0.56667)² + (−0.90000)² + (−0.83333)² + (−1.16667)² + (−0.13333)² + (−1.50000)²} = 11.0560,

which gives the P-value P(X² ≥ 11.0560) = 0.27190 when X² ~ χ²(9). This indicates that we have no evidence that the random number generator is defective.

Of course, the story does not end with a single test like this. Many other features of the sequence should be tested. For example, we might want to investigate the independence properties of the sequence and so test whether each possible combination (i, j) occurs with probability 1/100, etc.

More generally, we will not have a prescribed probability distribution P for W, but rather a statistical model {P_θ : θ ∈ Ω}, where each P_θ is a probability measure on the finite set {1, 2, ..., k}. Then, based on the sample from the model, we have that

(X_1, ..., X_k) ~ Multinomial(n, p_1(θ), ..., p_k(θ)),

where p_i(θ) = P_θ({i}). Perhaps a natural way to assess whether or not this model fits the data is to find the MLE θ̂ from the likelihood function

L(θ | x_1, ..., x_k) = (p_1(θ))^{x_1} ··· (p_k(θ))^{x_k}

and then look at the standardized residuals

r_i = (x_i − np_i(θ̂)) / (np_i(θ̂)(1 − p_i(θ̂)))^{1/2}.

We have the following result, which we state without proof.
Theorem 9.1.2 Under conditions (similar to those discussed in Section 6.5), we have that R_i → N(0, 1) in distribution and

X² = Σ_{i=1}^k (1 − p_i(θ̂)) R_i² = Σ_{i=1}^k (X_i − np_i(θ̂))² / (np_i(θ̂)) → χ²(k − 1 − dim Ω)

in distribution as n → ∞.

By dim Ω we mean the dimension of the set Ω. Loosely speaking, this is the minimum number of coordinates required to specify a point in the set, e.g., a line requires one coordinate (positive or negative distance from a fixed point), a circle requires one coordinate, a plane in R³ requires two coordinates, etc. Of course, this result implies that the number of cells must satisfy k ≥ dim Ω + 1.

Consider an example.

EXAMPLE 9.1.8 Testing for Exponentiality
Suppose that a sample of lifelengths of light bulbs (measured in thousands of hours) is supposed to be from an Exponential(θ) distribution, where θ > 0 is unknown. So here dim Ω = 1, and we require at least two cells for the chi-squared test. The manufacturer expects that most bulbs will last at least 1000 hours, 50% will last less than 2000 hours, and most will have failed by 3000 hours. So, based on this, we partition the sample space (0, ∞) as

(0, ∞) = (0, 1] ∪ (1, 2] ∪ (2, 3] ∪ (3, ∞).

Suppose that a sample of n = 30 light bulbs was taken and that the counts x_1 = 5, x_2 = 16, x_3 = 8, and x_4 = 1 were obtained for the four intervals, respectively. Then the likelihood function based on these counts is given by

L(θ | x_1, ..., x_4) = (1 − e^{−θ})^5 (e^{−θ} − e^{−2θ})^{16} (e^{−2θ} − e^{−3θ})^8 (e^{−3θ})^1

because, for example, the probability of a value falling in (1, 2] is e^{−θ} − e^{−2θ}, and we have x_2 = 16 observations in this interval. Figure 9.1.9 is a plot of the log-likelihood.

Figure 9.1.9: Plot of the log-likelihood function in Example 9.1.8.

By successively plotting the likelihood on shorter and shorter intervals, the MLE was determined to be θ̂ = 0.603535. This value leads to the probabilities

p_1(θ̂) = 1 − e^{−0.603535} = 0.453125,
p_2(θ̂) = e^{−0.603535} − e^{−2(0.603535)} = 0.247803,
p_3(θ̂) = e^{−2(0.603535)} − e^{−3(0.603535)} = 0.135517,
p_4(θ̂) = e^{−3(0.603535)} = 0.163555,

the fitted values

30 p_1(θ̂) = 13.59375,  30 p_2(θ̂) = 7.43409,  30 p_3(θ̂) = 4.06551,  30 p_4(θ̂) = 4.90665,

and the standardized residuals

r_1 = (5 − 13.59375) / (30(0.453125)(1 − 0.453125))^{1/2} = −3.151875,
r_2 = (16 − 7.43409) / (30(0.247803)(1 − 0.247803))^{1/2} = 3.622378,
r_3 = (8 − 4.06551) / (30(0.135517)(1 − 0.135517))^{1/2} = 2.098711,
r_4 = (1 − 4.90665) / (30(0.163555)(1 − 0.163555))^{1/2} = −1.928382.

Note that two of the standardized residuals look large. Finally, we compute

X² = (1 − 0.453125)(−3.151875)² + (1 − 0.247803)(3.622378)² + (1 − 0.135517)(2.098711)² + (1 − 0.163555)(−1.928382)² = 22.221018

and P(X² ≥ 22.221018) = 0.0000 when X² ~ χ²(2). Therefore, we have strong evidence that the Exponential model is not correct for these data, and we would not use this model to make inference about θ. Note that we used the MLE of θ based on the count data and not the original sample! If instead we were to use the MLE for θ based on the original sample (in this case, equal to 1/x̄ and so much easier to compute), then Theorem 9.1.2 would no longer be valid.
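The computations in Example 9.1.8 are also easy to reproduce numerically. The following Python sketch is ours (the text itself prescribes no particular software); it maximizes the grouped-data log-likelihood to obtain θ̂ and then forms the fitted counts, standardized residuals, X², and the P-value on k − 1 − dim Ω = 2 degrees of freedom.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import chi2

    counts = np.array([5, 16, 8, 1])        # counts for (0,1], (1,2], (2,3], (3,inf)
    n = counts.sum()

    def cell_probs(theta):
        # Exponential(theta) cell probabilities: e^{-theta(j-1)} - e^{-theta j}; last cell e^{-3 theta}
        e = np.exp(-theta * np.arange(4))    # [1, e^-t, e^-2t, e^-3t]
        return np.append(e[:-1] - e[1:], e[-1])

    def neg_log_lik(theta):
        return -np.sum(counts * np.log(cell_probs(theta)))

    theta_hat = minimize_scalar(neg_log_lik, bounds=(0.01, 5), method="bounded").x
    p_hat = cell_probs(theta_hat)
    fitted = n * p_hat
    residuals = (counts - fitted) / np.sqrt(fitted * (1 - p_hat))
    x2 = np.sum((counts - fitted) ** 2 / fitted)
    df = len(counts) - 1 - 1                 # k - 1 - dim(Omega)
    print(theta_hat, residuals, x2, chi2.sf(x2, df))   # theta_hat near 0.60, X^2 near 22.2

Any one-dimensional optimizer (or the successive-plotting approach described in the example) gives essentially the same θ̂.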
The chi-squared statistic is but one of many discrepancy statistics that have been proposed for model checking in the statistical literature. The general approach is to select a discrepancy statistic D, like X², such that the exact or asymptotic distribution of D is independent of θ and known. We then compute a P-value based on D. The Kolmogorov–Smirnov test and the Cramér–von Mises test are further examples of such discrepancy statistics, but we do not discuss these here.

9.1.3 Prediction and Cross-Validation

Perhaps the most rigorous test that a scientific model or theory can be subjected to is assessing how well it predicts new data after it has been fit to an independent data set. In fact, this is a crucial step in the acceptance of any new empirically developed scientific theory — to be accepted, it must predict new results beyond the data that led to its formulation. If a model does not do a good job of predicting new data, then it is reasonable to say that we have evidence against the model being correct. If the model is too simple, then the fitted model will underfit the observed data and also the future data. If the model is too complicated, then the model will overfit the original data, and this will be detected when we consider the new data in light of this fitted model.

In statistical applications, we typically cannot wait until new data are generated to check the model. So statisticians use a technique called cross-validation. For this, we split an original data set x_1, ..., x_n into two parts: the training set T, comprising k of the data values and used to fit the model, and the validation set V, which comprises the remaining n − k data values. Based on the training data, we construct predictors of various aspects of the validation data. Using the discrepancies between the predicted and actual values, we then assess whether or not the validation set V is surprising as a possible future sample from the model. Of course, there are (n choose k) possible such splits of the data, and we would not want to make a decision based on just one of these. So a cross-validational analysis will have to take this into account. Furthermore, we will have to decide how to measure the discrepancies between T and V and choose a value for k. We do not pursue this topic any further in this text.

9.1.4 What Do We Do When a Model Fails?

So far we have been concerned with determining whether or not an assumed model is appropriate given observed data. Suppose the result of our model checking is that we decide a particular model is inappropriate. What do we do now?

Perhaps the obvious response is to say that we have to come up with a more appropriate model — one that will pass our model checking. It is not obvious how we should go about this, but statisticians have devised some techniques.

One of the simplest techniques is the method of transformations. For example, suppose that we observe a sample y_1, ..., y_n from the distribution given by Y = exp(X) with X ~ N(μ, σ²). A normal probability plot based on the y_i, as in Figure 9.1.10, will detect evidence of the nonnormality of the distribution. Transforming these y_i values to ln y_i will, however, yield a reasonable-looking normal probability plot, as in Figure 9.1.11.

So in this case, a simple transformation of the sample yields a data set that passes this check. In fact, this is something statisticians commonly do. Several transformations from the family of power transformations, given by Y^p for p ≠ 0, or the logarithm transformation ln Y, are tried to see if a distributional assumption can be satisfied by a transformed sample. We will see some applications of this in Chapter 10.
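As a quick illustration of the transformation method (ours, not part of the text), the following Python sketch simulates a sample from Y = exp(X) with X ~ N(0, 1), as in Figures 9.1.10 and 9.1.11, and compares how straight the normal probability plot looks before and after taking logarithms, using the correlation between the ordered values and the normal scores as a rough measure of straightness.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 50
    y = np.exp(rng.normal(size=n))            # Y = exp(X), X ~ N(0, 1)
    scores = norm.ppf(np.arange(1, n + 1) / (n + 1))

    def straightness(sample):
        # Correlation between ordered sample values and normal scores;
        # values near 1 indicate an approximately linear probability plot.
        return np.corrcoef(np.sort(sample), scores)[0, 1]

    print("raw Y:", straightness(y))           # noticeably below 1
    print("ln Y: ", straightness(np.log(y)))   # close to 1

Plotting the two probability plots themselves, rather than summarizing them by a correlation, is what one would normally do in practice.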
Surprisingly, this simple technique often works, although there are no guarantees that it always will. Perhaps the most commonly applied transformation is the logarithm when our data values are positive (note that this is a necessity for this transformation). Another very common transformation is the square root transformation, i.e., p 1 2 when we have count data. Of course, it is not correct to try many, many transformations until we find one that makes the probability plots or residual plots look acceptable. Rather, we try a few simple transformations. Chapter 9: Model Checking 497 1 ­2 0 1 2 4 5 6 3 Y Figure 9.1.10: A normal probability plot of a sample of n Y exp X with X N 0 1 . 50 from the distribution given by 1 ­2 ­3 ­2 ­1 0 1 2 Y Figure 9.1.11: A normal probability plot of a sample of n ln Y , where Y exp X and X N 0 1 . 50 from the distribution given by Summary of Section 9.1 Model checking is a key component of the practical application of statistics. One approach to model checking involves choosing a discrepancy statistic D and then assessing whether or not the observed value of D is surprising by computing a P­value. 498 Section 9.1: Checking the Sampling Model Computation of the P­value requires that the distribution of D be known under the assumption that the model is correct. There are two approaches to accom­ plishing this. One involves choosing D to be ancillary, and the other involves computing the P­value using the conditional distribution of the data given the minimal sufficient statistic. The chi­squared goodness of fit statistic is a commonly used discrepancy statis­ tic. For large samples, with the expected cell counts determined by the MLE based on the multinomial likelihood, the chi­squared goodness of fit statistic is approximately ancillary. There are also many informal methods of model checking based on various plots of residuals. If a model is rejected, then there are several techniques for modifying the model. These typically involve transformations of the data. Also, a model that fails a model­checking procedure may still be useful, if the deviation from correctness is small. EXERCISES 9.1.1 Suppose the following sample is assumed to be from an N 4 distribution with R1 unknown Check this model using the discrepancy statistic of Example 9.1.1. 9.1.2 Suppose the following sample is assumed to be from an N 2 distribution with unknowna) Plot the standardized residuals. (b) Construct a normal probability plot of the standardized residuals. (c) What conclusions do you draw based on the results of parts (a) and (b)? 9.1.3 Suppose the following sample is assumed to be from an N 0 are unknown. where R1 and 2 2 distribution, 14 0 9 4 12 1 13 4 6 3 8 5 7 1 12 4 13 3 9 1 (a) Plot the standardized residuals. (b) Construct a normal probability plot of the standardized residuals. (c) What conclusions do you draw based on the results of parts (a) and (b)? 9.1.4 Suppose the following table was obtained from classifying members of a sample of n 10 from a student population according to the classification variables A and B, where A 0 1 indicates conservative, liberal. 0 1 indicates male, female and Chapter 9: Model Checking 499 Check the model that says gender and political orientation are independent, using Fisher’s exact test. 9.1.5 The following sample of n ution. 
20 is supposed to be from a Uniform[0 1] distrib­ 0 11 0 45 0 56 0 22 0 72 0 08 0 18 0 65 0 26 0 32 0 32 0 88 0 42 0 76 0 22 0 32 0 96 0 21 0 04 0 80 After grouping the data, using a partition of five equal­length intervals, carry out the chi­squared goodness of fit test to assess whether or not we have evidence against this assumption. Record the standardized residuals. 9.1.6 Suppose a die is tossed 1000 times, and the following frequencies are obtained for the number of pips up when the die comes to a rest. x1 163 x2 178 x3 142 x4 150 x5 183 x6 184 Using the chi­squared goodness of fit test, assess whether we have evidence that this is not a symmetrical die. Record the standardized residuals. 9.1.7 Suppose the sample space for a response is given by S (a) Suppose that a statistician believes that in fact the response will lie in the set S 10 11 12 13 and so chooses a probability measure P that reects this When 3 is observed. What is an appropriate the data are collected, however, the value s P­value to quote as a measure of how surprising this value is as a potential value from P? (b) Suppose instead P is taken to be a Geometric(0.1) distribution. Determine an ap­ 3 is as a potential value propriate P­value to quote as a measure of how surprising s from P. 0 1 2 3 . 3 heads in n 9.1.8 Suppose we observe s 10 independent tosses of a purportedly fair coin. Compute a P­value that assesses how surprising this value is as a potential value from the distribution prescribed. Do not use the chi­squared test. 9.1.9 Suppose you check a model by computing a P­value based on some discrepancy statistic and conclude that there is no evidence against the model. Does this mean the model is correct? Explain your answer. 9.1.10 Suppose you are told that standardized scores on a test are distributed N 0 1 A student’s standardized score is 4. Compute an appropriate P­value to assess whether or not the statement is correct. 9.1.11 It is asserted that a coin is being tossed in independent tosses. You are somewhat skeptical about the independence of the tosses because you know that some people practice tossing coins so that they can increase the frequency of getting a head. So you wish to assess the independence of x1 (a) Show that the conditional distribution of x1 of all sequences of length n with entries from 0 1 (b) Using this conditional distribution, determine the distribution of the number of 1’s observed in the first n 2 observations. (Hint: The hypergeometric distribution.) xn given x is uniform on the set xn from a Bernoulli distribution. 500 Section 9.1: Checking the Sampling Model (c) Suppose you observe Compute an appropriate P­value to assess the independence of these tosses using (b). COMPUTER EXERCISES 9.1.12 For the data of Exercise 9.1.1, present a normal probability plot of the standard­ ized residuals and comment on it. 9.1.13 Generate 25 samples from the N 0 1 distribution with n their normal probability plots. Draw any general conclusions. 9.1.14 Suppose the following table was obtained from classifying members of a sam­ 100 from a student population according to th
e classification variables A ple on n and B, where A 0 1 indicates conservative, liberal. 0 1 indicates male, female and B 10 and look at B 0 B 1 A A 0 1 20 36 15 29 Check the model that gender and political orientation are independent using Fisher’s exact test. 9.1.15 Using software, generate a sample of n 1000 from the Binomial 10 0 2 distribution. Then, using the chi­squared goodness of fit test, check that this sample is 1. What would indeed from this distribution. Use grouping to ensure E Xi you conclude if you got a P­value close to 0? 9.1.16 Using a statistical package, generate a sample of n 1000 from the Poisson 5 distribution. Then, using the chi­squared goodness of fit test based on grouping the 1, check that this sample observations into five cells chosen to ensure E Xi is indeed from this distribution. What would you conclude if you got a P­value close to 0? 9.1.17 Using a statistical package, generate a sample of n 1000 from the N 0 1 distribution. Then, using the chi­squared goodness of fit test based on grouping the observations into five cells chosen to ensure E Xi 1, check that this sample is indeed from this distribution. What would you conclude if you got a P­value close to 0? npi npi npi PROBLEMS 9.1.18 (Multivariate normal distribution) A random vector Y have a multivariate normal distribution with mean vector Y1 Yk is said to Rk and variance matrix i j Rk k if a1Y1 akYk N ai i ai for every choice of a1 Cov Yi Y j ak i j and that Yi R1. We write Y N i ii . (Hint: Theorem 3.3.4.) Nk . Prove that E Yi i , Chapter 9: Model Checking 501 i j i j Y1 R1 Yk is distributed multivariate normal with mean vector Rn is distributed mul­ 0 and variance 0 j and i i 9.1.19 In Example 9.1.1, prove that the residual R tivariate normal (see Problem 9.1.18) with mean vector Rk k, where 2 matrix 0 n when i (Hint: Theorem 4.6.1.) Rk 9.1.20 If Y is distributed multi­ and variance matrix Rl Rl and variance matrix then it variate normal with mean vector i j l k i 1 ai Yi and i 1 bi Xi are can be shown that Y and X are independent whenever bl . Use this fact to show ak and b1 independent for every choice of a1 that, in Example 9.1.1, X and R are independent. (Hint: Theorem 4.6.2 and Problem 9.1.19.) 9.1.21 In Example 9.1.4, prove that ( 1 x1 n x 1 n is the MLE. Rk k and if X 2 0 1 1 n X1 Xl i j l 1 9.1.22 In Example 9.1.4, prove that the number of samples satisfying the constraints (9.1.2) equals n x1 (Hint: Using i for the count x11, show that the number of such samples equals n x 1 . n x1 min x1 x 1 i max 0 x1 x 1 n x1 i n x 1 x1 i and sum this using the fact that the sum of Hypergeometric n x 1 x1 probabilities equals 1.) COMPUTER PROBLEMS 9.1.23 For the data of Exercise 9.1.3, carry out a simulation to estimate the P­value for the discrepancy statistic of Example 9.1.2. Plot a density histogram of the simulated values. (Hint: See Appendix B for appropriate code.) 10 generate 104 values of the discrepancy statistic in Example 9.1.2 9.1.24 When n when we have a sample from an N 0 1 distribution. Plot these in a density histogram. Repeat this, but now generate from a Cauchy distribution. Compare the histograms (do not forget to make sure both plots have the same scales). 9.1.25 The following data are supposed to have come from an Exponential ution, where 0 is unknown. 
distrib 12 1 12 10 1 0 1 4 9 Check this model using a chi­squared goodness of fit test based on the intervals 2 0] 2 0 4 0] 4 0 6 0] 6 0 8 0] 8 0 10 0] 10 0 (Hint: Calculate the MLE by plotting the log­likelihood over successively smaller in­ tervals.) 502 Section 9.2: Checking for Prior–Data Conict 9.1.26 The following table, taken from Introduction to the Practice of Statistics, by D. Moore and G. McCabe (W. H. Freeman, New York, 1999), gives the measurements in milligrams of daily calcium intake for 38 women between the ages of 18 and 24 years. 808 651 626 1156 882 716 774 684 1062 438 1253 1933 970 1420 549 748 909 1425 1325 1203 802 948 446 2433 374 1050 465 1255 416 976 1269 110 784 572 671 997 403 696 600] 600 1200] 1200 1800] 1800 (a) Suppose that the model specifies a location normal model for these data with 2 0 500 2. Carry out a chi­squared goodness of fit test on these data using the intervals (Hint: Plot the log­likelihood over successively smaller intervals to determine the MLE to about one decimal place. To determine the initial range for plotting, use the overall MLE of minus three standard errors to the overall MLE plus three standard errors.) (b) Compare the MLE of (c) It would be more realistic to assume that the variance 2 is unknown as well. Record the log­likelihood for the grouped data. (More sophisticated numerical methods are needed to find the MLE of 9.1.27 Generate 104 values of the discrepancy statistics Dskew and Dkurtosis in Example 10 from an N 0 1 distribution. Plot these 9.1.2 when we have a sample of n Indicate how you would use these histograms to assess the in density histograms. normality assumption when we had an actual sample of size 10. Repeat this for n 20 and compare the distributions. obtained in part (a) with the ungrouped MLE. 2 in this case.) CHALLENGES 9.1.28 (MV) Prove that when x1 Z , where Z has a known distribution and xn is a sample from the distribution given by is unknown, R1 0 2 then the statistic r x1 xn x1 x xn x s s is ancillary. r x1 (Hint: Write a sample element as xi xn can be written as a function of the zi .) zi and then show that 9.2 Checking for Prior–Data Conict to the statistical model Bayesian methodology adds the prior probability measure P : for the subsequent statistical analysis. The methods of Section 9.1 are designed to check that the observed data can realistically be assumed to have come When we add the prior, we are in effect saying from a distribution in P : that our knowledge about the true distribution leads us to assign the prior predictive probability M given by M A to describe the process generating the data. So it would seem, then, that a sensible Bayesian model­checking E P A for A Chapter 9: Model Checking 503 approach would be to compare the observed data s with the distribution given by M to see if it is surprising or not. Suppose that we were to conclude that the Bayesian model was incorrect after deciding that s is a surprising value from M This only tells us, however, that the probability measure M is unlikely to have produced the data and not that the model P : was wrong. Consider the following example. EXAMPLE 9.2.1 Prior–Data Conict Suppose we obtain a sample consisting of n 1 from the model with 1 2 and probability functions for the basic response given by the following 20 values of s table f1 s f2 s Then the probability of obtaining this sample from f2 is given by 0 9 20 f1 which is a reasonable value, so we have no evidence against the model 0 12158 f2 . 
Suppose we place a prior on 0 9999 so that we are virtually 1 Then the probability of getting these data from the prior predictive given by 1 certain that M is 0 9999 0 1 20 0 0001 0 9 20 1 2158 10 5. The prior probability of observing a sample of 20, whose prior predictive probability is 10 5 can be calculated (using statistical software to tabulate no greater than 1 2158 the prior predictive) to be approximately 0 04. This tells us that the observed data are “in the tails” of the prior predictive and thus are surprising, which leads us to conclude that we have evidence that M is incorrect. So in this example, checking the model leads us to conclude that it is plausible for the data observed. On the other hand, checking the model given by M leads us to the conclusion that the Bayesian model is implausible. : f The lesson of Example 9.2.1 is that we can have model failure in the Bayesian con­ text in two ways. First, the data s may be surprising in light of the model . Second, even when the data are plausibly from this model, the prior and the data may conict. This conict will occur whenever the prior assigns most of its probability to distributions in the model for which the data are surprising. In either situation, infer­ ences drawn from the Bayesian model may be awed. : f If, however, the prior assigns positive probability (or density) to every possible value of then the consistency results for Bayesian inference mentioned in Chapter 7 indicate that a large amount of data will overcome a prior–data conict (see Example 9.2.4). This is because the effect of the prior decreases with increasing amounts of data. So the existence of a prior–data conict does not necessarily mean that our inferences are in error. Still, it is useful to know whether or not this conict exists, as it is often difficult to detect whether or not we have sufficient data to avoid the problem. Therefore, we should first use the checks discussed in Section 9.1 to ensure that the If we accept the model, then we look data s is plausibly from the model for any prior–data conict. We now consider how to go about this. : f 504 Section 9.2: Checking for Prior–Data Conict The prior predictive distribution of any ancillary statistic is the same as its distrib­ ution under the sampling model, i.e., its prior predictive distribution is not affected by the choice of the prior. So the observed value of any ancillary statistic cannot tell us anything about the existence of a prior–data conict. We conclude from this that, if we are going to use some function of the data to assess whether or not there is prior–data conict, then its marginal distribution has to depend on . We now show that the prior predictive conditional distribution of the data given a minimal sufficient statistic T is independent of the prior. Theorem 9.2.1 Suppose T is a sufficient statistic for the model for data s Then the conditional prior predictive distribution of the data s given T is independent of the prior f : . PROOF We will prove this in the case that each sample distribution f and the prior are discrete. A similar argument can be developed for the more general case. By
Theorem 6.1.1 (factorization theorem) we have that f s h s g T s for some functions g and h Therefore the prior predictive probability function of s is given by m s h s g T s The prior predictive probability function of T at t is given by m t h s g t s:T s t Therefore, the conditional prior predictive probability function of the data s given T s t is t s t h s which is independent of So, from Theorem 9.2.1, we conclude that any aspects of the data, beyond the value of a minimal sufficient statistic, can tell us nothing about the existence of a prior– data conict. Therefore, if we want to base our check for a prior–data conict on the prior predictive, then we must use the prior predictive for a minimal sufficient statistic. Consider the following examples. EXAMPLE 9.2.2 Checking a Beta Prior for a Bernoulli Model Suppose that x1 unknown, and n i 1 xi is a minimal sufficient statistic and is distributed Binomial n count y [0 1] is prior distribution. Then we have that the sample xn is a sample from a Bernoulli is given a Beta model, where Chapter 9: Model Checking 505 Therefore, the prior predictive probability function for y is given by then m y Now observe that when On the other hand, when 1 n 1 i.e., the prior predictive of y is Uniform 0 1 n and no values of y are surprising. This is not unexpected, as with the uniform prior on we are implicitly saying that any count y is reasonable. 2 the prior puts more weight around 1/2. The 1 This prior predictive is prior predictive is then proportional to y plotted in Figure 9.2.1 when n 20. Note that counts near 0 or 20 lead to evidence that there is a conict between the data and the prior. For example, if we obtain the 3, we can assess how surprising this value is by computing the probability count y of obtaining a value with a lower probability of occurrence. Using the symmetry of the prior predictive, we have that this probability equals (using statistical software for the computation) m 0 m 2 m 19 m 20 0 0688876 Therefore, the observation 3 is not surprising at the 5% level.07 0.06 0.05 0.04 0.03 0.02 0.01 0 10 y 20 Figure 9.2.1: Plot of the prior predictive of the sample count y in Example 9.2.2 when 2 and n 20. Suppose now that n 50 and 2 4 The mean of this prior is 2 2 4 1 3 and the prior is right­skewed. The prior predictive is plotted in Figure 9.2.2. Clearly, values of y near 50 give evidence against the model in this case. For example, 35 then the probability of getting a count with smaller probability of if we observe y occurrence is given by (using statistical software for the computation) m 36 m 50 against the model at the 5% level. 0 0500457. Only values more extreme than this would provide evidence 506 Section 9.2: Checking for Prior–Data Conict .04 0.03 0.02 0.01 0.00 0 10 20 30 40 50 y Figure 9.2.2: Plot of the prior predictive of the sample count y in Example 9.2.2 when 2 4 and n 50. EXAMPLE 9.2.3 Checking a Normal Prior for a Location Normal Model R1 Suppose that x1 xn is a sample from an N to be an is unknown and 2 N 0 0 Note that x is a minimal sufficient 0 for some specified choice of statistic for this model, so we need to compare the observed of this statistic to its prior predictive distribution to assess whether or not there is prior–data conict. 2 0 is known. Suppose we take the prior distribution of 0 and 2 2 0 distribution, where Now we can write x z where N 0 2 0 independent of z 2 0 n From this, we immediately deduce (see Exercise 9.2.3) that the prior pre­ 2 0 n . 
From the symmetry of the prior predictive N 0 dictive distribution of x is N 0 density about 2 0 0 we immediately see that the appropriate P­value is 9.2.1) M X So a small value of (9.2.1) is evidence that there is a conict between the observed data and the prior, i.e., the prior is putting most of its mass on values of for which the observed data are surprising. Another possibility for model checking in this context is to look at the posterior predictive distribution of the data. Consider, however, the following example. EXAMPLE 9.2.4 (Example 9.2.1 continued) Recall that, in Example 9.2.1, we concluded that a prior–data conict existed. Note, however, that the posterior probability of 2 is 0 0001 0 9 20 0 9999 0 1 20 0 0001 0 9 20 1 Therefore, the posterior predictive probability of the observed sequence of 20 values of 1 is 0 12158 which does not indicate any prior–data conict. We note, however, that in this example, the amount of data are sufficient to overwhelm the prior; thus we are led to a sensible inference about Chapter 9: Model Checking 507 The problem with using the posterior predictive to assess whether or not a prior– data conict exists is that we have an instance of the so­called double use of the data. For we have fit the model, i.e., constructed the posterior predictive, using the observed data, and then we tried to use this posterior predictive to assess whether or not a prior– data conict exists. The double use of the data results in overly optimistic assessments of the validity of the Bayesian model and will often not detect discrepancies. We will not discuss posterior model checking further in this text. We have only touched on the basics of checking for prior–data conict here. With more complicated models, the possibility exists of checking individual components of a prior, e.g., the components of the prior specified in Example 7.1.4 for the location­scale normal model, to ascertain more precisely where a prior–data conict is arising. Also, ancillary statistics play a role in checking for prior–data conict as we must remove any ancillary variation when computing the P­value because this variation does not depend on the prior. Furthermore, when the prior predictive distribution of a minimal sufficient statistic is continuous, then issues concerning exactly how P­values are to be computed must be addressed. These are all topics for a further course in statistics. Summary of Section 9.2 In Bayesian inference, there are two potential sources of model incorrectness. First, the sampling model for the data may be incorrect. Second, even if the sampling model is correct, the prior may conict with the data in the sense that most of the prior probability is assigned to distributions in the model for which the data are surprising. We first check for the correctness of the sampling model using the methods of Section 9.1. If we do not find evidence against the sampling model, we next check for prior–data conict by seeing if the observed value of a minimal suffi­ cient statistic is surprising or not, with respect to the prior predictive distribution of this quantity. Even if a prior–data conict exists, posterior inferences may still be valid if we have enough data. EXERCISES 9.2.1 Suppose we observe the value s table. 2 from the model, given by the following f1 s f2 s (a) Do the observed data lead us to doubt the validity of the model? Explain why or why not. (b) Suppose the prior, given by 1 2 . Is there any evidence of a prior–data conict? 
(Hint: Compute the prior predictive for each possible data set and assess whether or not the observed data set is surprising.) 0 3 is placed on the parameter 1 508 Section 9.2: Checking for Prior–Data Conict 0 01. distribution, where 2 is obtained, then determine 1 6 is taken from a Bernoulli (c) Repeat part (b) using the prior given by 9.2.2 Suppose a sample of n has a Beta 3 3 prior distribution. If the value nx whether there is any prior–data conict. 9.2.3 In Example 9.2.3, establish that the prior predictive distribution of x is given by the N 0 9.2.4 Suppose we have a sample of n unknown and the value x the appropriate P­value to check for prior–data conict. 9.2.5 Suppose that x observed, then determine an appropriate P­value for checking for prior–data conict. 7 3 is observed. An N 0 1 prior is placed on Uniform[0 1] If the value x 2 0 n distribution. 2 distribution where is Compute 5 from an N Uniform[0 2 2 is ] and 2 0 COMPUTER EXERCISES 9.2.6 Suppose a sample of n 20 is taken from a Bernoulli has a Beta 3 3 prior distribution. If the value nx whether there is any prior–data conict. PROBLEMS distribution, where 6 is obtained, then determine 2 0 . Determine the prior predictive distribution of x xn is a sample from an N 2 0 distribution, where 9.2.7 Suppose that x1 N 0 9.2.8 Suppose that x1 xn is a sample from an Exponential distribution where Gamma 0 0 Determine the prior predictive distribution of x ] distribution, where 1 where 1 9.2.9 Suppose that s1 bution, where 1 distribution of x1 9.2.10 Suppose that x1 sn is a sample from a Multinomial 1 Dirichlet k 1 xk , where xi is the count in the ith category. xn is a sample from a Uniform[0 1 k distri­ 1 k Determine the prior predictive has prior density given by I[ 1 0 Determine the prior predictive distribution of x n . 9.2.11 Suppose we have the context of Example 9.2.3. Determine the limiting P­value for checking for prior–data conict as n Interpret the meaning of this P­value in terms of the prior and the true value of Uniform[0 1] 9.2.12 Suppose that x Geometric (a) Determine the appropriate P­value for checking for prior–data conict. (b) Based on the P­value determined in part (a), describe the circumstances under which evidence of prior–data conict will exist. (c) If we use a continuous prior that is positive at a point, then this an assertion that the point is possible. In light of this, discuss whether or not a continuous prior that is positive at 0 makes sense for the Geometric distribution and distribution. CHALLENGES 9.2.13 Suppose that X1 N 0 2 2 0 2 and 1 Xn is a sample from an N 2 Gamma 0 2 distribution where 0 . Then determine a form for the Chapter 9: Model Checking 509 prior predictive density of X S2 that you could evaluate without integrating (Hint: Use the algebraic manipulations found in Section 7.5.) 9.3 The Problem with Multiple Checks As we have mentioned throughout this text, model checking is a part of good statistical practice. In ot
her words, one should always be wary of the value of statistical work in which the investigators have not engaged in, and reported the results of, reasonably rigorous model checking. It is really the job of those who report statistical results to convince us that their models are reasonable for the data collected, bearing in mind the effects of both underfitting and overfitting. In this chapter, we have reported some of the possible model­checking approaches available. We have focused on the main categories of procedures and perhaps the most often used methods from within these. There are many others. At this point, we cannot say that any one approach is the best possible method. Perhaps greater insight along these lines will come with further research into the topic, and then a clearer recommendation could be made. One recommendation that can be made now, however, is that it is not reasonable to go about model checking by implementing every possible model­checking procedure you can. A simple example illustrates the folly of such an approach. EXAMPLE 9.3.1 Suppose that x1 Suppose we decide to check this model by computing the P­values xn is supposed to be a sample from the N 0 1 distribution. Pi P X 2 i x 2 i for i incorrect if the minimum of these P­values is less than 0.05. n where X 2 i 1 2 1 Furthermore, we will decide that the model is Now consider the repeated sampling behavior of this method when the model is correct. We have that if and only if and so min P1 Pn 0 05 max x 2 1 x 2 n 2 0 95 1 P min P1 Pn 0 05 P max 95 1 1 P max X 2 1 X 2 n 2 0 05 1 2 0 95 1 1 0 95 n 1 as n This tells us that if n is large enough, we will reject the model with virtual certainty even though it is correct! Note that n does not have to be very large for there 10 the to be an appreciable probability of making an error. For example, when n 510 Section 9.3: The Problem with Multiple Checks probability of making an error is 0.40; when n is 0.64; and when n 100 the probability of making an error is 0.99. 20 the probability of making an error We can learn an important lesson from Example 9.3.1, for, if we carry out too many model­checking procedures, we are almost certain to find something wrong — even if the model is correct. The cure for this is that before actually observing the data (so that our choices are not determined by the actual data obtained), we decide on a few relevant model­checking procedures to be carried out and implement only these. The problem we have been discussing here is sometimes referred to as the problem of multiple comparisons, which comes up in other situations as well — e.g., see Sec­ tion 10.4.1, where multiple means are compared via pairwise tests for differences in the means. One approach for avoiding the multiple­comparisons problem is to simply lower the cutoff for the P­value so that the probability of making a mistake is appro­ priately small. For example, if we decided in Example 9.3.1 that evidence against the model is only warranted when an individual P­value is smaller than 0.0001, then the probability of making a mistake is 0 01 when n 100 A difficulty with this approach generally is that our model­checking procedures will not be independent, and it does not always seem possible to determine an appropriate cutoff for the individual P­values. More advanced methods are needed to deal with this problem. Summary of Section 9.3 Carrying out too many model checks is not a good idea, as we will invariably find something that leads us to conclude that the model is incorrect. 
Rather than engaging in a “fishing expedition,” where we just keep on checking the model, it is better to choose a few procedures before we see the data, and use these, and only these, for the model checking. Chapter 10 Relationships Among Variables CHAPTER OUTLINE Section 1 Related Variables Section 2 Categorical Response and Predictors Section 3 Quantitative Response and Predictors Section 4 Quantitative Response and Categorical Predictors Section 5 Categorical Response and Quantitative Predictors Section 6 Further Proofs (Advanced) In this chapter, we are concerned with perhaps the most important application of sta­ tistical inference: the problem of analyzing whether or not a relationship exists among variables and what form the relationship takes. As a particular instance of this, recall the example and discussion in Section 5.1. Many of the most important problems in science and society are concerned with re­ lationships among variables. For example, what is the relationship between the amount of carbon dioxide placed into the atmosphere and global temperatures? What is the re­ lationship between class size and scholastic achievement by students? What is the relationship between weight and carbohydrate intake in humans? What is the relation­ ship between lifelength and the dosage of a certain drug for cancer patients? These are all examples of questions whose answers involve relationships among variables. We will see that statistics plays a key role in answering such questions. In Section 10.1, we provide a precise definition of what it means for variables to be related, and we distinguish between two broad categories of relationship, namely, association and cause–effect. Also, we discuss some of the key ideas involved in col­ lecting data when we want to determine whether a cause–effect relationship exists. In the remaining sections, we examine the various statistical methodologies that are used to analyze data when we are concerned with relationships. We emphasize the use of frequentist methodologies in this chapter. We give some examples of the Bayesian approach, but there are some complexities involved with the distributional problems associated with Bayesian methods that are best avoided at this 511 512 Section 10.1: Related Variables stage. Sampling algorithms for the Bayesian approach have been developed, along the lines of those discussed in Chapter 7 (see also Chapter 11), but their full discussion would take us beyond the scope of this text. It is worth noting, however, that Bayesian analyses with diffuse priors will often yield results very similar to those obtained via the frequentist approach. As discussed in Chapter 9, model checking is an important feature of any statistical analysis. For the models used in this chapter, a full discussion of the more rigorous P­ value approach to model checking requires more development than we can accomplish in this text. As such, we emphasize the informal approach to model checking, via residual and probability plots. This should not be interpreted as a recommendation that these are the preferred methods for such models. 10.1 Related Variables R1 defined on it. What does Consider a population with two variables X Y : it mean to say that the variables X and Y are related? Perhaps our first inclination is to bX 2 for say that there must be a formula relating the two variables, such as Y some choice of constants a and b or Y is the height of humans and suppose X of individual in centimeters. 
From our experience, we know that taller people tend to be heavier, so we believe that there is some kind of relationship between height and weight. We know, too, that there cannot be an exact formula that describes this relationship, because people with the same weight will often have different heights, and people with the same height will often have different weights. exp X etc. But consider a population in kilograms and Y is the weight of a 10.1.1 The Definition of Relationship If we think of all the people with a given weight x, then there will be a distribution that have weight x. We call this distribution the of heights for all those individuals conditional distribution of Y given that X x. We can now express what we mean by our intuitive idea that X and Y are related, for, as we change the value of the weight that we condition on, we expect the condi­ tional distribution to change. In particular, as x increases, we expect that the location of the conditional distribution will increase, although other features of the distribution may change as well. For example, in Figure 10.1.1 we provide a possible plot of two approximating densities for the conditional distributions of Y given X 70 kg and the conditional distribution of Y given X 90 kg. We see that the conditional distribution has shifted up when X goes from 70 to 90 kg but also that the shape of the distribution has changed somewhat as well. So we can say that a relationship definitely exists between X and Y at least in this population. No­ tice that, as defined so far, X and Y are not random variables, but they become so when from the population. In that case, the conditional distributions we randomly select referred to become the conditional probability distributions of the random variable Y given that we observe X 90 respectively. 70 and X Chapter 10: Relationships Among Variables 513 0.05 0.00 140 160 180 200 x Figure 10.1.1: Plot of two approximating densities for the conditional distribution of Y given 90 kg (solid line). X 70 kg (dashed line) and the conditional distribution of Y given X We will adopt the following definition to precisely specify what we mean when we say that variables are related. Definition 10.1.1 Variables X and Y are related variables if there is any change in the conditional distribution of Y given X x, as x changes. We could instead define what it means for variables to be unrelated. We say that variables X and Y are unrelated if they are independent. This is equivalent to Definition 10.1.1, because two variables are independent if and only if the conditional distribution of one given the other does not depend on the condition (Exercise 10.1.1). There is an apparent asymmetry in Definition 10.1.1, because the definition consid­ ers only the conditional distribution of Y given X and not the conditional distribution of X given Y But, if there is a change in the conditional distribution of Y given X x as we change x
then by the above comment, X and Y are not independent; thus there y as we change y must be a change in the conditional distribution of X given Y (also see Problem 10.1.23). Notice that the definition is applicable no matter what kind of variables we are dealing with. So both could be quantitative variables, or both categorical variables, or one could be a quantitative variable while the other is a categorical variable. Definition 10.1.1 says that X and Y are related if any change is observed in the conditional distribution. In reality, this would mean that there is practically always a relationship between variables X and Y It seems likely that we will always detect some difference if we carry out a census and calculate all the relevant conditional distribu­ tions. This is where the idea of the strength of a relationship among variables becomes relevant, for if we see large changes in the conditional distributions, then we can say a strong relationship exists. If we see only very small changes, then we can say a very weak relationship exists that is perhaps of no practical importance. 514 Section 10.1: Related Variables The Role of Statistical Models If a relationship exists between two variables, then its form is completely described by the set of conditional distributions of Y given X. Sometimes it may be necessary to describe the relationship using all these conditional distributions. In many problems, however, we look for a simpler presentation. In fact, we often assume a statistical model that prescribes a simple form for how the conditional distributions change as we change X Consider the following example. EXAMPLE 10.1.1 Simple Normal Linear Regression Model In Section 10.3.2, we will discuss the simple normal linear regression model, where the conditional distribution of quantitative variable Y given the quantitative variable X x, is assumed to be distributed N 1 2 2x where 1 individual and X the amount of salt the person consumed each day. 2 and 2 are unknown. For example, Y could be the blood pressure of an In this case, the conditional distributions have constant shape and change, as x changes, only through the conditional mean. The mean moves along the line given by 1 and slope 2 If this model is correct, then the variables 0 as this is the only situation in which the conditional 2x for some intercept are unrelated if and only if distributions can remain constant as we change x 1 2 Statistical models, like that described in Example 10.1.1, can be wrong. There is nothing requiring that two quantitative variables must be related in that way. For example, the conditional variance of Y can vary with x, and the very shape of the conditional distribution can vary with x, too. The model of Example 10.1.1 is an instance of a simplifying assumption that is appropriate in many practical contexts. However, methods such as those discussed in Chapter 9 must be employed to check model assumptions before accepting statistical inferences based on such a model. We will always consider model checking as part of our discussion of the various models used to examine the relationship among variables. Response and Predictor Variables Often, we think of Y as a dependent variable (depending on X) and of X as an indepen­ dent variable (free to vary). Our goal, then, is to predict the value of Y given the value of X. In this situation, we call Y the response variable and X the predictor variable. Sometimes, though, there is really nothing to distinguish the roles of X and Y . 
For example, suppose that X is the weight of an individual in kilograms and Y is the height in centimeters. We could then think of predicting weight from height or conversely. It is then immaterial which we choose to condition on. In many applications, there is more than one response variable and more than one predictor variable X We will not consider the situation in which we have more than is one response variable, but we will consider the case in which X X1 Xk Chapter 10: Relationships Among Variables 515 k­dimensional. Here, the various predictors that make up X could be all categorical, all quantitative, or some mixture of categorical and quantitative variables. Xk The definition of a relationship existing between response variable Y and the set of is exactly as in Definition 10.1.1. In particular, a relationship predictors X1 Xk if there is any change in the conditional distribution exists between Y and X1 of Y given X1 xk is varied. If such a relation­ xk when x1 ship exists, then the form of the relationship is specified by the full set of conditional distributions. Again, statistical models are often used where simplifying assumptions are made about the form of the relationship. Consider the following example. Xk x1 EXAMPLE 10.1.2 The Normal Linear Model with k Predictors In Section 10.3.4, we will discuss the normal multiple linear regression model. For this, the conditional distribution of quantitative variable Y given that the quantitative predictors X1 is assumed to be the Xk x1 xk N 1 2x1 k 1xk 2 k 1 and 2 are unknown. For example, Y could be blood distribution, where 1 pressure, X1 the amount of daily salt intake, X2 the age of the individual, X3 the weight of the individual, etc. In this case, the conditional distributions have constant shape and change, as the xk change only through the conditional mean, which values of the predictors x1 k 1xk Notice that, if this model changes according to the function 1 is correct, then the variables are unrelated if and only if 0 as this is the only situation in which the conditional distributions can remain constant as we change x1 xk . 2x1 k 1 2 When we split a set of variables Y X1 Xk into response Y and predictors Xk , we are implicitly saying that we are directly interested only in the con­ Xk There may be relationships among the X1 ditional distributions of Y given X1 predictors X1 Xk however, and these can be of interest. For example, suppose we have two predictors X1 and X2 and the conditional dis­ tribution of X1 given X2 is virtually degenerate at a value a cX2 for some constants a and c Then it is not a good idea to include both X1 and X2 in a model, such as that discussed in Example 10.1.2, as this can make the analysis very sensitive to small changes in the data. This is known as the problem of multicollinearity. The effect of multicollinearity, and how to avoid it, will not be discussed any further in this text. This is, however, a topic of considerable practical importance. Regression Models Suppose that the response Y is quantitative and we have k predictors X1 One of the most important simplifying assumptions used in practice is the regression the only thing that assumption, namely, we assume that, as we change X1 can possibly change about the conditional distribution of Y given X1 is the Xk Xk The importance of this assumption is that, to an­ conditional mean E Y X1 Xk we now need only consider how alyze the relationship between Y and X1 Xk Xk 516 Section 10.1: Related Variables Xk changes as X1 Xk Xk changes. 
Indeed, if E Y X1 E Y X1 does not change as X1 Xk changes, then there is no relationship between Y and the predictors. Of course, this kind of an analysis is dependent on the regression assumption holding, and the methods of Section 9.1 must be used to check this. Regres­ sion models — namely, statistical models where we make the regression assumption — are among the most important statistical models used in practice. Sections 10.3 and 10.4 discuss several instances of regression models. Regression models are often presented in the form Y E Y X1 Xk Z (10.1.1) Y Xk E Y X1 is fixed as we change X1 where Z is known as the error term. We see immedi­ ately that, if the regression assumption applies, then the conditional distribution of Z Xk and, conversely, if the con­ Xk given X1 ditional distribution of Z given X1 then the regression assumption holds. So when the regression assumption applies, (10.1.1) provides a decomposition of Y into two parts: (1) a part possibly dependent on and (2) a part that is always independent X1 of X1 namely, the error Z Note that Examples 10.1.1 and 10.1.2 can be written in the form (10.1.1), where Z is fixed as we change X1 namely, E Y X1 N 0 Xk Xk Xk Xk Xk 2 10.1.2 Cause–Effect Relationships and Experiments Suppose now that we have variables X and Y defined on a population and have concluded that a relationship exists according to Definition 10.1.1. This may be based or, more typically, we will have drawn a on having conducted a full census of simple random sample from and then used the methods of the remaining sections of this chapter to conclude that such a relationship exists. If Y is playing the role of the response and if X is the predictor, then we often want to be able to assert that changes in X are causing the observed changes in the conditional distributions of Y Of course, if there are no changes in the conditional distributions, then there is no relationship between X and Y and hence no cause–effect relationship, either. For example, suppose that the amount of carbon dioxide gas being released in the atmosphere is increasing, and we observe that mean global temperatures are rising. If we have reason to believe that the amount of carbon dioxide released can have an effect on temperature, then perhaps it is sensible to believe that the increase in carbon dioxide emissions is causing the observed increase in mean global temperatures. As another example, for many years it has been observed that smokers suffer from respiratory It seems reasonable, then, to diseases much more frequently than do nonsmokers. conclude that smoking causes an increased risk for respiratory disease. On the other hand, suppose we consider the relationship between weight and height. It seems clear that a relationship exists, but it does not make any sense to say that changes in one of the variables is causing the changes in the conditional distributions of the other. Chapte
r 10: Relationships Among Variables 517 Confounding Variables When can we say that an observed relationship between X and Y is a cause–effect relationship? If a relationship exists between X and Y then we know that there are at i.e., these two X least two values x1 and x2 such that fY conditional distributions are not equal. If we wish to say that this difference is caused by the change in X, then we have to know categorically that there is no other variable Z defined on that confounds with X The following example illustrates the idea of two variables confounding. x2 x1 fY X EXAMPLE 10.1.3 Suppose that is a population of students such that most females hold a part­time job and most males do not. A researcher is interested in the distribution of grades, as measured by grade point average (GPA), and is looking to see if there is a relationship between GPA and gender. On the basis of the data collected, the researcher observes a difference in the conditional distribution of GPA given gender and concludes that a relationship exists between these variables. It seems clear, however, that an assertion of a cause–effect relationship existing between GPA and gender is not warranted, as the difference in the conditional distributions could also be attributed to the difference in part­time work status rather than gender. In this example, part­time work status and gender are confounded. A more careful analysis might rescue the situation described in Example 10.1.3, for if X and Z denote the confounding variables, then we could collect data on Z as well and examine the conditional distributions fY z . In Example 10.1.3, these will be the conditional distributions of GPA, given gender and part­time work status. If these conditional distributions change as we change x for some fixed value of z then we could assert that a cause–effect relationship exists between X and Y provided there are no further confounding variables Of course, there are probably still more confounding variables, and we really should be conditioning on all of them. This brings up the point that, in any practical application, we almost certainly will never even know all the potential confounding variables. x Z X Controlling Predictor Variable Assignments Fortunately, there is sometimes a way around the difficulties raised by confounding variables. Suppose we can control the value of the variable X for any i.e., we can assign the value x to x for any of the possible values of x so that X In Example 10.1.3, this would mean that we could assign a part­time work status to any student in the population. Now consider the following idealized situation. Imagine x1 and then carrying out a census assigning every element x1 . Now imagine assigning every to obtain the conditional distribution fY x2 and then carrying out a census to obtain the conditional the value X X distribution fY x1 and fY x2 , then the only possible reason is that the value of X differs. Therefore, if fY x1 x2 we can assert that a cause–effect relationship exists. x2 . If there is any difference in fY fY X X X X the value X X 518 Section 10.1: Related Variables X X x1 and fY and randomly assign n1 of these the value X A difficulty with the above argument is that typically we can never exactly deter­ mine fY x2 But in fact, we may be able to sample from them; then the methods of statistical inference become available to us to infer whether n1 n2 from or not there is any difference. 
Suppose we take a random sample 1 x1 with the remaining ’s assigned ’s assigned the value x1 the value x2. We obtain the Y values y11 ’s assigned the value x2. Then it is ap­ and obtain the Y values y21 X y2n2 is a sample parent that y11 n2 is small relative to the population X from fY size, then we can consider these as i.i.d. samples from these conditional distributions. So we see that in certain circumstances, it is possible to collect data in such a way that we can make inferences about whether or not a cause–effect relationship exists. We now specify the characteristics of the relevant data collection technique. y1n1 is a sample from fY x2 . In fact, provided that n1 y1n1 for those y2n2 for those x1 and y21 Conditions for Cause–Effect Relationships First, if our inferences are to apply to a population then we must have a random sample from that population. This is just the characteristic of what we called a sampling study in Section 5.4, and we must do this to avoid any selection effects. So if the purpose of a study is to examine the relationship between the duration of migraine headaches and the dosage of a certain drug, the investigator must have a random sample from the population of migraine headache sufferers. Second, we must be able to assign any possible value of the predictor variable X to any selected . If we cannot do this, or do not do this, then there may be hidden confounding variables (sometimes called lurking variables) that are inuencing the conditional distributions of Y . So in a study of the effects of the dosage of a drug on migraine headaches, the investigator must be able to impose the dosage on each participant in the study. Third, after deciding what values of X we will use in our study, we must randomly allocate these values to members of the sample. This is done to avoid the possibility of selection effects. So, after deciding what dosages to use in the study of the effects of the dosage of a drug on migraine headaches, and how many participants will receive each dosage, the investigator must randomly select the individuals who will receive each dosage. This will (hopefully) avoid selection effects, such as only the healthiest individuals getting the lowest dosage, etc. When these requirements are met, we refer to the data collection process as an experiment. Statistical inference based on data collected via an experiment has the ca­ pability of inferring that cause–effect relationships exist, so this represents an important and powerful scientific tool. A Hierarchy of Studies Combining this discussion with Section 5.4, we see a hierarchy of data collection meth­ ods. Observational studies reside at the bottom of the hierarchy. Inferences drawn from observational studies must be taken with a degree of caution, for selection effects could mean that the results do not apply to the population intended, and the existence Chapter 10: Relationships Among Variables 519 of confounding variables means that we cannot make inferences about cause–effect re­ lationships. For sampling studies, we know that any inferences drawn will be about the appropriate population; but the existence of confounding variables again causes difficulties for any statements about the existence of cause–effect relationships, e.g., of Example just taking random samples of males and females from the population 10.1.3 will not avoid the confounding variables. At the top of the hierarchy reside experiments. It is probably apparent that it is often impossible to conduct an experiment. 
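When an experiment is feasible, the three requirements above amount to a concrete data-collection plan. The following sketch, written for the hypothetical migraine study, assumes an available population frame, three illustrative dosage levels (including a placebo level), and equal allocation; all of these particulars, and the use of Python, are assumptions made only for illustration.

import random

random.seed(1)

population_frame = list(range(100000))   # identifiers for migraine sufferers
dosages = [0.0, 50.0, 100.0]             # mg; the 0.0 level is the control
n_per_dosage = 20

# Requirement 1: a simple random sample from the population of interest.
sample = random.sample(population_frame, n_per_dosage * len(dosages))

# Requirements 2 and 3: we are able to impose any dosage on any sampled
# individual, and the dosages are allocated by a random shuffle.
treatments = [d for d in dosages for _ in range(n_per_dosage)]
random.shuffle(treatments)
assignment = dict(zip(sample, treatments))

for d in dosages:
    count = sum(1 for t in assignment.values() if t == d)
    print("dosage %5.1f mg assigned to %d sampled individuals" % (d, count))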
In Example 10.1.3, we cannot assign the value of gender, so nothing can be said about the existence of a cause–effect relationship between GPA and gender. There are many notorious examples in which assertions are made about the exis­ tence of cause–effect relationships but for which no experiment is possible. For exam­ ple, there have been a number of studies conducted where differences have been noted among the IQ distributions of various racial groups. It is impossible, however, to con­ trol the variable racial origin, so it is impossible to assert that the observed differences in the conditional distributions of IQ, given race, are caused by changes in race. Another example concerns smoking and lung cancer in humans. It has been pointed out that it is impossible to conduct an experiment, as we cannot assign values of the predictor variable (perhaps different amounts of smoking) to humans at birth and then observe the response, namely, whether someone contracts lung cancer or not. This raises an important point. We do not simply reject the results of analyses based on observational studies or sampling studies because the data did not arise from an ex­ periment. Rather, we treat these as evidence — potentially awed evidence, but still evidence. Think of eyewitness evidence in a court of law suggesting that a crime was com­ mitted by a certain individual. Eyewitness evidence may be unreliable, but if two or three unconnected eyewitnesses give similar reports, then our confidence grows in the reliability of the evidence. Similarly, if many observational and sampling studies seem to indicate that smoking leads to an increased risk for contracting lung cancer, then our confidence grows that a cause–effect relationship does indeed exist. Furthermore, if we can identify potentially confounding variables, then observational or sampling studies can be conducted taking these into account, increasing our confidence still more. Ul­ timately, we may not be able to definitively settle the issue via an experiment, but it is still possible to build overwhelming evidence that smoking and lung cancer do have a cause–effect relationship. 10.1.3 Design of Experiments Suppose we have a response Y and a predictor X (sometimes called a factor in experi­ mental contexts) defined on a population and we want to collect data to determine whether a cause–effect relationship exists between them. Following the discussion in Section 10.1.1, we will conduct an experiment. There are now a number of decisions to be made, and our choices constitute what we call the design of the experiment. For example, we are going to assign values of X to the sampled elements, now Which of the possible values of X called experimental units, n from 1 520 Section 10.1: Related Variables should we use? When X can take only a small finite number of values, then it is natural to use these values. On the other hand, when the number of possible values of X is very large or even infinite, as with quantitative pr
edictors, then we have to choose values of X to use in the experiment. xk for X. We refer to x1 Suppose we have chosen the values x1 xk as the levels of X; any particular assignment xi to a j in the sample will be called a treatment. Typically, we will choose the levels so that they span the possible range of X fairly uniformly. For example, if X is temperature in degrees Celsius, and we want to examine the relationship between Y and X for X in the range [0 100] then, using k 5 levels, we might take x1 25 x3 Having chosen the levels of X , we then have to choose how many treatments of each level we are going to use in the experiment, i.e., decide how many response values ni 1 we are going to observe at level xi for i 75 and x5 50 x4 0 x2 100 k In any experiment, we will have a finite amount of resources (money, time, etc.) at The question then is how n? If we know nothing about the then it makes sense to use balance, namely, our disposal, which determines the sample size n from should we choose the ni so that n1 conditional distributions fY choose n1 nk nk xi X On the other hand, suppose we know that some of the fY xi will exhibit greater variability than others. For example, we might measure variability by the vari­ ance of fY xi . Then it makes sense to allocate more treatments to the levels of X where the response is more variable. This is because it will take more observations to make accurate inferences about characteristics of such an fY than for the less variable conditional distributions. xi X X X As discussed in Sections 6.3.4 and 6.3.5, we also want to choose the ni so that any inferences we make have desired accuracy. Methods for choosing the sample sizes ni similar to those discussed in Chapter 7, have been developed for these more compli­ cated designs, but we will not discuss these any further here. Suppose, then, that we have determined set of ordered pairs as the experimental design. x1 n1 Consider some examples. xk nk We refer to this EXAMPLE 10.1.4 Suppose that is a population of students at a given university. The administration is concerned with determining the value of each student being assigned an academic advisor. The response variable Y will be a rating that a student assigns on a scale of 1 to 10 (completely dissatisfied to completely satisfied with their university experience) at the end of a given semester. We treat Y as a quantitative variable. A random sample of 100 students is selected from , and 50 of these are randomly selected to receive n advisers while the remaining 50 are not assigned advisers. Here, the predictor X is a categorical variable that indicates whether or not the 2 levels, and both are used in the experiment. 50 student has an advisor. There are only k If x1 and we have a balanced experiment. The experimental design is given by 1 denotes having an advisor, then n1 0 denotes no advisor and x2 n2 0 50 1 50 Chapter 10: Relationships Among Variables 521 At the end of the experiment, we want to use the data to make inferences about the conditional distributions fY 1 to determine whether a 0 and fY cause–effect relationship exists. The methods of Section 10.4 will be relevant for this. X X EXAMPLE 10.1.5 Suppose that is a population of dairy cows. A feed company is concerned with the relationship between weight gain, measured in kilograms, over a specific time period and the amount of a supplement, measured in grams/liter, of an additive put into the cows’ feed. Here, the response Y is the weight gain — a quantitative variable. 
The pre­ dictor X is the concentration of the additive. Suppose X can plausibly range between 0 and 2 so it is also a quantitative variable. The experimenter decides to use k 0 66 x3 1 32 and x4 determined to be appropriate. So the balanced experimental design is given by 2 00 Further, the sample sizes n1 n4 10 were 4 levels with x1 n2 0 00 x2 n3 0 00 10 0 66 10 1 32 10 2 00 10 . At the end of the experiment, we want to make inferences about the conditional distri­ butions fY 2 00 . The methods of Section 10.3 are relevant for this. and fY 0 00 1 32 0 66 fY fY X X X X Control Treatment, the Placebo Effect, and Blinding Notice that in Example 10.1.5, we included the level X 0, which corresponds to no application of the additive. This is called a control treatment, as it gives a baseline against which we can assess the effect of the predictor. In many experiments, it is important to include a control treatment. In medical experiments, there is often a placebo effect — that is, a disease sufferer given any treatment will often record an improvement in symptoms. The placebo effect is believed to be due to the fact that a sufferer will start to feel better simply because someone is paying attention to the condition. Accordingly, in any experiment to de­ termine the efficacy of a drug in alleviating disease symptoms, it is important that a control treatment be used as well. For example, if we want to investigate whether or not a given drug alleviates migraine headaches, then among the dosages we select for the experiment, we should make sure that we include a pill containing none of the drug (the so­called sugar pill); that way we can assess the extent of the placebo effect. Of course, the recipients should not know whether they are receiving the sugar pill or the drug. This is called a blind experiment. If we also conceal the identity of the treatment from the experimenters, so as to avoid any biasing of the results on their part, then this is known as a double­blind experiment. In Example 10.1.5, we assumed that it is possible to take a sample from the popula­ tion of all dairy cows. Strictly speaking, this is necessary if we want to avoid selection effects and make sure that our inferences apply to the population of interest. In prac­ tice, however, taking a sample of experimental units from the full population of interest is often not feasible. For example, many medical experiments are conducted on ani­ 522 Section 10.1: Related Variables mals, and these are definitely not random samples from the population of the particular animal in question, e.g., rats. In such cases, however, we simply recognize the possibility that selection effects or lurking variables could render invalid the conclusions drawn from such analyses when they are to be applied to the population of interest. But we still regard the results as evidence concerning the phenomenon under study. It is the job of the experimenter to come as close as possible to the idealized situation specified by a valid experiment; for example, randomization is still employed when assigning treatments to experimental units so that selection effects are avoided as much as possible. Interactions 1 X In the experiments we have discussed so far, there has been one predictor. In many practical contexts, there is more than one predictor. 
Suppose, then, that there are two predictors X and W and that we have decided on the levels x1 xk for X and the l for W One possibility is to look at the conditional distributions levels fY l to determine for i whether X and W individually have a relationship with the response Y Such an ap­ proach, however, ignores the effect of the two predictors together. In particular, the way the conditional distributions fY change as we change x may depend on ; when this is the case, we say that there is an interaction between the predictors. k and fY W for j x W xi X 1 1 j To investigate the possibility of an interaction existing between X and W we must k and sample from each of the kl distributions fY j xi W l The experimental design then takes the form for i X 1 1 j x1 1 n11 x2 1 n21 xk l nkl where ni j gives the number of applications of the treatment xi . We say that the two predictors X and W are completely crossed in such a design because each value of X used in the experiment occurs with each value of W used in the experiment Of course, we can extend this discussion to the case where there are more than two predictors. We will discuss in Section 10.4.3 how to analyze data to determine whether there are any interactions between predictors. j EXAMPLE 10.1.6 Suppose we have a population of students at a particular university and are investi­ gating the relationship between the response Y given by a student’s grade in calculus, and the predictors W and X. The predictor W is the number of hours of academic advising given monthly to a student; it can take the values 0 1 or 2. The predictor X 1 indicates large indicates class size, where X class size. So we have a quantitative response Y a quantitative predictor W taking three values, and a categorical predictor X taking two values. The crossed values of the predictors W X are given by the set 0 indicates small class size and so there are six treatments. To conduct the experiment, the university then takes a random sample of 6n students and randomly assigns n students to each treatment. Chapter 10: Relationships Among Variables 523 Sometimes we include additional predictors in an experimental design even when we are not primarily interested in their effects on the response Y We do this because we know that such a variable has a relationship with Y . Including such predictors allows us to condition on their values and so investigate more precisely the relationship Y has with the remaining predictors. We refer to such a variable as a blocking variable. EXAMPLE 10.1.7 Suppose the response variable Y is yield of wheat in bushels per acre, and the predictor variable X is an indicator variable for which of three types of wheat is being planted in an agricultural study. Each type of wheat is going to be planted on a plot of land, where all the plots are of the same size, but it is known that the plots used in the experiment will vary considerably with respect to their fertility. Note that such an experiment is another example of a situation in which it is impossible to randomly sample the experimental units (the plots) from the full population o
f experimental units. Suppose the experimenter can group the available experimental units into plots of low fertility and high fertility. We call these two classes of fields blocks. Let W indicate the type of plot. So W is a categorical variable taking two values. It then seems clear will be much less variable than that the conditional distributions fY X the conditional distributions fY X x x W In this case, W is serving as a blocking variable. The experimental units in a par­ ticular block, the one of low fertility or the one of high fertility, are more homogeneous than the full set of plots, so variability will be reduced and inferences will be more accurate. Summary of Section 10.1 We say two variables are related if the conditional distribution of one given the other changes at all, as we change the value of the conditioning variable. To conclude that a relationship between two variables is a cause–effect relation­ ship, we must make sure that (through conditioning) we have taken account of all confounding variables. Statistics provides a practical way of avoiding the effects of confounding vari­ ables via conducting an experiment. For this, we must be able to assign the val­ ues of the predictor variable to experimental units sampled from the population of interest. The design of experiments is concerned with determining methods of collecting the data so that the analysis of the data will lead to accurate inferences concerning questions of interest. EXERCISES 10.1.1 Prove that discrete random variables X and Y are unrelated if and only if X and Y are independent. 10.1.2 Suppose that two variables X and Y defined on a finite population tionally related as Y are func­ g X for some unknown nonconstant function g Explain how 524 Section 10.1: Related Variables this situation is covered by Definition 10.1.1, i.e., the definition will lead us to conclude that X and Y are related. What about the situation in which g x c for some value c for every x? (Hint: Use the relative frequency functions of the variables.) 10.1.3 Suppose that a census is conducted on a population and the joint distribution of X Y is obtained as in the following table. X X 1 2 Y 1 0 15 0 12 Y 2 0 18 0 09 Y 3 0 40 0 06 Determine whether or not a relationship exists between Y and X 10.1.4 Suppose that a census is conducted on a population and the joint distribution of X Y is obtained as in the following table 12 1 6 1 12 1 3 1 6 X 2 Determine whether or not X Determine whether or not a relationship exists between Y and X 10.1.5 Suppose that X is a random variable and Y and Y are related. What happens when X has a degenerate distribution? 10.1.6 Suppose a researcher wants to investigate the relationship between birth weight and performance on a standardized test administered to children at two years of age. If a relationship is found, can this be claimed to be a cause–effect relationship? Explain why or why not? 10.1.7 Suppose a large study of all doctors in Canada was undertaken to determine the relationship between various lifestyle choices and lifelength. If the conditional distribution of lifelength given various smoking habits changes, then discuss what can be concluded from this study. 10.1.8 Suppose a teacher wanted to determine whether an open­ or closed­book exam was a more appropriate way to test students on a particular topic. The response variable is the grade obtained on the exam out of 100. Discuss how the teacher could go about answering this question. 
10.1.9 Suppose a researcher wanted to determine whether or not there is a cause– effect relationship between the type of political ad (negative or positive) seen by a voter from a particular population and the way the voter votes. Discuss your advice to the researcher about how best to conduct the study. 10.1.10 If two random variables have a nonzero correlation, are they necessarily re­ lated? Explain why or why not. 10.1.11 An experimenter wants to determine the relationship between weight change Y over a specified period and the use of a specially designed diet. The predictor variable X is a categorical variable indicating whether or not a person is on the diet. A total of 200 volunteers signed on for the study; a random selection of 100 of these were given the diet and the remaining 100 continued their usual diet. (a) Record the experimental design. Chapter 10: Relationships Among Variables 525 (b) If the results of the study are to be applied to the population of all humans, what concerns do you have about how the study was conducted? (c) It is felt that the amount of weight lost or gained also is dependent on the initial weight W of a participant. How would you propose that the experiment be altered to take this into account? 10.1.12 A study will be conducted, involving the population of people aged 15 to 19 in a particular country, to determine whether a relationship exists between the response Y (amount spent in dollars in a week on music downloads) and the predictors W (gender) and X (age in years). (a) If observations are to be taken from every possible conditional distribution of Y given the two factors, then how many such conditional distributions are there? (b) Identify the types of each variable involved in the study. (c) Suppose there are enough funds available to monitor 2000 members of the popula­ tion. How would you recommend that these resources be allocated among the various combinations of factors? (d) If a relationship is found between the response and the predictors, can this be claimed to be a cause–effect relationship? Explain why or why not. (e) Suppose that in addition, it was believed that family income would likely have an effect on Y and that families could be classified into low and high income. Indicate how you would modify the study to take this into account. 10.1.13 A random sample of 100 households, from the set of all households contain­ ing two or more members in a given geographical area, is selected and their television viewing habits are monitored for six months. A random selection of 50 of the house­ holds is sent a brochure each week advertising a certain program. The purpose of the study is to determine whether there is any relationship between exposure to the brochure and whether or not this program is watched. (a) Identify suitable response and predictor variables. (b) If a relationship is found, can this be claimed to be a cause–effect relationship? Explain why or why not. 10.1.14 Suppose we have a quantitative response variable Y and two categorical pre­ dictor variables W and X, both taking values in 0 1 . Suppose the conditional distri­ butions of Y are given by . Does W have a relationship with Y ? Does X have a relationship with Y ? Explain your answers. 10.1.15 Suppose we have a quantitative response variable Y and two categorical pre­ dictor variables W and X both taking values in 0 1 Suppose the conditional distri­ 526 Section 10.1: Related Variables butions of Y are given by when i is 1 when i is odd and X i 0 otherwise. 
Does W have a relationship with Y ? Does X have a relationship with Y ? Explain your answers. 10.1.16 Do the predictors interact in Exercise 10.1.14? Do the predictors interact in Exercise 10.1.15? Explain your answers. 10.1.17 Suppose we have variables X and Y defined on the population 0 when i is even, Y i 10 , where X i divisible by 3 and Y i (a) Determine the relative frequency function of X (b) Determine the relative frequency function of Y (c) Determine the joint relative frequency function of X Y (d) Determine all the conditional distributions of Y given X (e) Are X and Y related? Justify your answer. 10.1.18 A mathematical approach to examining the relationship between variables X and Y is to see whether there is a function g such that Y g X Explain why this approach does not work for many practical applications where we are examining the relationship between variables. Explain how statistics treats this problem. 10.1.19 Suppose a variable X takes the values 1 and 2 on a population and the condi­ tional distributions of Y given X are N 0 5 when X 2. Determine whether X and Y are related and if so, describe their relationship. 10.1.20 A variable Y has conditional distribution given X specified by N 1 when X ship is. 10.1.21 Suppose that X between Y and X Are X and Y related? x Determine if X and Y are related and if so, describe what their relation­ X 2 Determine the correlation 1 and N 0 7 when X Uniform[ 1 1] and Y 2x x PROBLEMS 10.1.22 If there is more than one predictor involved in an experiment, do you think it is preferable for the predictors to interact or not? Explain your answer. Can the experimenter control whether or not predictors interact? 10.1.23 Prove directly, using Definition 10.1.1, that when X and Y are related variables defined on a finite population 10.1.24 Suppose that X Y Z are independent N 0 1 random variables and that U X Z V Calculate Cov U V ) 10.1.25 Suppose that X Y Z lated? Z Determine whether or not the variables U and V are related. (Hint: Multinomial n 1 3 1 3 1 3 Are X and Y re­ then Y and X are also related Y Chapter 10: Relationships Among Variables 527 10.1.26 Suppose that X Y Y are unrelated if and only if Corr X Y 10.1.27 Suppose that X Y Z have probability function pX Y Z If Y is related to X but not to Z then prove that pX Y Z x y z pY X y x pX Z x z pZ z Bivariate­Normal Show that X and 0 1 2 2 1 10.2 Categorical Response and Predictors There are two possible situations when we have a single categorical response Y and a single categorical predictor X The categorical predictor is either random or determin­ istic, depending on how we sample. We examine these two situations separately. 10.2.1 Random Predictor We consider the situation in which X is categorical, taking values in 1 Y is categorical, taking values in 1 population, then the values X i b If we take a sample xi are random, as are the values Y 1 a and n from the y j i Suppose the sample size n is very small relative
to the population size (so we can j we assume that i.i.d. sampling is applicable). Then, letting i j obtain the likelihood function (see Problem 10.2.15) P X i Y L 11 ab x1 y1 xn yn a b i 1 j 1 fi j i j (10.2.1) j An easy computation where fi j is the number of sample values with X Y (see Problem 10.2.16) shows that the MLE of fi j n is given by i j and that the standard error of this estimate (because the incidence of a sample member falling in the i i j and using Example 6.3.2) is given by j ­th cell is distributed Bernoulli 11 kl i i j 1 n i j . We are interested in whether or not there is a relationship between X and Y . To answer this, we look at the conditional distributions of Y given X The conditional distributions of Y given X using i i , are given in the following table. P X i b i1 X X 1 a Y 11 1 1 Y 1b b 1 a1 a ab a 528 Section 10.2: Categorical Response and Predictors fi , where fi i by i j Then estimating i j conditional distributions are as in the following table. fi j i fi1 fi b the estimated X X 1 a Y 1 f11 f1 Y b f1b f1 fa1 fa fab fa If we conclude that there is a relationship between X and Y then we look at the table of estimated conditional distributions to determine the form of the relationship, i.e., how the conditional distributions change as we change the value of X we are conditioning on How, then, do we infer whether or not a relationship exists between X and Y ? No relationship exists between Y and X if and only if the conditional distributions of Y given X x do not change with x This is the case if and only if X and Y are independent, and this is true if and only if for every i and j where j . Therefore, to assess whether or not there is a relationship between X and Y it is equivalent to assess the null hypothesis H0 : j for every i and j How should we assess whether or not the observed data are surprising when H0 holds? The methods of Section 9.1.2, and in particular Theorem 9.1.2, can be applied here, as we have that F11 F12 Fab Multinomial n 1 1 1 2 a b when H0 holds, where Fi j is the count in the i j ­th cell. To apply Theorem 9.1.2, we need the MLE of the parameters of the model under H0. The likelihood, when H0 holds, is L 1 a 1 b x1 y1 xn yn a b i 1 j 1 fi j . (10.2.2) i j From this, we deduce (see Problem 10.2.17) that the MLE’s of the i and by i f j n Therefore, the relevant chi­squared statistic is fi n and j j are given X 2 a b fi so we distribution because Under H0 the parameter space has dimension a compare the observed value of X 2 with the ab Consider an example. 2 Chapter 10: Relationships Among Variables 529 EXAMPLE 10.2.1 Piston Ring Data The following table gives the counts of piston ring failures, where variable Y is the compressor number and variable X is the leg position based on a sample of n 166. These data were taken from Statistical Methods in Research and Production, by O. L. Davies (Hafner Publishers, New York, 1961). Here, Y takes four values and X takes three values (N = North, C = Central, and S = South). X N C X S X Y 1 17 17 12 Y 2 11 9 13 Y 3 11 8 19 Y 4 14 7 28 The question of interest is whether or not there is any relation between compressor and 72 the conditional distributions 53 f2 leg position. Because f1 of Y given X are estimated as in the rows of the following table. 
41 and f3 Y X N 17 53 C 17 41 X 12 72 S X 1 0 321 0 415 0 167 Y 11 53 9 41 13 72 2 0 208 0 222 0 181 Y 11 53 8 41 19 72 3 0 208 0 195 0 264 Y 14 53 7 41 28 72 4 0 264 0 171 0 389 Comparing the rows, it certainly looks as if there is a difference in the conditional distributions, but we must assess whether or not the observed differences can be ex­ plained as due to sampling error. To see if the observed differences are real, we carry out the chi­squared test. Under the null hypothesis of independence, the MLE’s are given by 1 46 166 2 33 166 3 38 166 4 49 166 for the Y probabilities, and by 1 53 166 2 41 166 3 72 166 for the X probabilities. Then the estimated expected counts n i following table. j are given by the Y 1 X N 14 6867 C 11 3614 X 19 9518 S X Y 2 10 5361 8 1506 14 3133 Y 3 12 1325 9 3855 16 4819 4 Y 15 6446 12 1024 21 2530 The standardized residuals (using (9.1.6)) fi are as in the following table. 530 Section 10.2: Categorical Response and Predictors X N C X S X 1 Y 0 6322 1 7332 1 8979 2 Y 0 1477 0 3051 0 3631 3 Y 0 3377 0 4656 0 6536 4 Y 0 4369 1 5233 1 5673 All of the standardized residuals seem reasonable, and we have that X 2 with P 2 6 0 0685, which is not unreasonably small. 11 7223 11 7223 So, while there may be some indication that the null hypothesis of no relationship is false, this evidence is not overwhelming. Accordingly, in this case, we may assume that Y and X are independent and use the estimates of cell probabilities obtained under this assumption. We must also be concerned with model checking, i.e., is the model that we have as­ sumed for the data x1 y1 xn yn correct? If these observations are i.i.d., then indeed the model is correct, as that is all that is being effectively assumed. So we need to check that the observations are a plausible i.i.d. sample. Because the minimal suffi­ such a test could be based on the conditional cient statistic is given by f11 fab The distribution xn yn given f11 distribution of the sample x1 y1 theory for such tests is computationally difficult to implement, however, and we do not pursue this topic further in this text. fab 10.2.2 Deterministic Predictor 1 Consider again the situation in which X is categorical, taking values in 1 Y is categorical, taking values in 1 a and b But now suppose that we take a sample n from the population, where we have specified that ni sample members have i etc. This could be by assignment, when we are trying to determine the value X whether a cause–effect relationship exists; or we might have a populations a and want to see whether there is any difference in the distribution of Y between popu­ lations. Note that n1 na n 1 In both cases, we again want to make inferences about the conditional distributions of Y given X as represented by the following table difference in the conditional distributions means there is a relationship between Y and X If we denote the number of observations in the ith sample that have Y j by fi j then assuming the sample sizes are small relative to the population sizes, the likelihood function is given by L 1 X 1 b X a x1 y1 xn yn a b i 1 j 1 fi j j X i (10.2.3) Chapter 10: Relationships Among Variables 531 and the MLE is given by j X i fi j ni (Problem 10.2.18). 
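As a computational aside, the chi-squared test of Section 10.2.1 is straightforward to carry out directly. The sketch below, which assumes NumPy and SciPy rather than the software of Appendix B, reproduces the analysis of Example 10.2.1: it gives X² = 11.72 on (3 − 1)(4 − 1) = 6 degrees of freedom and a P-value of about 0.0685, and the estimated conditional distributions it prints match the rows of the table in that example. Essentially the same computation applies in the deterministic-predictor setting of Section 10.2.2, where the row totals ni are fixed by design.

import numpy as np
from scipy.stats import chi2

# Piston ring failure counts from Example 10.2.1 (rows are leg positions
# N, C, S; columns are compressors 1 through 4).
f = np.array([[17, 11, 11, 14],
              [17,  9,  8,  7],
              [12, 13, 19, 28]])
n = f.sum()

theta_row = f.sum(axis=1) / n          # MLEs of the X probabilities under H0
theta_col = f.sum(axis=0) / n          # MLEs of the Y probabilities under H0
expected = n * np.outer(theta_row, theta_col)

x2 = ((f - expected) ** 2 / expected).sum()
df = (f.shape[0] - 1) * (f.shape[1] - 1)
print("X^2 = %.4f, df = %d, P-value = %.4f" % (x2, df, chi2.sf(x2, df)))

# Estimated conditional distributions of Y given X (each row sums to one).
print(np.round(f / f.sum(axis=1, keepdims=True), 3))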
There is no relationship between Y and X if and only if the conditional distributions do not vary as we vary X or if and only if H0 : j X 1 j X a j for all j 1 hood function is given by b for some probability distribution 1 b Under H0 the likeli­ L 1 b x1 y1 xn yn b j 1 f j j (10.2.4) and the MLE of Theorem 9.1.2, we have that the statistic j is given by j f j n (see Problem 10.2.19). Then, applying X 2 a b fi j ni j 2 i 1 j 1 ni j has an approximate free parameters in the full model Consider an example. 1 2 a 1 b 1 distribution under H0 because there are a b 1 1 parameters in the independence model, and EXAMPLE 10.2.2 This example is taken from a famous applied statistics book, Statistical Methods, 6th ed., by G. Snedecor and W. Cochran (Iowa State University Press, Ames, 1967). In­ dividuals were classified according to their blood type Y (O, A, B, and AB, although the AB individuals were eliminated, as they were small in number) and also classified according to X their disease status (peptic ulcer = P, gastric cancer = G, or control = C). So we have three populations; namely, those suffering from a peptic ulcer, those suffering from gastric cancer, and those suffering from neither. We suppose further that the individuals involved in the study can be considered as random samples from the respective populations. The data are given in the following table 983 383 2892 679 416 2625 B Total 1796 883 6087 134 84 570 The estimated conditional distributions of Y given X are then as follows 983 1796 383 883 C 2892 6087 0 547 0 434 0 475 679 1796 416 883 2625 6087 0 378 0 471 0 431 134 1796 84 883 570 6087 0 075 0 095 0 093 532 Section 10.2: Categorical Response and Predictors We now want to assess whether or not there is any evidence for concluding that a difference exists among these conditional distributions. Under the null hypothesis that no difference exists, the MLE’s of the probabilities P Y A , and 3 P Y B are given by P Y O 1 2 1 2 3 983 1796 679 1796 134 1796 383 883 416 883 84 883 2892 6087 2625 6087 570 6087 0 4857 0 4244 0 0899 Then the estimated expected counts ni j are given by the following table. Y O 872 3172 428 8731 C 2956 4559 X P X G X Y A 762 2224 374 7452 2583 3228 Y B 161 4604 79 3817 547 2213 The standardized residuals (using (9.1.6)) the following table. fi j ni j ni 1 1 2 are given by 2219 3 0910 1 659 2 Y A 3 9705 2 8111 1 0861 Y B 2 2643 0 5441 1 0227 We have that X 2 0 0000 so we have strong evidence against the null hypothesis of no relationship existing between Y and X Ob­ serve the large residuals when X 40 5434 and P 2 4 P and Y O, Y A. 40 5434 We are left with examining the conditional distributions to ascertain what form the relationship between Y and X takes. A useful tool in this regard is to plot the conditional distributions in bar charts, as we have done in Figure 10.2.1. From this, we see that the peptic ulcer population has a greater proportion of blood type O than the other populations. 1.0 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0. Figure 10.2.1: Plot of the conditional distributions of Y given X in Example 10.2.2. Chapter 10: Relationships Among Variables 533 10.2.3 Bayesian Formulation for the unknown values of the parameters of the models We now add a prior density discussed in Sections 10.2.1 and 10.2.2. Depending on how we choose , and de­ pending on the particular computation we want to carry out, we could be faced with some difficult computational problems. Of cour
se, we have the Monte Carlo methods available in such circumstances, which can often render a computation fairly straight­ forward. The most common choice of prior in these circumstances is to choose a conjugate prior. Because the likelihoods discussed in this section are as in Example 7.1.3, we see immediately that Dirichlet priors will be conjugate for the full model in Section 10.2.1 and that products of independent Dirichlet priors will be conjugate for the full model in Section 10.2.2. In Section 10.2.1, the general likelihood — i.e., no restrictions on the i j — is of the form L 11 ab x1 y1 xn yn a b i 1 j 1 fi j i j If we place a Dirichlet is proportional to 11 ab prior on the parameter, then the posterior density a b i 1 j 1 i j 1 fi j i j so the posterior is a Dirichlet f11 In Section 10.2.2, the general likelihood is of the form fab 11 ab distribution. L 1 X 1 b X a x1 y1 xn yn a b i 1 j 1 fi Because distribution Dirichlet 1 i 1 X i b X i 1 for each i a we must place a prior on each If we choose the prior on the ith distribution to be 1 a i , then the posterior density is proportional to a b i 1 j 1 fi j j i j i 1 . We recognize this as the product of independent Dirichlet distributions, with the poste­ b X i equal to a rior distribution on 1 X i Dirichlet fi1 1 i fi b b i distribution. A special and important case of the Dirichlet priors corresponds to the situation in which we feel that we have no information about the parameter. In such a situation, it 534 Section 10.2: Categorical Response and Predictors makes sense to choose all the parameters of the Dirichlet to be 1, so that the priors are all uniform. There are many characteristics of a Dirichlet distribution that can be evaluated in closed form, e.g., the expectation of any polynomial (see Problem 10.2.20). But still there will be many quantities for which exact computations will not be available. It turns out that we can always easily generate samples from Dirichlet distributions, pro­ vided we have access to a generator for beta distributions. This is available with most statistical packages. We now discuss how to do this. 1 EXAMPLE 10.2.3 Generating from a Dirichlet The technique we discuss here is a commonly used method for generating from multi­ variate distributions. If we want to generate a value of the random vector X1 then we can proceed as follows. First, generate a value x1 from the marginal distrib­ ution of X1 Next, generate a value x2 from the conditional distribution of X2 given x1 Then generate a value x3 from the conditional distribution of X3 given that X1 x1 and X2 X1 k Distribution x2 etc. Xk If the distribution of X is discrete, then we have that the probability of a particular xk arising via this scheme is vector of values x1 x2 P X1 x1 P X2 x2 X1 x1 P Xk xk X1 x1 Xk 1 xk 1 Expanding each of these conditional probabilities, we obtain x1 P X1 Xk 1 xk 1 Xk Xk 1 xk 1 is and so x1 x2 which equals P X1 a value from the joint distribution of X1 Xk This approach also works for ab­ solutely continuous distributions, and the proof is the same but uses density functions instead. 
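As a sketch of how this composition method can be coded, the following Python function (NumPy is an assumed tool here; the text's own code appears in Appendix B) generates a Dirichlet(α1, ..., αk) vector coordinate by coordinate, using the Beta conditional distributions worked out in the remainder of this example, and checks the sample means of many draws against the known means αi / (α1 + · · · + αk).

import numpy as np

rng = np.random.default_rng(0)

def sample_dirichlet(alpha, rng):
    alpha = np.asarray(alpha, dtype=float)
    k = len(alpha)
    x = np.zeros(k)
    remaining = 1.0
    for i in range(k - 1):
        # Given the earlier coordinates, X_i is the remaining probability mass
        # times an independent Beta(alpha_i, alpha_{i+1} + ... + alpha_k).
        u = rng.beta(alpha[i], alpha[i + 1:].sum())
        x[i] = remaining * u
        remaining -= x[i]
    x[k - 1] = remaining                # the coordinates must sum to one
    return x

alpha = [2.0, 3.0, 1.0, 1.5]            # the Dirichlet(2, 3, 1, 1.5) of the example
draws = np.array([sample_dirichlet(alpha, rng) for _ in range(100000)])
print("sample means:", np.round(draws.mean(axis=0), 3))
print("exact means: ", np.round(np.array(alpha) / sum(alpha), 3))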
P X1 x1 X2 x2 P X1 x1 Xk 1 xk 1 Xk P X1 x1 P X1 x1 x1 xk xk xk In the case of X1 lenge 10.2.23) X1 Beta has the same distribution as 1 Xk 1 2 1 x1 Dirichlet 1 k and Xi given X1 xi 1 Ui where k we have that (see Chal­ xi 1 Xi 1 x1 Ui Beta i i 1 k 1 X1 Xk 1 for any 2 1 X1 U2 generate U3 Beta 1 k , generate U2 3 4 and U2 Dirichlet distribution So we generate X1 Beta Uk 1 are independent Note that Xk Beta k and put X2 3 2 k and put X3 X1 Below, we present a table of a sample of n X2 U3 etc. 1 5 values from a Dirichlet 2 3 1 1 5 distribution. X1 0 116159 0 166639 0 411488 0 483124 0 117876 X2 0 585788 0 566369 0 183686 0 316647 0 147869 X3 0 229019 0 056627 0 326451 0 115544 0 418013 X4 0 069034 0 210366 0 078375 0 084684 0 316242 1 2 3 4 5 Appendix B contains the code used for this. It can be modified to generate from any Dirichlet distribution. Chapter 10: Relationships Among Variables 535 Summary of Section 10.2 In this section, we have considered the situation in which we have a categorical response variable and a categorical predictor variable. We distinguished two situations. The first arises when the value of the predictor variable is not assigned, and the second arises when it is. In both cases, the test of the null hypothesis that no relationship exists involved the chi­squared test. EXERCISES 10.2.1 The following table gives the counts of accidents for two successive years in a particular city. Year 1 Year 2 June 60 80 July August 100 100 80 60 Is there any evidence of a difference in the distribution of accidents for these months between the two years? 10.2.2 The following data are from a study by Linus Pauling (1971) (“The significance of the evidence about ascorbic acid and the common cold,” Proceedings of the National Academy of Sciences, Vol. 68, p. 2678), concerned with examining the relationship between taking vitamin C and the incidence of colds. Of 279 participants in the study, 140 received a placebo (sugar pill) and 139 received vitamin C. Placebo Vitamin C No Cold Cold 109 122 31 17 Assess the null hypothesis that there is no relationship between taking vitamin C and the incidence of the common cold. 10.2.3 A simulation experiment is carried out to see whether there is any relationship between the first and second digits of a random variable generated from a Uniform[0 1] distribution. A total of 1000 uniforms were generated; if the first and second digits were in 0 1 2 3 4 they were recorded as a 0, and as a 1 otherwise. The cross­classified data are given in the following table. First digit 0 First digit 1 Second digit 0 240 255 Second digit 1 250 255 Assess the null hypothesis that there is no relationship between the digits. 10.2.4 Grades in a first­year calculus course were obtained for randomly selected stu­ dents at two universities and classified as pass or fail. The following data were ob­ tained. University 1 University 2 Fail 33 22 Pass 143 263 536 Section 10.2: Categorical Response and Predictors Is there any evidence of a relationship between calculus grades and university? 10.2.5 The following data are recorded in Statistical Methods for Research Workers, by R. A. Fisher (Hafner Press, New York, 1922), and show the classifications of 3883 Scottish children by gender (X and hair color (Y ). X m f X Y Y fair 592 544 red 119 97 Y medium Y 849 677 Y dark 504 451 jet black 36 14 (a) Is there any evidence for a relationship between hair color and gender? (b) Plot the appropriate bar chart(s). 
(c) Record the residuals and relate these to the results in parts (a) and (b). What do you conclude about the size of any deviation from independence? 10.2.6 Suppose we have a controllable predictor X that takes four different values, and we measure a binary­valued response Y A random sample of 100 was taken from the population and the value of X was randomly assigned to each individual in such a way that there are 25 sample members taking each of the possible values of X Suppose that the following data were obtained. Y Y 0 1 X 1 12 13 X 2 10 15 X 3 16 9 X 4 14 11 (a) Assess whether or not there is any evidence against a cause–effect relationship existing between X and Y (b) Explain why it is possible in this example to assert that any evidence found that a relationship exists is evidence that a cause–effect relationship exists. 10.2.7 Write out in full how you would generate a value from a Dirichlet 1 1 1 1 distribution. and we 10.2.8 Suppose we have two categorical variables defined on a population conduct a census. How would you decide whether or not a relationship exists between X and Y ? If you decided that a relationship existed, how would you distinguish between a strong and a weak relationship? 10.2.9 Suppose you simultaneously roll two dice n times and record the outcomes. Based on these values, how would you assess the null hypothesis that the outcome on each die is independent of the outcome on the other? 10.2.10 Suppose a professor wants to assess whether or not there is any difference in the final grade distributions (A, B, C, D, and F) between males and females in a particular class. To assess the null hypothesis that there is no difference between these distributions, the professor carries out a chi­squared test. (a) Discuss how the professor carried out this test. (b) If the professor obtained evidence against the null hypothesis, discuss what con­ cerns you have over the use of the chi­squared test. 10.2.11 Suppose that a chi­squared test is carried out, based on a random sample of n from a population, to assess whether or not two categorical variables X and Y are Chapter 10: Relationships Among Variables 537 independent. Suppose the P­value equals 0.001 and the investigator concludes that there is evidence against independence. Discuss how you would check to see if the deviation from independence was of practical significance. PROBLEMS 10.2.12 In Example 10.2.1, place a uniform prior on the parameters (a Dirichlet distri­ bution with all parameters equal to 1) and then determine the posterior distribution of the parameters. 10.2.13 In Example 10.2.2, place a uniform prior on the parameters of each population (a Dirichlet distribution with all parameters equal to 1) and such that the three priors are independent. Then determine the posterior distribution. 10.2.14 In a 2 are independent if and only if 2 table with probabilities i j prove that the row and column variables 11 22 12 21 1 ab fi j 11 j 0 for every i in (10.2.1) is given by i j namely, we have independence if and only if the cross­ratio equals 1. 10.2.15 Establish that the likelihood in (10.2.1) is correct when the population size is infinite (or when we are sampling with replacement from the population). 10.2.16 (MV) Prove that the MLE of fi j n Assume that function on this parameter space must achieve its maximum at some point in and that, if the function is continuously diffe
rentiable at such a point, then all its first­ order partial derivatives are zero there. This will allow you to conclude that the unique solution to the score equations must be the point where the log­likelihood is maximized. Try the case where a 2 first.) 10.2.17 (MV) Prove that the MLE of by i Use the hint in Problem 10.2.16.) 10.2.18 (MV) Prove that the MLE of in (10.2.2) is given j (Hint: (Hint: Use the facts that a continuous f j n Assume that fi b 0 for every i 1 0 f j fi n and 2 b a 1 j 1 X 1 0 for every i b X a in (10.2.3) is given by j. (Hint: Use the hint in Problem fi j ni Assume that fi j j X i 10.2.16.) 10.2.19 (MV) Prove that the MLE of Assume that f j 10.2.20 Suppose that X X lk E X l1 k 1 0 for every i X1 b in (10.2.4) is given by j j (Hint: Use the hint in Problem 10.2.16.) 1 f j n. Dirichlet in terms of the gamma function, when li Xk 1 1 0 for i k . Determine 1 k. COMPUTER PROBLEMS 2 1 10.2.21 Suppose that as in Exercise 10.2.7. 3 104 from this distribution and use this to estimate the Generate a sample of size N expectations of the i . Compare these estimates with their exact values. (Hint: There is some relevant code in Appendix B for the generation; see Appendix C for formulas for the exact values of these expectations.) Dirichlet 1 1 1 1 4 538 Section 10.3: Quantitative Response and Predictors 104 from the posterior 10.2.22 For Problem 10.2.12, generate a sample of size N distribution of the parameters and use this to estimate the posterior expectations of the cell probabilities. Compare these estimates with their exact values. (Hint: There is some relevant code in Appendix B for the generation; see Appendix C for formulas for the exact values of these expectations.) CHALLENGES 10.2.23 (MV) Establish the validity of the method discussed in Example 10.2.3 for generating from a Dirichlet k distribution. 1 10.3 Quantitative Response and Predictors When the response and predictor variables are all categorical, it can be difficult to for­ mulate simple models that adequately describe the relationship between the variables. We are left with recording the conditional distributions and plotting these in bar charts. When the response variable is quantitative, however, useful models have been formu­ lated that give a precise mathematical expression for the form of the relationship that may exist. We will study these kinds of models in the next three sections. This section concentrates on the situation in which all the variables are quantitative. 10.3.1 The Method of Least Squares The method of least squares is a general method for obtaining an estimate of a distribu­ tion mean. It does not require specific distributional assumptions and so can be thought of as a distribution­free method (see Section 6.4). Suppose we have a random variable Y and we want to estimate E Y based on a yn The following principle is commonly used to generate estimates. sample y1 The least­squares principle says that we select the point t y1 in the set of possible values for E Y that minimizes the sum of squared 2 deviations (hence, “least squares”) given by Such an estimate is called a least­squares estimate. n i 1 yi t y1 yn yn Note that a least­squares estimate is defined for every sample size, even n To implement least squares, we must find the minimizing point t y1 n i 1 yi haps a first guess at this value is the sample average y Because t y1 0 we have t y1 n y yn yn y n i 1 yi 1 yn . 
Per yi yi yi t y1 2 yn n i 1 yi y y t y1 2 yn y 2 2 n i 1 yi y y t y1 yn n i 1 y t y1 2 yn y 2 n y t y1 2 yn (10.3.1) Chapter 10: Relationships Among Variables 539 n and this is assumed i 1 yi Therefore, the smallest possible value of (10.3.1) is by taking t y1 y Note, however, that y might not be a possible value for E Y and that, in such a case, it will not be the least­squares estimate In general, (10.3.1) says that the least­squares estimate is the value t y1 yn that is closest to y and is a possible value for E Y . y2 yn Consider the following example. EXAMPLE 10.3.1 Suppose that Y has one of the distributions on S 0 1 given in the following table p1 y p2 y Then the mean of Y is given by E1 Y 0 1 2 1 1 2 1 2 or E2 Y 0 1 3 1 2 3 2 3 . Now suppose we observe the sample 0 0 1 1 1 and so y possible values for E Y are in 1 2 2 3 we see that t 0 0 1 1 1 3 5 0 004 while 3 5 1 2 2 2 3 2 0 01 3 5 Because the 2 3 because a b Whenever the set of possible values for E Y is an interval a b however, and P Y a b This implies that y is the least­squares estimator of E Y So we see that in quite general circumstances, y is the least­squares estimate. There is an equivalence between least squares and the maximum likelihood method 1 then y when we are dealing with normal distributions. EXAMPLE 10.3.2 Least Squares with Normal Distributions Suppose that y1 known. Then the MLE of is obtained by finding the value of yn is a sample from an N 2 0 distribution, where is un­ that maximizes L y1 yn exp n 2 2 0 y 2 Equivalently, the MLE maximizes the log­likelihood l y1 yn n 2 2 0 y 2 So we need to find the value of that minimizes y 2 just as with least squares In the case of the normal location model, we see that the least­squares estimate and the MLE of agree. This equivalence is true in general for normal models (e.g., the location­scale normal model), at least when we are considering estimates of location parameters. Some of the most important applications of least squares arise when we have that indicates that the response is a random vector Y Rn (the prime Y1 Yn 540 Section 10.3: Quantitative Response and Predictors we consider Y as a column), and we observe a single observation y Rn The expected value of Y component random variables, namely, Rn is defined to be the vector of expectations of its y1 yn E Y E Y1 E Yn Rn The least­squares principle then says that, based on the single observation y yn , we must find y1 t y t y1 yn t1 y1 yn tn y1 yn in the set of possible values for E Y (a subset of Rn), that minimizes n i 1 yi ti y1 2 yn (10.3.2) So t y is the possible value for E Y that is closest to y as the squared distance between two points x y Rn is given by 2. n i 1 xi yi As is common in statistical applications, suppose that there are predictor variables that may be related to Y and whose values are observed. In this case, we will replace E Y by its conditional mean, given the observed values of the predictors. The least­ squares estimate of the conditional mean is then the value t y1 in the set of possible values for the conditional mean of Y that minimizes (10.3.2). We will use this definition in the following sections. yn Finding the minimizing value of t y in (10.3.2) can be a challenging optimization problem when the set of possible values for the mean is complicated. We will now apply least squares to some important problems where the least­squares solution can be found in closed form. 
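A quick numerical check of Example 10.3.1 may help fix the idea. In the sketch below (plain Python; nothing beyond the numbers of that example is assumed), the sum of squared deviations is evaluated at the two possible values 1/2 and 2/3 of E(Y) and at ȳ itself; the possible value closest to ȳ = 0.6, namely 2/3, is the least-squares estimate.

# The observed sample and the two possible values of E(Y) are those of
# Example 10.3.1; everything else is plain Python.
y = [0, 0, 1, 1, 1]
n = len(y)
ybar = sum(y) / n                        # 0.6

def sum_of_squares(t, ys):
    return sum((yi - t) ** 2 for yi in ys)

for t in (1 / 2, 2 / 3, ybar):
    print("t = %.4f  sum of squared deviations = %.4f" % (t, sum_of_squares(t, y)))
# By (10.3.1), each value exceeds the minimum by n * (ybar - t)^2, so among the
# possible values {1/2, 2/3} the least-squares estimate is 2/3, the value
# closest to ybar.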
10.3.2 The Simple Linear Regression Model Suppose we have a single quantitative response variable Y and a single quantitative predictor X e.g., Y could be blood pressure measured in pounds per square inch and X could be age in years. To study the relationship between these variables, we examine the conditional distributions of Y given X x to see how these change as we change x We might choose to examine a particular characteristic of these distributions to see how it varies with x Perhaps the most commonly used characteristic is the conditional mean of Y given X x (see Section 3.5). x or E Y X In the regression model (see Section 10.1), we assume that the conditional distrib­ utions have constant shape and that they change, as we change x at most through the conditional mean. In the simple linear regression model, we assume that the only way the conditional mean can change is via the relationship E Y X x 1 2x Chapter 10: Relationships Among Variables 541 R1 (the intercept term) and 2 for some unknown values of coefficient). We also refer to 1 and 2 as the regression coefficients. 1 R1 (the slope Suppose we observe the independent values x1 y1 using the simple linear regression model, we have that xn yn for X Y Then, E Y1 Yn X1 x1 Xn xn 1 1 2x1 2xn . (10.3.3) Equation (10.3.3) tells us that the conditional expected value of the response Y1 Yn is in a particular subset of Rn Furthermore, (10.3.2) becomes n i 1 yi ti y 2 n i 1 yi 1 2 2xi (10.3.4) and we must find the values of called the least­squares estimates of 1 and 2. 1 and 2 that minimize (10.3.4). These values are Before we show how to do this, consider an example. EXAMPLE 10.3.3 Suppose we obtained the following n 10 data points xi yi . 3 9 8 9 5 4 12 10 4 5 6 4 1 10 In Figure 10.3.1, we have plotted these points together with the line y 1 x 10 y 0 ­10 ­5 0 x 5 Figure 10.3.1: A plot of the data points xi yi (+) and the line y 1 x in Example 10.3.3. Notice that with 1 1 and 2 1 then yi 1 2 2xi yi 1 2 xi 542 Section 10.3: Quantitative Response and Predictors is the squared vertical distance between the point xi yi and the point on the line with the same x value. So (10.3.4) is the sum of these squared deviations and in this case equals 141 15 1 1 and 2 If 1 were the least­squares estimates, then 141.15 would be equal to the smallest possible value of (10.3.4). In this case, it turns out (see Example 10.3.4) that the least­squares estimates are given by the values 2 06, and the minimized value of (10.3.4) is given by 8 46 which is much smaller than 141 15 1 33 1 2 So we see that, in finding the least­squares estimates, we are in essence finding 2x that best fits the data, in the sense that the sum of squared vertical the line deviations of the observed points to the line is minimized. 1 Scatter Plots As part of Example 10.3.3, we plotted the points x1 y1 xn yn in a graph. This is called a scatter plot, and it is a recommended first step as part of any analysis of the relationship between quantitative variables X and Y . A scatter plot can give us a very general idea of whether or not a relationship exists and what form it might ta
ke. It is important to remember, however, that the appearance of such a plot is highly dependent on the scales we choose for the axes. For example, we can make a scatter plot look virtually flat (and so indicate that no relationship exists) by choosing too wide a range of tick marks on the y-axis. So we must always augment a scatter plot with a statistical analysis based on numbers.

Least-Squares Estimates, Predictions, and Standard Errors

For the simple linear regression model, we can work out exact formulas for the least-squares estimates of β1 and β2.

Theorem 10.3.1 Suppose that E(Y | X = x) = β1 + β2 x and we observe the independent values (x1, y1), ..., (xn, yn) for (X, Y). Then the least-squares estimates of β1 and β2 are given by

b1 = ȳ − b2 x̄  and  b2 = Σ_{i=1}^n (xi − x̄)(yi − ȳ) / Σ_{i=1}^n (xi − x̄)²,

respectively, whenever Σ_{i=1}^n (xi − x̄)² ≠ 0.

PROOF The proof of this result can be found in Section 10.6.

We call the line y = b1 + b2 x the least-squares line, or best-fitting line, and b1 + b2 x is the least-squares estimate of E(Y | X = x). Note that Σ_{i=1}^n (xi − x̄)² = 0 if and only if x1 = · · · = xn. In such a case, we cannot use least squares to estimate β2, although we can still estimate E(Y | X = x) (see Problem 10.3.19).

Now that we have estimates b1, b2 of the regression coefficients, we want to use these for inferences about β1 and β2. These estimates have the unbiasedness property.

Theorem 10.3.2 If E(Y | X = x) = β1 + β2 x and we observe the independent values (x1, y1), ..., (xn, yn) for (X, Y), then
(i) E(B1 | X1 = x1, ..., Xn = xn) = β1
(ii) E(B2 | X1 = x1, ..., Xn = xn) = β2.

PROOF The proof of this result can be found in Section 10.6.

Note that Theorem 10.3.2 and the theorem of total expectation imply that E(B1) = β1 and E(B2) = β2 unconditionally as well.

Adding the assumption that the conditional variances exist, we have the following theorem.

Theorem 10.3.3 If E(Y | X = x) = β1 + β2 x, Var(Y | X = x) = σ² for every x, and we observe the independent values (x1, y1), ..., (xn, yn) for (X, Y), then
(i) Var(B1 | X1 = x1, ..., Xn = xn) = σ² (1/n + x̄² / Σ_{i=1}^n (xi − x̄)²)
(ii) Var(B2 | X1 = x1, ..., Xn = xn) = σ² / Σ_{i=1}^n (xi − x̄)²
(iii) Cov(B1, B2 | X1 = x1, ..., Xn = xn) = −σ² x̄ / Σ_{i=1}^n (xi − x̄)².

PROOF See Section 10.6 for the proof of this result.

For the least-squares estimate b1 + b2 x of the mean E(Y | X = x) = β1 + β2 x, we have the following result.

Corollary 10.3.1

Var(B1 + B2 x | X1 = x1, ..., Xn = xn) = σ² (1/n + (x − x̄)² / Σ_{i=1}^n (xi − x̄)²)    (10.3.5)

PROOF See Section 10.6 for the proof of this result.

A natural predictor of a future value of Y when X = x is given by the conditional mean E(Y | X = x) = β1 + β2 x. Because we do not know the values of β1 and β2, we use the estimated mean b1 + b2 x as the predictor. When we are predicting Y at an x value that lies within the range of the observed values of X, we refer to this as an interpolation. When we want to predict at an x value that lies outside this range, we refer to this as an extrapolation. Extrapolations are much less reliable than interpolations. The farther away x is from the observed range of X values, then, intuitively, the less reliable we feel such a prediction will be. Such considerations should always be borne in mind.

From (10.3.5), we see that the variance of the prediction at the value X = x increases as x moves away from x̄. So to a certain extent, the standard error does reflect this increased uncertainty, but note that its form is based on the assumption that the simple linear regression model is correct.
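The formulas of Theorem 10.3.1 and Corollary 10.3.1 translate directly into code. The sketch below uses a small made-up data set (it is not the data of Example 10.3.3) and treats σ² as known when evaluating (10.3.5); an unbiased estimate of σ² is given in (10.3.6) below.

def least_squares_line(x, y):
    # The closed-form estimates of Theorem 10.3.1.
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b2 = sxy / sxx                       # slope estimate
    b1 = ybar - b2 * xbar                # intercept estimate
    return b1, b2, xbar, sxx

def prediction_variance(x0, sigma2, n, xbar, sxx):
    # Corollary 10.3.1: Var(B1 + B2*x0 | data) = sigma^2 (1/n + (x0 - xbar)^2 / Sxx).
    return sigma2 * (1.0 / n + (x0 - xbar) ** 2 / sxx)

# Made-up illustrative data (not the data of Example 10.3.3).
x = [-4.0, -2.0, 0.0, 1.0, 3.0, 5.0]
y = [-7.1, -3.2, 1.4, 2.9, 7.5, 11.2]
b1, b2, xbar, sxx = least_squares_line(x, y)
print("least-squares line: y = %.3f + %.3f x" % (b1, b2))
print("prediction at x = 2: %.3f" % (b1 + b2 * 2))
print("prediction variances at x = 0, 2, 8 (taking sigma^2 = 1):",
      [round(prediction_variance(x0, 1.0, len(x), xbar, sxx), 3) for x0 in (0, 2, 8)])

The printed variances are smallest near x̄ and grow as x moves away from it, which makes precise the warning above about relying on extrapolations.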
Even if we accept the simple linear regression model based on the observed data (we will discuss model checking later in this section), this model may fail to apply for very different values of x, and so the predictions would be in error.

We want to use the results of Theorem 10.3.3 and Corollary 10.3.1 to calculate standard errors of the least-squares estimates. Because we do not know σ², however, we need an estimate of this quantity as well. The following result shows that

s² = (1/(n − 2)) ∑_{i=1}^n (yᵢ − b₁ − b₂xᵢ)²   (10.3.6)

is an unbiased estimate of σ².

Theorem 10.3.4 If E(Y | X = x) = β₁ + β₂x and Var(Y | X = x) = σ² for every x, and we observe the independent values (x₁, y₁), ..., (xₙ, yₙ) for (X, Y), then E(S² | X₁ = x₁, ..., Xₙ = xₙ) = σ².
PROOF See Section 10.6 for the proof of this result.

Therefore, the standard error of b₁ is then given by s(1/n + x̄²/∑_{i=1}^n (xᵢ − x̄)²)^{1/2}, and the standard error of b₂ is then given by s(∑_{i=1}^n (xᵢ − x̄)²)^{−1/2}. Under further assumptions, these standard errors can be interpreted just as we interpreted standard errors of estimates of the mean in the location and location-scale normal models.

EXAMPLE 10.3.4 (Example 10.3.3 continued)
Using the data in Example 10.3.3 and the formulas of Theorem 10.3.1, we obtain b₁ = 1.33 and b₂ = 2.06 as the least-squares estimates of the intercept and slope, respectively. So the least-squares line is given by y = 1.33 + 2.06x. Using (10.3.6), we obtain s² = 1.06 as the estimate of σ². Using the formulas of Theorem 10.3.3, the standard error of b₁ is 0.3408, while the standard error of b₂ is 0.1023.
The prediction of Y at X = 2.0 is given by 1.33 + 2.06(2) = 5.45. Using Corollary 10.3.1, this estimate has standard error 0.341. This prediction is an interpolation.

The ANOVA Decomposition and the F-Statistic
The following result gives a decomposition of the total sum of squares ∑_{i=1}^n (yᵢ − ȳ)².

Lemma 10.3.1 If (x₁, y₁), ..., (xₙ, yₙ) are such that ∑_{i=1}^n (xᵢ − x̄)² > 0, then
∑_{i=1}^n (yᵢ − ȳ)² = b₂² ∑_{i=1}^n (xᵢ − x̄)² + ∑_{i=1}^n (yᵢ − b₁ − b₂xᵢ)².
PROOF The proof of this result can be found in Section 10.6.

We refer to b₂² ∑_{i=1}^n (xᵢ − x̄)² as the regression sum of squares (RSS) and refer to ∑_{i=1}^n (yᵢ − b₁ − b₂xᵢ)² as the error sum of squares (ESS). If we think of the total sum of squares as measuring the total observed variation in the response values yᵢ, then Lemma 10.3.1 provides a decomposition of this variation into the RSS, measuring changes in the response due to changes in the predictor, and the ESS, measuring changes in the response due to the contribution of random error. It is common to write this decomposition in an analysis of variance (ANOVA) table.

Source   Df       Sum of Squares                       Mean Square
X        1        b₂² ∑_{i=1}^n (xᵢ − x̄)²              b₂² ∑_{i=1}^n (xᵢ − x̄)²
Error    n − 2    ∑_{i=1}^n (yᵢ − b₁ − b₂xᵢ)²          s²
Total    n − 1    ∑_{i=1}^n (yᵢ − ȳ)²

Here, Df stands for degrees of freedom (we will discuss how the Df entries are calculated in Section 10.3.4). The entries in the Mean Square column are calculated by dividing the corresponding sum of squares by the Df entry.

To see the significance of the ANOVA table, note that, from Theorem 10.3.3,

E(B₂² ∑_{i=1}^n (xᵢ − x̄)² | X₁ = x₁, ..., Xₙ = xₙ) = σ² + β₂² ∑_{i=1}^n (xᵢ − x̄)²,   (10.3.7)

which is equal to σ² if and only if β₂ = 0 (we are always assuming here that the xᵢ vary). Given that the simple linear regression model is correct, we have that β₂ = 0 if and only if there is no relationship between the response and the predictor.
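Continuing in the same spirit (Python with NumPy assumed, and the same hypothetical data as in the earlier sketch, repeated here so that the block runs on its own), the estimate s² in (10.3.6), the standard errors just given, and the decomposition of Lemma 10.3.1 into RSS and ESS can be computed directly; the F-statistic built from these quantities is taken up next.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n = len(x)
xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)
b2 = np.sum((x - xbar) * (y - y.mean())) / Sxx
b1 = y.mean() - b2 * xbar

resid = y - (b1 + b2 * x)
s2 = np.sum(resid ** 2) / (n - 2)                  # unbiased estimate of sigma^2, as in (10.3.6)
s = np.sqrt(s2)
se_b1 = s * np.sqrt(1.0 / n + xbar ** 2 / Sxx)     # standard error of b1
se_b2 = s / np.sqrt(Sxx)                           # standard error of b2

total_ss = np.sum((y - y.mean()) ** 2)             # total sum of squares
rss = b2 ** 2 * Sxx                                # regression sum of squares (RSS)
ess = np.sum(resid ** 2)                           # error sum of squares (ESS)
# Lemma 10.3.1: total_ss equals rss + ess (up to rounding error)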
Therefore, b2 0 Because s2 2 2 (Theorem 10.3.4), a sensible statistic to use in is always an unbiased estimate of 0, is given by assessing H0 : x 2 is an unbiased estimator of 2 if and only if n i 1 xi 2 2 2 F RSS ESS n 2 b2 2 n i 1 xi s2 x 2 , (10.3.8) 546 Section 10.3: Quantitative Response and Predictors 2 when H0 is true. We then conclude as this is the ratio of two unbiased estimators of that we have evidence against H0 when F is large, as (10.3.7) also shows that the numerator will tend to be larger than 2 when H0 is false. We refer to (10.3.8) as the F­statistic. We will subsequently discuss the sampling distribution of F to see how to determine when the value F is so large as to be evidence against H0. EXAMPLE 10.3.5 (Example 10.3.3 continued) Using the data of Example 10.3.3, we obtain n i 1 n i 1 yi xi b2 2 y 2 x 2 437 01 428 55 n i 1 and so yi b1 2 b2xi 437 01 428 55 8 46 b2 2 F x 2 n i 1 xi s2 428 55 1 06 404 29 Note that F is much bigger than 1, and this seems to indicate a linear effect due to X. The Coefficient of Determination and Correlation Lemma 10.3.1 implies that R2 b2 2 n i 1 xi n i 1 yi x 2 y 2 R2 1 Therefore, the closer R2 is to 1, the more of the observed total satisfies 0 variation in the response is accounted for by changes in the predictor. In fact, we interpret R2 called the coefficient of determination, as the proportion of the observed variation in the response explained by changes in the predictor via the simple linear regression. The coefficient of determination is an important descriptive statistic, for, even if we conclude that a relationship does exist, it can happen that most of the observed variation is due to error. If we want to use the model to predict further values of the response, then the coefficient of determination tells us whether we can expect highly accurate predictions or not. A value of R2 near 1 means highly accurate predictions, whereas a value near 0 means that predictions will not be very accurate. EXAMPLE 10.3.6 (Example 10.3.3 continued) Using the data of Example 10.3.3, we obtain R2 0 981 Therefore, 98.1% of the ob­ served variation in Y can be explained by the changes in X through the linear relation. This indicates that we can expect fairly accurate predictions when using this model, at least when we are predicting within the range of the observed X values. Chapter 10: Relationships Among Variables 547 Recall that in Section 3.3, we defined the correlation coefficient between random variables X and Y to be XY Corr X Y Cov X Y Sd X Sd Y . In Corollary 3.6.1, we proved that Y of the extent to which a linear relationship exists between X and Y cX for some constants a 1 if and only if 0 So XY can be taken as a measure 1 R1 and c 1 with XY XY a If we do not know the joint distribution of X Y xn yn XY Based on the observations x1 y1 then we will have to estimate the natural estimate to use is the sample correlation coefficient where rx y sx y sx sy sx y n 1 n 1 i 1 xi x yi y is the sample covariance estimating Cov X Y , and sx sy are the sample standard rx y deviations for the X and Y variables, respectively. Then 1 1 with rx y if and only if yi 0 for every i (the proof is the same as in Corollary 3.6.1 using the joint distribution that puts probability mass 1 n at each point xi yi — see Problem 3.6.16). cxi for some constants a 1 R1 and c a The following result show
s that the coefficient of determination is the square of the correlation between the observed X and Y values. Theorem 10.3.5 If x1 y1 y 2 0 then R2 n i 1 yi xn yn are such that r 2 x y. n i 1 xi x 2 0 PROOF We have r 2 x y n i 1 xi x n i 1 xi x 2 y 2 yi n i 1 yi y 2 b2 2 n i 1 xi n i 1 yi x 2 y 2 R2 where we have used the formula for b2 given in Theorem 10.3.1. Confidence Intervals and Testing Hypotheses We need to make some further assumptions in order to discuss the sampling distribu­ tions of the various statistics that we have introduced. We have the following results. 548 Section 10.3: Quantitative Response and Predictors x, is distributed N 1 2 and we ob­ 2x xn yn for X Y , then the conditional x1 x 2 xn are as follows. Xn Theorem 10.3.6 If Y , given X serve the independent values x1 y1 distributions of B1 B2 and S2 given X1 2 1 n (i) B1 2 (ii) B2 (iii) B1 (iv) n N 1 N 2 B2x 2 S2 x 2 n i 1 xi 2x n i 1 xi xi x 2 2 independent of B1 B2 PROOF The proof of this result can be found in Section 10.6. Corollary 10.3.2 (i) B1 (ii) B2 (iii) 1 2 S 1 n n i 1 xi x 2 n i 1 xi 2x 1 n i 1 xi (iv) If F is defined as in (10.3.8), then H0 : F 1 n B2x x 2 B1 is true if and only if F PROOF The proof of this result can be found in Section 10.6. Using Corollary 10.3.2(i), we have that b1 s 1 n x 2 1 2 xi is an exact ­confidence interval for 1. Also, from Corollary 10.3.2(ii), b2 s n i 1 1 2 xi x 2 t 1 2 n 2 is an exact ­confidence interval for From Corollary 10.3.2(iv), we can test H0 : 2. 0 by computing the P­value 2 P F b2 2 n i 1 xi s2 x 2 , (10.3.9) F 1 n where F 2 , to see whether or not the observed value (10.3.8) is surprising. This is sometimes called the ANOVA test. Note that Corollary 10.3.2(ii) implies that we can also test H0 : 0 by computing the P­value 2 P T b2 n i 1 xi s x 2 1 2 , (10.3.10) Chapter 10: Relationships Among Variables 549 where T (10.3.10) are equal. t n 2 . The proof of Corollary 10.3.2(iv) reveals that (10.3.9) and EXAMPLE 10.3.7 (Example 10.3.3 continued) Using software or Table D.4, we obtain t0 975 8 Example 10.3.3, we obtain a 0.95­confidence interval for 1 as 2 306 Then, using the data of b1 s 1 n x 2 1 2 xi 33 0 3408 2 306 [0 544 2 116] and a 0.95­confidence interval for 2 as b2 s n i 1 1 2 xi x 2 t 1 2 n 2 2 06 0 1023 2 306 [1 824 2 296] The 0.95­confidence interval for 2 does not include 0, so we have evidence against the null hypothesis H0 : 0 and conclude that there is evidence of a relationship between X and Y This is confirmed by the F­test of this null hypothesis, as it gives the P­value P F 0 000 when F 404 29 F 1 8 2 Analysis of Residuals In an application of the simple regression model, we must check to make sure that the assumptions make sense in light of the data we have collected. Model checking is based on the residuals yi b2xi (after standardization), as discussed in Section 9.1. Note that the ith residual is just the difference between the observed value yi at xi and the predicted value b1 b2xi at xi . b1 From the proof of Theorem 10.3.4, we have the following result. Corollary 10.3.3 (i) E Yi (ii) Var Yi B1 B1 B2xi X1 x1 Xn xn 0 B2xi X1 x1 Xn xn 2 1 1 n x 2 xi n i 1 xi x 2 This leads to the definition of the i th standardized residual as yi b1 b2xi s 1 1 n xi 10.3.11) Corollary 10.3.3 says that (10.3.11), with replacing s is a value from a distri­ bution with conditional mean 0 and conditional variance 1. 
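The interval and P-value computations of Corollary 10.3.2, of the kind carried out in Example 10.3.7, reduce to a few lines. The sketch below (Python with NumPy and SciPy assumed; hypothetical data rather than the data of Example 10.3.3) computes γ-confidence intervals for β₁ and β₂ and the t-test and F-test P-values for H₀ : β₂ = 0, which agree as noted above.

import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n = len(x)
xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)
b2 = np.sum((x - xbar) * (y - y.mean())) / Sxx
b1 = y.mean() - b2 * xbar
s = np.sqrt(np.sum((y - b1 - b2 * x) ** 2) / (n - 2))

gamma = 0.95
tcrit = stats.t.ppf(1 - (1 - gamma) / 2, df=n - 2)   # the (1 + gamma)/2 point of t(n - 2)
half1 = tcrit * s * np.sqrt(1 / n + xbar ** 2 / Sxx)
half2 = tcrit * s / np.sqrt(Sxx)
ci_b1 = (b1 - half1, b1 + half1)                     # exact gamma-confidence interval for beta_1
ci_b2 = (b2 - half2, b2 + half2)                     # exact gamma-confidence interval for beta_2

t_obs = b2 * np.sqrt(Sxx) / s                        # t-statistic for H0: beta_2 = 0, as in (10.3.10)
p_t = 2 * (1 - stats.t.cdf(abs(t_obs), df=n - 2))
F_obs = t_obs ** 2                                   # the F-statistic of (10.3.8) equals t^2 here
p_F = 1 - stats.f.cdf(F_obs, dfn=1, dfd=n - 2)       # same P-value as p_t, as in (10.3.9)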
Furthermore, when the conditional distribution of the response given the predictors is normal, then the con­ ditional distribution of this quantity is N 0 1 (see Problem 10.3.21). These results 550 Section 10.3: Quantitative Response and Predictors are approximately true for (10.3.11) for large n. Furthermore, it can be shown (see Problem 10.3.20) that Cov Yi B1 B2xi Y j B1 B2x j X1 x1 Xn xn 2 1 n xi x j x n k 1 xk x x 2 . Therefore, under the normality assumption, the residuals are approximately indepen­ dent when n is large and xi x n k 1 xk x 2 0 This will be the case whenever Var X is finite (see Challenge 10.3.27) as n or, in the design context, when the values of the predictor are chosen accordingly. So one approach to model checking here is to see whether the values given by (10.3.11) look at all like a sample from the N 0 1 distribution. For this, we can use the plots discussed in Chapter 9. EXAMPLE 10.3.8 (Example 10.3.3 continued) Using the data of Example 10.3.3, we obtain the following standardized residuals. 0 49643 0 17348 0 43212 0 75281 1 73371 0 28430 1 00487 1 43570 0 08358 1 51027 These are plotted against the predictor x in Figure 10.3.21 ­2 ­5 0 x 5 Figure 10.3.2: Plot of the standardized residuals in Example 10.3.8. It is recommended that we plot the standardized residuals against the predictor, as this may reveal some underlying relationship that has not been captured by the model. This residual plot looks reasonable. In Figure 10.3.3, we have a normal probability plot of the standardized residuals. These points lie close to the line through the origin with slope equal to 1, so we conclude that we have no evidence against the model here. Chapter 10: Relationships Among Variables 551 1 ­2 ­1 0 1 Standardized Residual Figure 10.3.3: Normal probability plot of the standardized residuals in Example 10.3.8. What do we do if model checking leads to a failure of the model? As discussed in Chapter 9, perhaps the most common approach is to consider making various trans­ formations of the data to see whether there is a simple modification of the model that will pass. We can make transformations, not only to the response variable Y but to the predictor variable X as well. An Application of Simple Linear Regression Analysis The following data set is taken from Statistical Methods, 6th ed., by G. Snedecor and W. Cochran (Iowa State University Press, Ames, 1967) and gives the record speed Y in miles per hour at the Indianapolis Memorial Day car races in the years 1911–1941, excepting the years 1917–1918. We have coded the year X starting at 0 in 1911 and incrementing by 1 for each year. There are n 29 data points xi yi The goal of the analysis is to obtain the least­squares line and, if warranted, make inferences about the regression coefficients. We take the normal simple linear regression model as our statistical model. Note that this is an observational study. Year 0 1 2 3 4 5 8 9 10 11 Speed Year 12 13 14 15 16 17 18 19 20 21 74.6 78.7 75.9 82.5 89.8 83.3 88.1 88.6 89.6 94.5 Speed Year 22 23 24 25 26 27 28 29 30 91.0 98.2 101.1 95.9 97.5 99.5 97.6 100.4 96.6 104.1 Speed 104.2 104.9 106.2 109.1 113.6 117.2 115.0 114.3 115.1 Using Theorem 10.3.1, we obtain the least­squares line as y This line, together with a scatter plot of the values xi yi 77 5681 1 27793x is plotted in Figure 10.3.4. 552 Section 10.3: Quantitative Response and Predictors The fit looks quite good, but this is no guarantee of model correctness, and we must carry out some form of model checking. 
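Model checking of the kind just described starts from the standardized residuals in (10.3.11). The following sketch (Python with NumPy assumed; hypothetical data rather than the race data above) computes them; they can then be plotted against the predictor, or in a normal probability plot as in Chapter 9, to see whether they resemble a sample from the N(0, 1) distribution.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n = len(x)
xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)
b2 = np.sum((x - xbar) * (y - y.mean())) / Sxx
b1 = y.mean() - b2 * xbar

resid = y - (b1 + b2 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))

# standardized residuals, as in (10.3.11)
std_resid = resid / (s * np.sqrt(1 - 1 / n - (x - xbar) ** 2 / Sxx))
# plot std_resid against x, and in a normal probability plot, to check the model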
Figure 10.3.5 is a plot of the standardized residuals against the predictor. This plot looks reasonable, with no particularly unusual pattern apparent. Figure 10.3.6 is a nor­ mal probability plot of the standardized residuals. The curvature in the center might give rise to some doubt about the normality assumption. We generated a few samples of n 29 from an N 0 1 distribution, however, and looking at the normal probabil­ ity plots (always recommended) reveals that this is not much cause for concern. Of course, we should also carry out model checking procedures based upon the standard­ ized residuals and using P­values, but we do not pursue this topic further here. Regression Plot Speed = 77.5681 + 1.27793 Year S = 2.99865 R­Sq = 94.0 % R­Sq(adj) = 93.8 % d e e p S 120 110 100 90 80 70 0 10 20 30 Year Figure 10.3.4: A scatter plot of the data together with a plot of the least­squares line. Residuals Versus Year (response is Speed1 ­2 0 10 20 30 Year Figure 10.3.5: A plot of the standardized residuals against the predictor. Chapter 10: Relationships Among Variables 553 Normal Probability Plot of the Residuals (response is Speed1 ­2 ­2 ­1 0 1 2 3 Standardized Residual Figure 10.3.6: A normal probability plot of the standardized residuals. Based on the results of our model checking, we decide to proceed to inferences about the regression coefficients. The estimates and their standard errors are given in 2 999 2, the following table, where we have used the estimate of to compute the standard errors. We have also recorded the t­statistics appropriate for testing each of the hypotheses H0 : 2 given by s2 0 and H0 : 0 1 2 Coefficient Estimate 77 568 1 278 1 2 Standard Error 1 118 0 062 t­statistic 69 39 20 55 From this, we see that the P­value for assessing H0 : 0 is given by 2 P T 20 55 0 000 t 27 , and so we have strong evidence against H0 It seems clear that there when T is a strong positive relationship between Y and X. Since the 0.975 point of the t 27 distribution equals 2 0518 a 0.95­confidence interval for 2 is given by 1 278 0 062 2 0518 [1 1508 1 4052] . The ANOVA decomposition is given in the following table. Source Regression Error Total Df 1 27 28 Sum of Squares Mean Square 3797 0 9 0 3797 0 242 8 4039 8 Accordingly, we have that F 421 888 0 is true, P F H0 : what we got from the preceding t­test. 2 3797 0 9 0 421 888 and, as F F 1 27 when 0 000 which simply confirms (as it must) The coefficient of determination is given by R2 0 94 There­ fore, 94% of the observed variation in the response variable can be explained by the 3797 0 4039 8 554 Section 10.3: Quantitative Response and Predictors changes in the predictor through the simple linear regression. The value of R2 indicates that the fitted model will be an excellent predictor of future values, provided that the value of X that we want to predict at is in the range (or close to it) of the values of X used to fit the model. 10.3.3 Bayesian Simple Linear Model (Advanced) For the Bayesian formulation of the simple linear regression model with normal error, we need to
add a prior distribution for the unknown parameters of the model, namely, 2 and 2 There are many possible choices for this. A relevant prior is dependent 1 on the application. To help simplify the calculations, we reparameterize the model as follows. Let 1 1 n i 1 2x and 2 2 It is then easy to show (see Problem 10.3.24) that yi 1 2 2xi n i 1 n i 1 n i 1 yi yi 1 y 2 xi x 2 1 y 2 xi x 2 yi xi x 2 2 2 xi x yi y (10.3.12) i 1 The likelihood function, using this reparameterization, then equals 2 2 n 2 exp 1 2 2 n i 1 yi 1 2 xi x 2 From (10.3.12), and setting c2 x c2 y cx xi yi xi x 2 y 2 x yi y we can write this as Chapter 10: Relationships Among Variables 555 2 2 n 2 exp c2 y 2 2 exp n 2 2 y 2 1 exp 1 2 2 2 2 n 2 exp 2c2 2 x c2 y 2 2cx y c2 x a2 2 2 exp n 2 2 y 2 1 exp c2 x 2 2 a 2 , 2 where the last equality follows from 2 cx y/c2 x 2c2 x 2 2cx y c2 x 2 a 2 x a2 with a c2 2 are independent given This implies that, whenever the prior distribution on 1 and 1 and 2 are also independent given 2 Note also that y and a are the least­squares estimates (as well 1 and 2 respectively (see Problem 10.3.24). as the MLE’s) of 2 then the posterior distributions of 2 is such that 1 Now suppose we take the prior to be Gamma Note that 1 and 2 are independent given 2 As it turns out, this prior is conjugate, so we can easily determine an exact form for 2 the posterior distribution (see Problem 10.3.25). The joint posterior of is given by c2 x Gamma 1 1 n y c2 c2 where x y 1 2 c2 y x a2 c2 n y2 x a2 c2 2 2 2 2 2 1 2 1 c2 ny c2 . 556 Section 10.3: Quantitative Response and Predictors Of course, we must select the values of the hyperparameters to fully specify the prior. 1 1 2 2 and Now observe that for a diffuse analysis, i.e., when we have little or no prior infor­ 0 and the posterior and 2 mation about the parameters, we let converges to c2 x Gamma n 2 x y where x y the hyperparameter the analysis when n is not too small. x a2 . But this still leaves us with the necessity of choosing c2 1 2 c2 y . We will see, however, that this choice has only a small effect on We can easily work out the marginal posterior distribution of the i For example, in the diffuse case, the marginal posterior density of 2 is proportional to 1 2 exp 1 2 n 2 1 2 c2 exp x y 1 2 c2 x 2 n 2 1 exp . Making the change of variable 1 2 x y where c2 x 2 a 2 2 1 2 in the preceding integral, shows that the marginal posterior density of to 2 is proportional 1 c2 x 2 x y a 2 2 which is proportional to n 1 2 n 2 1 2 exp d 0 1 c2 . This establishes (see Problem 4.6.17) that the posterior distribution of by 2 is specified 2 n 2 a 2 x y c2 x t 2 n So a ­HPD (highest posterior density) interval for 2 is given by a 1 2 n 2 x y c2 x t 1 2 2 n Chapter 10: Relationships Among Variables 557 Note that these intervals will not change much as we change too small. provided that n is not We consider an application of a Bayesian analysis for such a model. EXAMPLE 10.3.9 Haavelmo’s Data on Income and Investment The data for this example were taken from An Introduction to Bayesian Inference in Econometrics, by A. Zellner (Wiley Classics, New York, 1996). The response variable Y is income in U.S. dollars per capita (deated), and the predictor variable X is invest­ ment in dollars per capita (deated) for the United States for the years 1922–1941. The data are provided in the following table. 
Year 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 Income 433 483 479 486 494 498 511 534 478 440 Investment Year 1932 39 1933 60 1934 42 1935 52 1936 47 1937 51 1938 45 1939 60 1940 39 1941 41 Income 372 381 419 449 511 520 477 517 548 629 Investment 22 17 27 33 48 51 33 46 54 100 In Figure 10.3.7, we present a normal probability plot of the standardized residuals, obtained via a least­squares fit. In Figure 10.3.8, we present a plot of the standardized residuals against the predictor. Both plots indicate that the model assumptions are reasonable. Suppose now that we analyze these data using the limiting diffuse prior with 64993 c2 x 483 c2 y 5710 55 and cx y 17408 3 2 64993 2 17408 3 so that 23792 35 The 3 05 and x y Here, we have that y a posterior is then given by 17408 3 5710 55 1 2 2 2 2 2 1 2 20 N 483 2 5710 55 N 3 05 Gamma 12 23792 35 The primary interest here is in the investment multiplier 2, using t0 975 24 0.95­HPD interval for 2 0639 is given by 2 By the above results 23792 35 5710 55 t0 975 24 3 05 0 589 2 0639 3 05 24 1 834 4 266 558 Section 10.3: Quantitative Response and Predictors Normal Probability Plot of the Residuals (response is Income1 ­2 ­2 ­1 0 1 Standardized Residual Figure 10.3.7: Normal probability plot of the standardized residuals in Example 10.3.91 ­ In vestm en t Figure 10.3.8: Plot of the standardized residuals against the predictor in Example 10.3.9. 10.3.4 The Multiple Linear Regression Model (Advanced) We now consider the situation in which we have a quantitative response Y and quanti­ tative predictors X1 Xk For the regression model, we assume that the conditional distributions of Y given the predictors, have constant shape and that they change, as the predictors change, at most through the conditional mean E Y X1 xk . For the linear regression model, we assume that this conditional mean is of the form Xk x1 E Y X1 x1 Xk xk 1x1 k xk (10.3.13) This is linear in the unknown i R1 for i 1 k Chapter 10: Relationships Among Variables 559 We will develop only the broad outline of the analysis of the multiple linear regres­ sion model here. All results will be stated without proofs provided. The proofs can be found in more advanced texts. It is important to note, however, that all of these results are just analogs of the results we developed by elementary methods in Section 10.3.2, for the simple linear regression model. Matrix Formulation of the Least­Squares Problem For the analysis of the multiple linear regression model, we need some matrix concepts. We will briey discuss some of these here, but also see Appendix A.4. Let A Rm n denote a rectangular array of numbers with m rows and n columns, j ­th and let ai j denote the entry in the ith row and j th column (referred to as the i entry of A). For example R2 3 denotes a 2 3 matrix and, for example, a22 0 2 We can add two matrices of the same dimensions m and n by simply adding their elements componentwise. So if A B bi j . Furthermore, we can multiply a matrix by a real number c by simply multiplying every entry in the matrix by c So if A cai j . We will sometimes write a matrix A Rm n and bi j c A Rm n in terms of its columns as A Rm n, then B B, then ci j ai j A Rm n and C a1 an so that here ai define the product of A times b as Ab Suppose now that Y Rm Finally, if A Rn then we bnan Rn and that E Y is constrained to lie in a set of the form b1a1 Rm n and b Rm S 1 1 k k : i R1 i 1 k where 1 Rn. 
When k are fixed vectors in Rn A set such as S is called a linear subspace of 1 k has the linear independence property, namely, 1 1 k k 0 k 0 then we say that S has dimension k and 1 k if and only if is a basis for S If we set 1 V 1 k 11 21 12 22 n1 n2 1k 2k nk Rn k then we can write E Y 1 1 k k 1 11 1 21 2 12 2 22 k 1k k 2k V 1 n1 2 n2 k nk 560 Section 10.3: Quantitative Response and Predictors for some unknown point 1 least­squares estimate of E Y is obtained by finding the value of . When we observe y 2 k Rn, then the that minimizes n i 1 yi 1 i1 2 i2 2 . k i k k 1 It can be proved that a unique minimizing value for Rk exists whenever is a basis. The minimizing value of will be denoted by b and is called V b is the least­squares the least­squares estimate of estimate of E Y and is sometimes called the vector of fitted values. The point y V b is called the vector of residuals. . The point b1 1 bk k We now consider how to calculate b. For this, we need to understand what it means Rk n The matrix Rm k on the right by the matrix B to multiply the matrix A product AB is defined to be the m n matrix whose i j ­th entry is given by k l 1 ailbl j Notice that the array A must have the same number of columns as the number of rows Rm k is defined to of B for this product to be defined. The transpose of a matrix A be a11 am1 A Rk m a1k amk namely, the ith column of A becomes the i th row of A . For a matrix A matrix inverse of A is defined to be the matrix A 1 such that Rk k, the A A 1 A 1 A I Rk k has 1’s along its diagonal and 0’s everywhere else; it is called the k k where I Rk k has an inverse, but when it does identity matrix. It is not always the case that A it can be shown that the inverse is unique. Note that there are many mathematical and statistical software packages that include the facility for computing matrix products, transposes, and inverses. We have the following fundamental result. Theorem 10.3.7 If E Y and the columns of V V V 1 exists, the least­squares estimate of 1 1 S 1 k k have the linear independence property, then k k : 1 i R1 i is unique, and it is given by b b1 bk V V 1 V y (10.3.14) Chapter 10: Relationships Among Variables 561 Least­Squares Estimates, Predictions, and Standard Errors For the linear regression model (10.3.13), we have that (writing Xi j for the jth value of Xi ) E Y1 Yn Xi j xi j for all i j 1x11 1xn1 1 1 k k k x1k k xnk V where 1 V and k 1 2 k x11 xn1 x1k xnk Rn k k of V have the linear indepen­ We will assume, hereafter, that the columns dence property. Then (replacing expectation by conditional expectation) it is immediate that the least­squares estimate of is given by (10.3.14). As with the simple linear regression model, we have a number of results concerning 1 the least­squares estimates. We state these here without proof. Theorem 10.3.8 If the xi1 n and the linear regression model applies, then xi k yi are independent observations for i 1 E Bi Xi j xi j for all i j i for i 1 k So Theorem 10.3.8 states that the least­squares estimates are unbiased estimates of the linear regression coefficients. If we want to assess the accuracy of these estimates, then we need to be able to compute thei
r standard errors. Theorem 10.3.9 If the xi1 n from the linear regression model, and if Var Y X1 for every x1 xi k yi are independent observations for i xk xk then Xk x1 1 2 Cov Bi B j Xi j xi j for all i j 2ci j (10.3.15) where ci j is the i j ­th entry in the matrix V V 1. We have the following result concerning the estimation of the mean E Y X1 x1 Xk xk 1x1 k xk by the estimate b1x1 bk xk 562 Section 10.3: Quantitative Response and Predictors Corollary 10.3.4 Var B1x1 k 2 Bk xk Xi j xi j for all i j x 2 i ci i 2 xi x j ci j 2x V V 1x (10.3.16) i 1 i j where x x1 xk . We also use b1x1 Xk x1 X1 xk bk xk b x as a prediction of a new response value when We see, from Theorem 10.3.9 and Corollary 10.3.4, that we need an estimate of 2 to compute standard errors The estimate is given by s2 n 1 n k i 1 yi b1xi1 bk xik 2 1 n k and we have the following result. y Xb y Xb (10.3.17) Theorem 10.3.10 If the xi1 1 n from the linear regression model, and if Var Y X1 xi k yi are independent observations for i xk Xk x1 2 then E S2 Xi j xi j for all i j 2. Combining (10.3.15) and (10.3.17), we deduce that the standard error of bi is s ci i . Combining (10.3.16) and (10.3.17), we deduce that the standard error of b1x1 bk xk is s k i 1 x 2 i ci i 2 xi x j ci j i j 1 2 s x V V 1x 1 2. The ANOVA Decomposition and F­Statistics When one of the predictors X1 Xk is constant, then we say that the model has an intercept term. By convention, we will always take this to be the first predictor. So 1 and 1 is the when we want the model to have an intercept term, we take X1 intercept, e.g., the simple linear regression model. Note that it is common to denote the intercept term by 0 so that X0 Xk denote the predictors that actually change. We will also adopt this convention when it seems appropriate. 1 and X1 Basically, inclusion of an intercept term is very common, as this says that, when the predictors that actually change have no relationship with the response Y , then the intercept is the unknown mean of the response. When we do not include an intercept, then this says we know that the mean response is 0 when there is no relationship be­ tween Y and the nonconstant predictors. Unless there is substantive, application­based evidence to support this, we will generally not want to make this assumption. Denoting the intercept term by 1 so that X1 1 we have the following ANOVA decomposition for this model that shows how to isolate the observed variation in Y that can be explained by changes in the nonconstant predictors. Chapter 10: Relationships Among Variables 563 xi k yi are such that the Lemma 10.3.2 If, for i matrix V has linearly independent columns, with 1 equal to a column of ones, then b1 n the values xi1 bk xk and b2x2 1 y n i 1 yi y 2 n b2 xi2 x2 bk xi k xk 2 i 1 n i 1 yi b1xi1 bk xik 2 We call RSS X2 Xk n i 1 b2 xi2 x2 bk xi k xk 2 the regression sum of squares and ESS n i 1 yi b1xi1 bk xi k 2 the error sum of squares. This leads to the following ANOVA table. Source Df X2 Error Total Xk k n n Sum of Squares Xk 1 RSS X2 ESS k n i 1 yi 1 y 2 Mean Square RSS X2 s2 Xk k 1 When there is an intercept term, the null hypothesis of no relationship between the response and the predictors is equivalent to H0 : 0 As with the simple linear regression model, the mean square for regression can be shown to be an 2 if and only if the null hypothesis is true. 
Therefore, a sensible unbiased estimator of statistic to use for assessing the null hypothesis is the F­statistic 2 k F RSS X2 Xk k 1 s2 with large values being evidence against the null. Often, we want to assess the null hypothesis H0 : l 1 equivalently, the hypothesis that the model is given by k 0 or, E Y X1 x1 Xk xk 1x1 l xl where l relationship with the response. k This hypothesis says that the last k l predictors Xl 1 Xk have no If we denote the least­squares estimates of 1 l obtained by fitting the smaller model, by b1 bl then we have the following result. 564 Section 10.3: Quantitative Response and Predictors n are values for which the Lemma 10.3.3 If the xi1 for i matrix V has linearly independent columns, with 1 equal to a column of ones, then xi k yi 1 RSS X2 Xk n i 1 n b2 xi2 x2 bk xik xk 2 b2 xi2 i 1 RSS X2 x2 Xl bl xil 2 xl (10.3.18) On the right of the inequality in (10.3.18), we have the regression sum of squares obtained by fitting the model based on the first l predictors. Therefore, we can interpret the difference of the left and right sides of (10.3.18), namely, RSS Xl 1 Xk X2 Xl RSS X2 Xk RSS X2 Xl Xk to the regression sum of squares as the contribution of the predictors Xl 1 Xl are in the model. We get the following ANOVA ta­ when the predictors X1 ble (actually only the first three columns of the ANOVA table) corresponding to this decomposition of the total sum of squares. Source Xl Xk X2 Xl Df 1 l k 1 l k n n X2 Xl 1 Error Total Sum of Squares Xl Xk X2 Xl y 2 RSS X2 RSS Xl 1 ESS n i 1 yi It can be shown that the null hypothesis H0 : l 1 k 0 holds if and only if is an unbiased estimator of null hypothesis is the F­statistic RSS Xl 1 Xk X2 k 2. Therefore, a sensible statistic to use for assessing this Xl l RSS Xl 1 F Xk X2 s2 Xl k l with large values being evidence against the null. The Coefficient of Determination The coefficient of determination for this model is given by R2 RSS X2 n i 1 yi Xk y 2 which, by Lemma 10.3.2, is always between 0 and 1. The value of R2 gives the propor­ tion of the observed variation in Y that is explained by the inclusion of the nonconstant predictors in the model. Chapter 10: Relationships Among Variables 565 It can be shown that R2 is the square of the multiple correlation coefficient between Xk However, we do not discuss the multiple correlation coefficient in Y and X1 this text. Confidence Intervals and Testing Hypotheses For inference, we have the following result. 2 and if we observe the independent values xi1 k xk n, then the conditional distributions of the Bi and S2 given Theorem 10.3.11 If the conditional distribution of Y given X1 xk xik yi Xi j (i) Bi (ii) B1x1 is N 1x1 for i 1 xi j for all i N i 2cii Bk xk is distributed j are as follows. Xk x1 N 1x1 2 k xk k i 1 x 2 i cii 2 xi x j ci j i j (iii) n k S2 2 2 n k independent of B1 Bk Corollary 10.3.5 (i) Bi (ii) sc1 2 ii i B1x1 t n k Bk xk 1x1 S k i 1 x 2 i ci i 2 j xi x j ci j i k xk 1 2 t n k (iii) H0 : l 1 RSS X2 F k Xk 0 is true if and only if RSS X2 S2 Xl k l F k l n k Analysis of Residuals In an application of the multiple regression model, we must check to make sure that the assumptions make sense. Model checking is based on the residuals yi b1xi1 bk xi k (after standardization), just as discussed in Section 9.1. Note that the ith xi k and residual is simply the difference between the observed value yi at xi1 the predicted value b1xi1 bk xi k at xi1 xik We also have the following result (this can be proved as a Corollary of Theorem 10.3.10). 
566 Section 10.3: Quantitative Response and Predictors Corollary 10.3.6 (i) E Yi B1xi1 (ii) Cov Yi di j is the i Bk xi k V 0 B1xi1 Bk xik Y j j ­th entry of the matrix I B1x j1 V V V 1V . Bk x jk V 2di j , where Therefore, the standardized residuals are given by y j b1x j1 sd1 2 ii bk x jk . (10.3.19) When s is replaced by in (10.3.19), Corollary 10.3.6 implies that this quantity has conditional mean 0 and conditional variance 1. Furthermore, when the conditional distribution of the response given the predictors is normal, then it can be shown that the conditional distribution of this quantity is N 0 1 . These results are also approxi­ mately true for (10.3.19) for large n. Furthermore, it can be shown that the covariances between the standardized residuals go to 0 as n under certain reasonable con­ ditions on distribution of the predictor variables. So one approach to model checking here is to see whether the values given by (10.3.19) look at all like a sample from the N 0 1 distribution. What do we do if model checking leads to a failure of the model? As in Chapter 9, we can consider making various transformations of the data to see if there is a simple modification of the model that will pass. We can make transformations not only to the response variable Y but to the predictor variables X1 Xk as well. An Application of Multiple Linear Regression Analysis The computations needed to implement a multiple linear regression analysis cannot be carried out by hand. These are much too time­consuming and error­prone. It is therefore important that a statistician have a computer with suitable software available when doing a multiple linear regression analysis. We consider the model Y x1 x2 x3 The data in Table 10.1 are taken from Statistical Theory and Methodology in Sci­ ence and Engineering, 2nd ed., by K. A. Brownlee (John Wiley & Sons, New York, 1965). The response variable Y is stack loss (Loss), which represents 10 times the per­ centage of ammonia lost as unabsorbed nitric oxide. The predictor variables are X1 air ow (Air), X2 the concentration of temperature of inlet water (Temp), and X3 nitric acid (Acid). Also recorded is the day (Day) on which the observation was taken. 2 . Note that we have included an intercept term. Figure 10.3.9 is a normal probability plot of the 2 63822 that standardized residuals. This looks reasonable, except for one residual, diverges quite distinctively from the rest of the values, which lie close to the 45­degree line. Printing out the standardized residuals shows that this residual is associated with the observation on the twenty­first day. Possibly there was something unique about this day’s operations, and so it is reasonable to discard this data value and refit the model. Figure 10.3.10 is a normal probability plot obtained by fitting the model to the first 20 observations. This looks somewhat better, but still we might be concerned about at least one of the residuals that deviates substantially from the 45­degree line. N 0 3x3 1x1 2x2 Chapter 10: Relationships Among Variables 567 Day Air Temp Acid Loss Day Air Tem
p Acid Loss 13 11 12 8 7 8 8 9 15 15 58 58 58 50 50 50 50 50 56 70 12 13 14 15 16 17 18 19 20 21 17 18 19 18 18 19 19 20 20 20 88 82 93 89 86 72 79 80 82 91 1 2 3 4 5 6 7 8 9 10 11 80 80 75 62 62 62 62 62 58 58 58 27 27 25 24 22 23 24 24 23 18 18 89 88 90 87 87 87 93 93 87 80 89 42 37 37 28 18 18 19 20 15 14 14 Table 10.1: Data for Application of Multiple Linear Regression Analysis Normal Probability Plot of the Residuals (response is Loss1 ­2 ­3 ­2 ­1 0 1 2 Standardized Residual Figure 10.3.9: Normal probability plot of the standardized residuals based on all the data. Normal Probability Plot of the Residuals (response is Loss1 ­2 ­1 0 1 2 3 Standardized Residual Figure 10.3.10: Normal probability plot of the standardized residuals based on the first 20 data values. 568 Section 10.3: Quantitative Response and Predictors Following the analysis of these data in Fitting Equations to Data, by C. Daniel and F. S. Wood (Wiley­Interscience, New York, 1971), we consider instead the model ln Y x1 x2 x3 N 0 1x1 2x2 3x3 2 , (10.3.20) i.e., we transform the response variable by taking its logarithm and use all of the data. Often, when models do not fit, simple transformations like this can lead to major im­ provements. In this case, we see a much improved normal probability plot, as provided in Figure 10.3.11. Normal Probability Plot of the Residuals (response is Loss1 ­2 ­2 ­1 0 1 2 Standardized Residual Figure 10.3.11: Normal probability plot of the standardized residuals for all the data using ln Y as the response. We also looked at plots of the standardized residuals against the various predic­ tors, and these looked reasonable. Figure 10.3.12 is a plot of the standardized residuals against the values of Air. Residuals Versus Air (response is Loss1 ­2 50 60 70 80 Air Figure 10.3.12: A plot of the standardized residuals for all the data, using ln Y as the response, against the values of the predictor Air. Chapter 10: Relationships Among Variables 569 Now that we have accepted the model (10.3.20), we can proceed to inferences about the unknowns of the model. The least­squares estimates of the i their standard errors (Se), the corresponding t­statistics for testing the i 0, and the P­values for this are given in the following table. Coefficient 0 1 2 3 Estimate 0 948700 0 034565 0 063460 0 002864 Se 0 647700 0 007343 0 020040 0 008510 t­statistic 1 46 4 71 3 17 0 34 P­value 0 161 0 000 0 006 0 742 The estimate of 2 is given by s2 0 0312. To test the null hypothesis that there is no relationship between the response and 0, we have the following 1 2 3 the predictors, or that, equivalently, H0 : ANOVA table. Source X1 X2 X3 Error Total Df 3 17 20 Sum of Squares Mean Square 4 9515 0 5302 5 4817 1 6505 0 0312 52 900 52 900 and when F The value of the F­statistic is given by 1 6505 0 0312 F 3 17 we have that P F 0 000 So there is substantial evidence against the null hypothesis. To see how well the model explains the variation in the response, we computed the value of R2 86 9% Therefore, approximately 87% of the observed variation in Y can be explained by changes in the predictors in the model. While we have concluded that a relationship exists between the response and the predictors, it may be that some of the predictors have no relationship with the response. For example, the table of t­statistics above would seem to indicate that perhaps X3 (acid) is not affecting Y . 
We can assess this via the following ANOVA table, obtained N 0 by fitting the model ln Y x1 x2 x3 2x2 1x1 2 Source X1 X2 X3 X1 X2 Error Total Df 2 1 17 20 Sum of Squares Mean Square 4 9480 0 0035 0 5302 5 4817 2 4740 0 0035 0 0312 4 9480 3 0 112 4 9515 0 is 0 0035 0 0312 0 0035 The value of the F­statistic Note that RSS X3 X1 X2 F 1 17 we for testing H0 : 0 742 So we have no evidence against the null hypothesis have that P F and can drop X3 from the model. Actually, this is the same P­value as obtained via the t­test of this null hypothesis, as, in general, the t­test that a single regression coefficient is 0 is equivalent to the F­test. Similar tests of the need to include X1 and X2 do not lead us to drop these variables from the model. 0 112 and when F So based on the above results, we decide to drop X3 from the model and use the equation E Y X1 x1 X2 x2 0 7522 0 035402X1 0 06346X2 (10.3.21) 570 Section 10.3: Quantitative Response and Predictors to describe the relationship between Y and the predictors. Note that the least­squares estimates of 1 and 2 in (10.3.21) are obtained by refitting the model without X3 0 Summary of Section 10.3 In this section, we examined the situation in which the response variable and the predictor variables are quantitative. In this situation, the linear regression model provides a possible description of the form of any relationship that may exist between the response and the predic­ tors. Least squares is a standard method for fitting linear regression models to data. The ANOVA is a decomposition of the total variation observed in the response variable into a part attributable to changes in the predictor variables and a part attributable to random error. If we assume a normal linear regression model, then we have inference methods available such as confidence intervals and tests of significance. In particular, we have available the F­test to assess whether or not a relationship exists between the response and the predictors. A normal linear regression model is checked by examining the standardized residuals. EXERCISES 10.3.1 Suppose that x1 distribution, where [0 1] is unknown. What is the least­squares estimate of the mean of this distribu­ xn is a sample from a Bernoulli tion? 10.3.2 Suppose that x1 ], where unknown. What is the least­squares estimate of the mean of this distribution? 10.3.3 Suppose that x1 , where unknown. What is the least­squares estimate of the mean of this distribution? 10.3.4 Consider the n xn is a sample from the Exponential xn is a sample from the Uniform[0 11 data values in the following table. 0 is 0 is Observation 1 2 3 4 5 6 X 5 00 4 00 3 00 2 00 1 00 0 00 Y 10 00 8 83 9 15 4 26 0 30 0 04 Observation 7 8 9 10 11 X 1 00 2 00 3 00 4 00 5 00 Y 3 52 5 64 7 28 7 62 8 51 Suppose we consider the simple normal linear regression to describe the relationship between the response Y and the predictor X (a) Plot the data in a scatter plot. Chapter 10: Relationships Among Variables 571 (b) Calculate the least­squares line and plot this on the scatter plot in part (a). (c) Plot the standardized residuals against X (d) Produce a normal probability plot of the standardized residuals. (e) What are your conclusions based on the plots produced in parts (c) and (d)? (f) If appropriate, calculate 0.95­confidence intervals for the intercept and slope. (g) Construct the ANOVA table to test whether or not there is a relationship between the response and the predictors. What is your conclusion? 
(h) If the model is correct, what proportion of the observed variation in the response is explained by changes in the predictor? (i) Predict a future Y at X Determine the standard error of this prediction. (j) Predict a future Y at X Determine the standard error of this prediction. (k) Predict a future Y at X 20 0 Is this prediction an extrapolation or an interpola­ tion? Determine the standard error of this prediction. Compare this with the standard errors obtained in parts (i) and (j) and explain the differences. 11 data values in the following table. 10.3.5 Consider the n 6 0 Is this prediction an extrapolation or an interpolation? 0 0 Is this prediction an extrapolation or an interpolation? Observation 1 2 3 4 5 6 X 5 00 4 00 3 00 2 00 1 00 0 00 Y 65 00 39 17 17 85 7 74 2 70 0 04 Observation 7 8 9 10 11 X 1 00 2 00 3 00 4 00 5 00 Y 6 52 17 64 34 28 55 62 83 51 Suppose we consider the simple normal linear regression to describe the relationship between the response Y and the predictor X (a) Plot the data in a scatter plot. (b) Calculate the least­squares line and plot this on the scatter plot in part (a). (c) Plot the standardized residuals against X (d) Produce a normal probability plot of the standardized residuals. (e) What are your conclusions based on the plots produced in parts (c) and (d)? (f) If appropriate, calculate 0.95­confidence intervals for the intercept and slope. (g) Do the results of your analysis allow you to conclude that there is a relationship between Y and X? Explain why or why not. (h) If the model is correct, what proportion of the observed variation in the response is explained by changes in the predictor? 10.3.6 Suppose the following data record the densities of an organism in a containment vessel for 10 days. Suppose we consider the simple normal linear regression to describe the relationship between the response Y (density) and the predictor X (day) 572 Section 10.3: Quantitative Response and Predictors Day Number/Liter Day Number/Liter 1341 6 2042 9 7427 0 15571 8 33128 5 1 6 16 7 65 2 23 6 345 3 6 7 8 9 10 1 2 3 4 5 (a) Plot the data in a scatter plot. (b) Calculate the least­squares line and plot this on the scatter plot in part (a). (c) Plot the standardized residuals against X (d) Produce a normal probability plot of the standardized residuals. (e) What are your conclusions based on the plots produced in parts (c) and (d)? (f) Can you think of a transformation of the response that might address any problems found? If so, repeat parts (a) through (e) after performing this transformation. (Hint: The scatter plot looks like exponential growth. What transformation is the inverse of exponentiation?) (g) Calculate 0.95­confidence intervals for the appropriate intercept and slope. (h) Construct the appropriate ANOVA table to test whether or not there is a relationship between the response and the predictors. What is your conclusion? (i) Do the results of your analysis allow you to conclude that there is a relationship between Y and X? Explain why or why not. (j) Compute the proportion of variation explained by th
e predictor for the two models you have considered. Compare the results. 12 Is this prediction an extrapolation or an interpolation? (k) Predict a future Y at X 10.3.7 A student takes weekly quizzes in a course and receives the following grades over 12 weeks. Week Grade Week Grade 1 2 3 4 5 6 65 55 62 73 68 76 7 8 9 10 11 12 74 76 48 80 85 90 grade. (a) Plot the data in a scatter plot with X week and Y (b) Calculate the least­squares line and plot this on the scatter plot in part (a). (c) Plot the standardized residuals against X. (d) What are your conclusions based on the plot produced in (c)? (e) Calculate 0.95­confidence intervals for the intercept and slope. (f) Construct the ANOVA table to test whether or not there is a relationship between the response and the predictors. What is your conclusion? (g) What proportion of the observed variation in the response is explained by changes in the predictor? Chapter 10: Relationships Among Variables 573 x Y exp 1 0 (Hint: Write Z E Y X 0 E Y X and use Theo­ Z , where X Y and Z are random variables. 10.3.8 Suppose that Y (a) Show that E Z X (b) Show that Cov E Y X Z rems 3.5.2 and 3.5.4.) (c) Suppose that Z is independent of X Show that this implies that the conditional distribution of Y given X depends on X only through its conditional mean. (Hint: Evaluate the conditional distribution function of Y given X 10.3.9 Suppose that X and Y are random variables such that a regression model de­ 2 X , then discuss scribes the relationship between Y and X If E Y X whether or not this is a simple linear regression model (perhaps involving a predictor other than X). 10.3.10 Suppose that X and Y are random variables and Corr(X Y 1 Does a simple linear regression model hold to describe the relationship between Y and X ? If so, what is it? 10.3.11 Suppose that X and Y are random variables such that a regression model de­ 2 X 2, then discuss scribes the relationship between Y and X If E Y X whether or not this is a simple linear regression model (perhaps involving a predictor other than X). 10.3.12 Suppose that X Z . N 2 3 independently of Z Does this structure imply that the relationship between Y and X can be summarized by a simple linear regression model? If so, what are 1 10.3.13 Suppose that a simple linear model is fit to data. An analysis of the residuals indicates that there is no reason to doubt that the model is correct; the ANOVA test indicates that there is substantial evidence against the null hypothesis of no relationship between the response and predictor. The value of R2 is found to be 0.05. What is the interpretation of this number and what are the practical consequences? 2, and 2? N 0 1 and Y X 1 COMPUTER EXERCISES 10.3.14 Suppose we consider the simple normal linear regression to describe the re­ lationship between the response Y (income) and the predictor X (investment) for the data in Example 10.3.9. (a) Plot the data in a scatter plot. (b) Calculate the least­squares line and plot this on the scatter plot in part (a). (c) Plot the standardized residuals against X (d) Produce a normal probability plot of the standardized residuals. (e) What are your conclusions based on the plots produced in parts (c) and (d)? (f) If appropriate, calculate 0.95­confidence intervals for the intercept and slope. (g) Do the results of your analysis allow you to conclude that there is a relationship between Y and X? Explain why or why not. 
(h) If the model is correct, what proportion of the observed variation in the response is explained by changes in the predictor? 574 Section 10.3: Quantitative Response and Predictors 10.3.15 The following data are measurements of tensile strength (100 lb/in2) and hard­ ness (Rockwell E) on 20 pieces of die­cast aluminum. Sample 1 2 3 4 5 6 7 8 9 10 Strength Hardness 293 349 340 340 340 354 322 334 247 348 53 70 78 55 64 71 82 67 56 86 Sample 11 12 13 14 15 16 17 18 19 20 Strength Hardness 298 292 380 345 257 265 246 286 324 282 60 51 95 88 51 54 52 64 83 56 Suppose we consider the simple normal linear regression to describe the relationship between the response Y (strength) and the predictor X (hardness). (a) Plot the data in a scatter plot. (b) Calculate the least­squares line and plot this on the scatter plot in part (a). (c) Plot the standardized residuals against X (d) Produce a normal probability plot of the standardized residuals. (e) What are your conclusions based on the plots produced in parts (c) and (d)? (f) If appropriate, calculate 0.95­confidence intervals for the intercept and slope. (g) Do the results of your analysis allow you to conclude that there is a relationship between Y and X? Explain why or why not. (h) If the model is correct, what proportion of the observed variation in the response is explained by changes in the predictor? 10.3.16 Tests were carried out to determine the effect of gas inlet temperature (degrees Fahrenheit) and rotor speed (rpm) on the tar content (grains/cu ft) of a gas stream, producing the following data. Observation 1 2 3 4 5 6 7 8 9 10 Tar 60 0 65 0 63 5 44 0 54 5 26 0 54 0 53 5 33 5 44 0 Speed Temperature 2400 2450 2500 2700 2700 2775 2800 2900 3075 3150 54 5 58 5 58 0 62 5 68 0 45 5 63 0 64 5 57 0 64 0 Suppose we consider the normal linear regression model Y W X x N 1 2 2 3x Chapter 10: Relationships Among Variables 575 to describe the relationship between Y (tar content) and the predictors W (rotor speed) and X (temperature). (a) Plot the response in scatter plots against each predictor. (b) Calculate the least­squares equation. (c) Plot the standardized residuals against W and X (d) Produce a normal probability plot of the standardized residuals. (e) What are your conclusions based on the plots produced in parts (c) and (d)? (f) If appropriate, calculate 0.95­confidence intervals for the regression coefficients. (g) Construct the ANOVA table to test whether or not there is a relationship between the response and the predictors. What is your conclusion? (h) If the model is correct, what proportion of the observed variation in the response is explained by changes in the predictors? (i) In an ANOVA table, assess the null hypothesis that there is no effect due to W given that X is in the model. (j) Estimate the mean of Y when W 2750 and X 50 0 If we consider this value as a prediction of a future Y at these settings, is this an extrapolation or interpolation? 10.3.17 Suppose we consider the normal linear regression model Y X x N 1 2x 3x 2 2 for the data of Exercise 10.3.5. (a) Plot the response Y in a scatter plot against X . (b) Calculate the least­squares equation. (c) Plot the standardized residuals against X (d) Produce a normal probability plot of the standardized residuals. (e) What are your conclusions based on the plots produced in parts (c) and (d)? (f) If appropriate, calculate 0.95­confidence intervals for the regression coefficients. 
(g) Construct the ANOVA table to test whether or not there is a relationship between the response and the predictor. What is your conclusion? (h) If the model is correct, what proportion of the observed variation in the response is explained by changes in the predictors? (i) In an ANOVA table, assess the null hypothesis that there is no effect due to X 2 given that X is in the model. (j) Compare the predictions of Y at X and using the linear model with a linear and quadratic term. 6 using the simple linear regression model PROBLEMS 10.3.18 Suppose that x1 xn is a sample from the mixture distribution 0 5Uniform[0 1] 0 5Uniform[2 ] where distribution? 2 is unknown. What is the least­squares estimate of the mean of this 576 Section 10.3: Quantitative Response and Predictors 10.3.19 Consider the simple linear regression model and suppose that for the data col­ lected, we have 0 Explain how, and for which value of x, you would estimate E Y X 10.3.20 For the simple linear regression model, under the assumptions of Theorem 10.3.3, establish that n i 1 xi x 2 x Cov Yi B1 B2xi Y j B1 B2x j X1 x1 Xn xn 2 i j 2 1 n xi x j x n k 1 xk x x 2 1 when i j and is 0 otherwise. (Hint: Use Theorems 3.3.2 and 10.3.3.) in the where i j 10.3.21 Establish that (10.3.11) is distributed N 0 1 when S is replaced by denominator. (Hint: Use Theorem 4.6.1 and Problem 10.3.20.) 10.3.22 (Prediction intervals) Under the assumptions of Theorem 10.3.6, prove that the interval b1 b2x s 1 1 n 1 2 x 2 xi n k 1 xk x 2 t 1 2 n 2 based on independent x1 y1 for a future independent X Y with X xn yn will contain Y with probability equal to x (Hint: Theorems 4.6.1 and 3.3.2 and Corollary 10.3.1.) 10.3.23 Consider the regression model with no intercept, given by E Y X x R1 is unknown. Suppose we observe the independent values x1 y1 x,where xn yn (a) Determine the least­squares estimate of (b) Prove that the least­squares estimate b of 2 prove that is unbiased and, when Var Y X x Var B X1 x1 Xn xn 2 n i 1 x 2 i (c) Under the assumptions given in part (b), prove that s2 n 1 n 1 i 1 yi 2 bxi 2 is an unbiased estimator of (d) Record an appropriate ANOVA decomposition for this model and a formula for R2 measuring the proportion of the variation observed in Y due to changes in X (e) When Y X xn yn and we observe the independent values x1 y1 2 x prove that f) Under the assumptions of part (e), and assuming that n 1 independent of B (this can be proved), indicate how you would test the null hypothesis of no relationship between Y and X 1 S2 2 2 n Chapter 10: Relationships Among Variables 577 (g) How would you define standardized residuals for this model and use them to check model validity? 10.3.24 For data x1 y1 then prove that if 2x and 2 2 equals xn yn 2 1 1 1 2xi n i 1 yi n i 1 yi xi x 2 2 2 n i 1 xi x yi y . 2 n i 1 xi n i 1 xi y x yi N 1 1 and 2 respectively. From this, deduce that y and a squares of 10.3.25 For the model discussed in Section 10.3.3, prove that the prior given by 1 2 2 2 leads to the posterior distribution stated there. Conclude that this prior is conjugate with the poste­ rior distribution,
as specified. (Hint: The development is similar to Example 7.1.4, as detailed in Section 7.5.) 10.3.26 For the model specified in Section 10.3.3, prove that when 1 Gamma N 2 and 1 x 2 are the least 2 1 2 2 2 2 2 , and 2 x y n 2 0 the posterior distribution of n 1 2 Z , where Z t 2 2 1 is given by the distribution of y a2c2 x 2 c2 y n and x y CHALLENGES 10.3.27 If X1 that Xn is a sample from a distribution with finite variance, then prove Xi X n k 1 Xk 2 X a s 0 10.4 Quantitative Response and Categorical Predictors In this section, we consider the situation in which the response is quantitative and the predictors are categorical. There can be many categorical predictors, but we restrict our discussion to at most two, as this gives the most important features of the general case. The general case is left to a further course. 10.4.1 One Categorical Predictor (One­Way ANOVA) Suppose now that the response Y is quantitative and the predictor X is categorical, a. With the regression model, we assume that taking a values or levels denoted 1 the only aspect of the conditional distribution of Y , given X x that changes as x changes, is the mean. We let E Y X i i denote the mean response when the predictor X is at level i. Note that this is immedi­ ately a linear regression model. 578 Section 10.4: Quantitative Response and Categorical Predictors We introduce the dummy variables Xi 1 0 X X i i 1 a. Notice that, whatever the value is of the response Y , only one of the for i dummy variables takes the value 1, and the rest take the value 0. Accordingly, we can write E Y X1 x1 Xa xa 1x1 a xa because one and only one of the xi 1 whereas the rest are 0. This has exactly the same form as the model discussed in Section 10.3.4, as the Xi are quantitative. As such, all the results of Section 10.3.4 immediately apply (we will restate relevant results here). Inferences About Individual Means Now suppose that we observe ni values yi1 i and all the re­ sponse values are independent. Note that we have a independent samples. The least­ squares estimates of the i are obtained by minimizing yini when X a ni i 1 j 1 yi j 2 i The least­squares estimates are then equal to (see Problem 10.4.14) bi yi 1 ni ni j 1 yi j These can be shown to be unbiased estimators of the i . Assuming that the conditional distributions of Y given X x all have variance equal to 2 we have that the conditional variance of Yi is given by 2 ni and the conditional covariance between Yi and Y j when i is 0. Furthermore, under these 2 is given by conditions, an unbiased estimator of j s2 1 N a a ni i 1 j 1 yi j 2 yi where N n1 nk If, in addition, we assume the normal linear regression model, namely then Yi Definition 4.6.2, 2 ni independent of N a S2 2 2 N a Therefore, by T Yi S i ni t N a , Chapter 10: Relationships Among Variables 579 which leads to a ­confidence interval of the form yi s ni t 1 2 N a for i Also, we can test the null hypothesis H0 : P T yi s i0 ni 2 1 G i yi s i0 by computing the P­value i0 ni N a N a is the cdf of the t N a distribution. Note that these inferences where G are just like those derived in Section 6.3 for the location­scale normal model, except we now use a different estimator of 2 (with more degrees of freedom). Inferences about Differences of Means and Two Sample Inferences Often we want to make inferences about a difference of means E Yi Y j j and i i j . Note that Var Yi Y j Var Yi Var Y j 2 1 ni 1 n j because Yi and Y j are independent. By Theorem 4.6.1, Yi Y j N i 2 1 ni j 1 n j . 
Furthermore, $\bar{Y}_i - \bar{Y}_j$ is independent of $(N-a)S^2/\sigma^2 \sim \chi^2(N-a)$. Therefore, by Definition 4.6.2,

$$T = \frac{\left(\bar{Y}_i - \bar{Y}_j - (\beta_i - \beta_j)\right)\big/\left(\sigma\sqrt{1/n_i + 1/n_j}\right)}{\sqrt{\dfrac{(N-a)S^2/\sigma^2}{N-a}}} = \frac{\bar{Y}_i - \bar{Y}_j - (\beta_i - \beta_j)}{S\sqrt{1/n_i + 1/n_j}} \sim t(N-a). \qquad (10.4.1)$$

This leads to the $\gamma$-confidence interval

$$\bar{y}_i - \bar{y}_j \pm s\sqrt{1/n_i + 1/n_j}\; t_{(1+\gamma)/2}(N-a)$$

for the difference of means $\beta_i - \beta_j$. We can test the null hypothesis $H_0: \beta_i = \beta_j$, i.e., that the difference in the means equals 0, by computing the P-value

$$P\left(|T| > \frac{|\bar{y}_i - \bar{y}_j|}{s\sqrt{1/n_i + 1/n_j}}\right) = 2\left(1 - G\left(\frac{|\bar{y}_i - \bar{y}_j|}{s\sqrt{1/n_i + 1/n_j}};\, N-a\right)\right),$$

where $G(\,\cdot\,; N-a)$ is the cdf of the $t(N-a)$ distribution.

When $a = 2$, i.e., there are just two values for $X$, we refer to (10.4.1) as the two-sample t-statistic, and the corresponding inference procedures are called the two-sample t-confidence interval and the two-sample t-test for the difference of means. In this case, if we conclude that $\beta_1 \neq \beta_2$, then we are saying that a relationship exists between $Y$ and $X$.

The ANOVA for Assessing a Relationship with the Predictor

Suppose, in the general case when $a \geq 2$, we are interested in assessing whether or not there is a relationship between the response and the predictor. There is no relationship if and only if all the conditional distributions are the same; this is true, under our assumptions, if and only if $\beta_1 = \cdots = \beta_a$, i.e., if and only if all the means are equal. So testing the null hypothesis that there is no relationship between the response and the predictor is equivalent to testing the null hypothesis $H_0: \beta_1 = \cdots = \beta_a = \beta$ for some unknown $\beta$.

If the null hypothesis is true, the least-squares estimate of $\beta$ is given by $\bar{y}$, the overall average response value. In this case, we have that the total variation decomposes as (see Problem 10.4.15)

$$\sum_{i=1}^{a}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}\right)^2 = \sum_{i=1}^{a} n_i\left(\bar{y}_i - \bar{y}\right)^2 + \sum_{i=1}^{a}\sum_{j=1}^{n_i}\left(y_{ij} - \bar{y}_i\right)^2,$$

and so the relevant ANOVA table for testing $H_0$ is given below.

Source   Df       Sum of Squares                                          Mean Square
X        a - 1    $\sum_{i=1}^{a} n_i(\bar{y}_i - \bar{y})^2$             $\sum_{i=1}^{a} n_i(\bar{y}_i - \bar{y})^2/(a-1)$
Error    N - a    $\sum_{i=1}^{a}\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_i)^2$   $s^2$
Total    N - 1    $\sum_{i=1}^{a}\sum_{j=1}^{n_i}(y_{ij} - \bar{y})^2$

To assess $H_0$, we use the F-statistic

$$F = \frac{\sum_{i=1}^{a} n_i\left(\bar{y}_i - \bar{y}\right)^2/(a-1)}{s^2}$$

because, under the null hypothesis, both the numerator and the denominator are unbiased estimators of $\sigma^2$. When the null hypothesis is false, the numerator tends to be larger than $\sigma^2$. When we add the normality assumption, we have that $F \sim F(a-1, N-a)$ under $H_0$, and so we compute the P-value

$$P\left(F > \frac{\sum_{i=1}^{a} n_i\left(\bar{y}_i - \bar{y}\right)^2/(a-1)}{s^2}\right)$$

to assess whether the observed value of $F$ is so large as to be surprising. Note that when $a = 2$, this P-value equals the P-value obtained via the two-sample t-test.

Multiple Comparisons

If we reject the null hypothesis of no differences among the means, then we want to see where the differences exist. For this, we use inference methods based on (10.4.1). Of course, we have to worry about the problem of multiple comparisons, as discussed in Section 9.3. Recall that this problem arises whenever we are testing many null hypotheses using a specific critical value, such as 5%, as a cutoff for a P-value, to decide whether or not a difference exists. The cutoff value for an individual P-value is referred to as the individual error rate. In effect, even if no differences exist, the probability of concluding that at least one difference exists, the family error rate, can be quite high.

There are a number of procedures designed to control the family error rate when making multiple comparisons. The simplest is to lower the individual error rate, as the family error rate is typically an increasing function of this quantity. This is the approach we adopt here, and we rely on statistical software to compute and report the family error rate for us; a small sketch of such pairwise comparisons is given below.
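Continuing the illustrative sketch above, the following hedged example computes the ANOVA F-statistic and its P-value and then makes all pairwise comparisons based on (10.4.1) at a lowered individual error rate. The data and the 0.01 cutoff are assumptions of the sketch, and it is not the software procedure referred to in the text.

```python
# Sketch: one-way ANOVA F-test plus all pairwise comparisons based on (10.4.1),
# using a lowered individual error rate.  Data and the 0.01 cutoff are illustrative.
import itertools
import numpy as np
from scipy import stats

groups = [np.array([12.1, 13.4, 11.8, 12.9]),
          np.array([14.2, 15.1, 13.8]),
          np.array([10.9, 11.5, 12.0, 11.2, 10.7])]
a = len(groups)
n = np.array([len(g) for g in groups])
N = n.sum()
ybar = np.concatenate(groups).mean()                    # grand mean
means = np.array([g.mean() for g in groups])

ss_x = (n * (means - ybar)**2).sum()                    # between-groups sum of squares
ss_e = sum(((g - g.mean())**2).sum() for g in groups)   # error sum of squares
s2 = ss_e / (N - a)

F = (ss_x / (a - 1)) / s2
p_overall = stats.f.sf(F, a - 1, N - a)
print(f"F = {F:.3f}, P-value = {p_overall:.4f}")

# Pairwise comparisons at a lowered individual error rate
alpha_individual = 0.01
for i, j in itertools.combinations(range(a), 2):
    t = (means[i] - means[j]) / np.sqrt(s2 * (1/n[i] + 1/n[j]))
    p = 2 * stats.t.sf(abs(t), N - a)
    flag = "difference detected" if p < alpha_individual else ""
    print(f"levels {i+1} vs {j+1}: t = {t:.2f}, P = {p:.4f} {flag}")
```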
We refer to this procedure (making all the pairwise comparisons while lowering the individual error rate) as Fisher's multiple comparison test.

Model Checking

To check the model, we look at the standardized residuals (see Problem 10.4.17) given by

$$\frac{y_{ij} - \bar{y}_i}{s\sqrt{1 - 1/n_i}}. \qquad (10.4.2)$$

We will restrict our attention to various plots of the standardized residuals for model checking. We now consider an example.

EXAMPLE 10.4.1
A study was undertaken to determine whether or not eight different types of fat are absorbed in different amounts during the cooking of donuts. Results were collected based on cooking six different donuts and then measuring the amount of fat in grams absorbed. We take the variable X to be the type of fat and use the model of this section. The collected data are presented in the following table.

Fat 1   Fat 2   Fat 3   Fat 4   Fat 5   Fat 6   Fat 7   Fat 8
164     172     177     178     163     163     150     164
177     197     184     196     177     193     179     169
168     167     187     177     144     176     146     155
156     161     169     181     165     172     141     149
172     180     179     184     166     176     169     170
195     190     197     191     178     178     183     167

A normal probability plot of the standardized residuals is provided in Figure 10.4.1. A plot of the standardized residuals against type of fat is provided in Figure 10.4.2. Neither plot gives us significant grounds for concern over the validity of the model, although there is some indication of a difference in the variability of the response as the type of fat changes. Another useful plot in this situation is a side-by-side boxplot, as it shows graphically where potential differences may lie. Such a plot is provided in Figure 10.4.3. The following table gives the mean amounts of each fat absorbed.

Fat 1     Fat 2     Fat 3     Fat 4     Fat 5     Fat 6     Fat 7     Fat 8
172.00    177.83    182.17    184.50    165.50    176.33    161.33    162.33

The grand mean response is given by 172.81.

[Figure 10.4.1: Normal probability plot of the standardized residuals in Example 10.4.1.]
[Figure 10.4.2: Standardized residuals versus type of fat in Example 10.4.1.]
[Figure 10.4.3: Side-by-side boxplots of the response versus type of fat in Example 10.4.1.]

To assess the null hypothesis of no differences among the types of fat, we calculate the following ANOVA table.

Source   Df   Sum of Squares   Mean Square
X        7    3344             478
Error    40   5799             145
Total    47   9143

Then we use the F-statistic given by F = 478/145 = 3.3. Because F ~ F(7, 40) under H_0, we obtain the P-value P(F > 3.3) = 0.007. Therefore, we conclude that there is a difference among the fat types at the 0.05 level.

To ascertain where the differences exist, we look at all pairwise differences. There are 8 · 7/2 = 28 such comparisons. If we use the 0.05 level to determine whether or not a difference among means exists, then software computes the family error rate as 0.481, which seems uncomfortably high. When we use the 0.01 level, the family error rate falls to 0.151
. With the individual error rate at 0.003, the family error rate is 0.0546. Using the individual error rate of 0.003, the only differences detected among the means are those between Fat 4 and Fat 7, and Fat 4 and Fat 8. Note that Fat 4 has the highest absorption whereas Fats 7 and 8 have the lowest absorptions. Overall, the results are somewhat inconclusive, as we see some evidence of dif­ ferences existing, but we are left with some anomalies as well. For example, Fats 4 and 5 are not different and neither are Fats 7 and 5, but Fats 4 and 7 are deemed to be different. To resolve such conicts requires either larger sample sizes or a more refined experiment so that the comparisons are more accurate. 584 Section 10.4: Quantitative Response and Categorical Predictors 10.4.2 Repeated Measures (Paired Comparisons) Consider k quantitative variables Y1 Suppose that our purpose is to compare the distributions of these variables. Typically, these will be similar variables, all measured in the same units. Yk defined on a population EXAMPLE 10.4.2 Suppose that is a set of students enrolled in a first­year program requiring students to take both calculus and physics, and we want to compare the marks achieved in these subjects. If we let Y1 denote the calculus grade and Y2 denote the physics grade, then we want to compare the distributions of these variables. EXAMPLE 10.4.3 Suppose we want to compare the distributions of the duration of headaches for two treatments A and B) in a population of migraine headache sufferers. We let Y1 denote the duration of a headache after being administered treatment A and let Y2 denote the duration of a headache after being administered treatment B. Yk involves taking a random sample The repeated­measures approach to the problem of comparing the distributions of i , Y1 obtaining the k­dimensional value Y1 yi k . This gives a sample of n from a k­dimensional distribution. Obviously, this is called repeated i on the same measures because we are taking the measurements Y1 n from and, for each 1 Yk yi1 Yk i i i i . An alternative to repeated measures is to take k independent samples from and, for each of these samples, to obtain the values of one and only one of the vari­ ables Yi . There is an important reason why the repeated­measures approach is pre­ ferred: We expect less variation in the values of differences, like Yi Y j under repeated­measures sampling, than we do under independent sampling because the val­ ues Y1 are being taken on the same member of the population in re­ Yk peated measures. To see this more clearly, suppose all of the variances and covariances exist for the joint distribution of Y1 Yk. This implies that Var Yi Y j Var Yi Var Y j 2 Cov Yi Y j . (10.4.3) Because Yi and Y j are similar variables, being measured on the same individual, we expect them to be positively correlated. Now with independent sampling, we have that Var Yi , so the variances of differences should be smaller Var Yi with repeated measures than with independent sampling. Var Yi Y j When we assume that the distributions of the Yi differ at most in their means, then it j In the repeated­measures context, makes sense to make inferences about the differences of the population means using the differences of the sample means yi we can write y j i yi y j 1 n n l 1 yli yl j . 
Chapter 10: Relationships Among Variables 585 Because the individual components of this sum are independent and so, Var Yi Y j Var Yi Var Y j n 2 Cov Yi Y j We can consider the differences d1 y1i y1 j dn yni of n from a one­dimensional distribution with mean i yi (10.4.3). Accordingly, we estimate j by d i j and variance y j and estimate ynj to be a sample 2 given by 2 by s2 n 1 n 1 i 1 di 2 . d (10.4.4) If we assume that the joint distribution of Y1 Yk is multivariate normal (this means that any linear combination of these variables is normally distributed — see 2 . Problem 9.1.18), then this forces the distribution of Yi Accordingly, we have all the univariate techniques discussed in Chapter 6 for inferences about Y j to be N i j i j The discussion so far has been about whether the distributions of variables differed. Assuming these distributions differ at most in their means, this leads to a comparison of the means. We can, however, record an observation as X Y where X takes values k and X in 1 Yi Then the conditional distribution of Y i is the same as the distribution of Yi . Therefore, if we conclude that the given X distributions of the Yi are different, we can conclude that a relationship exists between Y and X In Example 10.4.2, this means that a relationship exists between a student’s grade and whether or not the grade was in calculus or physics. In Example 10.4.3, this means that a relationship exists between length of a headache and the treatment. i means that Y When can we assert that such a relationship is in fact a cause–effect relationship? Applying the discussion in Section 10.1.2, we know that we have to be able to assign the value of X to a randomly selected element of the population. In Example 10.4.2, we see this is impossible, so we cannot assert that such a relationship is a cause–effect In Example 10.4.3, however, we can indeed do this — namely, for a relationship. randomly selected individual, we randomly assign a treatment to the first headache experienced during the study period and then apply the other treatment to the second headache experienced during the study period. A full discussion of repeated measures requires more advanced concepts in statis­ 2 tics. We restrict our attention now to the presentation of an example when k which is commonly referred to as paired comparisons. EXAMPLE 10.4.4 Blood Pressure Study The following table came from a study of the effect of the drug captopril on blood pressure, as reported in Applied Statistics, Principles and Examples by D. R. Cox and E. J. Snell (Chapman and Hall, London, 1981). Each measurement is the difference in the systolic blood pressure before and after having been administered the drug. 9 31 23 4 17 33 21 26 19 3 26 19 20 10 23 586 Section 10.4: Quantitative Response and Categorical Predictors Figure 10.4.4 is a normal probability plot for these data and, because this looks rea­ sonable, we conclude that the inference methods based on the assumption of normality are acceptable. Note that here we have not standardized the variable first, so we are only looking to see if the plot is reasonably straight1 ­2 ­32 ­24 ­8 Blood pressure difference ­16 0 Figure 10.4.4: Normal probability plot for the data in Example 10.4.4. 
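Before the example's computations are summarized next, here is a minimal sketch of how such a paired-comparison analysis could be carried out (assuming NumPy and SciPy are available). The differences are the fifteen values printed above; treating them as positive drops in systolic pressure is an assumption of this sketch, and only the signs of the results, not the inferences, depend on it.

```python
# Sketch: paired-comparison (paired t) analysis of the blood pressure differences
# of Example 10.4.4.  Recording each difference as a positive drop in systolic
# pressure is an assumption made here for illustration.
import numpy as np
from scipy import stats

d = np.array([9, 31, 23, 4, 17, 33, 21, 26, 19, 3, 26, 19, 20, 10, 23], float)
n = len(d)

dbar = d.mean()                       # estimate of the difference in means
s = d.std(ddof=1)                     # sample standard deviation of the differences, as in (10.4.4)
se = s / np.sqrt(n)                   # standard error of dbar

tcrit = stats.t.ppf(0.975, df=n - 1)
ci = (dbar - tcrit * se, dbar + tcrit * se)
t_stat = dbar / se
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

print(f"mean difference {dbar:.2f}, standard error {se:.2f}")
print(f"0.95-confidence interval ({ci[0]:.2f}, {ci[1]:.2f}), two-sided P-value {p_value:.4f}")
```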
The mean difference is given by d 9 03 Accordingly, the standard error of the estimate of the difference in the means, using (10.4.4), is given by s 2 33 A 0 95­confidence interval for the difference in the mean systolic blood pressure, before and after being administered captopril, is then 18 93 with standard deviation s 15 d s n t0 975 n 1 18 93 2 33 t0 975 14 23 93 13 93 Because this does not include 0, we reject the null hypothesis of no difference in the means at the 0.05 level. The actual P­value for the two­sided test is given by P T 18 93 2 33 0 000 because T t 14 under the null hypothesis H0 that the means are equal. Therefore, we have strong evidence against H0. It seems that we have strong evidence that the drug is leading to a drop in blood pressure. 10.4.3 Two Categorical Predictors (Two­Way ANOVA) Now suppose that we have a single quantitative response Y and two categorical pre­ dictors A and B where A takes a levels and B takes b levels. One possibility is to consider running two one­factor studies. One study will examine the relationship be­ tween Y and A, and the second study will examine the relationship between Y and B There are several disadvantages to such an approach, however. First, and perhaps foremost, doing two separate analyses will not allow us to de­ termine the joint relationship A and B have with Y This relates directly to the concept Chapter 10: Relationships Among Variables 587 of interaction between predictors. We will soon define this concept more precisely, but basically, if A and B interact, then the conditional relationship between Y and A given B j, changes in some substantive way as we change j. If the predictors A and B do not interact, then indeed we will be able to examine the relationship between the response and each of the predictors separately. But we almost never know that this is the case beforehand and must assess whether or not an interaction exists based on collected data. A second reason for including both predictors in the analysis is that this will often lead to a reduction in the contribution of random error to the results. By this, we mean that we will be able to explain some of the observed variation in Y by the inclusion of the second variable in the model. This depends, however, on the additional variable having a relationship with the response. Furthermore, for the inclusion of a second variable to be worthwhile, this relationship must be strong enough to justify the loss in degrees of freedom available for the estimation of the contribution of random error to the experimental results. As we will see, including the second variable in the analysis results in a reduction in the degrees of freedom in the Error row of the ANOVA table. Degrees of freedom are playing the role of sample size here. The fewer the degrees of freedom in the Error row, the less accurate our estimate of 2 will be. When we include both predictors in our analysis, and we have the opportunity to determine the sampling process, it is important that we cross the predictors. By this, we mean that we observe Y at each combination A B i j 1 a 1 b . Suppose, then, that we have ni j response values at the A B predictors. Then, letting i j setting of the E Y A B i j i j be the mean response when A i and B j, and introducing the dummy variables Xi j 1 0 A A j i B i or B j we can write E Y Xi j 11x11 21x21 abxab xi j for all i b a j
i j xi j i 1 j 1 The relationship between Y and the predictors is completely encompassed in the changes in the i j as i and j change. From this, we can see that a regression model for this sit­ uation is immediately a linear regression model. 588 Section 10.4: Quantitative Response and Categorical Predictors Inferences About Individual Means and Differences of Means Now let yi jk denote the kth response value when Xi j the least­squares estimate of i j is given by 1 Then, as in Section 10.4.1, bi j yi j 1 ni j ni j k 1 yi jk the mean of the observations when Xi j 1 If in addition we assume that the condi­ tional distributions of Y given the predictors all have variance equal to 2, then with N nab we have that n11 n21 s2 1 N ab a b ni j i 1 j 1 k 1 yi jk 2 yi j (10.4.5) is an unbiased estimator of ni j given by s 2 Therefore, using (10.4.5), the standard error of yi j is of With the normality assumption, we have that Yi j N i j 2 ni j independent N ab S2 2 2 N ab This leads to the ­confidence intervals for i j and yi j yi j ykl s s ni j 1 ni j t 1 2 N ab 1 nkl t 1 2 N ab for the difference of means i j kl The ANOVA for Assessing Interaction and Relationships with the Predictors We are interested in whether or not there is any relationship between Y and the pre­ dictors. There is no relationship between the response and the predictors if and only if all the i j are equal. Before testing this, however, it is customary to test the null hypothesis that there is no interaction between the predictors. The precise definition of no interaction here is that i j i j i and j for all i and j for some constants i.e., the means can be expressed additively. j and let A vary, then these response curves (a response curve Note that if we fix B is a plot of the means of one variable while holding the value of the second variable fixed) are all parallel. This is an equivalent way of saying that there is no interaction between the predictors. Chapter 10: Relationships Among Variables 589 In Figure 10.4.5, we have depicted response curves in which the factors do not in­ teract, and in Figure 10.4.6 we have depicted response curves in which they do. Note that the solid lines, for example, joining 11 and 21 are there just to make it easier to display the parallelism (or lack thereof) and have no other significance. E(Y | A, B) 12 11 • • 1   • • 2  • B = 2  • B = 1 3 A Figure 10.4.5: Response curves for expected response with two predictors, with A taking three levels and B taking two levels. Because they are parallel, the predictors do not interact. E(Y | A, B) 12 11 • • 1   • • 2 B = 2 B = 1   • • 3 A Figure 10.4.6: Response curves for expected response with two predictors, with A taking three levels and B taking two levels. They are not parallel, so the predictors interact. i j i j To test the null hypothesis of no interaction, we must first fit the model where i.e., find the least­squares estimates of the i j under these constraints. We will not pursue the mathematics of obtaining these estimates here, but rely on software to do this for us and to compute the sum of squares relevant for testing the null hypothesis of no interaction (from the results of Section 10.3.4, we know that this 590 Section 10.4: Quantitative Response and Categorical Predictors is obtained by differencing the regression sum of squares obtained from the full model and the regression sums of squares obtained from the model with no interaction). 
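As a hedged illustration of the decomposition that such software reports, the following sketch computes the two-way ANOVA sums of squares directly for a balanced design (equal cell counts $n_{ij} = r$), the case in which this direct decomposition agrees with the differencing of regression sums of squares described above. The data are simulated, and every numerical setting is an assumption of the sketch.

```python
# Sketch: two-way ANOVA decomposition for a balanced design with r observations
# per cell, computed from the grand, row, column, and cell means.
import numpy as np
from scipy import stats

# y[i, j, k] = k-th response at level i of A and level j of B (made-up data)
rng = np.random.default_rng(0)
a, b, r = 3, 2, 4
y = rng.normal(10, 1, size=(a, b, r)) + np.arange(a)[:, None, None]

grand = y.mean()
mA = y.mean(axis=(1, 2))        # level means for A
mB = y.mean(axis=(0, 2))        # level means for B
mAB = y.mean(axis=2)            # cell means

ss_a = b * r * ((mA - grand)**2).sum()
ss_b = a * r * ((mB - grand)**2).sum()
ss_ab = r * ((mAB - mA[:, None] - mB[None, :] + grand)**2).sum()
ss_e = ((y - mAB[:, :, None])**2).sum()

df_ab, df_e = (a - 1) * (b - 1), a * b * (r - 1)
s2 = ss_e / df_e                # unbiased estimate of sigma^2, as in (10.4.5)

F_ab = (ss_ab / df_ab) / s2     # F-statistic for the null hypothesis of no interaction
print("interaction F =", round(F_ab, 3),
      "P-value =", round(stats.f.sf(F_ab, df_ab, df_e), 4))
```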
If we decide that an interaction exists, then it is immediate that both A and B have an effect on Y (if A does not have an effect, then A and B cannot interact — see Problem 10.4.16); we must look at differences among the yi j to determine the form of the relationship. If we decide that no interaction exists, then A has an effect if and only if the j vary. We can test the null hypothesis H0 : a of no effect due to A and the null hypothesis b of no effect due to V separately, once we have decided that no H0 : 1 interaction exists. i vary, and B has an effect if and only if the 1 The details for deriving the relevant sums of squares for all these hypotheses are not covered here, but many statistical packages will produce an ANOVA table, as given below. Source Df A B A Error 1 1 1 b a b a N ab B Total N 1 1 Sum of Squares RSS A RSS B RSS ni j k 1 yi jk ni j k 1 yi jk 2 yi j y 2 Note that if we had included only A in the model, then there would be N of freedom for the estimation of a b 2 By including B, we lose N a 2 1 degrees of freedom for the estimation of a degrees N ab Using this table, we first assess the null hypothesis H0 : no interaction between A and B using F F a 1 b 1 N ab under H0 via the P­value RSS A B P F a s2 1 b 1 where s2 is given by (10.4.5). If we decide that no interaction exists, then we assess the null hypothesis H0 : no effect due to A using F ab under H0, via the P­value 1 N F a RSS A a 1 P F and assess H0 : no effect due to B using F P­value RSS B P F Model Checking s2 s2 F b 1 N ab under H0 via the b 1 . To check the model, we look at the standardized residuals given by (see Problem 10.4.18) yi jk s 1 yi j 1 ni j (10.4.6) Chapter 10: Relationships Among Variables 591 We will restrict our attention to various plots of the standardized residuals for model checking. We consider an example of a two­factor analysis. EXAMPLE 10.4.5 The data in the following table come from G. E. P. Box and D. R. Cox, “An analysis of transformations” (Journal of the Royal Statistical Society, 1964, Series B, p. 211) and represent survival times, in hours, of animals exposed to one of three different types of poisons and allocated four different types of treatments. We let A denote the treatments 12 different A B combinations. and B denote the type of poison, so we have 3 Each combination was administered to four different animals; i.e., ni j 4 for every i and j 4 A1 A2 8 2 11 12 4 3 0 3 7 3 8 2 9 A3 A4 10 B1 B2 B3 A normal probability plot for these data, using the standardized residuals after fit­ ting the two­factor model, reveals a definite problem. In the above reference, a trans­ formation of the response to the reciprocal 1 Y is suggested, based on a more sophis­ ticated analysis, and this indeed leads to much more appropriate standardized residual plots. Figure 10.4.7 is a normal probability plot for the standardized residuals based on the reciprocal response. This normal probability plot looks reasonable1 ­2 ­2 ­1 0 Normal Scores 1 2 Figure 10.4.7: Normal probability plot of the standardized residuals in Example 10.4.5 using the reciprocal of the response. Figure 10.4.8 is a plot of the standardized residuals against the various A B j with j 1 2 3 This coding assigns a unique integer to each j and is convenient when comparing scatter plots of the response for combinations, where we have coded the combination i b 3 i combination i 1 2 3 4 and j as b i 1 592 Section 10.4: Quantitative Response and Categorical Predictors each treatment. 
Again, this residual plot looks reasonable1 ­2 1 2 3 5 4 6 (Treatment, Poison) 7 8 9 10 11 12 Figure 10.4.8: Scatter plot for the data in Example 10.4.5 of the standardized residuals against each value of A B using the reciprocal of the response. Below we provide the least­squares estimates of the i j for the transformed model. A1 0 24869 0 32685 0 48027 A2 0 11635 0 13934 0 30290 A3 0 18627 0 27139 0 42650 A4 0 16897 0 17015 0 30918 B1 B2 B3 The ANOVA table for the data, as obtained from a standard statistical package, is given below. Source Df 3 A 2 B 6 A 36 Error 47 Total B Sum of Squares Mean Square 0 20414 0 34877 0 01571 0 08643 0 65505 0 06805 0 17439 0 00262 0 00240 From this, we determine that s errors of the least­squares estimates are all equal to s 2 0 00240 4 89898 10 2, and so the standard 0 0244949 To test the null hypothesis of no interaction between A and B, we have, using F F 6 36 under H0 the P­value P F 0 00262 0 00240 P F 1 09 0 387 We have no evidence against the null hypothesis. So we can go on to test the null hypothesis of no effect due to A and we have, using F F 2 36 under H0 the P­value P F 0 06805 0 00240 P F 28 35 0 000 Chapter 10: Relationships Among Variables 593 We reject this null hypothesis. Similarly, testing the null hypothesis of no effect due to B, we have, using F F 2 36 under H0 the P­value P F 0 17439 0 00240 P F 72 66 0 000 We reject this null hypothesis as well. Accordingly, we have decided that the appropriate model is the additive model j (we are still using the transformed j given by E 1 Y A B response 1 Y ) We can also write this as for any choice of Therefore, there is no unique estimate of the additive effects due to A or B However, we still have unique least­squares estimates of the means, which are obtained (using software) by fitting the model with constraints on the i j corresponding to no interaction existing. These are recorded in the following table. A1 0 26977 0 31663 0 46941 A2 0 10403 0 15089 0 30367 A3 0 21255 0 25942 0 41219 A4 0 13393 0 18080 0 33357 B1 B2 B3 As we have decided that there is no interaction between A and B we can assess single­factor effects by examining the response means for each factor separately. For example, the means for investigating the effect of A are given in the following table. A1 0 352 A2 0 186 A3 0 295 A4 0 216 We can compare these means using procedures based on the t­distribution. For exam­ ple, a 0.95­confidence interval for the difference in the means at levels A1 and A2 is given by y1 y2 s 12 t0 975 36 0 352 0 186 0 00240 12 2 0281 0 13732 0 19468 (10.4.7) This indicates that we would reject the null hypothesis of no difference between these means at the 0 05 level. Notice that we have used the estimate of 2 based on the full model in (10.4.7). Logically, it would seem to make more sense to use the estimate based on fitting the additive model because we have decided that it is appropriate. When we do so, this is referred to as pooling, as it can be shown that the new err
or estimate is calculated by adding RSS A B degrees of freedom and the error degrees of freedom. Not to pool is regarded as a somewhat more conservative procedure. B to the original ESS and dividing by the sum of the A 594 Section 10.4: Quantitative Response and Categorical Predictors 10.4.4 Randomized Blocks With two­factor models, we generally want to investigate whether or not both of these factors have a relationship with the response Y Suppose, however, that we know that a factor B has a relationship with Y , and we are interested in investigating whether or not another factor A has a relationship with Y . Should we run a single­factor experiment using the predictor A or run a two­factor experiment including the factor B? The answer is as we have stated at the start of Section 10.4.2. Including the factor B will allow us, if B accounts for a lot of the observed variation, to make more accurate comparisons. Notice, however, that if B does not have a substantial effect on Y then its inclusion will be a waste, as we sacrificed a b 1 degrees of freedom that would otherwise go toward the estimation of 2. So it is important that we do indeed know that B has a substantial effect. In such a case, we refer to B as a blocking variable. It is important again that the blocking variable B be crossed with A Then we can test for any effect due to A by first testing for an interaction between A and B; if no such interaction is found, then we test for an effect due to A alone, just as we have discussed in Section 10.4.3. A special case of using a blocking variable arises when we have ni j 1 for all i and ab so there are no degrees of freedom available for the estimation j In this case, N of error. In fact, we have that (see Problem 10.4.19) s2 0 Still, such a design has practical value, provided we are willing to assume that there is no interaction between A and B This is called a randomized block design. For a randomized block design, we have that s2 RSS A a 1 b B 1 (10.4.8) 2, and so we have a is an unbiased estimate of 1 degrees of freedom for the estimation of error. Of course, this will not be correct if A and B do interact, but when they do not, this can be a highly efficient design, as we have removed the effect of the variation due to B and require only ab observations for this. When the randomized block design is appropriate, we test for an effect due to A, using F F a 1 under H0 via the P­value 1 b 1 b 1 a P F RSS A a 1 s2 . 10.4.5 One Categorical and One Quantitative Predictor It is also possible that the response is quantitative while some of the predictors are categorical and some are quantitative. We now consider the situation where we have one categorical predictor A taking a values, and one quantitative predictor W . We assume that the regression model applies. Furthermore, we restrict our attention to the situation where we suppose that, within each level of A the mean response varies as E Y A W i i1 i2 Chapter 10: Relationships Among Variables 595 so that we have a simple linear regression model within each level of A. If we introduce the dummy variables Xi j W j 1 0 A A i i for i 1 a and j 1 2 then we can write the linear regression model as E Y Xi j xi j 11x11 12x12 a1xa1 a2xa2 i1 is the intercept and i2 is the slope specifying the relationship between Y and i The methods of Section 10.3.4 are then available for inference about Here, W when A this model. 
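The following is a small, purely illustrative sketch of how the dummy-variable design matrix just described could be set up and fit by least squares, giving a separate intercept and slope for each level of A. The data, level codes, and variable names are assumptions of the sketch.

```python
# Sketch: design matrix with intercept and slope dummies for each level of a
# categorical predictor A, combined with a quantitative predictor W.
import numpy as np

# Made-up data: A takes values 1..3, W is quantitative
A = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3, 3])
W = np.array([0.5, 1.0, 1.5, 0.4, 1.1, 1.6, 0.6, 0.9, 1.4, 1.8])
Y = np.array([2.1, 3.0, 3.8, 1.0, 2.4, 3.5, 0.5, 1.1, 2.0, 2.7])

a = A.max()
X = np.zeros((len(Y), 2 * a))
for i in range(1, a + 1):
    X[:, 2*(i-1)]     = (A == i).astype(float)   # x_{i1}: intercept dummy for level i
    X[:, 2*(i-1) + 1] = W * (A == i)             # x_{i2}: slope dummy for level i

beta, *_ = np.linalg.lstsq(X, Y, rcond=None)     # least-squares estimates
for i in range(a):
    print(f"level {i+1}: intercept {beta[2*i]:.3f}, slope {beta[2*i+1]:.3f}")
```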
We also have a notion of interaction in this context, as we say that the two pre­ dictors interact if the slopes of the lines vary across the levels of A So saying that no interaction exists is the same as saying that the response curves are parallel when graphed for each level of A If an interaction exists, then it is definite that both A and W have an effect on Y Thus the null hypothesis that no interaction exists is equivalent to H0 : a2 12 If we decide that no interaction exists, then we can test for no effect due to W by testing the null hypothesis that the common slope is equal to 0, or we can test the null hypothesis that there is no effect due to A by testing H0 : a1 i.e., that the intercept terms are the same across the levels of A 11 We do not pursue the analysis of this model further here. Statistical software is available, however, that will calculate the relevant ANOVA table for assessing the var­ ious null hypotheses. Analysis of Covariance Suppose we are running an experimental design and for each experimental unit we can measure, but not control, a quantitative variable W that we believe has an effect on the response Y If the effect of this variable is appreciable, then good statistical practice suggests we should include this variable in the model, as we will reduce the contri­ bution of error to our experimental results and thus make more accurate comparisons. Of course, we pay a price when we do this, as we lose degrees of freedom that would otherwise be available for the estimation of error. So we must be sure that W does have a significant effect in such a case. Also, we do not test for an effect of such a variable, as we presumably know it has an effect. This technique is referred to as the analysis of covariance and is obviously similar in nature to the use of blocking variables. Summary of Section 10.4 We considered the situation involving a quantitative response and categorical predictor variables. By the introduction of dummy variables for the predictor variables, we can con­ sider this situation as a particular application of the multiple regression model of Section 10.3.4. 596 Section 10.4: Quantitative Response and Categorical Predictors If we decide that a relationship exists, then we typically try to explain what form this relationship takes by comparing means. To prevent finding too many statistically significant differences, we lower the individual error rate to ensure a sensible family error rate. When we have two predictors, we first check to see if the factors interact. If the two predictors interact, then both have an effect on the response. A special case of a two­way analysis arises when one of the predictors serves as a blocking variable. It is generally important to know that the blocking variable has an effect on the response, so that we do not waste degrees of freedom by including it. Sometimes we can measure variables on individual experimental units that we know have an effect on the response. In such a case, we include these variables in our model, as they will reduce the contribution of random error to the analysis and make our inferences more accurate. EXERCISES 10.4.1 The following values of a response Y were obtained for three settings of a categorical predictor A A A A 1 2 3 2 9976 0 7468 2 1192 0 3606 1 3308 2 3739 4 7716 2 2167 0 3335 1 5652 0 3184 3 3015 Suppose we assume the normal regression model for these data with one categorical predictor. (a) Produce a side­by­side boxplot for the data. 
(b) Plot the standardized residuals against A (if you are using a computer for your cal­ culations, also produce a normal probability plot of the standardized residuals) Does this give you grounds for concern that the model assumptions are incorrect? (c) Carry out a one­way ANOVA to test for any difference among the conditional means of Y given A (d) If warranted, construct 0.95­confidence intervals for the differences between the means and summarize your findings. 10.4.2 The following values of a response Y were obtained for three settings of a categorical predictor A A A A 1 2 3 0 090 5 120 5 080 0 800 1 580 3 510 33 070 1 760 4 420 1 890 1 740 1 190 Suppose we assume the normal regression model for these data with one categorical predictor. (a) Produce a side­by­side boxplot for the data. Chapter 10: Relationships Among Variables 597 (b) Plot the standardized residuals against A (if you are using a computer for your cal­ culations, also produce a normal probability plot of the standardized residuals) Does this give you grounds for concern that the model assumptions are incorrect? (c) If concerns arise about the validity of the model, can you “fix” the problem? (d) If you have been able to fix any problems encountered with the model, carry out a one­way ANOVA to test for any differences among the conditional means of Y given A (e) If warranted, construct 0.95­confidence intervals for the differences between the means and summarize your findings. 10.4.3 The following table gives the percentage moisture content of two different types of cheeses determined by randomly sampling batches of cheese from the production process. Cheese 1 Cheese 2 39 02 38 79 35 74 35 41 37 02 36 00 38 96 39 01 35 58 35 52 35 70 36 04 Suppose we assume the normal regression model for these data with one categorical predictor. (a) Produce a side­by­side boxplot for the data. (b) Plot the standardized residuals against Cheese (if you are using a computer for your calculations, also produce a normal probability plot of the standardized residuals). Does this give you grounds for concern that the model assumptions are incorrect? (c) Carry out a one­way ANOVA to test for any differences among the conditional means of Y given Cheese Note that this is the same as a t­test for the difference in the means. 10.4.4 In an experiment, rats were fed a stock ration for 100 days with various amounts of gossypol added. The following weight gains in grams were recorded. 0.00% Gossypol 0.04% Gossypol 0.07% Gossypol 0.10% Gossypol 0.13% Gossypol 228 229 218 216 224 208 235 229 233 219 224 220 232 200 208 232 186 229 220 208 228 198 222 273 216 198 213 179 193 183 180 143 204 114 188 178 134 208 196 130 87 135 116 118 165 151 59 126 64 78 94 150 160 122 110 178 154 130 118 118 118 104 112 134 98 100 104 Suppose we assume the normal regression model for these data and treat gossypol as a categorical predictor taking five levels. (a) Create a side­by­side boxplot graph for the data. Does this give you any reason to be concerned about the assumptions that underlie an analy
sis based on the normal regression model? (b) Produce a plot of the standardized residuals against the factor gossypol (if you are using a computer for your calculations, also produce a normal probability plot of the standardized residuals). What are your conclusions? 598 Section 10.4: Quantitative Response and Categorical Predictors (c) Carry out a one­way ANOVA to test for any differences among the mean responses for the different amounts of gossypol. (d) Compute 0.95­confidence intervals for all the pairwise differences of means and summarize your conclusions. 10.4.5 In an investigation into the effect of deficiencies of trace elements on a variable Y measured on sheep, the data in the following table were obtained. Control Cobalt Copper Cobalt + Copper 13 2 13 6 11 9 13 0 14 5 13 4 11 9 12 2 13 9 12 8 12 7 12 9 14 2 14 0 15 1 14 9 13 7 15 8 15 0 15 6 14 5 15 8 13 9 14 4 Suppose we assume the normal regression model for these data with one categorical predictor. (a) Produce a side­by­side boxplot for the data. (b) Plot the standardized residuals against the predictor (if you are using a computer for your calculations, also produce a normal probability plot of the standardized residuals). Does this give you grounds for concern that the model assumptions are incorrect? (c) Carry out a one­way ANOVA to test for any differences among the conditional means of Y given the predictor (d) If warranted, construct 0.95­confidence intervals for all the pairwise differences between the means and summarize your findings. 10.4.6 Two diets were given to samples of pigs over a period of time, and the following weight gains (in lbs) were recorded. Diet A 8 4 14 15 11 10 6 12 13 7 Diet B 7 13 22 15 12 14 18 8 21 23 10 17 Suppose we assume the normal regression model for these data. (a) Produce a side­by­side boxplot for the data. (b) Plot the standardized residuals against Diet. Also produce a normal probability plot of the standardized residuals. Does this give you grounds for concern that the model assumptions are incorrect? (c) Carry out a one­way ANOVA to test for a difference between the conditional means of Y given Diet (d) Construct 0.95­confidence intervals for differences between the means. 10.4.7 Ten students were randomly selected from the students in a university who took first­year calculus and first­year statistics. Their grades in these courses are recorded in the following table. Student Calculus Statistics 1 66 66 2 61 63 3 77 79 4 62 63 5 66 67 6 68 70 7 64 71 8 75 80 9 59 63 10 71 74 Suppose we assume the normal regression model for these data. Chapter 10: Relationships Among Variables 599 (a) Produce a side­by­side boxplot for the data. (b) Treating the calculus and statistics marks as separate samples, carry out a one­way ANOVA to test for any difference between the mean mark in calculus and the mean mark in statistics. Produce the appropriate plots to check for model assumptions. (c) Now take into account that each student has a calculus mark and a statistics mark and test for any difference between the mean mark in calculus and the mean mark in statistics. Produce the appropriate plots to check for model assumptions. Compare your results with those obtained in part (b). (d) Estimate the correlation between the calculus and statistics marks. 10.4.8 The following data were recorded in Statistical Methods, 6th ed., by G. Snedecor and W. Cochran (Iowa State University Press, Ames, 1967) and represent the average number of orets observed on plants in seven plots. 
Each of the plants was planted with either high corms or low corms (a type of underground stem). Corm High Corm Low Plot 1 11 2 14 6 Plot 2 13 3 12 6 Plot 3 12 8 15 0 Plot 4 13 7 15 6 Plot 5 12 2 12 7 Plot 6 11 9 12 0 Plot 7 12 1 13 1 Suppose we assume the normal regression model for these data. (a) Produce a side­by­side boxplot for the data. (b) Treating the Corm High and Corm Low measurements as separate samples, carry out a one­way ANOVA to test for any difference between the population means. Pro­ duce the appropriate plots to check for model assumptions. (c) Now take into account that each plot has a Corm High and Corm Low measurement. Compare your results with those obtained in part (b). Produce the appropriate plots to check for model assumptions. (d) Estimate the correlation between the calculus and statistics marks. 10.4.9 Suppose two measurements, Y1 and Y2, corresponding to different treatments, are taken on the same individual who has been randomly sampled from a population . Suppose that Y1 and Y2 have the same variance and are negatively correlated. Our goal is to compare the treatment means. Explain why it would have been better to have randomly sampled two individuals from and applied the treatments to these Y2 in these two sampling situations.) individuals separately. (Hint: Consider Var Y1 10.4.10 List the assumptions that underlie the validity of the one­way ANOVA test discussed in Section 10.4.1. 10.4.11 List the assumptions that underlie the validity of the paired comparison test discussed in Section 10.4.2. 10.4.12 List the assumptions that underlie the validity of the two­way ANOVA test discussed in Section 10.4.3. 10.4.13 List the assumptions that underlie the validity of the test used with the ran­ domized block design, discussed in Section 10.4.4, when ni j 1 for all i and j. 600 Section 10.4: Quantitative Response and Categorical Predictors PROBLEMS 10.4.14 Prove that i yi yi1 10.4.15 Prove that a i 1 yini ni j 1 yi j ni for i i 1 2 is minimized as a function of the i by a a ni i 1 j 1 yi j y 2 a i 1 ni yi y 2 a ni i 1 j 1 yi j 2 yi yi1 yi ni ni and y is the grand mean. where yi 10.4.16 Argue that if the relationship between a quantitative response Y and two cat­ egorical predictors A and B is given by a linear regression model, then A and B both have an effect on Y whenever A and B interact. (Hint: What does it mean in terms of response curves for an interaction to exist, for an effect due to A to exist?) 10.4.17 Establish that (10.4.2) is the appropriate expression for the standardized resid­ ual for the linear regression model with one categorical predictor. 10.4.18 Establish that (10.4.6) is the appropriate expression for the standardized resid­ ual for the linear regression model with two categorical predictors. 10.4.19 Establish that s2 predictors when ni j 10.4.20 How would you assess whether or not the randomized block design was ap­ propriate after collecting the data? 0 for the linear regression model with two categorical 1 for all i and j COMPUTER PROBLEMS 10.4.21 Use appropriate software to carry out Fisher’s multiple comparison test on the data in Exercise 10.4.5 so that the family error rate is between 0.04 and 0.05. What individual error rate is required? 10.4.22 Consider the data in Exercise 10.4.3, but now suppose we also take into ac­ count that the cheeses were made in lots where each lot corresponded to a production run. Recording the data this way, we obtain the following table. 
Cheese 1 Cheese 2 Lot 1 39 02 38 79 38 96 39 01 Lot 2 35 74 35 41 35 58 35 52 Lot 3 37 02 36 00 35 70 36 04 Suppose we assume the normal regression model for these data with two categorical predictors. (a) Produce a side­by­side boxplot for the data for each treatment. (b) Produce a table of cell means. (c) Produce a normal probability plot of the standardized residuals and a plot of the standardized residuals against each treatment combination (code the treatment combi­ nations so there is a unique integer corresponding to each). Comment on the validity of the model. Chapter 10: Relationships Among Variables 601 (d) Construct the ANOVA table testing first for no interaction between A and B and, if necessary, an effect due to A and an effect due to B (e) Based on the results of part (d), construct the appropriate table of means, plot the corresponding response curve, and make all pairwise comparisons among the means. (f) Compare your results with those obtained in Exercise 10.4.4 and comment on the differences. 10.4.23 A two­factor experimental design was carried out, with factors A and B both categorical variables taking three values. Each treatment was applied four times and the following response values were obtained. B B B 1 2 3 A 19 86 20 15 15 35 21 86 4 01 21 66 1 20 88 25 44 15 86 26 92 4 48 25 93 A 26 37 24 87 22 82 29 38 10 34 30 59 2 24 38 30 93 20 98 34 13 9 38 40 04 A 29 72 30 06 27 12 34 78 15 64 36 80 3 29 64 35 49 24 27 40 72 14 03 42 55 Suppose we assume the normal regression model for these data with two categorical predictors. (a) Produce a side­by­side boxplot for the data for each treatment. (b) Produce a table of cell means. (c) Produce a normal probability plot of the standardized residuals and a plot of the standardized residuals against each treatment combination (code the treatment combi­ nations so there is a unique integer corresponding to each). Comment on the validity of the model. (d) Construct the ANOVA table testing first for no interaction between A and B and, if necessary, an effect due to A and an effect due to B (e) Based on the results of part (d), construct the appropriate table of means, plot the corresponding response curves, and make all pairwise comparisons among the means. 10.4.24 A chemical paste is made in batches and put into casks. Ten delivery batches were randomly selected for testing; then three casks were randomly selected from each delivery and the paste strength was measured twice, based on samples drawn from each sampled cask. The response was expressed as a percentage of fill strength. The col­ lected data are given in the following table. Suppose we assume the normal regression model for these data with two categorical predictors. Cask 1 Cask 2 Cask 3 Cask 1 Cask 2 Cask 3 Batch 1 62 8 62 6 60 1 62 3 62 7 63 1 Batch 6 63 4 64 9 59 3 58 1 60 5 60 0 Batch 2 60 0 61 4 57 5 56 9 61 1 58 9 Batch 7 62 5 62 6 61 0 58 7 56 9 57 7 Batch 3 58 7 57 5 63 9 63 1 65 4 63 7 Batch 8 59 2 59 4 65
2 66 0 64 8 64 1 Batch 4 57 1 56 4 56 9 58 6 64 7 64 5 Batch 9 54 8 54 8 64 0 64 0 57 7 56 8 Batch 5 55 1 55 1 54 7 54 2 58 5 57 5 Batch 10 58 3 59 3 59 2 59 2 58 9 56 8 602 Section 10.5: Categorical Response and Quantitative Predictors (a) Produce a side­by­side boxplot for the data for each treatment. (b) Produce a table of cell means. (c) Produce a normal probability plot of the standardized residuals and a plot of the standardized residuals against each treatment combination (code the treatment combi­ nations so there is a unique integer corresponding to each). Comment on the validity of the model. (d) Construct the ANOVA table testing first for no interaction between Batch and Cask and, if necessary, no effect due to Batch and no effect due to Cask (e) Based on the results of part (d), construct the appropriate table of means and plot the corresponding response curves. 10.4.25 The following data arose from a randomized block design, where factor B is the blocking variable and corresponds to plots of land on which cotton is planted. Each plot was divided into five subplots, and different concentrations of fertilizer were ap­ plied to each, with the response being a strength measurement of the cotton harvested. There were three blocks and five different concentrations of fertilizer. Note that there is only one observation for each block and concentration combination. Further discussion of these data can be found in Experimental Design, 2nd ed., by W. G. Cochran and G. M. Cox (John Wiley & Sons, New York, 1957, pp. 107–108). Suppose we assume the normal regression model with two categorical predictors. A A A A A 36 54 72 108 144 1 B 7 62 8 14 7 70 7 17 7 46 2 B 8 00 8 15 7 73 7 57 7 68 3 B 7 93 7 87 7 74 7 80 7 21 (a) Construct the ANOVA table for testing for no effect due to fertilizer and which also removes the variation due to the blocking variable. (b) Beyond the usual assumptions that we are concerned about, what additional as­ sumption is necessary for this analysis? (c) Actually, the factor A is a quantitative variable. If we were to take this into ac­ count by fitting a model that had the same slope for each block but possibly different intercepts, then what benefit would be gained? (d) Carry out the analysis suggested in part (c) and assess whether or not this model makes sense for these data. 10.5 Categorical Response and Quantitative Predictors We now consider the situation in which the response is categorical but at least some of the predictors are quantitative. The essential difficulty in this context lies with the quantitative predictors, so we will focus on the situation in which all the predictors Chapter 10: Relationships Among Variables 603 are quantitative. When there are also some categorical predictors, these can be han­ dled in the same way, as we can replace each categorical predictor by a set of dummy quantitative variables, as discussed in Section 10.4.5. For reasons of simplicity, we will restrict our attention to the situation in which the response variable Y is binary valued, and we will take these values to be 0 and 1 Suppose, then, that there are k quantitative predictors X1 0 1 , we have Xk Because Y E Y X1 x1 Xk xk P Y 1 X1 x1 Xk xk [0 1] . Therefore, we cannot write E Y x1 some unnatural restrictions on the i to ensure that xk 1x1 1x1 Perhaps the simplest way around this is to use a 1–1 function l : [0 1] write so that l P Y 1 X1 x1 Xk xk 1x1 k xk k xk without placing [0 1]. k xk R1 and P Y 1 X1 x1 Xk xk l 1 1x1 k xk . We refer to l as a link function. 
There are many possible choices for l. For example, it is immediate that we can take l to be any inverse cdf for a continuous distribution. If we take l 1 i.e., the inverse cdf of the N 0 1 distribution, then this is called the probit link. A more commonly used link, due to some inherent mathematical simplicities, is the logistic link given by l p ln p 1 p . (10.5.1) The right­hand side of (10.5.1) is referred to as the logit or log odds. The logistic link is the inverse cdf of the logistic distribution (see Exercise 10.5.1). We will restrict our discussion to the logistic link hereafter. The logistic link implies that (see Exercise 10.5.2) P Y 1 X1 x1 Xk xk exp 1x1 1 exp 1x1 k xk k xk (10.5.2) which is a relatively simple relationship. We see immediately, however, that Var Y X1 P Y x1 1 X1 Xk x1 xk Xk xk 1 P Y 1 X1 x1 Xk xk so the variance of the conditional distribution of Y , given the predictors, depends on the values of the predictors. Therefore, these models are not, strictly speaking, regression models as we have defined them. Still when we use the link function given by (10.5.1), we refer to this as the logistic regression model. Now suppose we observe n independent observations xi1 xik yi for i 1 n We then have that, given xi1 xik the response yi is an observation 604 Section 10.5: Categorical Response and Quantitative Predictors from the Bernoulli P Y implies that the conditional likelihood, given the values of the predictors, is 1 X1 distribution. Then (10.5.2) Xk x1 xk n i 1 exp 1x1 1 exp 1x1 yi k xk k xk 1 1 yi 1 exp 1x1 k xk Inference about the i then proceeds via the likelihood methods discussed in Chap­ ter 6. In fact, we need to use software to obtain the MLE’s, and, because the exact sampling distributions of these quantities are not available, the large sample methods discussed in Section 6.5 are used for approximate confidence intervals and P­values. Note that assessing the null hypothesis H0 : 0 is equivalent to assessing the null hypothesis that the predictor Xi does not have a relationship with the response. i We illustrate the use of logistic regression via an example. EXAMPLE 10.5.1 The following table of data represent the (number of failures, number of successes) for ingots prepared for rolling under different settings of the predictor variables, U soaking time and V heating time, as reported in Analysis of Binary Data, by D. R. Cox (Methuen, London, 1970). A failure indicates that an ingot is not ready for rolling after the treatment. There were observations at 19 different settings of these variables 10 0 17 0 7 0 12 0 9 V 0 31 0 43 2 31 0 31 0 19 14 V 27 V 51 1 55 4 40 0 21 1 21 1 15 3 10 0 1 0 1 0 0 0 1 Including an intercept in the model and linear terms for U and V leads to three predictor variables X1 1 X2 U X3 V and the model takes the form P Y 1 X2 x2 X3 x3 exp 1 2x2 3x3 1 exp 1 2x2 3x3 Fitting the model via the method of maximum likelihood leads to the estimates given in the following table. Here, z is the value of estimate divided by its standard error. Because this is approximately distributed N 0 1 when the corresponding i equals 0, the P­value for assessing the null hypothesis that z with Z 0 is P Z N 0 1 . i Coefficient 1 2 3 Estimate 5 55900 0 05680 0 08203 Std. 
Error 1 12000 0 33120 0 02373 z 4 96 0 17 3 46 P­value 0 000 0 864 0 001 Of course, we have to feel confident that the model is appropriate before we can i In this case, we note that the number proceed to make formal inferences about the Chapter 10: Relationships Among Variables 605 of successes s x2 x3 in the cell of the table, corresponding to the setting X2 X3 x2 x3 , is an observation from a Binomial m x2 x3 P Y 1 X2 x2 X3 x3 distribution, where m x2 x3 is the sum of the number of successes and failures in that cell. So, for example, if X2 10 and 1 0 and X3 s 1 0 7 obtained by plugging in the MLE, we have that (see Problem 10.5.8) 7 then m 1 0 7 x2 X3 10 Denoting the estimate of P Y x3 by p x2 x3 V 1 X2 U X 2 x2 x3 s x2 x3 m x2 x3 p x2 x3 2 m x2 x3 p x2 x3 (10.5.3) 2 19 2 16 distribution when the model is is asymptotically distributed as a 3 correct. We determine the degrees of freedom by counting the number of cells where there were observations (19 in this case, as no observations were obtained when U 2 8 V X 2 evidence that the model is incorrect and can proceed to make inferences about the based on the logistic regression model. 51) and subtracting the number of parameters estimated. For these data, 0 633 Therefore, we have no 13 543 13 543 and the P­value is P 2 16 i From the preceding table, we see that the null hypothesis H0 : rejected. Accordingly, we drop X2 and fit the smaller model given by 2 0 is not P Y 1 X3 x3 exp 1 3x3 1 exp 1 3x3 This leads to the estimates 0 08070 Note that these are only marginally different from the previous estimates. In Figure 10.5.1, we present a graph of the fitted function over the range where we have observed X3. 5 4152 and 3 1 1.0 0.9 0. 10 30 V Figure 10.5.1: The fitted probability of obtaining an ingot ready to be rolled as a function of heating time in Example 10.5.1. 50 40 20 606 Section 10.5: Categorical Response and Quantitative Predictors Summary of Section 10.5 We have examined the situation in which we have a single binary­valued re­ sponse variable and a number of quantitative predictors. One method of expressing a relationship between the response and predictors is via the use of a link function. If we use the logistic link function, then we can carry out a logistic regression analysis using likelihood methods of inference. EXERCISES R1, defined by f x R1 is a density function with distribution function given by F x 10.5.1 Prove that the function f : R1 x and inverse cdf given by F 1 p logistic distribution. 10.5.2 Establish (10.5.2). 10.5.3 Suppose that a logistic regression model for a binary­valued response Y is given by e x 1 [0 1] This is called the 2 for 1 p for p e x 1 e x ln 1 ln p P Y 1 x exp 1 2x 1 exp 1 2x 2x x is given by 1 Prove that the log odds at X 10.5.4 Suppose that instead of the inverse logistic cdf as the link function, we use the inverse cdf of a Laplace distribution (see Problem 2.4.22). Determine the form of P Y 10.5.5 Suppose that instead of the inverse logistic cdf as the link function, we use the inverse cdf of a Cauchy distribution (see Problem 2.4.21). Determine the form of P Y 1 X1 xk . Xk x1 1 X1 xk . Xk x1 COMPUTER EXERCISES 10.5.6 Use software to rep
licate the results of Example 10.5.1. 10.5.7 Suppose that the following data were obtained for the quantitative predictor X and the binary­valued response variable a) Using these data, fit the logistic regression model given by P Y 1 x exp 1 2x 1 exp 1 2x 3x 2 3x 2 (b) Does the model fit the data? (c) Test the null hypothesis H0 : 0 3 Chapter 10: Relationships Among Variables 607 (d) If you decide there is no quadratic effect, refit the model and test for any linear effect. (e) Plot P Y 1 x as a function of x PROBLEMS 10.5.8 Prove that (10.5.3) is the correct form for the chi­squared goodness­of­fit test statistic. 10.6 Further Proofs (Advanced) Proof of Theorem 10.3.1 We want to prove that, when E Y X values x1 y1 are given by b1 b2x and y 1 xn yn for X Y then the least­squares estimates of 2x and we observe the independent 1 and 2 x b2 n x i 1 xi n i 1 xi y yi x 2 whenever n i 1 xi x 2 0 We need an algebraic result that will simplify our calculations. Lemma 10.6.1 If x1 y1 n i 1 yi q r R1, then b1 PROOF We have xn yn are such that r xi b2xi q 0 n i 1 xi x 2 0 and n i 1 yi b1 b2xi ny nb1 nb2x n y y b2x b2x 0 which establishes that formulas in Theorem 10.3.1, we obtain b1 n i 1 yi n yi b1 b2xi xi i 1 n yi yi i 1 n i 1 b1 b2xi xi x y b2 xi x xi x This establishes the lemma. b2xi q 0 for any q Now using this, and the n i 1 yi y xi x n i 1 yi y xi x 0 608 Section 10.6: Further Proofs (Advanced) Returning to the proof of Theorem 10.3.1, we have n i 1 yi n 1 2 2xi n i 1 yi b1 2 b2xi 2 b1 b2xi 1 b1 b2 xi 2 2 yi b1 b2xi 1 b1 2 b2 xi yi b1 b2 xi 2 2 yi b1 i 1 2 b2xi n i 1 1 b1 b2xi 2 2 as the middle term is 0 by Lemma 10.6.1. Therefore, n i 1 yi 1 2 2xi n i 1 yi b1 2 b2xi and n i 1 yi 1 2xi n 2 takes its minimum value if and only if 1 b1 b2 xi 2 0. 2 i 1 This occurs if and only if not all the same value, this is true if and only if the proof. b1 2 1 b2 xi 1 Proof of Theorem 10.3.2 We want to prove that, if E Y X values x1 y1 (i) E B1 X1 (ii) E B2 X1 1 xn yn for X Y then xn xn Xn Xn x1 x1 x 1 2. 0 for every i Because the xi are b2, which completes b1 and 2 2x and we observe the independent From Theorem 10.3.1 and E Y X1 x1 Xn xn 1 2x we have that E B2 X1 x1 Xn xn n i 1 xi x 1 n i 1 xi 2xi x 2 1 2x n i 1 xi n i 1 xi 2 x 2 x 2 2 Also, from Theorem 10.3.1 and what we have just proved, E B1 X1 x1 Xn xn 1 2x 2x 1 Chapter 10: Relationships Among Variables 609 Proof of Theorem 10.3.3 x We want to prove that, if E Y X x and we observe the independent values x1 y1 x1 (i) Var B1 X1 (ii) Var B2 X1 x1 (iii) Cov B1 B2 X1 xn xn Xn 2 1 n 2 Xn Xn 1 x 2 n i 1 xi 2x 2x Var(Y X x xn yn for X Y then n x 2 i 1 xi x 2 n i 1 xi x 2 x1 xn We first prove (ii). Observe that b2 is a linear combination of the yi y values, so we can evaluate the conditional variance once we have obtained the conditional variances and covariances of the Yi Y values. We have that 2 for every Yi Y 1 1 n Yi 1 n j Y j i so the conditional variance of Yi Y is given by 2 1 2 1 n 2 n 1 n2 2 1 1 n When i j we can write Yi Y 1 1 n Yi 1 n Y j 1 n k i j Yk and the conditional covariance between Yi Y and Y j Y is then given by n2 2 n (note that you can assume that the means of the expectations of the Y ’s are 0 for this calculation). Therefore, the conditional variance of B2 is given by Var B2 x1 xn 2 1 1 n 2 n i 1 xi x 2 , n i 1 xi n i 1 xi x 2 x 2 2 2 n i j xi n i 1 xi x x j x x 2 2 because xi x x j x i j xi x 2 n i 1 xi x 2 xi x 2. 
n i 1 n i 1 610 Section 10.6: Further Proofs (Advanced) For (iii), we have that Cov B1 B2 X1 Cov Y x1 B2 X B2 X1 Cov Y B2 X1 x1 and xn Xn x1 Xn Xn xn xn x Var B2 X1 x1 Xn xn Cov Y B2 X1 i 1 xi x1 x Cov Yi 2 1 1 n i 1 xi n i 1 xi Xn xn Y Y X1 x 2 n i 1 xi x x 2 0. x1 Xn xn Therefore, Cov B1 B2 X1 Finally, for (i), we have, x1 Xn xn 2x n i 1 xi x 2. Var B1 X1 x1 Xn xn Var Y X1 x1 2 Cov Y B2 X1 Xn x1 Var Y B2x X1 x 2 Var B2 X1 x1 x1 xn Xn xn Xn xn Xn xn where Var Y X1 xn (iii) completes the proof of the theorem. Xn x1 2 n Substituting the results for (ii) and Proof of Corollary 10.3.1 We need to show that Var B1 B2x X1 x1 Xn xn For this, we have that Var B1 B2x X1 x1 Xn xn 2 1 n x 2 x n i 1 xi x 2 Var B1 X1 x1 Xn xn x 2 Var B2 X1 x1 Xn xn 2x Cov B1 B2 X1 x 2 2 x 2 n i 1 xi x1 xn Xn 1 n 2 x 2 x n i 1 xi x 2 2x x x 2 1 n Proof of Theorem 10.3.4 We want to show that, if E Y X x and we observe the independent values x1 y1 x 1 2x Var Y X x xn yn for X Y then 2 for every E S2 X1 x1 Xn xn 2 Chapter 10: Relationships Among Variables 611 We have that S2 X1 n Yi B1 i 1 x1 Xn xn B2xi 2 X1 x1 Xn xn E Yi Y B2 xi x 2 X1 x1 Xn xn Var Yi Y B2 xi x X1 x1 Xn xn because Now, E Yi Y B2 xi 1 2xi 1 x X1 2x x1 2 xi Xn xn x 0 x1 Xn xn Var Yi Y Var Yi 2 xi xi x X1 B2 xi Y X1 x1 x Cov Yi x 2 Var B2 X1 Xn xn Y B2 X1 x1 Xn xn x1 Xn xn and, using the results established about the covariances of the Yi Theorem 10.3.3, we have that Y in the proof of Var Yi Y X1 x1 Xn xn 2 1 1 n and Cov Yi Y B2 X1 n x 2 j 1 1 n i 1 xi 2 n i 1 xi x 2 x1 x j Xn xn x Cov Yi Y Y j Y X1 x1 Xn xn 1 1 n xi x 1 n j i x j x 2 xi n i 1 xi x x 2 because i x j j x xi x Therefore, Var Yi Y B2 xi X1 2 xi n i 1 xi x 2 xi n i 1 xi x 2 x1 x 2 x 2 xn Xn 2 xi n i 1 xi x 2 x 2 612 and Section 10.6: Further Proofs (Advanced) E S2 X1 x1 Xn xn 2 n n 2 i 1 1 1 n xi n i 1 xi x 2 x 2 2 as was stated. Proof of Lemma 10.3.1 We need to show that, if x1 y1 n xn yn are such that n i 1 xi x 2 0 then n i 1 xi x 2 n i 1 yi b1 2 b2xi y 2 b2 2 yi y 2 i 1 We have that n yi y2 i n y2 yi yi b1 b2xi b1 2 b2xi ny2 b1 2 b2xi n i 1 b1 2 b2xi ny2 n because i 1 yi 10.3.1, we have n b1 b2xi i 1 b1 b2xi b1 b2xi 0 by Lemma 10.6.1. Then, using Theorem 2 ny2 n i 1 y b2 xi x 2 n y2 b2 2 n i 1 xi x 2 and this completes the proof. Proof of Theorem 10.3.6 We want to show that, if Y given X observe the independent values x1 y1 distributions of B1 B2 and S2 given X1 n i 1 xi xi (i) B1 (ii) B2 (iii) x is distributed N 1 2 and we xn yn for X Y , then the conditional 2x Xn xn are as follows. x1 x 2 B1 B2x N 1 2 2x 1 n x 2 x n i 1 xi x 2 2 n 2 S2 2 2 independent of B1 B2 (iv) n We first prove (i). Because B1 can be written as a linear combination of the Yi , Theorem 4.6.1 implies that the distribution of B1 must be normal. The result then follows from Theorems 10.3.2 and 10.3.3. A similar proof establishes (ii) and (iii). The proof of (iv) is similar to the proof of Theorem 4.6.6, and we leave this to a further course in statistics. Chapter 10: Relationships Among Variables 613 Proof of Corollary 10.3.2 We want to show (i) B1 1 S 1 n (ii) B2 (iii) 2 n i 1 xi x 2 n i 1 xi B1 2x 1 n i 1 xi (iv) If F is defined as in (10.3.8), then H0 : F 1 n B2x is true if and only if F We first prove (i). Because B1 and S2 are independent B1 1 n i 1 xi independent of n have 2 S2 2 2 n 2 Therefore, applying Definition 4.6.2, we xi B1 1 n i 1 xi 1 B1 x 2 1 2 n 2 S2 For (ii), the proof proceeds just as in the proof of (i). For (iii), the proof proceeds just as in the proof of (i) and also using Corollary 10.3.1. 
We now prove (iv). Taking the square of the ratio in (ii) and applying Theorem 4.6.11 implies G S2 B2 n i 1 xi 2 2 1 x 2 B2 2 2 n i 1 xi x 2 S2 F 1 n 2 . Now observe that F defined by (10.3.8) equals G when 2 F further course. 0 The converse that 0 is somewhat harder to prove and we leave this to a 2 only if F 1 n 2 Chapter 11 Advanced Topic — Stochastic Processes CHAPTER OUTLINE Section 1 Simple Random Walk Section 2 Markov Chains Section 3 Markov Chain Monte Carlo Section 4 Martingales Section 5 Brownian Motion Section 6 Poisson Processes Section 7 Further Proofs In this chapter, we consider stochastic processes, which are processes that proceed randomly in time. That is, rather than consider fixed random variables X, Y , etc., or even sequences of independent and identically distributed (i.i.d.) random variables, we where Xn represents some random shall instead consider sequences X0 X1 X2 quantity at time n. In general, the value Xn at time n might depend on the quantity Xn 1 at time n 1, or even the values Xm for other times m n. Stochastic processes have a different “avor” from ordinary random variables — because they proceed in time, they seem more “alive.” We begin with a simple but very interesting case, namely, simple random walk. 11.1 Simple Random Walk Simple random walk can be thought of as a model for repeated gambling. Specifically, suppose you start with $a, and repeatedly make $1 bets. At each bet, you have proba­ 1. If Xn is the bility p of winning $1 and probability q of losing $1, where p a, amount of money you have at time n (henceforth, your fortune at time n), then X0 1 depending on whether you win or lose your first while X1 could be a bet. Then X2 could be a 2 (if you win your first two bets), or a (if you win once and 2 (if you lose your first two bets). Continuing in this way, we obtain lose once), or a 1 or a q 615 616 Section 11.1: Simple Random Walk a whole sequence X0 X1 X2 times 0 1 2 . of random values, corresponding to your fortune at We shall refer to the stochastic process Xn as simple random walk. Another way p and P Zi to define this model is to start with random variables Zi 1 1 the ith bet, while Zi we set q, where 0 1 if you lose the ith bet.) We then set X0 p 1 p that are i.i.d. with P Zi 1. (Here, Zi 1 if you win 1 a, and for n Xn a Z1 Z2 Zn. The following is a specific example of this. EXAMPLE 11.1.1 Consider simple random walk with a 1 3, so you start with $8 and have probability 1 3 of winning each bet. Then the probability that you have $9 after one bet is given by 8 and p P X1 9 P 8 Z1 9 P Z1 1 1 3, as it should be. Also, the probability that you have $7 after one bet is given by P X1 7 P 8 Z1 7 P Z1 1 2 3. On the other hand, the probability that you have $10 after two bets is given by P X2 10 P 8 Z1 Z2 10 P Z1 Z2 1 1 3 1 3 1 9. EXAMPLE 11.1.2 Consider again simple random walk with a that you have $7 after three bets is given by 8 and p 1 3. Then the probability P X3 7 P 8 Z1 Z2 Z3 7 P Z1 Z2 Z3 1 . 1, while Z2 Now, there are three differ
ent ways we could have Z1 Z1 while Z1 Hence, 1, namely: (a) 1, Z3 1. Each of these three options has probability 1 3 2 3 2 3 . 1; or (c) Z3 1, while Z1 1; (b) Z2 Z2 Z3 Z2 Z3 P X3 . If the number of bets is much larger than three, then it becomes less and less con­ venient to compute probabilities in the above manner. A more systematic approach is required. We turn to that next. 11.1.1 The Distribution of the Fortune We first compute the distribution of Xn, i.e., the probability that your fortune Xn after n bets takes on various values. Chapter 11: Advanced Topic — Stochastic Processes 617 Theorem 11.1.1 Let Xn be simple random walk as before, and let n be a positive k integer. If k is an integer such that k is even, then n and n n P Xn a k n n k 2 p n k 2q n k 2. For all other values of k, we have P Xn a n 2 p 1 a k 0. Furthermore, E Xn PROOF See Section 11.7. This theorem tells us the entire distribution, and expected value, of the fortune Xn at time n. EXAMPLE 11.1.3 Suppose p 1 3 n n 5 the other hand, 8 and a 1. Then P Xn 6 13 is not even. Also, P Xn 13 0 because 13 1 0 because 6 1 12 and 12 5, and n. On P Xn 5 P Xn 1 4 n n 4 2 p n 4 2q 0256. Also, E Xn . Regarding E Xn , we immediately obtain the following corollary. Corollary 11.1.1 If p E Xn a for all n 1 2, then E Xn a for all n 1. If p 1 2, then E Xn a for all n 0. If p 1. 1 2, then This corollary has the following interpretation. If p 1 2, then the game is fair, i.e., both you and your opponent have equal chance of winning each bet. Thus, the corollary says that for fair games, your expected fortune E Xn will never change from its initial value, a. On the other hand, if p 1 2, then the game is subfair, i.e., your opponent’s chances are better than yours. In this case, the corollary says your expected fortune will decrease, i.e., be less than its initial value of a. Similarly, if p 1 2 then the game is superfair, and the corollary says your expected fortune will increase, i.e., be more than its initial value of a. Of course, in a real gambling casino, the game is always subfair (which is how the casino makes its profit). Hence, in a real casino, the average amount of money with which you leave will always be less than the amount with which you entered! EXAMPLE 11.1.4 3n 4 Hence, 10 and p Suppose a we always have E Xn 14. That is, your expected fortune is never more than your initial value of $10 and in fact is negative after 14 or more bets. 1 4. Then E Xn 10, and indeed E Xn 0 if n n 2 p 10 10 1 618 Section 11.1: Simple Random Walk Finally, we note as an aside that it is possible to change your probabilities by chang­ ing your gambling strategy, as in the following example. Hence, the preceding analysis applies only to the strategy of betting just $1 each time. EXAMPLE 11.1.5 Consider the “double ’til you win” gambling strategy, defined as follows. We first bet $1. Each time we lose, we double our bet on the succeeding turn. As soon as we win once, we stop playing (i.e., bet zero from then on). It is easily seen that, with this gambling strategy, we will be up $1 as soon as we 0). Hence, with probability 1 win a bet (which must happen eventually because p we will gain $1 with this gambling strategy for any positive value of p. p This is rather surprising, because if 0 1 2 then the odds in this game are against us. So it seems that we have “cheated fate,” and indeed we have. 
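As a rough illustration, the strategy can be simulated in a few lines of Python. The parameter choices below (p = 0.4 and 10,000 plays) and the function name are arbitrary illustrative assumptions; the point is only that every completed play nets exactly $1, while the amount of capital tied up along the way varies wildly.

import random

def double_til_you_win(p, max_bets=10_000):
    # Bet $1, double the bet after every loss, and stop at the first win.
    # Returns (net gain, largest amount we were down along the way).
    bet, total_lost = 1, 0
    for _ in range(max_bets):
        if random.random() < p:        # win this bet
            return bet - total_lost, total_lost
        total_lost += bet              # lose this bet ...
        bet *= 2                       # ... and double the next one
    return -total_lost, total_lost     # essentially never reached when p > 0

random.seed(1)
results = [double_til_you_win(0.4) for _ in range(10_000)]
print({gain for gain, _ in results})      # {1}: every play ends exactly $1 ahead
print(max(low for _, low in results))     # but the capital required can be enormous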
On the other hand, we may need to lose an arbitrarily large amount of money before we win our $1, so “infinite capital” is required to follow this gambling strategy. If only finite capital is available, then it is impossible to cheat fate in this manner. For a proof of this, see more advanced probability books, e.g., page 64 of A First Look at Rigorous Probability Theory, 2nd ed., by J. S. Rosenthal (World Scientific Publishing, Singapore, 2006). 11.1.2 The Gambler’s Ruin Problem The previous subsection considered the distribution and expected value of the fortune Xn at a fixed time n. Here, we consider the gambler’s ruin problem, which requires the consideration of many different n at once, i.e., considers the time evolution of the process. Let Xn be simple random walk as before, for some initial fortune a and some probability p of winning each bet. Assume a is a positive integer. Furthermore, let c a be some other integer. The gambler’s ruin question is: If you repeatedly bet $1, then what is the probability that you will reach a fortune of $c before you lose all your money by reaching a fortune $0? In other words, will the random walk hit c before hitting 0? Informally, what is the probability that the gambler gets rich (i.e., has $c) before going broke? More formally, let 0 c min n min n 0 : Xn 0 : Xn 0 , c be the first hitting times of 0 and c, respectively. That is, reaches 0, while c is the first time your fortune reaches c. 0 is the first time your fortune The gambler’s ruin question is: What is P c 0 , the probability of hitting c before hitting 0? This question is not so easy to answer, because there is no limit to how long it might take until either c or 0 is hit. Hence, it is not sufficient to just compute the probabilities after 10 bets, or 20 bets, or 100 bets, or even 1,000,000 bets. Fortunately, it is possible to answer this question, as follows. Chapter 11: Advanced Topic — Stochastic Processes 619 Theorem 11.1.2 Let Xn be simple random walk, with some initial fortune a and probability p of winning each bet. Assume 0 c. Then the probability P c 0 of hitting c before 0 is given by . PROOF See Section 11.7 for the proof. Consider some applications of this result. EXAMPLE 11.1.6 Suppose you start with $5 (i.e., a (i.e., c 10). If p p 0 499, then your probability of success is given by 0 500, then your probability of success is a c 5) and your goal is to win $10 before going broke 0 500. If 1 5 0 501 0 499 1 10 1 , 0 501 0 499 which is approximately 0 495. If p by 0 501, then your probability of success is given 1 5 0 499 0 501 1 10 1 , 0 499 0 501 which is approximately 0 505. We thus see that in this case, small changes in p lead to small changes in the probability of winning at gambler’s ruin. EXAMPLE 11.1.7 Suppose now that you start with $5000 (i.e., a before going broke (i.e., c is a c of success is given by 10 000). If p 0 500, same as before. On the other hand, if p 5000) and your goal is to win $10,000 0 500, then your probability of success 0 499, then your probability 1 0 501 0 499 5000 1 0 501 0 499 10,000 1 , which is approximately 2 then your probability of success is given by 10 9, i.e., two parts in a billion! Finally, if p 0 501, 1 0 499 0 501 5000 1 0 499 0 501 10,000 1 , which is extremely close to 1. We thus see that in this case, small changes in p lead to extremely large changes in the probability of winning at gambler’s ruin. 
For example, even a tiny disadvantage on each bet can lead to a very large disadvantage in the long 620 Section 11.1: Simple Random Walk run! The reason for this is that, to get from 5000 to 10,000, many bets must be made, so small changes in p have a huge effect overall. Finally, we note that it is also possible to use the gambler’s ruin result to compute , the probability that the walk will ever hit 0 (equivalently, that you will P 0 ever lose all your money), as follows. Theorem 11.1.3 Let Xn be simple random walk, with initial fortune a probability p of winning each bet. Then the probability P 0 will ever hit 0 is given by 0 and that the walk . PROOF See Section 11.7 for the proof. 2 and p EXAMPLE 11.1.8 Suppose a 2 3. Then the probability that you will eventually lose all your money is given by q p a 1 4. Thus, starting with just $2, we see that 3/4 of the time, you will be able to bet forever without ever losing all your money. 2 3 2 1 3 On the other hand, if p 1 2, then no matter how large a is, it is certain that you will eventually lose all your money. Summary of Section 11.1 1 1 p Xn A simple random walk is a sequence Xn of random variables, with X0 P Xn 1 P Xn 1 n It follows that P Xn n k 2 n 2 p 4 2 If 0 to a c if p 1 . c, then the gambler’s ruin probability of reaching c before 0 is equal p p a 1 and n n p n k 2q n k 2 for k 1 2, otherwise to 1 n, and E Xn p p c . Xn EXERCISES 12 and probability x for the following values of n and 1 3 of winning each bet. Compute P Xn 11.1.1 Let Xn be simple random walk, with initial fortune a p x. (a) n (b) n (c) n (d) n (e) n (f) n (g) n (h 13 12 13 11 14 12 13 14 Chapter 11: Advanced Topic — Stochastic Processes 621 8 . 8 . 5 . 5 . 18 10 1000 and p 15 15 16 5 and probability 7 and probability X3 6 X3 8 . 6 X2 4 X2 5 . 2 5 of winning each bet. 1 6 of winning each bet. 2 x 20 x 20 x 20 x 20 x (i) n (j) n (k) n (l) n (m) n 11.1.2 Let Xn be simple random walk, with initial fortune a p (a) Compute P X1 (b) Compute P X1 (c) Compute P X2 (d) What is the relationship between the quantities in parts (a), (b), and (c)? Why is this so? 11.1.3 Let Xn be simple random walk, with initial fortune a p (a) Compute P X1 (b) Compute P X1 (c) Compute P X3 (d) What is the relationship between the quantities in parts (a), (b), and (c)? Why is this so? 11.1.4 Suppose a (a) Compute E Xn for n (b) How large does n need to be before E Xn 11.1.5 Let Xn be simple random walk, with initial fortune a and probability p 0 499 of winning each bet. Compute the gambler’s ruin probability P c following values of a and c. Interpret your results in words. (a) a (b) a (c) a (d) a (e) a (f) a 11.1.6 Let Xn be simple random walk, with initial fortune a of winning each bet. Compute P 0 Interpret your results in words. 11.1.7 Let Xn be simple random walk, with initial fortune a p (a) Compute P X1 (b) Compute P X1 (c) Compute P X2 (d) Compute P X2 (e) Compute P X2 (f) Compute P X1 (g) Explain why the answer to part (f) equals what it equals. 9 c 90 c 900 c 9000 c 90,000, c 900,000, c 10 and probability p 0 6. 6 . 4 . 7 . 7 X1 7 X1 6 X2 100 1000 10,000 0 1 2 10 20 100, and 1000. 1 4 of winning each bet. 100,000 1,000,000 0 4 and a
lso where p 5, and probability 6 . 4 . 7 . 0 for the , where p 0 49. 10 0? 622 Section 11.1: Simple Random Walk 2 5 of winning each bet. 11.1.8 Let Xn be simple random walk, with initial fortune a p (a) Compute E X1 . (b) Compute E X10 . (c) Compute E X100 . (d) Compute E X1000 . (e) Find the smallest value of n such that E Xn 11.1.9 Let Xn be simple random walk, with initial fortune a p (a) Compute P X1 (b) Compute P X2 (c) Compute P X3 P Xn (d) Guess the value of limn (e) Interpret part (d) in plain English. 18 38 of winning each bet (as when betting on Red in roulette). 1000 and probability 100 and probability PROBLEMS 11.1.10 Suppose you start with $10 and repeatedly bet $2 (instead of $1), having prob­ ability p of winning each time. Suppose your goal is $100, i.e., you keep on betting until you either lose all your money, or reach $100. (a) As a function of p, what is the probability that you will reach $100 before losing all your money? Be sure to justify your solution. (Hint: You may find yourself dividing both 10 and 100 by 2.) (b) Suppose p (c) Compare the probabilities in part (b) with the corresponding probabilities if you bet just $1 each time. Which is larger? (d) Repeat part (b) for the case where you bet $10 each time. Does the probability of success increase or decrease? 0 4. Compute a numerical value for the solution in part (a). CHALLENGES 11.1.11 Prove that the formula for the gambler’s ruin probability P c continuous function of p, by proving that it is continuous at p that is a 1 2. That is, prove 0 lim 1 2 p DISCUSSION TOPICS 11.1.12 Suppose you repeatedly play roulette in a real casino, betting the same amount each time, continuing forever as long as you have money to bet. Is it certain that you will eventually lose all your money? Why or why not? 11.1.13 In Problem 11.1.10, parts (c) and (d), can you explain intuitively why the probabilities change as they do, as we increase the amount we bet each time? 11.1.14 Suppose you start at a and need to reach c, where c 0. You must keep gambling until you reach either c or 0. Suppose you are playing a subfair game (i.e., a Chapter 11: Advanced Topic — Stochastic Processes 623 1 2), but you can choose how much to bet each time (i.e., you can bet $1, or $2, p or more, though of course you cannot bet more than you have). What betting amounts do you think1 will maximize your probability of success, i.e., maximize P c 0 ? (Hint: The results of Problem 11.1.10 may provide a clue.) 11.2 Markov Chains Intuitively, a Markov chain represents the random motion of some object. We shall write Xn for the position (or value) of the object at time n. There are then rules that give the probabilities for where the object will jump next. A Markov chain requires a state space S, which is the set of all places the object top, bottom , or S is the set of all 1 2 3 , or S can go. (For example, perhaps S positive integers.) A Markov chain also requires transition probabilities, which give the probabilities for where the object will jump next. Specifically, for i S, the number pi j is the probability that, if the object is at i, it will next jump to j. Thus, the collection pi j : i S of transition probabilities satisfies pi j 0 for all i S, and j j j pi j 1 j S for each i S. We also need to consider where the Markov chain starts. Often, we will simply S. More generally, we could have an initial 0 for i . In this case, we need i s for some particular state s S where P X0 i : i i set X0 distribution each i S, and 1. 
i i S To summarize, here S is the state space of all places the object can go; i represents the probability that the object starts at the point i; and pi j represents the probability that, if the object is at the point i, it will then jump to the point j on the next step. In terms of the sequence of random values X0 X1 X2 , we then have that P Xn 1 j Xn i pi j for any positive integer n and any i probability does not depend on the chain’s previous history. That is, we require S. Note that we also require that this jump j P Xn 1 j Xn i Xn 1 xn 1 X0 x0 pi j for all n and all i j x0 xn 1 S. 1For more advanced results about this, see, e.g., Theorem 7.3 of Probability and Measure, 3rd ed., by P. Billingsley (John Wiley & Sons, New York, 1995). 624 Section 11.2: Markov Chains 11.2.1 Examples of Markov Chains We present some examples of Markov chains here. EXAMPLE 11.2.1 1 2 3 consist of just three elements, and define the transition probabilities Let S 1 4, 1 3, p22 0, p12 by p11 p32 1 2. This means that, for example, if the chain is at the state 3, 1 4, and p33 then it has probability 1 4 of jumping to state 1 on the next jump, probability 1 4 of jumping to state 2 on the next jump, and probability 1 2 of remaining at state 3 on the next jump. 1 3, p23 1 3, p31 1 2, p21 1 2, p13 This Markov chain jumps around on the three points 1 2 3 in a random and interesting way. For example, if it starts at the point 1, then it might jump to 2 or to 3 (with probability 1 2 each). If it jumps to (say) 3, then on the next step it might jump to 1 or 2 (probability 1 4 each) or 3 (probability 1 2). It continues making such random jumps forever. Note that we can also write the transition probabilities pi j in matrix form, as pi so that p31 matrix representation is convenient sometimes. 1 4, etc.). The matrix pi j is then called a stochastic matrix. This EXAMPLE 11.2.2 Again, let S form, as 1 2 3 . This time define the transition probabilities pi j in matrix pi 01 0 01 0 98 . This also defines a Markov chain on S. For example, from the state 3, there is proba­ bility 0.01 of jumping to state 1, probability 0.01 of jumping to state 2, and probability 0.98 of staying in state 3. EXAMPLE 11.2.3 Let S form by bedroom, kitchen, den . Define the transition probabilities pi j in matrix pi j 1 4 0 1 4 0 0 01 0 01 0 98 1 2 1 . This defines a Markov chain on S. For example, from the bedroom, the chain has probability 1 4 of staying in the bedroom, probability 1 4 of jumping to the kitchen, and probability 1 2 of jumping to the den. Chapter 11: Advanced Topic — Stochastic Processes 625 EXAMPLE 11.2.4 This time let S form, as 1 2 3 4 , and define the transition probabilities pi j in matrix pi . This defines a Markov chain on S. For example, from the state 4, it has probability 0 4 of jumping to the state 1, but probability 0 of jumping to the state 2. EXAMPLE 11.2.5 This time, let S matrix form, as 1 2 3 4 5 6 7 , and define the transition probabilities pi j in pi j 1 1 2 0 0 1 10 10 0 1 5 1 0 0 1 0 0 . This defines a (complicated!) Markov chain on S. 0 1 2 1 3 for all i EXAMPLE 11.2.6 Random Walk on the Circle Let S d pii around the circle. That is, pi j pd 1 0 p0 d 1 1 S, and also pi j 1 3. Otherwise, pi j and define the transition probabilities by saying that 1 3 whenever i and j are “next to” each other 1. Also, 1 3 whenever j 1, or j i, or j i i 0. 
If we think of the d elements of S as arranged in a circle, then our object, at each step, either stays where it is, or moves one step clockwise, or moves one step counter­ clockwise — each with probability 1 3. (Note in particular that it can go around the “corner” by jumping from d 1, with probability 1 3.) 1 to 0, or from 0 to d EXAMPLE 11.2.7 Ehrenfest’s Urn Consider two urns, urn #1 and urn #2, where d balls are divided between the two urns. Suppose at each step, we choose one ball uniformly at random from among the d balls and switch it to the opposite urn. We let Xn be the number of balls in urn #1 at time n. Thus, there are d Xn balls in urn #2 at time n. Here, the state space is S 0 1 2 d because these are all the possible numbers of balls in urn #1 at any time n. Also, if there are i balls in urn #1 at some time, then there is probability i n that we next choose one of those i balls, in which case the number of balls in urn #1 goes down to i 1. Hence, Similarly, pi i 1 i d. pi i 1 d i d 626 Section 11.2: Markov Chains i because there is probability d balls in urn #2. Thus, this Markov chain moves randomly among the possible numbers 0 1 i d that we will instead choose one of the d d of balls in urn #1 at each time. One might expect that, if d is large and the Markov chain is run for a long time, there would most likely be approximately d 2 balls in urn #1. (We shall consider such questions in Section 11.2.4.) The above examples should convince you that Markov chains on finite state spaces come in all shapes and sizes. Markov chains on infinite state spaces are also important. Indeed, we have already seen one such class of Markov chains. EXAMPLE 11.2.8 Simple Random Walk Let S cannot write the transition probabilities pi j 1 0 1 2 2 in matrix form. be the set of all integers. Then S is infinite, so we 1 p for each i S, and let X0 a. Fix a real number p with 0 1, and let pi i 1 p Fix a and pi i 1 1. Thus, this Markov chain begins at the point a (with probability 1) and at each step either increases by 1 p). It is easily seen that (with probability p) or decreases by 1 (with probability 1 this Markov chain corresponds precisely to the random walk (i.e., repeated gambling) model of Section 11.1.2. Z, with pi j 0 if j p i Finally, we note that in a group, you can create your own Markov chain, as follows (try it — it’s fun!). EXAMPLE 11.2.9 Form a group of between 5 and 50 people. Each group member should secretly pick out two other people from the group, an “A person” and “B person.” Also, each group member should have a coin. Take any object, such as a ball, or a pen, or a stuffed frog. Give the object to one group member to start. This person should then immediately ip the coin. If the coin comes up heads, the group member gives (or throws!) the object to his or her A person. If it comes up tails, the object goes to his or her B person. The person receiving the object should then immediately ip the coin and continue the process. (Saying your name when you receive the object is a great way for everyone to meet each other!) Continue this process for a
large number of turns. What patterns do you observe? Does everyone eventually receive the object? With what frequency? How long does it take the object to return to where it started? Make as many interesting observations as you can; some of them will be related to the topics that follow. 11.2.2 Computing with Markov Chains Suppose a Markov chain Xn has transition probabilities pi j and initial distribution i ? We have the i for all states i. What about P X1 i i . Then P X0 following result. Chapter 11: Advanced Topic — Stochastic Processes 627 Theorem 11.2.1 Consider a Markov chain Xn with state space S, transition prob­ abilities pi j , and initial distribution i . Then for any i S, P X1 i k pki . k S PROOF From the law of total probability, P X1 i P X0 k X1 i . k S But P X0 follows. k X1 i P X0 k P X1 i X0 k k pki and the result Consider an example of this. EXAMPLE 11.2.10 Again, let S 1 2 3 , and pi 01 0 01 0 98 . Suppose that P X0 1 1 7, P X0 2 2 7, and P X0 3 4 7. Then P X1 3 k pk3 98 0 73. k S Thus, about 73% of the time, this chain will be in state 3 after one step. To proceed, let us write Pi A P A X0 i for the probability of the event A assuming that the chain starts in the state i, that is, assuming that is the probability that, if the chain starts in state i and is run for n steps, it will end up in state j. Can we compute this? i. We then see that Pi Xn 0 for j 1 and j j i For n Pi X0 j 0, we must have X0 j . 0 if i i. Hence, Pi X0 j 1 if i j, while For n 1, we see that Pi X1 j pi j . That is, the probability that we will be at the state j after one step is given by the transition probability pi j . What about for n 2? If we start at i and end up at j after 2 steps, then we have to be at some state after 1 step. Let k be this state. Then we see the following. Theorem 11.2.2 We have Pi X1 k X2 j pi k pk j . PROOF If we start at i, then the probability of jumping first to k is equal to pi k. Given that we have jumped first to k, the probability of then jumping to j is given by 628 pk j . Hence, Pi X1 k X2 j Section 11.2: Markov Chains k X2 k X0 i j X0 i P X2 k X1 j X0 i P X1 P X1 pi k pk j . Using this, we obtain the following. Theorem 11.2.3 We have Pi X2 j k S pi k pk j PROOF By the law of total probability, Pi X2 j Pi X1 k X2 j , k S so the result follows from Theorem 11.2.2. EXAMPLE 11.2.11 Consider again the chain of Example 11.2.1, with S 1 2 3 and pi . Then P1 X2 3 p1k pk3 p11 p13 p12 p23 p13 p33 12. By induction (see Problem 11.2.18), we obtain the following. Theorem 11.2.4 We have Pi Xn j pii1 pi1i2 pi2i3 pin 2in 1 pin 1 j i1 i2 in 1 S PROOF See Problem 11.2.18. Theorem 11.2.4 thus gives a complete formula for the probability, starting at a state i at time 0, that the chain will be at some other state j at time n. We see from Theorem 11.2.4 that, once we know the transition probabilities pi j for all i S, S and all positive then we can compute the values of Pi Xn integers n. (The computations get pretty messy, though!) The quantities Pi Xn j are sometimes called the higher­order transition probabilities. for all i j j j Consider an application of this. Chapter 11: Advanced Topic — Stochastic Processes 629 EXAMPLE 11.2.12 Consider once again the chain with S 1 2 3 and pi . Then P1 X3 3 p1k pk p 3 p11 p11 p13 k S S p11 p12 p23 p11 p13 p33 p12 p21 p13 p12 p22 p23 p12 p23 p33 p13 p31 p13 0 0 1 2 p13 p32 p23 0 1 2 1 3 p13 p33 p33 31 72. 
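Such path sums are easy to automate. The short script below computes Pi(Xn = j) by summing the products of transition probabilities over all choices of intermediate states, in the spirit of Theorem 11.2.4; the 3 x 3 transition matrix used here is a made-up illustration, not the matrix of the preceding examples.

from itertools import product

# A hypothetical three-state transition matrix (each row sums to 1).
P = {1: {1: 0.0,  2: 0.5,  3: 0.5},
     2: {1: 1/3,  2: 1/3,  3: 1/3},
     3: {1: 0.25, 2: 0.25, 3: 0.5}}
S = list(P)

def n_step_prob(P, i, j, n):
    # P_i(X_n = j): sum of p_{i,i1} p_{i1,i2} ... p_{i_{n-1},j} over all
    # choices of the intermediate states i1, ..., i_{n-1}.
    total = 0.0
    for path in product(S, repeat=n - 1):
        prob, state = 1.0, i
        for k in path + (j,):
            prob *= P[state][k]
            state = k
        total += prob
    return total

print(n_step_prob(P, 1, 3, 2))   # two-step probability P_1(X_2 = 3)
print(n_step_prob(P, 1, 3, 3))   # three-step probability P_1(X_3 = 3)

The same numbers can also be obtained by raising the transition matrix to the nth power, which is the viewpoint taken next.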
i i P X0 Finally, we note that if we write A for the matrix pi j , write 0 for the row vec­ , then Theo­ tor , and write 1 for the row vector P X1 rem 11.2.1 can be written succinctly using matrix multiplication as 0 A That is, the (row) vector of probabilities for the chain after one step 1 is equal to the (row) 0, multiplied by the matrix A of vector of probabilities for the chain after zero steps , then transition probabilities. In fact, if we write n for the row vector P Xn 0 An, proceeding by induction, we see that where An is the nth power of the matrix A. In this context, Theorem 11.2.4 has a par­ j entry of the ticularly nice interpretation. It says that Pi Xn matrix An, i.e., the nth power of the matrix A. n A for each n. Therefore, n is equal to the i n 1 1 i j i 11.2.3 Stationary Distributions Suppose we have Markov chain transition probabilities pi j on a state space S. Let i : i S be a probability distribution on S, so that i 0 for all i, and 1 We have the following definition. i S i Definition 11.2.1 The distribution with transition probabilities pi j on a state space S, if j i : i S. S is stationary for a Markov chain j for all i pi j i S The reason for the terminology “stationary” is that, if the chain begins with those probabilities, then it will always have those same probabilities, as the following theo­ rem and corollary show. S is a stationary distribution for a Markov Theorem 11.2.5 Suppose chain with transition probabilities pi j on a state space S. Suppose that for some integer n, we have P Xn i for all i i for all i i S. Then we also have P Xn 1 : i S. i i 630 Section 11.2: Markov Chains PROOF If i is stationary, then we compute that P Xn 1 j P Xn i Xn 1 j i S i S P Xn i P Xn 1 j Xn i i pi j j . i S By induction, we obtain the following corollary. S is a stationary distribution for a Markov Corollary 11.2.1 Suppose chain with transition probabilities pi j on a state space S. Suppose that for some integer n, we have P Xn S. Then we also have P Xm i for all i i : i i i i for all i S and all integers m n. The above theorem and corollary say that, once a Markov chain is in its stationary distribution, it will remain in its stationary distribution forevermore. EXAMPLE 11.2.13 Consider the Markov chain with S 1 2 3 , and pi . No matter where this Markov chain is, it always jumps with the same probabilities, i.e., to state 1 with probability 1 2, to state 2 with probability 1 4, or to state 3 with probability 1 4. Indeed, if we set 1 S. Hence, for all i j 1 2, 2 1 4, and 3 1 4, then we see that pi j j i pi Thus, will stay in the distribution i forever. i is a stationary distribution. Hence, once in the distribution i , the chain EXAMPLE 11.2.14 Consider a Markov chain with S 0 1 and pi j 0 1 0 9 0 6 0 4 . If this chain had a stationary distribution i , then we must have that , 1. The first equation gives 1 0 6 with the second equation. In addition, we require that 3 2 2 5 3 2 0 0 9 , so 1 2 5. Then 1 1, so that 0 0 3 2 0 0 . This is also consistent 1, i.e., that 0 1 3 5. Chapter 11: Advanced Topic — Stochastic Processes 631 We then check that the settings 0 2 5 and 1 3 5 satisfy the above equa­ tions. Hence, i is indeed a stationary distribution for this Markov chain. EXAMPLE 11.2.15 Consider next the Markov chain with S 1 2 3 , and pi . We see that this Markov chain has the property that, in addition to having 1, for all i, it also has matrix pi j doubly stochastic.) 1, for all j. 
That is, not only do the rows of the (Such a matrix is sometimes called sum to 1, but so do the columns. j S pi j i S pi j Let compute that 1 2 3 1 3, so that i is the uniform distribution on S. Then we i pi j 1 3 pi j 1 3 pi Because this is true for all j , we see that chain. is a stationary distribution for this Markov i EXAMPLE 11.2.16 Consider the Markov chain with S 1 2 3 , and pi . Does this Markov chain have a stationary distribution? Well, if it had a stationary distribution i , then the following equations would have to be satisfied: 1 2 3 1 1 The first equation gives 1 4 so that 3 3 2 1 4 2 2, 0 1 4 3 4 3, 3. 2 3 2. The second equation then gives , But we also require 3 11. Then 1 1 2 3 1, i.e., 2 3 2 2 2 2 1, so that 2 11, and 3 It is then easily checked that the distribution given by 1 3 11, and 6 11 satisfies the preceding equations, so it is indeed a stationary distribution for 6 11. 2 11, 2 2 3 this Markov chain. 632 Section 11.2: Markov Chains EXAMPLE 11.2.17 Consider again random walk on the circle, as in Example 11.2.6. We observe that for j, the state one any state j, there are precisely three states i (namely, the state i 1 3. Hence, clockwise from j, and the state one counterclockwise from j ) with pi j i S pi j It then follows, just as in Example 11.2.15, that the uniform distribution, given by 1 That is, the transition matrix pi j is again doubly stochastic. i 1 d for i 0 1 d 1, is a stationary distribution for this Markov chain. EXAMPLE 11.2.18 For Ehrenfest’s urn (see Example 11.2.7), it is not obvious what might be a stationary distribution. However, a possible solution emerges by thinking about each ball individ­ ually. Indeed, any given ball usually stays still but occasionally gets ipped from one urn to the other. So it seems reasonable that in stationarity, it should be equally likely to be in either urn, i.e., have probability 1/2 of being in urn #1. If this is so, then the total number of balls in urn #1 would have the distribution Binomial n 1 2 , since there would be n balls, each having probability 1 2 of being in urn #1. 2d for i 0 1 d. We then compute that if To test this, we set j 1, then d i 1 d i i pi 2d 1 2d 1 1 j d d j 1 2d . d j 1 1 2d j 1 d Next, we use the identity known as Pascal’s triangle, which says that Hence, we conclude that d j 1 1 i pi j i S d 1 j d j 1 2d d j . j . With minor modifications (see Problem 11.2.19), the preceding argument works for S. 0 and j d as well. We therefore conclude that for all j i pi j j i S j Hence, is a stationary distribution. i One easy way to check for stationarity is the following. Definition 11.2.2 A Markov chain is said to be reversible with respect to a distrib­ i pi j ution S, we have if, for all i j p ji . j i Theorem 11.2.6 If a Markov chain is reversible with respect to stationary distribution for the chain. i , then is a i Chapter 11: Advanced Topic — Stochastic Processes 633 PROOF We compute, using reversibility, that for any j S, i pi j j p ji i S i S j i S p ji j 1 j . Hence, i is a stationarity distribution. EXAMPLE 11.2.19 Suppose S 1 2 3 4 5 , and the transition probabilities are
given by pi . It is not immediately clear what stationary distribution this chain may possess. Fur­ thermore, to compute directly as in Example 11.2.16 would be quite messy. On the other hand, we observe that for 1 C2i for some C 2 pi 1 i . Hence, if we set i 4, we always have pi i 1 i 0, then we will have i pi i 1 C2i pi i 1 C2i 2 pi 1 i , i 1 pi 1 i C2i 1 pi 1 i . i 1 pi 1 i for each i. 0 if i and j differ by at least 2. It follows that while Hence, i pi i 1 Furthermore, pi j j i pi j and so j p ji is a i i for each i stationary distribution for the chain. S. Hence, the chain is reversible with respect to Finally, we solve for C. We need 5 i 1 2i i S 2i 1 63. Thus, 1 i S i 1 11.2.4 Markov Chain Limit Theorem 1 Hence, we must have C i 2i 63 for i S. Suppose now that Xn is a Markov chain, which has a stationary distribution have already seen that, if P Xn i for all i for some n, then also P Xm i i . We i i for all i for all m n. Suppose now that it is not the case that P Xn i expect that, if the chain is run for a long time (i.e., n being at a particular state i chosen. That is, one might expect that S might converge to i for all i. One might still ), then the probability of i , regardless of the initial state lim n P Xn i for each i S regardless of the initial distribution i , i . (11.2.1) This is not true in complete generality, as the following two examples show. How­ ever, we shall see in Theorem 11.2.8 that this is indeed true for most Markov chains. 634 Section 11.2: Markov Chains EXAMPLE 11.2.20 Suppose that S 1 2 and that the transition probabilities are given by pi j 1 0 0 1 . That is, this Markov chain never moves at all! Suppose also that always have X0 1. 1 1, i.e., that we In this case, any distribution is stationary for this chain. In particular, we can take 1 2 as a stationary distribution. On the other hand, we clearly have 1 2, we do not have 1 2, and 1 2 1 1 P Xn 1 for all n. Because i i in this case. 1 P1 Xn limn We shall see later that this Markov chain is not “irreducible,” which is the obstacle to convergence. EXAMPLE 11.2.21 Suppose again that S by 1 2 , but that this time the transition probabilities are given pi j 0 1 1 0 . 1 1, i.e., that we always have X0 1 2 That is, this Markov chain always moves from 1 to 2, and from 2 to 1. Suppose again that 1. We may again take 1 2 as a stationary distribution (in fact, this time the stationary distribution is unique). On the other hand, this time we clearly have P1 Xn 0 for n odd. Hence, again we do not have limn 1 for n even, and P1 Xn 1 2 P1 Xn 1 1 1 1 We shall see that here the obstacle to convergence is that the Markov chain is “pe­ riodic,” with period 2. In light of these examples, we make some definitions. Definition 11.2.3 A Markov chain is irreducible if it is possible for the chain to move from any state to any other state. Equivalently, the Markov chain is irreducible if for any i S, there is a positive integer n with Pi Xn 0. j j Thus, the Markov chain of Example 11.2.20 is not irreducible because it is not possible to get from state 1 to state 2. Indeed, in that case, P1 Xn 2 0 for all n. EXAMPLE 11.2.22 Consider the Markov chain with S 1 2 3 , and pi . For this chain, it is not possible to get from state 1 to state 3 in one step. On the other hand, it is possible to get from state 1 to state 2, and then from state 2 to state 3. Hence, this chain is still irreducible. Chapter 11: Advanced Topic — Stochastic Processes 635 EXAMPLE 11.2.23 Consider the Markov chain with S 1 2 3 , and pi . 
For this chain, it is not possible to get from state 1 to state 3 in one step. Furthermore, it is not possible to get from state 2 to state 3, either. In fact, there is no way to ever get from state 1 to state 3, in any number of steps. Hence, this chain is not irreducible. Clearly, if a Markov chain is not irreducible, then the Markov chain convergence (11.2.1) will not always hold, because it will be impossible to ever get to certain states of the chain. We also need the following definition. Definition 11.2.4 Given Markov chain transitions pi j on a state space S, and a state i S, the period of i is the greatest common divisor (g.c.d.) of the set n i 0 where p n ii 1 : p n ii P Xn i X0 That is, the period of i is the g.c.d. of the times at which it is possible to travel from i to i. For example, the period of i is 2 if it is only possible to travel from i to i in an even number of steps. (Such was the case for Example 11.2.21.) On the other hand, if pii 0, then clearly the period of i is 1. Clearly, if the period of some state is greater than 1, then again (11.2.1) will not always hold, because the chain will be able to reach certain states at certain times only. This prompts the following definition. Definition 11.2.5 A Markov chain is aperiodic if the period of each state is equal to 1. EXAMPLE 11.2.24 Consider the Markov chain with S 1 2 3 , and pi . For this chain, from state 1 it is possible only to get to state 2. And from state 2 it is possible only to get to state 3. Then from state 3 it is possible only to get to state 1. Hence, it is possible only to return to state 1 after an integer multiple of 3 steps. Hence, state 1 (and, indeed, all three states) has period equal to 3, and the chain is not aperiodic. EXAMPLE 11.2.25 Consider the Markov chain with S 1 2 3 , and pi . 636 Section 11.2: Markov Chains For this chain, from state 1 it is possible only to get to state 2. And from state 2 it is possible only to get to state 3. However, from state 3 it is possible to get to either state 1 or state 3. Hence, it is possible to return to state 1 after either 3 or 4 steps. Because the g.c.d. of 3 and 4 is 1, we conclude that the period of state 1 (and, indeed, of all three states) is equal to 1, and the chain is indeed aperiodic. We note the following simple fact. Theorem 11.2.7 If a Markov chain has pi j irreducible and aperiodic. 0 for all i j S, then the chain is PROOF If pi j the Markov chain must be irreducible. 0 for all i j S, then Pi X1 j 0 for all i j S. Hence, 0 contains the value Also, if pi j 1 (and, indeed, all positive integers n). Hence, its greatest common divisor must n be 1. Therefore, each state i has period 1, so the chain is aperiodic. S, then the set n 0 for all i j 1 : p n ii In terms of the preceding definitions, we have the following very important theorem about Markov chain convergence. Theorem 11.2.8 Suppose a Markov chain is irreducible and aperiodic and has a stationary distribution i , we have P Xn limn i . Then regardless of the initial distribution i for all states i. i PROOF For a proof of this, see more advanced probability books, e.g., pages 92–93 of A First Look at Rigorous Probability Theory, 2nd ed., by J. S. Rosenthal (World Scientific Publishing, Singapore, 2006). Theorem 11.2.8 shows that stationary distributions are even more important. Not only does a Markov chain remain in a stationary distribution once it is there, but for most chains (irreducible and aperiodic ones), the probabilities converge to the station­ ary distribution in any case. 
Hence, the stationary distribution provides fundamental information about the long­term behavior of the Markov chain. EXAMPLE 11.2.26 Consider again the Markov chain with S 1 2 3 , and pi . We have already seen that if we set is a stationary distribution. Furthermore, we see that pi j Theorem 11.2.7 the Markov chain must be irreducible and aperiodic. P Xn We conclude that limn 1 4, and 3 0 for all i 1 2, 1 2 i i for all states i. For example, limn 1 4, then i S, so by j 1 2. (Also, this limit does not depend on the initial distribution, so, for 1 2 and limn In fact, for this example we will have P Xn P1 Xn 1 P2 Xn 1 1 2, as well.) i i for all i provided n 1. 1 P Xn example, limn Chapter 11: Advanced Topic — Stochastic Processes 637 EXAMPLE 11.2.27 Consider again the Markov chain of Example 11.2.14, with S 0 1 and pi j 0 1 0 9 0 6 0 4 . We have already seen that this Markov chain has a stationary distribution, given by 0 2 5 and 1 3 5. Furthermore, because pi j 0 for all i j 100, then we will have P X100 and aperiodic. Therefore, we conclude that limn n this conclusion does not depend on the initial distribution, so, e.g., limn i as well. i P Xn 2 5, and P X100 P1 Xn limn 0 i S, this Markov chain is irreducible i . So, if (say) 3 5. Once again, i 1 P0 Xn EXAMPLE 11.2.28 Consider again the Markov chain of Example 11.2.16, with S 1 2 3 , and pi . We have already seen that this chain has a stationary distribution 6 11. 2 11, 3 11, and 3 2 i given by 1 Now, in this case, we do not have pi j 0 and p21 0 for all i 0, so by Theorem 11.2.3, we have other hand, p32 j S because p31 0. On the P3 X2 1 p3k pk1 p32 p21 0. k S Hence, the chain is still irreducible. Similarly, we have P3 X2 3 0. Therefore, because the g.c.d. of 2 and 3 is 1, we see that the g.c.d. of the set of n with P3 Xn 0 is also 1. Hence, the chain is still aperiodic. 0, and P3 X3 p32 p21 p13 p32 p23 3 3 Because the chain is irreducible and aperiodic, it follows from Theorem 11.2.8 that 2 11 i , for all states i. Hence, limn P Xn 1 3 11 and limn 3 P Xn 2 11, P X500 6 11. Thus, if (say) n 3 11, and P X500 2 1 P Xn P Xn limn limn 500, then we expect that P X500 3 6 11. i 2 Summary of Section 11.2 such that P Xn 1 i such that P X0 A Markov chain is a sequence Xn of random variables, having transition prob­ j Xn abilities pi j pi j , and having an initial i . distribution There are many different examples of Markov chains. All probabilities for all the Xn can be computed in terms of i pi j A distribution i and pi j . j for all j is stationary for the chain if S. i i i i S 638 Section 11.2: Markov Chains If the Markov chain is irreducible and aperiodic, and limn i for all i P Xn S. i is stationary, then i EXERCISES 11.2.1 Consider a Markov chain with S and 1 2 3 , 1 0 7, 2 0 1, 3 0 2, pi . Compute the following quantities. (a) P X0 (b) P X0 (c) P X0 (d) P X1 (e) P X3 (f) P X1 (g) P X1 11.2.2 Consider a Markov chain with S
1 2 3 2 X0 2 X2 2 X0 2 1 1 2 high, low , high 1 3, low 2 3, and pi j 1 4 3 4 1 6 5 6 . Compute the following quantities. (a) P X0 (b) P X0 (c) P X1 (d) P X3 (e) P X1 11.2.3 Consider a Markov chain with S high low high X0 high X2 high high low 0 1 , and pi a) Compute Pi X2 (b) Compute P0 X3 1 . 11.2.4 Consider again the Markov chain with S for all four combinations of i 0 1 and j S. pi a) Compute a stationary distribution (b) Compute limn (c) Compute limn P0 Xn P1 Xn 0 . 0 . for this chain. i Chapter 11: Advanced Topic — Stochastic Processes 639 11.2.5 Consider the Markov chain of Example 11.2.5, with S and pi j 1 1 2 0 0 1 10 10 . Compute the following quantities. 1 (a) P2 X1 2 (b) P2 X1 3 (c) P2 X1 1 (d) P2 X2 2 (e) P2 X2 3 (f) P2 X2 3 (g) P2 X3 1 (h) P2 X3 7 (i) P2 X1 7 (j) P2 X2 (k) P2 X3 7 (l) maxn P2 Xn steps, for any n) (m) Is this Markov chain irreducible? 11.2.6 For each of the following transition probability matrices, determine (with ex­ planation) whether it is irreducible, and whether it is aperiodic. (a) 7 (i.e., the largest probability of going from state 2 to state 7 in n (b) (c) (d) (e) pi pi pi j pi j pi 640 (f) Section 11.2: Markov Chains pi 11.2.7 Compute a stationary distribution for the Markov chain of Example 11.2.4. (Hint: Do not forget Example 11.2.15.) 11.2.8 Show that the random walk on the circle process (see Example 11.2.6) is (a) irreducible. (b) aperiodic. (c) reversible with respect to its stationary distribution. 11.2.9 Show that the Ehrenfest’s Urn process (see Example 11.2.7) is (a) irreducible. (b) not aperiodic. (c) reversible with respect to its stationary distribution. 11.2.10 Consider the Markov chain with S 1 2 3 , and pi a) Determine (with explanation) whether or not the chain is irreducible. (b) Determine (with explanation) whether or not the chain is aperiodic. (c) Compute a stationary distribution for the chain. (d) Compute (with explanation) a good approximation to P1 X500 11.2.11 Repeat all four parts of Exercise 11.2.10 if S 1 2 3 and 2 . pi 11.2.12 Consider a Markov chain with S 1 2 3 and pi Hint: Do not forget (a) Is this Markov chain irreducible and aperiodic? Explain. Theorem 11.2.7.) (b) Compute P1 X1 (c) Compute P1 X2 (d) Compute P1 X3 (e) Compute limn 11.2.13 For the Markov chain of the previous exercise, compute P1 X1 11.2.14 Consider a Markov chain with S 3 . 3 . 3 . P1 Xn 1 2 3 and 3 . (Hint: find a stationary distribution for the chain.) X2 5 . pi . Chapter 11: Advanced Topic — Stochastic Processes 641 (a) Compute the period of each state. (b) Is this Markov chain aperiodic? Explain. 11.2.15 Consider a Markov chain with S 1 2 3 and pi a) Is this Markov chain irreducible? Explain. (b) Is this Markov chain aperiodic? Explain. PROBLEMS 11.2.16 Consider a Markov chain with S 1 2 3 4 5 , and pi for this chain. (Hint: Use reversibility, as in Compute a stationary distribution Example 11.2.19.) 11.2.17 Suppose 100 lily pads are arranged in a circle, numbered 0 1 99 (with pad 99 next to pad 0). Suppose a frog begins at pad 0 and each second either jumps one pad clockwise, or jumps one pad counterclockwise, or stays where it is — each with probability 1 3. After doing this for a month, what is the approximate probability that the frog will be at pad 55? (Hint: The frog is doing random walk on the circle, as in Example 11.2.6. Also, the results of Example 11.2.17 and Theorem 11.2.8 may help.) 11.2.18 Prove Theorem 11.2.4. (Hint: Proceed as in the proof of Theorem 11.2.3, and use induction.) 
11.2.19 In Example 11.2.18, prove that j 0 and when j when j i pi j i S d. DISCUSSION TOPICS 11.2.20 With a group, create the “human Markov chain” of Example 11.2.9. Make as many observations as you can about the long­term behavior of the resulting Markov chain. 11.3 Markov Chain Monte Carlo In Section 4.5, we saw that it is possible to estimate various quantities (such as prop­ erties of real objects through experimentation, or the value of complicated sums or integrals) by using Monte Carlo techniques, namely, by generating appropriate random 642 Section 11.3: Markov Chain Monte Carlo variables on a computer. Furthermore, we have seen in Section 2.10 that it is quite easy to generate random variables having certain special distributions. The Monte Carlo method was used several times in Chapters 6, 7, 9, and 10 to assist in the implementa­ tion of various statistical methods. However, for many (in fact, most!) probability distributions, there is no simple, direct way to simulate (on a computer) random variables having such a distribution. We illustrate this with an example. EXAMPLE 11.3.1 Let Z be a random variable taking values on the set of all integers, with P Z j C j 1 2 4e 3 j cos2 j (11.3.1) for j Now suppose that we want to compute the quantity A 1 0 1 2 3 , where C 2 1 j j E Z Well, if we could generate i.i.d. random variables Y1 Y2 given by (11.3.1), for very large M, then we could estimate A by A A 1 M M i 1 Yi 20 2. 1 2 4e 3 j cos2 j 20 2 . YM with distribution Then A would be a Monte Carlo estimate of A. The problem, of course, is that it is not easy to generate random variables Yi with this distribution. In fact, it is not even easy to compute the value of C. Surprisingly, the difficulties described in Example 11.3.1 can sometimes be solved using Markov chains. We illustrate this idea as follows. EXAMPLE 11.3.2 In the context of Example 11.3.1, suppose we could find a Markov chain on the state of all integers, which was irreducible and aperi­ space S 1 2 4e 3 j cos2 j C j odic and which had a stationary distribution given by for j 1 0 1 2 2 S j If we did, then we could run the Markov chain for a long time N , to get random X N . For large enough N , by Theorem 11.2.8, we would have values X0 X1 X2 P X N j j C j 1 2 4e 3 j cos2 j . Hence, if we set Y1 j approximately equal to (11.3.1), for all integers j. That is, the value of X N would be approximately as good as a true random variable Y1 with this distribution. X N , then we would have P Y1 Once the value of Y1 was generated, then we could repeat the process by again running the Markov chain, this time to generate new random values 0 X [2] X [2] 1 X [2] 2 X [2] N (say). We would then have P X [2] N j j C j 1 2 4e 3 j cos2 j . Chapter 11: Advanced Topic — Stochastic Processes 643 X [2] Hence, if we set Y2 (11.3.1), for all integers j. N , then we would have P Y2 j approximately equal to Continuing in this way, we could generate values Y1 Y2 Y3 YM , such that these are approximately i.i.d. from the distribution given by (11.3.1). We could then, as before, estimate A by A A 1 M M i 1 Yi 20 2. This time, the approximation has two sources of error. First, there is Monte Carlo error because M might not be large enough. Second, there is Markov chain error, because N might not be large enough. However, if M and N are both very large, then A will be a good approximation to A. We summarize the method of the preceding example in the following theorem. 
Theorem 11.3.1 (The Markov chain Monte Carlo method) Suppose we wish to estimate the expected value A S, with j for j j M, we can generate values P Z 1 X [i] X [i] X [i] 0 X [i] N from some Markov chain that is irreducible, aperiodic, and j as a stationary distribution. Let has S. Suppose for i where P Z E h Z 0 for i] N . If M and N are sufficiently large, then A A. It is somewhat inefficient to run M different Markov chains. Instead, practitioners often just run a single Markov chain, and average over the different values of the chain. For an irreducible Markov chain run long enough, this will again converge to the right answer, as the following theorem states. Theorem 11.3.2 (The single­chain Markov chain Monte Carlo method) Suppose we wish to estimate the expected value A for j X0 X1 X2 has where P Z E h Z j S. Suppose we can generate values X N from some Markov chain that is irreducible, aperiodic, and j as a stationary distribution. For some integer B S, with P Z 0 for j 0, let Xi . If N B is sufficiently large, then A A. Here, B is the burn­in time, designed to remove the inuence of the chain’s starting value X0. The best choice of B remains controversial among statisticians. However, if the starting value X0 is “reasonable,” then it is okay to take B 0, provided that N is sufficiently large. This is what was done, for instance, in Example 7.3.2. 644 Section 11.3: Markov Chain Monte Carlo These theorems indicate that, if we can construct a Markov chain that has i as a stationary distribution, then we can use that Markov chain to estimate quantities associated with i . This is a very helpful trick, and it has made the Markov chain Monte Carlo method into one of the most popular techniques in the entire subject of computational statistics. However, for this technique to be useful, we need to be able to construct a Markov i as a stationary distribution. This sounds like a difficult problem! i were very simple, then we would not need to use Markov chain Monte is complicated, then how can we possibly construct a Markov chain that has Indeed, if Carlo at all. But if chain that has that particular stationary distribution? i Remarkably, this problem turns out to be much easier to solve than one might expect. We now discuss one of the best solutions, the Metropolis–Hastings algorithm. 11.3.1 The Metropolis–Hastings Algorithm Suppose we are given a probability distribution construct a Markov chain on S that has i as a stationary distribution? i on a state space S. How can we One answer is given by the Metropolis–Hastings algorithm. It designs a Markov chain that proceeds in two stages. In the first stage, a new point is proposed from some proposal distribution. In the second stage, the proposed point is either accepted or rejected. If the proposed point is accepted, then the Markov chain moves there. If it is rejected, then the Markov chain stays where it is. By choosing the probability of accepting to be just right, we end up creating a Markov chain that has i as a stationary distrib
ution. The details of the algorithm are as follows. We start with a state space S, and i on S. We then choose some (simple) Markov chain S called the proposal distribution. Thus, we : i j S. However, we do not assume 1 for each i j S qi j is a stationary distribution for the chain qi j ; indeed, the chain qi j might a probability distribution transition probabilities qi j require that qi j that i not even have a stationary distribution. 0, and Given Xn i, the Metropolis–Hastings algorithm computes the value Xn 1 as follows. 1. Choose Yn 1 j according to the Markov chain qi j . 2. Set i j min 1 j q ji i qi j (the acceptance probability). 3. With probability i j , let Xn 1 Otherwise, with probability 1 proposal Yn 1). Yn 1 i j , let Xn 1 j (i.e., accepting the proposal Yn 1). i (i.e., rejecting the Xn The reason for this unusual algorithm is given by the following theorem. Theorem 11.3.3 The preceding Metropolis–Hastings algorithm results in a Markov chain X0 X1 X2 i as a stationary distribution. which has Chapter 11: Advanced Topic — Stochastic Processes 645 PROOF See Section 11.7 for the proof. We consider some applications of this algorithm. EXAMPLE 11.3.3 As in Example 11.3.1, suppose S 2 1 0 1 2 and j C j 1 2 4e 3 j cos2 j , for j S. We shall construct a Markov chain having i as a stationary distribution. We first need to choose some simple Markov chain qi j . We let qi j be simple 0 1 2, so that qi j 1, and qi j 1 2 if j 1 or j i i random walk with p otherwise. We then compute that if j i 1 or j i 1, then i j min 1 min 1 q ji qi j j i j min 1 i 1 2 4e 3 j cos2 j 1 2 4e3i cos2 4e 3 j cos2 j 1 2 4e3i cos2 i . (11.3.2) Note that C has cancelled out, so that always be the case.) Hence, we see that for a computer to calculate. i j does not depend on C. (In fact, this will i j , while somewhat messy, is still very easy Given Xn i, the Metropolis–Hastings algorithm computes the value Xn 1 as follows. 1. Let Yn 1 Xn 1 or Yn 1 Xn 1, with probability 1 2 each. 2. Let j Yn 1, and compute i j as in (11.3.2). 3. With probability i j , let Xn 1 Xn let Xn 1 i. Yn 1 j. Otherwise, with probability 1 i j , These steps can all be easily performed on a computer. If we repeat this for n j N 1 for some large number N of iterations, then we will obtain a random 0 1 2 variable X N , where P X N EXAMPLE 11.3.4 Again, let S Let the proposal distribution qi j correspond to a simple random walk with p so that Yn 1 1 with probability 1 4, and Yn 1 1 2 4e 3 j cos2 j S. 1 4, 1 with probability 3 4. , and this time let 1 0 1 2 K e j 4 for all j for j C j Xn Xn S. 2 j j In this case, we compute that if j i j min 1 q j i qi j j i min 1 i 1, then If instead j i 1, then min 1 3e j 4 i 4 . (11.3.3) i j min 1 q j i qi j j i min min 11.3.4) 646 Section 11.3: Markov Chain Monte Carlo (Note that the constant K has again cancelled out, as expected.) Hence, again i j is very easy for a computer to calculate. Given Xn i, the Metropolis–Hastings algorithm computes the value Xn 1 as follows. 1. Let Yn 1 3 4. Xn 1 with probability 1 4, or Yn 1 Xn 1 with probability 2. Let j Yn 1, and compute i j using (11.3.3) and (11.3.4). 3. With probability i j , let Xn 1 Xn let Xn 1 i . Yn 1 j. Otherwise, with probability 1 i j , Once again, these steps can all be easily performed on a computer; if repeated for some large number N of iterations, then P X N j K e j 4 for j S. j The Metropolis–Hastings algorithm can also be used for continuous random vari­ ables by using densities, as follows. 
EXAMPLE 11.3.5 Suppose we want to generate a sample from the distribution with density proportional to f y e y4 1 y 3. f y dy How can we generate a random So the density is C f y , where C variable Y such that Y has approximately this distribution, i.e., has probability density approximately equal to C f y ? 1 Let us use a proposal distribution given by an N x 1 distribution, namely, a nor­ x, we choose Yn 1 y x 2 2 mal distribution with mean x and variance 1. That is, given Xn by Yn 1 this corresponds to a proposal density of q x y N x 1 . Because the N x 1 distribution has density 2 y x 2 2. 1 2 e 1 2 e 2 As for the acceptance probability x y , we again use densities, so that x y min min 1 min Ce y4 2 Ce x 4 2 3 e y4 x4 . 11.3.5) Given Xn x, the Metropolis–Hastings algorithm computes the value Xn 1 as follows. 1. Generate Yn 1 N Xn 1 . 2. Let y Yn 1, and compute x y as before. 3. With probability 1 x y , let Xn 1 x y , let Xn 1 Xn x. Yn 1 y. Otherwise, with probability Chapter 11: Advanced Topic — Stochastic Processes 647 Once again, these steps can all be easily performed on a computer; if repeated for some large number N of iterations, then the random variable X N will approximately have density given by C f y . 11.3.2 The Gibbs Sampler In Section 7.3.3 we discussed the Gibbs sampler and its application in a Bayesian statistics problem. As we will now demonstrate, the Gibbs sampler is a specialized version of the Metropolis–Hastings algorithm, designed for multivariate distributions. It chooses the proposal probabilities qi j just right so that we always have i j 1, i.e., so that no rejections are ever required. Suppose that S 2 1 0 1 2 the set of all ordered pairs of integers i etc.) Suppose that some distribution q 1 i j i as follows. j Let V i S : j2 i2 . That is, V i and j agree in their second coordinate. Thus, V i through the point i. In terms of this definition of V i , define q 1 i j in their second coordinate. If j then define 2 i.e., S is S, is defined on S. Define a proposal distribution i1 i2 . (Thus, 2 3 1 0 1 2 S, and 6 14 is the set of all states j S such that i is a vertical line in S, which passes V i , i.e., if i and j differ V i , i.e., if i and j agree in their second coordinate, 0 if One interpretation is that, if Xn i, and P Yn 1 distribution of Yn 1 is the conditional distribution of the second coordinate must be equal to i2. , what is In terms of this choice of q 1 i j V i . Hence, also V j q 1 i j j S, then the i , conditional on knowing that for j i j ? Well, if j V i , then i V j , and i j min 1 min 1 j q 1 ji i q 1 i j j i i j min min 1 1 1. That is, this algorithm accepts the proposal Yn 1 with probability 1, and never rejects at all! Now, this algorithm by itself is not very useful because it proposes only states in V i , so it never changes the value of the second coordinate at all. However, we can i1 , so that H i similarly define a horizontal line through i by H i is the set of all states j such that i and j agree in their first coordinate. That is, H i is a horizontal line in S that passes through the point i. S : j1 j 648 Section 11.3: Markov Chain Monte Carlo We can then define q 2 i j (i.e., if i and j agree in their first coordinate), then 0 if j H i (i.e., if i and j differ in their first coordi­ nate), while if As before, we compute that for this proposal, we will always have i j 1, i.e., the Metropolis–Hastings algorithm with this proposal will never reject. 
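A sketch of the continuous-state version is almost identical; only the proposal and the acceptance ratio change. The code below uses the N(x, 1) proposal of Example 11.3.5 but works with an assumed unnormalized density proportional to exp(-y^4), chosen as a stand-in for illustration; computing with logarithms of densities avoids numerical underflow. The function names and run length are again arbitrary choices.

# Metropolis-Hastings for a continuous target, with a Normal(x, 1) proposal.
# log_f: log of the unnormalized target density.
mh_density <- function(log_f, N, x0 = 0) {
  x <- numeric(N + 1)
  x[1] <- x0
  for (n in 1:N) {
    y <- rnorm(1, mean = x[n], sd = 1)              # propose Y ~ N(x, 1)
    log_alpha <- min(0, log_f(y) - log_f(x[n]))     # the symmetric normal proposal cancels
    x[n + 1] <- if (log(runif(1)) < log_alpha) y else x[n]
  }
  x
}

log_f <- function(y) -y^4                           # assumed target: density proportional to exp(-y^4)
chain <- mh_density(log_f, N = 20000)
hist(chain[-(1:2000)], breaks = 50, freq = FALSE)   # discard a short burn-in, then inspect the sample

We now return to the two coordinate-wise proposals introduced above.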
The Gibbs sampler works by combining these two different Metropolis–Hastings i, it produces a algorithms, by alternating between them. That is, given a value Xn value Xn 1 as follows. 1. Propose a value Yn 1 V i according to the proposal distribution q 1 i j . 2. Always accept Yn 1 and set j Yn 1 thus moving vertically. 3. Propose a value Zn 1 H j according to the proposal distribution q 2 i j . 4. Always accept Zn 1 thus moving horizontally. 5. Set Xn 1 Zn 1. In this way, the Gibbs sampler does a “zigzag” through the state space S, alternately moving in the vertical and in the horizontal direction. In light of Theorem 11.3.2, we immediately obtain the following. Theorem 11.3.4 The preceding Gibbs sampler algorithm results in a Markov chain i as a stationary distribution. X0 X1 X2 that has The Gibbs sampler thus provides a particular way of implementing the Metropolis– Hastings algorithm in multidimensional problems, which never rejects the proposed values. Summary of Section 11.3 In cases that are too complicated for ordinary Monte Carlo techniques, it is pos­ sible to use Markov chain Monte Carlo techniques instead, by averaging values arising from a Markov chain. The Metropolis–Hastings algorithm provides a simple way to create a Markov chain with stationary distribution i . Given Xn, it generates a proposal Yn 1 from a proposal distribution qi j , and then either accepts this proposal (and sets Xn 1 Xn) with probability 1 Alternatively, the Gibbs sampler updates the coordinates one at a time from their conditional distribution, such that we always have i j Yn 1) with probability i j , or rejects this proposal (and sets Xn 1 i j . 1. Chapter 11: Advanced Topic — Stochastic Processes 649 EXERCISES i i i Ce i 13 4 for i i e i 13 4 11.3.1 Suppose 1 which uses simple random walk with p 11.3.2 Suppose C 1 i i . Describe in detail a Metropolis–Hastings algorithm for i , S 2 1 0 1 2 , where C 1 2 for the proposals. 6 5 8 for i C i , where 6 5 8. Describe in detail a Metropolis–Hastings algorithm for 1 0 1 2 S 2 i , which uses simple random walk with p 5 8 for the proposals. 11.3.3 Suppose for i , where . Describe in detail a Metropolis–Hastings algorithm for 1 0 1 2 S 2 i , which uses simple random walk with p 7 9 for the proposals. R1. Let K e x 4 x 6 x8 1 for x dx 11.3.4 Suppose f x Describe in detail a Metropolis–Hastings algorithm for the distribution having density K f x , which uses the proposal distribution N x 1 , i.e., a normal distribution with mean x and variance 1. dx. 11.3.5 Let Describe in detail a Metropolis–Hastings algorithm for the distribution having density K f x , which uses the proposal distribution N x 10 , i.e., a normal distribution with mean x and variance 10. R1, and let K e x4 x 6 x 8 e x 4 x6 x 8 for COMPUTER EXERCISES 11.3.6 Run the algorithm of Exercise 11.3.1. Discuss the output. 11.3.7 Run the algorithm of Exercise 11.3.2. Discuss the output. PROBLEMS 11.3.8 Suppose S integers. For i C. Describe in detail a Gibbs sampler algorithm for this distribution , i.e.,
S is the set of all pairs of positive i2 for appropriate positive constant S, suppose C 2i1 1 2 3 1 2 3 i1 i2 i i . COMPUTER PROBLEMS 11.3.9 Run the algorithm of Exercise 11.3.4. Discuss the output. 11.3.10 Run the algorithm of Exercise 11.3.5. Discuss the output. DISCUSSION TOPICS 11.3.11 Why do you think Markov chain Monte Carlo algorithms have become so popular in so many branches of science? (List as many reasons as you can.) 11.3.12 Suppose you will be using a Markov chain Monte Carlo estimate of the form A 1 M M i 1 h X [i] N . 650 Section 11.4: Martingales Suppose also that, due to time constraints, your total number of iterations cannot be more than one million. That is, you must have N M 1,000,000. Discuss the advan­ tages and disadvantages of the following choices of N and M. (a) N 1,000,000 M 1 1, M 1,000,000 (b) N (c) N 100, M 10,000 (d) N 10,000, M 100 1000, M 1000 (e) N (f) Which choice do you think would be best, under what circumstances? Why? 11.4 Martingales In this section, we study a special class of stochastic processes called martingales. We shall see that these processes are characterized by “staying the same on average.” As motivation, consider again a simple random walk in the case of a fair game, i.e., 1 2. Suppose, as in the gambler’s ruin setup, that you start at a and keep c. Let Z be the value that you end up 0. We know from Theorem 11.1.2 1 with p going until you hit either c or 0, where 0 with, so that we always have either Z c that in fact P Z a c, so that P Z Let us now consider the expected value of Z . We have that a c or Z 0 a c 0P Z 0 c a c a. z R1 That is, the average value of where you end up is a. But a is also the value at which you started! This is not a coincidence. Indeed, because p 1 2 (i.e., the game was fair), this means that “on average” you always stayed at a. That is, Xn is a martingale. 11.4.1 Definition of a Martingale We begin with the definition of a martingale. For simplicity, we assume that the mar­ tingale is a Markov chain, though this is not really necessary. be a Markov chain. The chain is a martingale Definition 11.4.1 Let X0 X1 X2 if for all n 0. That is, on average the 0 1 2 chain’s value does not change, regardless of what the current value Xn actually is. , we have E Xn 1 Xn Xn EXAMPLE 11.4.1 Let Xn be simple random walk with p 1, with probability 1 2 each. Hence, or 1 2. Then Xn 1 Xn is equal to either 1 E Xn 1 Xn Xn 1 1 2 1 1 2 0, so Xn stays the same on average and is a martingale. (Note that we will never actually 0. However, on average we will have Xn 1 have Xn 1 Xn Xn 0.) Chapter 11: Advanced Topic — Stochastic Processes 651 EXAMPLE 11.4.2 Let Xn be simple random walk with p or 2 3. Then Xn 1 1, with probabilities 2 3 and 1 3 respectively. Hence, Xn is equal to either 1 E Xn 1 Xn Xn 1 2 3 1 1 3 1 3 0. Thus, Xn is not a martingale in this case. EXAMPLE 11.4.3 Suppose we start with the number 5 and then repeatedly do the following. We either add 3 to the number (with probability 1 4), or subtract 1 from the number (with probability 3 4). Let Xn be the number obtained after repeating this procedure n times. Then, given the value of Xn, we see that Xn 1 Xn Xn 3 with probability 1 4, while Xn 1 1 with probability 3 4. Hence, E Xn 1 Xn Xn and Xn is a martingale. It is sometimes possible to create martingales in subtle ways, as follows. EXAMPLE 11.4.4 Let Xn again be simple random walk, but this time for general p. Then Xn 1 p. 
Hence, is equal to 1 with probability p, and to 1 with probability q 1 Xn E Xn 1 Xn Xn 1 p 1 q p q 2 p 1. 1 2, then this is not equal to 0. Hence, Xn does not stay the same on average, If p so Xn is not a martingale. On the other hand, let Zn 1 p p X n , i.e., Zn equals the constant 1 1 corresponds to multiplying Zn by 1 to dividing Zn by 1 probability p, while Xn 1 that, given the value of Zn, we have Xn p p raised to the power of Xn. Then increasing Xn by p p, while decreasing Xn by 1 corresponds 1 with p . But Xn 1 Xn p. Therefore, we see 1 with probability q 1 p p, i.e., multiplying by p 1 E Zn 1 Zn Zn 1 1 p p Zn Zn p p Zn pZn p 1 pZn Zn p Zn 1 p 1 p Zn 0. Accordingly, E Zn 1 Zn Zn is a martingale. 0, so that Zn stays the same on average, i.e., Zn 11.4.2 Expected Values Because martingales stay the same on average, we immediately have the following. 652 Section 11.4: Martingales Theorem 11.4.1 Let Xn be a martingale with X0 n. a. Then E Xn a for all This theorem sometimes provides very useful information, as the following exam­ ples demonstrate. EXAMPLE 11.4.5 Let Xn again be simple random walk with p Xn is a martingale. Hence, if X0 is, for a fair game (i.e., for p your average fortune will always be equal to your initial fortune a. 1 2. Then we have already seen that a for all n. That 1 2), no matter how long you have been gambling, a, then we will have E Xn EXAMPLE 11.4.6 Suppose we start with the number 10 and then repeatedly do the following. We either add 2 to the number (with probability 1 3), or subtract 1 from the number (with proba­ bility 2 3). Suppose we repeat this process 25 times. What is the expected value of the number we end up with? Without martingale theory, this problem appears to be difficult, requiring lengthy computations of various possibilities for what could happen on each of the 25 steps. However, with martingale theory, it is very easy. Indeed, let Xn be the number after n steps, so that X0 10 X1 probability 1 3) or X1 equals either 2 (with probability 1 3) or 9 (with probability 2 3), etc. Then, because Xn 1 1 (with probability 2 3), we have 12 (with Xn E Xn 1 Xn Xn . Hence, Xn is a martingale. It then follows that E Xn X0 10, for any n. In particular, E X25 10. That is, after 25 steps, on average the number will be equal to 10. 11.4.3 Stopping Times a If Xn is a martingale with X0 for all n. However, it is sometimes even more helpful to know that E X T a, where T is a random time. Now, this is not always true; however, it is often true, as we shall see. We begin with another definition. a, then it is very helpful to know that E Xn Definition 11.4.2 Let Xn be a stochastic process, and let T be a random variable taking values in 0 1 2 , 0 1 2 . That is, when the event T m (i.e., whether or not to “stop” at time m), we are deciding whether or not T . not allowed to look at the future values Xm 1 Xm 2 m is independent of the values Xm 1 Xm 2 . Then T is a stopping time if for all m EXAMPLE 11.4.7 Let Xn be simple random walk, let b be any integer, and let b be the first time we hit the value b. Then b is a stopping time because the event b min n 0 : Xn b n depends only on X0 Xn, not on Xn 1 Xn 2 . Chapter 11: Advanced Topic — Stochastic Processes 653 On the other hand, let T 1, so that T corresponds to stopping just before we hit b. Then T is not a stopping time because it must look at the future value Xm 1 to decide whether or not to stop at time m. b A key result about martingales and stopping times is the optional stopping theorem, as follows. 
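Before stating that theorem, the conclusion of Example 11.4.6 is easy to check by simulation. The short R sketch below repeats the 25-step process many times and averages the final values; by Theorem 11.4.1, the average should come out close to the starting value 10. The seed and the number of replications are arbitrary choices.

# Monte Carlo check of Example 11.4.6: start at 10 and, on each of 25 steps,
# add 2 with probability 1/3 or subtract 1 with probability 2/3.
set.seed(1)
one_run <- function() {
  x <- 10
  for (step in 1:25)
    x <- x + sample(c(2, -1), 1, prob = c(1/3, 2/3))
  x
}
mean(replicate(20000, one_run()))   # should be close to 10, since X_n is a martingale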
Theorem 11.4.2 (Optional stopping theorem) Suppose Xn X0 a, and T is a stopping time. Suppose further that either (a) the martingale is bounded up to time T , i.e., for some M 0 we have Xn for all n (b) the stopping time is bounded, i.e., for some M 0 we have T M. Then E X T equal to the starting value a. a i.e., on average the value of the process at the random time T is is a martingale with T ; or M PROOF For a proof and further discussion, see, e.g., page 273 of Probability: The­ ory and Examples, 2nd ed., by R. Durrett (Duxbury Press, New York, 1996). Consider a simple application of this. EXAMPLE 11.4.8 Let Xn be simple random walk with initial value a and with p be integers. Let T r s for n We conclude that E X T a. s s be the first time the process hits either r or s. Then min r T , so that condition (a) of the optional stopping theorem applies. a, i.e., that at time T , the walk will on average be equal to 1 2. Let r Xn a We shall see that the optional stopping theorem is useful in many ways. EXAMPLE 11.4.9 We can use the optional stopping theorem to find the probability that the simple random walk with p 1 2 will hit r before hitting another value s. Indeed, again let Xn be simple random walk with initial value a and p 1 2, a. We can s Then as earlier, E X T r , i.e., for the probability that the walk hits r before min r a s integers and T with r use this to solve for P X T hitting s. Clearly, we always have either X T r or X T s. Let h h s. Because E X T a, we must have a P X T hr r . Then h s. 1 E X T hr Solving for h, we see that 1 P X T r a r s s . We conclude that the probability that the process will hit r before it hits s is equal s . Note that absolutely no difficult computations were required to r to a s obtain this result. A special case of the previous example is particularly noteworthy. 654 Section 11.4: Martingales EXAMPLE 11.4.10 In the previous example, suppose r r c and s is precisely the same as the probability of success in the gambler’s ruin problem. The previous example shows that h a c. This gives the same answer r as Theorem 11.1.2, but with far less effort. 0. Then the value h P X T a s s It is impressive that, in the preceding example, martingale theory can solve the gambler’s ruin problem so easily in the case p 1 2. Our previous solution, without using martingale theory, was much more difficult (see Section 11.7). Even more sur­ 1 2, as prising, martingale theory can also solve the gambler’s ruin problem when p follows. EXAMPLE 11.4.11 Let Xn be simple random walk with initial value a and with p be integers. Let T solve the gambler’s ruin problem in this case, we are interested in g We can use the optional stopping theorem to solve for the gambler’s ruin probability g, as follows. c 0 be the first time the process hits either c or 0. To 1 2. Let 0 P X T min a c c Now, Xn is not a martingale, so we cannot apply martingale theory to it. However, let Zn 1 p p Xn . Then Zn has initial value Z0 that Zn is a martingale. Furthermore,
1 p p a. Also, we know from Example 11.4.4 0 Zn max 1 p c 1 p c p p T , so that condition (a) of the optional stopping theorem applies. We conclude 1 p a . E ZT Z0 Now, clearly, we always have either X T (with probability 1 case, ZT p 1. Hence, E ZT p a, we must have 1 g). In the former case, ZT with probability g) or X T 0 p p c, while in the latter g 1 . Because E ZT . Solving for g, we see that 1 1 This again gives the same answer as Theorem 11.1.2, this time for , but again with far less effort. Martingale theory can also tell us other surprising facts. for n that Chapter 11: Advanced Topic — Stochastic Processes 655 EXAMPLE 11.4.12 Let Xn be simple random walk with p the walk hit the value not for sure. Furthermore, conditional on not hitting large, as we now discuss. Let T min 106 takes more than one million steps, in which case T 106. 0. Will 1 some time during the first million steps? Probably yes, but 1, it will probably be extremely 1 2 and with initial value a 1 That is, T is the first time the process hits 1, unless that Now, Xn is a martingale. Also T is a stopping time (because it does not look into the future when deciding whether or not to stop). Furthermore, we always have 106, so condition (b) of the optional stopping theorem applies. We conclude that T E X T 0. a On the other hand, by the law of total expectation, we have . Also, clearly E X T X T 1 1 1 u. Then we conclude that 0 1. Let , so that P X T 1 1 u so that Now, clearly, u will be very close to 1, i.e., it is very likely that within 106 steps the process will have hit 1. Hence, E X T X T 1 is extremely large. We may summarize this discussion as follows. Nearly always we have X T However, very occasionally we will have X T of X T when X T and the case X T martingale)! 1. 1. Furthermore, the average value 1 is so large that overall (i.e., counting both the case X T 1 1), the average value of X T is 0 (as it must be because Xn is a If one is not careful, then it is possible to be tricked by martingale theory, as follows. EXAMPLE 11.4.13 Suppose again that Xn is simple random walk with p a that takes). 1, i.e., T is the first time the process hits 0. Let T 1 2 and with initial value 1 (no matter how long Because the process will always wait until it hits Because this is true with probability 1, we also have E X T 1, we always have X T 1. 1. On the other hand, again Xn is a martingale, so again it appears that we should have E X T 0. What is going on? The answer, of course, is that neither condition (a) nor condition (b) of the optional stopping theorem is satisfied in this case. That is, there is no limit to how large T might T . Hence, the optional stopping have to be or how large Xn might get for some n theorem does not apply in this case, and we cannot conclude that E X T 0. Instead, E X T 1 here. Summary of Section 11.4 A Markov chain Xn E Xn 1 Xn Xn is a martingale if it stays the same on average, i.e., if 0 for all n. There are many examples. 656 Section 11.4: Martingales A stopping time T for the chain is a nonnegative integer­valued random variable that does not look into the future of Xn . For example, perhaps T b is the first time the chain hits some state b. If Xn bounded, then E X T gambler’s ruin. is a martingale with stopping time T , and if either T or Xn n T is X0. This can be used to solve many problems, e.g., EXERCISES 3 8 Xn Xn Xn P Xn 8 p we let Xn 1 p we let Xn 1 0 1. Compute P Xn 7, while with probability 1 2Xn, while with probability 1 14. 
Suppose for some n, we 1, i.e., Xn is always either 8, 14 . 11.4.1 Suppose we define a process Xn as follows. Given Xn, with probability 3 8 we let Xn 1 Xn C. What value 4, while with probability 5 8 we let Xn 1 of C will make Xn be a martingale? 11.4.2 Suppose we define a process Xn as follows. Given Xn, with probability p we let Xn 1 2. What value of p will make Xn be a martingale? 11.4.3 Suppose we define a process Xn as follows. Given Xn, with probability p we Xn 2. What value of p let Xn 1 will make Xn be a martingale? 11.4.4 Let Xn be a martingale, with initial value X0 know that P Xn 17 12 P Xn 12, or 17. Suppose further that P Xn 11.4.5 Let Xn be a martingale, with initial value X0 P X8 4 Suppose further that P X8 11.4.6 Suppose you start with 175 pennies. You repeatedly ip a fair coin. Each time the coin comes up heads, you win a penny; each time the coin comes up tails, you lose a penny. (a) After repeating this procedure 20 times, how many pennies will you have on aver­ age? (b) Suppose you continue until you have either 100 or 200 pennies, and then you stop. What is the probability you will have 200 pennies when you stop? 11.4.7 Define a process Xn by X0 Xn 1 hits either 1 or 81. (a) Show that Xn is a martingale. (b) Show that T is a stopping time. (c) Compute E X T . (d) Compute the probability P X T 5. Suppose we know that 1, i.e., X8 is always either 3, 4, or 6. 6 . Compute P X8 3Xn with probability 1 4, or 81 be the first time the process Xn 3 with probability 3 4. Let T min 1 that the process hits 1 before hitting 81. 27, and Xn 1 1 6 2 P X8 P X8 P X8 4 . 3 PROBLEMS 11.4.8 Let Xn be a stochastic process, and let T1 be a stopping time. Let T2 and T3 stopping time, and which is not? (Explain your reasoning.) i i, for some positive integer i. Which of T2 and T3 is necessarily a T1 T1 Chapter 11: Advanced Topic — Stochastic Processes 657 11.4.9 Let Xn be a stochastic process, and let T1 and T2 be two different stopping times. Let T3 min T1 T2 , and T4 max T1 T2 . (a) Is T3 necessarily a stopping time? (Explain your reasoning.) (b) Is T4 necessarily a stopping time? (Explain your reasoning.) 11.5 Brownian Motion The simple random walk model of Section 11.1.2 (with p 1 2) can be extended to an interesting continuous­time model, called Brownian motion, as follows. Roughly, the idea is to speed up time faster and faster by a factor of M (for very large M), while simultaneously shrinking space smaller and smaller by a factor of 1 M. The factors of M and 1 M are chosen just right so that, using the central limit theorem, we can derive properties of Brownian motion. Indeed, using the central limit theorem, we shall see that various distributions related to Brownian motion are in fact normal distributions. Historically, Brownian motion gets its name from Robert Brown, a botanist, who in 1828 observed the motions of tiny particles in solution, under a microscope, as they were bombarded from random directions by many unseen molecules. Brownian motion was proposed as a model for the observed chaotic, random movement of such particles. In fact, Brownian motion turns out not to be a very good model for such movement (for example, Brownian motion has infinite derivative, which would only make sense if the particles moved infinitely quickly!). However, Brownian motion has many useful mathematical properties and is also very important in the theory of finance because it is often used as a model of stock price uctuations. 
A proper mathematical theory of Brownian motion was developed in 1923 by Norbert Wiener2; as a result, Brownian motion is also sometimes called the Wiener process. We shall construct Brownian motion in two steps. First, we construct faster and where M is large. Then, we take the limit as faster random walks, to be called Y M M to get Brownian motion. t 11.5.1 Faster and Faster Random Walks To begin, we let Z1 Z2 each M 1 2 , define a discrete­time random process be i.i.d. with P Zi 1 P Zi 1 1 2 For by Y M 0 0, and for i 0 1 2 so that Zi 1, Y M i M 1 M Z1 Z2 Zi . 2Wiener was such an absent­minded professor that he once got lost and could not find his house. In his confusion, he asked a young girl for directions, without recognizing the girl as his daughter! 658 Section 11.5: Brownian Motion Intuitively, then, Y M i M is like an ordinary (discrete­time) random walk (with p 1 2), except that time has been sped up by a factor of M and space has been shrunk by a factor of M (each step in the new walk moves a distance 1 M . That is, this process takes lots and lots of very small steps. To make Y M i M into a continuous­time process, we can then “fill in” the missing 1 M]. In this way, values by making the function linear on the intervals [i M i we obtain a continuous­time process Y M t : t 0 which agrees with Y M i M whenever t 1 M. In Figure 11.5.1, we have plotted Y 10 i 10 : i 0 1 20 (the dots) and the corresponding values of Y 10 t : 0 t 20 (the solid line), arising from the realization Z1 Z20 1 1 1 1 1 1 , where we have taken 1 10 0 316 Y 0.949 0.949 0.632 0.632 0.632 0.632 0.632 0.316 0.316 0.316 0.316 0.316 0.316 0.000 0.000 0.000 0.000 0.000 ­0.316 ­0.316 ­0.632 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0 1.0 t Figure 11.5.1: Plot of some values of Y 10 i 10 and Y 10 t The collection of variables Y M indexed by the continuous time parameter t time stochastic process. : t t 0 is then a stochastic process but is now 0. This is an example of a continuous­ Now, the factors M and M have been chosen carefully, as the following theorem illustrates. Chapter 11: Advanced Topic — Stochastic Processes 659 Theorem 11.5.1 Let Y M (a) For t distributed with mean t. (b) For s t 0, the covariance t 0, the distribution of Y M t : t 0 be as defined earlier. Then for large M: is approximately N 0 t , i.e., normally Cov Y M t Y M t is approximately equal to min s t . (c) For t s mately N 0 t is approximately independent of Y M . (d) Y M is a continuous function of t. t s 0, the distribution of the increment Y M s , i.e., normally distributed with mean 0 and variance t Y M s t is approxi­ s, and PROOF See Section 11.7 for the proof of this result. We shall use this limit theorem to construct Brownian motion. 11.5.2 Brownian Motion as a Limit We have now developed the faster and faster processes Y M of their properties. Brownian motion is then defined as the limit as M processes Y M that the distribution of Bt : t of Y M t 0 . That is, we define Brownian motion Bt
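Before listing those properties, note that the construction is straightforward to simulate: generate many ±1 steps, take cumulative sums, and rescale by the square root of the speed-up factor. The R sketch below does this on the grid i/M; plots such as Figure 11.5.2 can be produced this way. The values of M, the time horizon, and the seed are arbitrary choices.

# Simulate the sped-up, shrunk-down walk Y^(M)(i/M), i = 0, 1, ..., t*M,
# which approximates Brownian motion on [0, t] when M is large.
set.seed(1)
M <- 10000                                         # the speed-up factor (large)
t_max <- 2.5                                       # time horizon for the plot
z <- sample(c(-1, 1), M * t_max, replace = TRUE)   # the i.i.d. steps Z_1, Z_2, ...
y <- c(0, cumsum(z)) / sqrt(M)                     # Y^(M)(i/M) = (Z_1 + ... + Z_i)/sqrt(M)
plot((0:(M * t_max)) / M, y, type = "l", xlab = "t", ylab = "approximate B_t")

# Sanity check of Theorem 11.5.1(a): Y^(M)(1) should be roughly N(0, 1),
# so its sample variance over many replications should be close to 1.
var(replicate(1000, sum(sample(c(-1, 1), M, replace = TRUE)) / sqrt(M)))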
: t 0 is equal to the limit as , and some of the 0 by saying of the distribution 0 . A graph of a typical run of Brownian motion is in Figure 11.5.2. 2 1 0 ­1 B 0.0 0.5 1.0 1.5 2.0 2.5 t Figure 11.5.2: A typical outcome from Brownian motion. In this way, all the properties of Y M will apply to Brownian motion, as follows. t for large M, as developed in Theorem 11.5.1, 660 Section 11.5: Brownian Motion Theorem 11.5.2 Let Bt : t (a) Bt is normally distributed: Bt (b) Cov Bs Bt (c) if 0 N 0 t (d) the function Bt s s , and furthermore Bt E Bs Bt min s t t, then the increment Bt t 0 is a continuous function. 0 be Brownian motion. Then N 0 t for any t 0; for s t 0; Bs is independent of Bs; Bs is normally distributed: Bt Bs This theorem can be used to compute many things about Brownian motion. EXAMPLE 11.5.1 Let Bt be Brownian motion. What is P B5 N 0 5 . Hence, B5 We know that B5 3 ? 5 N 0 1 . We conclude that P B5 3 P B5 5 3 5 3 5 0 910, where x x 1 2 e s2 2 ds is the cdf of a standard normal distribution, and we have found the numerical value from Table D.2. Thus, about 91% of the time, Brownian motion will be less than 3 at time 5. EXAMPLE 11.5.2 Let Bt be Brownian motion. What is P B7 N 0 7 . Hence, B7 We know that B7 4 ? 7 N 0 1 . We conclude that P B7 4 1 1 P B7 4 4 7 1 1 P B7 0 065 7 0 935. 4 7 Thus, over 93% of the time, Brownian motion will be at least 4 at time 7. EXAMPLE 11.5.3 Let Bt be Brownian motion. What is P B8 B6 1 5 ? We know that B8 B6 N 0 8 6 N 0 2 . Hence, B8 B6 2 N 0 1 . We conclude that P B8 B6 1 5 P B8 B6 2 1 5 2 1 5 2 0 144. Thus, about 14% of the time, Brownian motion will decrease by at least 1.5 between time 6 and time 8. EXAMPLE 11.5.4 Let Bt be Brownian motion. What is P B2 By Theorem 11.5.2, we see that B5 0 5 B5 B2 1 5 ? B2 and B2 are independent. Hence, P B2 0 5 B5 B2 1 5 P B2 0 5 P B5 B2 1 5 . Now, we know that B2 N 0 2 . Hence, B2 2 N 0 1 , and P B2 0 5 P B2 2 0 5 2 0 5 2 . Chapter 11: Advanced Topic — Stochastic Processes 661 Similarly, B5 B2 N 0 3 , so B5 P B5 B2 1 5 We conclude that B2 B2 3 3 B2 N 0 1 , and B5 1 P B5 P B2 0 5 B5 B2 1 5 P B2 0 5 0 5 P B5 1 5 2 B2 3 1 5 0 292. Thus, about 29% of the time, Brownian motion will be no more than and will then increase by at least 1.5 between time 2 and time 5. 1 2 at time 2 We note also that, because Brownian motion was created from simple random 1 2, it follows that Brownian motion is a martingale. This implies walks with p that E Bt N 0 t . 0 for all t, but of course, we already knew that because Bt On the other hand, we can now use the optional stopping theorem (Theorem 11.4.2) to conclude that E BT 0 where T is a stopping time (provided, as usual, that either T is bounded). This allows us to compute certain probabilities, as T or Bt : t follows. EXAMPLE 11.5.5 Let Bt be Brownian motion. Let c hit c before it hits b? 0 b. What is the probability the process will To solve this problem, we let first time the process hits b. We then let T min either hits c or hits b. The question becomes, what is P c is P BT c ? c c be the first time the process hits c, and b be the b be the first time the process b ? Equivalently, what To solve this, we note that we must have E BT c with probability h, and BT b with probability 1 B0 0. But if h c , P BT h. Hence, we must then BT have 0 E BT hc 1 h b so that h b b c . We conclude that P BT c P c b b b . c (Recall that c 0, so that b c b c here.) 
Finally, we note that although Brownian motion is a continuous function, it turns out that, with probability one, Brownian motion is not differentiable anywhere at all! This is part of the reason that Brownian motion is not a good model for the movement of real particles. (See Challenge 11.5.15 for a result related to this.) However, Brownian motion has many other uses, including as a model for stock prices, which we now describe. 11.5.3 Diffusions and Stock Prices Brownian motion is used to construct various diffusion processes, as follows. Given Brownian motion Bt , we can let Xt a t Bt , 662 Section 11.5: Brownian Motion where a and are any real numbers, and 0. Then Xt is a diffusion. Here, a is the initial value, (called the drift) is the average rate of increase, and Intuitively, Xt is approximately equal to the linear function a (called the volatility parameter) represents the amount of randomness of the diffusion. t, but due to the randomness of Brownian motion, Xt takes on random values around this linear function. The precise distribution of Xt can be computed, as follows. Theorem 11.5.3 Let Bt be Brownian motion, and let Xt diffusion. Then (a) E Xt a (b) Var Xt (c) Xt t, 2t, t 2t . N a a t Bt be a PROOF We know Bt not random (i.e., is a constant from the point of view of random variables). Hence, N 0 1 , so E Bt 0 and Var Bt t. Also, a t is E Xt E a proving part (a). Similarly, Var Xt Var a proving part (b). t t Bt a t E Bt a t, Bt Var Bt 2Var Bt 2t, Finally, because Xt is a linear function of the normally distributed random variable Bt , Xt must be normally distributed by Theorem 4.6.1. This proves part (c). Diffusions are often used as models for stock prices. That is, it is often assumed Bt for appropriate that the price Xt of a stock at time t is given by Xt values of a, , and . a t EXAMPLE 11.5.6 Suppose a stock has initial price $20, drift of $3 per year, and volatility parameter 1 4. What is the probability that the stock price will be over $30 after two and a half years? 1 4Bt and is thus a Here, the stock price after t years is given by Xt 20 3t diffusion. So, after 2 5 years, we have X2 5 20 7 5 1 4B2 5 27 5 1 4B2 5 Hence, P X2 5 30 P 27 5 P B2 5 1 4B2 5 1 79 . 30 P B2 5 30 27 5 1 4 But like before, P B2 5 1 79 1 1 P B2 5 1 79 1 79 2 5 P B2 5 1 0 129. 2 5 1 79 2 5 We conclude that P X2 5 30 0 129. Chapter 11: Advanced Topic — Stochastic Processes 663 Hence, there is just under a 13% chance that the stock will be worth more than $30 after two and a half years. EXAMPLE 11.5.7 Suppose a stock has initial price $100, drift of $2 per year, and volatility parameter 5 5. What is the probability that the stock price will be under $90 after just half a year? 5 5Bt and is again 5 5B0 5 Here, the stock price after t years is given by Xt 100 2t 5 5B0 5 100 1 0 a diffusion. So, after 0 5 years, we have X0 5 Hence, 99 P X0 5 90 P 99 P B0 5 5 5B0 5 1 64 0 5 1 64 90 P B0 5 0 5 P B0 5 2 32 0 010. 90 99 5 5 0 5 1 64 Therefore, there is about a 1% chance that the stock will be worth less than $90 after half a year. More generally, the drift and volatility leading to more complicated diffusions Xt , though we do not pursue this here. could be functions of the value Xt , Summary of Section 11.5 t 0 is created from simple random walk with p Brownian motion Bt speeding up time by a large factor M, and shrinking space by a factor 1 M. Hence, B0 Bt a continuous function. 
Diffusions (often used to model stock prices) are of the form Xt N 0 t , and Bt has independent normal increments with is s for 0 min s t , and Bt t, and Cov Bs Bt 0 Bt N 0 t 1 2, by Bt . Bs a s t EXERCISES i M used to construct Brownian motion. 1 1 1 2 (Hint: Don’t forget that 2.) 1 for M 1, M 2, M 3, and M 4 11.5.1 Consider the speeded­up processes Y M Compute the following quantities. (a) P Y 1 1 (b) P Y 2 1 (c) P Y 2 1 (d) P Y M 11.5.2 Let Bt be Brownian motion. Compute P B1 11.5.3 Let Bt be Brownian motion. Compute each of the following quantities. (a) P B2 (b) P B3 (c) P B9 (d) P B26 (e) P B26 3 (f) P B26 3 4 B5 B11 664 Section 11.5: Brownian Motion 2 5 5. 3t B5 B2 0 9 B14 14 . 1 B5 E B2 N 0 3 . 4 B8 4 2 E B17 B14 2 in two ways. (Hint: Do not forget 2 B13 3 2 B18 6 2Bt be a diffusion (so that a 5 before it hits 15. 15 before it hits 5. 11.5.4 Let Bt be Brownian motion. Compute each of the following quantities. (a) P B2 (b) P B5 (c) P B8 4 11.5.5 Let Bt be Brownian motion. Compute E B13 B8 . part (b) of Theorem 11.5.2.) 11.5.6 Let Bt be Brownian motion. Compute E B17 (a) Use the fact that B17 B14 (b) Square it out, and compute E B2 17 11.5.7 Let Bt be Brownian motion. (a) Compute the probability that the process hits (b) Compute the probability that the process hits (c) Which of the answers to Part (a) or (b) is larger? Why is this so? (d) Compute the probability that the process hits 15 before it hits (e) What is the sum of the answers to parts (a) and (d)? Why is this so? 11.5.8 Let Xt Compute each of the following quantities. (a) E X7 (b) Var X8 1 (c) P X2 5 (d) P X17 11.5.9 Let Xt 4Bt . Compute E X3 X5 . 11.5.10 Suppose a stock has initial price $400 and has volatility parameter equal to 9. Compute the probability that the stock price will be over $500 after 8 years, if the drift per year is equal to (a) $0. (b) $5. (c) $10. (d) $20. 11.5.11 Suppose a stock has initial price $200 and drift of $3 per year. Compute the probability that the stock price will be over $250 after 10 years, if the volatility parameter is equal to (a) 1. (b) 4. (c) 10. (d) 100. 3, and 12 50 1 5 t 2). 10 5, PROBLEMS 11.5.12 Let Bt be Brownian motion, and let X and variance of X. 11.5.13 Prove that P Bt P Bt x x for any t 0 and any x R1. 2B3 7B5. Compute the mean Chapter 11: Advanced Topic — Stochastic Processes 665 CHALLENGES 11.5.14 Compute P Bs will need to use conditional densities.) 11.5.15 (a) Let f : R1 such that exists K x Bt f y f x R1 be a Lipschitz function, i.e., a function for which there y for all x y R1. Compute K x y , where 0 s t, and x y R1. (Hint: You f t h f t 2 h lim 0 h for any t (b) Let Bt be Brownian motion. Compute R1. E lim 0 h Bt h 2 Bt h 0. for any t (c) What do parts (a) and (b) seem to imply about Brownian motion? (d) It is a known fact that all functions that are continuously differentiable on a closed interval are Lipschitz. In light of this, what does part (c) seem to imply about Brownian motion? DISCUSSION TOPICS 11.5.16 Diffusions such as those discussed here (and more complicated, varying co­ efficient versions) are very often used by major investors and stock traders to model stock
prices. (a) Do you think that diffusions provide good models for stock prices? (b) Even if diffusions did not provide good models for stock prices, why might in­ vestors still need to know about them? 11.6 Poisson Processes Finally, we turn our attention to Poisson processes. These processes are models for events that happen at random times Tn. For example, Tn could be the time of the nth fire in a city, or the detection of the nth particle by a Geiger counter, or the nth car passing a checkpoint on a road. Poisson processes provide a model for the probabilities for when these events might take place. More formally, we let a 0, and let R1 R2 having the Exponential a distribution. We let T0 be i.i.d. random variables, each 0, and for n 1, Tn R1 R2 Rn. The value Tn thus corresponds to the (random) time of the nth event. We also define a collection of counting variables Nt , as follows. For t 0, we let Nt max n : Tn t 666 Section 11.6: Poisson Processes That is, Nt counts the number of events that have happened by time t. (In particular, N0 T1, i.e., before the first event occurs.) 0. Furthermore, Nt 0 for all t We can think of the collection of variables Nt for t 0 as being a stochastic process, indexed by the continuous time parameter t 0 is thus another example, like Brownian motion, of a continuous­time stochastic process. 0 is called a Poisson process (with intensity a). This name comes 0. The process Nt : t In fact, Nt : t from the following. Theorem 11.6.1 For any t 0, the distribution of Nt is Poisson at . PROOF See Section 11.7 for the proof of this result. In fact, even more is true. Theorem 11.6.2 Let 0 the distribution of Nti Nti 1 variables Nti t1 t0 Nti 1 is Poisson a ti t2 t3 td. Then for i d, . Furthermore, the random 1 2 ti 1 for i 1 d are independent. PROOF See Section 11.7 for the proof of this result. EXAMPLE 11.6.1 Let Nt be a Poisson process with intensity a Poisson 3a Here, N3 5. What is P N3 Poisson 15 . Hence, from the definition of the Poisson 12 ? distribution, we have P N3 12 e 15 15 12 12! 0 083, which is a little more than 8%. EXAMPLE 11.6.2 Let Nt be a Poisson process with intensity a 2. What is P N6 11 ? Here N6 Poisson 6a Poisson 12 . Hence, P N6 11 e 12 12 11 11! 0 114, or just over 11%. EXAMPLE 11.6.3 Let Nt be a Poisson process with intensity a 4. What is P N2 (Recall that here the comma means “and” in probability statements.) 3 N5 We begin by writing P N2 1 This is just rewriting the question. However, it puts it into a context where we can use Theorem 11.6.2. 3 N5 4 ? 3 N5 P N2 N2 4 Indeed, by that theorem, N2 and N5 N2 are independent, with N2 Poisson 8 and N5 N2 Poisson 12 . Hence, P N2 3 N5 4 We thus see that the event N2 1 3 N5 N2 3 P N5 N2 e 12 121 1! P N2 P N2 e 8 83 3! 4 is very unlikely in this case. 0 0000021. 1 3 N5 Chapter 11: Advanced Topic — Stochastic Processes 667 Summary of Section 11.6 Poisson processes are models of events that happen at random times Tn. It is assumed that the time Rn Exponential a for some a by time t. It follows that Nt increments, with Nt Tn 1 between consecutive events in 0. Then Nt represents the total number of events t 0 has independent t. Poisson at , and in fact the process Nt Poisson a t for 0 Ns Tn s s EXERCISES t 0 be a Poisson process with intensity a t 0 be a Poisson process with intensity a 5 . t 0 be a Poisson process with intensity a 20 3 . 20 3 N6 13 3 20 . 340 13 N5 13 N6 13 N5 11.6.1 Let N t ing probabilities. 
(a) P N2 (b) P N5 (c) P N6 (d) P N50 (e) P N2 (f) P N2 (g) P N2 11.6.2 Let N t 6 and P N0 3 11.6.3 Let N t 6 and P N3 11.6.4 Let N t 6 N3 11.6.5 Let N t nation) the conditional probability P N2 6 11.6.6 Let N t explanation) the following conditional probabilities. (a) P N6 (b) P N6 (c) P N9 (d) P N9 (e) P N9 5 N9 5 N9 5 N6 7 N6 12 N6 5 . Explain your answer be a Poisson process with intensity a t 0 be a Poisson process with intensity a 2 2 N2 9 t 0 be a Poisson process with intensity a 7. Compute the follow­ 3. Compute P N1 2 1 3. Compute P N2 3. Compute P N2 0. Compute (with expla­ 1 3. Compute (with PROBLEMS 0 be a Poisson process with intensity a 11.6.7 Let Nt : t let j be a positive integer. (a) Compute (with explanation) the conditional probability P Ns (b) Does the answer in part (a) depend on the value of the intensity a? Intuitively, why or why not? 11.6.8 Let Nt : t of the first event, as usual. Let 0 0 be a Poisson process with intensity a t. 0. Let T1 be the time 0. Let 0 t, and j Nt s s j 668 Section 11.7: Further Proofs 1.) 1 Nt 1 (If you wish, you may use the previous problem, (a) Compute P Ns with j (b) Suppose t is fixed, but s is allowed to vary in the interval 0 t . What does the an­ swer to part (b) say about the “conditional distribution” of T1, conditional on knowing that Nt 1? 11.7 Further Proofs Proof of Theorem 11.1.1 We want to prove that when Xn is a simple random walk, n is a positive integer, and n and n if k is an integer such that k is even, then n k P Xn a k n n k 2 p n k 2q n k 2. For all other values of k, we have P Xn a k 0. Furthermore, E Xn a n 2 p 1 . Of the first n bets, let Wn be the number won, and let Ln be the number lost. Then n Wn Ln. Also, Xn a Wn Adding these two equations together, we conclude that n Ln. 2Wn Solving for Wn, we see that Wn a Ln Wn Wn must be an integer, it follows that n k is even. P Xn 0 unless n a k Xn Xn Wn Ln a 2. Because a must be even. We conclude that Xn a n On the other hand, solving for Xn, we see that Xn a 2Wn a a P Xn n. Because 0 Wn 0 if k k Suppose now that k a n P Wn k We conclude that n, it follows that n or k n. n is even, and k 2 . But the distribution of Wn is clearly Binomial n p . n. Then from the above, P Xn Xn n n k 2Wn a n, or Xn n, i.e., that P Xn a k n n k 2 p n k 2q n k 2, provided that k n is even and Finally, because Wn a 2Wn n, therefore E Xn k n n. Binomial n p , therefore E Wn a 2E Wn n np. Hence, because a n 2 p 1 , a 2np n Xn as claimed. Proof of Theorem 11.1.2 We want to prove that when Xn is a simple random walk, with some initial fortune a and probability p of winning each bet, and 0 0 c, then the probability P c a Chapter 11: Advanced Topic — Stochastic Processes 669 of hitting c before 0 is given by . To begin, let us write s b for the probability P c fortune b, for any 0 out to be easier to solve for all of the values s 0 s 1 s 2 and this is the trick we use. 0 when starting at the initial c. We are interested in computing s a . However, it turns s c simultaneously, b We have by definition that s 0 0 (i.e., if we start with $0, then we can never 1 (i.e., if we start with $c, then we have already won). So, those two 1 are not obtained as win) and s c cases are easy. However, the values of s b for 1 easily. b c Our trick will be to develop equations that relate the values s b for different values of b. Indeed, suppose 1 1. It is difficult to compute s b directly. 
However, it is easy to understand what will happen on the first bet — we will either lose $1 with probability p, or win $1 with probability q. That leads to the following result. b c Lemma 11.7.1 For 1 b c 1, we have s b ps b 1 qs b 1 . (11.7.1) PROOF Suppose first that we win the first bet, i.e., that Z1 bet, we will have fortune b before reaching 0, except this time starting with fortune b winning this first bet, our chance of reaching c before reaching 0 is now s b still do not know what s b s b and s b 1. After this first 1. We then get to “start over” in our quest to reach c 1 instead of b. Hence, after 1 . (We 1 is, but at least we are making a connection between 1 .) Suppose instead that we lose this first bet, i.e., that Z1 1. After this first bet, 1 instead of b. 1 . we will have fortune b Hence, after this first bet, our chance of reaching c before reaching 0 is now s b 1. We then get to “start over” with fortune b We can combine all of the preceding information, as follows. s b P c P Z1 ps b 0 1 1 0 c qs b 1 P Z1 1 c 0 That is , as claimed. So, where are we? We had c 1 unknowns, s 0 s 1 s c . We now know the two equations s 0 p s b in c q s b 1 1 unknowns, so we can now solve our problem! 1 for b 1, plus the c c 0 and s c 1 2 1 equations of the form s b 1. In other words, we have c 1 equations The solution still requires several algebraic steps, as follows. 670 Section 11.7: Further Proofs Lemma 11.7.2 For 1 b c 1, we have . PROOF Recalling that p q 1 we rearrange (11.7.1) as follows And finally, which gives the result , Lemma 11.7.3 For 0 b c, we have 11.7.2) PROOF Applying the equation of Lemma 11.7.2 with b 1, we obtain because s 0 0). Applying it again with b 2, we obtain By induction, we see that for b 0 1 2 c 1. Hence, we compute that for b 0 1 2 c . This gives the result. Chapter 11: Advanced Topic — Stochastic Processes 671 We are now able to finish the proof of Theorem 11.1.2. If p 1 2, then q p 1, so (11.7.2) becomes s b 1 c. Then s b bs 1 1, i.e., s 1 bs 1 . But s c b c. Hence, s a 1, so we a c must have cs 1 in this case. If p 1 2, then q p 1, so (11.7.2) is a geometric series, and becomes . Because s c 1, we must have so Then Hence in this case. Proof of Theorem 11.1.3 We want to prove that when Xn is a simple random walk, with initial fortune a and probability p of winning each bet, then the probability P 0 will ever hit 0 is given by 0 that the walk . By continuity of probabilities, we see that P 0 lim c P 0 c lim c 1 P c 0 . Hence, if p Now, if p 1 2, then P 0 1 2, then limc 1 a c 1. P 0 lim . If p then q p 1 2 then q p 1, so limc 1, so limc q p c q p c 0, and P 0 , and P 0 q p a. 1 If p 1 2 672 Section 11.7: Further Proofs Proof of Theorem 11.3.3 We want to prove that the Metropolis–Hastings algorithm results in a Markov chain X0 X1 X2 i as a stationary distribution. which has We shall prove that the resulting Markov chain is reversible with respect to i , i.e., that i P Xn 1 j Xn i j P Xn 1 i Xn j , (11.7.3) j for i tion for the chain. S It will then follow from Theorem 11.2.6 that is a stationary distribu­ i We thus have to prove (11.7.3). Now, (1
1.7.3) is clearly true if i j, so we can assume that i But if i j . j, and Xn j (i.e., we propose the state j, which we will do with probability pi j ). Also we accept this proposal (which we will do with probability i j ). Hence, i, then the only way we can have Xn 1 j is if Yn 1 P Xn 1 j Xn i qi j i j qi j min 1 j q ji i qi j min qi j j q j i i . It follows that i P Xn 1 j Xn Similarly, we compute that follows that (11.7.3) is true. min i j P Xn 1 i qi j i Xn j q ji j min j q ji i qi j It Proof of Theorem 11.5.1 We want to prove that when Y M (a) For t tributed with mean t. (b) For s t 0, the covariance t 0, the distribution of Y M : t t 0 is as defined earlier, then for large M: is approximately N 0 t , i.e., normally dis­ Cov Y M t Y M t is approximately equal to min s t . (c) For t 0, the distribution of the increment .e., normally distributed with mean 0 and variance t is approximately N 0 t and is approximately independent of Y M (d) Y M t Write r for the greatest integer not exceeding r, so that, e.g., 7 6 s is a continuous function of t. . s, 7. Then we is very close (formally, see that for large M, t is very close to t M M, so that Y M within O 1 M in probability) to t A Y M t M M 1 M Z1 Z2 Z t M . Chapter 11: Advanced Topic — Stochastic Processes 673 Now, A is equal to 1 M times the sum of t M different i.i.d. random variables, each having mean 0 and variance 1. It follows from the central limit theorem that A . This proves part (a). converges in distribution to the distribution N 0 t as M For part (b), note that also Y M s is very close to B Y M s M M 1 M Z1 Z2 Z s M . Because E Zi 0, we must have E A For simplicity, assume s t the case s E B 0, so that Cov A B t is similar. Then we have E AB . Cov A B E AB 1 M E Z1 Z2 Z s M Z1 Z2 Zi Zi Z j . Now, we have E Zi Z j 0 unless i be precisely s M terms in the sum for which i (since t s). Hence, Cov A B s M M , j, in which case E Zi Z j 1. There will j , namely, one for each value of i which converges to s as M . This proves part (b). Part (c) follows very similarly to part (a). Finally, part (d) follows because the was constructed in a continuous manner (as in Figure 11.5.1). function Y M t Proof of Theorem 11.6.1 We want to prove that for any t 0, the distribution of Nt is Poisson at . We first require a technical lemma. Lemma 11.7.4 Let gn t Gamma n a distribution. Then for n e at ant n 1 n 1, 1 ! be the density of the t 0 gn s ds e at at i i!. (11.7.4) i n 0, then both sides are 0. For other t, differentiating with respect to PROOF If t t, we see (setting j e at ai t i 1 i 1 ! e at a n 1 1t n 1 n t we see that (11.7.4) is satisfied for any n 1) that gn at at i i! i! e at ai 1t i j n 1 e at a j 1t j t 0 gn s ds. Because this is true for all t 0. ae at at i j! i n i! 0, Recall (see Example 2.4.16) that the Exponential distribution is the same as the Gamma 1 and Y distribution. Furthermore, (see Problem 2.9.15) if X Gamma 2 are independent, then X Y Gamma 1 Gamma . 2 1 674 Section 11.7: Further Proofs Now, in our case, we have Tn Exponential a R2 Gamma 1 a . It follows that Tn Gamma n a . Hence, the density of Tn is gn t e at ant n 1 n Rn, where Ri 1 !. R1 Now, the event that Nt the same as the event that Tn n (i.e., that the number of events by time t is at least n) is t (i.e., that the nth event occurs before time n). Hence, P Nt n P Tn t t 0 gn s ds Then by Lemma 11.7.4, P Nt n e at at i i ! i n (11.7.5) for any n 1. If n Using this, we see that 0 then both sides are 1, so in fact (11.7.5) holds for any n 0. 
P Nt j P Nt j P Nt j 1 e at at i i! e at at i i! e at at j j!. i j i j 1 It follows that Nt Poisson at , as claimed. Proof of Theorem 11.6.2 We want to prove that when 0 the distribution of Nti variables Nti Nti 1 for i t0 t1 Nti 1 is Poisson a ti t2 t3 td , then for i d, . Furthermore, the random 1 2 ti 1 1 d are independent. From the memoryless property of the exponential distributions (see Problem 2.4.14), ti 1, this will have no effect on it follows that regardless of the values of Ns for s the distribution of the increments Nt ti 1. That is, the process Nt Nti 1 for t starts fresh at each time ti 1, except from a different initial value Nti 1 instead of from N0 0. Hence, the distribution of Nti 1 u Nti 1 for u Nu and is independent of the values of Ns for s of Nu N0 already know that Nu as well. In particular, Nti independent of Ns : s Poisson au , it follows that Nti 1 u Poisson a ti ti 1 . The result follows. Nti 1 ti 1 0 is identical to the distribution ti 1. Because we Poisson au Nti 1 Nti 1 as well, with Nti Appendix A Mathematical Background To understand this book, it is necessary to know certain mathematical subjects listed below. Because it is assumed the student has already taken a course in calculus, topics such as derivatives, integrals, and infinite series are treated quite briey here. Multi­ variable integrals are treated in somewhat more detail. A.1 Derivatives From calculus, we know that the derivative of a function f is its instantaneous rate of change: f x d dx f x f x lim 0 h h h f x In particular, the reader should recall from calculus that d dx 5 0 d dx x 3 3x 2 d dx x n nx n 1 d dx ex ex d dx sin x cos x d dx cos x sin x etc. Hence, if f x x 3, then f x 3x 2 and, e.g., f 7 3 72 147. Derivatives respect addition and scalar multiplication, so if f and g are functions and C is a constant, then Thus, etc. d dx dx 5x 3 3x 2 7x 12 15x 2 6x 7 675 676 Appendix A: Mathematical Background Finally, derivatives satisfy a chain rule; if a function can be written as a composition of two other functions, as in f x g h x , then f x g h x h x . Thus, 5e5x 2x cos x 2 d dx e5x d dx sin x 2 d dx 2x 3x 2 etc. Higher­order derivatives are defined by f x d dx f x f x d dx f x etc. In general, the r th­order derivative f r x can be defined inductively by f 0 x f x and f r x f r 1 x d dx for r 24x, f 4 x 1. Thus, if f x 24, etc. x 4, then f x 4x 3, f x f 2 x 12x 2, f 3 x Derivatives are used often in this text. A.2 Integrals If f is a function, and a [a b], written b are constants, then the integral of f over the interval b a f x dx represents adding up the values f x , multiplied by the widths of small intervals around b x. That is, and where xi b a f x dx xi 1 where a xi 1 is small. d i 1 f xi xd x1 x0 xi More formally, we can set xi a i d b a and let d , to get a formal definition of integral as b a f x dx lim To compute b a f x dx in this manner each time would be tedious. Fortunately, the fundamental theorem of calculus provides a much easier way to compute integrals. It says that if F x is any function with F x Hence, f x , then b a f x dx F a F b b a 3x 2 dx b a x 2 dx b a x n dx b3 a3 a3 1 3 b3 1 n 1 bn 1 an 1 Appendix A.3: Infinite Series 677 and b a cos x dx b a sin x dx b a e5x dx sin b sin a cos b cos a 1 5 e5b e5a A.3 Infinite Series If a1 a2 a3 (or series) is an infinite sequence of numbers, we can consider the infinite sum ai a1 a2 a3 i 1 i 1 1 1 4 1 8 Formally, i 1 ai For example, clearly limN because we see that 1 2 1 4 1 8 1 2 1 16 N i 1 ai . 
This sum may be finite or infinite. 1 1 1 1 On the other hand, 1 16 1 2i i 1 1 2n 2n 1 2n lim N N i 1 1 2i lim N 2N 1 2N 1 More generally, we compute that ai i 1 a 1 a whenever a 1 One particularly important kind of infinite series is a Taylor series. If f is a func­ tion, then its Taylor series is given by f 0 x f 0 1 2! x 2 f 0 1 3! x 3 f 3 0 1 i! i 0 x i f i 0 (Here i! 3! thus, 6, 4! i i 1 i 2, 2 1 stands for i factorial, with 0! 24, etc.) Usually, f x will be exactly equal to its Taylor series expansion, 1, 2! 1! 2 sin x cos x ex e5x 5x ! x 3 3! 5x 2 2! x 4 4! 5x 3 3! 5x 4 4! 3x 2 etc. If f x is a polynomial (e.g., f x of f x is precisely the same function as f x itself. x 3 2x 6), then the Taylor series 678 Appendix A: Mathematical Background A.4 Matrix Multiplication A matrix is any r s collection of numbers, e.g., 17 9 etc. Matrices can be multiplied, as follows. If A is an r trix, then the product AB is an r a sum of products. For example, with A and B as above, if M AB, then j entry is given by u matrix whose i s matrix, and B is an s u ma­ s k 1 Ai k Bk 18 84 16 1 42 10 as, for example, the 2 1 entry of M equals 5 3 2 7 1 Matrix multiplication turns out to be surprisingly useful, and it is used in various places in this book. A.5 Partial Derivatives Suppose f is a function of two variables, as in f x y partial derivative of f with respect to x, writing 3x 2 y3 Then we can take a by varying x while keeping y fixed. That is, f x y x f x y x lim This can be computed simply by regarding y as a constant value. For the example above, Similarly, by regarding x as constant and varying y, we see that 3x 2 y3 6x y3 x Other examples include 3x 2 y3 9x 2 y2 y 18ex y x 6 y8 sin y3 18yex y 6x 5 y8 18ex y x 6 y8 sin y3 18xex y 8x 6 y7 3y2 sin y3 x y Appendix A.6: Multivariable Integrals 679 etc. If f is a function of three or more variables, then partial derivatives may similarly be taken. Thus, x 2 y4z6 2x y4z6 x 2 y4z6 4x 2 y3z6 y x 2 y4z6 6x 2 y4z5 z x etc. A.6 Multivariable Integrals If f is a function of two or more variables, we can still compute integrals of f . How­ ever, instead of taking integrals over an interval [a b], we must take integrals over higher­dimensional regions. f x y 7 y x 2 y3, and let R be the rectangular region given by [0 1] For example, let x [5 7] What is 1 5 R 0 f x y dx dy R the integral of f over the region R? In geometrical terms, it is the volume under the graph of f (and this is a surface) over the region R But how do we compute this? Well, if y is constant, we know that 1 0 f x y dx 1 0 x 2 y3 dx y3 1 3 (A.6.1) This corresponds to adding up the values of f along one “strip” of the region R, where y is constant. In Figure A.6.1, we show the region on integration R [5 7] The value of (A.6.1), when y 79 443 this is the area under the curve x 2 6 2 3 over the line [0 1] 6 2 is 6 2 3 3 6 2 [0 1] y 7 5 y = 6.2 1 x Figure A.6.1: Plot of the region of integration (shaded) R li
ne at y 6 2. [0 1] [5 7] together with the 680 Appendix A: Mathematical Background If we then add up the values of the areas over these strips along all different possible y values, then we obtain the overall integral or volume, as follows: f x y dx dy dx dy 7 1 5 0 x 2 y3 dx dy y3 dy 1 3 1 3 1 4 74 54 148 So the volume under the the graph of f and over the region R is given by 148. Note that we can also compute this integral by integrating first y and then x, and we get the same answer: f x y dx dy R 7 5 f x y dy dx x 2 74 54 dx y3 dy dx 5 74 54 148 0 1 4 1 3 Nonrectangular Regions If the region R is not a rectangle, then the computation is more complicated. The idea is that, for each value of x, we integrate y over only those values for which the point x y is inside R. For example, suppose that R is the triangle given by R 6 In Figure A.6.2, we have plotted this region together with the slices at x 3 x and y 3 2 We use the x­slices to determine the limits on y for fixed x when we integrate out y first; we use the y­slices to determine the limits on x for fixed y when we integrate out x first. x y : 0 2y y y = 3/2 2y = x x = 3 x = 6 x Figure A.6.2: The integration region (shaded) R the slices at x 3 and y 3 2. x y : 0 2y x 6 together with Appendix A.6: Multivariable Integrals 681 Then x can take any value between 0 and 6. However, once we know x, then y can only take values between 0 and x 2. Hence, if f x y x y x 6 y8, then f x y dx dy R x 2 f x y dy dx 6 x 2 0 0 x y x 6 y8 dy dx x 2 2 02 x 6 1 9 x 2 9 09 dx 4608 x 15 dx 1 4608 1 16 616 016 04 107 64 0 1 1 8 4 3 8264 Once again, we can compute the same integral in the opposite order, by integrating first x and then y. In this case, y can take any value between 0 and 3. Then, for a given value of y, we see that x can take values between 0 and 2y. Hence, f x y dx dy R 3 0 6 2y f x y dx dy 3 0 6 2y x y x 6 y8 dx dy We leave it as an exercise for the reader to finish this integral, and see that the same answer as above is obtained. Functions of three or more variables can also be integrated over regions of the corresponding dimension three or higher. For simplicity, we do not emphasize such higher­order integrals in this book. Appendix B Computations We briey describe two computer packages that can be used for all the computations carried out in the text. We recommend that students familiarize themselves with at least one of these. The description of R is quite complete, at least for the computations based on material in this text, whereas another reference is required to learn Minitab. B.1 Using R R is a free statistical software package that can be downloaded and installed on your computer (see http://cran.r­project.org/). A free manual is also available at this site. Once you have R installed on your system, you can invoke it by clicking on the relevant icon (or, on Unix systems, simply typing “R”). You then see a window, called ’ after which you type com­ the R Console that contains some text and a prompt ‘ mands. Commands are separated by new lines or ‘ ; ’. Output from commands is also displayed in this window, unless it is purposefully directed elsewhere. To quit R, type q() after the prompt. To learn about anything in R, a convenient resource is to use Help on the menu bar available at the top of the R window. Alternatively, type ?name after the prompt (and press enter) to display information about name, e.g., ?q brings up a page with information about the terminate command q. 
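For example, a short first session might look like the following, where the lines beginning with [1] are R's printed output and the particular commands are arbitrary illustrations:

2 + 3; sqrt(2)     # two commands on one line, separated by ;
[1] 5
[1] 1.414214
?sqrt              # bring up the help page for the sqrt function
q()                # quit R (you will be asked whether to save the workspace)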
Basic Operations and Functions A basic command evaluates an expression, such as 2+3 [1] 5 which adds 2 and 3 and produces the answer 5. Alternatively, we could assign the value of the expression to a variable such as a ­ 2 where ­ (less than followed by minus) assigns the value 2 to a variable called a. Alternatively, = can be used for assignment as in a = 2, but we will use ­. We 683 684 Appendix B: Computations can then verify this assignment by simply typing a and hitting return, which causes the value of a to be printed. a [1] 2 Note that R is case sensitive, so A would be a different variable than a. There are some restrictions in choosing names for variables and vectors, but you won’t go wrong if you always start the name with a letter. We can assign the values in a vector using the concatenate function c() such as ­ c(1,1,1,3,4,5) b b [1] 1 1 1 3 4 5 which creates a vector called b with six values in it. We can access the ith entry in a vector b by referring to it as b[i]. For example, b[3] [1] 1 prints the third entry in b, namely, 1. Alternatively, we can use the scan command to input data. For example, b ­ scan() 1: 1 1 1 3 4 5 7: Read 6 items b [1] 1 1 1 3 4 5 accomplishes the same assignment. Note that with the scan command, we simply type in the data and terminate data input by entering a blank line. We can also use scan to read data in from a file, and we refer the reader to ?scan for this. Sometimes we want vectors whose entries are in some pattern. We can often use the rep function for this. For example, x ­ rep(1,20) creates a vector of 20 ones. More complicated patterns can be obtained, and we refer the reader to ?rep for this. Basic arithmetic can be carried out on variables and vectors using + (addition), ­ (subtraction), * (multiplication), / (division), and ^ (exponentiation). These operations are carried out componentwise. For example, we could multiply each component of b by itself via b*b [1] 1 1 1 9 16 25 or multiply each element of b by 2 as in 2*b [1] 2 2 2 6 8 10 which accomplishes this. Appendix B.1: Using R 685 There are various functions available in R, such as abs(x) (calculates the absolute value of x), log(x) (calculates the natural logarithm of x), exp(x) (calculates e raised to the power x), sin(x), cos(x), tan(x) (which calculate the trigonomet­ ric functions), sqrt(x) (which calculates the square root of x), ceiling(x), and floor(x) (calculate the ceiling and oor of x). When such a function is applied to a vector x, it returns a vector of the same length, with the function applied to each element of the original vector. There are numerous special functions available in R, but two important ones are gamma(x), which returns the gamma function applied to x, and lgamma(x), which returns the natural logarithm of the gamma function. There are also functions that return a single value when applied to a vector. For example, min(x) and max(x) return, respectively, the smallest and largest elements in x; length(x) gives the number of elements in x; and sum(x) gives the sum of the values in x. R also operates on logical quantities TRUE (or T for true) and FALSE (or F for false). Logical values are generated by conditions that are either true or false. For example, ­ c(­3,4,2,­1,­5) ­ a 0 a b b [1] FALSE TRUE TRUE FALSE FALSE compares each element of the vector a with 0, returning TRUE when it is greater than 0 and FALSE otherwise, and these logical values are stored in the vector b. 
The follow­ ing logical operators can be used: , == (for equality), != (for inequality) as well as & (for conjunction), (for disjunction) and ! (for negation). For example, if we create a logical vector c as follows: , =, =, ­ c(T,T,T,T,T) c b&c [1] FALSE TRUE TRUE FALSE FALSE b c [1] TRUE TRUE TRUE TRUE TRUE then an element of b&c is TRUE when both corresponding elements of b and c are TRUE, while an element of b c is TRUE when at least one of the corresponding ele­ ments of b and c is TRUE. Sometimes we may have variables that take character values. While it is always possible to code these values as numbers, there is no need to do this, as R can also handle character­valued variables. For example, the commands ­ c(’a’,’b’) A A [1] "a" "b" create a character vector A, containing two values a and b, and then we print out this vector. Note that we included the character values in single quotes when doing the assignment. 686 Appendix B: Computations Sometimes data values are missing and so are listed as NA (not available). Opera­ tions on missing values create missing values. Also, an impossible operation, such as 0/0, produces NaN (not a number). Various objects can be created during an R session. To see those created so far in your session, use the command ls(). You can remove any objects in your workspace using the rm command. For example, rm(x) removes the vector x. Probability Functions R has a number of built­in functions for evaluation of the cdf, the inverse cdf, the density or probability function, and generating random samples for the common dis­ tributions we encounter in probability and statistics. These are distinguished by prefix and base distribution names. Some of the distribution names are given in the following table. Distribution R name and arguments beta binomial chi­squared exponential F gamma geometric beta( ,a,b) binom( ,n,p) chisq( ,df) exp( ,lambda) f( ,df1,df2) gamma( ,alpha,lambda) geom( ,p) Distribution hypergeometric negative binomial normal Poisson t uniform R name and arguments hyper( ,N,M,n) nbinom( ,k,p) norm( ,mu,sigma) pois( ,lambda) t( ,df) unif( ,min,max) As usual, one has to be careful with the gamma distribution. The safest path is to include another argument with the distribution to indicate whether or not lambda x or a scale parameter (density is a rate parameter (density is is So gamma( ,alpha,rate=lambda) indicates that lambda is a rate parameter, and gamma( ,alpha,scale=lambda) indicates that it is a scale parameter. 1e x 1e x x 1 1 The argument given by is specified according to what purpose the command using the distribution name has. To obtain the cdf of a distribution, precede the name by p, and then is the value at which you want to evaluate the cdf. To obtain the inverse cdf of a distribution, precede the name by q, and then is the value at which you want to evaluate the inverse cdf. To obtain the density or probability function, precede the name by d, and then is the value at which you want to evaluate the density or probability func
tion. To obtain random samples, precede the name by r, and then is the size of the random sample you want to generate. For example, ­ rnorm(4,1,2) x x [1] ­0.2462307 2.7992913 4.7541085 3.3169241 generates a sample of 4 from the N 1 22 distribution and assigns this to the vector x. The command Appendix B.1: Using R 687 dnorm(3.2,2,.5) [1] 0.04478906 evaluates the N 2 25 pdf at 3.2, while pnorm(3.2,2,.5) [1] 0.9918025 evaluates the N 2 0 25 cdf at 3.2, and qnorm(.025,2,.5) [1] 1.020018 gives the 0 025 quantile of the N 2 0 25 distribution. If we have data stored in a vector x, then we can sample values from x, with or without replacement, using the sample function. For example, sample(x,n,T) will generate a sample of n from x with replacement, while sample(x,n,F) will generate a sample of n from x without replacement (note n must be no greater than length(x) in the latter case). Sometimes it is convenient to be able to repeat a simulation so the same random values are generated. For this, you can use the set.seed command. For example, set.seed(12345) establishes the seed as 12345. Tabulating Data The table command is available for tabulating data. For example, table(x) re­ turns a table containing a list of the unique values found in x and their frequency of occurrence in x. This table can be assigned to a variable via y ­ table(x) for further analysis (see The Chi­Squared Test section on the next page). If x and y are vectors of the same length, then table(x,y) produces a cross­ tabulation, i.e., counts the number of times each possible value of x y is obtained, where x can be any of the values taken in x and y can be any of the values taken in y. Plotting Data R has a number of commands available for plotting data. For example, suppose we have a sample of size n stored in the vector x. The command hist(x) will provide a frequency histogram of the data where the cutpoints are chosen automatically by R. We can add optional arguments to hist. The following are some of the arguments available. breaks — A vector containing the cutpoints. freq — A logical variable; when freq=T (the default), a frequency histogram is obtained, and when freq=F, a density histogram is obtained. For example, hist(x,breaks=c(­10,­5,­2,0,2,5,10),freq=F) will plot a density histogram with cutpoints 10 2 0 2 5 10 where we have been care­ 5 ful to ensure that min(x) 10 and max(x) 10. 688 Appendix B: Computations If y is another vector of the same length as x, then we can produce a scatter plot of y against x via the command plot(x,y). The command plot(x,y,type="l") provides a scatter plot of y against x, but now the points are joined by lines. The command plot(x) plots the values in x against their index. The plot(ecdf(x)) command plots the empirical cdf of the data in x. A boxplot of the data in x is obtained via the boxplot(x) command. Side­by­ side boxplots of the data in x, y, z, etc., can be obtained via boxplot(x,y,z). A normal probability plot of the values in x can be obtained using the command qqnorm(x). A barplot can be obtained using the barplot command. For example, ­ c(1,2,3) h barplot(h) produces a barplot with 3 bars of heights 1, 2, and 3. There are many other aspects to plotting in R that allow the user considerable con­ trol over the look of plots. We refer the reader to the manual for more discussion of these. Statistical Inference R has a powerful approach to fitting and making inference about models. Models are specified by the symbol ~. 
We do not discuss this fully here but only indicate how to use this to handle the simple and multiple linear regression models (where the response and the predictors are all quantitative), the one­ and two­factor models (where the response is quantitative but the predictors are categorical), and the logistic regression model (where the response is categorical but the predictors are quantitative). Suppose, then, that we have a vector y containing the response values. Basic Statistics The function mean(y) returns the mean of the values in y, var(y) returns the sample variance of the values in y, and sd(y) gives the sample standard devia­ tion. The command median(y)returns the median of y, while quantile(y,p) returns the sample quantiles as specified in the vector of probabilities p. For example, quantile(y,c(.25,.5,.75)) returns the median and the first and third quan­ tiles. The function sort(y) returns a vector with the values in y sorted from smallest to largest, and rank(y) gives the ranks of the values in y. The ­Test For the data in y, we can use the command t.test(y,mu=1,alternative="two.sided",conf.level=.95) to carry out a t­test. This computes the P­value for testing H0 : 0 95­confidence interval for 1 and forms a The Chi­Squared Test Suppose y contains a vector of counts for k cells and prob contains hypothesized probabilities for these cells. Then the command Appendix B.1: Using R 689 chisq.test(y,p=prob) carries out the chi­squared test to assess this hypothesis. Note that y could also corre­ spond to a one­dimensional table. If x and y are two vectors of the same length, then chisq.test(x,y) carries out a chi­squared test for independence on the table formed by cross­tabulating the entries in x and y. If we first create this cross­tabulation in the table t using the table function, then chisq.test(t) carries out this test. Simple Linear Regression Suppose we have a single predictor with values in the vector x. The simple linear regression model E y x 2x is then specified in R by y~x. We refer to y~x as a model formula, and read this as “y is modelled as a linear model involving x.” To carry out the fitting (which we have done here for a specific set of data), we use the fitting linear models command lm, as follows. The command 1 regexamp ­ lm(y~x) carries out the computations for fitting and inference about this model and assigns the result to a structure called regexamp. Any other valid name could have been used for this structure. We can now use various R functions to pick off various items of interest. For example, summary(regexamp) Call: lm(formula = y~x) Residuals: Min 1Q Median 3Q Max ­4.2211 ­2.1163 0.3248 1.7255 4.3323 Coefficients: Estimate Std. Error t value Pr( t ) (Intercept) 6.5228 1.7531 x 1.2176 0.1016 5.357 17.248 4.31e­05 *** 1.22e­12 *** ­­ Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 2.621 on 18 degrees of freedom Multiple R­squared: 0.9429, Adjusted R­squared: 0.9398 F­statistic: 297.5 on 1 and 18 DF, p­value: 1.219e­12 uses the summary function to give us all the information we need. For example, the fitted line is given by 6 5228 0 has a P­value of 1 7531x The test of H0 : 10 12 so we have strong evidence against H0 Furthermore, the R2 is given 1 22 by 94.29%. Individual items can be accessed via various R functions and we refer the reader to ?lm for this. 
2 690 Appendix B: Computations Multiple Linear Regression If we have two quantitative predictors in the vectors x1 and x2, then we can proceed just as with simple linear regression to fit the linear regression model E y x1 x2 1 2x1 2x2. For example, the commands regex summary(regex) ­ lm(y~x1+x2) fit the above linear model, assign the results of this to the structure regex, and then the summary function prints out (suppressed here) all the relevant quantities. We read y~x1+x2 as, “y is modelled as a linear model involving x1 and x2.” In particular, the F­statistic, and its associated P­value, is obtained for testing H0 : 0 2 3 xl for l This generalizes immediately to linear regression models with k quantitative pre­ xk. Furthermore, suppose we want to test that the model only involves dictors x1 x1 k We use lm to fit the model for all k predictors, assign this to regex, and also use lm to fit the model that only involves l predictors and assign this to regex1. Then the command anova(regex,regex1) will output the F­ statistics, and its P­value, for testing H0 : 0 l 1 k One­ and Two­Factor ANOVA Suppose now that A denotes a categorical predictor taking two levels a1 and a2. Note that the values of A may be character in value rather than numeric, e.g., x is a character vector containing the values a1 and a2, used to denote at which level the correspond­ ing value of y was observed. In either case, we need to make this into a factor A, via the command A ­ factor(x) so that A can be used in the analysis. Then the command aov(y~A) produces the one­way ANOVA table. Of course, aov also handles factors with more than two levels. To produce the cell means, use the command tapply(y,A,mean). b5 If this is the factor B Suppose there is a second factor B taking 5 levels b1 in R, then the command aov(y~A+B+A:B) produces the two­way ANOVA for testing for interactions between factors A and B. To produce the cell means, use the command tapply(y,list(A,B),mean). The command aov(y~A+B) produces the ANOVA table, assuming that there are no inter­ actions. Logistic Regression Suppose we have binary data stored in the vector y, and x contains the corresponding values of a quantitative predictor. Then we can use the generalized linear model com­ mand glm to fit the logistic regression model P Y The commands exp 1 exp 1 1 x 2x 2x 1 Appendix B.1: Using R 691 logreg summary(logreg) ­ glm(y~x,family=binomial) fit the logistic regression model, assign the results to logreg, and then the summary their standard command outputs this material. This gives us the estimates of the errors, and P­values for testing that the i 0 i Control Statements and R Programs A basic control statement is of the form if (expr1) expr2 else expr3, where expr1 takes a logical value, expr2 is executed if expr1 is T, and expr3 is executed if expr1 is F. For example, if x is a variable taking value 2, then if (x 0) {y ­ ­1} else {y ­ 1} results in y being assigned the value be dropped. 1. Note that the else part of the statement can The command for (name in expr) expr2 executes expr2 for each value of name in expr1. For example, for (
i in 1:10) print(i) prints the value of the variable i as i is sequentially assigned values in 1 2 Note that m:n is a shorthand for the sequence m m 1 example, n in R. As another 10 for (i in 1:20) y[i] ­ 2^i creates a vector y with 20 entries, where the ith element of y equals 2i The break terminates a loop, perhaps based on some condition holding, while next halts the processing of the current iteration and advances the looping index. Both break and next apply only to the innermost of nested loops. Commands in R can be grouped by placing them within braces {expr1; expr2; ...}. The commands within the braces are executed as a unit. For example, for (i in 1:20) {print(i); y[i] ­ 2^i}; print(y[i])} causes i to be printed, y[i] to be assigned, and y[i] to be printed, all within a for loop. Often when a computation is complicated, such as one that involves looping, it is better to put all the R commands in a single file and then execute the file in batch mode. For example, suppose you have a file prog.R containing R code. Then the command source("pathname/prog.R") causes all the commands in the file to be executed. It is often convenient to put comments in R programs to explain what the lines of code are doing. A comment line is preceded by # and of course it is not executed. User­Defined Functions R also allows user­defined functions. The syntax of a function definition is as follows. 692 Appendix B: Computations function name ­ function(arguments) { function body; return(return value); } For example, the following function computes the sample coefficient of variation of the data x. coef_var ­ function(x) { result return(result); ­ sd(x)/mean(x); } Then if we want to subsequently compute the coefficient of variation of data y, we simply type coef_var(y). Arrays and Lists A vector of length m can also be thought of as a one­dimensional array of length m. p arrays, etc. If a is a R can handle multidimensional arrays, e.g., m n m n three­dimensional array, then a[i,j,k] refers to the entry in the i j k ­th position of the array. There are various operations that can be carried out on arrays and we refer the reader to the manual for these. Later in this manual, we will discuss the special case of two­dimensional arrays, which are also known as matrices. For now, we just think of arrays as objects in which we store data. A very general data structure in R is given by a list. A list is similar to an array, with several important differences. 1. Any entry in an array is referred to by its index. But any entry in a list may be referred to by a character name. For example, the fitted regression coeffi­ cients are referred to by regex$coefficients after fitting the linear model x1 + x2). The dollar mark ($) is the entry reference regex operator, that is, varname$entname indicates the “entname” entry in the list “varname.” ­ lm(y 2. While an array stores only the same type of data, a list can store any R objects. For example, the coefficients entry in a linear regression object is a nu­ meric vector, and the model entry is a list. 3. The reference operators are different: arr[i] refers to the ith entry in the array arr, and lst[[i]] refers to the i th entry in the list lst. Note that i can be the entry name, i.e., lst$entname and lst[[’entname’]] refer to the same data. Examples We now consider some examples relevant to particular sections or examples in the main text. To run any of these codes, you first have to define the functions. 
To do this, load Appendix B.1: Using R 693 the code using the source command. Arguments to the functions then need to be specified. Note that lines in the listings may be broken unnaturally and continue on the following line. EXAMPLE B.1.1 Bootstrapping in Example 6.4.2 The following R code generates bootstrap samples and calculates the median of each of these samples. To run this code, type y ­ bootstrap_median(m,x), where m is the number of bootstrap samples, x contains the original sample, and the medians of the resamples are stored in y. The statistic to be bootstrapped can be changed by substituting for median in the code. # # # # # # # # Example B.1.1 function name: bootstrap_median parameters: m x resample size original data return value: a vector of resampled medians description: resamples and stores its median bootstrap_median ­ function(m,x) { ­ length(x); n result for(i in 1:m) result[i] ­ rep(0,m); ­ median(sample(x,n,T)); return(result); } EXAMPLE B.1.2 Sampling from the Posterior in Example 7.3.1 The following R code generates a sample of from the joint posterior in Example 7.3.1. To run a simulation, type post ­ post_normal(m,x,alpha0,beta0,mu0,tau0square) where m is the Monte Carlo sample size and the remaining arguments are the hyperpa­ rameters of the prior. The result is a list called (in this case) post, where post$mu 2 respectively. For and post$sigmasq contain the generated values of example, and x ­ c(11.6714, 1.8957, 2.1228, 2.1286, 1.0751, 8.1631, 1.8236, 4.0362, 6.8513, 7.6461, 1.9020, 7.4899, 4.9233, 8.3223, 7.9486); post z ­ post_normal(10**4,x,2,1,4,2) ­ sqrt(post$sigmasq)/post$mu runs a simulation as in Example 7.3.1, with N 104 # # # # Example B.1.2 function name: post_normal parameters: m sample size 694 Appendix B: Computations data returned values: mu sampled mu sigmasq sampled sigmasquare rate parameter for 1/sigma^2 location parameter for mu description: samples from the posterior distribution in Example 7.3.1 x alpha0 shape parameter for 1/sigma^2 beta0 mu0 tau0square variance ratio parameter for mu # # # # # # # # # # # post_normal ­function(m,x,alpha0,beta0,mu0,tau0square){ # set the length of the data n # the shape and rate parameters of the posterior dist. # alpha_x = first parameter of the gamma dist. # alpha_x # beta_x = the rate parameter of the gamma dist. beta_x = (alpha0 + n/2) ­ alpha0 + n/2 ­ beta0 + (n­1)/2 * var(x) + n*(mean(x)­mu0)**2/ ­ length(x); 2/(1+n*tau0square); distribution = the mean parameter of the normal dist. ­ (mu0/tau0square+n*mean(x))/(n+1/tau0square); # mu_x mu_x # tausq_x = the variance ratio parameter of the normal # tausq_x # initialize the result result result$sigmasq result$mu ­ 1/rgamma(m,alpha_x,rate=beta_x); ­ rnorm(m,mu_x,sqrt(tausq_x * result$sigmasq)); ­ 1/(n+1/tau0square); ­ list(); return(result); } EXAMPLE B.1.3 Calculating the Estimates and Standard Errors in Example 7.3.1 Once we have a sample of values from the posterior distribution of stored in psi, we can calculate the interval given by the mean value of psi plus or minus 3 standard deviations as a measure of the accuracy of the estimation. # Example B.1.3 # set the data x ­ c(11.6714, 1.8957, 2.1228, 2.1286, 1.0751, 8.1631, 1.8236, 4.0362, 6.8513, 7.6461, 1.9020, 7.4899, 4.9233, 8.3223, 7.9486); post ­ post_normal(10**4,x,2,1,4,2); # compute the coefficient of variation Appendix B.1: Using R 695 ­ mean(psi ­ sqrt(psi_hat * (1­psi_hat))/sqrt(length(psi)); = .5); ­ sqrt(post$sigmasq)/post$mu; psi psi_hat psi_se # the interval cq cat("The three times s.e. 
interval is ", ­ 3 "[",psi_hat­cq*psi_se, ", ", psi_hat+cq*psi_se,"] n"); EXAMPLE B.1.4 Using the Gibbs Sampler in Example 7.3.2 To run this function, type post ­gibbs_normal(m,x,alpha0,beta0,lambda,mu0, tau0sq,burnin=0) as this creates a list called post, where post$mu and post$sigmasq contain the and 2 respectively. Note that the burnin argument is set to generated values of a nonnegative integer and indicates that we wish to discard the first burnin values of and 2 and retain the last m. The default value is burnin=0. m x alpha0 beta0 lambda mu0 tau0sq burnin # Example B.1.4 # # # # # # # # # # # # # # # # gibbs_normal function name: gibbs_normal parameters the size of posterior sample data shape parameter for 1/sigma^2 rate parameter for 1/sigma^2 degree of freedom of Student’s t­dist. location parameter for mu scale parameter for mu size of burn in. the default value is 0. returnrd values mu sigmasq sampled sigma^2’s sampled mu’s description: samples from the posterior in Ex. 7.3.2 ­ function(m,x,alpha0,beta0,lambda,mu0, tau0sq,burnin=0) { ­ list(); # initialize the result result result$sigmasq ­ result$mu # set the initial parameter ­ mean(x); mu ­ var(x); sigmasq ­ length(x); n # set parameters ­ rep(0,m); 696 Appendix B: Computations ­ n/2 + alpha0 + 1/2; alpha_x # loop for(i in (1­burnin):m) { # update v_i’s v ­ rgamma(n,(lambda+1)/2,rate=((x­mu)**2/ # update sigma­square beta_x ­(sum(v*(x­mu)**2)/lambda+(mu­mu0)**2/ sigmasq/lambda+1)/2); tau0sq)/2+beta0; sigmasq ­ 1/rgamma(1,alpha_x,rate=beta_x); # update mu r mu ­ 1/(sum(v)/lambda+1/tau0sq); ­ rnorm(1,r*(sum(v*x)/lambda+mu0/tau0sq), sqrt(r*sigmasq)); # burnin check if(i result$mu[i] result$sigmasq[i] 1) next; ­ mu; ­ sigmasq; } result$psi return(result); ­ sqrt(result$sigmasq)/result$mu; } EXAMPLE B.1.5 Batching in Example 7.3.2 The following R code divides a series of data into batches and calculates the batch means. To run the code, type y ­batching(k,x) to place the consecutive batch means of size k, of the data in the vector x, in the vector y. k x size of each batch data return value: Example B.1.5 function name: batching parameters: # # # # # # # # # # batching m result for(i in 1:m) result[i] return(result); ­ floor(length(x)/k); ­ rep(0,m); ­ function(k,x) { } an array of the averages of each batch description: this function separates the data x into floor(length(x)/k) batches and returns the array of the averages of each batch ­ mean(x[(i­1)*k+(1:k)]); Appendix B.1: Using R 697 EXAMPLE B.1.6 Simulating a Sample from the Distribution of the Discrepancy Sta­ tistic in Example 9.1.2 The following R code generates a sample from the discrepancy statistic specified in Example 9.1.2. To generate the sample, type y ­discrepancy(m,n) to place a sample of size m in y, where n is the size of the original data set. This code can be easily modified to generate samples from other discrepancy statistics. Example B.1.6 function name: discrepancy parameters: resample siz
e size of data return value: an array of m discrepancies m n # # # # # # # # # discrepancy ­ function(m,n) { description: this function generates m discrepancies when the data size is n result for(i in 1:m) { ­ rep(0,m); x xbar r result[i] ­ rnorm(n); ­ mean(x); ­ (x­xbar)/sqrt((sum((x­xbar)**2))); ­ ­sum(log(r**2)); } return(result/n); } EXAMPLE B.1.7 Generating from a Dirichlet Distribution in Example 10.2.3 The following R code generates a sample from a Dirichlet( 1 4) distribution. To generate from this distribution, first assign values to the vector alpha and then type ddirichlet(n,alpha), where n is the sample size. 3 2 n alpha vector(alpha1,...,alphak) sample size return value: Example B.1.7 function name: ddirichlet parameters: # # # # # # # # # ddirichlet k ­ matrix(0,n,k); result for(i in 1:k) result[,i] for(i in 1:n) result[i,] ­ length(alpha); a (n x k) matrix. rows are i.i.d. samples description: this function generates n random samples from Dirichlet(alpha1,...,alphak) distribution ­ function(n,alpha) { ­ rgamma(n,alpha[i]); ­ result[i,] / sum(result[i,]); Appendix B: Computations 698 } return(result); Matrices A matrix can be thought of as a collection of data values with two subscripts or as a rectangular array of data. So if a is a matrix, then a[i,j] is the i j ­th element in a. Note that a[i,] refers to the ith row of a and a[,j] refers to the j th column of a. If a matrix has m rows and n columns, then it is an m n matrix, and m and n are referred to as the dimensions of the matrix. Perhaps the simplest way to create matrices is with cbind and rbind commands. For example, x ­c(1,2,3) y ­c(4,5,6) a ­cbind(x,y) a x y [1,] 1 4 [2,] 2 5 [3,] 3 6 creates the vectors x and y, and the cbind command takes x as the first column and y as the second column of the newly created 3 2 matrix a. Note that in the printout of a, the columns are still labelled x and y, although we can still refer to these as a[,1] and a[,2]. We can remove these column names via the command colnames(a) ­NULL. Similarly, the rbind command will treat vector arguments as the rows of a matrix. To determine the number of rows and columns of a matrix a, we can use the nrow(a) and ncol(a) commands. We can also create a diagonal matrix using the diag command. If x is an n­dimensional vector, then diag(x) is an n n matrix with the entries in x along the diagonal and 0’s elsewhere. If a is an m n matrix, then diag(a) is the vector with entries taken from the main diagonal of a. To create an n n identity matrix, use diag(n). There are a number of operations that can be carried out on matrices. If matrices a and b are m n then a+b is the m n matrix formed by adding the matrices componentwise. The transpose of a is the n m matrix t(a), with i th row equal to the ith column of a. If c is a number, then c*a is the m n matrix formed by multiplying each element of a by c. If a is m n and b is n p then a%*%b is the p matrix product (Appendix A.4) of a and b. A numeric vector is treated as a m column vector in matrix multiplication. Note that a*b is also defined when a and b are of the same dimension, but this is the componentwise product of the two matrices, which is quite different from the matrix product. If a is an m m matrix, then the inverse of a is obtained as solve(a). The solve command will return an error if the matrix does not have an inverse. If a is a square matrix, then det(a) computes the determinant of a. We now consider an important application. 
Appendix B.2: Using Minitab 699 EXAMPLE B.1.8 Fitting Regression Models Suppose the n­dimensional vector y corresponds to the response vector and the n k matrix V corresponds to the design matrix when we are fitting a linear regression model is given by b as computed in given by E y V The least­squares estimate of V b ­solve(t(V)%*%V)%*%t(V)%*%y with the vector of predicted values p and residuals r given by p ­V%*%b r ­y­p with squared lengths slp ­t(p)%*%p slr ­t(r)%*%r where slp is the squared length of p and slr is the squared length of r. Note that the matrix solve(t(V)%*%V) is used for forming confidence intervals and tests for the individual i Virtually all the computations involved in fitting and inference for the linear regression matrix can be carried out using matrix computations in R like the ones we have illustrated. Packages There are many packages that have been written to extend the capability of basic R. It is very likely that if you have a data analysis need that cannot be met with R, then you can find a freely available package to add. We refer the reader to ?install.packages and ?library for more on this. B.2 Using Minitab All the computations found in this text were carried out using Minitab. This statistical software package is very easy to learn and use. Other packages such as SAS or R (see Section B.1) could also be used for this purpose. Most of the computations were performed using Minitab like a calculator, i.e., data were entered and then a number of Minitab commands were accessed to obtain the quantities desired. No programming is required for these computations. There were a few computations, however, that did involve a bit of programming. Typically, this was a computation in which numerous operations had to be performed many times, and so looping was desirable. In each such case, we have recorded here the Minitab code that we used for these computations. As the following examples show, these programs were never very involved. Students can use these programs as templates for writing their own Minitab pro­ grams. Actually, the language is so simple that we feel that anyone using another language for programming can read these programs and use them as templates in the same way. Simply think of the symbols c1, c2, etc. as arrays where we address the ith element in the array c1 by c1(i). Furthermore, there are constants k1, k2, etc. 700 Appendix B: Computations A Minitab program is called a macro and must start with the statement gmacro and end with the statement endmacro. The first statement after gmacro gives a name to the program. Comments in a program, put there for explanatory purposes, start with note. If the file containing the program is called prog.txt and this is stored in the root directory of a disk drive called c, then the Minitab command MTB %c:/prog.txt will run the program. Any output will either be printed in the Session window (if you have used a print command) or stored in the Minitab worksheet. More details on Minitab can be found by using Help in the program. We provide some examples of Minitab macros used in the text. EXAMPLE B.2.1 Bootstrapping in Example 6.4.2 The following Minitab code generates 1000 bootstrap samples from the data in c1, calculates the median of each of these samples, and then calculates the sample variance of these medians. 
overwritten gmacro bootstrapping base 34256734 note ­ original sample is stored in c1 note ­ bootstrap sample is placed in c2 with each one note note ­ medians of bootstrap samples are stored in c3 note ­ k1 = size of data set (and bootstrap samples) let k1=15 do k2=1:1000 sample 15 c1 c2; replace. let c3(k2)=median(c2) enddo note ­ k3 equals (6.4.5) let k3=(stdev(c3))**2 print k3 endmacro EXAMPLE B.2.2 Sampling from the Posterior in Example 7.3.1 The following Minitab code generates a sample of 104 from the joint posterior in Ex­ density takes the form ample 7.3.1. Note that in Minitab software, the Gamma distribution, as defined So to generate from a Gamma 1e x x in this book, we must put the second shape parameter equal to 1 in Minitab. gmacro normalpost note ­ the base command sets the seed for the random note numbers Appendix B.2: Using Minitab 701 = (alpha_0 + n/2) base 34256734 note ­ the parameters of the posterior note ­ k1 = first parameter of the gamma distribution note let k1=9.5 note ­ k2 = 1/beta let k2=1/77.578 note ­ k3 = posterior mean of mu let k3=5.161 note ­ k4 = (n + 1/(tau_0 squared) )^(­1) let k4=1/15.5 note ­ main loop note ­ c3 contains generated value of sigma**2 note ­ c4 contains generated value of mu note ­ c5 contains generated value of coefficient of variation do k5=1:10000 random 1 c1; gamma k1 k2. let c3(k5)=1/c1(1) let k6=sqrt(k4/c1(1)) random 1 c2; normal k3 k6. let c4(k5)=c2(1) let c5(k5)=sqrt(c3(k5))/c4(k5) enddo endmacro EXAMPLE B.2.3 Calculating the Estimates and Standard Errors in Example 7.3.1 We have a sample of 104 values from the posterior distribution of stored in C5. The following computations use this sample to calculate an estimate of the posterior probability that 0 5 (k1), as well as to calculate the standard error of this estimate (k2), the estimate minus three times its standard error (k3), and the estimate plus three times its standard error (k4). let c6=c5 le .5 let k1=mean(c6) let k2=sqrt(k1*(1­k1))/sqrt(10000) let k3=k1­3*k2 let k4=k1+3*k2 print k1 k2 k3 k4 702 Appendix B: Computations EXAMPLE B.2.4 Using the Gibbs Sampler in Example 7.3.2 The following Minitab code generates a chain of length 104 values using the Gibbs sampler described in Example 7.3.2. gmacro gibbs base 34256734 note ­ data sample is stored in c1 note ­ starting value for mu. let k1=mean(c1) note ­ starting value for sigma**2 let k2=stdev(c1) let k2=k2**2 note ­ lambda let k3=3 note ­ sample size let k4=15 note ­ n/2 + alpha_0 + 1/2 let k5=k4/2 +2+.5 note ­ mu_0 let k6=4 note ­ tau_0**2 let k7=2 note ­ beta_0 let k8=1 let k9=(k3/2+.5) note ­ main loop do k100=1:10000 note ­ generate the nu_i in c10 do k111=1:15 let k10=.5*(((c1(k111)­k1)**2)/(k2*k3) +1) let k10=1/k10 random 1 c2; gamma k9 k10. let c10(k111)=c2(1) enddo note ­ generate sigma**2 in c20 let c11=c10*((c1­k1)**2) let k11=.5*sum(c11)/k3+.5*((k1­k6)**2)/k7 +k8 let k11=1/k11 random 1 c2; gamma k5 k11. let c20(k100)=1/c2(1) let k2=1/c2(1) note ­ generate mu in c21 let k13=1/(sum(c10)/k3 +1/k7) Appendix B.2: Using Minitab 703 let c11=c1*c10/k3 let k14=sum(c11)+k6/k7 let k14=k13*k14 let k13=sqrt(k13*k2) random 1 c2; normal k14 k
13. let c21(k100)=c2(1) let k1=c2(1) enddo endmacro EXAMPLE B.2.5 Batching in Example 7.3.2 The following Minitab code divides the generated sample, obtained via the Gibbs sam­ pling code for Example 7.3.2, into batches, and calculates the batch means. gmacro batching note ­ k2= batch size let k2=40 note ­ k4 holds the batch sums note ­ c1 contains the data to be batched (10000 data values) note ­ c2 will contain the batch means (250 batch means) do k10=1:10000/40 let k4=0 do k20=0:39 let k3=c1(k10+k20) let k4=k4+k3 enddo let k11=floor(k10/k2) +1 let c2(k11)=k4/k2 enddo endmacro EXAMPLE B.2.6 Simulating a Sample from the Distribution of the Discrepancy Sta­ tistic in Example 9.1.2 The following code generates a sample from the discrepancy statistic specified in Ex­ ample 9.1.2. gmacro goodnessoffit base 34256734 note ­ generated sample is stored in c1 note ­ residuals are placed in c2 note ­ value of D(r) are placed in c3 note ­ k1 = size of data set let k1=5 704 Appendix B: Computations do k2=1:10000 random k1 c1 let k3=mean(c1) let k4=sqrt(k1­1)*stdev(c1) let c2=((c1­k3)/k4)**2 let c2=loge(c2) let k5=­sum(c2)/k1 let c3(k2)=k5 enddo endmacro EXAMPLE B.2.7 Generating from a Dirichlet Distribution in Example 10.2.3 The following code generates a sample from a Dirichlet( 1 1 where 1 1 5. 2 3 2 4 2 3 3 4) distribution, number generator (so you can repeat a simulation). a Dirichlet(k1,k2,k3,k4) distribution. gmacro dirichlet note ­ the base command sets the seed for the random note base 34256734 note ­ here we provide the algorithm for generating from note note ­ assign the values of the parameters. let k1=2 let k2=3 let k3=1 let k4=1.5 let k5=K2+k3+k4 let k6=k3+k4 note ­ generate the sample with i­th sample in i­th row note do k10=1:5 random 1 c1; beta k1 k5. let c2(k10)=c1(1) random 1 c1; beta k2 k6. let c3(k10)=(1­c2(k10))*c1(1) random 1 c1; beta k3 k4. let c4(k10)=(1­c2(k10)­c3(k10))*c1(1) let c5(k10)= 1­c2(k10)­c3(k10)­c4(k10) enddo endmacro of c2, c3, c4, c5, .... Appendix C Common Distributions We record here the most commonly used distributions in probability and statistics as well as some of their basic characteristics. C.1 Discrete Distributions [0 1] (same as Binomial 1 1 x for x x 1 ). 0 1 1. Bernoulli probability function: p x mean: variance: moment­generating function: m t 1 et for t R1 [0 1] n x for x 0 1 n 1 1 0 1] (same as Negative­Binomial 1 et n for t R1 ). n 0 an integer, n x 1 x 2. Binomial n probability function: p x mean: n variance: n 1 moment­generating function: m t 3. Geometric probability function: p x mean: 1 variance: 1 moment­generating function: m t 1 2 x for x 0 1 2 1 1 et 1 for t ln 1 4. Hypergeometric N M n , M N n probability function: N all positive integers for max 0 n M N x min n M mean: n M N variance: n M N 1 5. Multinomial an integer, each i [0 1] 1 k 1 705 706 Appendix C: Common Distributions probability function: p x1 xk n xk x1 and x1 x1 1 xk k where each xi 0 1 n xk n n i mean: E Xi variance: Var Xi covariance: Cov Xi X j n i 1 i n i j when i j r 6. Negative­Binomial r probability function: p x mean: r 1 variance: r 1 moment­generating function: m t 2 0 7. Poisson probability function: p x mean: variance: moment­generating function: m t x! e x 0 an integer r 1 r 1 x x 0 1] x for x 0 1 2 3 r 1 1 et r for t ln 1 for x 0 1 2 3 exp et 1 for t R1 C.2 Absolutely Continuous Distributions 0 (same as Dirichlet a b ). for for 1 2 R1 2 1 2 2 0 [ 1 1] 1. Beta a b a 0 b density function: f x mean: a a b variance: ab a b 2. 
Bivariate Normal density function: f X1 X2 x1 x2 1 2 1 2 1 exp 2 1 2 1 2 for x1 R1 x2 R1 x1 2 2 1 1 x1 1 1 x2 2 2 2 x2 2 2 i mean: E Xi 2 variance: Var Xi i covariance: Cov X1 X2 2 or 3. Chi­squared density function: f x mean: variance: 2 , 2 2 1 2 0 (same as Gamma 1x 2 2 2 1 2 ). 1e x 2 for x 0 Appendix C.2: Absolutely Continuous Distributions 707 moment­generating function: m t 1 2t 2 for t 1 2 4. Dirichlet density function: 1 k 1 i 0 for each i f X1 Xk x1 1 1 for xi 0 i xk x1 k 1 1 xk k and 0 x1 xk 1 mean: variance: Var Xi covariance when i j: E Xi Cov Xi ). 0 (same as Gamma 1 x for x 5. Exponential density function: f x 1 mean: variance: moment­generating function: m t Note that some books and software packages instead replace of the Exponential when using software to generate from this distribution. 1 for t t in the definition distribution — always check this when using another book or by 1 6. F density function for x 0 mean: variance: 2 2 7. Gamma 2 when 2 2 0 0 1 x density function: f x mean: variance: moment­generating function: m t 2 e 2 2 4 when 4 x for x 0 t for t 708 Appendix C: Common Distributions Note that some books and software packages instead replace of the Gamma when using software to generate from this distribution. in the definition distribution — always check this when using another book or by 1 8. Lognormal or log N density function: f x 2 2 mean: exp variance: exp 2 2 exp 2 1 2 2 2 0 2 R1 1 2x 1 exp 1 2 2 ln x 2 for x 0 9. N 2 R1 2 0 2 density function: f x mean: variance: moment­generating function: m t 2 2 1 2 exp 1 2 2 x 2 for x R1 exp t 2t 2 2 for t R1 10. Student density function: or t 0 ( 1 gives the Cauchy distribution) f x 1 2 2 R1 1 2 for x x 2 1 1 2 1 mean: 0 when variance: 1 2 when 2 L 1 R 11. Uniform[L R] R density function: f x mean: L variance: (R moment­generating function: m t L 2 12 R 2 L for L x R eRt eLt t R L Appendix D Tables The following tables can be used for various computations. It is recommended, how­ ever, that the reader become familiar with the use of a statistical software package instead of relying on the tables. Computations of a much greater variety and accuracy can be carried out using the software, and, in the end, it is much more convenient. 709 710 Appendix D: Tables D.1 Random Numbers Each line in Table D.1 is a sample of 40 random digits, i.e., 40 independent and identi­ cally distributed (i.i.d.) values from the uniform distribution on the set Suppose we want a sample of five i.i.d. values from the uniform distribution on S 25 , i.e., a random sample of five, with replacement, from S. To do this, pick a starting point in the table and start reading off successive (nonoverlapping) two­digit numbers, treating a pair such as 07 as 7, and discarding any pairs that are not in the range 1 to 25, until you have five values. For example, if we start at line 110, we read the pairs ( indicates a sample element) 38, 44, 84, 87, 89, 18 , 33, 82, 46, 97, 39, 36, 44, 20 , 06 , 76, 68, 80, 87, 08 , 81, 48, 66, 94, 87, 60, 51, 30, 92, 97, 00, 41, 27, 12 . We can see at this point that we have a sample of five given by 18, 20, 6, 8, 12. If we want a random sample of five, without replacement, from S, then we proceed as above but now ignore any repeats in the generated sample until we get the five numbers. In this preceding case, we did not get any repeats, so this is also a simple random sample of size five without replacement. 
Table D.1 Random Numbers 19223 95034 05756 28713 96409 12531 42544 82853 73676 45467 47150 99400 01927 27754 42648 71709 77558 00095 32863 29485 82425 82226 36290 90056 52711 38889 93074 60227 40011 85848 48767 52573 95592 68417 94007 69971 91481 60779 53791 35013 15529 72765 85089 57067 17297 50211 59335 47487 82739 57890 20807 47511 81676 55300 94383 14893 60940 36009 72024 17868 24943 61790 90656 19365 15412 39638 85453 46816 87964 83485 18883 41979 38448 48789 18338 24697 39364 42006 76688 08708 81486 59636 62568 45149 69487 60513 09297 00412 71238 88804 04634 71197 19352 73089 70206 40325 03699 71080 22553 32992 75730 66280 03819 56202 27649 84898 11486 02938 39950 45785 11776 70915 61041 77684 94322 24709 73698 14526 31893 32592 14459 38167 26056 31424 80371 65103 62253 98532 62183 70632 23417 26185 50490 41448 61181 75532 73190 32533 04470 29669 84407 90785 65956 86382 95857 35476 71487 13873 07118 87664 92099 58806 66979 55972 39421 65850 04266 35435 09984 29077 14863 61683 47052 81598 95052 90908 73592 75186 98624 43742 62224 87136 84826 11937 51025 95761 54580 81507 27102 56027 55892 33063 41842 81868 71035 96746 09001 43367 49497 72719 96758 12149 37823 71868 18442 35119 27611 62103 91596 39244 Line 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 Appendix D.1: Random Numbers 711 Table D.1 Random Numbers (continued) 96927 43909 15689 36759 19931 99477 14227 58984 36809 25330 06565 68288 74192 64359 14374 22913 77567 40085 13352 18638 88741 48409 41903 16925 85117 36071 49367 81982 87209 54303 00795 08727 69051 64817 87174 09517 84534 06489 87201 97245 05007 68732 16632 55259 81194 84292 14873 08796 04197 43165 85576 45195 96565 93739 31685 97150 45740 41807 65561 33302 07051 93623 18132 09547 27816 66925 08421 53645 78416 55658 44753 66812 18329 39100 77377 61421 21337 78458 28744 47836 35213 11206 75592 12609 37741 04312 68508 19876 87151 31260 08563 79140 92454 15373 98481 14592 66831 68908 40772 21558 47781 33586 79177 06928 55588 12975 99404 13258 70708 13048 41098 45144 43563 72321 56934 48394 51719 81940 00360 02428 96767 35964 23822 96012 94591 65194 50842 53372 72829 88565 62964 19687 50232 42628 88145 12633 97892 17797 83083 57857 63408 49376 69453 95806 77919 61762 46109 09931 44575 24870 04178 16953 88604 12724 59505 69680 00900 02150 43163 58636 37609 59057 66967 83401 60705 02384 90597 93600 54973 00694 86278 05977 88737 19664 74351 65441 47500 20903 84552 19909 67181 62371 22725 53340 71546 05233 53946 68743 72460 27601 45403 88692 07511 03802 77320 07886 88915 29341 35030 56866 41267 29264 77519 39648 16853 80198 41109 69290 84569 12371 98296 03600 79367 32337 03316 13121 54969 43912 18984 60869 12349 05376 58958 22720 87065 74133 21117 70595 22791 67306 28420 52067 42090 55494 09628 67690 54035 88131 93879 81800 98441 11188 04606 27381 82637 28552 25752 21953 16698 30406 96587 65985 07165 50148 16201 86792 16297 22897 98163 43400 07626 17467 45944 25831 68683 17638 34210 06283 45335 70043 64158 22138 34377 36243 76971 16043
72941 41764 77038 13008 83993 22869 27689 82926 75957 15706 73345 26238 97341 46254 88153 62336 21112 35574 99271 45297 64578 11022 67197 79124 28310 49525 90341 63078 37531 17229 63890 52630 76315 32165 01343 21394 81232 43939 23840 05995 84589 06788 76358 26622 Line 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 712 Appendix D: Tables D.2 Standard Normal Cdf N 0 1 then we can use Table D.2 to compute the cumulative distribution for Z For example, suppose we want to compute If Z function (cdf) 1 03 The symmetry of the N 0 1 distribution about 0 implies that 1 03 so using Table D.2, we have that 03 P Z z z 1 03 1 0 1515 0 8485 00 .0003 .0005 .0007 .0010 .0013 .0019 .0026 .0035 .0047 .0062 .0082 .0107 .0139 .0179 .0228 .0287 .0359 .0446 .0548 .0668 .0808 .0968 .1151 .1357 .1587 .1841 .2119 .2420 .2743 .3085 .3446 .3821 .4207 .4602 .5000 01 .0003 .0005 .0007 .0009 .0013 .0018 .0025 .0034 .0045 .0060 .0080 .0104 .0136 .0174 .0222 .0281 .0351 .0436 .0537 .0655 .0793 .0951 .1131 .1335 .1562 .1814 .2090 .2389 .2709 .3050 .3409 .3783 .4168 .4562 .4960 Table D.2 Standard Normal Cdf 06 04 .0003 .0003 02 .0003 03 .0003 05 .0003 .0005 .0006 .0009 .0013 .0018 .0024 .0033 .0044 .0059 .0078 .0102 .0132 .0170 .0217 .0274 .0344 .0427 .0526 .0643 .0778 .0934 .1112 .1314 .1539 .1788 .2061 .2358 .2676 .3015 .3372 .3745 .4129 .4522 .4920 .0004 .0006 .0009 .0012 .0017 .0023 .0032 .0043 .0057 .0075 .0099 .0129 .0166 .0212 .0268 .0336 .0418 .0516 .0630 .0764 .0918 .1093 .1292 .1515 .1762 .2033 .2327 .2643 .2981 .3336 .3707 .4090 .4483 .4880 .0004 .0006 .0008 .0012 .0016 .0023 .0031 .0041 .0055 .0073 .0096 .0125 .0162 .0207 .0262 .0329 .0409 .0505 .0618 .0749 .0901 .1075 .1271 .1492 .1736 .2005 .2296 .2611 .2946 .3300 .3669 .4052 .4443 .4840 .0004 .0006 .0008 .0011 .0016 .0022 .0030 .0040 .0054 .0071 .0094 .0122 .0158 .0202 .0256 .0322 .0401 .0495 .0606 .0735 .0885 .1056 .1251 .1469 .1711 .1977 .2266 .2578 .2912 .3264 .3632 .4013 .4404 .4801 .0004 .0006 .0008 .0011 .0015 .0021 .0029 .0039 .0052 .0069 .0091 .0119 .0154 .0197 .0250 .0314 .0392 .0485 .0594 .0721 .0869 .1038 .1230 .1446 .1685 .1949 .2236 .2546 .2877 .3228 .3594 .3974 .4364 .4761 07 .0003 .0004 .0005 .0008 .0011 .0015 .0021 .0028 .0038 .0051 .0068 .0089 .0116 .0150 .0192 .0244 .0307 .0384 .0475 .0582 .0708 .0853 .1020 .1210 .1423 .1660 .1922 .2206 .2514 .2843 .3192 .3557 .3936 .4325 .4721 08 .0003 .0004 .0005 .0007 .0010 .0014 .0020 .0027 .0037 .0049 .0066 .0087 .0113 .0146 .0188 .0239 .0301 .0375 .0465 .0571 .0694 .0838 .1003 .1190 .1401 .1635 .1894 .2177 .2483 .2810 .3156 .3520 .3897 .4286 .4681 09 .0002 .0003 .0005 .0007 .0010 .0014 .0019 .0026 .0036 .0048 .0064 .0084 .0110 .0143 .0183 .0233 .0294 .0367 .0455 .0559 .0681 .0823 .0985 .1170 .1379 .1611 .1867 .2148 .2451 .2776 .3121 .3483 .3859 .4247 .4641 Appendix D.3: Chi­Squared Distribution Quantiles 713 D.3 Chi­Squared Distribution Quantiles 2 d f If X For example, if d f distribution. 
then we can use Table D.3 to obtain some quantiles for this distribution 21 16 is the 0 98 quantile of this 0 98 then x0 98 10 and P Table D.3 2 d f Quantiles 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 40 50 60 80 100 0.75 0.85 0.90 0.95 1.32 2.77 4.11 5.39 6.63 7.84 9.04 10.22 11.39 12.55 13.70 14.85 15.98 17.12 18.25 19.37 20.49 21.60 22.72 23.83 24.93 26.04 27.14 28.24 29.34 30.43 31.53 32.62 33.71 34.80 45.62 56.33 66.98 88.13 109.1 2.07 3.79 5.32 6.74 8.12 9.45 10.75 12.03 13.29 14.53 15.77 16.99 18.20 19.41 20.60 21.79 22.98 24.16 25.33 26.50 27.66 28.82 29.98 31.13 32.28 33.43 34.57 35.71 36.85 37.99 49.24 60.35 71.34 93.11 114.7 2.71 4.61 6.25 7.78 9.24 10.64 12.02 13.36 14.68 15.99 17.28 18.55 19.81 21.06 22.31 23.54 24.77 25.99 27.20 28.41 29.62 30.81 32.01 33.20 34.38 35.56 36.74 37.92 39.09 40.26 51.81 63.17 74.40 96.58 118.5 3.84 5.99 7.81 9.49 11.07 12.59 14.07 15.51 16.92 18.31 19.68 21.03 22.36 23.68 25.00 26.30 27.59 28.87 30.14 31.41 32.67 33.92 35.17 36.42 37.65 38.89 40.11 41.34 42.56 43.77 55.76 67.50 79.08 101.9 124.3 P 0.975 5.02 7.38 9.35 11.14 12.83 14.45 16.01 17.53 19.02 20.48 21.92 23.34 24.74 26.12 27.49 28.85 30.19 31.53 32.85 34.17 35.48 36.78 38.08 39.36 40.65 41.92 43.19 44.46 45.72 46.98 59.34 71.42 83.30 106.6 129.6 0.98 0.99 0.995 0.9975 5.41 7.82 9.84 11.67 13.39 15.03 16.62 18.17 19.68 21.16 22.62 24.05 25.47 26.87 28.26 29.63 31.00 32.35 33.69 35.02 36.34 37.66 38.97 40.27 41.57 42.86 44.14 45.42 46.69 47.96 60.44 72.61 84.58 108.1 131.1 6.63 9.21 11.34 13.28 15.09 16.81 18.48 20.09 21.67 23.21 24.72 26.22 27.69 29.14 30.58 32.00 33.41 34.81 36.19 37.57 38.93 40.29 41.64 42.98 44.31 45.64 46.96 48.28 49.59 50.89 63.69 76.15 88.38 112.3 135.8 7.88 10.60 12.84 14.86 16.75 18.55 20.28 21.95 23.59 25.19 26.76 28.30 29.82 31.32 32.80 34.27 35.72 37.16 38.58 40.00 41.40 42.80 44.18 45.56 46.93 48.29 49.64 50.99 52.34 53.67 66.77 79.49 91.95 116.3 140.2 9.14 11.98 14.32 16.42 18.39 20.25 22.04 23.77 25.46 27.11 28.73 30.32 31.88 33.43 34.95 36.46 37.95 39.42 40.88 42.34 43.78 45.20 46.62 48.03 49.44 50.83 52.22 53.59 54.97 56.33 69.70 82.66 95.34 120.1 144.3 0.999 10.83 13.82 16.27 18.47 20.51 22.46 24.32 26.12 27.88 29.59 31.26 32.91 34.53 36.12 37.70 39.25 40.79 42.31 43.82 45.31 46.80 48.27 49.73 51.18 52.62 54.05 55.48 56.89 58.30 59.70 73.40 86.66 99.61 124.8 149.4 714 Appendix D: Tables D.4 t Distribution Quantiles Table D.4 contains some quantiles for t or Student distributions. For example, if X t d f with d f 2 359 is the 0 98 quantile of the t 10 distribution. 
Recall that the t d f distribution is symmetric about 0 so, for example, x0 25 0 98 then x0 98 10 and P x0 75 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 40 50 60 80 100 1000 0.75 1.000 0.816 0.765 0.741 0.727 0.718 0.711 0.706 0.703 0.700 0.697 0.695 0.694 0.692 0.691 0.690 0.689 0.688 0.688 0.687 0.686 0.686 0.685 0.685 0.684 0.684 0.684 0.683 0.683 0.683 0.681 0.679 0.679 0.678 0.677 0.675 0.674 50% 0.85 1.963 1.386 1.250 1.190 1.156 1.134 1.119 1.108 1.100 1.093 1.088 1.083 1.079 1.076 1.074 1.071 1.069 1.067 1.066 1.064 1.063 1.061 1.060 1.059 1.058 1.058 1.057 1.056 1.055 1.055 1.050 1.047 1.045 1.043 1.042 1.037 1.036 70% Table D.4 t d f Quantiles 0.90 3.078 1.886 1.638 1.533 1.476 1.440 1.415 1.397 1.383 1.372 1.363 1.356 1.350 1.345 1.341 1.337 1.333 1.330 1.328 1.325 1.323 1.321 1.319 1.318 1.316 1.315 1.314 1.313 1.311 1.310 1.303 1.299 1.296 1.292 1.290 1.282 1.282 0.95 6.314 2.920 2.353 2.132 2.015 1.943 1.895 1.860 1.833 1.812 1.796 1.782 1.771 1.761 1.753 1.746 1.740 1.734 1.729 1.725 1.721 1.717 1.714 1.711 1.708 1.706 1.703 1.701 1.699 1.697 1.684 1.676 1.671 1.664 1.660 1.646 1.645 P 0.975 12.71 4.303 3.182 2.776 2.571 2.447 2.365 2.306 2.262 2.228 2.201 2.179 2.160 2.145 2.131 2.120 2.110 2.101 2.093 2.086 2.080 2.074 2.069 2.064 2.060 2.056 2.052 2.048 2.045 2.042 2.021 2.009 2.000 1.990 1.984 1.962 1.960 0.98 15.89 4.849 3.482 2.999 2.757 2.612 2.517 2.449 2.398 2.359 2.328 2.303 2.282 2.264 2.249 2.235 2.224 2.214 2.205 2.197 2.189 2.183 2.177 2.172 2.167 2.162 2.158 2.154 2.150 2.147 2.123 2.109 2.099 2.088 2.081 2.056 2.054 80% 90% 95% 96% 0.99 31.82 6.965 4.541 3.747 3.365 3.143 2.998 2.896 2.821 2.764 2.718 2.681 2.650 2.624 2.602 2.583 2.567 2.552 2.539 2.528 2.518 2.508 2.500 2.492 2.485 2.479 2.473 2.467 2.462 2.457 2.423 2.403 2.390 2.374 2.364 2.330 2.326 98% Confidence level 0.995 63.66 9.925 5.841 4.604 4.032 3.707 3.499 3.355 3.250 3.169 3.106 3.055 3.012 2.977 2.947 2.921 2.898 2.878 2.861 2.845 2.831 2.819 2.807 2.797 2.787 2.779 2.771 2.763 2.756 2.750 2.704 2.678 2.660 2.639 2.626 2.581 2.576 0.9975 0.999 127.3 14.09 7.453 5.598 4.773 4.317 4.029 3.833 3.690 3.581 3.497 3.428 3.372 3.326 3.286 3.252 3.222 3.197 3.174 3.153 3.135 3.119 3.104 3.091 3.078 3.067 3.057 3.047 3.038 3.030 2.971 2.937 2.915 2.887 2.871 2.813 2.807 318.3 22.33 10.21 7.173 5.893 5.208 4.785 4.501 4.297 4.144 4.025 3.930 3.852 3.787 3.733 3.686 3.646 3.611 3.579 3.552 3.527 3.505 3.485 3.467 3.450 3.435 3.421 3.408 3.396 3.385 3.307 3.261 3.232 3.195 3.174 3.098 3.091 99% 99.5% 99.8% Appendix D.5: F Distribution Quantiles 715 D.5 F Distribution Quantiles F nd f dd f If X distribution For example, if nd f is the 0 975 quantile of the F 3 4 distribution. Note that if X Y then we can use Table D.5 to obtain some quantiles for this 9 98 then 0 975 then x0 975 F nd f dd f F dd f nd f and P X 4 and P 3 dd f 1 x . 
Table D.5: F(ndf, ddf) Distribution Quantiles. For Y ~ F(ndf, ddf), each entry is the quantile x satisfying P(Y ≤ x) = P, tabulated for P = 0.900, 0.950, 0.975, 0.990, and 0.999. The numerator degrees of freedom ndf take the values 1 through 12, 15, 20, 30, 60, 120, and 10000, and the denominator degrees of freedom ddf take the values 1 through 15, 20, 30, 60, 120, and 10000. The first block of the table (ddf = 1, ndf = 1 through 6) reads as follows.

   P          ndf = 1     ndf = 2     ndf = 3     ndf = 4     ndf = 5     ndf = 6
   0.900        39.86       49.50       53.59       55.83       57.24       58.20
   0.950       161.45      199.50      215.71      224.58      230.16      233.99
   0.975       647.79      799.50      864.16      899.58      921.85      937.11
   0.990      4052.18     4999.50     5403.35     5624.58     5763.65     5858.99
   0.999    405284.07   499999.50   540379.20   562499.58   576404.56   585937.11

The table continues in the same layout for the remaining combinations of ndf (up to 10000) and ddf (2 through 15, 20, 30, 60, 120, and 10000).
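These quantiles can also be computed directly in statistical software. A minimal sketch, assuming R is available (qf() is the base R quantile function for the F distribution), reproduces the ddf = 1 block above; it is an illustration, not part of the original table.

# F(ndf, ddf) quantiles via base R's qf(); reproduces the ddf = 1 block above.
for (P in c(0.900, 0.950, 0.975, 0.990, 0.999)) {
  cat("P =", P, ":", round(qf(P, df1 = 1:6, df2 = 1), 2), "\n")
}
# For instance, qf(0.950, df1 = 1, df2 = 1) returns 161.45 (to two decimals),
# matching the P = 0.950, ndf = 1 entry of the table.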
D.6 Binomial Distribution Probabilities

If X ~ Binomial(n, p), then Table D.6 contains entries computing

   P(X = k) = C(n, k) p^k (1 - p)^(n - k)

for various values of n, k, and p, where C(n, k) denotes the binomial coefficient. Note that if X ~ Binomial(n, p), then P(X = k) = P(Y = n - k), where Y = n - X ~ Binomial(n, 1 - p); this allows the table, whose columns stop at p = 0.50, to be used for larger values of p as well.

Table D.6: Binomial Probabilities. Each column lists P(X = 0), P(X = 1), ..., P(X = n) for a given p, with blocks for n = 2 through 10, 12, 15, and 20, and for p = 0.01, 0.02, ..., 0.09, 0.10, 0.15, 0.20, ..., 0.50. For example, the n = 2, p = 0.01 column reads .9801, .0198, .0001.
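A minimal sketch, again assuming R, shows the same probabilities computed with the base function dbinom(), together with a numerical check of the symmetry just noted.

# Binomial probabilities as tabulated in Table D.6, via base R's dbinom().
dbinom(0:5, size = 5, prob = 0.20)
# 0.32768 0.40960 0.20480 0.05120 0.00640 0.00032
# (the n = 5, p = 0.20 column of the table, which rounds these to four decimals)

# Symmetry for large p: if X ~ Binomial(n, p) and Y = n - X ~ Binomial(n, 1 - p),
# then P(X = k) = P(Y = n - k).
all.equal(dbinom(2, size = 5, prob = 0.70), dbinom(3, size = 5, prob = 0.30))  # TRUE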
Appendix E: Answers to Odd-Numbered Exercises

Answers are provided here to odd-numbered exercises that require a computation. No details of the computations are given. If the exercise required that something be demonstrated, then a significant hint is provided. The answers and hints are keyed to exercise number and cover Chapters 1 through 11 (Exercises 1.2.1 through 11.6.5).
Index

An alphabetical index of terms, software commands, and named distributions, with page references to the printed text; the entries here run from "0–1 loss function" through "order statistics".
103, 284 ordered partitions, 17 orthogonal rows, 236 outcome, 4 outliers, 288 overfitting, 481 pX , 42 P­value, 332 paired comparisons, 585 pairwise independence, 24 parameter, 262 parameter space, 262 757 Pareto distribution, 61 partial derivative, 678 partition, 11 Pascal’s triangle, 2, 632 Pascal, B., 2 pen, 626 percentile, 283 period of Markov chain, 635 permutations, 16 , 66 Pi , 627 placebo effect, 521 plot command, 688 plug­in Fisher information, 366 plug­in MLE, 315 point distribution, 42 point hypothesis, 466 point mass, 42 pois command, 686 Poisson distribution, 45, 132, 162, 164 Poisson process, 50, 666 polling, 276 pooling, 593 population, 270 population cumulative distribution , 270 population distribution, 270 population interquartile range, 286 population mean, 285 population relative frequency function, 274 population variance, 285 posterior density, 376 posterior distribution, 376 posterior mode, 387 posterior odds, 397 posterior predictive, 400 posterior probability function, 376 power, 341 power function, 341, 449, 469 power transformations, 496 practical significance, 335 prediction, 258, 400 prediction intervals, 576 prediction region, 402 predictor variable, 514 principle of conditional probability, 259 principle of inclusion–exclusion, 12, 14 prior elicitation, 422 prior odds, 397 prior predictive distribution, 375 prior probability distribution, 374 prior risk, 471 prior–
data conict, 503 probability, 1 conditional, 20 law of total, 11, 21 probability function, 42 conditional, 95 probability measure, 5 probability model, 5 probability plot, 488 probability­generating function, 162 probit link, 603 problem of statistical inference, 290 process Poisson, 666 random, 615 stochastic, 615 proportional stratified sampling, 281 proposal density, 646 proposal distribution, 644 pseudorandom numbers, 2, 117 pth percentile, 283 pth quantile, 283 q command, 683 qqnorm command, 688 quantile, 283 quantile command, 688 quantile function, 120 quantiles, 284 quantitative variable, 270 quartiles, 284 queues, 50 quintile, 362 R , 265 R2 (coefficient of determination), 546 random numbers, 710 random process, 615 random variable, 34, 104 absolutely continuous, 52 constant, 42 758 continuous, 51–53 discrete, 41 distribution, 80 expected value, 129, 130, 141, 191 mean, 130 standard deviation, 150 unbounded, 36 variance, 149 random walk, 615, 616 on circle, 625 randomization test, 363 randomized block design, 594 rank command, 688 Rao–Blackwell theorem, 436 Rao–Blackwellization, 436 rate command, 686 rbind command, 698 reduction principles, 470 reference prior, 425 regression assumption, 515 regression coefficients, 541 regression model, 516, 540 regression sum of squares (RSS), 545 reject, 448 rejection region, 448 rejection sampling, 122, 125 related variables, 513 relative frequency, 2 ­relative surprise region, 406 rep command, 684 reparameterization, 265 reparameterize, 309 repeated measures, 584 resamples, 353 resampling, 353 residual plot, 486 residuals, 481, 560 response, 4 response curves, 588 response variable, 514 reversibility, 632 right­continuous, 74 risk, 3 risk function, 467 rm command, 686 RSS (regression sum of squares), 545 sample, 101 sample command, 687 sample ­trimmed mean, 355 sample average, 206 sample correlation coefficient, 190, 547 sample covariance, 190, 547 sample interquartile range I Q R, 287 sample mean, 206, 266 sample median, 284 sample moments, 350 sample pth quantile, 284 sample range, 361 sample space, 4 sample standard deviation, 286 sample variance, 221, 266, 286 sample­size calculation, 273, 340 sampling importance, 233 Monte Carlo, 122 rejection, 122, 125 sampling distribution, 199 sampling study, 273 sampling with replacement, 48 sampling without replacement, 47, 48 scale mixture, 70 scan command, 684 scatter plot, 542, 551 score equation, 310 score function, 310 sd command, 688 seed values, 492 selection effect, 271 series Taylor, 677 series, infinite, 677 set.seed command, 687 sign statistic, 357 sign test statistic, 357 simple hypothesis, 466 simple linear regression model, 540 simple random sampling, 271, 272 simple random walk, 615, 616 Simpson’s paradox, 183 sin command, 685 size rejection region, 448 test function, 449 size skewed skewness, 286 skewness statistic, 483 SLLN (strong law of large numbers), 211 smallest­order statistic, 104 solve command, 698 sort command, 688 source command, 691 sqrt command, 685 squared error loss, 466 St. 
Petersburg paradox, 133, 134, 141 standard bivariate normal density, 89 standard deviation, 150 standard error, 221, 325 standard error of the estimate, 323 standard normal distribution, 57 standardizing a random variable, 215 state space, 623 stationary distribution, 629 statistical inference, 262 statistical model, 262 statistical model for a sample, 263 statistically significant, 335 stochastic matrix, 624 doubly, 631, 632 stochastic process, 615 continuous­time, 658, 666 martingale, 650 stock prices, 662 stopping theorem, 653 stopping time, 652 stratified sampling, 281 strength of a relationship, 513 strong law of large numbers (SLLN), 211 Student n , 239 subadditivity, 12 subfair game, 617 sufficient statistic, 302 sugar pill, 521 sum command, 685 summary command, 689 superfair game, 617 surprise (P­value), 332 survey sampling, 276 759 t command, 686 t distribution, 239 t n , 239 t­confidence intervals, 331 t­statistic, 331 t­test, 337 t.test command, 688 table command, 687 tables binomial probabilities, 724 2 quantiles, 713 F distribution quantiles, 715 random numbers, 710 standard normal cdf, 712 t distribution quantiles, 714 tail probability, 259 tan command, 685 Taylor series, 677 test function, 449, 469 test of hypothesis, 332 test of significance, 332 theorem of total expectation, 177 total expectation, theorem of, 177 total probability, law of, 11, 21 total sum of squares, 544 training set, 495 transition probabilities, 623 higher­order, 628 transpose, 560 treatment, 520 two­sample t­confidence interval, 580 two­sample t­statistic, 580 two­sample t­test, 580 two­sided tests, 337 two­stage systems, 22 two­way ANOVA, 586 type I error, 448 type II error, 448 types of inferences, 289 UMA (uniformly most accurate), 460 UMP (uniformly most powerful), 449 UMVU (uniformly minimum variance un­ biased), 437 unbiased, 437 unbiased estimator, 322, 436 760 unbiasedness, hypothesis testing, 453 unbounded random variable, 36 underfitting, 481 unif command, 686 uniform distribution, 7, 53, 141, 142 uniformly minimum variance unbiased (UMVU), 437 uniformly most accurate (UMA), 460 uniformly most powerful (UMP), 449 union, 8 upper limit, 287 utility function, 134, 141 utility theory, 134, 141 validation set, 495 var command, 688 variance, 149 variance stabilizing transformations, 362 Venn diagrams, 7 volatility parameter, 662 von Savant, M., 28 weak law of large numbers (WLLN), 206 Weibull distribution, 61 whiskers, 287 Wiener process, 657, 659 Wiener, N., 2, 657 WLLN (weak law of large numbers), 206 z­confidence intervals, 328 z­statistic, 328 z­test, 333
The conditional probability measure $P(A/B)$ satisfies all three axioms of a probability measure. That is,

(CP1) $P(A/B) \ge 0$ for every event $A$;
(CP2) $P(B/B) = 1$;
(CP3) if $A_1, A_2, \ldots, A_k, \ldots$ are mutually exclusive events, then
$$P\Bigl(\bigcup_{k=1}^{\infty} A_k \,/\, B\Bigr) = \sum_{k=1}^{\infty} P(A_k/B).$$

Thus, it is a probability measure with respect to the new sample space $B$.

Example 2.1. A drawer contains 4 black, 6 brown, and 8 olive socks. Two socks are selected at random from the drawer. (a) What is the probability that both socks are of the same color? (b) What is the probability that both socks are olive if it is known that they are of the same color?

Answer: The sample space of this experiment consists of
$$S = \{(x, y) \mid x, y \in \{Bl, Ol, Br\}\}.$$
The cardinality of $S$ is
$$N(S) = \binom{18}{2} = 153.$$
Let $A$ be the event that the two socks selected at random are of the same color. Then the cardinality of $A$ is given by
$$N(A) = \binom{4}{2} + \binom{6}{2} + \binom{8}{2} = 6 + 15 + 28 = 49.$$
Therefore, the probability of $A$ is
$$P(A) = \frac{49}{\binom{18}{2}} = \frac{49}{153}.$$
Let $B$ be the event that both socks selected at random are olive. Then the cardinality of $B$ is
$$N(B) = \binom{8}{2} = 28,$$
and hence
$$P(B) = \frac{\binom{8}{2}}{\binom{18}{2}} = \frac{28}{153}.$$
Notice that $B \subset A$. Hence
$$P(B/A) = \frac{P(A \cap B)}{P(A)} = \frac{P(B)}{P(A)} = \frac{28}{153} \cdot \frac{153}{49} = \frac{28}{49} = \frac{4}{7}.$$

Let $A$ and $B$ be two mutually disjoint events in a sample space $S$. We want to find a formula for computing the probability that the event $A$ occurs before the event $B$ in a sequence of trials. Let $P(A)$ and $P(B)$ be the probabilities that $A$ and $B$ occur, respectively. Then the probability that neither $A$ nor $B$ occurs is $1 - P(A) - P(B)$. Let us denote this probability by $r$, that is,
$$r = 1 - P(A) - P(B).$$
In the first trial, either $A$ occurs, or $B$ occurs, or neither $A$ nor $B$ occurs. If $A$ occurs in the first trial, then the probability that $A$ occurs before $B$ is 1. If $B$ occurs in the first trial, then the probability that $A$ occurs before $B$ is 0. If neither $A$ nor $B$ occurs in the first trial, we look at the outcomes of the second trial. In the second trial, if $A$ occurs, the probability that $A$ occurs before $B$ is 1; if $B$ occurs, it is 0; and if neither occurs, we look at the outcomes of the third trial, and so on. This argument can be summarized in a tree diagram in which each trial branches into "$A$ occurs" (probability $P(A)$, leading to the value 1), "$B$ occurs" (probability $P(B)$, leading to the value 0), and "neither occurs" (probability $r = 1 - P(A) - P(B)$, leading to the next trial).

Hence the probability that the event $A$ comes before the event $B$ is given by
$$\begin{aligned}
P(A \text{ before } B) &= P(A) + r\,P(A) + r^2 P(A) + r^3 P(A) + \cdots + r^n P(A) + \cdots \\
&= P(A)\,[1 + r + r^2 + \cdots + r^n + \cdots] \\
&= P(A)\,\frac{1}{1 - r} \\
&= \frac{P(A)}{1 - [1 - P(A) - P(B)]} \\
&= \frac{P(A)}{P(A) + P(B)}.
\end{aligned}$$

The event "$A$ before $B$" can also be interpreted as a conditional event. In this interpretation, the event "$A$ before $B$" means the occurrence of the event $A$ given that $A \cup B$ has already occurred. Thus we again have
$$P(A / A \cup B) = \frac{P(A \cap (A \cup B))}{P(A \cup B)} = \frac{P(A)}{P(A) + P(B)}.$$

Example 2.2. A pair of four-sided dice is rolled and the sum is determined. What is the probability that a sum of 3 is rolled before a sum of 5 is rolled in a sequence of rolls of the dice?

Answer: The sample space of this random experiment is
$$S = \{(1,1), (2,1), (3,1), (4,1), (1,2), (2,2), (3,2), (4,2), (1,3), (2,3), (3,3), (4,3), (1,4), (2,4), (3,4), (4,4)\}.$$
Let $A$ denote the event of getting a sum of 3 and $B$ denote the event of getting a sum of 5.
The probability that a sum of 3 is rolled before a sum of 5 is rolled can be thought of as the conditional probability of a sum of 3, given that a sum of 3 or 5 has occurred. That is, P (A/A B). Hence [ P (A/A B) = [ = = = = P (A (A B)) [ B) \ P (A [ P (A) P (A) + P (B) N (A) N (A) + N (B) 2 2 + 4 1 3 . Example 2.3. If we randomly pick two television sets in succession from a shipment of 240 television sets of which 15 are defective, what is the probability that they will be both defective? Answer: Let A denote the event that the first television picked was defective. Let B denote the event that the second television picked was defective. Then A B will denote the event that both televisions picked were defective. Using the conditional probability, we can calculate \ P (A \ B) = P (A) P (B/A) 14 239 ◆ = = ✓ 15 240 7 1912 ◆ ✓ . In Example 2.3, we assume that we are sampling without replacement. Definition 2.2. If an object is selected and then replaced before the next object is selected, this is known as sampling with replacement. Otherwise, it is called sampling without replacement. Conditional Probability and Bayes’ Theorem 32 Rolling a die is equivalent to sampling with replacement, whereas dealing a deck of cards to players is sampling without replacement. Example 2.4. A box of fuses contains 20 fuses, of which 5 are defective. If 3 of the fuses are selected at random and removed from the box in succession without replacement, what is the probability that all three fuses are defective? Answer: Let A be the event that the first fuse selected is defective. Let B be the event that the second fuse selected is defective. Let C be the event that the third fuse selected is defective. The probability that all three fuses selected are defective is P (A C). Hence B \ \ P (A B \ \ B) \ C) = P (A) P (B/A) P (C/A 4 19 3 18 ◆ ◆ ✓ = = 5 20 ✓ 1 114 ◆ ✓ . Definition 2.3. Two events A and B of a sample space S are called independent if and only if P (A \ B) = P (A) P (B). Example 2.5. The following diagram shows two events A and B in the sample space S. Are the events A and B independent? S B A Answer: There are 10 black dots in S and event A contains 4 of these dots. So the probability of A, is P (A) = 4 10 . Similarly, event B contains 5 black dots. Hence P (B) = 5 10 . The conditional probability of A given B is P (A/B) = P (A B) \ P (B) = 2 5 . Probability and Mathematical Statistics 33 This shows that P (A/B) = P (A). Hence A and B are independent. Theorem 2.1. Let A, B then ✓ S. If A and B are independent and P (B) > 0, P (A/B) = P (A). Proof: P (A/B) = = P (A B) \ P (B) P (A) P (B) P (B) = P (A). Theorem 2.2. independent. Similarly A and Bc are independent. If A and B are independent events. Then Ac and B are Proof: We know that A and B are independent, that is P (A \ B) = P (A) P (B) and we want to show that Ac and B are independent, that is P (Ac \ B) = P (Ac) P (B). Since P (Ac \ B) = P (Ac/B) P (B) = [1 P (A/B)] P (B) = P (B) P (A/B)P (B) = P (B) P (A B) \ P (A) P (B) = P (B) = P (B) [1 = P (B)P (Ac), P (A)] the events Ac and B are independent. Similarly, it can be shown that A and Bc are independent and the proof is now complete. Remark 2.1. The concept of independence is fundamental. In fact, it is this concept that justifies the mathematical development of probability as a separate discipline from measure theory. 
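The "A before B" formula is easy to check by simulation. The following short Python sketch (not part of the original text; it assumes only the standard library) simulates repeated rolls of a pair of fair four-sided dice and estimates the probability that a sum of 3 appears before a sum of 5, which should be close to the value 1/3 obtained in Example 2.2.

```python
import random

def sum_3_before_5(rng: random.Random) -> bool:
    """Roll a pair of fair four-sided dice until the sum is 3 or 5.

    Returns True if a sum of 3 appears first, False if a sum of 5 does.
    """
    while True:
        total = rng.randint(1, 4) + rng.randint(1, 4)
        if total == 3:
            return True
        if total == 5:
            return False

rng = random.Random(0)
trials = 100_000
wins = sum(sum_3_before_5(rng) for _ in range(trials))
# Estimate of P(A before B); the formula gives P(A)/(P(A)+P(B)) = 2/(2+4) = 1/3.
print(wins / trials)
```

The estimate agrees with the closed-form answer P(A)/(P(A) + P(B)) to within ordinary Monte Carlo error.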
Mark Kac said, “independence of events is not a purely mathematical concept.” It can, however, be made plausible Conditional Probability and Bayes’ Theorem 34 that it should be interpreted by the rule of multiplication of probabilities and this leads to the mathematical definition of independence. Example 2.6. Flip a coin and then independently cast a die. What is the probability of observing heads on the coin and a 2 or 3 on the die? Answer: Let A denote the event of observing a head on the coin and let B be the event of observing a 2 or 3 on the die. Then P (A \ B) = P (A) P (B Example 2.7. An urn contains 3 red, 2 white and 4 yellow balls. An ordered sample of size 3 is drawn from the urn. If the balls are drawn with replacement so that one outcome does not change the probabilities of others, then what is the probability of drawing a sample that has balls of each color? Also, find the probability of drawing a sample that has two yellow balls and a red ball or a red ball and two white balls? Answer: and P (RW 243 4 9 ◆ ◆ ✓ ◆ ✓ P (Y Y R or RW = 20 243 . 2 9 ◆ ◆ ✓ ◆ ✓ ◆ ✓ ◆ ✓ If the balls are drawn without replacement, then P (RW Y ) = ✓ P (Y Y R or RW 21 . ◆ ✓ 84 . 1 7 ◆ ◆ ✓ ◆ ✓ There is a tendency to equate the concepts “mutually exclusive” and “independence”. This is a fallacy. Two events A and B are mutually exclusive if A and they are called possible if P (A) = P (B). B = = 0 \ ; Theorem 2.2. Two possible mutually exclusive events are always dependent (that is not independent). 6 6 Probability and Mathematical Statistics 35 Proof: Suppose not. Then P (A B) = P (A) P (B) \ ) = P (A) P (B) P ( ; 0 = P (A) P (B). Hence, we get either P (A) = 0 or P (B) = 0. This is a contradiction to the fact that A and B are possible events. This completes the proof. Theorem 2.3. Two possible independent events are not mutually exclusive. Proof: Let A and B be two independent events and suppose A and B are mutually exclusive. Then P (A) P (B) = P (A ) = P ( ; = 0. B) \ Therefore, we get either P (A) = 0 or P (B) = 0. This is a contradiction to the fact that A and B are possible events. The possible events A and B exclusive implies A and B are not indepen- dent; and A and B independent implies A and B are not exclusive. 2.2. Bayes’ Theorem There are many situations where the ultimate outcome of an experiment depends on what happens in various intermediate stages. This issue is resolved by the Bayes’ Theorem. Definition 2.4. Let S be a set and let P = {Ai}m of S. The collection P is called a partition of S if i=1 be a collection of subsets m (a) S = Ai (b) Ai \ i=1 [ Aj = ; for i = j. A2 A5 Sample Space A1 A3 A4 6 Conditional Probability and Bayes’ Theorem 36 Theorem 2.4. If the events {Bi}m space S and P (Bi) = 0 for i = 1, 2, ..., m, then for any event A in S i=1 constitute a partition of the sample m P (A) = P (Bi) P (A/Bi). i=1 X Proof: Let S be a sample space and A be an event in S. Let {Bi}m any partition of S. Then i=1 be Thus A = m i=1 [ (A \ Bi) . P (A) = m i=1 X m P (A Bi) \ = P (Bi) P (A/Bi) . i=1 X Theorem 2.5. If the events {Bi}m space S and P (Bi)
that P (A) = 0 i=1 constitute a partition of the sample = 0 for i = 1, 2, ..., m, then for any event A in S such P (Bk/A) = P (Bk) P (A/Bk) m i=1 P (Bi) P (A/Bi) k = 1, 2, ..., m. Proof: Using the definition of conditional probability, we get P P (Bk/A) = P (A Bk) \ P (A) . Using Theorem 1, we get P (Bk/A) = P (A Bk) \ m i=1 P (Bi) P (A/Bi) . This completes the proof. P This Theorem is called Bayes Theorem. The probability P (Bk) is called prior probability. The probability P (Bk/A) is called posterior probability. Example 2.8. Two boxes containing marbles are placed on a table. The boxes are labeled B1 and B2. Box B1 contains 7 green marbles and 4 white 6 6 6 Probability and Mathematical Statistics 37 marbles. Box B2 contains 3 green marbles and 10 yellow marbles. The boxes are arranged so that the probability of selecting box B1 is 1 3 and the probability of selecting box B2 is 2 3 . Kathy is blindfolded and asked to select a marble. She will win a color TV if she selects a green marble. (a) What is the probability that Kathy will win the TV (that is, she will select a green marble)? (b) If Kathy wins the color TV, what is the probability that the green marble was selected from the first box? Answer: Let A be the event of drawing a green marble. The prior probabilities are P (B1) = 1 3 and P (B2) = 2 3 . (a) The probability that Kathy will win the TV is P (A) = P (A B1) + P (A B2) \ \ = P (A/B1) P (B1) + P (A/B2) P (B2) 1 3 + ◆ ✓ 3 13 ◆ ✓ 2 3 ◆ = = = 7 11 ✓ 7 33 91 429 ◆ ✓ 2 13 + + 66 429 = 157 429 . (b) Given that Kathy won the TV, the probability that the green marble was selected from B1 is 1/3 2/3 7/11 4/11 3/13 Selecting box B1 Selecting box B2 Green marble Not a green marble Green marble 10/13 Not a green marble Conditional Probability and Bayes’ Theorem 38 P (B1/A) = P (A/B1) P (B1) P (A/B1) P (B1) + P (A/B2) P (B2) = = 7 11 1 3 1 3 3 13 + 2 3 7 11 91 157 . Note that P (A/B1) is the probability of selecting a green marble from B1 whereas P (B1/A) is the probability that the green marble was selected from box B1. Example 2.9. Suppose box A contains 4 red and 5 blue chips and box B contains 6 red and 3 blue chips. A chip is chosen at random from the box A and placed in box B. Finally, a chip is chosen at random from among those now in box B. What is the probability a blue chip was transferred from box A to box B given that the chip chosen from box B is red? Answer: Let E represent the event of moving a blue chip from box A to box B. We want to find the probability of a blue chip which was moved from box A to box B given that the chip chosen from B was red. The probability of choosing a red chip from box A is P (R) = 4 9 and the probability of choosing a blue chip from box A is P (B) = 5 9 . If a red chip was moved from box A to box B, then box B has 7 red chips and 3 blue chips. Thus the probability of choosing a red chip from box B is 7 10 . Similarly, if a blue chip was moved from box A to box B, then the probability of choosing a red chip from box B is 6 10 . Box A red 4/9 blue 5/9 7/10 3/10 6/10 Box B 7 red 3 blue Box B 6 red 4 blue Red chip Not a red chip Red chip 4/10 Not a red chip Probability and Mathematical Statistics 39 Hence, the probability that a blue chip was transferred from box A to box B given that the chip chosen from box B is red is given by P (E/R) = P (R/E) P (E) P (R) 6 10 4 9 5 9 6 10 + 5 9 = = 7 10 15 29 . Example 2.10. Sixty percent of new drivers have had driver education. 
During their first year, new drivers without driver education have probability 0.08 of having an accident, but new drivers with driver education have only a 0.05 probability of an accident. What is the probability a new driver has had driver education, given that the driver has had no accident the first year? Answer: Let A represent the new driver who has had driver education and B represent the new driver who has had an accident in his first year. Let Ac and Bc be the complement of A and B, respectively. We want to find the probability that a new driver has had driver education, given that the driver has had no accidents in the first year, that is P (A/Bc). P (A/Bc) = P (A Bc) \ P (Bc) = P (Bc/A) P (A) P (Bc/A) P (A) + P (Bc/Ac) P (Ac) = [1 [1 P (B/A)] P (A) + [1 P (B/A)] P (A) P (B/Ac)] [1 P (A)] = 40 100 60 100 92 100 = 0.6077. 95 100 60 100 + 95 100 Example 2.11. One-half percent of the population has AIDS. There is a test to detect AIDS. A positive test result is supposed to mean that you Conditional Probability and Bayes’ Theorem 40 have AIDS but the test is not perfect. For people with AIDS, the test misses the diagnosis 2% of the times. And for the people without AIDS, the test incorrectly tells 3% of them that they have AIDS. (a) What is the probability that a person picked at random will test positive? (b) What is the probability that you have AIDS given that your test comes back positive? Answer: Let A denote the event of one who has AIDS and let B denote the event that the test comes out positive. (a) The probability that a person picked at random will test positive is given by P (test positive) = (0.005) (0.98) + (0.995) (0.03) = 0.0049 + 0.0298 = 0.035. (b) The probability that you have AIDS given that your test comes back positive is given by P (A/B) = = = favorable positive branches total positive branches (0.005) (0.98) (0.005) (0.98) + (0.995) (0.03) 0.0049 0.035 = 0.14. 0.005 AIDS 0.995 No AIDS 0.98 0.02 0.03 Test positive Test negative Test positive 0.97 Test negative Remark 2.2. This example illustrates why Bayes’ theorem is so important. What we would really like to know in this situation is a first-stage result: Do you have AIDS? But we cannot get this information without an autopsy. The first stage is hidden. But the second stage is not hidden. The best we can do is make a prediction about the first stage. This illustrates why backward conditional probabilities are so useful. Probability and Mathematical Statistics 41 2.3. Review Exercises 1. Let P (A) = 0.4 and P (A B independent? [ B) = 0.6. For what value of P (B) are A and 2. A die is loaded in such a way that the probability of the face with j dots turning up is proportional to j for j = 1, 2, 3, 4, 5, 6. In 6 independent throws of this die, what is the probability that each face turns up exactly once? 3. A system engineer is interested in assessing the reliability of a rocket composed of three stages. At take off, the engine of the first stage of the rocket must lift the rocket off the ground. If that engine accomplishes its task, the engine of the second stage must now lift the rocket into orbit. Once the engines in both stages 1 and 2 have performed successfully, the engine of the third stage is used to complete the rocket’s mission. The reliability of the rocket is measured by the probability of the completion of the mission. If the probabilities of successful performance of the engines of stages 1, 2 and 3 are 0.99, 0.97 and 0.98, respectively, find the reliability of the rocket. 4. 
Identical twins come from the same egg and hence are of the same sex. Fraternal twins have a 50-50 chance of being the same sex. Among twins the probability of a fraternal set is 1 3 . If the next set of twins are of the same sex, what is the probability they are identical? 3 and an identical set is 2 5. In rolling a pair of fair dice, what is the probability that a sum of 7 is rolled before a sum of 8 is rolled ? 6. A card is drawn at random from an ordinary deck of 52 cards and replaced. This is done a total of 5 independent times. What is the conditional probability of drawing the ace of spades exactly 4 times, given that this ace is drawn at least 4 times? 7. Let A and B be independent events with P (A) = P (B) and P (A 0.5. What is the probability of the event A? [ B) = 8. An urn contains 6 red balls and 3 blue balls. One ball is selected at random and is replaced by a ball of the other color. A second ball is then chosen. What is the conditional probability that the first ball selected is red, given that the second ball was red? Conditional Probability and Bayes’ Theorem 42 9. A family has five children. Assuming that the probability of a girl on each birth was 0.5 and that the five births were independent, what is the probability the family has at least one girl, given that they have at least one boy? 10. An urn contains 4 balls numbered 0 through 3. One ball is selected at random and removed from the urn and not replaced. All balls with nonzero numbers less than that of the selected ball are also removed from the urn. Then a second ball is selected at random from those remaining in the urn. What is the probability that the second ball selected is numbered 3? 11. English and American spelling are rigour and rigor, respectively. A man staying at Al Rashid hotel writes this word, and a letter taken at random from his spelling is found to be a vowel. If 40 percent of the English-speaking men at the hotel are English and 60 percent are American, what is the probability that the writer is an Englishman? 12. A diagnostic test for a certain disease is said to be 90% accurate in that, if a person has the disease, the test will detect with probability 0.9. Also, if a person does not have the disease, the test will report that he or she doesn’t have it with probability 0.9. Only 1% of the population has the disease in question. If the diagnostic test reports that a person chosen at random from the population has the disease, what is the conditional probability that the person, in fact, has the disease? 13. A small grocery store had 10 cartons of milk, 2 of which were sour. If you are going to buy the 6th carton of milk sold that day at random, find the probability of selecting a carton of sour milk. 14. Suppose Q and S are independent events such that the probability that at least one of them occurs is 1 3 and the probability that Q occurs but S does not occur is 1 9 . What is the probabi
lity of S? 15. A box contains 2 green and 3 white balls. A ball is selected at random from the box. If the ball is green, a card is drawn from a deck of 52 cards. If the ball is white, a card is drawn from the deck consisting of just the 16 pictures. (a) What is the probability of drawing a king? (b) What is the probability of a white ball was selected given that a king was drawn? Probability and Mathematical Statistics 43 16. Five urns are numbered 3,4,5,6 and 7, respectively. Inside each urn is n2 dollars where n is the number on the urn. The following experiment is performed: An urn is selected at random. If its number is a prime number the experimenter receives the amount in the urn and the experiment is over. If its number is not a prime number, a second urn is selected from the remaining four and the experimenter receives the total amount in the two urns selected. What is the probability that the experimenter ends up with exactly twentyfive dollars? 17. A cookie jar has 3 red marbles and 1 white marble. A shoebox has 1 red marble and 1 white marble. Three marbles are chosen at random without replacement from the cookie jar and placed in the shoebox. Then 2 marbles are chosen at random and without replacement from the shoebox. What is the probability that both marbles chosen from the shoebox are red? 18. A urn contains n black balls and n white balls. Three balls are chosen from the urn at random and without replacement. What is the value of n if the probability is 1 12 that all three balls are white? 19. An urn contains 10 balls numbered 1 through 10. Five balls are drawn at random and without replacement. Let A be the event that “Exactly two odd-numbered balls are drawn and they occur on odd-numbered draws from the urn.” What is the probability of event A? I have five envelopes numbered 3, 4, 5, 6, 7 all hidden in a box. 20. I pick an envelope – if it is prime then I get the square of that number in dollars. Otherwise (without replacement) I pick another envelope and then get the sum of squares of the two envelopes I picked (in dollars). What is the probability that I will get $25? Conditional Probability and Bayes’ Theorem 44 Probability and Mathematical Statistics 45 Chapter 3 RANDOM VARIABLES AND DISTRIBUTION FUNCTIONS 3.1. Introduction In many random experiments, the elements of sample space are not necessarily numbers. For example, in a coin tossing experiment the sample space consists of S = {Head, Tail}. Statistical methods involve primarily numerical data. Hence, one has to ‘mathematize’ the outcomes of the sample space. This mathematization, or quantification, is achieved through the notion of random variables. Definition 3.1. Consider a random experiment whose sample space is S. A random variable X is a function from the sample space S into the set of real numbers IR such that for each interval I in IR, the set {s I} is an event in S. S | X(s) 2 2 In a particular experiment a random variable X would be some function that assigns a real number X(s) to each possible outcome s in the sample space. Given a random experiment, there can be many random variables. This is due to the fact that given two (finite) sets A and B, the number of distinct functions one can come up with is |B||A|. Here |A| means the cardinality of the set A. Random variable is not a variable. Also, it is not random. Thus someone named it inappropriately. The following analogy speaks the role of the random variable. 
Random variable is like the Holy Roman Empire – it was Random Variables and Distribution Functions 46 not holy, it was not Roman, and it was not an empire. A random variable is neither random nor variable, it is simply a function. The values it takes on are both random and variable. Definition 3.2. The set {x random variable X. 2 IR | x = X(s), s 2 S} is called the space of the The space of the random variable X will be denoted by RX . The space of the random variable X is actually the range of the function X : S IR. ! Example 3.1. Consider the coin tossing experiment. Construct a random variable X for this experiment. What is the space of this random variable X? Answer: The sample space of this experiment is given by S = {Head, Tail}. Let us define a function from S into the set of reals as follows X(Head) = 0 X(T ail) = 1. Then X is a valid map and thus by our definition of random variable, it is a random variable for the coin tossing experiment. The space of this random variable is RX = {0, 1}. Tail Head X Sample Space 0 1 Real line X(head) = 0 and X(tail) = 1 Example 3.2. Consider an experiment in which a coin is tossed ten times. What is the sample space of this experiment? How many elements are in this sample space? Define a random variable for this sample space and then find the space of the random variable. Probability and Mathematical Statistics 47 Answer: The sample space of this experiment is given by S = {s | s is a sequence of 10 heads or tails}. The cardinality of S is |S| = 210. Let X : S defined as follows: ! IR be a function from the sample space S into the set of reals IR X(s) = number of heads in sequence s. Then X is a random variable. This random variable, for example, maps the sequence HHT T T HT T HH to the real number 5, that is X(HHT T T HT T HH) = 5. The space of this random variable is RX = {0, 1, 2, ..., 10}. Now, we introduce some notations. By (X = x) we mean the event {s S | X(s) = x}. Similarly, (a < X < b) means the event {s of the sample space S. These are illustrated in the following diagrams} S A X Sample Space S B X Real line x Sample Space a b Real line (X=x) means the event A (a<X<b) means the event B There are three types of random variables: discrete, continuous, and mixed. However, in most applications we encounter either discrete or continuous random variable. In this book we only treat these two types of random variables. First, we consider the discrete case and then we examine the continuous case. Definition 3.3. If the space of random variable X is countable, then X is called a discrete random variable. Random Variables and Distribution Functions 48 3.2. Distribution Functions of Discrete Random Variables Every random variable is characterized through its probability density function. Definition 3.4. Let RX be the space of the random variable X. The function f : RX ! IR defined by f (x) = P (X = x) is called the probability density function (pdf) of X. Example 3.3. In an introductory statistics class of 50 students, there are 11 freshman, 19 sophomores, 14 juniors and 6 seniors. One student is selected at random. What is the sample space of this experiment? Construct a random variable X for this sample space and then find its space. Further, find the probability density function of this random variable X. Answer: The sample space of this random experiment is Define a function X : S S = {F r, So, Jr, Sr}. IR as follows: ! X(F r) = 1, X(So) = 2 X(Jr) = 3, X(Sr) = 4. Then clearly X is a random variable in S. 
The space of X is given by RX = {1, 2, 3, 4}. The probability density function of X is given by f (1) = P (X = 1) = f (2) = P (X = 2) = f (3) = P (X = 3) = f (4) = P (X = 4) = 11 50 19 50 14 50 6 50 . Example 3.4. A box contains 5 colored balls, 2 black and 3 white. Balls are drawn successively without replacement. If the random variable X is the Probability and Mathematical Statistics 49 number of draws until the last black ball is obtained, find the probability density function for the random variable X. Answer: Let ‘B’ denote the black ball, and ‘W’ denote the white ball. Then the sample space S of this experiment is given by (see the figure below) 2B 3W = { BB, BW B, W BB, BW W B, W BW B, W W BB, BW W W B, W W BW B, W W W BB, W BW W B}. Hence the sample space has 10 points, that is |S| = 10. It is easy to see that the space of the random variable X is {2, 3, 4, 5}. X BB BWB WBB BWWB WBWB WWBB BWWWB WWBWB WWWBB 2 3 4 5 WBWWB Sample Space S Real line Therefore, the probability density function of X is given by f (2) = P (X = 2) = f (4) = P (X = 4) = 1 10 3 10 , , f (3) = P (X = 3) = f (5) = P (X = 5) = 2 10 4 10 . Random Variables and Distribution Functions 50 Thus f (x) = 1 , x 10 x = 2, 3, 4, 5. Example 3.5. A pair of dice consisting of a six-sided die and a four-sided die is rolled and the sum is determined. Let the random variable X denote this sum. Find the sample space, the space of the random variable, and probability density function of X. Answer: The sample space of this random experiment is given by S = {(1, 1) (2, 1) (3, 1) (4, 1) (1, 2) (2, 2) (3, 2) (4, 2) (1, 3) (2, 3) (3, 3) (4, 3) (1, 4) (2, 4) (3, 4) (4, 4) (1, 5) (2, 5) (3, 5) (4, 5) (1, 6) (2, 6) (3, 6) (4, 6)} The space of the random variable X is given by RX = {2, 3, 4, 5, 6, 7, 8, 9, 10}. Therefore, the probability density function of X is given by f (2) = P (X = 2) = f (4) = P (X = 4) = f (6) = P (X = 6) = f (8) = P (X = 8) = 1 24 3 24 4 24 3 24 , , , , f (3) = P (X = 3) = f (5) = P (X = 5) = f (7) = P (X = 7) = f (9) = P (X = 9) = 2 24 4 24 4 24 2 24 f (10) = P (X = 10) = 1 24 . Example 3.6. A fair coin is tossed 3 times. Let the random variable X denote the number of heads in 3 tosses of the coin. Find the sample space, the space of the random variable, and the probability density function of X. Answer: The sample space S of this experiment consists of all binary sequences of length 3, that is S = {T T T, T T H, T HT, HT T, T HH, HT H, HHT, HHH}. Probability and Mathematical Statistics 51 TTT TTH THT HTT THH HTH HHT HHH X 0 1 2 3 Sample Space S Real line The space of this random variable is given by RX = {0, 1, 2, 3}. Therefore, the probability density function of X is given by f (0) = P (X = 0) = f (1) = P (X = 1) = f (2) = P (X = 2) = f (3) = P (X = 3 . This can be written as follows: f (x, 1, 2, 3. The probability density function f (x) of a random variable X completely characterizes it. Some basic properties of a discrete probability density function are summarized below. Theorem 3.1. If X is a discrete r
andom variable with space RX and probability density function f (x), then (a) f (x) (b) 0 for all x in RX , and f (x) = 1. RX Xx 2 Example 3.7. If the probability of a random variable X with space RX = {1, 2, 3, ..., 12} is given by f (x) = k (2x 1), Random Variables and Distribution Functions 52 then, what is the value of the constant k? Answer: 1 = f (x) = RX Xx 2 RX Xx 2 12 k (2x 1) = k (2x 1) x= 144. 12 x x=1 X (12)(13) 2 12 # 12 Hence k = 1 144 . Definition 3.5. The cumulative distribution function F (x) of a random variable X is defined as for all real numbers x. F (x) = P (X x)  Theorem 3.2. If X is a random variable with the space RX , then F (x) = f (t) x Xt  for x RX . 2 Example 3.8. If the probability density function of the random variable X is given by 1 144 (2x 1) for x = 1, 2, 3, ..., 12 then find the cumulative distribution function of X. Answer: The space of the random variable X is given by RX = {1, 2, 3, ..., 12}. Probability and Mathematical Statistics 53 Then F (1) = f (t) = f (1) = 1 Xt  1 144 F (2) = f (t) = f (1) + f (2) = 2 Xt  1 144 + 3 144 = 4 144 F (3) = f (t) = f (1) + f (2) + f (3) = Xt 3  .. ........ .. ........ 1 144 + 3 144 + 5 144 = 9 144 F (12) = f (t) = f (1) + f (2) + · · · + f (12) = 1. 12 Xt  F (x) represents the accumulation of f (t) up to t x.  Theorem 3.3. Let X be a random variable with cumulative distribution function F (x). Then the cumulative distribution function satisfies the followings: ) = 0, ) = 1, and (a) F ( (b) F ( (c) F (x) is an increasing function, that is if x < y, then F (x) 1 1 all reals x, y. F (y) for  The proof of this theorem is trivial and we leave it to the students. Theorem 3.4. If the space RX of the random variable X is given by RX = {x1 < x2 < x3 < · · · < xn}, then f (x1) = F (x1) f (x2) = F (x2) f (x3) = F (x3) .. ........ .. ........ F (x1) F (x2) f (xn) = F (xn) F (xn 1). Random Variables and Distribution Functions 54 F(x4) 1 F(x3) F(x2) F(x1) 0 f(x4) f(x3) f(x2) f(x1) x1 x2 x3 x4 x Theorem 3.2 tells us how to find cumulative distribution function from the probability density function, whereas Theorem 3.4 tells us how to find the probability density function given the cumulative distribution function. Example 3.9. Find the probability density function of the random variable X whose cumulative distribution function is F (x) = 0.00 if x < 1 0.25 if 1  x < 1 0.50 if 1 0.75 if 3 1.00 if >>>>>>>>>>>< >>>>>>>>>>>: Also, find (a) P (X  3), (b) P (X = 3), and (c) P (X < 3). Answer: The space of this random variable is given by RX = { 1, 1, 3, 5}. By the previous theorem, the probability density function of X is given by f ( 1) = 0.25 f (1) = 0.50 f (3) = 0.75 f (5) = 1.00 0.25 = 0.25 0.50 = 0.25 0.75 = 0.25. The probability P (X Hence  3) can be computed by using the definition of F . P (X  3) = F (3) = 0.75. Probability and Mathematical Statistics 55 The probability P (X = 3) can be computed from P (X = 3) = F (3) F (1) = 0.75 0.50 = 0.25. Finally, we get P (X < 3) from P (X < 3) = P (X 1) = F (1) = 0.5.  We close this section with an example showing that there is no one-toone correspondence between a random variable and its distribution function. Consider a coin tossing experiment with the sample space consisting of a head and a tail, that is S = { head, tail }. Define two random variables X1 and X2 from S as follows: X1( head ) = 0 and X1( tail ) = 1 and X2( head ) = 1 and X2( tail ) = 0. 
It is easy to see that both these random variables have the same distribution function, namely FXi(x) = ( 0 1 2 1 if x < 0 if 0 if 1   x < 1 x for i = 1, 2. Hence there is no one-to-one correspondence between a random variable and its distribution function. 3.3. Distribution Functions of Continuous Random Variables A random variable X is said to be continuous if its space is either an interval or a union of intervals. The folllowing definition formally defines a continuous random variable. Definition 3.6. A random variable X is said to be a continuous random ) such that for variable if there exists a continuous function f : IR every set of real numbers A [0, ! 1 P (X 2 A) = f (x) dx. ZA (1) Definition 3.7. The function f in (1) is called the probability density function of the continuous random variable X. Random Variables and Distribution Functions 56 It can be easily shown that for every probability density function f , f (x)dx = 1. 1 Z 1 Example 3.10. Is the real valued function f : IR IR defined by ! f (x) = ⇢ 2 2 x 0 if 1 < x < 2 otherwise, a probability density function for some random variable X? Answer: We have to show that f is nonnegative and the area under f (x) is unity. Since the domain of f is the interval (0, 1), it is clear that f is nonnegative. Next, we calculate 1 f (x) dx = Z 1 2 1 Z 2 x 2 dx .   Thus f is a probability density function. Example 3.11. Is the real valued function f : IR IR defined by ! f (x) = 1 + |x| 0 ⇢ 1 < x < 1 if otherwise, a probability density function for some random variable X? Probability and Mathematical Statistics 57 Answer: It is easy to see that f is nonnegative, that is f (x) 0, since f (x) = 1 + |x|. Next we show that the area under f is not unity. For this we compute 1 1 f (x) dx = (1 + |x|) dx 1 1 x) dx + (1 + x) dx 0 Z + x +  1 2 x2 x2 . Thus f is not a probability density function for some random variable X. Example 3.12. For what value of the constant c, the real valued function f : IR IR given by ! f (x) = c 1 + (x ✓)2 , < x < , 1 1 where ✓ is a real parameter, is a probability density function for random variable X? Answer: Since f is nonnegative, we see that c 0. To find the value of c, Random Variables and Distribution Functions 58 we use the fact that for pdf the area is unity, that is 1 = 1 f (x) dx ✓)2 dx Z 1 1 Z 1 1 = = c 1 + (x c 1 + z2 dz 1 z 1( 1 1 tan = c tan ⇡. tan 1( ) 1 ⇤ Hence c = 1 ⇡ and the density function becomes f (x) = 1 ⇡ [1 + (x , ✓)2] < x < . 1 1 This density function is called the Cauchy distribution function with parameter ✓. If a random variable X has this pdf then it is called a Cauchy random variable and is denoted by X CAU (✓). ⇠ This distribution is symmetrical about ✓. Further, it achieves it maximum at x = ✓. The following figure illustrates the symmetry of the distribution for ✓ = 2. Example 3.13. For what value of the constant c, the real valued function f : IR IR given by ! f (x) = c if a x b   ( 0 otherwise, Probability and Mathematical Statistics 59 where a, b are real constants, is a probability density function for random variable X? Answer: Since f is a pdf, k is nonnegative. Further, since the area under f is unity, we get 1 = 1 f (x) dx Z 1 b = c dx a Z = c [x]b a = c [b a]. Hence c = 1 a , and the pdf becomes b 1 f (x) = b a ( 0 if a x b   otherwise. This probability density function is called the uniform distribution on If a random variable X has this pdf then it is called a the interval [a, b]. uniform random variable and is denoted by X U N IF (a, b). 
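As a quick numerical sanity check (an illustrative sketch, not part of the original text, assuming SciPy is available), one can verify that the normalizing constant c = 1/π found in Example 3.12 really makes the Cauchy density integrate to one for any θ, and likewise that the uniform density of Example 3.13 integrates to one on [a, b].

```python
import math
from scipy.integrate import quad

def cauchy_pdf(x: float, theta: float) -> float:
    """Cauchy density f(x) = 1 / (pi * (1 + (x - theta)^2))."""
    return 1.0 / (math.pi * (1.0 + (x - theta) ** 2))

# Total probability is 1 regardless of the location parameter theta.
area_cauchy, _ = quad(cauchy_pdf, -math.inf, math.inf, args=(2.0,))
print(area_cauchy)  # approximately 1.0

# The uniform density 1/(b - a) on [a, b] also integrates to 1.
a, b = 2.0, 5.0
area_unif, _ = quad(lambda x: 1.0 / (b - a), a, b)
print(area_unif)  # 1.0 up to numerical error
```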
The following is a graph of the probability density function of a random variable on the interval [2, 5]. ⇠ Definition 3.8. Let f (x) be the probability density function of a continuous random variable X. The cumulative distribution function F (x) of X is defined as x F (x) = P (X x) =  f (t) dt. Z 1 The cumulative distribution function F (x) represents the area under the , x) (see figure below). probability density function f (x) on the interval ( 1 Random Variables and Distribution Functions 60 Like the discrete case, the cdf is an increasing function of x, and it takes value 0 at negative infinity and 1 at positive infinity. Theorem 3.5. If F (x) is the cumulative distribution function of a continuous random variable X, the probability density function f (x) of X is the derivative of F (x), that is Proof: By Fundamental Theorem of Calculus, we get d dx F (x) = f (x). d dx (F (x)) = d dx x f (t) dt ◆ = f (x) = f (x). 1 ✓Z dx dx This theorem tells us that if the random variable is continuous, then we can find the pdf given cdf by taking the derivative of the cdf. Recall that for discrete random variable, the pdf at a point in space of the random variable can be obtained from the cdf by taking the difference between the cdf at the point and the cdf immediately below the point. Example 3.14. What is the cumulative distribution function of the Cauchy random variable with parameter ✓? Answer: The cdf of X is given by x F (x) = f (t) dt Z 1 x Z 1 x Z 1 1 ⇡ = = = 1 ⇡ [1 + (t ✓ 1 ⇡ [1 + z2] dt ✓)2] dz tan 1 (x ✓) + 1 2 . Probability and Mathematical Statistics 61 Example 3.15. What is the probability density function of the random variable whose cdf is F (x) = 1 1 + e x , < x < ? 1 1 Answer: The pdf of the random variable is given by f (x) = = = d dx d dx d dx F (x) 1 x 1 + e ✓ x 1 + e) 1) (1 + e 2 d dx = x e (1 + e x)2 . x 1 + e Next, we briefly discuss the problem of finding probability when the cdf is given. We summarize our results in the following theorem. Theorem 3.6. Let X be a continuous random variable whose cdf is F (x). Then followings are true: (a) P (X < x) = F (x), (b) P (X > x) = 1 (c) P (X = x) = 0 , and (d) P (a < X < b) = F (b) F (x), F (a). 3.4. Percentiles for Continuous Random Variables In this section, we discuss various percentiles of a continuous random variable. If the random variable is discrete, then to discuss percentile, we have to know the order statistics of samples. We shall treat the percentile of discrete random variable in Chapter 13. Definition 3.9. Let p be a real number between 0 and 1. A 100pth percentile of the distribution of a random variable X is any real number q satisfying P (X q) p   and P (X > q) p. 1  A 100pth percentile is a measure of location for the probability distribution in the sense that q divides the distribution of the probability mass into Random Variables and Distribution Functions 62 two parts, one having probability mass p and other having probabil
ity mass 1 p (see diagram below). Example 3.16. If the random variable X has the density function ex 2 f (x) = 8 < 0 then what is the 75th percentile of X? : for x < 2 otherwise, Answer: Since 100pth = 75, we get p = 0.75. By definition of percentile, we have q 0.75 = p = f (x) dx Z 1 q = ex 2 dx 2 Z 1 ex = 2. = eq ⇥ q 1 ⇤ From this solving for q, we get the 75th percentile to be q = 2 + ln 3 4 . ◆ ✓ Example 3.17. What is the 87.5 percentile for the distribution with density function f (x) = |x| e Answer: Note that this density function is symmetric about the y-axis, that is f (x) = f ( x). Probability and Mathematical Statistics 63 Hence 0 Z 1 f (x) dx = 1 2 . Now we compute the 87.5th percentile q of the above distribution. 87.5 100 = = = = = q Z 1 0 Z 1 0 Z 1 1 + 2 1 2 + f (x) dx 1 2 e |x| dx + 1 2 q 0 Z 1 2 ex dx + 0 Z x dx e e q |x| dx e x dx Therefore solving for q, we get 0.125 = 1 2 q e q = ln 25 100 ◆ ✓ = ln 4. Hence the 87.5th percentile of the distribution is ln 4. Example 3.18. Let the continuous random variable X have the density function f (x) as shown in the figure below: Random Variables and Distribution Functions 64 What is the 25th percentile of the distribution of X? Answer: Since the line passes through the points (0, 0) and (a, 1 tion f (x) is equal to 4 ), the func- f (x) = x. 1 4a Since f (x) is a density function the area under f (x) should be unity. Hence a 1 = f (x) dx 1 4a x dx 0 Z a = = = 0 Z 1 8a a 8 . a2 Thus a = 8. Hence the probability density function of X is f (x) = 1 32 x. Now we want to find the 25th percentile. 25 100 = = = q 0 Z q 0 Z 1 64 f (x) dx x dx 1 32 q2. Hence q = p16, that is the 25th percentile of the above distribution is 4. Definition 3.10. The 25th and 75th percentiles of any distribution are called the first and the third quartiles, respectively. Probability and Mathematical Statistics 65 Definition 3.11. The 50th percentile of any distribution is called the median of the distribution. The median divides the distribution of the probability mass into two equal parts (see the following figure). If a probability density function f (x) is symmetric about the y-axis, then the median is always 0. Example 3.19. A random variable is called standard normal if its probability density function is of the form f (x) = 1 p2⇡ e 1 2 x2 , < x < . 1 1 What is the median of X? Answer: Notice that f (x) = f ( is symmetric about the y-axis. Thus the median of X is 0. x), hence the probability density function Definition 3.12. A mode of the distribution of a continuous random variable X is the value of x where the probability density function f (x) attains a relative maximum (see diagram). y 0 Relative Maximum f(x) mode mode x Random Variables and Distribution Functions 66 A mode of a random variable X is one of its most probable values. A random variable can have more than one mode. Example 3.20. Let X be a uniform random variable on the interval [0, 1], that is X U N IF (0, 1). How many modes does X have? ⇠ Answer: Since X ⇠ U N IF (0, 1), the probability density function of X is f (x) = 1 if 0 x 1   ( 0 otherwise. Hence the derivative of f (x) is f 0(x) = 0 (0, 1). x 2 Therefore X has infinitely many modes. Example 3.21. Let X be a Cauchy random variable with parameter ✓ = 0, that is X CAU (0). What is the mode of X? ⇠ Answer: Since X ⇠ CAU (0), the probability density function of f (x) is f (x) = 1 ⇡ (1 + x2) 1 < x < . 1 Hence f 0(x) = 2x ⇡ (1 + x2)2 . Setting this derivative to 0, we get x = 0. Thus the mode of X is 0. Example 3.22. 
Let X be a continuous random variable with density function x2 e bx f (x) = 8 < 0 where b > 0. What is the mode of X? : for x 0 otherwise, x2be bx 0 = df dx = 2xe bx bx)x = 0. = (2 x = 0 or x = 2 b . Answer: Hence Probability and Mathematical Statistics 67 Thus the mode of X is 2 b . The graph of the f (x) for b = 4 is shown below. Example 3.23. A continuous random variable has density function f (x) = 3x2 ✓3 8 < 0 for 0 x ✓   otherwise, for some ✓ > 0. What is the ratio of the mode to the median for this distribution? : Answer: For fixed ✓ > 0, the density function f (x) is an increasing function. Thus, f (x) has maximum at the right end point of the interval [0, ✓]. Hence the mode of this distribution is ✓. Next we compute the median of this distribution x3 ✓3 q3 ✓3   f (x) dx 3x2 ✓3 dx q 0 . Hence q = 2 1 3 ✓. Thus the ratio of the mode of this distribution to the median is mode median = ✓ 1 3 ✓ 2 = 3p2. Random Variables and Distribution Functions 68 Example 3.24. A continuous random variable has density function f (x) = 3x2 ✓3 8 < 0 for 0 x ✓   otherwise, for some ✓ > 0. What is the probability of X less than the ratio of the mode to the median of this distribution? : Answer: In the previous example, we have shown that the ratio of the mode to the median of this distribution is given by a := mode median = 3p2. Hence the probability of X less than the ratio of the mode to the median of this distribution is a P (X < a) = f (x) dx 0 Z a 3x2 ✓3 dx a 0 0 Z x3 ✓3  a3 ✓3 3p2 ✓3 = 3 2 ✓3 . = = = = 3.5. Review Exercises 1. Let the random variable X have the density function f (x) = 8 < 0 k x for 0 x   2 k q elsewhere. If the mode of this distribution is at x = p2 : 4 , then what is the median of X? 2. The random variable X has density function f (x) = c xk+1 (1 x)k for 0 < x < 1 8 < 0 otherwise, : where c > 0 and 1 < k < 2. What is the mode of X? Probability and Mathematical Statistics 69 3. The random variable X has density function (k + 1) x2 for 0 < x < 1 f (x) = 8 < 0 otherwise, where k is a constant. What is the median of X? : 4. What are the median, and mode, respectively, for the density function f (x) = 1 ⇡ (1 + x2) , < x < ? 1 1 5. What is the 10th percentile of the random variable X whose probability density function is 1 ✓ e x ✓ if x 0, ✓ > 0 f (x) = ( 0 elsewhere? 6. What is the median of the random variable X whose probability density function is 1 2 e x 2 if x 0 f (x) = 7. A continuous random variable X has the density ( 0 elsewhere? f (x) = 3 x2 8 8 < 0 for 0 x 2   otherwise. What is the probability that X is greater than its 75th percentile? : 8. What is the probability density function of the random variable X if its cumulative distribution function is given by F (x) = 8 >< 0.0 if x < 2 0.5 if 2 0.7 if 3 1.0 if x x < 3 x < ⇡ ⇡?   9. Let the distribution of X for x > 0 be >: F (x) = 1 3 Xk=0 xk e x k! . Random Variables and Distribution Functions 70 What is the density function of X for x > 0? 10. Let X be a random variable with cumulative distribution function F (x) = What is the P 0 eX   4 ? x e for x > 0 for x 0.  1 0 8 < : 11. Let X be a continuous random variable with density function f (x) = a x2 e 10 x for x 0 8 < 0 otherwise, where a > 0. What is the probability of X greater than or equal to the mode of X? 12. Let the random variable X have the density function : k x for 0 x   2 k q elsewhere. f (x) = 8 < 0 : If the mode of this distribution is at x = p2 X less than the median of X? 4 , then what is the probability of 13. 
The random variable X has density function (k + 1) x2 for 0 < x < 1 f (x) = 8 < 0 otherwise, where k is a constant. What is the probability of X between the first and third quartiles? : 14. Let X be a random variable having continuous cumulative distribution function F (x). What is the cumulative distribution function Y = max(0, X)? 15. Let X be a random variable with probability density function f (x) = 2 3x for x = 1, 2, 3, .... What is the probability that X is even? Probability and Mathematical Statistics 71 16. An urn contains 5 balls numbered 1 through 5. Two balls are selected at random without replacement from the urn. If the random variable X denotes the sum of the numbers on the 2 balls, then what are the space and the probability density function of X? 17. A pair of six-sided dice is rolled and the sum is determined. If the random variable X denotes the sum of the numbers rolled, then what are the space and the probability density function of X? 18. Five digit codes are selected at random from the set {0, 1, 2, ..., 9} with replacement. If the random variable X denotes the number of zeros in randomly chosen codes, then what are the space and the probability density function of X? 19. A urn contains 10 coins of which 4 are counterfeit. Coins are removed from the urn, one at a time, until all counterfeit coins are found. If the random variable X denotes the number of coins removed to find the first counterfeit one, then what are the space and the probability density function of X? 20. Let X be a random variable with probability density function f (x) = 2c 3x for x = 1, 2, 3, 4, ..., 1 for some constant c. What is the value of c? What is the probability that X is even? 21. If the random variable X possesses the density function f (x) = cx 0 ⇢ x if 0   otherwise, 2 then what is the value of c for which f (x) is a probability density function? What is the cumulative distribution function of X. Graph the functions f (x) and F (x). Use F (x) to compute P (1  22. The length of time required by students to complete a 1-hour exam is a random variable with a pdf given by 2). X  f (x) = cx2 + x 0 ⇢ x if 0   otherwise, 1 then what the probability a student finishes in less than a half hour? Random Variables and Distribution Functions 72 23. What is the probability of, when blindfolded, hitting a circle inscribed on a square wall? 24. Let f (x) be a continuous probability density function. Show that, for is also a probability every < µ < 1 density function. and > 0, the function 1 f 1 µ x 25. Let X be a random variable with probability density function f (x) and cumulative distribution function F (x). True or False? (a) f (x) can’t be larger than 1. (b) F (x) can’t be larger than 1. (c) f (x) can’t decrease. (d) F (x) can’t decrease. (e) f (x) can’t be negative. (f) F (x) can’t be negative. (g) Area under
f must be 1. (h) Area under F must be 1. (i) f can't jump. (j) F can't jump.

Chapter 4

MOMENTS OF RANDOM VARIABLES AND CHEBYCHEV INEQUALITY

4.1. Moments of Random Variables

In this chapter, we introduce the concepts of various moments of a random variable. Further, we examine the expected value and the variance of random variables in detail. We shall conclude this chapter with a discussion of Chebychev's inequality.

Definition 4.1. The nth moment about the origin of a random variable X, denoted by E(X^n), is defined to be

    E(X^n) = Σ_{x ∈ R_X} x^n f(x)           if X is discrete,
    E(X^n) = ∫_{−∞}^{∞} x^n f(x) dx         if X is continuous,

for n = 0, 1, 2, 3, ..., provided the right side converges absolutely.

If n = 1, then E(X) is called the first moment about the origin. If n = 2, then E(X²) is called the second moment of X about the origin. In general, these moments may or may not exist for a given random variable. If, for a random variable, a particular moment does not exist, then we say that the random variable does not have that moment. For these moments to exist, one requires absolute convergence of the sum or the integral. Next, we shall define two important characteristics of a random variable, namely the expected value and the variance. Occasionally E(X^n) will be written as E[X^n].

4.2. Expected Value of Random Variables

A random variable X is characterized by its probability density function, which describes the relative likelihood of its assuming one value over another. In Chapter 3, we have seen that given a probability density function f of a random variable X, one can construct the distribution function F of it through summation or integration. Conversely, the density function f(x) can be obtained as the marginal value or derivative of F(x). The density function can be used to infer a number of characteristics of the underlying random variable. The two most important attributes are measures of location and dispersion. In this section, we treat the measure of location and treat the other measure in the next section.

Definition 4.2. Let X be a random variable with space R_X and probability density function f(x). The mean µ_X of the random variable X is defined as

    µ_X = Σ_{x ∈ R_X} x f(x)           if X is discrete,
    µ_X = ∫_{−∞}^{∞} x f(x) dx         if X is continuous,

if the right-hand side exists.

The mean of a random variable is a composite of its values weighted by the corresponding probabilities. The mean is a measure of central tendency: the value that the random variable takes "on average." The mean is also called the expected value of the random variable X and is denoted by E(X). The symbol E is called the expectation operator. The expected value of a random variable may or may not exist.

Example 4.1. If X is a uniform random variable on the interval (2, 7), then what is the mean of X?

Answer: The density function of X is

    f(x) = 1/5  if 2 < x < 7,    f(x) = 0 otherwise.

Thus the mean, or expected value, of X is

    µ_X = E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_{2}^{7} x (1/5) dx
        = (1/10) [x²]_{2}^{7} = (1/10)(49 − 4) = 45/10 = 9/2 = (2 + 7)/2.

In general, if X ∼ UNIF(a, b), then E(X) = (a + b)/2.

Example 4.2. If X is a Cauchy random variable with parameter θ, that is, X ∼ CAU(θ), then what is the expected value of X?

Answer: We want to find the expected value of X if it exists.
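Whether this expectation exists comes down to the absolute-convergence question examined next. As a quick numerical illustration (a sketch of my own in Python, taking θ = 0 for simplicity and assuming SciPy is available; the truncation points are arbitrary), one can watch the truncated integral of |x| f(x) keep growing instead of settling down:

```python
# Sketch: numerical preview that E(X) fails to exist for the Cauchy
# density f(x) = 1 / (pi * (1 + x^2))  (parameter theta = 0 assumed).
from scipy.integrate import quad
import numpy as np

def cauchy_pdf(x):
    return 1.0 / (np.pi * (1.0 + x**2))

# The integral of |x| f(x) over [-T, T] equals (1/pi) * ln(1 + T^2),
# so it grows without bound as T increases: the mean is undefined.
for T in (10, 100, 1000, 10000):
    val, _ = quad(lambda x: abs(x) * cauchy_pdf(x), -T, T)
    print(f"T = {T:6d}   integral of |x| f(x) over [-T, T] = {val:.3f}")
```

The printed values grow roughly like (1/π) ln(1 + T²), in line with the analytic computation that follows.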
The expected value of X will exist if the integral ∫_{IR} x f(x) dx converges absolutely, that is, if

    ∫_{IR} |x f(x)| dx < ∞.

If this integral diverges, then the expected value of X does not exist. Hence, let us find out whether ∫_{IR} |x f(x)| dx converges or not:

    ∫_{IR} |x f(x)| dx = ∫_{−∞}^{∞} |x| / (π[1 + (x − θ)²]) dx
                       = ∫_{−∞}^{∞} |z + θ| / (π[1 + z²]) dz        (substituting z = x − θ)
                       ≥ ∫_{−∞}^{∞} |z| / (π[1 + z²]) dz − |θ|      (since |z + θ| ≥ |z| − |θ|)
                       = (2/π) ∫_{0}^{∞} z / (1 + z²) dz − |θ|
                       = (1/π) lim_{b→∞} ln(1 + b²) − |θ|
                       = ∞.

Since the above integral does not exist, the expected value for the Cauchy distribution also does not exist.

Remark 4.1. Indeed, it can be shown that for a random variable X with the Cauchy distribution, E(X^n) does not exist for any natural number n. Thus, Cauchy random variables have no moments at all.

Example 4.3. If the probability density function of the random variable X is

    f(x) = (1 − p)^{x−1} p   if x = 1, 2, 3, 4, ...,    f(x) = 0 otherwise,

then what is the expected value of X?

Answer: The expected value of X is

    E(X) = Σ_{x ∈ R_X} x f(x)
         = Σ_{x=1}^{∞} x (1 − p)^{x−1} p
         = p Σ_{x=1}^{∞} [ −d/dp (1 − p)^x ]
         = −p (d/dp) [ Σ_{x=1}^{∞} (1 − p)^x ]
         = −p (d/dp) [ (1 − p)/p ]
         = −p ( −1/p² )
         = 1/p.

Hence the expected value of X is the reciprocal of the parameter p.

Definition 4.3. A random variable X whose probability density function is given by

    f(x) = (1 − p)^{x−1} p   if x = 1, 2, 3, 4, ...,    f(x) = 0 otherwise,

is called a geometric random variable and is denoted by X ∼ GEO(p).

Example 4.4. A couple decides to have 3 children. If none of the 3 is a girl, they will try again; and if they still don't get a girl, they will try once more. If the random variable X denotes the number of children the couple will have following this scheme, then what is the expected value of X?

Answer: Since the couple can have 3 or 4 or 5 children, the space of the random variable X is R_X = {3, 4, 5}. The probability density function of X is given by

    f(3) = P(X = 3) = P(at least one girl among the first 3)
         = 1 − P(no girls) = 1 − P(3 boys in 3 tries)
         = 1 − (1/2)³ = 14/16,

    f(4) = P(X = 4) = P(3 boys and 1 girl on the last try)
         = (1/2)³ (1/2) = 1/16,

    f(5) = P(X = 5) = P(4 boys and 1 girl on the last try) + P(5 boys in 5 tries)
         = (1/2)⁴ (1/2) + (1/2)⁵ = 1/16.

Hence, the expected value of the random variable is

    E(X) = Σ_{x ∈ R_X} x f(x) = 3 f(3) + 4 f(4) + 5 f(5)
         = 3 (14/16) + 4 (1/16) + 5 (1/16)
         = (42 + 4 + 5)/16 = 51/16 = 3 3/16.

Example 4.5. A lot of 8 TV sets includes 3 that are defective. If 4 of the sets are chosen at random for shipment to a hotel, how many defective sets can they expect?

Answer: Let X be the random variable representing the number of defective TV sets in a shipment of 4. Then the space of the random variable X is R_X = {0, 1, 2, 3}, and the probability density function of X is given by

    f(x) = P(X = x) = P(x defective TV sets in a shipment of four)
         = (3 choose x) (5 choose 4 − x) / (8 choose 4),   x = 0, 1, 2, 3.

Hence, we have

    f(0) = 5/70,   f(1) = 30/70,   f(2) = 30/70,   f(3) = 5/70.

Therefore, the expected value of X is given by

    E(X) = Σ_{x=0}^{3} x f(x) = f(1) + 2 f(2) + 3 f(3)
         = 30/70 + 2 (30/70) + 3 (5/70)
         = (30 + 60 + 15)/70 = 105/70 = 1.5.
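The expectations worked out in Examples 4.3 through 4.5 are easy to confirm by brute-force summation. The following sketch (my own check in Python, standard library only; the cutoff 10,000 for the geometric series is an arbitrary truncation) reproduces the values 1/p, 51/16, and 1.5:

```python
# Sketch: confirming the discrete expectations above by direct summation.
from math import comb

# Geometric pmf f(x) = (1-p)^(x-1) * p, x = 1, 2, ...; E(X) should be 1/p.
# The infinite sum is truncated at a large cutoff.
p = 0.3
mean_geo = sum(x * (1 - p)**(x - 1) * p for x in range(1, 10_000))
print(mean_geo, 1 / p)                        # both approximately 3.3333

# Number of children (Example 4.4): pmf on {3, 4, 5}.
f_children = {3: 14/16, 4: 1/16, 5: 1/16}
print(sum(x * px for x, px in f_children.items()))   # 51/16 = 3.1875

# Defective TV sets (Example 4.5): counting-based pmf on {0, 1, 2, 3}.
f_tv = {x: comb(3, x) * comb(5, 4 - x) / comb(8, 4) for x in range(4)}
print(sum(x * px for x, px in f_tv.items()))         # 1.5
```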
Probability and Mathematical Statistics 81 Remark 4.3. Since they cannot possibly get 1.5 defective TV sets, it should be noted that the term “expect” is not used in its colloquial sense. Indeed, it should be interpreted as an average pertaining to repeated shipments made under given conditions. Now we prove a result concerning the expected value operator E. Theorem 4.1. Let X be a random variable with pdf f (x). If a and b are any two real numbers, then E(aX + b) = a E(X) + b. Proof: We will prove only for the continuous case. E(aX + b) = = 1 Z 1 1 (a x + b) f (x) dx a x f (x) dx + 1 b f (x) dx Z = a 1 1 Z x f (x) dx + b 1 Z 1 = aE(X) + b. To prove the discrete case, replace the integral by summation. This completes the proof. 4.3. Variance of Random Variables The spread of the distribution of a random variable X is its variance. Definition 4.4. Let X be a random variable with mean µX . The variance of X, denoted by V ar(X), is defined as V ar(X) = E [ X µX ]2 . It is also denoted by 2 X . The positive square root of the variance is called the standard deviation of the random variable X. Like variance, the standard deviation also measures the spread. The following theorem tells us how to compute the variance in an alternative way. Theorem 4.2. If X is a random variable with mean µX and variance 2 X , then X = E(X 2) 2 ( µX )2. Moments of Random Variables and Chebychev Inequality 82 Proof: (X 2) = E(X 2) = E(X 2) µX ]2 2 µX X + µ2 X 2 µX E(X) + ( µX )2 2 µX µX + ( µX )2 ( µX )2. Theorem 4.3. If X is a random variable with mean µX and variance 2 X , then V ar(aX + b) = a2 V ar(X), where a and b are arbitrary real constants. Proof: V ar(a X + b) = E [ (a X + b) µaX+b ]2 ⌘ E(a X + b) ] µX+ µX ]2 µX ]2 ⌘ ⌘ = E ⇣ a2 [ X ⇣ = a2 E [ X ⇣ = a2 V ar(X). ⌘ b ]2 ⌘ Example 4.6. Let X have the density function f (x) = 2 x k2 for 0 x k   ( 0 otherwise. For what value of k is the variance of X equal to 2? Answer: The expected value of X is k E(X) = x f (x) dx x 2 x k2 dx 0 Z k 0 Z 2 3 k. = = Probability and Mathematical Statistics 83 E(X 2) = k 0 Z k x2 f (x) dx x2 2 x k2 dx = = 0 Z 2 4 k2. Hence, the variance is given by V ar(X) = E(X 2) = = 2 4 1 18 k2 k2. ( µX )2 k2 4 9 Since this variance is given to be 2, we get 1 18 k2 = 2 and this implies that k = ±6. But k is given to be greater than 0, hence k must be equal to 6. Example 4.7. If the probability density function of the random variable is f (x) = 1 0 8 < |x| for |x| < 1 otherwise, then what is the variance of X? : Answer: Since V ar(X) = E(X 2) moments of X. The first moment of X is given by µ2 X , we need to find the first and second µX = E(X) 1 x f (x) dx Z 1 1 x (1 |x|) dx x (1 + x) dx + x (x + x2) dx + . x) dx 0 Z 1 0 Z x2) dx (x Moments of Random Variables and Chebychev Inequality 84 The second moment E(X 2) of X is given by E(X 2) = 1 x2 f (x) dx Z 1 1 x2 (1 |x|) dx . x2 (1 + x) dx + 1 x2
(1 x) dx (x2 + x3) dx + (x2 x3) dx 0 Z 1 0 Z Thus, the variance of X is given by V ar(X) = E(X 2) µ2 . Example 4.8. Suppose the random variable X has mean µ and variance 2 > 0. What are the values of the numbers a and b such that a + bX has mean 0 and variance 1? Answer: The mean of the random variable is 0. Hence 0 = E(a + bX) = a + b E(X) = a + b µ. Thus a = b µ. Similarly, the variance of a + bX is 1. That is 1 = V ar(a + bX) = b2 V ar(X) = b2 2. Probability and Mathematical Statistics 85 Hence or b = 1 b = 1 and a = µ and a = µ . Example 4.9. Suppose X has the density function f (x) = 3 x2 0 ⇢ for 0 < x < 1 otherwise. What is the expected area of a random isosceles right triangle with hypotenuse X? Answer: Let ABC denote this random isosceles right triangle. Let AC = x. Then x2 4 The expected area of this random triangle is given by Area of ABC = x p2 1 2 = AB = BC = x p2 x p2 E(area of random ABC) = 1 x2 4 0 Z 3 x2 dx = 3 20 . A B The expected area of ABC is 0.15 x C Moments of Random Variables and Chebychev Inequality 86 For the next example, we need these following results. For 1 < x < 1, let g(x) = a xk = 1 Xk=0 a 1 . x g0(x) = 1 Xk=1 a k xk 1 = a (1 x)2 , Then and g00(x) = 1 Xk=2 a k (k 1) xk 2 = 2 a x)3 . (1 Example 4.10. If the probability density function of the random variable X is f (x) = (1 8 < 0 p)x 1 p if x = 1, 2, 3, 4, ..., 1 otherwise, then what is the variance of X? : Answer: We want to find the variance of X. But variance of X is defined as V ar(X) = E X 2 [ E(X) ]2 = E(X(X 1)) + E(X) [ E(X) ]2 . We write the variance in the above manner because E(X 2) has no closed form 1)). solution. However, one can find the closed form solution of E(X(X From Example 4.3, we know that E(X) = 1 p . Hence, we now focus on finding the second factorial moment of X, that is E(X(X 1)). E(X(X 1)) = = = 1 x=1 X 1 x (x x (x x=2 X 2 p (1 (1 (1 1) (1 1) (1 p)x 1 p p) (1 p)x 2 p p) p))3 = 2 (1 p2 p) . Hence V ar(X) = E(X(X 1)) + E(X) [ E(X) ]2 = 2 (1 p2 p) + 1 p 1 p2 = 1 p2 p Probability and Mathematical Statistics 87 4.4. Chebychev Inequality We have taken it for granted, in section 4.2, that the standard deviation (which is the positive square root of the variance) measures the spread of a distribution of a random variable. The spread is measured by the area between “two values”. The area under the pdf between two values is the probability of X between the two values. If the standard deviation measures the spread, then should control the area between the “two values”. It is well known that if the probability density function is standard nor- mal, that is f (x) = 1 p2⇡ e 1 2 x2 , < x < , 1 1 then the mean µ = 0 and the standard deviation = 1, and the area between the values µ and µ + is 68%. Similarly, the area between the values µ 2 and µ + 2 is 95%. In this k and way, the standard deviation controls the area between the values µ µ + k for some k if the distribution is standard normal. If we do not know the probability density function of a random variable, can we find an estimate k and µ + k for some given k? This of the area between the values µ problem was solved by Chebychev, a well known Russian mathematician. He k, µ + k] is at least proved that the area under f (x) on the interval [µ 2. This is equivalent to saying the probability that a random variable 1 is within k standard deviations of the mean is at least 1 k 2. k Theorem 4.4 (Chebychev Inequality). Let X be a random variable with probability density function f (x). 
If µ and > 0 are the mean and standard deviation of X, then P (|X µ| < k ) 1 1 k2 for any nonzero real positive constant k. Moments of Random Variables and Chebychev Inequality 88 at least 1-k - 2 Mean Mean - k SD Mean + k SD Proof: We assume that the random variable X is continuous. If X is not continuous we replace the integral by summation in the following proof. From the definition of variance, we have the following: 2 = 1 (x Z 1 µ)2 f (x) dx µ k = Z 1 µ)2 f (x) dx + (x µ+k µ Z k µ)2 f (x) dx (x 1 + µ+k Z (x µ)2 f (x) dx. Since, µ+k µ k (x µ)2 f (x) dx is positive, we get from the above R 2 µ k Z 1 (x µ)2 f (x) dx + 1 µ+k Z µ)2 f (x) dx. (4.1) (x If x ( 1 , µ 2 k ), then Hence for x µ  k . k µ x  That is (µ x)2 k2 2. Similarly, if x k2 2 (µ  x)2. (µ + k , ), then 1 2 µ + k x Probability and Mathematical Statistics 89 Therefore k2 2 (µ  x)2. Thus if x (µ 62 k , µ + k ), then x)2 (µ k2 2. (4.2) Using (4.2) and (4.1), we get 2 k22 µ k "Z 1 f (x) dx + 1 µ+k Z f (x) dx . # Hence Therefore Thus which is µ k 1 k2 "Z 1 f (x) dx + 1 µ+k Z f (x) dx . # 1 k2 P (X µ  k ) + P (X µ + k ). 1 k2 P (|X µ| k ) P (|X µ| < k ) 1 1 k2 . This completes the proof of this theorem. The following integration formula 1 xn (1 0 Z x)m dx = n! m! (n + m + 1)! will be used in the next example. In this formula m and n represent any two positive integers. Example 4.11. Let the probability density function of a random variable X be f (x) = ( 0 630 x4 (1 x)4 if 0 < x < 1 otherwise. What is the exact value of P (|X of P (|X  2 ) when one uses the Chebychev inequality? µ| µ| 2 )? What is the approximate value  Moments of Random Variables and Chebychev Inequality 90 Answer: First, we find the mean and variance of the above distribution. The mean of X is given by 1 E(X) = x f (x) dx 0 Z 1 = 0 Z = 630 = 630 = 630 630 x5 (1 x)4 dx 5! 4! (5 + 4 + 1)! 5! 4! 10! 2880 3628800 = = 630 1260 1 2 . Similarly, the variance of X can be computed from 1 V ar(X) = 0 Z 1 = 0 Z = 630 = 630 = 630 x2 f (x) dx µ2 X 630 x6 (1 x)4 dx 1 4 1 4 6! 4! (6 + 4 + 1)! 6! 4! 1 4 11! 6 1 22 4 11 44 = = 12 44 1 44 . Therefore, the standard deviation of X is = 1 44 r = 0.15. Probability and Mathematical Statistics 91 Thus P (|X µ|  2 ) = P (|X 0.5|  0.3) = P ( 0.3 X  0.5  0.3) = P (0.2 X   0.8) 630 x4 (1 x)4 dx 0.8 = 0.2 Z = 0.96. If we use the Chebychev inequality, then we get an approximation of the exact value we have. This approximate value is P (|X µ|  2 ) 1 1 4 = 0.75 Hence, Chebychev inequality tells us that if we do not know the distribution of X, then P (|X 2 ) is at least 0.75. µ|  Lower the standard deviation, and the smaller is the spread of the distribution. If the standard deviation is zero, then the distribution has no spread. This means that the distribution is concentrated at a single point. In the literature, such distributions are called degenerate distributions. The above figure shows how the spread decreases with the decrease of the standard deviation. 4.5. Moment Generating Functions We have seen in Section 3 that there are some distributions, such as geometric, whose moments are difficult to compute from the definition. A Moments of Random Variables and Chebychev Inequality 92 moment generating function is a real valued function from which one can generate all the moments of a given random variable. In many cases, it is easier to compute various moments of X using the moment generating function. Definition 4.5. Let X be a random variable whose probability density function is f (x). 
A real valued function M : IR IR defined by ! M (t) = E et X is called the moment generating function of X if this expected value exists h < t < h for some h > 0. for all t in the interval In general, not every random variable has a moment generating function. But if the moment generating function of a random variable exists, then it is unique. At the end of this section, we will give an example of a random variable which does not have a moment generating function. Using the definition of expected value of a random variable, we obtain the explicit representation for M (t) as et x f (x) if X is discrete et x f (x) dx RX Xx 2 1 1 R if X is continuous. M (t) = 8 >< >: Example 4.12. Let X be a random variable whose moment generating function is M (t) and n be any natural number. What is the nth derivative of M (t) at t = 0? Answer: Similarly, d dt M (t) = d dt E et X et X d dt ✓ X et X ◆ . = E = E d2 dt2 M (t) = et X d2 dt2 E d2 dt2 et X ✓ X 2 et X ◆ . = E = E Probability and Mathematical Statistics 93 Hence, in general we get dn dtn M (t) = et X dn dtn E dn dtn et X ✓ X n et X = E = E ◆ . If we set t = 0 in the nth derivative, we get dn dtn M (t) t=0 Hence the nth derivative of the moment generating function of X evaluated at t = 0 is the nth moment of X about the origin. t=0 = E (X n) . X n et X = E This example tells us if we know the moment generating function of a random variable; then we can generate all the moments of X by taking derivatives of the moment generating function and then evaluating them at zero. Example 4.13. What is the moment generating function of the random variable X whose probability density function is given by f (x) = x e for x > 0 ( 0 otherwise? What are the mean and variance of X? Answer: The moment generating function of X is M (t) = E et X 1 et x f (x) dx et x e x dx e (1 t) x dx = = = = = 1 t) x 1 e 0 i t > 0. if 1 Moments of Random Variables and Chebychev Inequality 94 The expected value of X can be computed from M (t) as E(X) = = d dt d dt M (t) t=0 1 t) (1 = (1 = (1 = 1. 1 2 t) t=0 t=0 t)2 t=0 Similarly E(X 2) = = = (1 = 2. Therefore, the variance of X is d2 dt2 M (t) t=0 d2 1 t) dt2 (1 = 2 (1 3 t) t=0 t=0 2 t)3 t=0 V ar(X) = E(X 2) (µ)2 = 2 1 = 1. Example 4.14. Let X have the probability density function f (x) = 1 9 x 8 9 for x = 0, 1, 2, ..., 1 ( 0 otherwise. What is the moment generating function of the random variable X? Probability and Mathematical Statistics 95 Answer: M (t) = E et X 1 et x f (x) = 1 9 et x ✓ 1 8 9 x ◆ x ◆ ✓ et 8 9 ◆ ◆ x=0 ✓ X 1 et 8 9 1 x=0 X 1 x= et if et
8 9 < 1 if t < ln 9 8 . ◆ ✓ Example 4.15. Let X be a continuous random variable with density function f (x) = ⇢ b x b e 0 for x > 0 otherwise , where b > 0. If M (t) is the moment generating function of X, then what is M ( 6 b)? Answer: M (t) = E et et x e b x dx 1 e (b t) x dx b t) x 1 e 0 i b if t > 0. Hence M ( 6 b) = b 7b = 1 7 . Example 4.16. Let the random variable X have moment generating func2 for t < 1. What is the third moment of X about the tion M (t) = (1 origin? t) Answer: To compute the third moment E(X 3) of X about the origin, we Moments of Random Variables and Chebychev Inequality 96 need to compute the third derivative of M (t) at t = 0. M (t) = (1 M 0(t) = 2 (1 M 00(t) = 6 (1 M 000(t) = 24 (1 2 t) 3 t) 4 t) t) 5 . Thus the third moment of X is given by E X 3 = 24 (1 0)5 = 24. Theorem 4.5. Let M (t) be the moment generating function of the random variable X. If M (t) = a0 + a1 t + a2 t2 + · · · + an tn + · · · (4.3) is the Taylor series expansion of M (t), then E (X n) = (n!) an for all natural number n. Proof: Let M (t) be the moment generating function of the random variable X. The Taylor series expansion of M (t) about 0 is given by M (t) = M (0) + M 0(0) 1! t + M 00(0) 2! t2 + M 000(0) 3! t3 + · · · + M (n)(0) n! tn + · · · Since E(X n) = M (n)(0) for n 1 and M (0) = 1, we have E(X 2) 2! M (t) = 1 + E(X) 1! t + t2 + E(X 3) 3! t3 + · · · + E(X n) n! tn + · · · (4.4) From (4.3) and (4.4), equating the coefficients of the like powers of t, we obtain an = E (X n) n! which is This proves the theorem. E (X n) = (n!) an. Probability and Mathematical Statistics 97 Example 4.17. What is the 479th moment of X about the origin, if the 1 1+t ? moment generating function of X is Answer The Taylor series expansion of M (t) = 1 long division (a technique we have learned in high school). 1+t can be obtained by using M (t) + ( t)2 + ( t)3 + · · · + ( t)n + · · · t + t2 t3 + t4 + · · · + ( 1)ntn + · · · Therefore an = ( 1)n and from this we obtain a479 = 1. By Theorem 4.5, E X 479 = (479!) a479 = 479! Example 4.18. If the moment generating of a random variable X is M (t) = 1) e(t j j! , 1 j=0 X then what is the probability of the event X = 2? Answer: By examining the given moment generating function of X, it is easy to note that X is a discrete random variable with space RX = {0, 1, 2, · · · , }. Hence by definition, the moment generating function of X is 1 (4.5) But we are given that M (t) = M (t) = = et j f (j). 1) e(t j j! 1 e j! et j. 1 j=0 X 1 j=0 X 1 j=0 X From (4.5) and the above, equating the coefficients of etj, we get f (j) = 1 e j! for j = 0, 1, 2, ..., . 1 Moments of Random Variables and Chebychev Inequality 98 Thus, the probability of the event X = 2 is given by P (X = 2) = f (2) = 1 e 2! = 1 2 e . Example 4.19. Let X be a random variable with E (X n) = 0.8 for n = 1, 2, 3, ..., . 1 What are the moment generating function and probability density function of X? Answer: M (t) = M (0) + = M (0) + = 1 + 0.8 1 n=1 X 1 n=1 X 1 M (n)(0) E (X n) tn n! ◆ ✓ tn n! ✓ ◆ tn n! ◆ 1 n=1 ✓ X tn n! ◆ = 0.2 + 0.8 + 0.8 = 0.2 + 0.8 1 n=0 ✓ X = 0.2 e0 t + 0.8 e1 t. n=1 ✓ X tn n! ◆ Therefore, we get f (0) = P (X = 0) = 0.2 and f (1) = P (X = 1) = 0.8. Hence the moment generating function of X is M (t) = 0.2 + 0.8 et, and the probability density function of X is f (x) = 0.2| |x for x = 0, 1 ( 0 otherwise. Example 4.20. 
If the moment generating function of a random variable X is given by M (t) = 5 15 et + 4 15 e2 t + 3 15 e3 t + 2 15 e4 t + 1 15 e5 t, Probability and Mathematical Statistics 99 then what is the probability density function of X? What is the space of the random variable X? Answer: The moment generating function of X is given to be M (t) = 5 15 et + 4 15 e2 t + 3 15 e3 t + 2 15 e4 t + 1 15 e5 t. This suggests that X is a discrete random variable. Since X is a discrete random variable, by definition of the moment generating function, we see that M (t) = et x f (x) RX Xx 2 = et x1 f (x1) + et x2 f (x2) + et x3 f (x3) + et x4 f (x4) + et x5 f (x5). Hence we have f (x1) = f (1) = f (x2) = f (2) = f (x3) = f (3) = f (x4) = f (4) = f (x5) = f (5) = 5 15 4 15 3 15 2 15 1 15 . Therefore the probability density function of X is given by f (x) = 6 x 15 for x = 1, 2, 3, 4, 5 and the space of the random variable X is RX = {1, 2, 3, 4, 5}. Example 4.21. variable X is If the probability density function of a discrete random f (x) = 6 ⇡2 x2 , for x = 1, 2, 3, ..., , 1 then what is the moment generating function of X? Moments of Random Variables and Chebychev Inequality 100 Answer: If the moment generating function of X exists, then M (t) = = = = etx f (x) etx 2 p6 ⇡ x ! 1 x=1 X 1 x=1 X 1 etx 6 ⇡2 x2 x=1 ✓ X 6 1 ⇡2 x=1 X ◆ etx x2 . Now we show that the above infinite series diverges if t belongs to the interval ( h, h) for any h > 0. To prove that this series is divergent, we do the ratio test, that is lim n !1 ✓ an+1 an ◆ et (n+1) (n + 1)2 et n et (n + 1)2 = lim n !1 ✓ = lim n !1 ✓ ◆ n2 et n n2 et n ◆ 2 = lim n !1 et = et. ✓ n n + 1 ! ◆ For any h > 0, since et is not always less than 1 for all t in the interval h, h), we conclude that the above infinite series diverges and hence for ( this random variable X the moment generating function does not exist. Notice that for the above random variable, E [X n] does not exist for any natural number n. Hence the discrete random variable X in Example 4.21 has no moments. Similarly, the continuous random variable X whose Probability and Mathematical Statistics 101 probability density function is f (x) = 1 x2 8 < 0 for 1 x <  1 otherwise, has no moment generating function and no moments. : In the following theorem we summarize some important properties of the moment generating function of a random variable. Theorem 4.6. Let X be a random variable with the moment generating function MX (t). If a and b are any two real constants, then MX+a(t) = ea t MX (t) Mb X (t) = MX (b t) M X+a b (t) = e a b t MX t b . ◆ ✓ (4.6) (4.7) (4.8) Proof: First, we prove (4.6). MX+a(t) = E et (X+a) ⌘ = E ⇣ et X+t a et X et a = E et X = et a E = et a MX (t). Similarly, we prove (4.7). Mb X (t) = E et (b X) = E ⇣ e(t b) X ⇣ = MX (t b). ⌘ ⌘ By using (4.6) and (4.7), we easily get (4.8). M X+a b (t) = M X (t MX (t) t b . ◆ ✓ Moments of Random Variables and Chebychev Inequality 102 This completes the proof of this theorem. Definition 4.6. The nth factorial moment of a random variable X is E(X(X 2) · · · (X n + 1)). 1)(X Definition 4.7. The factorial moment generating function (FMGF) of X is denoted by G(t) and defined as G(t) = E tX . It is not difficult to establish a relationship between the moment generating function (MGF) and the factorial moment generating function (FMGF). The relationship between them is the following: G(t) = E tX = E eln tX = E eX ln t = M (ln t). ⇣ ⌘ Thus, if we know the MGF of a random variable, we can determine its FMGF and conversely. Definition 4.8. 
Let X be a random variable. The characteristic function (t) of X is defined as (t) = E ei t X = E ( cos(tX) + i sin(tX) ) = E ( cos(tX) ) + i E ( sin(tX) ) . The probability density function can be recovered from the characteristic function by using the following formula f (x) = 1 2 ⇡ 1 e i t x (t) dt. Z 1 Unlike the moment generating function, the characteristic function of a random variable always exists. For example, the Cauchy random variable X with probability density f (x) = ⇡(1+x2) has no moment generating function. However, the characteristic function is 1 (t) = E ei t X eitx ⇡(1 + x2) dx 1 = Z = e 1 |t|. Probability and Mathematical Statistics 103 To evaluate the above integral one needs the theory of residues from the complex analysis. The characteristic function (t) satisfies the same set of properties as the moment generating functions as given in Theorem 4.6. The following integrals 1 xm e x dx = m! if m is a positive integer 0 Z and 1 px e x dx = p⇡ 2 0 Z are needed for some problems in the Review Exercises of this chapter. These formulas will be discussed in Chapter 6 while we describe the properties and usefulness of the gamma distribution. We end this chapter with the following comment about the Taylor’s series. Taylor’s series was discovered to mimic the decimal expansion of real numbers. For example 125 = 1 (10)2 + 2 (10)1 + 5 (10)0 is an expansion of the number 125 with respect to base 10. Similarly, 125 = 1 (9)2 + 4 (9)1 + 8 (9)0 is an expansion of the number 125 in base 9 and it is 148. Since given a function f : IR IR, f (x) is a real number and it can be expanded with respect to the base x. The expansion of f (x) with respect to base x will have a form IR and x ! 2 f (x) = a0x0 + a1x1 + a2x2 + a3x3 + · · · which is f (x) = 1 akxk. Xk=0 If we know the coefficients ak for k = 0, 1, 2, 3, ..., then we will have the expansion of f (x) in base x. Taylor found the remarkable fact that the the coefficients ak can be computed if f (x) is sufficiently differentiable. He proved that for k = 1, 2, 3, ... ak = f (k)(0) k! with f (0) = f (0). Moments of Random Variables and Chebychev Inequality 104 4.6. Review Exercises 1. In a state lottery a five-digit integer is selected at random. If a player bets 1 dollar on a particular number, the payoff (if that number is selected) is $500 minus the $1 paid for the ticket. Let X equal the payoff to the better. Find the expected value of X. 2. A discrete random variable X has probability density function of the form f (x) = c (8 ( 0 x) for x = 0, 1, 2, 3, 4, 5 otherwise. (a) Find the constant c. (b) Find P (X > 2). (c) Find the expected value E(X) for the random variable X. 3. A random variable X has a cumulative distribution function F (x if 0 < x if a) Graph F (x). (b) Graph f (x). (c) Find P (X : 1.25). (f) Find P (X = 1.25). (e) Find P (X  0.5). (d) Find P (X 0.5).  4. Let X be a random variable with probability density function f (x) = 1 8 x ( 0 for x = 1, 2, 5 otherwise.
(a) Find the expected value of X. (b) Find the variance of X. (c) Find the expected value of 2X + 3. (d) Find the variance of 2X + 3. (e) Find the expected value of 3X 5X 2 + 1. 5. The measured radius of a circle, R, has probability density function f (r) = 6 r (1 r) if 0 < r < 1 ( 0 otherwise. (a) Find the expected value of the radius. (b) Find the expected circumference. (c) Find the expected area. 6. Let X be a continuous random variable with density function ✓ x + 3 2 ✓ 3 2 x2 f (x) = 0 8 < : for 0 < x < 1 p✓ otherwise, Probability and Mathematical Statistics 105 where ✓ > 0. What is the expected value of X? 7. Suppose X is a random variable with mean µ and variance 2 > 0. For a X what value of a, where a > 0 is E minimized? 2 8. A rectangle is to be constructed having dimension X by 2X, where X is a random variable with probability density function ⇣⇥ ⌘ ⇤ 1 a f (x) = 1 2 ( 0 for 0 < x < 2 otherwise. What is the expected area of the rectangle? 9. A box is to be constructed so that the height is 10 inches and its base is X inches by X inches. If X has a uniform distribution over the interval [2, 8], then what is the expected volume of the box in cubic inches? 10. If X is a random variable with density function 1.4 e 2x + 0.9 e 3x f (x) = 8 < 0 then what is the expected value of X? : for x > 0 elsewhere, 11. A fair coin is tossed. If a head occurs, 1 die is rolled; if a tail occurs, 2 dice are rolled. Let X be the total on the die or dice. What is the expected value of X? If velocities of the molecules of a gas have the probability density 12. (Maxwell’s law) f (v) = a v2e h2 v2 8 < 0 for v 0 otherwise, then what are the expectation and the variance of the velocity of the molecules and also the magnitude of a for some given h? : 13. A couple decides to have children until they get a girl, but they agree to stop with a maximum of 3 children even if they haven’t gotten a girl. If X and Y denote the number of children and number of girls, respectively, then what are E(X) and E(Y )? 14. In roulette, a wheel stops with equal probability at any of the 38 numbers 0, 00, 1, 2, ..., 36. If you bet $1 on a number, then you win $36 (net gain is Moments of Random Variables and Chebychev Inequality 106 $35) if the number comes up; otherwise, you lose your dollar. What are your expected winnings? 15. If the moment generating function for the random variable X is MX (t) = 1 1+t , what is the third moment of X about the point x = 2? 16. If the mean and the variance of a certain distribution are 2 and 8, what are the first three terms in the series expansion of the moment generating function? 17. Let X be a random variable with density function ax a e for x > 0 f (x) = 8 < 0 otherwise, where a > 0. If M (t) denotes the moment generating function of X, what is M ( 3a)? : 18. Suppose the random variable X has moment generating M (t) = 1 t)k , (1 What is the nth moment of X? for t < 1 . 19. Two balls are dropped in such a way that each ball is equally likely to fall into any one of four holes. Both balls may fall into the same hole. Let X denote the number of unoccupied holes at the end of the experiment. What is the moment generating function of X? 20. If the moment generating function of X is M (t) = 1 t)2 for t < 1, then what is the fourth moment of X? (1 21. Let the random variable X have the moment generating function M (t) = e3t 1 t2 , 1 < t < 1. What are the mean and the variance of X, respectively? 22. Let the random variable X have the moment generating function M (t) = e3t+t2 . 
What is the second moment of X about x = 0? Probability and Mathematical Statistics 107 23. Suppose the random variable X has the cumulative density function c)2 is F (x). Show that the expected value of the random variable (X minimum if c equals the expected value of X. 24. Suppose the continuous random variable X has the cumulative density function F (x). Show that the expected value of the random variable |X c| is minimum if c equals the median of X (that is, F (c) = 0.5). 25. Let the random variable X have the probability density function f (x) = 1 2 |x| e 1 < x < . 1 What are the expected value and the variance of X? 26. If MX (t) = k (2 + 3et)4, what is the value of k? 27. Given the moment generating function of X as M (t) = 1 + t + 4t2 + 10t3 + 14t4 + · · · what is the third moment of X about its mean? 28. A set of measurements X has a mean of 7 and standard deviation of 0.2. For simplicity, a linear transformation Y = aX + b is to be applied to make the mean and variance both equal to 1. What are the values of the constants a and b? 29. A fair coin is to be tossed 3 times. The player receives 10 dollars if all three turn up heads and pays 3 dollars if there is one or no heads. No gain or loss is incurred otherwise. If Y is the gain of the player, what the expected value of Y ? 30. If X has the probability density function f (x) = x e 0 ⇢ for x > 0 otherwise, then what is the expected value of the random variable Y = e 3 4 X + 6? 31. If the probability density function of the random variable X if f (x) = (1 8 < 0 p)x 1 p if x = 1, 2, 3, ..., 1 otherwise, then what is the expected value of the random variable X : 1 ? Some Special Discrete Distributions 108 Chapter 5 SOME SPECIAL DISCRETE DISTRIBUTIONS Given a random experiment, we can find the set of all possible outcomes which is known as the sample space. Objects in a sample space may not be numbers. Thus, we use the notion of random variable to quantify the qualitative elements of the sample space. A random variable is characterized by either its probability density function or its cumulative distribution function. The other characteristics of a random variable are its mean, variance and moment generating function. In this chapter, we explore some frequently encountered discrete distributions and study their important characteristics. 5.1. Bernoulli Distribution A Bernoulli trial is a random experiment in which there are precisely two possible outcomes, which we conveniently call ‘failure’ (F) and ‘success’ (S). We can define a random variable from the sample space {S, F } into the set of real numbers as follows: X(F ) = 0 X(S) = 1. Probability and Mathematical Statistics 109 Sample Space S F X X(F) = 0 1= X(S) The probability density function of this random variable is f (0) = P (X = 0) = 1 p f (1) = P (X = 1) = p, where p denotes the probability of success. Hence f (x) = px (1 p)1 x, x = 0, 1. Definition 5.1. The random variable X is called the Bernoulli random variable if its probability density function is of the form f (x) = px (1 p)1 x, x = 0, 1 where p is the probability of success. We denote the Bernoulli random variable by writing X BER(p). ⇠ Example 5.1. What is the probability of getting a score of not less than 5 in a throw of a six-sided die? Answer: Although there are six possible scores {1, 2, 3, 4, 5, 6}, we are grouping them into two sets, namely {1, 2, 3, 4} and {5, 6}. Any score in {1, 2, 3, 4} is a failure and any score in {5, 6} is a success. 
Thus, this is a Bernoulli trial with

    P(X = 0) = P(failure) = 4/6   and   P(X = 1) = P(success) = 2/6.

Hence, the probability of getting a score of not less than 5 in a throw of a six-sided die is 2/6.

Theorem 5.1. If X is a Bernoulli random variable with parameter p, then the mean, variance and moment generating function are respectively given by

    µ_X = p,
    σ²_X = p (1 − p),
    M_X(t) = (1 − p) + p e^t.

Proof: The mean of the Bernoulli random variable is

    µ_X = Σ_{x=0}^{1} x f(x) = Σ_{x=0}^{1} x p^x (1 − p)^{1−x} = p.

Similarly, the variance of X is given by

    σ²_X = Σ_{x=0}^{1} (x − µ_X)² f(x)
         = Σ_{x=0}^{1} (x − p)² p^x (1 − p)^{1−x}
         = p² (1 − p) + p (1 − p)²
         = p (1 − p) [p + (1 − p)]
         = p (1 − p).

Next, we find the moment generating function of the Bernoulli random variable:

    M(t) = E(e^{tX}) = Σ_{x=0}^{1} e^{tx} p^x (1 − p)^{1−x} = (1 − p) + e^t p.

This completes the proof. The moment generating function of X and all the moments of X are shown in the figure below for p = 0.5. Note that for the Bernoulli distribution all its moments about zero are the same and equal to p.

5.2. Binomial Distribution

Consider a fixed number n of mutually independent Bernoulli trials. Suppose these trials have the same probability of success, say p. A random variable X is called a binomial random variable if it represents the total number of successes in n independent Bernoulli trials.

Now we determine the probability density function of a binomial random variable. Recall that the probability density function of X is defined as f(x) = P(X = x). Thus, to find the probability density function of X we have to find the probability of x successes in n independent trials. If we have x successes in n trials, then the probability of each n-tuple with x successes and n − x failures is p^x (1 − p)^{n−x}. However, there are (n choose x) tuples with x successes and n − x failures in n trials. Hence

    P(X = x) = (n choose x) p^x (1 − p)^{n−x}.

Therefore, the probability density function of X is

    f(x) = (n choose x) p^x (1 − p)^{n−x},   x = 0, 1, ..., n.

Definition 5.2. The random variable X is called the binomial random variable with parameters p and n if its probability density function is of the form

    f(x) = (n choose x) p^x (1 − p)^{n−x},   x = 0, 1, ..., n,

where 0 < p < 1 is the probability of success. We will denote a binomial random variable with parameters p and n as X ∼ BIN(n, p).

Example 5.2. Is the real valued function f(x) given by

    f(x) = (n choose x) p^x (1 − p)^{n−x},   x = 0, 1, ..., n,

where n and p are parameters, a probability density function?

Answer: To answer this question, we have to check that f(x) is nonnegative and that Σ_{x=0}^{n} f(x) is 1. It is easy to see that f(x) ≥ 0. We show that the sum is one:

    Σ_{x=0}^{n} f(x) = Σ_{x=0}^{n} (n choose x) p^x (1 − p)^{n−x} = (p + 1 − p)^n = 1.

Hence f(x) is really a probability density function.

Example 5.3. On a five-question multiple-choice test ther