H0: The distribution of [...] is the same for each population/treatment.
HA: The distribution of [...] is not the same for each population/treatment.
We test these hypotheses at the α significance level using a χ² test for homogeneity.
• When there is one random sample and we are looking for an association or dependence between two categorical variables, e.g. testing for an association between gender and political party, the hypotheses can be written as:
H0: [variable 1] and [variable 2] are independent.
HA: [variable 1] and [variable 2] are dependent.
We test these hypotheses at the α significance level using a χ² test for independence.
• In addition to the independence/random condition, all expected counts must be at least 5 for the test statistic to follow a chi-square distribution.
• The chi-square statistic and associated df are found as follows:
test statistic: χ² = Σ (observed − expected)² / expected
df = (# of rows − 1) × (# of cols − 1)
• The p-value is the area to the right of the χ²-statistic under the chi-square curve with the appropriate df.
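In practice this computation is usually done with software. Below is a minimal sketch in Python, assuming NumPy and SciPy are available, that mirrors the formulas above for a small hypothetical two-way table; the counts are made up for illustration and are not taken from the exercises that follow.

    import numpy as np
    from scipy import stats

    # Hypothetical counts: rows are outcomes, columns are groups/treatments.
    observed = np.array([[30, 40, 35],
                         [70, 60, 65]])

    row_totals = observed.sum(axis=1, keepdims=True)
    col_totals = observed.sum(axis=0, keepdims=True)
    total = observed.sum()

    # Expected count in each cell: (row total) x (column total) / (grand total)
    expected = row_totals * col_totals / total

    # Chi-square statistic, degrees of freedom, and p-value (area to the right)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
    p_value = stats.chi2.sf(chi2, df)
    print(expected, chi2, df, p_value)

    # The same test in one call:
    chi2_alt, p_alt, df_alt, expected_alt = stats.chi2_contingency(observed)

Note that for a 2 × 2 table, scipy.stats.chi2_contingency applies a continuity correction by default, so pass correction=False there if you want its output to match the hand formula exactly.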
Exercises

6.29 Quitters. Does being part of a support group affect the ability of people to quit smoking? A county health department enrolled 300 smokers in a randomized experiment. 150 participants were randomly assigned to a group that used a nicotine patch and met weekly with a support group; the other 150 received the patch and did not meet with a support group. At the end of the study, 40 of the participants in the patch plus support group had quit smoking while only 30 smokers had quit in the other group.
(a) Create a two-way table presenting the results of this study.
(b) Answer each of the following questions under the null hypothesis that being part of a support group does not affect the ability of people to quit smoking, and indicate whether the expected values are higher or lower than the observed values.
i. How many subjects in the “patch + support” group would you expect to quit?
ii. How many subjects in the “patch only” group would you expect to not quit?

6.30 Full body scan, Part II. A news article reports that “Americans have differing views on two potentially inconvenient and invasive practices that airports could implement to uncover potential terrorist attacks.” This news piece was based on a survey conducted among a random sample of 1,137 adults nationwide, where one of the questions on the survey was “Some airports are now using ‘full-body’ digital x-ray machines to electronically screen passengers in airport security lines. Do you think these new x-ray machines should or should not be used at airports?” Below is a summary of responses based on party affiliation.43 The differences in each political group may be due to chance. Complete the following computations under the null hypothesis of independence between an individual’s party affiliation and his support of full-body scans. It may be useful to first add on an extra column for row totals before proceeding with the computations.

                                       Party Affiliation
    Answer                   Republican   Democrat   Independent
    Should                      264          299         351
    Should not                   38           55          77
    Don’t know/No answer         16           15          22
    Total                       318          369         450

(a) How many Republicans would you expect to not support the use of full-body scans?
(b) How many Democrats would you expect to support the use of full-body scans?
(c) How many Independents would you expect to not know or not answer?

6.31 Offshore drilling. A survey asked 827 randomly sampled registered voters in California “Do you support? Or do you oppose? Drilling for oil and natural gas off the Coast of California? Or do you not know enough to say?” Below is the distribution of responses, separated based on whether or not the respondent has a college degree.44 Complete a chi-square test for these data to test whether there is an association between opinions regarding offshore drilling for oil and having a college degree. Include all steps of the Identify, Choose, Check, Calculate, Conclude framework.

                     College Degree
                      Yes      No
    Support           154     132
    Oppose            180     126
    Do not know       104     131
    Total             438     389

43S. Condon. “Poll: 4 in 5 Support Full-Body Airport Scanners”. In: CBS News (2010).
44Survey USA, Election Poll #16804, data collected July 8-11, 2010.
6.32 Parasitic worm. Lymphatic filariasis is a disease caused by a parasitic worm. Complications of the disease can lead to extreme swelling and other complications. Here we consider results from a randomized experiment that compared three different drug treatment options to clear people of this parasite, which people are working to eliminate entirely. The results for the second year of the study are given below:45

                          Three drugs   Two drugs   Two drugs annually
    Clear at Year 2            52           31              42
    Not Clear at Year 2         2           24              14

(a) Set up hypotheses for evaluating whether there is any difference in the performance of the treatments, and also check conditions.
(b) Statistical software was used to run a chi-square test, which output:
X² = 23.7    df = 2    p-value = 7.2e-6
Use these results to evaluate the hypotheses from part (a), and provide a conclusion in the context of the problem.

45Christopher King et al. “A Trial of a Triple-Drug Treatment for Lymphatic Filariasis”. In: New England Journal of Medicine 379 (2018), pp. 1801–1810.

Chapter highlights

Calculating a confidence interval or a test statistic and p-value is generally done with statistical software. It is important, then, to focus not on the calculations, but rather on
1. choosing the correct procedure
2. understanding when the procedures do or do not apply, and
3. interpreting the results.
Choosing the correct procedure requires understanding the type of data and the method of data collection. All of the inference procedures in Chapter 6 are for categorical variables. Here we list the five tests encountered in this chapter and when to use them.
• 1-proportion Z-test
– 1 random sample, a yes/no variable
– Compare the sample proportion to a fixed / hypothesized proportion.
• 2-proportion Z-test
– 2 independent random samples or randomly allocated treatments
– Compare two populations or treatments to each other with respect to one yes/no variable; e.g. comparing the proportion over age 65 in two distinct populations.
• χ² goodness of fit test
– 1 random sample, a categorical variable (generally at least three categories)
– Compare the distribution of a categorical variable to a
fixed or known population distri- bution; e.g. looking at distribution of color among M&M’s. • χ2χ2χ2 test for homogeneity: – 2 or more independent random samples or randomly allocated treatments – Compare the distribution of a categorical variable across several populations or treatments; e.g. party affiliation over various years, or patient improvement compared over 3 treatments. • χ2χ2χ2 test for independence – 1 random sample, 2 categorical variables – Determine if, in a single population, there is an association between two categorical variables; e.g. grade level and favorite class. Even when the data and data collection method correspond to a particular test, we must verify that conditions are met to see if the assumptions of the test are reasonable. All of the inferential procedures of this chapter require some type of random sample or process. In addition, the 1-proportion Z-test/interval and the 2-proportion Z-test/interval require that the success-failure condition is met and the three χ2 tests require that all expected counts are at least 5. Finally, understanding and communicating the logic of a test and being able to accurately interpret a confidence interval or p-value are essential. For a refresher on this, review Chapter 5: Foundations for inference. 6.4. HOMOGENEITY AND INDEPENDENCE IN TWO-WAY TABLES 357 Chapter exercises 6.33 Active learning. A teacher wanting to increase the active learning component of her course is concerned about student reactions to changes she is planning to make. She conducts a survey in her class, asking students whether they believe more active learning in the classroom (hands on exercises) instead of traditional lecture will helps improve their learning. She does this at the beginning and end of the semester and wants to evaluate whether students’ opinions have changed over the semester. Can she used the methods we learned in this chapter for this analysis? Explain your reasoning. 6.34 Website experiment. The OpenIntro website occasionally experiments with design and link placement. We conducted one experiment testing three different placements of a download link for this textbook on the book’s main page to see which location, if any, led to the most downloads. The number of site visitors included in the experiment was 701 and is captured in one of the response combinations
in the following table: Download No Download Position 1 Position 2 Position 3 13.8% 14.6% 12.1% 18.3% 18.5% 22.7% (a) Calculate the actual number of site visitors in each of the six response categories. (b) Each individual in the experiment had an equal chance of being in any of the three experiment groups. However, we see that there are slightly different totals for the groups. Is there any evidence that the groups were actually imbalanced? Carry out an appropriate test and include all steps of the ICCCC framework. (c) Complete an appropriate hypothesis test to check whether there is evidence that there is a higher rate of site visitors clicking on the textbook link in any of the three groups. Include all steps of the Identify, Choose, Check, Calculate, Conclude framework. 6.35 Shipping holiday gifts. A local news survey asked 500 randomly sampled Los Angeles residents which shipping carrier they prefer to use for shipping holiday gifts. The table below shows the distribution of responses by age group as well as the expected counts for each cell (shown in parentheses). Shipping Method 18-34 Age 35-54 55+ USPS UPS FedEx Something else Not sure Total 72 52 31 7 3 (81) (53) (21) (5) (5) 97 76 24 6 6 (102) (68) (27) (7) (5) 76 34 9 3 4 (62) (41) (16) (4) (3) 165 209 126 Total 245 162 64 16 13 500 (a) State the null and alternative hypotheses for testing for independence of age and preferred shipping method for holiday gifts among Los Angeles residents. (b) Are the conditions for inference using a chi-square test satisfied? 6.36 The Civil War. A national survey conducted among a simple random sample of 1,507 adults shows that 56% of Americans think the Civil War is still relevant to American politics and political life.46 (a) Conduct a hypothesis test to determine if these data provide strong evidence that the majority of the Americans think the Civil War is still relevant. (b) Interpret the p-value in this context. (c) Calculate a 90% confidence interval for the proportion of Americans who think the Civil War is still relevant. Interpret the interval in this context, and comment on whether or not the confidence interval agrees with the conclusion of the
hypothesis test. 46Pew Research Center Publications, Civil War at 150: Still Relevant, Still Divisive, data collected between March 30 - April 3, 2011. 358 CHAPTER 6. INFERENCE FOR CATEGORICAL DATA 6.37 College smokers. who smoke. Out of a random sample of 200 students from this university, 40 students smoke. We are interested in estimating the proportion of students at a university (a) Calculate a 95% confidence interval for the proportion of students at this university who smoke, and interpret this interval in context. (Reminder: Check conditions.) (b) If we wanted the margin of error to be no larger than 2% at a 95% confidence level for the proportion of students who smoke, how big of a sample would we need? It is believed that large doses of acetaminophen (the active 6.38 Acetaminophen and liver damage. ingredient in over the counter pain relievers like Tylenol) may cause damage to the liver. A researcher wants to conduct a study to estimate the proportion of acetaminophen users who have liver damage. For participating in this study, he will pay each subject $20 and provide a free medical consultation if the patient has liver damage. (a) If he wants to limit the margin of error of his 98% confidence interval to 2%, what is the minimum amount of money he needs to set aside to pay his subjects? (b) The amount you calculated in part (a) is substantially over his budget so he decides to use fewer subjects. How will this affect the width of his confidence interval? 6.39 Life after college. We are interested in estimating the proportion of graduates at a mid-sized university who found a job within one year of completing their undergraduate degree. Suppose we conduct a survey and find out that 348 of the 400 randomly sampled graduates found jobs. The graduating class under consideration included over 4500 students. (a) Describe the population parameter of interest. What is the value of the point estimate of this parameter? (b) Check if the conditions for constructing a confidence interval based on these data are met. (c) Calculate a 95% confidence interval for the proportion of graduates who found a job within one year of completing their undergraduate degree at this university, and interpret it in the context of the data
. (d) What does “95% confidence” mean? (e) Now calculate a 99% confidence interval for the same parameter and interpret it in the context of the data. (f) Compare the widths of the 95% and 99% confidence intervals. Which one is wider? Explain. 6.40 Diabetes and unemployment. A Gallup poll surveyed Americans about their employment status and whether or not they have diabetes. The survey results indicate that 1.5% of the 47,774 employed (full or part time) and 2.5% of the 5,855 unemployed 18-29 year olds have diabetes.47 (a) Create a two-way table presenting the results of this study. (b) State appropriate hypotheses to test. (c) The sample difference is about 1%. If we completed the hypothesis test, we would find that the p-value is very small (about 0), meaning the difference is statistically significant. Use this result to explain the difference between statistically significant and practically significant findings. 6.41 Rock-paper-scissors. Rock-paper-scissors is a hand game played by two or more people where players choose to sign either rock, paper, or scissors with their hands. For your statistics class project, you want to evaluate whether players choose between these three options randomly, or if certain options are favored above others. You ask two friends to play rock-paper-scissors and count the times each option is played. The following table summarizes the data: Rock Paper 43 21 Scissors 35 Use these data to evaluate whether players choose between these three options randomly, or if certain options are favored above others. Make sure to clearly outline each step of your analysis, and interpret your results in context of the data and the research question. 47Gallup Wellbeing, Employed Americans in Better Health Than the Unemployed, data collected Jan. 2, 2011 - May 21, 2012. 6.4. HOMOGENEITY AND INDEPENDENCE IN TWO-WAY TABLES 359 6.42 2010 Healthcare Law. On June 28, 2012 the U.S. Supreme Court upheld the much debated 2010 healthcare law, declaring it constitutional. A Gallup poll released the day after this decision indicates that 46% of 1,
012 randomly sampled Americans agree with this decision. At a 95% confidence level, this sample has a 3% margin of error. Based on this information, determine if the following statements are true or false, and explain your reasoning.48 (a) We are 95% confident that between 43% and 49% of Americans in this sample support the decision of the U.S. Supreme Court on the 2010 healthcare law. (b) We are 95% confident that between 43% and 49% of Americans support the decision of the U.S. Supreme Court on the 2010 healthcare law. (c) If we considered many random samples of 1,012 Americans, and we calculated the sample proportions of those who support the decision of the U.S. Supreme Court, 95% of those sample proportions will be between 43% and 49%. (d) The margin of error at a 90% confidence level would be higher than 3%. 6.43 Browsing on the mobile device. A survey of 2,254 randomly selected American adults indicates that 17% of cell phone owners browse the internet exclusively on their phone rather than a computer or other device.49 (a) According to an online article, a report from a mobile research company indicates that 38 percent of Chinese mobile web users only access the internet through their cell phones.50 Conduct a hypothesis test to determine if these data provide strong evidence that the proportion of Americans who only use their cell phones to access the internet is different than the Chinese proportion of 38%. (b) Interpret the p-value in this context. (c) Calculate a 95% confidence interval for the proportion of Americans who access the internet on their cell phones, and interpret the interval in this context. 6.44 Which chi-square test? Part 1. Consider each of the following tables. Determine (i) if a goodness of fit test, test for homogeneity, or test for independence is more appropriate, and (ii) how many degrees of freedom should be used for the test. (a) (b) Favorite Animal Count Red Panda Koala Otter Fennec Fox Hedgehog 22 7 13 25 38 Favorite Kid Food Pizza Tacos Mac and Cheese Chicken or Veggie Nuggets Broccoli Count 167 48 171 74 2 (c) Freshman Sophomore Other Rushing Not 275 14 392 5 725 7 (d) Commute Time Count
≤ 10 minutes 11-30 minutes 31-60 minutes > 60 minutes 198 130 48 29 6.45 Which chi-square test? Part 2. Consider each of the following planned studies. Determine (i) if a goodness of fit test, test for homogeneity, or test for independence is more appropriate, and (ii) how many degrees of freedom should be used for the test. (a) A state is conducting a study to better understand pay for tradespeople in the state’s three largest cities. In each city, the state will take a random sample of tradespeople and estimate the proportion who made at least $100,000 in each of the cities. In their final report, they would also like to note whether that proportion varies across the three cities. (b) A particular gene has 3 variants that can be found in proportions p1 = 0.15, p2 = 0.60, and p3 = 0.25 in the general population. Scientists suspect different variants of this gene might indicate an elevated risk for a particular genetic disease, and one way to evaluate this is to see if the general population distribution is the same in patients with the disease. The scientists will sample 450 patients with the disease and identify which variant each patient has. (c) A candy company produces candy pieces in 5 different colors that are mixed into bags. The colors should be in the following proportions: 15% green, 22% orange, 20% yellow, 24% red, and 19% purple. As a quality control check, the company randomly samples 1500 candy pieces and wants to determine if the target proportions match those of the observed distribution. 48Gallup, Americans Issue Split Decision on Healthcare Ruling, data collected June 28, 2012. 49Pew Internet, Cell Internet Use 2012, data collected between March 15 - April 13, 2012. 50S. Chang. “The Chinese Love to Use Feature Phone to Access the Internet”. In: M.I.C Gadget (2012). 360 Chapter 7 Inference for numerical data 7.1 Inference for a mean with the t-distribution 7.2 Inference with paired data 7.3 Inference for the difference of two means 361 Chapter 5 introduced a framework for statistical inference based on confidence intervals and hypothesis tests. Chapter 6 summarized inference procedures for categorical data (counts and proportions), using the normal distribution and the chi-square distribution. In
this chapter, we focus on inference procedures for numerical data and we encounter a new distribution. In each case, the inference ideas remain the same: 1. Determine which point estimate or test statistic is useful. 2. Identify an appropriate distribution for the point estimate or test statistic. 3. Apply the ideas from Chapter 5 using the distribution from step 2. Each section in Chapter 7 explores a new situation: a single mean (7.1), a mean of differences (7.2); and a difference of means (7.3). For videos, slides, and other resources, please visit www.openintro.org/ahss 362 CHAPTER 7. INFERENCE FOR NUMERICAL DATA 7.1 Inference for a mean with the ttt-distribution In this section, we turn our attention to numerical variables and answer questions such as the following: • How well can we estimate the mean income of people in a certain city, county, or state? • What is the average mercury content in various types of fish? • Are people’s run times getting faster or slower, on average? • How does the sample size affect the expected error in our estimates? • When is it reasonable to model the sample mean ¯x using a normal distribution, and when will we need to use a new distribution, known as the t-distribution? Learning objectives 1. Understand the relationship between a t-distribution and a normal distribution, and explain why we use a t-distribution for inference on a mean. 2. State and verify whether or not the conditions for inference for a mean based on the tdistribution are met. Understand when it is necessary to look at the distribution of the sample data. 3. Know the degrees of freedom associated with a one-sample t-procedure. 4. Carry out a complete hypothesis test for a single mean. 5. Carry out a complete confidence interval procedure for a single mean. 6. Find the minimum sample size needed to estimate a mean with C% confidence and a margin of error no greater than a certain value. 7.1.1 Using a normal distribution for inference when σσσ is known In Section 4.2 we saw that the distribution of a sample mean is normal if the population is normal or if the sample size is at least 30. In these problems, we used the population mean and population standard
deviation to find a Z-score. However, in the case of inference, these values will be unknown. In rare circumstances we may know the standard deviation of a population, even though we do not know its mean. For example, in some industrial processes, the mean may be known to shift over time, while the standard deviation of the process remains the same. In these cases, we can use the normal model as the basis for our inference procedures. We use x̄ as our point estimate for µ and the SD formula for a sample mean calculated in Section 4.2:
σ_x̄ = σ/√n
That leads to a confidence interval and a test statistic as follows:
CI: x̄ ± z × σ/√n
Z = (x̄ − null value) / (σ/√n)
What happens if we do not know the population standard deviation σ, as is usually the case? The best we can do is use the sample standard deviation, denoted by s, to estimate the population standard deviation:
SE = s/√n
However, when we do this we run into a problem: when carrying out our inference procedures, we will be trying to estimate two quantities: both the mean and the standard deviation. Looking at the SD and SE formulas, we can make some important observations that will give us a hint as to what will happen when we use s instead of σ.
• For a given population, σ is a fixed number and does not vary.
• s, the standard deviation of a sample, will vary from one sample to the next and will not be exactly equal to σ.
• The larger the sample size n, the better the estimate s will tend to be for σ.
For this reason, the normal model still works well when the sample size is large. For smaller sample sizes, we run into a problem: our use of s when computing the standard error tends to add more variability to our test statistic. It is this extra variability that leads us to a new distribution: the t-distribution.
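The need for a heavier-tailed model can also be seen in a short simulation. The sketch below, in Python, is purely illustrative (the normal population, the sample size of 5, and the number of repetitions are arbitrary choices): it standardizes each sample mean once with the true σ and once with the sample s, then compares how often each statistic falls more than 2 standard errors from the mean. The statistic that uses s lands beyond ±2 noticeably more often.

    import numpy as np

    rng = np.random.default_rng(7)        # seed chosen arbitrarily
    n, reps = 5, 100_000                  # small samples exaggerate the effect
    mu, sigma = 0.0, 1.0

    samples = rng.normal(mu, sigma, size=(reps, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)       # sample standard deviation

    z = (xbar - mu) / (sigma / np.sqrt(n))   # standardized with the true sigma
    t = (xbar - mu) / (s / np.sqrt(n))       # standardized with the estimate s

    print("P(|Z| > 2):", np.mean(np.abs(z) > 2))   # close to 0.046 (normal model)
    print("P(|T| > 2):", np.mean(np.abs(t) > 2))   # noticeably larger (heavier tails)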
7.1.2 Introducing the t-distribution

When we use the sample standard deviation s in place of the population standard deviation σ to standardize the sample mean, we get an entirely new distribution – one that is similar to the normal distribution, but has greater spread. This distribution is known as the t-distribution. A t-distribution, shown as a solid line in Figure 7.1, has a bell shape. However, its tails are thicker than the normal model’s. We can see that a greater proportion of the area under the t-distribution is beyond 2 standard units from 0 than under the normal distribution. These extra thick tails are exactly the correction we need to resolve the problem of a poorly estimated standard deviation.

Figure 7.1: Comparison of a t-distribution (solid line) and a normal distribution (dotted line).

The t-distribution, always centered at zero, has a single parameter: degrees of freedom. The degrees of freedom (df) describes the precise form of the bell-shaped t-distribution. Several t-distributions are shown in Figure 7.2. When there are more degrees of freedom, the t-distribution looks more like the standard normal distribution.

Figure 7.2: The larger the degrees of freedom, the more closely the t-distribution resembles the standard normal distribution. [Curves shown for the normal distribution and for t with df = 8, 4, 2, and 1.]

DEGREES OF FREEDOM
The degrees of freedom describes the shape of the t-distribution. The larger the degrees of freedom, the more closely the distribution resembles the standard normal distribution. When the degrees of freedom is large, about 30 or more, the t-distribution is nearly indistinguishable from the normal distribution. In Section 7.1.4, we will see how degrees of freedom relates to sample size.

We will find it useful to become familiar with the t-distribution, because it plays a very similar role to the normal distribution during inference. We use a t-table, partially shown in Figure 7.3, in place of the normal probability table when the population standard deviation is unknown, especially when the sample size is small. A larger table is presented in Appendix C.3.

      df   one tail:   0.100   0.050   0.025   0.010   0.005
       1               3.078   6.314   12.71   31.82   63.66
       2               1.886   2.920   4.303   6.965   9.925
       3               1.638   2.353   3.182   4.541   5.841
     ...                 ...     ...     ...     ...     ...
      17               1.333   1.740   2.110   2.567   2.898
      18               1.330   1.734   2.101   2.552   2.878
      19               1.328   1.729   2.093   2.539   2.861
      20               1.325   1.725   2.086   2.528   2.845
     ...                 ...     ...     ...     ...     ...
    1000               1.282   1.646   1.962   2.330   2.581
      ∞                1.282   1.645   1.960   2.326   2.576
    Confidence
    level C:             80%     90%     95%     98%     99%

Figure 7.3: An abbreviated look at the t-table. Each row represents a different t-distribution. The columns describe the cutoffs for specific tail areas. The row with df = 18 has been highlighted.

Each row in the t-table represents a t-distribution with different degrees of freedom. The columns correspond to tail probabilities. For instance, if we know we are working with the t-distribution with df = 18, we can examine row 18, which is highlighted in Figure 7.3. If we want the value in this row that identifies the cutoff for an upper tail of 10%, we can look in the column where one tail is 0.100. This cutoff is 1.33. If we had wanted the cutoff for the lower 10%, we would use -1.33. Just like the normal distribution, all t-distributions are symmetric.
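Statistical software can also provide these tail areas and cutoffs directly, without a printed table. A minimal sketch using Python's scipy.stats (the df and cutoffs below simply echo entries of Figure 7.3):

    from scipy import stats

    df = 18

    # Area in the lower tail below -2.10: about 0.025
    print(stats.t.cdf(-2.10, df))

    # Area in the upper tail above 1.33: about 0.10
    print(stats.t.sf(1.33, df))

    # Cutoff t that leaves 2.5% in each tail (used for 95% confidence): about 2.10
    print(stats.t.ppf(0.975, df))

    # As df grows, the cutoffs approach the standard normal value 1.96
    print(stats.t.ppf(0.975, 1000), stats.norm.ppf(0.975))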
EXAMPLE 7.1
What proportion of the t-distribution with 18 degrees of freedom falls below -2.10?
Just like a normal probability problem, we first draw a picture as shown in Figure 7.4 and shade the area below -2.10. To find this area, we identify the appropriate row: df = 18. Then we identify the column containing the absolute value of -2.10; it is the third column. Because we are looking for just one tail, we examine the top line of the table, which shows that a one tail area for a value in the third column corresponds to 0.025. That is, 2.5% of the distribution falls below -2.10.

EXAMPLE 7.2
For the t-distribution with 18
degrees of freedom, what percent of the curve is contained between -1.330 and +1.330? Using row df = 18, we find 1.330 in the table. The area in each tail is 0.100 for a total of 0.200, which leaves 0.800 in the middle between -1.33 and +1.33. This corresponds to the 80%, which can be found at the very bottom of that column. 7.1. INFERENCE FOR A MEAN WITH THE T -DISTRIBUTION 365 Figure 7.4: The t-distribution with 18 degrees of freedom. The area below -2.10 has been shaded. Figure 7.5: Left: The t-distribution with 3 degrees of freedom, with the area farther than 3.182 units from 0 shaded. Right: The t-distribution with 20 degrees of freedom, with the area above 1.65 shaded. EXAMPLE 7.3 For the t-distribution with 3 degrees of freedom, as shown in the left panel of Figure 7.5, what should the value of t be so that 95% of the area of the curve falls between -t and +t? We can look at the column in the t-table that says 95% along the bottom row and trace it up to row df = 3 to find that t = 3.182. EXAMPLE 7.4 A t-distribution with 20 degrees of freedom is shown in the right panel of Figure 7.5. Estimate the proportion of the distribution falling above 1.65. We identify the row in the t-table using the degrees of freedom: df = 20. Then we look for 1.65; it is not listed. It falls between the first and second columns. Since these values bound 1.65, their tail areas will bound the tail area corresponding to 1.65. We identify the one tail area of the first and second columns, 0.050 and 0.10, and we conclude that between 5% and 10% of the distribution is more than 1.65 standard deviations above the mean. If we like, we can identify the precise area using statistical software: 0.0573. When the desired degrees of freedom is not listed on the table, choose a conservative value: round the degrees of freedom down, i.e. move up to the previous row listed. Another option is to use
a calculator or statistical software to get a precise answer. −4−2024−4−2024−4−2024 366 CHAPTER 7. INFERENCE FOR NUMERICAL DATA 7.1.3 Technology: finding area under the ttt-distribution It is possible to find areas under a t-distribution on a calculator. TI-84: FINDING AREA UNDER THE T-CURVE Use 2ND VARS, tcdf to find an area/proportion/probability between two t-scores or to the left or right of a t-score. 1. Choose 2ND VARS (i.e. DISTR). 2. Choose 6:tcdf. 3. Enter the lower (left) t-score and the upper (right) t-score. • If finding just a lower tail area, set lower to -100. • For an upper tail area, set upper to 100. 4. Enter the degrees of freedom after df:. 5. Down arrow, choose Paste, and hit ENTER. TI-83: Do steps 1-2, then enter the lower bound, upper bound, degrees of freedom, e.g. tcdf(2, 100, 5), and hit ENTER. CASIO FX-9750GII: FINDING AREA UNDER THE T-DISTRIBUTION 1. Navigate to STAT (MENU, then hit 2). 2. Select DIST (F5), then t (F2), and then tcd (F2). 3. If needed, set Data to Variable (Var option, which is F2). 4. Enter the Lower t-score and the Upper t-score. Set the degrees of freedom (df). • If finding just a lower tail area, set Lower to -100. • For an upper tail area, set Upper to 100. 5. Hit EXE, which will return the area probability (p) along with the t-scores for the lower and upper bounds. GUIDED PRACTICE 7.5 Use a calculator to find the area to the right of t = 3 under the t-distribution with 35 degrees of freedom.1 GUIDED PRACTICE 7.6 Without doing any calculations, will the area to the right of Z = 3 under the standard normal curve be greater than, less than, or equal to the area
to the right of t = 3 with 35 degrees of freedom?2 1Because we want to shade to the right of t = 3, we let lower = 3. There is no upper bound, so use a large value such as 100 for upper. Let df = 35. The area is 0.0025 or 0.25%. 2Because the t-distribution has greater spread and thicker tails than the normal distribution, we would expect the upper tail area to the right of Z = 3 to be less than the upper tail area to the right of t = 3. One can confirm that the area to the right of Z = 3 is 0.0013, which is less than 0.0025. With a smaller degrees of freedom, this difference would be even more pronounced. Try it! 7.1. INFERENCE FOR A MEAN WITH THE T -DISTRIBUTION 367 7.1.4 Checking conditions for inference on a mean using the ttt-distribution Using the t-distribution for inference on a mean requires that the theoretical sampling distribution for the sample mean ¯x is nearly normal. In practice, we check whether this assumption is reasonable by verifying that certain conditions are met. Independence. Observations can be considered independent when the data are collected from a random process, such as rolling a die, or from a random sample. Without a random sample or process, the standard error formula would not apply, and it is unclear to what population the inference would apply. Recall that when sampling without replacement from a finite population, the observations can be considered independent when sampling less than 10% of the population. Sample size / nearly normal population. We saw in Section 4.2 that in order for the sampling distribution for a sample mean to be nearly normal, we also need the sample to be drawn from a nearly normal population or we need the sample size to be at least 30 (n ≥ 30). What should we do when the sample size is small and we are not sure whether the population distribution is nearly normal? In this case, the best we can do is look at the data for excessive skew. If the data are very skewed or have obvious outliers, this suggests that the sample did not come from a nearly normal population. However, if the data do not show obvious skew or outliers, then the idea of a nearly normal population is generally considered reasonable. Note that by looking at a small data set, we cannot prove that the population
distribution is nearly normal. However, the data can suggest to us whether the population distribution being nearly normal is an unreasonable assumption. THE NORMALITY CONDITION WITH SMALL SAMPLES If the sample is small and there is strong skew or extreme outliers in the data, the population from which the sample was drawn may not be nearly normal. Ideally, we use a graph of the data to check for strong skew or outliers. When the full data set is not available, summary statistics can also be used. For larger samples, it is less necessary to check for skew in the data. If the sample size is 30 or more, it is no longer necessary that the population distribution be nearly normal. When the sample size is large, the Central Limit Theorem tells us that the sampling distribution for the sample mean will be nearly normal regardless of the distribution of the population. 7.1.5 One-sample ttt-interval for a mean Dolphins are at the top of the oceanic food chain, which causes dangerous substances such as mercury to concentrate in their organs and muscles. This is an important problem for both dolphins and other animals, like humans, who eat them. We would like to create a confidence interval to estimate the average mercury content in dolphin muscles. We will use a sample of 19 Risso’s dolphins from the Taiji area in Japan. The data are summarized in Figure 7.7. Because we are estimating a mean, we would like to construct a t-interval, but first we must check whether the conditions for using a t-interval are met. We will start by assuming that the sample of 19 Risso’s dolphins constitutes a random sample. Next, we note that the sample size is small (less than 30), and we do not know whether the distribution of mercury content for all dolphins is nearly normal. Therefore, we must look at the data. Since we do not have all of the data to graph, we look at the summary statistics provided in Figure 7.7. These summary statistics do not suggest any strong skew or outliers; all observations are within 2.5 standard deviations of the mean. Based on this evidence, we believe it is reasonable that the population distribution of mercury content in dolphins could be nearly normal. 368 CHAPTER 7. INFERENCE FOR NUMERICAL DATA Figure 7.6: A Risso’s dolphin. —————————– Photo by Mike Baird (www.baird
photos.com). CC BY 2.0 license.

     n     x̄     s    minimum   maximum
    19    4.4   2.3     1.7       9.2

Figure 7.7: Summary of mercury content in the muscle of 19 Risso’s dolphins from the Taiji area. Measurements are in µg/wet g (micrograms of mercury per wet gram of muscle).

With both conditions met, we will construct a 95% confidence interval. Recall that a confidence interval has the following form:
point estimate ± critical value × SE of estimate
The point estimate is the sample mean and the SE of the sample mean is given by s/√n. What do we use for the critical value? Since we are using the t-distribution, we use a t-table to find the critical value. We denote the critical value t.
• For a 95% confidence interval, we want to find the cutoff t such that 95% of the t-distribution is between -t and t.
• Using the t-table on page 364, we look at the row that corresponds to the degrees of freedom and the column that corresponds to the confidence level.

DEGREES OF FREEDOM FOR A SINGLE SAMPLE
If the sample has n observations and we are examining a single mean, then we use the t-distribution with df = n − 1 degrees of freedom.

EXAMPLE 7.7
Calculate a 95% confidence interval for the average mercury content in dolphin muscles based on this sample. Recall that n = 19, x̄ = 4.4 µg/wet g, and s = 2.3 µg/wet g.
To find the critical value t we use the t-distribution with n − 1 degrees of freedom. The sample size is 19, so df = 19 − 1 = 18 degrees of freedom. Using the t-table with row df = 18 and column corresponding to a 95% confidence level, we get t = 2.10. The point estimate is the sample mean x̄ and the standard error of a sample mean is given by s/√n. Now we have all the pieces we need to calculate a 95% confidence interval for the average mercury content in dolphin muscles.
point estimate ± critical value × SE of estimate
x̄ ± t × s/√n                df = n − 1
4.4 ± 2.10 × 2.3/√19        df = 18
= (3.29, 5.51)
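The same interval can be reproduced in software from the summary statistics alone. A small sketch in Python with scipy.stats (it repeats the hand computation above, with the critical value carried to more decimal places):

    import numpy as np
    from scipy import stats

    n, xbar, s = 19, 4.4, 2.3          # summary statistics for the dolphin sample
    df = n - 1

    t_star = stats.t.ppf(0.975, df)    # critical value for 95% confidence, about 2.10
    se = s / np.sqrt(n)

    lower = xbar - t_star * se
    upper = xbar + t_star * se
    print(round(lower, 2), round(upper, 2))   # approximately (3.29, 5.51)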
EXAMPLE 7.8
How do we interpret this 95% confidence interval? To what population is it applicable?
A random sample of Risso’s dolphins was taken from the Taiji area in Japan. The mercury content in the muscles of other types of dolphins and from dolphins from other regions may vary. Therefore, we can only make an inference to Risso’s dolphins from this area. We are 95% confident the true average mercury content in the muscles of Risso’s dolphins in the Taiji area of Japan is between 3.29 and 5.51 µg/wet gram.

CONSTRUCTING A CONFIDENCE INTERVAL FOR A MEAN
To carry out a complete confidence interval procedure to estimate a single mean µ,
Identify: Identify the parameter and the confidence level, C%.
The parameter will be an unknown population mean, e.g. the true mean (or average) mercury content in Risso’s dolphins.
Choose: Choose the appropriate interval procedure and identify it by name.
To estimate a single mean we use a 1-sample t-interval.
Check: Check conditions for the sampling distribution for x̄ to be nearly normal.
1. Independence: Data come from a random sample or random process. When sampling without replacement, check that the sample size is less than 10% of the population size.
2. Large sample or normal population: n ≥ 30 or the population distribution is nearly normal.
If the sample size is less than 30 and the population distribution is unknown, check for strong skew or outliers in the data. If neither is found, the condition that the population distribution is nearly normal is considered reasonable.
Calculate: Calculate the confidence interval and record it in interval form.
point estimate ± t × SE of estimate,    df = n − 1
point estimate: the sample mean x̄
SE of estimate: s/√n
t: use a t-table at row df = n − 1 and confidence level C%
( , )
Conclude: Interpret the interval and, if applicable, draw a conclusion in context.
Here, we are C% confident that the true mean of [...] is between _____ and _____. A conclusion depends upon whether the interval is entirely above, is entirely below, or contains the value of interest.
EXAMPLE 7.9
The FDA’s webpage provides some data on mercury content of fish. Based on a sample of 15 croaker white fish (Pacific), a sample mean and standard deviation were computed as 0.287 and 0.069 ppm (parts per million), respectively. The 15 observations ranged from 0.18 to 0.41 ppm. Construct an appropriate 95% confidence interval for the true average mercury content of croaker white fish (Pacific). Is there evidence that the average mercury content is greater than 0.275 ppm? Use the five step framework to organize your work.
Identify: The parameter of interest is the true mean mercury content in croaker white fish (Pacific). We want to estimate this at the 95% confidence level.
Choose: Because the parameter to be estimated is a single mean, we will use a 1-sample t-interval.
Check: We must check that the sampling distribution for the mean can be modeled using a normal distribution. We will assume that the sample constitutes a random sample of less than 10% of all croaker white fish (Pacific) and that independence is reasonable. The sample size n is small, but there are no obvious outliers; all observations are within 2 standard deviations of the mean. If there is skew, it is not too great. Therefore we think it is reasonable that the population distribution of mercury content in croaker white fish (Pacific) could be nearly normal.
Calculate: We will calculate the interval:
point estimate ± t × SE of estimate
The point estimate is the sample mean: x̄ = 0.287
The SE of the sample mean is: s/√n = 0.069/√15 = 0.0178
We find t for the one-sample case using the t-table at row df = n − 1 and confidence level C%. For a 95% confidence level and df = 15 − 1 = 14, t = 2.145. So the 95% confidence interval is given by:
0.287 ± 2.145 × 0.069/√15        df = 14
0.287 ± 2.145 × 0.0178
= (0.249, 0.325)
Conclude: We are 95% confident that the true average mercury content of croaker white fish (Pacific) is between 0.249 and 0.325 ppm. Because the interval contains 0.275 as well as values less than 0.275, we do not have evidence that the true average mercury content is greater than 0.275 ppm.
EXAMPLE 7.10
Based on the interval calculated in Example 7.9 above, can we say that 95% of croaker white fish (Pacific) have mercury content between 0.249 and 0.325 ppm?
No. The interval estimates the average amount of mercury with 95% confidence. It is not trying to capture 95% of the values.

7.1.6 Technology: the 1-sample t-interval

TI-83/84: 1-SAMPLE T-INTERVAL
Use STAT, TESTS, TInterval.
1. Choose STAT.
2. Right arrow to TESTS.
3. Down arrow and choose 8:TInterval.
4. Choose Data if you have all the data or Stats if you have the mean and standard deviation.
• If you choose Data, let List be L1 or the list in which you entered your data (don’t forget to enter the data!) and let Freq be 1.
• If you choose Stats, enter the mean, SD, and sample size.
5. Let C-Level be the desired confidence level.
6. Choose Calculate and hit ENTER, which returns:
( , )   the confidence interval
x̄       the sample mean
Sx      the sample SD
n       the sample size

CASIO FX-9750GII: 1-SAMPLE T-INTERVAL
1. Navigate to STAT (MENU button, then hit the 2 button or select STAT).
2. If necessary, enter the data into a list.
3. Choose the INTR option (F3 button), t (F2 button), and 1-S (F1 button).
4. Choose either the Var option (F2) or enter the data in using the List option.
5. Specify the interval details:
• Confidence level of interest for C-Level.
• If using the Var option, enter the summary statistics. If using List, specify the list and leave Freq value at 1.
6. Hit the EXE button, which returns
Left, Right   ends of the confidence interval
x̄             sample mean
sx            sample standard deviation
n             sample size

GUIDED PRACTICE 7.11
Use a calculator to find a 95% confidence interval for the mean mercury content in croaker white fish (Pacific). The sample size was 15, and the sample mean and standard deviation were computed as 0.287 and 0.069 ppm (parts per million), respectively.3

3Choose TInterval or equivalent. We do not have all the data, so choose Stats on a TI or Var on a Casio. Enter x̄ and Sx. Note: Sx is the sample standard deviation (0.069), not the SE. Let n = 15 and C-Level = 0.95. This should give the interval (0.249, 0.325).

7.1.7 Choosing a sample size when estimating a mean

In Section 6.1.5, we looked at sample size considerations when estimating a proportion. We take the same approach when estimating a mean. Recall that the margin of error is measured as the distance between the point estimate and the upper or lower bound of the confidence interval. We want to estimate a mean with a particular confidence level while putting an upper bound on the margin of error. What is the smallest sample size that will satisfy these conditions?
For a one-sample t-interval, the margin of error, ME, is given by ME = t × s/√n. The challenge in this case is that we need to know n to find t. But n is precisely what we are attempting to solve for! Fortunately, in most cases we will have a reasonable estimate for the population standard deviation and the desired n will be large, so we can use ME = z × σ/√n, making it easier to solve for n.

EXAMPLE 7.12
Blood pressure oscillates with the beating of the heart, and the systolic pressure is defined as the peak pressure when a person is at rest. The standard deviation of systolic blood pressure for people in the U.S. is about 25 mmHg (millimeters of mercury). How large of a sample is necessary to estimate the average systolic blood pressure of people in a particular town with a margin of error no greater than 4 mmHg using a 95% confidence level?
For this problem, we want to find the sample size n so that the margin of error, ME, is less than or equal to 4 mmHg. We start by writing the following inequality:
z × σ/√n ≤ 4
For a 95% confidence level, the critical value z = 1.96. Our best estimate for the population standard deviation is σ = 25. We substitute in these two values and we solve for n.
1.96 × 25/√n ≤ 4
1.96 × 25/4 ≤ √n
(1.96 × 25/4)² ≤ n
150.06 ≤ n
n = 151
The minimum sample size that meets the condition is 151. We round up because the sample size must be an integer and it must be greater than or equal to 150.06.

IDENTIFY A SAMPLE SIZE FOR A PARTICULAR MARGIN OF ERROR
To estimate the minimum sample size required to achieve a margin of error less than or equal to m, with C% confidence, we set up an inequality as follows:
z × σ/√n ≤ m
z depends on the desired confidence level and σ is the standard deviation associated with the population. We solve for the sample size, n.
Sample size computations are helpful in planning data collection, and they require careful forethought.
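This sample size calculation is easy to script. A brief sketch in Python (the values 25 mmHg and 4 mmHg are simply the ones from Example 7.12):

    import math
    from scipy import stats

    sigma = 25                          # assumed population standard deviation (mmHg)
    me = 4                              # largest acceptable margin of error (mmHg)
    z = stats.norm.ppf(0.975)           # about 1.96 for a 95% confidence level

    n = math.ceil((z * sigma / me) ** 2)   # round up to the next whole person
    print(n)                                # 151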
7.1.8 Hypothesis testing for a mean

Is the typical U.S. runner getting faster or slower over time? Technological advances in shoes, training, and diet might suggest runners would be faster. An opposing viewpoint might say that with the average body mass index on the rise, people tend to run slower.
In fact, all of these components might be influencing run time. We consider this question in the context of the Cherry Blossom Race, which is a 10-mile race in Washington, DC each spring.
The average time for all runners who finished the Cherry Blossom Race in 2006 was 93.3 minutes (93 minutes and about 18 seconds). We want to determine using data from 100 participants in the 2017 Cherry Blossom Race whether runners in this race are getting faster or slower, versus the other possibility that there has been no change. Figure 7.8 shows run times for 100 randomly selected participants.

Figure 7.8: A histogram of time for the sample of 2017 Cherry Blossom Race participants.

EXAMPLE 7.13
What are appropriate hypotheses for this context?
We know that the average run time for all runners in 2006 was 93.3 minutes. We have a sample of times from the 2017 race. We are interested in whether the average run time has changed, so we will use a two-sided HA. Let µ represent the average 10-mile run time of all participants in 2017, which is unknown to us.
H0: µ = 93.3 minutes. The average run time of all participants in 2017 was 93.3 min.
HA: µ ≠ 93.3 minutes. The average run time of all participants in 2017 was not 93.3 min.

The data come from a random sample from a large population, so the observations are independent. Do we need to check for skew in the data? No – with a sample size of 100, well over 30, the Central Limit Theorem tells us that the sampling distribution for x̄ will be nearly normal. With independence satisfied and slight skew not a concern for this large of a sample, we can proceed with performing a hypothesis test using the t-distribution.
The sample mean and sample standard deviation of the 100 runners from the 2017 Cherry Blossom Race are 97.3 and 17.0 minutes, respectively. We want to know whether the observed sample mean of 97.3 is far enough away from 93.3 to provide convincing evidence of a real difference, or if it is within the realm of expected variation for a sample of size 100. To answer this question we will find the
test statistic and p-value for the hypothesis test. Since we will be using a sample standard deviation in our calculation of the test statistic, we will need to use a t-distribution, just as we did with confidence intervals for a mean. We call the test statistic a T-statistic. It has the same general form as a Z-statistic:
T = (point estimate − null value) / (SE of estimate)
As we saw before, when carrying out inference on a single mean, the degrees of freedom is given by n − 1.

THE T-STATISTIC
The T-statistic (or T-score) is analogous to a Z-statistic (or Z-score). Both represent how many standard errors the observed value is from the null value.

EXAMPLE 7.14
Calculate the test statistic, degrees of freedom, and p-value for this test.
Here, our point estimate is the sample mean, x̄ = 97.3 minutes. The SE of the sample mean is given by s/√n, so the SE of estimate = 17.0/√100 = 1.7 minutes.
T = (97.3 − 93.3) / 1.7 = 2.35        df = 100 − 1 = 99
Using a calculator, we find that the area above 2.35 under the t-distribution with 99 degrees of freedom is 0.01. Because this is a two-tailed test, we double this. So the p-value = 2 × 0.01 = 0.02.

EXAMPLE 7.15
Do the data provide sufficient evidence that the average Cherry Blossom Run time in 2017 is different than in 2006?
This depends upon the desired significance level. Since the p-value = 0.02 < 0.05, there is sufficient evidence at the 5% significance level. However, as the p-value of 0.02 > 0.01, there is not sufficient evidence at the 1% significance level.

EXAMPLE 7.16
Would you expect the hypothesized value of 93.3 to fall inside or outside of a 95% confidence interval? What about a 99% confidence interval?
Because the hypothesized value of 93.3 was rejected by the two-sided α = 0.05 test, we would expect it to be outside the 95% confidence interval. However, because the hypothesized value of 93.3 was not rejected by the two-sided α = 0.01 test, we would expect it to fall inside the (wider) 99% confidence interval.
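These calculations can be reproduced from the summary statistics alone. A short sketch in Python (scipy.stats.ttest_1samp expects the raw run times, which are not shown here, so the T-statistic and two-sided p-value are computed directly):

    import math
    from scipy import stats

    n, xbar, s = 100, 97.3, 17.0       # 2017 sample summary statistics
    mu0 = 93.3                          # null value: the 2006 average
    df = n - 1

    se = s / math.sqrt(n)
    T = (xbar - mu0) / se
    p_value = 2 * stats.t.sf(abs(T), df)   # two-sided p-value

    print(round(T, 2), round(p_value, 3))  # about 2.35 and 0.021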
HYPOTHESIS TEST FOR A MEAN
To carry out a complete hypothesis test to test the claim that a single mean µ is equal to a null value µ0,
Identify: Identify the hypotheses and the significance level, α.
H0: µ = µ0
HA: µ ≠ µ0; HA: µ > µ0; or HA: µ < µ0
Choose: Choose the appropriate test procedure and identify it by name.
To test hypotheses about a single mean we use a 1-sample t-test.
Check: Check conditions for the sampling distribution for x̄ to be nearly normal.
1. Independence: Data come from a random sample or random process. When sampling without replacement, check that the sample size is less than 10% of the population size.
2. Large sample or normal population: n ≥ 30 or the population distribution is nearly normal.
- If the sample size is less than 30 and the population distribution is unknown, check for strong skew or outliers in the data. If neither is found, then the condition that the population is nearly normal is considered reasonable.
Calculate: Calculate the t-statistic, df, and p-value.
T = (point estimate − null value) / (SE of estimate),    df = n − 1
point estimate: the sample mean x̄
SE of estimate: s/√n
null value: µ0
p-value = (based on the t-statistic, the df, and the direction of HA)
Conclude: Compare the p-value to α, and draw a conclusion in context.
If the p-value is < α, reject H0; there is sufficient evidence that [HA in context].
If the p-value is > α, do not reject H0; there is not sufficient evidence that [HA in context].
EXAMPLE 7.17
Recall the example involving the mercury content in croaker white fish (Pacific). Based on a sample of size 15, a sample mean and standard deviation were computed as 0.287 and 0.069 ppm (parts per million), respectively. Carry out an appropriate test to determine if 0.25 is a reasonable value for the average mercury content of croaker white fish (Pacific). Use the five step method to organize your work.
Identify: We will test the following hypotheses at the α = 0.05 significance level.
H0: µ = 0.25
HA: µ ≠ 0.25    The mean mercury content is not 0.25 ppm.
Choose: Because we are hypothesizing about a single mean we choose the 1-sample t-test.
Check: The conditions were checked previously, namely – the data come from a random sample of less than 10% of the population of all croaker white fish (Pacific), and because n is less than 30, we verified that there is no strong skew or outliers in the data, so the assumption that the population distribution of mercury is nearly normally distributed is reasonable.
Calculate: We will calculate the t-statistic and the p-value.
T = (point estimate − null value) / (SE of estimate)
The point estimate is the sample mean: x̄ = 0.287
The SE of the sample mean is: s/√n = 0.069/√15 = 0.0178
The null value is the value hypothesized for the parameter in H0, which is 0.25. For the 1-sample t-test, df = n − 1.
T = (0.287 − 0.25) / 0.0178 = 2.07        df = 15 − 1 = 14
Because HA is a two-tailed test (≠), the p-value corresponds to the area to the right of t = 2.07 plus the area to the left of t = −2.07 under the t-distribution with 14 degrees of freedom. The p-value = 2 × 0.029 = 0.058.
Conclude: The p-value of 0.058 > 0.05, so we do not reject the null hypothesis. We do not have sufficient evidence that the average mercury content in croaker white fish (Pacific) is not 0.25 ppm.

GUIDED PRACTICE 7.18
Recall that the 95% confidence
interval for the average mercury content in croaker white fish was (0.249, 0.325). Discuss whether the conclusion of the hypothesis test in the previous example is consistent or inconsistent with the conclusion of the confidence interval.4 4It is consistent because 0.25 is located (just barely) inside the confidence interval, so it is considered a reasonable value. Our hypothesis test did not reject the hypothesis that µ = 0.25, also implying that it is a reasonable value. Note that the p-value was just over the cutoff of 0.05. This is consistent with the value of 0.25 being just inside the confidence interval. Also note that the hypothesis test did not prove that µ = 0.25. The value 0.25 is just one of many reasonable values for the true mean. 378 CHAPTER 7. INFERENCE FOR NUMERICAL DATA 7.1.9 Technology: the 1-sample ttt-test TI-83/84: 1-SAMPLE T-TEST Use STAT, TESTS, T-Test. 1. Choose STAT. 2. Right arrow to TESTS. 3. Down arrow and choose 2:T-Test. 4. Choose Data if you have all the data or Stats if you have the mean and standard deviation. 5. Let µ0 be the null or hypothesized value of µ. • If you choose Data, let List be L1 or the list in which you entered your data (don’t forget to enter the data!) and let Freq be 1. • If you choose Stats, enter the mean, SD, and sample size. 6. Choose =, <, or > to correspond to HA. 7. Choose Calculate and hit ENTER, which returns: t T-statistic p-value p the sample mean ¯x Sx n the sample standard deviation the sample size CASIO FX-9750GII: 1-SAMPLE T-TEST 1. Navigate to STAT (MENU button, then hit the 2 button or select STAT). 2. If necessary, enter the data into a list. 3. Choose the TEST option (F3 button). 4. Choose the t option (F2 button). 5. Choose the 1-S option (F1 button). 6. Choose either the Var option (F2) or enter the data in using the List option. 7. Spec
ify the test details: • Specify the sidedness of the test using the F1, F2, and F3 keys. • Enter the null value, µ0. • If using the Var option, enter the summary statistics. If using List, specify the list and leave Freq values at 1. 8. Hit the EXE button, which returns alternative hypothesis ¯x sx n t T-statistic p-value p sample mean sample standard deviation sample size GUIDED PRACTICE 7.19 The average time for all runners who finished the Cherry Blossom Run in 2006 was 93.3 minutes. In 2017, the average time for 100 randomly selected participants was 97.3, with a standard deviation of 17.0 minutes. Use a calculator to find the T -statistic and p-value for the appropriate test to see if the average time for the participants in 2017 is different than it was in 2006.5 5Choose T-Test or equivalent. Let µ0 be 93.3. ¯x is 97.3, Sx is 17.0, and n = 100. Choose = to correspond to HA. We get t = 2.353 and the p-value p = 0.021. 7.1. INFERENCE FOR A MEAN WITH THE T -DISTRIBUTION 379 Section summary • The t-distribution. – When calculating a test statistic for a mean, using the sample standard deviation in place of the population standard deviation gives rise to a new distribution called the t-distribution. – As the sample size and degrees of freedom increase, s becomes a more stable estimate of σ, and the corresponding t-distribution has smaller spread. – As the degrees of freedom go to ∞, the t-distribution approaches the normal distribution. This is why we can use the t-table at df = ∞ to find the value of z. • When carrying out inference for a single mean, we use the t-distribution with n − 1 degrees of freedom. • When there is one sample and the parameter of interest is a single mean: – Estimate µ at the C% confidence level using a 1-sample ttt-interval. – Test H0: µ = µ0 at the α significance level using a 1-sample ttt-test. • The one-sample t-interval and t-
test require that the sampling distribution for ¯x be nearly normal. For this reason we must check that the following conditions are met. 1. Independence: The data come from a random sample or random process. When sampling without replacement, check that the sample size is less than 10% of the population size. 2. Large sample or normal population: n ≥ 30 or population distribution is nearly normal. - If the sample size is less than 30 and the population distribution is unknown, check for strong skew or outliers in the data. If neither is found, then the condition that the population distribution is nearly normal is considered reasonable. • When the conditions are met, we calculate the confidence interval and the test statistic as we did in the previous chapter, except that we use t* for the critical value and we use T for the test statistic. Confidence interval: point estimate ± t* × SE of estimate Test statistic: T = (point estimate − null value)/(SE of estimate) Here the point estimate is the sample mean: ¯x. The SE of estimate is the SE of the sample mean: s/√n. The degrees of freedom is given by df = n − 1. • To calculate the minimum sample size required to estimate a mean with C% confidence and a margin of error no greater than m, we set up an inequality as follows: z* · σ/√n ≤ m, where z* depends on the desired confidence level and σ is the standard deviation associated with the population. We solve for the sample size, n. Always round the answer up to the next integer, since n refers to a number of people or things.
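The sketch below shows how these two calculations might be carried out in software rather than on a handheld calculator. It is an illustrative Python example (it assumes the scipy library is available), not part of the text's calculator instructions: the t-test uses the summary values from Guided Practice 7.19, and the sample size portion uses made-up values for σ and m.

```python
from math import ceil, sqrt
from scipy import stats

# 1-sample t-test from summary statistics (values from Guided Practice 7.19)
x_bar, s, n, mu_0 = 97.3, 17.0, 100, 93.3
se = s / sqrt(n)                              # SE of the sample mean
t_stat = (x_bar - mu_0) / se                  # T = (point estimate - null value) / SE
p_value = 2 * stats.t.sf(abs(t_stat), n - 1)  # two-sided p-value
print(round(t_stat, 3), round(p_value, 3))    # about 2.353 and 0.021

# Minimum sample size for a margin of error of at most m at 95% confidence;
# sigma and m here are made-up illustration values, not from the text.
sigma, m = 50, 5
z_star = stats.norm.ppf(0.975)                # z* for 95% confidence
n_min = ceil((z_star * sigma / m) ** 2)       # solve z* * sigma / sqrt(n) <= m for n
print(n_min)                                  # rounded up to the next whole number
```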
380 CHAPTER 7. INFERENCE FOR NUMERICAL DATA Exercises 7.1 Identify the critical t*. A random sample is selected from an approximately normal population with unknown standard deviation. Find the degrees of freedom and the critical t-value (t*) for the given sample size and confidence level. (a) n = 6, CL = 90% (b) n = 21, CL = 98% (c) n = 29, CL = 95% (d) n = 12, CL = 99% 7.2 t-distribution. The figure on the right shows three unimodal and symmetric curves: the standard normal (z) distribution, the t-distribution with 5 degrees of freedom, and the t-distribution with 1 degree of freedom. Determine which is which, and explain your reasoning. [Figure: three overlaid curves, drawn with solid, dashed, and dotted lines, over the range −4 to 4.] 7.3 Find the p-value, Part I. A random sample is selected from an approximately normal population with an unknown standard deviation. Find the p-value for the given sample size and test statistic. Also determine if the null hypothesis would be rejected at α = 0.05. (a) n = 11, T = 1.91 (b) n = 17, T = −3.45 (c) n = 7, T = 0.83 (d) n = 28, T = 2.13 7.4 Find the p-value, Part II. A random sample is selected from an approximately normal population with an unknown standard deviation. Find the p-value for the given sample size and test statistic. Also determine if the null hypothesis would be rejected at α = 0.01. (a) n = 26, T = 2.485 (b) n = 18, T = 0.5 7.5 Working backwards, Part I. A 95% confidence interval for a population mean, µ, is given as (18.985, 21.015). This confidence interval is based on a simple random sample of 36 observations. Calculate the sample mean and standard deviation. Assume that all conditions necessary for inference are satisfied. Use the t-distribution in any calculations. 7.6 Working backwards, Part II. A 90% confidence interval for a population mean is (65, 77). The population distribution is approximately normal and the population standard deviation is unknown. This confidence interval is based on a simple random sample of 25 observations. Calculate the sample mean, the margin of error, and the sample standard deviation. 7.1. INFERENCE FOR A MEAN WITH THE T -DISTRIBUTION 381 7.7 Sleep habits of New Yorkers. New York is known as “the city that never sleeps”. A random sample of 25 New Yorkers were asked how much sleep they get per night. Statistical summaries of these data are shown below. The point estimate suggests New Yorkers sleep less than 8 hours a night on average. Evaluate the claim that New York is the city that never sleeps keeping in mind that, despite this claim, the true average number of hours New Yorkers sleep could be less than 8
hours or more than 8 hours.

n    ¯x     s      min    max
25   7.73   0.77   6.17   9.78

(a) Write the hypotheses in symbols and in words. (b) Check conditions, then calculate the test statistic, T, and the associated degrees of freedom. (c) Find and interpret the p-value in this context. Drawing a picture may be helpful. (d) What is the conclusion of the hypothesis test? (e) If you were to construct a 90% confidence interval that corresponded to this hypothesis test, would you expect 8 hours to be in the interval? 7.8 Heights of adults. Researchers studying anthropometry collected body girth measurements and skeletal diameter measurements, as well as age, weight, height and gender, for 507 physically active individuals. The histogram below shows the sample distribution of heights in centimeters.6

Min     Q1      Median   Mean    SD    Q3      Max
147.2   163.8   170.3    171.1   9.4   177.8   198.1

(a) What is the point estimate for the average height of active individuals? What about the median? (b) What is the point estimate for the standard deviation of the heights of active individuals? What about the IQR? (c) Is a person who is 1m 80cm (180 cm) tall considered unusually tall? And is a person who is 1m 55cm (155 cm) considered unusually short? Explain your reasoning. (d) The researchers take another random sample of physically active individuals. Would you expect the mean and the standard deviation of this new sample to be the ones given above? Explain your reasoning. (e) The sample means obtained are point estimates for the mean height of all active individuals, if the sample of individuals is equivalent to a simple random sample. What measure do we use to quantify the variability of such an estimate? Compute this quantity using the data from the original sample under the condition that the data are a simple random sample. 6G. Heinz et al. “Exploring relationships in body dimensions”. In: Journal of Statistics Education 11.2 (2003). 382 CHAPTER 7. INFERENCE FOR NUMERICAL DATA 7.9 Find the mean. You are given the following hypotheses: H0: µ = 60, HA: µ ≠ 60. We know that the sample standard deviation is 8 and the sample size is 20. For what sample mean would
the p-value be equal to 0.05? Assume that all conditions necessary for inference are satisfied. 7.10 t* vs. z*. For a given confidence level, t*df is larger than z*. Explain how t*df being slightly larger than z* affects the width of the confidence interval. 7.11 Play the piano. Georgianna claims that in a small city renowned for its music school, the average child takes less than 5 years of piano lessons. We have a random sample of 20 children from the city, with a mean of 4.6 years of piano lessons and a standard deviation of 2.2 years. (a) Evaluate Georgianna’s claim using a hypothesis test and include all steps of the Identify, Choose, Check, Calculate, Conclude framework. (b) Construct a 95% confidence interval for the number of years students in this city take piano lessons and include all steps of the Identify, Choose, Check, Calculate, Conclude framework. (c) Do your results from the hypothesis test and the confidence interval agree? Explain your reasoning. 7.12 Auto exhaust and lead exposure. Researchers interested in lead exposure due to car exhaust sampled the blood of 52 police officers subjected to constant inhalation of automobile exhaust fumes while working traffic enforcement in a primarily urban environment. The blood samples of these officers had an average lead concentration of 124.32 µg/l and a SD of 37.74 µg/l; a previous study of individuals from a nearby suburb, with no history of exposure, found an average blood level concentration of 35 µg/l.7 (a) Write down the hypotheses that would be appropriate for testing if the police officers appear to have been exposed to a different concentration of lead. (b) Explicitly state and check all conditions necessary for inference on these data. (c) Regardless of your answers in part (b), test the hypothesis that the downtown police officers have a higher lead exposure than the group in the previous study. Interpret your results in context. 7.13 Car insurance savings. A market researcher wants to evaluate car insurance savings at a competing company. Based on past studies he is assuming that the standard deviation of savings is $100. He
wants to collect data such that he can get a margin of error of no more than $10 at a 95% confidence level. How large of a sample should he collect? 7.14 SAT scores. The standard deviation of SAT scores for students at a particular Ivy League college is 250 points. Two statistics students, Raina and Luke, want to estimate the average SAT score of students at this college as part of a class project. They want their margin of error to be no more than 25 points. (a) Raina wants to use a 90% confidence interval. How large a sample should she collect? (b) Luke wants to use a 99% confidence interval. Without calculating the actual sample size, determine whether his sample should be larger or smaller than Raina’s, and explain your reasoning. (c) Calculate the minimum required sample size for Luke. 7WI Mortada et al. “Study of lead exposure from automobile exhaust as a risk for nephrotoxicity among traffic policemen.” In: American journal of nephrology 21.4 (2000), pp. 274–279. 7.2. INFERENCE WITH PAIRED DATA 383 7.2 Inference with paired data When we have two observations on each person or each case, we can answer questions such as the following: • Do students do better on reading or writing sections of standardized tests? • How do the number of days with temperature above 90°F compare between 1948 and 2018? • Are Amazon textbook prices lower than the college bookstore prices? If so, how much lower, on average? Learning objectives 1. Distinguish between paired and unpaired data. 2. Recognize that inference procedures with paired data use the same one-sample t-procedures as in the previous section, and that these procedures are applied using the differences of the paired observations. 3. Carry out a complete hypothesis test with paired data. 4. Carry out a complete confidence interval procedure with paired data. 7.2.1 Paired observations and samples In the previous edition of this textbook, we found that Amazon prices were, on average, lower than those of the UCLA Bookstore for UCLA courses in 2010. It’s been several years, and many stores have adapted to the online market, so we wondered, how is the UCLA Bookstore doing today? We sampled 201 UCLA courses. Of
those, 68 required books that could be found on Amazon. A portion of the data set from these courses is shown in Figure 7.9, where prices are in U.S. dollars.

      subject                    course number   bookstore   amazon   price difference
1     American Indian Studies    M10             47.97       47.45    0.52
2     Anthropology               2               14.26       13.55    0.71
3     Arts and Architecture      10              13.50       12.53    0.97
...   ...                        ...             ...         ...      ...
67    Korean                     1               24.96       23.79    1.17
68    Jewish Studies             M10             35.96       32.40    3.56

Figure 7.9: Five cases of the textbooks data set. Each textbook has two corresponding prices in the data set: one for the UCLA Bookstore and one for Amazon. Therefore, each textbook price from the UCLA bookstore has a natural correspondence with a textbook price from Amazon. When two sets of observations have this special correspondence, they are said to be paired. PAIRED DATA Two sets of observations are paired if each observation in one set has a special correspondence or connection with exactly one observation in the other data set. 384 CHAPTER 7. INFERENCE FOR NUMERICAL DATA Figure 7.10: Histogram of the difference in price for each book sampled. These data are very strongly skewed. Explore this data set on Tableau Public. To analyze paired data, it is often useful to look at the difference in outcomes of each pair of observations. In the textbook data set, we look at the differences in prices, which is represented as the diff variable. Here, for each book, the differences are taken as UCLA Bookstore price − Amazon price It is important that we always subtract using a consistent order; here Amazon prices are always subtracted from UCLA prices. A histogram of these differences is shown in Figure 7.10. Using differences between paired observations is a common and useful way to analyze paired data.
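To show how the diff variable might be built in software, here is a brief Python sketch (assuming numpy) using only the five books displayed in Figure 7.9; it is illustrative, and the values it prints describe those five books only, not the full sample of 68.

```python
import numpy as np

# Prices for the five books shown in Figure 7.9 (the full analysis uses all 68 books)
bookstore = np.array([47.97, 14.26, 13.50, 24.96, 35.96])
amazon    = np.array([47.45, 13.55, 12.53, 23.79, 32.40])

diff = bookstore - amazon  # always UCLA Bookstore price - Amazon price
# number of differences, their mean, and their sample standard deviation
print(len(diff), round(diff.mean(), 2), round(diff.std(ddof=1), 2))
```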
GUIDED PRACTICE 7.20 The first difference shown in Figure 7.9 is computed as: 47.97 − 47.45 = 0.52. What does this difference tell us about the price for this textbook on Amazon versus the UCLA bookstore?8 7.2.2 Hypothesis tests for a mean of differences To analyze a paired data set, we simply analyze the differences. We can use the same t-distribution techniques we applied in the last section. ndiff = 68, ¯xdiff = 3.58, sdiff = 13.42 Figure 7.11: Summary statistics for the price differences. There were 68 books, so there are 68 differences. 8The difference is taken as UCLA Bookstore price − Amazon price. Because the difference is positive, it tells us that the UCLA Bookstore price was greater for this textbook. In fact, it was $0.52, or 52 cents, more expensive at the UCLA bookstore than on Amazon. 7.2. INFERENCE WITH PAIRED DATA 385 We will set up and implement a hypothesis test to determine whether, on average, there is a difference in textbook prices between Amazon and the UCLA bookstore. We are considering two scenarios: there is no difference in prices or there is some difference in prices. H0: µdiff = 0. On average, there is no difference in textbook prices. HA: µdiff ≠ 0. On average, there is some difference in textbook prices. Can the t-distribution be used for this application? The observations are based on a random sample from a large population, so independence is reasonable. While the distribution of the data is very strongly skewed, we do have n = 68 observations. This sample size is large enough that we do not have to worry about whether the population distribution for difference in price might be nearly normal or not. Because the conditions are satisfied, we can apply the t-distribution in this setting. We compute the standard error associated with ¯xdiff using the standard deviation of the differences (sdiff = 13.42) and the number of differences (ndiff = 68): SE¯xdiff = sdiff/√ndiff = 13.42/√68 = 1.63 Next we compute the test statistic. The point estimate is the observed value of ¯xdiff. The null value is the value hypothesized under the null hypothesis.
Here, the null hypothesis is that the true mean of the differences is 0. T = (point estimate − null value)/(SE of estimate) = (3.58 − 0)/1.63 = 2.20 The degrees of freedom are df = 68 − 1 = 67. To visualize the p-value, the sampling distribution for ¯xdiff is drawn as though H0 is true. This is shown in Figure 7.12. Because this is a two-sided test, the p-value corresponds to the area in both tails. Using statistical software, we find the area in the tails to be 0.0312. Because the p-value of 0.0312 is less than 0.05, we reject the null hypothesis. We have evidence that, on average, there is a difference in textbook prices. In particular, we can say that, on average, Amazon prices are lower than the UCLA Bookstore prices for UCLA course textbooks. Figure 7.12: Sampling distribution for the mean difference in book prices, if the true average difference is zero. EXAMPLE 7.21 We have evidence to conclude Amazon is, on average, less expensive. Does this mean that UCLA students should always buy their books on Amazon? No. The fact that Amazon is, on average, less expensive, does not imply that it is less expensive for every textbook. Examining the distribution shown in Figure 7.10, we see that there are certainly a handful of cases where Amazon prices are much lower than the UCLA Bookstore’s, which suggests it is worth checking Amazon or other online sites before purchasing. However, in many cases the Amazon price is above what the UCLA Bookstore charges, and most of the time the price isn’t that different. For reference, this is a very different result from what we (the authors) had seen in a similar data set from 2010. At that time, Amazon prices were almost uniformly lower than the UCLA Bookstore’s, and by a large margin, making the case to use Amazon over the UCLA Bookstore quite compelling. 386 CHAPTER 7. INFERENCE FOR NUMERICAL DATA HYPOTHESIS TEST FOR A MEAN OF DIFFERENCES To carry out a complete hypothesis test to test the claim that a mean of differences µdiff is equal to 0,
Identify: Identify the hypotheses and the significance level, α. H0: µdiff = 0 HA: µdiff ≠ 0; HA: µdiff > 0; or HA: µdiff < 0 Choose: Choose the appropriate test procedure and identify it by name. To test hypotheses about a mean of differences we use a 1-sample t-test with paired data. Check: Check conditions for the sampling distribution for ¯xdiff to be nearly normal. 1. Independence: Data come from one random sample (with paired data) or from a randomized matched pairs experiment. When sampling without replacement, check that the sample size is less than 10% of the population size. 2. Large sample or normal population: ndiff ≥ 30 or population of diffs is nearly normal. - If the number of differences is less than 30 and the distribution of the population of differences is unknown, check for strong skew or outliers in the sample differences. If neither is found, then the condition that the population of differences is nearly normal is considered reasonable. Calculate: Calculate the t-statistic, df, and p-value. T = (point estimate − null value)/(SE of estimate), df = ndiff − 1 point estimate: the sample mean of differences ¯xdiff SE of estimate: sdiff/√ndiff null value: 0 p-value = (based on the t-statistic, the df, and the direction of HA) Conclude: Compare the p-value to α, and draw a conclusion in context. If the p-value is < α, reject H0; there is sufficient evidence that [HA in context]. If the p-value is > α, do not reject H0; there is not sufficient evidence that [HA in context].
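As a complement to the calculator-based approach described later in this section, the following Python sketch (assuming scipy) applies the same paired test to the textbook-price summary statistics from Figure 7.11. It is an illustration, not the book's own code.

```python
from math import sqrt
from scipy import stats

# Summary statistics for the 68 price differences (Figure 7.11)
n_diff, x_bar_diff, s_diff = 68, 3.58, 13.42

se = s_diff / sqrt(n_diff)                         # SE of the mean difference, about 1.63
t_stat = (x_bar_diff - 0) / se                     # null value is 0; T is about 2.20
p_value = 2 * stats.t.sf(abs(t_stat), n_diff - 1)  # two-sided p-value, df = 67
print(round(t_stat, 2), round(p_value, 4))         # about 2.20 and 0.031
```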
7.2. INFERENCE WITH PAIRED DATA 387 EXAMPLE 7.22 An SAT preparation company claims that its students’ scores improve by over 100 points on average after their course. A consumer group would like to evaluate this claim, and they collect data on a random sample of 30 students who took the class. Each of these students took the SAT before and after taking the company’s course, so we have a difference in scores for each student. We will examine these differences x1 = 57, x2 = 133, ..., x30 = 140. The distribution of the differences has a mean of 135.9, a standard deviation of 82.2, and is shown below. Do the data provide convincing evidence to back up the company’s claim? Use the five step framework to organize your work. Identify: We will test the following hypotheses at the α = 0.05 level: H0: µdiff = 100. Student scores improve by 100 points, on average. HA: µdiff > 100. Student scores improve by more than 100 points, on average. Here, diff = SAT score after course − SAT score before course. Choose: Because we have paired data and the parameter to be estimated is a mean of differences, we will use a 1-sample t-test with paired data. Check: We have a random sample of students and have paired data on them. We will assume that this sample of size 30 represents less than 10% of the total population of such students. Finally, the number of differences is ndiff = 30 ≥ 30, so we can proceed with the 1-sample t-test. Calculate: We will calculate the test statistic, df, and p-value. T = (point estimate − null value)/(SE of estimate) The point estimate is the sample mean of differences: ¯xdiff = 135.9. The SE of the sample mean of differences is: sdiff/√ndiff = 82.2/√30 = 15.0. Because this is a one-sample t-test, the degrees of freedom is ndiff − 1. T = (135.9 − 100)/15.0 = 2.4, df = 30 − 1 = 29. The p-value is the area to the right of 2.4 under the t-distribution with 29 degrees of freedom. The p-value = 0.012. Conclude: p-value = 0.012 < α so we reject the null hypothesis. The data provide convincing evidence to support the company’s claim that students’ scores improve by more than 100 points, on average, following the class.
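For the one-sided test in Example 7.22, where the null value is 100 rather than 0, the same kind of sketch changes only the null value and the tail used for the p-value. Again, this Python snippet (assuming scipy) is illustrative and not part of the text.

```python
from math import sqrt
from scipy import stats

# Summary statistics for the 30 score differences (after - before)
n_diff, x_bar_diff, s_diff = 30, 135.9, 82.2
mu_0 = 100                                  # null value from the company's claim

se = s_diff / sqrt(n_diff)                  # about 15.0
t_stat = (x_bar_diff - mu_0) / se           # about 2.4
p_value = stats.t.sf(t_stat, n_diff - 1)    # one-sided p-value (HA: mu_diff > 100)
print(round(t_stat, 2), round(p_value, 3))  # about 2.4 and 0.012
```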
GUIDED PRACTICE 7.23 Because we found evidence to support the company’s claim, does this mean that a student will score more than 100 points higher on the SAT if they take the class than if they do not take the class?9 9No. First, this is an observational study, so we cannot make a causal conclusion. Maybe SAT test takers tend to improve their score over time even if they don’t take this SAT class. Second, the test considers the average. It does not imply that each student improved. With a sample standard deviation of 82.2 and a mean of 135.9, some students did worse after the SAT class, as shown in the histogram in Example 7.22. 388 CHAPTER 7. INFERENCE FOR NUMERICAL DATA 7.2.3 Technology: the 1-sample t-test with paired data When carrying out a 1-sample t-test with paired data, make sure to use the sample differences or the summary statistics for the differences. TI-83/84: 1-SAMPLE T-TEST WITH PAIRED DATA Use STAT, TESTS, T-Test. 1. Choose STAT. 2. Right arrow to TESTS. 3. Down arrow and choose 2:T-Test. 4. Choose Data if you have all the data or Stats if you have the mean and standard deviation. 5. Let µ0 be the null or hypothesized value of µdiff. • If you choose Data, let List be L3 or the list in which you entered the differences, and let Freq be 1. • If you choose Stats, enter the mean, SD, and sample size of the differences. 6. Choose ≠, <, or > to correspond to HA. 7. Choose Calculate and hit ENTER, which returns: t: T-statistic, p: p-value, ¯x: the sample mean of the differences, Sx: the sample SD of the differences, n: the sample size of the differences. CASIO FX-9750GII: 1-SAMPLE T-TEST WITH PAIRED DATA 1. Compute the differences of the paired observations. 2. Using the computed differences, follow the instructions for a 1-sample t-test.
7.2.4 Confidence intervals for the mean of a difference In the previous examples, we carried out a 1-sample t-test with paired data, where the null hypothesis was that the true mean of differences is zero. Sometimes we want to estimate the true mean of differences with a confidence interval, and we use a 1-sample t-interval with paired data. Consider again the table summarizing data on: UCLA Bookstore price − Amazon price, for each of the 68 books sampled. ndiff = 68, ¯xdiff = 3.58, sdiff = 13.42 Figure 7.13: Summary statistics for the price differences. There were 68 books, so there are 68 differences. We construct a 95% confidence interval for the average price difference between books at the UCLA Bookstore and on Amazon. Conditions have already been verified, namely, that we have paired data from a random sample and that the number of differences is at least 30. We must find the critical value, t*. Since df = 67 is not on the t-table, round the df down to 60 to get a t* of 2.00 for 95% confidence. (See Section 7.2.5 for how to get a more precise interval using a calculator.) 7.2. INFERENCE WITH PAIRED DATA 389 Plugging the t* value, point estimate, and standard error into the confidence interval formula, we get: point estimate ± t* × SE of estimate → 3.58 ± 2.00 × 13.42/√68 → (0.33, 6.83) We are 95% confident that the UCLA bookstore is, on average, between $0.33 and $6.83 more expensive than Amazon for UCLA course books. This interval does not contain zero, so it is consistent with the earlier hypothesis test that rejected the null hypothesis that the average difference was 0. Because our interval is entirely above 0, we have evidence that the true average difference is greater than zero. Unlike the hypothesis test, the confidence interval gives us a good idea of how much more expensive the UCLA bookstore might be, on average.
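The interval above can also be reproduced in software. The short Python sketch below (assuming scipy) mirrors the calculation; using the exact df = 67 instead of the rounded t-table row gives essentially the same interval, as noted later in Section 7.2.5.

```python
from math import sqrt
from scipy import stats

n_diff, x_bar_diff, s_diff = 68, 3.58, 13.42
se = s_diff / sqrt(n_diff)                  # about 1.63

t_star = stats.t.ppf(0.975, df=n_diff - 1)  # critical value for 95% confidence, df = 67
lower, upper = x_bar_diff - t_star * se, x_bar_diff + t_star * se
print(round(lower, 2), round(upper, 2))     # about (0.33, 6.83)
```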
EXAMPLE 7.24 Based on the interval, can we say that 95% of the books cost between $0.33 and $6.83 more at the UCLA Bookstore than on Amazon? No. This interval is attempting to estimate the average difference with 95% confidence. It is not attempting to capture 95% of the values. A quick look at Figure 7.10 shows that much less than 95% of the differences fall between $0.33 and $6.83. CONSTRUCTING A CONFIDENCE INTERVAL FOR A MEAN OF DIFFERENCES To carry out a complete confidence interval procedure to estimate a mean of differences µdiff, Identify: Identify the parameter and the confidence level, C%. The parameter will be a mean of differences, e.g. the true mean of the differences in county population (year 2018 − year 2017). Choose: Choose the appropriate interval procedure and identify it by name. To estimate a mean of differences we use a 1-sample t-interval with paired data. Check: Check conditions for the sampling distribution for ¯xdiff to be nearly normal. 1. Independence: Data come from one random sample (with paired data) or from a randomized matched pairs experiment. When sampling without replacement, check that the sample size is less than 10% of the population size. 2. Large sample or normal population: ndiff ≥ 30 or population of diffs is nearly normal. - If the number of differences is less than 30 and the distribution of the population of differences is unknown, check for strong skew or outliers in the sample differences. If neither is found, then the condition that the population of differences is nearly normal is considered reasonable. Calculate: Calculate the confidence interval and record it in interval form. point estimate ± t* × SE of estimate, df: ndiff − 1 point estimate: the sample mean of differences ¯xdiff SE of estimate: sdiff/√ndiff t*: use a t-table at row df = ndiff − 1 and confidence level C%
( , ) Conclude: Interpret the interval and, if applicable, draw a conclusion in context. We are C% confident that the true mean of the differences in [...] is between ___ and ___. If applicable, draw a conclusion based on whether the interval is entirely above, entirely below, or contains the value 0. 390 CHAPTER 7. INFERENCE FOR NUMERICAL DATA EXAMPLE 7.25 An SAT preparation company claims that its students’ scores improve by over 100 points on average after their course. A consumer group would like to evaluate this claim, and they collect data on a random sample of 30 students who took the class. Each of these students took the SAT before and after taking the company’s course, so we have a difference in scores for each student. We will examine these differences x1 = 57, x2 = 133, ..., x30 = 140 as a sample to evaluate the company’s claim. The distribution of the differences, shown in Figure 7.14, has a mean of 135.9 and a standard deviation of 82.2. Construct a confidence interval to estimate the true average increase in SAT after taking the company’s course. Is there evidence at the 95% confidence level that students score an average of more than 100 points higher after the class? Use the five step framework to organize your work. Identify: The parameter we want to estimate is µdiff, the true change in SAT score after taking the company’s course. Here, diff = SAT score after course − SAT score before course. We will estimate this parameter at the 95% confidence level. Choose: Because we have paired data and the parameter to be estimated is a mean of differences, we will use a 1-sample t-interval with paired data. Check: We have a random sample of students with paired observations on them. We will assume that these 30 students represent less than 10% of the total number of such students. Finally, the number of differences is ndiff = 30 ≥ 30, so we can proceed with the 1-sample t-interval. Calculate: We will calculate the confidence interval
as follows. point estimate ± t × SE of estimate The point estimate is the sample mean of differences: ¯xdiff = 135.9 The SE of the sample mean of differences is: sdiff√ ndiff = 82.2√ 30 = 15.0 We find t for the one-sample case using the t-table at row df = n − 1 and confidence level C%. For a 95% confidence level and df = 30 − 1 = 29, t = 2.045. The 95% confidence interval is given by: 135.9 ± 2.045 × 82.2 √ 30 135.9 ± 2.045 × 15.0 = (105.2, 166.6) df = 29 Conclude: We are 95% confident that the true average increase in SAT score following the company’s course is between 105.2 points to 166.6 points. There is sufficient evidence that students score greater than 100 points higher, on average, after the company’s course because the entire interval is above 100. GUIDED PRACTICE 7.26 Based on the interval (105.2, 166.6), calculated previously, can we say that 95% of student scores increased between 105.2 and 166.6 points after taking the company’s course?10 10No. This interval is attempting to capture the average increase. It is not attempting to capture 95% of the increases. Looking at Figure 7.14, we see that only a small percent had increases between 105.2 and 166.6. 7.2. INFERENCE WITH PAIRED DATA 391 Figure 7.14: SAT score after course minus the SAT score before course. 7.2.5 Technology: the 1-sample ttt-interval with paired data When carrying out a 1-sample t-interval with paired data, make sure to use the sample differ- ences or the summary statistics for the differences. TI-83/84: 1-SAMPLE T-INTERVAL WITH PAIRED DATA Use STAT, TESTS, TInterval. 1. Choose STAT. 2. Right arrow to TESTS. 3. Down arrow and choose 8:TInterval. 4. Choose Data
if you have all the data or Stats if you have the mean and standard deviation. • If you choose Data, let List be L3 or the list in which you entered the differences (don’t forget to enter the differences!) and let Freq be 1. • If you choose Stats, enter the mean, SD, and sample size of the differences. 5. Let C-Level be the desired confidence level. 6. Choose Calculate and hit ENTER, which returns: ( , ): the confidence interval for the mean of the differences, ¯x: the sample mean of the differences, Sx: the sample SD of the differences, n: the number of differences in the sample. CASIO FX-9750GII: 1-SAMPLE T-INTERVAL WITH PAIRED DATA 1. Compute the differences of the paired observations. 2. Using the computed differences, follow the instructions for a 1-sample t-interval. GUIDED PRACTICE 7.27 In our UCLA textbook example, we had 68 differences of paired observations. Because df = 67 was not on our t-table, we rounded the df down to 60. This gave us a 95% confidence interval (0.325, 6.834). Use a calculator to find the more exact 95% confidence interval based on 67 degrees of freedom. How different is it from the one we calculated based on 60 degrees of freedom?11 ndiff = 68, ¯xdiff = 3.58, sdiff = 13.42 11Choose TInterval or equivalent. We do not have all the data, so choose Stats on a TI or Var on a Casio. Enter ¯x = 3.58 and Sx = 13.42. Let n = 68 and C-Level = 0.95. This should give the interval (0.332, 6.828). The intervals are equivalent when rounded to two decimal places. 392 CHAPTER 7. INFERENCE FOR NUMERICAL DATA Section summary • Paired data can come from a random sample or a matched pairs experiment. With paired data, we are often interested in whether the difference is
positive, negative, or zero. For example, the difference of paired data from a matched pairs experiment tells us whether one treatment did better, worse, or the same as the other treatment for each subject. • We use the notation ¯xdiff to represent the mean of the sample differences. Likewise, sdiff is the standard deviation of the sample differences, and ndiff is the number of sample differences. • To carry out inference with paired data, we first find all of the sample differences. Then, we perform a one-sample procedure using the differences. For this reason, the confidence interval and hypothesis test with paired data use the one-sample t-procedures, where the degrees of freedom is given by ndiff − 1. • When there is paired data and the parameter of interest is a mean of differences: – Estimate µdiff at the C% confidence level using a 1-sample ttt-interval with paired data. – Test H0: µdiff = 0 at the α significance level using a 1-sample ttt-test with paired data. • The one-sample t-interval and t-test with paired data require the sampling distribution for ¯xdiff to be nearly normal. For this reason, we must check that the following conditions are met. 1. Independence: Data should come from one random sample (with paired observations) or from a randomized matched pairs experiment. If sampling without replacement, check that the sample size is less than 10% of the population size. 2. Large sample or normal population: ndiff ≥ 30 or population of differences nearly normal. - If the number of differences is less than 30 and it is not known that the population of differences is nearly normal, we argue that the population of differences could be nearly normal if there is no strong skew or outliers in the sample differences. • When the conditions are met, we calculate the confidence interval and the test statistic as we did in the previous section. Here, our data is a list of differences
. Confidence interval: point estimate ± t × SE of estimate Test statistic: T = point estimate − null value SE of estimate Here the point estimate is the mean of sample differences: ¯xdiff. The SE of estimate is the SE of a mean of sample differences: sdiff√ ndiff. The degrees of freedom is given by df = ndiff − 1. 7.2. INFERENCE WITH PAIRED DATA 393 Exercises 7.15 Air quality. Air quality measurements were collected in a random sample of 25 country capitals in 2013, and then again in the same cities in 2014. We would like to use these data to compare average air quality between the two years. Should we use a paired or non-paired test? Explain your reasoning. 7.16 True / False: paired. Determine if the following statements are true or false. If false, explain. (a) In a paired analysis we first take the difference of each pair of observations, and then we do inference on these differences. (b) Two data sets of different sizes cannot be analyzed as paired data. (c) Consider two sets of data that are paired with each other. Each observation in one data set has a natural correspondence with exactly one observation from the other data set. (d) Consider two sets of data that are paired with each other. Each observation in one data set is subtracted from the average of the other data set’s observations. 7.17 Paired or not? Part I. In each of the following scenarios, determine if the data are paired. (a) Compare pre- (beginning of semester) and post-test (end of semester) scores of students. (b) Assess gender-related salary gap by comparing salaries of randomly sampled men and women. (c) Compare artery thicknesses at the beginning of a study and after 2 years of taking Vitamin E for the same group of patients. (d) Assess effectiveness of a diet regimen by comparing the before and after weights of subjects. 7.18 Paired or not? Part II. In each of the following scenarios, determine if the data are paired. (a) We would like to know if Intel’s stock and Southwest Airlines’ stock have similar rates of return.
To find out, we take a random sample of 50 days, and record Intel’s and Southwest’s stock on those same days. (b) We randomly sample 50 items from Target stores and note the price for each. Then we visit Walmart and collect the price for each of those same 50 items. (c) A school board would like to determine whether there is a difference in average SAT scores for students at one high school versus another high school in the district. To check, they take a simple random sample of 100 students from each high school. 7.19 Global warming, Part I. Let’s consider a limited set of climate data, examining temperature differences in 1948 vs 2018. We sampled 197 locations from the National Oceanic and Atmospheric Administration’s (NOAA) historical data, where the data was available for both years of interest. We want to know: were there more days with temperatures exceeding 90°F in 2018 or in 1948?12 The difference in number of days exceeding 90°F (number of days in 2018 - number of days in 1948) was calculated for each of the 197 locations. The average of these differences was 2.9 days with a standard deviation of 17.2 days. We are interested in determining whether these data provide strong evidence that there were more days in 2018 that exceeded 90°F from NOAA’s weather stations. (a) Is there a relationship between the observations collected in 1948 and 2018? Or are the observations in the two groups independent? Explain. (b) Write hypotheses for this research in symbols and in words. (c) Check the conditions required to complete this test. A histogram of the differences is given to the right. (d) Calculate the test statistic and find the p-value. (e) Use α = 0.05 to evaluate the test, and interpret your conclusion in context. (f) What type of error might we have made? Explain in context what the error means. (g) Based on the results of this hypothesis test, would you expect a confidence interval for the average difference between the number of days exceeding 90°F from 1948 and 2018 to include 0? Explain your reasoning. 12NOAA, www.ncdc.noaa.gov/cdo-web/datasets, April 24, 2019.
394 CHAPTER 7. INFERENCE FOR NUMERICAL DATA 7.20 High School and Beyond, Part I. The National Center of Education Statistics conducted a survey of high school seniors, collecting test data on reading, writing, and several other subjects. Here we examine a simple random sample of 200 students from this survey. Side-by-side box plots of reading and writing scores as well as a histogram of the differences in scores are shown below. (a) Is there a clear difference in the average reading and writing scores? (b) Are the reading and writing scores of each student independent of each other? (c) Create hypotheses appropriate for the following research question: is there an evident difference in the average scores of students in the reading and writing exam? (d) Check the conditions required to complete this test. (e) The average observed difference in scores is ¯xread−write = −0.545, and the standard deviation of the differences is 8.887 points. Do these data provide convincing evidence of a difference between the average scores on the two exams? (f) What type of error might we have made? Explain what the error means in the context of the application. (g) Based on the results of this hypothesis test, would you expect a confidence interval for the average difference between the reading and writing scores to include 0? Explain your reasoning. 7.21 Global warming, Part II. We considered the change in the number of days exceeding 90°F from 1948 and 2018 at 197 randomly sampled locations from the NOAA database in Exercise 7.19. The mean and standard deviation of the reported differences are 2.9 days and 17.2 days. Calculate a 90% confidence interval for the average difference between number of days exceeding 90°F between 1948 and 2018. Does the confidence interval provide convincing evidence that there were more days exceeding 90°F in 2018 than in 1948 at NOAA stations? Include all steps of the Identify, Choose, Check, Calculate, Conclude framework. 7.22 High school and beyond, Part II. We considered the differences between the reading and writing
scores of a random sample of 200 students who took the High School and Beyond Survey in Exercise 7.20. The mean and standard deviation of the differences are ¯xread−write = −0.545 and 8.887 points. (a) Calculate a 95% confidence interval for the average difference between the reading and writing scores of all students. (b) Interpret this interval in context. (c) Does the confidence interval provide convincing evidence that there is a real difference in the average scores? Explain. 7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 395 7.3 Inference for the difference of two means Often times we wish to compare two groups to each other to answer questions such as the following: • Does treatment using embryonic stem cells (ESCs) help improve heart function following a heart attack? • Is there convincing evidence that newborns from mothers who smoke have a different average birth weight than newborns from mothers who don’t smoke? • Is there statistically significant evidence that one variation of an exam is harder than another variation? • Are faculty willing to pay someone named “John” more than someone named “Jennifer”? If so, how much more? Learning objectives 1. Determine when it is appropriate to use a one-sample t-procedure versus a two-sample t-procedure. 2. State and verify whether or not the conditions for inference on the difference of two means using the t-distribution are met. 3. Be able to use a calculator or other software to find the degrees of freedom associated with a two-sample t-procedure. 4. Carry out a complete confidence interval procedure for the difference of two means. 5. Carry out a complete hypothesis test for the difference of two means. 396 CHAPTER 7. INFERENCE FOR NUMERICAL DATA 7.3.1 Sampling distribution for the difference of two means (review) In this section we are interested in comparing the means of two independent groups. We want to estimate how far apart µ1 and µ2 are and test whether their difference is zero or not.
Before we perform inference for the difference of means, let’s review the sampling distribution for ¯x1 − ¯x2, which will be used as the point estimate for µ1 − µ2. We know from Section 4.3 that when the independence condition is satisfied, the sampling distribution for ¯x1 − ¯x2 is centered on µ1 − µ2 and has standard deviation of σ¯x1−¯x2 = √(σ1²/n1 + σ2²/n2) When the individual population standard deviations are unknown, we estimate the standard deviation of ¯x1 − ¯x2 using the Standard Error, abbreviated SE, by plugging in the sample standard deviations as our best guesses of the population standard deviations: SE¯x1−¯x2 = √(s1²/n1 + s2²/n2) The difference of two sample means ¯x1 − ¯x2 follows a nearly normal distribution when certain conditions are met. First, the sampling distribution for each sample mean must be nearly normal, and second, the observations must be independent, both within and between groups. Under these two conditions, the sampling distribution for ¯x1 − ¯x2 may be well approximated using the normal model. 7.3.2 Checking conditions for inference on a difference of means When comparing two means, we carry out inference on a difference of means, µ1 − µ2. We will use the t-distribution just as we did when carrying out inference on a single mean. In order to use the t-distribution, we need the sampling distribution for ¯x1 − ¯x2 to be nearly normal. We check whether this assumption is reasonable by verifying the following conditions. Independence. Observations can be considered independent when the data are collected from two independent random samples or from a randomized experiment with two treatments. Randomly assigning subjects to treatments is equivalent to randomly assigning treatments to subjects. When sampling without replacement, the observations can be considered independent when the sample size is less than 10% of the population size for both samples. Sample size / nearly normal population. Each population distribution should be nearly normal or each sample size should be at least 30. As before, if the sample sizes are small and the population distributions are not known to be nearly normal, we look at the data for excessive skew or outliers. If we do not find excessive
skew or outliers in either group, the assumption that the populations are nearly normal is typically considered reasonable. 7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 397 7.3.3 Confidence intervals for a difference of means What’s in a name? Are employers more likely to offer interviews or higher pay to prospective employees when the name on a resume suggests the candidate is a man versus a woman? This is a challenging question to tackle, because employers are influenced by many aspects of a resume. Thinking back to Chapter 1 on data collection, we could imagine a host of confounding factors associated with name and gender. How could we possibly isolate just the factor of name? We would need an experiment in which name was the only variable and everything else was held constant. Researchers at Yale carried out precisely this experiment. Their results were published in the Proceedings of the National Academy of Sciences (PNAS). The researchers sent out resumes to faculty at academic institutions for a lab manager position. The resumes were identical, except that on half of them the applicant’s name was John and on the other half, the applicant’s name was Jennifer. They wanted to see if faculty, specifically faculty trained in conducting scientifically objective research, held implicit gender biases. Unlike in the matched pairs scenario, each faculty member received only one resume. We are interested in comparing the mean salary offered to John relative to the mean salary offered to Jennifer. Instead of taking the average of a set of differences, we find the average of each group separately and take their difference. Let ¯x1 : mean salary offered to John ¯x2 : mean salary offered to Jennifer We will use ¯x1 − ¯x2 as our point estimate for µ1 − µ2. The data is given in the table below.

Name       n    ¯x        s
John       63   $30,238   $5567
Jennifer   64   $26,508   $7247

We can calculate the difference as ¯x1 − ¯x2 = 30,238 − 26,508 = 3730. EXAMPLE 7.28 Interpret the point estimate 3730. Why might we want to construct a confidence interval? The average salary offered
to John was $3,730 higher than the average salary offered to Jennifer. Because there is randomness in which faculty ended up in the John group and which faculty ended up in the Jennifer group, we want to see if the difference of $3,730 is beyond what could be expected by random variation. In order to answer this, we will first want to calculate the SE for the difference of sample means. EXAMPLE 7.29 Calculate and interpret the SE for the difference of sample means. SE = √(s1²/n1 + s2²/n2) = √((5567)²/63 + (7247)²/64) = 1151 Using samples of size n1 = 63 and n2 = 64, the typical error when using ¯x1 − ¯x2 to estimate µ1 − µ2, the real difference in mean salary that the faculty would offer John versus Jennifer, is $1151. We see that the difference of sample means of $3,730 is more than 3 SE above 0, which makes us think that the difference being 0 is unreasonable. We would like to construct a 95% confidence interval for the theoretical difference in mean salary that would be offered to John versus Jennifer. For this, we need the degrees of freedom associated with a two-sample t-interval. 398 CHAPTER 7. INFERENCE FOR NUMERICAL DATA For the one-sample t-procedure, the degrees of freedom is given by the simple expression n − 1, where n is the sample size. For the two-sample t-procedures, however, there is a complex formula for calculating the degrees of freedom, which is based on the two sample sizes and the two sample standard deviations. In practice, we find the degrees of freedom using software or a calculator (see Section 7.3.4). If this is not possible, the alternative is to use the smaller of n1 − 1 and n2 − 1. DEGREES OF FREEDOM FOR TWO-SAMPLE T-PROCEDURES Use statistical software or a calculator to compute the degrees of freedom for two-sample t-procedures. If this is not possible, use the smaller of n1 − 1 and n2 − 1.
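The degrees of freedom that calculators report come from the Welch approximation. The Python sketch below (assuming scipy) illustrates that formula and the resulting 95% interval for the salary data. Because it starts from the rounded summary statistics in the table, its standard error and interval differ slightly from the hand calculation in the text and land near the technology-based interval quoted in the footnote.

```python
from math import sqrt
from scipy import stats

# Summary statistics from the salary study: group 1 = John, group 2 = Jennifer
n1, x1, s1 = 63, 30238, 5567
n2, x2, s2 = 64, 26508, 7247

se = sqrt(s1**2 / n1 + s2**2 / n2)   # SE of x1_bar - x2_bar

# Welch approximation for the degrees of freedom
v1, v2 = s1**2 / n1, s2**2 / n2
df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))   # about 118

t_star = stats.t.ppf(0.975, df)      # critical value for 95% confidence
diff = x1 - x2                       # point estimate, 3730
print(round(df, 1), round(diff - t_star * se), round(diff + t_star * se))
```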
EXAMPLE 7.30 Verify that conditions are met for a two-sample t-test. Then, construct the 95% confidence interval for the difference of means. We noted previously that this is an experiment and that the two treatments (name Jennifer and name John) were randomly assigned. Also, both sample sizes are well over 30, so the distribution of ¯x1 − ¯x2 is nearly normal. Using a calculator, we find that df = 118.1. Since 118.1 is not on the t-table, we round the degrees of freedom down to 100.13 Using a t-table at row df = 100 with 95% confidence, we get a t* = 1.984. We calculate the confidence interval as follows. point estimate ± t* × SE of estimate 3730 ± 1.984 × 1151 = 3730 ± 2284 = (1446, 6014) Based on this interval, we are 95% confident that the true difference in mean salary that these faculty would offer John versus Jennifer is between $1,446 and $6,014. That is, we are 95% confident that the mean salary these faculty would offer John for a lab manager position is between $1,446 and $6,014 more than the mean salary they would offer Jennifer for the position. The results of these studies and others like it are alarming and disturbing.14 One aspect that makes this bias so difficult to address is that the experiment, as well-designed as it was, cannot send us much signal about which faculty are discriminating. Each faculty member received only one of the resumes. A faculty member that offered “Jennifer” a very low salary may have also offered “John” a very low salary. We might imagine an experiment in which each faculty member received both resumes, so that we could compare how much they would offer a Jennifer versus a John. However, the matched pairs scenario is clearly not possible in this case, because what makes the experiment work is that the resumes are exactly the same except for the name. An employer would notice something fishy if they received two identical resumes. It is only possible to say that overall, the faculty were willing to offer John more money for the lab manager position than Jennifer.
Finding proof of bias for individual cases is a persistent challenge in enforcing anti-discrimination laws. 13Using technology, we get a more precise interval, based on 118.1 df: (1461, 5999). 14A similar study sent out identical resumes with different names to investigate the importance of perceived race. Resumes with a name commonly perceived to be for a White person (e.g. Emily) were 50% more likely to receive a callback than the same resume with a name commonly perceived to be for a Black person (e.g. Lakisha). More information is given in Appendix B – see the resume data set. 7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 399 CONSTRUCTING A CONFIDENCE INTERVAL FOR THE DIFFERENCE OF TWO MEANS To carry out a complete confidence interval procedure to estimate the difference of two means µ1 − µ2, Identify: Identify the parameter and the confidence level, C%. The parameter will be a difference of means, e.g. the true difference in mean cholesterol reduction (mean treatment A − mean treatment B). Choose: Choose the appropriate interval procedure and identify it by name. To estimate the difference of means we use a 2-sample t-interval. Check: Check conditions for the sampling distribution for ¯x1 − ¯x2 to be nearly normal. 1. Independence: Data come from 2 independent random samples or from a randomized experiment with 2 treatments. When sampling without replacement, check that the sample size is less than 10% of the population size for each sample. 2. Large samples or normal populations: n1 ≥ 30 and n2 ≥ 30 or both population distributions are nearly normal. - If the sample sizes are less than 30 and the population distributions are unknown, check for strong skew or outliers in either data set. If neither is found, the condition that both population distributions are nearly normal is considered reasonable. Calculate: Calculate the confidence interval and record it in interval form. point estimate ± t* × SE of estimate, df: use calculator or other technology point estimate: the difference of sample means ¯x1 − ¯x2 SE of estimate: √(s1²/n1 + s2²/n2) t*: use a t-table at row df and confidence level C% ( , )
Conclude: Interpret the interval and, if applicable, draw a conclusion in context. We are C% confident that the true difference in mean [...] is between ___ and ___. If applicable, draw a conclusion based on whether the interval is entirely above, entirely below, or contains the value 0. 400 CHAPTER 7. INFERENCE FOR NUMERICAL DATA EXAMPLE 7.31 An instructor decided to run two slight variations of the same exam. Prior to passing out the exams, she shuffled the exams together to ensure each student received a random version. Summary statistics for how students performed on these two exams are shown in the table below. Anticipating complaints from students who took Version B, she would like to evaluate whether the difference observed in the groups is so large that it provides convincing evidence that Version B was more difficult (on average) than Version A. Use a 95% confidence interval to estimate the difference in average score: version A − version B.

Version   n    ¯x     s    min   max
A         30   79.4   14   45    100
B         30   74.1   20   32    100

Identify: The parameter we want to estimate is µ1 − µ2, which is the true average score under Version A − the true average score under Version B. We will estimate this parameter at the 95% confidence level. Choose: Because we are comparing two means, we will use a 2-sample t-interval. Check: The data were collected from a randomized experiment with two treatments: Version A and Version B of the test. The 10% condition does not need to be checked here since we are not sampling from a population. There were 30 students in each group, so the condition that both group sizes are at least 30 is met. Calculate: We will calculate the confidence interval as follows. point estimate ± t* × SE of estimate The point estimate is the difference of sample means: ¯x1 − ¯x2 = 79.4 − 74.1 = 5.3 The SE of a difference of sample means is: √(s1²/n1 + s2²/n2) = √(14²/30 + 20²/30) = 4.46
In order to find the critical value t*, we must first find the degrees of freedom. Using a calculator, we find df = 51.9. We round down to 50, and using a t-table at row df = 50 and confidence level 95%, we get t* = 2.009. The 95% confidence interval is given by: (79.4 − 74.1) ± 2.009 × √(14²/30 + 20²/30), df = 51.9, which gives 5.3 ± 2.009 × 4.46 = (−3.66, 14.26) Conclude: We are 95% confident that the true difference in average score between Version A and Version B is between −3.66 and 14.26 points. Because the interval contains both positive and negative values, the data do not convincingly show that one exam version is more difficult than the other, and the teacher should not be convinced that she should add points to the Version B exam scores. 7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 401 7.3.4 Technology: the 2-sample t-interval TI-83/84: 2-SAMPLE T-INTERVAL Use STAT, TESTS, 2-SampTInt. 1. Choose STAT. 2. Right arrow to TESTS. 3. Down arrow and choose 0:2-SampTInt. 4. Choose Data if you have all the data or Stats if you have the means and standard deviations. • If you choose Data, let List1 be L1 or the list that contains sample 1 and let List2 be L2 or the list that contains sample 2 (don’t forget to enter the data!). Let Freq1 and Freq2 be 1. • If you choose Stats, enter the mean, SD, and sample size for sample 1 and for sample 2. 5. Let C-Level be the desired confidence level and let Pooled be No. 6. Choose Calculate and hit ENTER, which returns: ( , ): the confidence interval, df: degrees of freedom, ¯x1: mean of sample 1, ¯x2: mean of sample 2, Sx1: SD of sample 1, Sx2: SD of sample 2, n1: size of sample 1, n2: size of sample 2. CASIO FX-9750GII: 2-SAMPLE T-INTERVAL
1. Navigate to STAT (MENU button, then hit the 2 button or select STAT). 2. If necessary, enter the data into a list. 3. Choose the INTR option (F4 button). 4. Choose the t option (F2 button). 5. Choose the 2-S option (F2 button). 6. Choose either the Var option (F2) or enter the data in using the List option. 7. Specify the test details: • Confidence level of interest for C-Level. • If using the Var option, enter the summary statistics for each group. If using List, specify the lists and leave Freq values at 1. • Choose whether to pool the data or not. 8. Hit the EXE button, which returns: Left, Right: ends of the confidence interval, df: degrees of freedom, ¯x1, ¯x2: sample means, sx1, sx2: sample standard deviations, n1, n2: sample sizes. 402 CHAPTER 7. INFERENCE FOR NUMERICAL DATA GUIDED PRACTICE 7.32 Use the data below and a calculator to find a 95% confidence interval for the difference in average scores between Version A and Version B of the exam from the previous example.15

Version   n    ¯x     s    min   max
A         30   79.4   14   45    100
B         30   74.1   20   32    100

7.3.5 Hypothesis testing for the difference of two means Four cases from a data set called ncbirths, which represents mothers and their newborns in North Carolina, are shown in Figure 7.15. We are particularly interested in two variables: weight and smoke. The weight variable represents the weights of the newborns and the smoke variable describes which mothers smoked during pregnancy. We would like to know, is there convincing evidence that newborns from mothers who smoke have a different average birth weight than newborns from mothers who don’t smoke? The smoking group includes a random sample of 50 cases and the nonsmoking group contains a random sample of 100 cases, represented in Figure 7.16.

       fAge   mAge   weeks   weight   sex      smoke
1      NA     13     37      5.00     female   nonsmoker
2      NA     14     36      5.88     female   nonsmoker
3      19     15     41      8.13     male     smoker
...    ...    ...    ...     ...      ...      ...
150    45     50     36      9.25     female   nonsmoker

Figure 7.15
7.3.5 Hypothesis testing for the difference of two means

Four cases from a data set called ncbirths, which represents mothers and their newborns in North Carolina, are shown in Figure 7.15. We are particularly interested in two variables: weight and smoke. The weight variable represents the weights of the newborns and the smoke variable describes which mothers smoked during pregnancy. We would like to know, is there convincing evidence that newborns from mothers who smoke have a different average birth weight than newborns from mothers who don't smoke? The smoking group includes a random sample of 50 cases and the nonsmoking group contains a random sample of 100 cases, represented in Figure 7.16.

      fAge   mAge   weeks   weight   sex      smoke
1     NA     13     37      5.00     female   nonsmoker
2     NA     14     36      5.88     female   nonsmoker
3     19     15     41      8.13     male     smoker
...   ...    ...    ...     ...      ...      ...
150   45     50     36      9.25     female   nonsmoker

Figure 7.15: Four cases from the ncbirths data set. The value "NA", shown for the first two entries of the first variable, indicates pieces of data that are missing.

Figure 7.16: The top panel represents birth weights for infants whose mothers smoked. The bottom panel represents the birth weights for infants whose mothers did not smoke. The distributions exhibit moderate-to-strong and strong skew, respectively. (The two histograms of newborn weights, in pounds, are not reproduced here.)

15 Choose 2-SampTInt or equivalent. Because we have the summary statistics rather than all of the data, choose Stats. Let ¯x1 = 79.4, Sx1 = 14, n1 = 30, ¯x2 = 74.1, Sx2 = 20, and n2 = 30. The interval is (−3.6, 14.2) with df = 51.9.

7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 403

EXAMPLE 7.33
Set up appropriate hypotheses to evaluate whether there is a relationship between a mother smoking and average birth weight.

Let µ1 represent the mean for mothers that did smoke and µ2 represent the mean for mothers that did not smoke. We will take the difference as: smoker − nonsmoker. The null hypothesis represents the case of no difference between the groups.

H0: µ1 − µ2 = 0. There is no difference in average birth weight for newborns from mothers who did and did not smoke.
HA: µ1 − µ2 ≠ 0. There is some difference in average newborn weights from mothers who did and did not smoke.

We check the two conditions necessary to apply the t-distribution to the difference in sample means. (1) Because the data come from a sample, we need there to be two independent random samples. In fact, there was only one random sample, but it is reasonable that the two groups here are independent of each other, so we will consider the assumption of independence reasonable. (2) The sample sizes of 50 and 100 are well over 30, so we do not worry about the distributions of the original populations. Since both conditions are satisfied, the difference in sample means may be modeled using a t-distribution.
            mean   st. dev.   samp. size
smoker      6.78   1.43       50
nonsmoker   7.18   1.60       100

Figure 7.17: Summary statistics for the ncbirths data set.

EXAMPLE 7.34
We will use the summary statistics in Figure 7.17 for this exercise.
(a) What is the point estimate of the population difference, µ1 − µ2?
(b) Compute the standard error of the point estimate from part (a).

(a) The point estimate is the difference of sample means: ¯x1 − ¯x2 = 6.78 − 7.18 = −0.40 pounds.
(b) The standard error for a difference of sample means is calculated analogously to the standard deviation for a difference of sample means.

SE = √(s1²/n1 + s2²/n2) = √(1.43²/50 + 1.60²/100) = 0.26 pounds

EXAMPLE 7.35
Compute the test statistic.

We have already found the point estimate and the SE of estimate. The null hypothesis is that the two means are equal, or that their difference equals 0. The null value for the difference, therefore, is 0. We now have everything we need to compute the test statistic.

T = (point estimate − null value) / (SE of estimate) = (−0.40 − 0) / 0.26 = −1.54

404 CHAPTER 7. INFERENCE FOR NUMERICAL DATA

EXAMPLE 7.36
Draw a picture to represent the p-value for this hypothesis test, then calculate the p-value.

To depict the p-value, we draw the distribution of the point estimate as though H0 were true and shade areas representing at least as much evidence against H0 as what was observed. Both tails are shaded because it is a two-sided test. We saw previously that the degrees of freedom can be found using software or using the smaller of n1 − 1 and n2 − 1. If we use 50 − 1 = 49 degrees of freedom, we find that the area in the lower tail is 0.065. The p-value is twice this, or 2 × 0.065 = 0.130. See Section 7.3.6 for a shortcut to compute the degrees of freedom and p-value on a calculator.
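As a software cross-check of the calculation above, the sketch below runs an unpooled (Welch) 2-sample t-test directly from the summary statistics in Figure 7.17. SciPy is an assumption here, not something the text uses; note that software computes its own df, which gives a p-value slightly below the 0.130 obtained with the conservative df = 49.

```python
# Sketch: smoker vs. nonsmoker birth-weight test from summary statistics.
# SciPy is an assumption; equal_var=False requests the unpooled (Welch) test.
from scipy import stats

result = stats.ttest_ind_from_stats(
    mean1=6.78, std1=1.43, nobs1=50,    # smokers
    mean2=7.18, std2=1.60, nobs2=100,   # nonsmokers
    equal_var=False)                    # two-sided by default

print(result.statistic, result.pvalue)
# t is about -1.55; the p-value is about 0.12 with the software df,
# in line with the 0.130 found above using df = 49.
```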
EXAMPLE 7.37
What can we conclude from this p-value? Use a significance level of α = 0.05.

This p-value of 0.130 is larger than the significance level of 0.05, so we do not reject the null hypothesis. There is not sufficient evidence to say there is a difference in average birth weight of newborns from North Carolina mothers who did smoke during pregnancy and newborns from North Carolina mothers who did not smoke during pregnancy.

EXAMPLE 7.38
Does the conclusion in Example 7.37 mean that smoking and average birth weight are unrelated?

Not necessarily. It is possible that there is some difference but that we did not detect it. The result must be considered in light of other evidence and research. In fact, larger data sets do tend to show that women who smoke during pregnancy have smaller newborns.

GUIDED PRACTICE 7.39
If we made an error in our conclusion, which type of error could we have made: Type I or Type II?16

GUIDED PRACTICE 7.40
If we made a Type II Error and there is a difference, what could we have done differently in data collection to be more likely to detect the difference?17

16 Since we did not reject H0, it is possible that we made a Type II Error. It is possible that there is some difference but that we did not detect it.
17 We could have collected more data. If the sample sizes are larger, we tend to have a better shot at finding a difference if one exists. In other words, increasing the sample size increases the power of the test.

(Figure: the null distribution of the difference in sample means, centered at 0, with both tails beyond the observed difference shaded.)

7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 405

HYPOTHESIS TEST FOR THE DIFFERENCE OF TWO MEANS
To carry out a complete hypothesis test to test the claim that two means µ1 and µ2 are equal to each other,

Identify: Identify the hypotheses and the significance level, α.
H0: µ1 = µ2
HA: µ1 ≠ µ2; HA: µ1 > µ2; or HA: µ1 < µ2

Choose: Choose the appropriate test procedure and identify it by name. To test hypotheses about the difference of means we use a 2-sample t-test.
Check: Check conditions for the sampling distribution for ¯x1 − ¯x2 to be nearly normal.
1. Independence: Data come from 2 independent random samples or from a randomized experiment with 2 treatments. When sampling without replacement, check that the sample size is less than 10% of the population size for each sample.
2. Large samples or normal populations: n1 ≥ 30 and n2 ≥ 30 or both population distributions are nearly normal.
- If the sample sizes are less than 30 and the population distributions are unknown, check for excessive skew or outliers in either data set. If neither is found, the condition that both population distributions are nearly normal is considered reasonable.

Calculate: Calculate the t-statistic, df, and p-value.
T = (point estimate − null value) / (SE of estimate)
df: use calculator or other technology
point estimate: the difference of sample means ¯x1 − ¯x2
SE of estimate: √(s1²/n1 + s2²/n2)
p-value = (based on the t-statistic, the df, and the direction of HA)

Conclude: Compare the p-value to α, and draw a conclusion in context.
If the p-value is < α, reject H0; there is sufficient evidence that [HA in context].
If the p-value is > α, do not reject H0; there is not sufficient evidence that [HA in context].

406 CHAPTER 7. INFERENCE FOR NUMERICAL DATA

EXAMPLE 7.41
Do embryonic stem cells (ESCs) help improve heart function following a heart attack? The following table and figure summarize results from an experiment to test ESCs in sheep that had a heart attack.

          n   ¯x      s
ESCs      9   3.50    5.17
control   9   -4.33   2.76

Each of these sheep was randomly assigned to the ESC or control group, and the change in their hearts' pumping capacity was measured. A positive value generally corresponds to increased pumping capacity, which suggests a stronger recovery. The sample data are also graphed. Use the given information and an appropriate statistical test to answer the research question.

Identify: Let µ1 be the mean percent change for sheep that receive ESC and let µ2 be the mean percent change for sheep in the control group. We will use an α = 0.05 significance level.
H0: µ1 − µ2 = 0. The stem cells do not improve heart pumping function.
HA: µ1 − µ2 > 0. The stem cells do improve heart pumping function.

Choose: Because we are hypothesizing about a difference of means we choose the 2-sample t-test.

Check: The data come from a randomized experiment with two treatment groups: ESC and control. Because this is an experiment, we do not need to check the 10% condition. The group sizes are small, but the data show no excessive skew or outliers, so the assumption that the population distributions are nearly normal is reasonable.

Calculate: We will calculate the t-statistic and the p-value.
T = (point estimate − null value) / (SE of estimate)
The point estimate is the difference of sample means: ¯x1 − ¯x2 = 3.50 − (−4.33) = 7.83
The SE of a difference of sample means: √(s1²/n1 + s2²/n2) = √(5.17²/9 + 2.76²/9) = 1.95
T = (3.50 − (−4.33) − 0) / √(5.17²/9 + 2.76²/9) = (7.83 − 0) / 1.95 = 4.01
Because HA is an upper tail test ( > ), the p-value corresponds to the area to the right of t = 4.01 with the appropriate degrees of freedom. Using a calculator, we find df = 12.2 and p-value = 8.4 × 10−4 = 0.00084.

Conclude: The p-value is much less than 0.05, so we reject the null hypothesis. There is sufficient evidence that embryonic stem cells improve the heart's pumping function in sheep that have suffered a heart attack.

(Histograms of the percent change in heart pumping function for the embryonic stem cell transplant group and the control (no treatment) group are not reproduced here.)
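For readers who prefer to verify the arithmetic in software, the sketch below reproduces this one-sided test from the summary statistics. It is illustrative only: Python with NumPy and SciPy is an assumption, while the text obtains the same values from a handheld calculator.

```python
# Sketch: upper-tail 2-sample t-test for the ESC example, from summary stats.
# Python/NumPy/SciPy are assumptions; the text uses a calculator instead.
import numpy as np
from scipy import stats

x1, s1, n1 = 3.50, 5.17, 9    # ESC group
x2, s2, n2 = -4.33, 2.76, 9   # control group

se = np.sqrt(s1**2 / n1 + s2**2 / n2)
t = (x1 - x2 - 0) / se                               # about 4.01
df = se**4 / ((s1**2 / n1)**2 / (n1 - 1) +
              (s2**2 / n2)**2 / (n2 - 1))            # about 12.2
p_value = stats.t.sf(t, df)                          # upper-tail area, ~0.0008
print(t, df, p_value)
```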
7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 407

7.3.6 Technology: the 2-sample t-test

TI-83/84: 2-SAMPLE T-TEST
Use STAT, TESTS, 2-SampTTest.
1. Choose STAT.
2. Right arrow to TESTS.
3. Choose 4:2-SampTTest.
4. Choose Data if you have all the data or Stats if you have the means and standard deviations.
• If you choose Data, let List1 be L1 or the list that contains sample 1 and let List2 be L2 or the list that contains sample 2 (don't forget to enter the data!). Let Freq1 and Freq2 be 1.
• If you choose Stats, enter the mean, SD, and sample size for sample 1 and for sample 2.
5. Choose ≠, <, or > to correspond to HA.
6. Let Pooled be NO.
7. Choose Calculate and hit ENTER, which returns:
t: T-statistic
p: p-value
df: degrees of freedom
¯x1: mean of sample 1
¯x2: mean of sample 2
Sx1: SD of sample 1
Sx2: SD of sample 2
n1: size of sample 1
n2: size of sample 2

CASIO FX-9750GII: 2-SAMPLE T-TEST
1. Navigate to STAT (MENU button, then hit the 2 button or select STAT).
2. If necessary, enter the data into a list.
3. Choose the TEST option (F3 button).
4. Choose the t option (F2 button).
5. Choose the 2-S option (F2 button).
6. Choose either the Var option (F2) or enter the data in using the List option.
7. Specify the test details:
• Specify the sidedness of the test using the F1, F2, and F3 keys.
• If using the Var option, enter the summary statistics for each group. If using List, specify the lists and leave Freq values at 1.
• Choose whether to pool the data or not.
8. Hit the EXE button, which returns:
µ1 (≠, <, or >) µ2: alt. hypothesis
t: T-statistic
p: p-value
df: degrees of freedom
¯x1, ¯x2: sample means
sx1, sx2: sample standard deviations
n1, n2: sample sizes

408 CHAPTER 7. INFERENCE FOR NUMERICAL DATA

GUIDED PRACTICE 7.42
Use the data below and a calculator to find the test statistic and p-value for a one-sided test, testing whether there is evidence that embryonic stem cells (ESCs) help improve heart function for sheep that have experienced a heart attack.18
          n   ¯x      s
ESCs      9   3.50    5.17
control   9   -4.33   2.76

18 Choose 2-SampTTest or equivalent. Because we have the summary statistics rather than all of the data, choose Stats. Let ¯x1 = 3.50, Sx1 = 5.17, n1 = 9, ¯x2 = -4.33, Sx2 = 2.76, and n2 = 9. We get t = 4.01, and the p-value p = 8.4 × 10−4 = 0.00084. The degrees of freedom for the test are df = 12.2.

7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 409

Section summary
• This section introduced inference for a difference of means, which is distinct from inference for a mean of differences. To calculate a difference of means, ¯x1 − ¯x2, we first calculate the mean of each group, then we take the difference between those two numbers. To calculate a mean of differences, ¯xdiff, we first calculate all of the differences, then we find the mean of those differences.
• Inference for a difference of means is based on the t-distribution. The degrees of freedom are complicated to calculate by hand, so we rely on a calculator or other software to find them.19
• When there are two samples or treatments and the parameter of interest is a difference of means:
– Estimate µ1 − µ2 at the C% confidence level using a 2-sample t-interval.
– Test H0: µ1 − µ2 = 0 at the α significance level using a 2-sample t-test.
• The 2-sample t-test and t-interval require the sampling distribution for ¯x1 − ¯x2 to be nearly normal. For this reason we must check that the following conditions are met.
1. Independence: The data should come from 2 independent random samples or from a randomized experiment with 2 treatments. When sampling without replacement, check that the sample size is less than 10% of the population size for each sample.
2. Large samples or normal populations: n1 ≥ 30 and n2 ≥ 30 or both population distributions are nearly normal.
- If the sample sizes are less than 30 and it is not known that both population distributions are nearly normal, check for excessive skew or outliers in the data. If neither exists, the condition that both population distributions could be nearly normal is considered reasonable.
• When the conditions are met, we calculate the confidence interval and the test statistic as follows.
Confidence interval: point estimate ± t* × SE of estimate
Test statistic: T = (point estimate − null value) / (SE of estimate)
Here the point estimate is the difference of sample means: ¯x1 − ¯x2.
The SE of estimate is the SE of a difference of sample means: √(s1²/n1 + s2²/n2).
Find and record the df using a calculator or other software.

19 If this is not available, one can use df = min(n1 − 1, n2 − 1).

410 CHAPTER 7. INFERENCE FOR NUMERICAL DATA

Exercises

7.23 Diamonds, Part I. Prices of diamonds are determined by what is known as the 4 Cs: cut, clarity, color, and carat weight. The prices of diamonds go up as the carat weight increases, but the increase is not smooth. For example, the difference between the size of a 0.99 carat diamond and a 1 carat diamond is undetectable to the naked human eye, but the price of a 1 carat diamond tends to be much higher than the price of a 0.99 carat diamond. In this question we use two random samples of diamonds, 0.99 carats and 1 carat, each sample of size 23, and compare the average prices of the diamonds. In order to be able to compare equivalent units, we first divide the price for each diamond by 100 times its weight in carats. That is, for a 0.99 carat diamond, we divide the price by 99. For a 1 carat diamond, we divide the price by 100. The distributions and some sample statistics are shown below.20 Conduct a hypothesis test to evaluate if there is a difference between the average standardized prices of 0.99 and 1 carat diamonds. Include all steps of the Identify, Choose, Check, Calculate, Conclude framework.
              Mean     SD       n
0.99 carats   $44.51   $13.32   23
1 carat       $56.81   $16.13   23

7.24 Diamonds, Part II. In Exercise 7.23, we discussed diamond prices (standardized by weight) for diamonds with weights 0.99 carats and 1 carat. See the table for summary statistics, and then construct a 95% confidence interval for the difference in means between the standardized prices of 0.99 and 1 carat diamonds. Include all steps of the Identify, Choose, Check, Calculate, Conclude framework.

              Mean     SD       n
0.99 carats   $44.51   $13.32   23
1 carat       $56.81   $16.13   23

7.25 Chicken diet and weight, Part I. Chicken farming is a multi-billion dollar industry, and any methods that increase the growth rate of young chicks can reduce consumer costs while increasing company profits, possibly by millions of dollars. An experiment was conducted to measure and compare the effectiveness of various feed supplements on the growth rate of chickens. Newly hatched chicks were randomly allocated into six groups, and each group was given a different feed supplement. Below are some summary statistics from this data set along with box plots showing the distribution of weights by feed type.21

        casein   horsebean   linseed   meatmeal   soybean   sunflower
Mean    323.58   160.20      218.75    276.91     246.43    328.92
SD      64.43    38.63       52.24     64.90      54.13     48.84
n       12       10          12        11         14        12

(a) Describe the distributions of weights of chickens that were fed linseed and horsebean.
(b) Do these data provide strong evidence that the average weights of chickens that were fed linseed and horsebean are different? Use a 5% significance level.
(c) What type of error might we have committed? Explain.
(d) Would your conclusion change if we used α = 0.01?

20 H. Wickham. ggplot2: elegant graphics for data analysis. Springer New York, 2009.
21 Chicken Weights by Feed Type, from the datasets package in R.

(Box plots of point price, in dollars, for 0.99 carat and 1 carat diamonds, and of weight, in grams, by feed type, are not reproduced here.)

7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 411

7.26 Fuel efficiency of manual and automatic cars, Part I. Each year the US Environmental Protection Agency (EPA) releases fuel economy data on cars manufactured in that year. Below are summary statistics on fuel efficiency (in miles/gallon) from random samples of cars with manual and automatic transmissions. Do these data provide strong evidence of a difference between the average fuel efficiency of cars with manual and automatic transmissions in terms of their average city mileage?22

City MPG     Mean    SD     n
Automatic    16.12   3.58   26
Manual       19.85   4.51   26

7.27 Chicken diet and weight, Part II. Casein is a common weight gain supplement for humans. Does it have an effect on chickens? Using data provided in Exercise 7.25, test the hypothesis that the average weight of chickens that were fed casein is different than the average weight of chickens that were fed soybean. If your hypothesis test yields a statistically significant result, discuss whether or not the higher average weight of chickens can be attributed to the casein diet. Conditions for inference were checked in Exercise 7.25.

7.28 Fuel efficiency of manual and automatic cars, Part II. The table provides summary statistics on highway fuel economy of the same 52 cars from Exercise 7.26. Use these statistics to calculate a 98% confidence interval for the difference between average highway mileage of manual and automatic cars, and interpret this interval in the context of the data.23

Hwy MPG      Mean    SD     n
Automatic    22.92   5.29   26
Manual       27.88   5.01   26

22 U.S. Department of Energy, Fuel Economy Data, 2012 Datafile.
23 U.S. Department of Energy, Fuel Economy Data, 2012 Datafile.

(Box plots of city MPG and highway MPG for automatic and manual cars are not reproduced here.)

412 CHAPTER 7. INFERENCE FOR NUMERICAL DATA

7.29 Prison isolation experiment, Part I. Subjects from Central Prison in Raleigh, NC, volunteered for an experiment involving an "isolation" experience. The goal of the experiment was to find a treatment that reduces subjects' psychopathic deviant T scores. This score measures a person's need for control or their rebellion against control, and it is part of a commonly used mental health test called the Minnesota Multiphasic Personality Inventory (MMPI) test. The experiment had three treatment groups:
(1) Four hours of sensory restriction plus a 15 minute "therapeutic" tape advising that professional help is available.
(2) Four hours of sensory restriction plus a 15 minute "emotionally neutral" tape on training hunting dogs.
(3) Four hours of sensory restriction but no taped message.
Forty-two subjects were randomly assigned to these treatment groups, and an MMPI test was administered before and after the treatment. Distributions of the differences between pre and post treatment scores (pre - post) are shown below, along with some sample statistics. Use this information to independently test the effectiveness of each treatment. Make sure to clearly state your hypotheses, check conditions, and interpret results in the context of the data.24

        Tr 1   Tr 2   Tr 3
Mean    2.86   6.21   -3.21
SD      7.94   12.3   8.57
n       14     14     14

7.30 True / False: comparing means. Determine if the following statements are true or false, and explain your reasoning for statements you identify as false.
(a) As the degrees of freedom increases, the t-distribution approaches normality.
(b) We use a pooled standard error for calculating the standard error of the difference between means when sample sizes of groups are equal to each other.

24 Prison isolation experiment, stat.duke.edu/resources/datasets/prison-isolation.

(Histograms of the score differences for Treatment 1, Treatment 2, and Treatment 3 are not reproduced here.)

7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 413

Chapter highlights

We've reviewed a wide set of inference procedures over the last 3 chapters. Let's revisit each and discuss the similarities and differences among them. The following confidence intervals and tests are structurally the same – they all involve inference on a population parameter, where that parameter is a proportion, a difference of proportions, a mean, a mean of differences, or a difference of means. • 1-proportion z-test/interval • 2-proportion z-test/inter
val • 1-sample t-test/interval • 1-sample t-test/interval with paired data • 2-sample t-test/interval The above inferential procedures all involve a point estimate, a standard error of the estimate, and an assumption about the shape of the sampling distribution for the point estimate. From Chapter 6, the χ2 tests and their uses are as follows: • χ2 goodness of fit - compares a categorical variable to a known/fixed distribution. • χ2 test for homogeneity - compares a categorical variable across multiple groups. • χ2 test for independence - looks for association between two categorical variables. χ2 is a measure of overall deviation between observed values and expected values. These tests stand apart from the others because when using χ2 there is not a parameter of interest. For this reason there are no confidence intervals using χ2. Also, for χ2 tests, the hypotheses are usually written in words, because they are about the distribution of one or more categorical variables, not about a single parameter. While formulas and conditions vary, all of these procedures follow the same basic logic and process. • For a confidence interval, identify the parameter to be estimated and the confidence level. For a hypothesis test, identify the hypotheses to be tested and the significance level. • Choose the correct procedure. • Check that both conditions for its use are met. • Calculate the confidence interval or the test statistic and p-value, as well as the df if applicable. • Interpret the results and draw a conclusion based on the data. For a summary of these hypothesis test and confidence interval procedures (including one more that we will encounter in Section 8.4), see the Inference Guide in Appendix D.3. 414 CHAPTER 7. INFERENCE FOR NUMERICAL DATA Chapter exercises 7.31 Gaming and distracted eating, Part I. A group of researchers are interested in the possible effects of distracting stimuli during eating, such as an increase or decrease in the amount of food consumption. To test this hypothesis, they monitored food intake for a group of 44 patients who were randomized into two equal groups. The treatment group ate lunch while playing solitaire, and the control group ate lunch without any added distractions. Patients in the treatment group
ate 52.1 grams of biscuits, with a standard deviation of 45.1 grams, and patients in the control group ate 27.1 grams of biscuits, with a standard deviation of 26.4 grams. Do these data provide convincing evidence that the average food intake (measured in amount of biscuits consumed) is different for the patients in the treatment group? Assume that conditions for inference are satisfied.25

7.32 Gaming and distracted eating, Part II. The researchers from Exercise 7.31 also investigated the effects of being distracted by a game on how much people eat. The 22 patients in the treatment group who ate their lunch while playing solitaire were asked to do a serial-order recall of the food lunch items they ate. The average number of items recalled by the patients in this group was 4.9, with a standard deviation of 1.8. The average number of items recalled by the patients in the control group (no distraction) was 6.1, with a standard deviation of 1.8. Do these data provide strong evidence that the average number of food items recalled by the patients in the treatment and control groups are different?

7.33 Sample size and pairing. Determine if the following statement is true or false, and if false, explain your reasoning: If comparing means of two groups with equal sample sizes, always use a paired test.

7.34 College credits. A college counselor is interested in estimating how many credits a student typically enrolls in each semester. The counselor decides to randomly sample 100 students by using the registrar's database of students. The histogram below shows the distribution of the number of credits taken by these students. Sample statistics for this distribution are also provided.

Min   Q1   Median   Mean    SD     Q3   Max
8     13   14       13.65   1.91   15   18

(a) What is the point estimate for the average number of credits taken per semester by students at this college? What about the median?
(b) What is the point estimate for the standard deviation of the number of credits taken per semester by students at this college? What about the IQR?
(c) Is a load of 16 credits unusually high for this college? What about 18 credits? Explain your reasoning.
(d) The college counselor takes another random sample of 100 students and this time finds a sample mean of 14.02 units. Should she be surprised that this sample statistic is slightly different than the one from the original sample? Explain your reasoning.
(e) The sample means given above are point estimates for the mean number of credits taken by all students at that college. What measures do we use to quantify the variability of this estimate? Compute this quantity using the data from the original sample.

25 R.E. Oldham-Cooper et al. "Playing a computer game during lunch affects fullness, memory for lunch, and later snack intake". In: The American Journal of Clinical Nutrition 93.2 (2011), p. 308.

(A histogram of the number of credits is not reproduced here.)

7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 415

7.35 Hen eggs. The distribution of the number of eggs laid by a certain species of hen during their breeding period has a mean of 35 eggs with a standard deviation of 18.2. Suppose a group of researchers randomly samples 45 hens of this species, counts the number of eggs laid during their breeding period, and records the sample mean. They repeat this 1,000 times, and build a distribution of sample means.
(a) What is this distribution called?
(b) Would you expect the shape of this distribution to be symmetric, right skewed, or left skewed? Explain your reasoning.
(c) Calculate the variability of this distribution and state the appropriate term used to refer to this value.
(d) Suppose the researchers' budget is reduced and they are only able to collect random samples of 10 hens. The sample mean of the number of eggs is recorded, and we repeat this 1,000 times, and build a new distribution of sample means. How will the variability of this new distribution compare to the variability of the original distribution?

7.36 Forest management. Forest rangers wanted to better understand the rate of growth for younger trees in the park. They took measurements of a random sample of 50 young trees in 2009 and again measured those same trees in 2019. The data below summarize their measurements, where the heights are in feet:

      2009   2019   Differences
¯x    12.0   24.5   12.5
s     3.5    9.5    7.2
n     50     50     50

Construct a 99% confidence interval for the average growth of (what had been) younger trees in the park over 2009-2019.

7.37 Exclusive relationships. A survey conducted on a reasonably random sample of 203 undergraduates asked
, among many other questions, about the number of exclusive relationships these students have been in. The histogram below shows the distribution of the data from this sample. The sample average is 3.2 with a standard deviation of 1.97. Estimate the average number of exclusive relationships Duke students have been in using a 90% confidence interval and interpret this interval in context. Check any conditions required for inference, and note any assumptions you must make as you proceed with your calculations and conclusions. (A histogram of the number of exclusive relationships is not reproduced here.)

416 CHAPTER 7. INFERENCE FOR NUMERICAL DATA

7.38 Age at first marriage, Part I. The National Survey of Family Growth conducted by the Centers for Disease Control gathers information on family life, marriage and divorce, pregnancy, infertility, use of contraception, and men's and women's health. One of the variables collected on this survey is the age at first marriage. The histogram below shows the distribution of ages at first marriage of 5,534 randomly sampled women between 2006 and 2010. The average age at first marriage among these women is 23.44 with a standard deviation of 4.72.26 Estimate the average age at first marriage of women using a 95% confidence interval, and interpret this interval in context. Discuss any relevant assumptions.

7.39 Online communication. A study suggests that the average college student spends 10 hours per week communicating with others online. You believe that this is an underestimate and decide to collect your own sample for a hypothesis test. You randomly sample 60 students from your dorm and find that on average they spent 13.5 hours a week communicating with others online. A friend of yours, who offers to help you with the hypothesis test, comes up with the following set of hypotheses. Indicate any errors you see.
H0 : ¯x < 10 hours
HA : ¯x > 13.5 hours

7.40 Age at first marriage, Part II. Exercise 7.38 presents the results of a 2006 - 2010 survey showing that the average age of women at first marriage is 23.44. Suppose a social scientist thinks this value has changed since the survey was taken. Below is how she set up her hypotheses. Indicate any errors you see.
H0 : ¯x = 23.44 years old
HA : ¯x ≠ 23.44 years old
26 Centers for Disease Control and Prevention, National Survey of Family Growth, 2010.

(A histogram of age at first marriage is not reproduced here.)

7.3. INFERENCE FOR THE DIFFERENCE OF TWO MEANS 417

7.41 Friday the 13th, Part I. In the early 1990's, researchers in the UK collected data on traffic flow, number of shoppers, and traffic accident related emergency room admissions on Friday the 13th and the previous Friday, Friday the 6th. The histograms below show the distribution of number of cars passing by a specific intersection on Friday the 6th and Friday the 13th for many such date pairs. Also given are some sample statistics, where the difference is the number of cars on the 6th minus the number of cars on the 13th.27

        ¯x        s       n
6th     128,385   7,259   10
13th    126,550   7,664   10
Diff.   1,835     1,176   10

(a) Are there any underlying structures in these data that should be considered in an analysis? Explain.
(b) What are the hypotheses for evaluating whether the number of people out on Friday the 6th is different than the number out on Friday the 13th?
(c) Check conditions to carry out the hypothesis test from part (b).
(d) Calculate the test statistic and the p-value.
(e) What is the conclusion of the hypothesis test?
(f) Interpret the p-value in this context.
(g) What type of error might have been made in the conclusion of your test? Explain.

7.42 Friday the 13th, Part II. The Friday the 13th study reported in Exercise 7.41 also provides data on traffic accident related emergency room admissions. The distributions of these counts from Friday the 6th and Friday the 13th are shown below for six such paired dates along with summary statistics. You may assume that conditions for inference are met.

        Mean    SD     n
6th     7.5     3.33   6
13th    10.83   3.6    6
diff    -3.33   3.01   6

(a) Conduct a hypothesis test to evaluate if there is a difference between the average numbers of traffic accident related emergency room admissions between Friday the 6th and Friday the 13th.
(b) Calculate a 95% confidence interval for the difference between the average numbers of traffic accident related emergency room admissions between Friday the 6th and Friday the 13th.
(c) The conclusion of the original study states, "Friday 13th is unlucky for some. The risk of hospital admission as a result of a transport accident may be increased by as much as 52%. Staying at home is recommended." Do you agree with this statement? Explain your reasoning.

27 T.J. Scanlon et al. "Is Friday the 13th Bad For Your Health?" In: BMJ 307 (1993), pp. 1584–1586.

(Histograms of traffic counts and of accident counts for Friday the 6th, Friday the 13th, and their differences are not reproduced here.)

418

Chapter 8
Introduction to linear regression

8.1 Line fitting, residuals, and correlation
8.2 Fitting a line by least squares regression
8.3 Transformations for skewed data
8.4 Inference for the slope of a regression line

419

Linear regression is a very powerful statistical technique. Many people have some familiarity with regression just from reading the news, where graphs with straight lines are overlaid on scatterplots. Linear models can be used to see trends and to make predictions. For videos, slides, and other resources, please visit www.openintro.org/ahss

420 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION

8.1 Line fitting, residuals, and correlation

In this section, we investigate bivariate data. We examine criteria for identifying a linear model and introduce a new bivariate summary called correlation. We answer questions such as the following:
• How do we quantify the strength of the linear association between two numerical variables?
• What does it mean for two variables to have no association or to have a nonlinear association?
• Once we fit a model, how do we measure the error in the model's predictions?

Learning objectives
1. Distinguish between the data point y and the predicted value ˆy based on a model.
2. Calculate a residual and draw a residual plot.
3. Interpret the standard deviation of the residuals.
4. Interpret the correlation coefficient and estimate it from a scatterplot.
5. Know and apply the properties of the correlation coefficient.

8.1.1 Fitting a line to data

Requests from twelve separate buyers were simultaneously placed with a trading company to purchase Target Corporation stock (ticker TGT, April 26th, 2012). We let x be the number of stocks to purchase and y be the total cost. Because the cost is computed using a linear formula, the linear fit is perfect, and the equation for the line is: y = 5 + 57.49x. If we know the number of stocks purchased, we can determine the cost based on this linear equation with no error. Additionally, we can say that each additional share of the stock cost $57.49 and that there was a $5 fee for the transaction.

Figure 8.1: Total cost of a trade against number of shares purchased. (The scatterplot of total cost of the shares, in dollars, against number of Target Corporation stocks to purchase is not reproduced here.)

8.1. LINE FITTING, RESIDUALS, AND CORRELATION 421

Perfect linear relationships are unrealistic in almost any natural process. For example, if we took family income (x), this value would provide some useful information about how much financial support a college may offer a prospective student (y). However, the prediction would be far from perfect, since other factors play a role in financial support beyond a family's income. It is rare for all of the data to fall perfectly on a straight line. Instead, it's more common for data to appear as a cloud of points, such as those shown in Figure 8.2. In each case, the data fall around a straight line, even if none of the observations fall exactly on the line. The first plot shows a relatively strong downward linear trend, where the remaining variability in the data around the line is minor relative to the strength of the relationship between x and y. The second plot shows an upward trend that, while evident, is not as strong as the first. The last plot shows a very weak downward trend in the data, so slight we can hardly notice it. In each of these examples, we can consider how to draw a "best fit line". For instance, we might wonder, should we move the line up or down a little, or should we tilt it more or less? As we move forward in this chapter, we will learn different criteria for line-fitting, and we will also learn about the uncertainty associated with estimates of model parameters.

Figure 8.2: Three data sets where a linear model may be useful even though the data do not all fall exactly on the line.

We will also see examples in this chapter where fitting a straight line to the data, even if there is a clear relationship between the variables, is not helpful. One such case is shown in Figure 8.3 where there is a very strong relationship between the variables even though the trend is not linear.

Figure 8.3: A linear model is not useful in this nonlinear case. These data are from an introductory physics experiment. (The plot of distance traveled, in meters, against angle of incline, in degrees, with the note "Best fitting line is flat (!)", is not reproduced here.)

422 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION

8.1.2 Using linear regression to predict possum head lengths

Brushtail possums are marsupials that live in Australia. A photo of one is shown in Figure 8.4. Researchers captured 104 of these animals and took body measurements before releasing the animals back into the wild. We consider two of these measurements: the total length of each possum, from head to tail, and the length of each possum's head. Figure 8.5 shows a scatterplot for the head length and total length of the 104 possums. Each point represents a single observation from the data.

Figure 8.4: The common brushtail possum of Australia. Photo by Peter Firminger on Flickr: http://flic.kr/p/6aPTn, CC BY 2.0 license.

Figure 8.5: A scatterplot showing head length against total length for 104 brushtail possums. A point representing a possum with head length 94.1 mm and total length 89 cm is highlighted. (The scatterplot is not reproduced here.)

8.1. LINE FITTING, RESIDUALS, AND CORRELATION 423

The head and total length variables are associated: possums with an above average total length also tend to have above average head lengths. While the relationship is not perfectly linear, it could be helpful to partially explain the connection between these variables with a straight line. We want to describe the relationship between the head length and total length variables in the possum data set using a line. In this example, we will use the total length, x, to explain or predict a possum's head length, y. When we use x to predict y, we usually call x the explanatory variable or predictor variable, and we call y the response variable. We could fit the linear relationship by eye, as in Figure 8.6. The equation for this line is

ˆy = 41 + 0.59x

A "hat" on y is used to signify that this is a predicted value, not an observed value. We can use this line to discuss properties of possums. For instance, the equation predicts a possum with a total length of 80 cm will have a head length of

ˆy = 41 + 0.59(80) = 88.2

The value ˆy may be viewed as an average: the equation predicts that possums with a total length of 80 cm will have an average head length of 88.2 mm. The value ˆy is also a prediction: absent further information about an 80 cm possum, this is our best prediction for the head length of a single 80 cm possum.

424 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION

8.1.3 Residuals

Residuals are the leftover variation in the response variable after fitting a model. Each observation will have a residual, and three of the residuals for the linear model we fit for the possum data are shown in Figure 8.6. If an observation is above the regression line, then its residual, the vertical distance from the observation to the line, is positive. Observations below the line have negative residuals. One goal in picking the right linear model is for these residuals to be as small as possible.

Figure 8.6: A reasonable linear model was fit to represent the relationship between head length and total length.
Let's look closer at the three residuals featured in Figure 8.6. The observation marked by an "×" has a small, negative residual of about -1; the observation marked by "+" has a large residual of about +7; and the observation marked by "△" has a moderate residual of about -4. The size of a residual is usually discussed in terms of its absolute value. For example, the residual for "△" is larger than that of "×" because | − 4| is larger than | − 1|.

RESIDUAL: DIFFERENCE BETWEEN OBSERVED AND EXPECTED
The residual for a particular observation (x, y) is the difference between the observed response and the response we would predict based on the model:
residual = observed y − predicted y = y − ˆy
We typically identify ˆy by plugging x into the model.

8.1. LINE FITTING, RESIDUALS, AND CORRELATION 425

EXAMPLE 8.1
The linear fit shown in Figure 8.6 is given as ˆy = 41 + 0.59x. Based on this line, compute and interpret the residual of the observation (77.0, 85.3). This observation is denoted by "×" on the plot. Recall that x is the total length measured in cm and y is head length measured in mm.

We first compute the predicted value based on the model:
ˆy = 41 + 0.59x = 41 + 0.59(77.0) = 86.4
Next we compute the difference of the actual head length and the predicted head length:
residual = y − ˆy = 85.3 − 86.4 = −1.1
The residual for this point is -1.1 mm, which is very close to the visual estimate of -1 mm. For this particular possum with total length of 77 cm, the model's prediction for its head length was 1.1 mm too high.

GUIDED PRACTICE 8.2
If a model underestimates an observation, will the residual be positive or negative? What about if it overestimates the observation?1

GUIDED PRACTICE 8.3
Compute the residual for the observation (95.5, 94.0), denoted by "△" in the figure, using the linear model: ˆy = 41 + 0.59x.2
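These by-hand calculations are easy to check in code. The short sketch below is illustrative only (Python is an assumption; the text works these out by hand) and applies the eyeballed line ˆy = 41 + 0.59x to the two highlighted observations.

```python
# Sketch: residuals for the two highlighted possum observations,
# using the line y-hat = 41 + 0.59x from Figure 8.6.
def predicted_head_length(total_length_cm):
    """Predicted head length (mm) for a possum with the given total length (cm)."""
    return 41 + 0.59 * total_length_cm

# (total length in cm, observed head length in mm)
for x, y in [(77.0, 85.3), (95.5, 94.0)]:
    residual = y - predicted_head_length(x)   # observed minus predicted
    print(x, round(residual, 1))              # prints about -1.1 and -3.3
```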
Residuals are helpful in evaluating how well a linear model fits a data set. We often display the residuals in a residual plot such as the one shown in Figure 8.7. Here, the residuals are calculated for each x value, and plotted versus x. For instance, the point (85.0, 98.6) had a residual of 7.45, so in the residual plot it is placed at (85.0, 7.45). Creating a residual plot is sort of like tipping the scatterplot over so the regression line is horizontal. From the residual plot, we can better estimate the standard deviation of the residuals, often denoted by the letter s. The standard deviation of the residuals tells us the typical size of the residuals. As such, it is a measure of the typical deviation between the y values and the model predictions. In other words, it tells us the typical prediction error using the model.3

1 If a model underestimates an observation, then the model estimate is below the actual. The residual, which is the actual observation value minus the model estimate, must then be positive. The opposite is true when the model overestimates the observation: the residual is negative.
2 First compute the predicted value based on the model, then compute the residual. ˆy = 41 + 0.59x = 41 + 0.59(95.50) = 97.3, residual = y − ˆy = 94.0 − 97.3 = −3.3. The residual is -3.3, so the model overpredicted the head length for this possum by 3.3 mm.
3 The standard deviation of the residuals is calculated as: s = √( Σ(yi − ˆyi)² / (n − 2) ).

426 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION

EXAMPLE 8.4
Estimate the standard deviation of the residuals for predicting head length from total length using the line ˆy = 41 + 0.59x and Figure 8.7. Also, interpret the quantity in context.

To estimate this graphically, we use the residual plot. The approximate 68, 95 rule for standard deviations applies. Approximately 2/3 of the points are within ± 2.5 and approximately 95% of the points are within ± 5, so 2.5 is a good estimate for the standard deviation of the residuals. The typical error when predicting head length using this model is about 2.5 mm.

Figure 8.7: Left: Scatterplot of head length versus total length for 104 brushtail possums. Three particular points have been highlighted. Right: Residual plot for the model shown in left panel. (The plots are not reproduced here.)

STANDARD DEVIATION OF THE RESIDUALS
The standard deviation of the residuals, often denoted by the letter s, tells us the typical error in the predictions using the regression model. It can be estimated from a residual plot.
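When the individual measurements are available, s can also be computed directly from the residuals instead of being estimated from the plot. The sketch below is purely illustrative: the x and y arrays are made-up stand-in values (the possum measurements themselves are not listed in the text), and Python with NumPy is an assumption. A residual plot would then simply graph these residuals against x.

```python
# Sketch: standard deviation of the residuals, s = sqrt(sum((y - y_hat)^2) / (n - 2)).
# The arrays below are made-up illustrative values, not the possum data.
import numpy as np

x = np.array([76.0, 80.5, 85.0, 88.0, 92.5, 95.5])   # explanatory variable
y = np.array([85.0, 88.9, 98.6, 93.0, 96.2, 94.0])   # response variable

y_hat = 41 + 0.59 * x                 # predictions from the fitted line
residuals = y - y_hat                 # these are what a residual plot displays
s = np.sqrt(np.sum(residuals**2) / (len(x) - 2))
print(np.round(residuals, 2), round(float(s), 2))
```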
8.1. LINE FITTING, RESIDUALS, AND CORRELATION 427

EXAMPLE 8.5
One purpose of residual plots is to identify characteristics or patterns still apparent in data after fitting a model. Figure 8.8 shows three scatterplots with linear models in the first row and residual plots in the second row. Can you identify any patterns remaining in the residuals?

In the first data set (first column), the residuals show no obvious patterns. The residuals appear to be scattered randomly around the dashed line that represents 0. The second data set shows a pattern in the residuals. There is some curvature in the scatterplot, which is more obvious in the residual plot. We should not use a straight line to model these data. Instead, a more advanced technique should be used. The last plot shows very little upwards trend, and the residuals also show no obvious patterns. It is reasonable to try to fit a linear model to the data. However, it is unclear whether there is statistically significant evidence that the slope parameter is different from zero. The slope of the sample regression line is not zero, but we might wonder if this could be due to random variation. We will address this sort of scenario in Section 8.4.

Figure 8.8: Sample data with their best fitting lines (top row) and their corresponding residual plots (bottom row). (The plots are not reproduced here.)

428 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION

8.1.4 Describing linear relationships with correlation
When a linear relationship exists between two variables, we can quantify the strength and direction of the linear relation with the correlation coefficient, or just correlation for short. Figure 8.9 shows eight plots and their corresponding correlations.

Figure 8.9: Sample scatterplots and their correlations. (The eight panels, with correlations r = 0.33, 0.69, 0.98, 1.00, −0.08, −0.64, −0.92, and −1.00, are not reproduced here.)

The first row shows variables with a positive relationship, represented by the trend up and to the right. The second row shows variables with a negative trend, where a large value in one variable is associated with a low value in the other. Only when the relationship is perfectly linear is the correlation either −1 or 1. If the linear relationship is strong and positive, the correlation will be near +1. If it is strong and negative, it will be near −1. If there is no apparent linear relationship between the variables, then the correlation will be near zero.

CORRELATION MEASURES THE STRENGTH OF A LINEAR RELATIONSHIP
Correlation, which always takes values between -1 and 1, describes the direction and strength of the linear relationship between two numerical variables. The strength can be strong, moderate, or weak.

We compute the correlation using a formula, just as we did with the sample mean and standard deviation. Formally, we can compute the correlation for observations (x1, y1), (x2, y2), ..., (xn, yn) using the formula

r = (1 / (n − 1)) Σ [ (xi − ¯x) / sx ] [ (yi − ¯y) / sy ]

where ¯x, ¯y, sx, and sy are the sample means and standard deviations for each variable. This formula is rather complex, and we generally perform the calculations on a computer or calculator. We can note, though, that the computation involves taking, for each point, the product of the Z-scores that correspond to the x and y values.
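To see the Z-score formulation in action, the sketch below computes r as the average product of Z-scores and checks it against a built-in routine. The data are made-up illustrative values and Python with NumPy is an assumption; the text itself leaves this computation to a calculator or computer.

```python
# Sketch: correlation as (1/(n-1)) * sum of products of Z-scores,
# checked against NumPy's corrcoef.  Illustrative data only.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.0, 3.7, 4.1, 5.6, 5.2])

n = len(x)
zx = (x - x.mean()) / x.std(ddof=1)   # Z-scores using the sample SD (ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
r = np.sum(zx * zy) / (n - 1)

print(round(r, 3), round(float(np.corrcoef(x, y)[0, 1]), 3))   # the two values agree
```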
8.1. LINE FITTING, RESIDUALS, AND CORRELATION 429

EXAMPLE 8.6
Take a look at Figure 8.6 on page 424. How would the correlation between head length and total body length of possums change if head length were measured in cm rather than mm? What if head length were measured in inches rather than mm?

Here, changing the units of y corresponds to multiplying all the y values by a certain number. This would change the mean and the standard deviation of y, but it would not change the correlation. To see this, imagine dividing every number on the vertical axis by 10. The units of y are now in cm rather than in mm, but the graph has remained exactly the same. The units of y have changed, but the relative distances of the y values about the mean are the same; that is, the Z-scores corresponding to the y values have remained the same.

CHANGING UNITS OF X AND Y DOES NOT AFFECT THE CORRELATION
The correlation, r, between two variables is not dependent upon the units in which the variables are recorded. Correlation itself has no units.

Correlation is intended to quantify the strength of a linear trend. Nonlinear trends, even when strong, sometimes produce correlations that do not reflect the strength of the relationship; see three such examples in Figure 8.10.

Figure 8.10: Sample scatterplots and their correlations. In each case, there is a strong relationship between the variables. However, the correlation is not very strong, and the relationship is not linear. (The three panels, with correlations r = −0.23, 0.31, and 0.50, are not reproduced here.)

GUIDED PRACTICE 8.7
It appears no straight line would fit any of the datasets represented in Figure 8.10. Try drawing nonlinear curves on each plot. Once you create a curve for each, describe what is important in your fit.4

4 We'll leave it to you to draw the lines. In general, the lines you draw should be close to most points and reflect overall trends in the data.

430 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION

EXAMPLE 8.8
Consider the four scatterplots in Figure 8.11. In which scatterplot is the correlation between x and y the strongest?

All four data sets have the exact same correlation of r = 0.816 as well as the same equation for the best fit line! This group of four graphs, known as Anscombe's Quartet, reminds us that knowing the value of the correlation does not tell us what the corresponding scatterplot looks like. It is always important to first graph the data. Investigate Anscombe's Quartet in Desmos: https://www.desmos.com
/calculator/paknt6oneh. Figure 8.11: Four scatterplots from Desmos with best fit line drawn in. 8.1. LINE FITTING, RESIDUALS, AND CORRELATION 431 Section summary • In Chapter 2 we introduced a bivariate display called a scatterplot, which shows the relationship between two numerical variables. When we use x to predict y, we call x the explanatory variable or predictor variable, and we call y the response variable. • A linear model for bivariate numerical data can be useful for prediction when the association between the variables follows a constant, linear trend. Linear models should not be used if the trend between the variables is curved. • When we write a linear model, we use ˆy to indicate that it is the model or the prediction. The value ˆy can be understood as a prediction for y based on a given x, or as an average of the y values for a given x. • The residual is the error between the true value and the modeled value, computed as y − ˆy. The order of the difference matters, and the sign of the residual will tell us if the model overpredicted or underpredicted a particular data point. • The symbol s in a linear model is used to denote the standard deviation of the residuals, and it measures the typical prediction error by the model. • A residual plot is a scatterplot with the residuals on the vertical axis. The residuals are often plotted against x on the horizontal axis, but they can also be plotted against y, ˆy, or other variables. Two important uses of a residual plot are the following. – Residual plots help us see patterns in the data that may not have been apparent in the scatterplot. – The standard deviation of the residuals is easier to estimate from a residual plot than from the original scatterplot. • Correlation, denoted with the letter r, measures the strength and direction of a linear rela- tionship. The following are some important facts about correlation. – The value of r is always between −1 and 1, inclusive, with an r = −1 indicating a perfect negative relationship (points fall exactly along a line that has negative slope) and an r = 1 indicating a perfect positive relationship (points fall exactly along a line that has positive slope). – An r = 0 indicates no linear association between the variables, though there may
well exist a quadratic or other type of association. – Just like Z-scores, the correlation has no units. Changing the units in which x or y are measured does not affect the correlation. – Correlation is sensitive to outliers. Adding or removing a single point can have a big effect on the correlation. – As we learned previously, correlation is not causation. Even a very strong correlation cannot prove causation; only a well-designed, controlled, randomized experiment can prove causation. 432 CHAPTER 8. INTRODUCTION TO LINEAR REGRESSION Exercises 8.1 Visualize the residuals. The scatterplots shown below each have a superimposed regression line. If we were to construct a residual plot (residuals versus x) for each, describe what those plots would look like. 8.2 Trends in the residuals. Shown below are two plots of residuals remaining after fitting a linear model to two different sets of data. Describe important features and determine if a linear model would be appropriate for these data. Explain your reasoning. 8.3 Identify relationships, Part I. For each of the six plots, identify the strength of the relationship (e.g. weak, moderate, or strong) in the data and whether fitting a linear model would be reasonable. llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(a)llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(b)llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll
llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(a)llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(b)lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(a)lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(b)lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(c)lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(d)lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(e)llll
lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(f) 8.1. LINE FITTING, RESIDUALS, AND CORRELATION 433 8.4 Identify relationships, Part II. For each of the six plots, identify the strength of the relationship (e.g. weak, moderate, or strong) in the data and whether fitting a linear model would be reasonable. 8.5 Exams and grades. The two scatterplots below show the relationship between final and mid-semester exam grades recorded during several years for a Statistics course at a university. (a) Based on these graphs, which of the two exams has the strongest correlation with the final exam grade? Explain. (b) Can you think of a reason why the correlation between the exam you chose in part (a) and the final exam is higher? lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(a)lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(b)lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll(c)llllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll
[The two scatterplots for Exercise 8.5 (Exam 1 vs. Final Exam and Exam 2 vs. Final Exam) are not reproduced here.]

8.6 Spouses, Part I. The Great Britain Office of Population Census and Surveys once collected data on a random sample of 170 married women in Britain, recording the age (in years) and heights (converted here to inches) of the women and their spouses.5 The scatterplot on the left shows the spouse's age plotted against the woman's age, and the plot on the right shows spouse's height plotted against the woman's height.

(a) Describe the relationship between the ages of women in the sample and their spouses' ages.
(b) Describe the relationship between the heights of women in the sample and their spouses' heights.
(c) Which plot shows a stronger correlation? Explain your reasoning.
(d) Data on heights were originally collected in centimeters, and then converted to inches. Does this conversion affect the correlation between heights of women in the sample and their spouses' heights?

8.7 Match the correlation, Part I. Match each correlation to the corresponding scatterplot.
(a) r = −0.7
(b) r = 0.45
(c) r = 0.06
(d) r = 0.92

8.8 Match the correlation, Part II. Match each correlation to the corresponding scatterplot.
(a) r = 0.49
(b) r = −0.48
(c) r = −0.03
(d) r = −0.85
8.9 Speed and height. 1,302 UCLA students were asked to fill out a survey in which they reported their height, the fastest speed they had ever driven, and their gender. The scatterplot on the left displays the relationship between height and fastest speed, and the scatterplot on the right displays the breakdown by gender in this relationship.

(a) Describe the relationship between height and fastest speed.
(b) Why do you think these variables are positively associated?
(c) What role does gender play in the relationship between height and fastest driving speed?

5 D.J. Hand. A handbook of small data sets. Chapman & Hall/CRC, 1994.

[The scatterplots for Exercise 8.6 (woman's age vs. spouse's age; woman's height vs. spouse's height), the four scatterplots labeled (1)–(4) for Exercises 8.7 and 8.8, and the two scatterplots for Exercise 8.9 (height in inches vs. fastest speed in mph, overall and by gender) are not reproduced here.]
8.10 Guess the correlation. Eduardo and Rosie are both collecting data on the number of rainy days in a year and the total rainfall for the year. Eduardo records rainfall in inches and Rosie in centimeters. How will their correlation coefficients compare?

8.11 The Coast Starlight, Part I. The Coast Starlight Amtrak train runs from Seattle to Los Angeles. The scatterplot below displays the distance between each stop (in miles) and the amount of time it takes to travel from one stop to another (in minutes).

(a) Describe the relationship between distance and travel time.
(b) How would the relationship change if travel time was instead measured in hours, and distance was instead measured in kilometers?
(c) Correlation between travel time (in minutes) and distance (in miles) is r = 0.636. What is the correlation between travel time (in hours) and distance (in kilometers)?

8.12 Crawling babies, Part I. A study conducted at the University of Denver investigated whether babies take longer to learn to crawl in cold months, when they are often bundled in clothes that restrict their movement, than in warmer months.6 Infants born during the study year were split into twelve groups, one for each birth month. We consider the average crawling age of babies in each group against the average temperature when the babies are six months old (that's when babies often begin trying to crawl). Temperature is measured in degrees Fahrenheit (◦F) and age is measured in weeks.

(a) Describe the relationship between temperature and crawling age.
(b) How would the relationship change if temperature was measured in degrees Celsius (◦C) and age was measured in months?
(c) The correlation between temperature in ◦F and age in weeks was r = −0.70. If we converted the temperature to ◦C and age to months, what would the correlation be?

6 J.B. Benson. "Season of birth and onset of locomotion: Theoretical and methodological implications". In: Infant behavior and development 16.1 (1993), pp. 69–81. issn: 0163-6383.

[The scatterplots for Exercise 8.11 (distance in miles vs. travel time in minutes) and Exercise 8.12 (temperature in ◦F vs. average crawling age in weeks) are not reproduced here.]
8.13 Body measurements, Part I. Researchers studying anthropometry collected body girth measurements and skeletal diameter measurements, as well as age, weight, height and gender for 507 physically active individuals.7 The scatterplot below shows the relationship between height and shoulder girth (over deltoid muscles), both measured in centimeters.

(a) Describe the relationship between shoulder girth and height.
(b) How would the relationship change if shoulder girth was measured in inches while the units of height remained in centimeters?

8.14 Body measurements, Part II. The scatterplot below shows the relationship between weight measured in kilograms and hip girth measured in centimeters from the data described in Exercise 8.13.

(a) Describe the relationship between hip girth and weight.
(b) How would the relationship change if weight was measured in pounds while the units for hip girth remained in centimeters?

8.15 Correlation, Part I. What would be the correlation between the ages of a set of women and their spouses if the set of women always married someone who was

(a) 3 years younger than themselves?
(b) 2 years older than themselves?
(c) half as old as themselves?

8.16 Correlation, Part II. What would be the correlation between the annual salaries of males and females at a company if for a certain type of position men always made

(a) $5,000 more than women?
(b) 25% more than women?
(c) 15% less than women?

7 G. Heinz et al. "Exploring relationships in body dimensions". In: Journal of Statistics Education 11.2 (2003).

[The scatterplots for Exercise 8.13 (shoulder girth vs. height, both in cm) and Exercise 8.14 (hip girth in cm vs. weight in kg) are not reproduced here.]

8.2 Fitting a line by least squares regression

In this section, we answer the following questions:

• How well can we predict financial aid based on family income for a particular college?
• How does one find, interpret, and apply the least squares regression line?
• How do we measure the fit of a model and compare different models to each other?
• Why do models sometimes make predictions that are ridiculous or impossible?

Learning objectives
1. Calculate the slope and y-intercept of the least squares regression line using the relevant summary statistics. Interpret these quantities in context.

2. Understand why the least squares regression line is called the least squares regression line.

3. Interpret the explained variance R².

4. Understand the concept of extrapolation and why it is dangerous.

5. Identify outliers and influential points in a scatterplot.

8.2.1 An objective measure for finding the best line

Fitting linear models by eye is open to criticism since it is based on an individual's preference. In this section, we use least squares regression as a more rigorous approach.

This section considers family income and gift aid data from a random sample of fifty students in the freshman class of Elmhurst College in Illinois. Gift aid is financial aid that does not need to be paid back, as opposed to a loan. A scatterplot of the data is shown in Figure 8.12 along with two linear fits. The lines follow a negative trend in the data; students whose families had higher incomes tended to receive less gift aid from the university.

Figure 8.12: Gift aid and family income for a random sample of 50 freshman students from Elmhurst College. Two lines are fit to the data, the solid line being the least squares line. [Scatterplot of family income ($1000s) vs. gift aid from university ($1000s) not reproduced here.]

We begin by thinking about what we mean by "best". Mathematically, we want a line that has small residuals. Perhaps our criterion could be to minimize the sum of the residual magnitudes:

|y1 − ŷ1| + |y2 − ŷ2| + · · · + |yn − ŷn|

which we could accomplish with a computer program. The resulting dashed line shown in Figure 8.12 demonstrates this fit can be quite reasonable. However, a more common practice is to choose the line that minimizes the sum of the squared residuals:

(y1 − ŷ1)² + (y2 − ŷ2)² + · · · + (yn − ŷn)²

The line that minimizes the sum of the squared residuals is represented as the solid line in Figure 8.12. This is commonly called the least squares line.
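To make the two criteria concrete, here is a minimal sketch (not part of the text) that fits a line under each criterion for a small made-up data set; the data values, the use of scipy.optimize.minimize, and the starting guesses are all illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Made-up data roughly following a negative trend, loosely in the spirit of Figure 8.12
    x = np.array([30.0, 60.0, 90.0, 120.0, 180.0, 240.0])
    y = np.array([26.0, 23.0, 21.0, 18.0, 14.0, 10.0])

    def sum_abs_resid(params):
        a, b = params
        return np.sum(np.abs(y - (a + b * x)))   # criterion 1: sum of |residuals|

    def sum_sq_resid(params):
        a, b = params
        return np.sum((y - (a + b * x)) ** 2)    # criterion 2: sum of squared residuals

    fit_abs = minimize(sum_abs_resid, x0=[25.0, -0.05], method="Nelder-Mead")
    fit_sq = minimize(sum_sq_resid, x0=[25.0, -0.05], method="Nelder-Mead")

    print("minimizing |residuals|:       ", fit_abs.x)  # (a, b), like the dashed line
    print("minimizing squared residuals: ", fit_sq.x)   # (a, b), the least squares line

The two fitted lines are usually close but not identical; the rest of this section focuses on the squared-residual criterion.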
Both lines seem reasonable, so why do data scientists prefer the least squares regression line? One reason is practical: the least squares line is easier to compute, both by hand and in statistical software. Another, and more compelling, reason is that in many applications, a residual twice as large as another residual is more than twice as bad. For example, being off by 4 is usually more than twice as bad as being off by 2. Squaring the residuals accounts for this discrepancy.

In Figure 8.13, we imagine the squared error about a line as actual squares. The least squares regression line minimizes the sum of the areas of these squared errors. In the figure, the sum of the squared error is 4 + 1 + 1 = 6. There is no other line about which the sum of the squared error will be smaller.

Figure 8.13: A visualization of least squares regression using Desmos. Try out this and other interactive Desmos activities at openintro.org/ahss/desmos.

8.2.2 Finding the least squares line

For the Elmhurst College data, we could fit a least squares regression line for predicting gift aid based on a student's family income and write the equation as:

aid = a + b × family income

Here a is the y-intercept of the least squares regression line and b is the slope of the least squares regression line. a and b are both statistics that can be calculated from the data. In the next section we will consider the corresponding parameters that these statistics attempt to estimate.

We can enter all of the data into a statistical software package and easily find the values of a and b. However, we can also calculate these values by hand, using only the summary statistics.

• The slope of the least squares line is given by

  b = r × (s_y / s_x)

  where r is the correlation between the variables x and y, and s_x and s_y are the sample standard deviations of x, the explanatory variable, and y, the response variable.

• The point of averages (x̄, ȳ) is always on the least squares line. Plugging this point in for x and y in the least squares equation and solving for a gives

  ȳ = a + b x̄        a = ȳ − b x̄

FINDING THE SLOPE AND INTERCEPT OF THE LEAST SQUARES REGRESSION LINE

The least squares regression line for predicting y based on x can be written as ŷ = a + bx. We first find b, the slope, and then we solve for a, the y-intercept:

b = r × (s_y / s_x)        ȳ = a + b x̄
GUIDED PRACTICE 8.9

Figure 8.14 shows the sample means for the family income and gift aid as $101,800 and $19,940, respectively. Plot the point (101.8, 19.94) on Figure 8.12 to verify it falls on the least squares line (the solid line).8

          family income, in $1000s ("x")    gift aid, in $1000s ("y")
mean      x̄ = 101.8                         ȳ = 19.94
sd        s_x = 63.2                         s_y = 5.46
                            r = −0.499

Figure 8.14: Summary statistics for family income and gift aid.

8 If you need help finding this location, draw a straight line up from the x-value of 100 (or thereabout). Then draw a horizontal line at 20 (or thereabout). These lines should intersect on the least squares line.

EXAMPLE 8.10

Using the summary statistics in Figure 8.14, find the equation of the least squares regression line for predicting gift aid based on family income.

b = r × (s_y / s_x) = (−0.499) × (5.46 / 63.2) = −0.0431
a = ȳ − b x̄ = 19.94 − (−0.0431)(101.8) = 24.3

ŷ = 24.3 − 0.0431x    or    aid = 24.3 − 0.0431 × family income
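As a quick check on the arithmetic in Example 8.10, here is a minimal sketch (not from the text) that applies the summary-statistic formulas in Python; it uses only the summary values from Figure 8.14, not the raw data.

    # Summary statistics from Figure 8.14 (both variables in $1000s)
    x_bar, s_x = 101.8, 63.2   # family income
    y_bar, s_y = 19.94, 5.46   # gift aid
    r = -0.499

    b = r * s_y / s_x          # slope
    a = y_bar - b * x_bar      # intercept

    print(round(b, 4), round(a, 1))   # approximately -0.0431 and 24.3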
EXAMPLE 8.11

Say we wanted to predict a student's family income based on the amount of gift aid that they received. Would this least squares regression line be the following?

aid = 24.3 − 0.0431 × family income

No. The equation we found was for predicting aid, not for predicting family income. We would have to calculate a new regression line, letting y be family income and x be aid. This would give us:

b = r × (s_y / s_x) = (−0.499) × (63.2 / 5.46) = −5.776
a = ȳ − b x̄ = 101.8 − (−5.776)(19.94) = 217.0

ŷ = 217.0 − 5.776x    or    family income = 217.0 − 5.776 × aid

We mentioned earlier that a computer is usually used to compute the least squares line. A summary table based on computer output is shown in Figure 8.15 for the Elmhurst College data. The first column of numbers provides estimates for the y-intercept a and the slope b, respectively. Compare these to the results from Example 8.10.

                 Estimate   Std. Error   t value   Pr(>|t|)
(Intercept)       24.3193       1.2915     18.83     0.0000
family income     -0.0431       0.0108     -3.98     0.0002

Figure 8.15: Summary of least squares fit for the Elmhurst College data. Compare the parameter estimates in the first column to the results of Example 8.10.

EXAMPLE 8.12

Examine the second, third, and fourth columns in Figure 8.15. Can you guess what they represent?

We'll look at the second row, which corresponds to the slope. The first column, Estimate = -0.0431, tells us our best estimate for the slope of the population regression line. We call this point estimate b. The second column, Std. Error = 0.0108, is the standard error of this point estimate. The third column, t value = -3.98, is the T test statistic for the null hypothesis that the slope of the population regression line = 0. The last column, Pr(>|t|) = 0.0002, is the p-value for this two-sided T-test. We will get into more of these details in Section 8.4.
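For readers who want to see where a table like Figure 8.15 comes from, here is a minimal sketch (not from the text) using Python's statsmodels package; the arrays named income and aid are placeholders for the 50 raw Elmhurst observations, which are not listed in this section.

    import numpy as np
    import statsmodels.api as sm

    # Placeholder data standing in for the 50 raw observations (both in $1000s)
    income = np.array([50.0, 80.0, 100.0, 150.0, 220.0])
    aid = np.array([23.0, 21.0, 20.0, 17.0, 12.0])

    X = sm.add_constant(income)    # adds the intercept column
    fit = sm.OLS(aid, X).fit()     # ordinary least squares

    print(fit.summary())           # coefficient, std err, t, and P>|t| columns,
                                   # analogous to the columns in Figure 8.15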
EXAMPLE 8.13

Suppose a high school senior is considering Elmhurst College. Can she simply use the linear equation that we have found to calculate her financial aid from the university?

No. Using the equation will provide a prediction or estimate. However, as we see in the scatterplot, there is a lot of variability around the line. While the linear equation is good at capturing the trend in the data, there will be significant error in predicting an individual student's aid. Additionally, the data all come from one freshman class, and the way aid is determined by the university may change from year to year.

8.2.3 Interpreting the coefficients of a regression line

Interpreting the coefficients in a regression model is often one of the most important steps in the analysis.

EXAMPLE 8.14

The slope for the Elmhurst College data for predicting gift aid based on family income was calculated as -0.0431. Interpret this quantity in the context of the problem.

You might recall from an algebra course that slope is change in y over change in x. Here, both x and y are in thousands of dollars. So if x is one unit, or one thousand dollars, higher, the line predicts that y will be 0.0431 thousand dollars lower. In other words, for each additional thousand dollars of family income, on average, students receive 0.0431 thousand dollars, or $43.10, less in gift aid. Note that a higher family income corresponds to less aid because the slope is negative.

EXAMPLE 8.15

The y-intercept for the Elmhurst College data for predicting gift aid based on family income was calculated as 24.3. Interpret this quantity in the context of the problem.

The intercept a describes the predicted value of y when x = 0. The predicted gift aid is 24.3 thousand dollars if a student's family has no income. The meaning of the intercept is relevant to this application since the family income for some students at Elmhurst is $0. In other applications, the intercept may have little or no practical value if there are no observations where x is near zero. Here, it would be acceptable to say that the average gift aid is 24.3 thousand dollars among students whose families have 0 dollars in income.

INTERPRETING COEFFICIENTS IN A LINEAR MODEL

• The slope, b, describes the average increase or decrease in the y variable if the explanatory variable x is one unit larger.

• The y-intercept, a, describes the predicted outcome of y if x = 0. The linear model must be valid all the way to x = 0 for this to make sense, which in many applications is not the case.
GUIDED PRACTICE 8.16

In the previous chapter, we encountered a data set that compared the price of new textbooks for UCLA courses at the UCLA Bookstore and on Amazon. We fit a linear model for predicting price at the UCLA Bookstore from price on Amazon and we get:

ŷ = 1.86 + 1.03x

where x is the price on Amazon and y is the price at the UCLA Bookstore. Interpret the coefficients in this model and discuss whether the interpretations make sense in this context.9

GUIDED PRACTICE 8.17

Can we conclude that if Amazon raises the price of a textbook by 1 dollar, the UCLA Bookstore will raise the price of the textbook by $1.03?10

EXERCISE CAUTION WHEN INTERPRETING COEFFICIENTS OF A LINEAR MODEL

• The slope tells us only the average change in y for each unit change in x; it does not tell us how much y might change based on a change in x for any particular individual. Moreover, in most cases, the slope cannot be interpreted in a causal way.

• When a value of x = 0 doesn't make sense in an application, then the interpretation of the y-intercept won't have any practical meaning.

8.2.4 Extrapolation is treacherous

When those blizzards hit the East Coast this winter, it proved to my satisfaction that global warming was a fraud. That snow was freezing cold. But in an alarming trend, temperatures this spring have risen. Consider this: On February 6th it was 10 degrees. Today it hit almost 80. At this rate, by August it will be 220 degrees. So clearly folks the climate debate rages on.

Stephen Colbert, April 6th, 2010 11

Linear models can be used to approximate the relationship between two variables. However, these models have real limitations. Linear regression is simply a modeling framework. The truth is almost always much more complex than our simple line. For example, we do not know how the data outside of our limited window will behave.

9 The y-intercept is 1.86 and the units of y are in dollars. This tells us that when a textbook costs 0 dollars on Amazon, the predicted price of the textbook at the UCLA Bookstore is 1.86 dollars. This does not make sense as Amazon does not sell any $0 textbooks. The slope is 1.03, with units (dollars)/(dollars). On average, for every extra dollar that a book costs on Amazon, it costs an extra 1.03 dollars at the UCLA Bookstore. This interpretation does make sense in this context.
10 No. The slope describes the overall trend. This is observational data; a causal conclusion cannot be drawn. Remember, a causal relationship can only be concluded by a well-designed, randomized, controlled experiment. Additionally, there may be large variation in the points about the line. The slope does not tell us how much y might change based on a change in x for a particular textbook.

11 www.cc.com/video-clips/l4nkoq/

EXAMPLE 8.18

Use the model aid = 24.3 − 0.0431 × family income to estimate the aid of another freshman student whose family had income of $1 million.

Recall that the units of family income are in $1000s, so we want to calculate the aid for family income = 1000:

aid = 24.3 − 0.0431 × family income
aid = 24.3 − 0.0431(1000) = −18.8

The model predicts this student will have -$18,800 in aid (!). Elmhurst College cannot (or at least does not) require any students to pay extra on top of tuition to attend. Using a model to predict y-values for x-values outside the domain of the original data is called extrapolation. Generally, a linear model is only an approximation of the real relationship between two variables. If we extrapolate, we are making an unreliable bet that the approximate linear relationship will be valid in places where it has not been analyzed.
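A short sketch (not part of the text) makes it easy to see when the Elmhurst equation stops giving sensible answers; the income range mentioned in the comments is an illustrative assumption based on the incomes visible in Figure 8.12, not a rule from the text.

    def predict_aid(income_thousands):
        """Predicted gift aid ($1000s) from the model aid = 24.3 - 0.0431 * family income."""
        return 24.3 - 0.0431 * income_thousands

    # Inside the range of incomes shown in Figure 8.12 (roughly $0 to $250k)
    print(predict_aid(100))    # about 20.0, i.e. roughly $20,000 in aid

    # Far outside the observed range: extrapolation
    print(predict_aid(1000))   # -18.8, i.e. a nonsensical -$18,800 in aid

The equation itself gives no warning that the second prediction is an extrapolation; it is up to the analyst to compare the input against the range of the original data.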
8.2.5 Using R² to describe the strength of a fit

We evaluated the strength of the linear relationship between two variables earlier using the correlation, r. However, it is more common to explain the fit of a model using R², called R-squared or the explained variance. If provided with a linear model, we might like to describe how closely the data cluster around the linear fit.

Figure 8.16: Gift aid and family income for a random sample of 50 freshman students from Elmhurst College, shown with the least squares regression line (ŷ) and the average line (ȳ). [Scatterplot not reproduced here.]

We are interested in how well a model accounts for or explains the location of the y values. The R² of a linear model describes how much smaller the variance (in the y direction) about the regression line is than the variance about the horizontal line ȳ. For example, consider the Elmhurst College data, shown in Figure 8.16. The variance of the response variable, aid received, is s²_aid = 29.8. However, if we apply our least squares line, then this model reduces our uncertainty in predicting aid using a student's family income. The variability in the residuals describes how much variation remains after using the model: s²_RES = 22.4. We could say that the reduction in the variance was:

(s²_aid − s²_RES) / s²_aid = (29.8 − 22.4) / 29.8 = 7.4 / 29.8 = 0.25

If we used the simple standard deviation of the residuals, this would be exactly R². However, the standard way of computing the standard deviation of the residuals is slightly more sophisticated.12 To avoid any trouble, we can instead use a sum of squares method. If we call the sum of the squared errors about the regression line SSRes and the sum of the squared errors about the mean SSM, we can define R² as follows:

R² = (SSM − SSRes) / SSM = 1 − SSRes / SSM

Figure 8.17: (a) The regression line is equivalent to ȳ; R² = 0. (b) The regression line passes through all of the points; R² = 1. Try out this and other interactive Desmos activities at openintro.org/ahss/desmos.

GUIDED PRACTICE 8.19

Using the formula for R², confirm that in Figure 8.17 (a), R² = 0 and that in Figure 8.17 (b), R² = 1.13

R² IS THE EXPLAINED VARIANCE

R² is always between 0 and 1, inclusive. It tells us the proportion of variation in the y values that is explained by a regression model. The higher the value of R², the better the model "explains" the response variable.
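The sum-of-squares definition above is easy to verify numerically. Here is a minimal sketch (not from the text) on a small made-up data set; it computes R² as 1 − SSRes/SSM and compares it with the square of the correlation, anticipating the fact stated next.

    import numpy as np

    # Made-up data; any small data set works for this check
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    b, a = np.polyfit(x, y, 1)           # least squares slope and intercept
    y_hat = a + b * x

    ss_res = np.sum((y - y_hat) ** 2)    # squared errors about the regression line
    ss_m = np.sum((y - y.mean()) ** 2)   # squared errors about the mean of y

    r_squared = 1 - ss_res / ss_m
    r = np.corrcoef(x, y)[0, 1]

    print(r_squared, r ** 2)             # the two values agree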
The value of R² is, in fact, equal to r², where r is the correlation. This means that r = ±√R². Use this fact to answer the next two practice problems.

GUIDED PRACTICE 8.20

If a linear model has a very strong negative relationship with a correlation of -0.97, how much of the variation in the response variable is explained by the linear model?14

GUIDED PRACTICE 8.21

If a linear model has an R², or explained variance, of 0.94, what is the correlation?15

12 In computing the standard deviation of the residuals, we divide by n − 2 rather than by n − 1 to account for the n − 2 degrees of freedom.
13 (a) SSRes = SSM = (−1)² + (2)² + (−1)² = 6, so R² = 1 − 6/6 = 0. (b) R² = 1 − 0/8 = 1.
14 R² = (−0.97)² = 0.94, or 94%. 94% of the variation in y is explained by the linear model.
15 We take the square root of R² and get 0.97, but we must be careful, because r could be 0.97 or -0.97. Without knowing the slope or seeing the scatterplot, we have no way of knowing if r is positive or negative.

8.2.6 Technology: linear correlation and regression

Get started quickly with this Desmos LinReg Calculator (available at openintro.org/ahss/desmos).

Calculator instructions

TI-84: FINDING a, b, R², AND r FOR A LINEAR MODEL

Use STAT, CALC, LinReg(a + bx).

1. Choose STAT.
2. Right arrow to CALC.
3. Down arrow and choose 8:LinReg(a+bx).
   • Caution: choosing 4:LinReg(ax+b) will reverse a and b.
4. Let Xlist be L1 and Ylist be L2 (don't forget to enter the x and y values in L1 and L2 before doing this calculation).
5. Leave FreqList blank.
6. Leave Store RegEQ blank.
7. Choose Calculate and hit ENTER, which returns:
   a, the y-intercept of the best fit line
   b, the slope of the best fit line
   a