Inshallah, we'll start with numerical descriptive measures for the population. Last time we talked about the same descriptive measures for a sample, and we have already covered the mean, variance, and standard deviation. These are called statistics because they are computed from a sample. Here we'll see how to compute the same measures for a population, I mean for the entire data set. So the descriptive statistics described in the last two lectures were for a sample; here we'll see how to compute these measures for the entire population. In this case, the measures we talked about before are called parameters. If you remember the first lecture, we said there is a difference between statistics and parameters: a statistic is a value computed from a sample, while a parameter is a value computed from the population. The important population parameters are the population mean, variance, and standard deviation.

Let's start with the first one, the population mean. The sample mean is defined as the sum of the values divided by the sample size; here, we divide by the population size instead. That's the difference between the sample mean and the population mean. For the sample mean we use x-bar; here we use the Greek letter mu. So mu is the sum of the x values divided by the population size, not the sample size, and otherwise it is quite similar to the sample mean. Here mu is the population mean, N is the population size, and xi is the i-th value of the variable x.

Similarly, for the other parameter, the variance, there is a small difference between the sample and the population variance. Here we subtract the population mean instead of the sample mean, so we take the sum of (xi minus mu) squared, then divide by the population size, capital N, instead of n minus 1. That's the difference between the sample and the population variance. So again, in the sample variance we subtracted x-bar; here we subtract the population mean, mu, then divide by capital N instead of n minus 1.
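As a quick aside (not part of the lecture), here is a minimal Python sketch of these three population formulas. The function name and the tiny data set are made up for illustration; the point is only that the divisor is the population size N, not n minus 1.

```python
import math

def population_parameters(values):
    """Population mean, variance, and standard deviation.

    Divides by N (the population size), not by n - 1 as the
    sample variance would.
    """
    N = len(values)
    mu = sum(values) / N                                 # population mean
    sigma_sq = sum((x - mu) ** 2 for x in values) / N    # population variance
    sigma = math.sqrt(sigma_sq)                          # population standard deviation
    return mu, sigma_sq, sigma

# Hypothetical population of five values, just for illustration
mu, sigma_sq, sigma = population_parameters([10, 12, 14, 16, 18])
print(mu, sigma_sq, sigma)   # 14.0 8.0 2.828...
```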
So the computations for the sample and the population mean or variance are quite similar. Finally, the population standard deviation: as with the sample standard deviation, you just take the square root of the population variance. And again, as we explained before, the standard deviation has the same units as the original data. So nothing is new; we just extend the sample statistics to the population parameters. Again, the population mean is denoted by mu, a Greek letter; the population variance is denoted by sigma squared; and finally, the population standard deviation is denoted by sigma. So those are the numerical descriptive measures, either for a sample or for a population.

Just to summarize these measures: the measures are the mean, variance, and standard deviation. The population parameters are mu for the mean, sigma squared for the variance, and sigma for the standard deviation. On the other hand, for the sample statistics we have x-bar for the sample mean, s squared for the sample variance, and s for the sample standard deviation. That's sample statistics versus population parameters. Any questions?

Let's move to a new topic, which is the empirical rule. The empirical rule lets us approximate the variation of the data when it is bell-shaped. I mean, suppose the data is symmetric around the mean. By symmetric around the mean, I mean the mean is the vertical line that splits the data into two halves, one to the right and the other to the left: the area to the right of the mean equals 50%, which is the same as the area to the left of the mean. Now suppose, or consider, that the data is bell-shaped: bell-shaped, normal, or symmetric, so it is not skewed either to the right or to the left. So here we assume the data is bell-shaped. In this case, there is a rule called the 68-95-99.7 rule. Number one: approximately 68% of the data in a bell-shaped distribution lies within one standard deviation of the mean.
So this is the first rule: 68% of the data, or of the observations, lies between mu minus sigma and mu plus sigma. That's what it means for the data in a bell-shaped distribution to be within one standard deviation of the mean, mu plus or minus sigma. So again, you can say that if the data is normally distributed, or if the data is bell-shaped, then 68% of the data lies within one standard deviation of the mean, either below or above it. So this is the first rule: 68% of the data lies between mu minus sigma and mu plus sigma.

The second rule is that approximately 95% of the data in a bell-shaped distribution lies within two standard deviations of the mean. That means this area is covered between mu minus two sigma and mu plus two sigma. So 95% of the data lies between mu minus two sigma and mu plus two sigma. And finally, approximately 99.7% of the data, which means almost all of the data, because saying 99.7 leaves only 0.3 out, falls or lies within three standard deviations of the mean. So 99.7% of the data lies between mu minus three sigma and mu plus three sigma.

68, 95, and 99.7 are fixed numbers. Later, in chapter 6, we will explain other coefficients in detail; maybe we are interested not in one of these but in 90%, 80%, or 85%. This rule is just for 68, 95, and 99.7, and it is called the 68-95-99.7 rule. That is, again: 68% of the data lies within one standard deviation of the mean, 95% of the data lies within two standard deviations of the mean, and finally most of the data falls within three standard deviations of the mean.

Let's see how we can use this empirical rule in a specific example. Imagine that the variable math SAT score is bell-shaped. So here we assume that the math SAT score has a symmetric, or bell, shape; in this case we can use the previous rule, otherwise we cannot. So assume the math SAT score is bell-shaped with a mean of 500, I mean the population mean is 500, and a standard deviation of 90. Let's see how we can apply the empirical rule.
So again, the math SAT score has a mean of 500 and a standard deviation, sigma, of 90. Then we can say that 68% of all test takers scored between mu minus sigma, which is 500 minus 90, and mu plus sigma, which is 500 plus 90. So you can say that 68% of all test takers scored between 410 and 590. So 68% of all the test takers who took that exam scored between 410 and 590, provided we assume, as stated previously, that the data is bell-shaped; otherwise we cannot say that. For the second rule, 95% of all test takers scored between mu minus 2 times sigma and mu plus 2 times sigma, that is, 500 minus 2 times 90 and 500 plus 2 times 90. So 500 minus 180 is 320, and 500 plus 180 is 680. So you can say that approximately 95% of all test takers scored between 320 and 680. Finally, you can say that approximately all of the test takers, approximately all, because when we say 99.7% only 0.3% is left over, scored between mu minus three sigma and mu plus three sigma. So 500 minus 3 times 90 is 500 minus 270, which is 230, and 500 plus 270 is 770. So we can say that 99.7% of all test takers scored between 230 and 770.
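As a side note (not in the lecture), a small Python sketch of these interval calculations; the helper name is made up, and the numbers are the SAT example just worked out.

```python
def empirical_rule_intervals(mu, sigma):
    """The 68-95-99.7 intervals, mu +/- k*sigma for k = 1, 2, 3.

    Only meaningful when the data are roughly bell-shaped.
    """
    coverage = {1: "68%", 2: "95%", 3: "99.7%"}
    return {coverage[k]: (mu - k * sigma, mu + k * sigma) for k in (1, 2, 3)}

# SAT example: mean 500, standard deviation 90
print(empirical_rule_intervals(500, 90))
# {'68%': (410, 590), '95%': (320, 680), '99.7%': (230, 770)}
```

The same function gives the intervals for the class-scores example that follows (mean 75, standard deviation 5): 70 to 80, 65 to 85, and 60 to 90.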
I will give another example, just to make sure you understand the meaning of this rule: a business statistics example. Suppose the scores are bell-shaped, so we are assuming the data is bell-shaped, with a mean of 75 and a standard deviation of 5. Also, let's assume that 100 students took the exam. So we have 100 students who took the business statistics exam last year; the mean was 75 and the standard deviation was 5. Let's see what the 68% rule tells us. It means that 68% of all the students scored between mu minus sigma and mu plus sigma, where mu is 75. So that means 68 students, because we have 100, scored between 70 and 80. So 68 students out of 100 scored between 70 and 80. About 95 students out of 100 scored between 75 minus 2 times 5 and 75 plus 2 times 5; that gives 65 as the minimum and 85 as the maximum. So you can say that around 95 students scored between 65 and 85. Finally, you can say almost all students, because 99.7% means almost all the students, scored between 75 minus 3 times 5 and 75 plus 3 times 5, that is, between 60 and 90.

Now let's look carefully at these three intervals. The first one is 70 to 80, the second one is 65 to 85, and the third is 60 to 90. When we are more confident, here at 99.7%, the interval becomes wider, so the last one is the widest interval. The length of the first interval is around 10, the second one is 20, and the third is 30, so the last interval has the greatest width. So as the confidence coefficient increases, the length of the interval becomes larger and larger: it starts at 10, then 20, and ends at 30. So that's another example of the empirical rule, and again, here we assume the data is bell-shaped.

Let's move to another rule for when the data is not bell-shaped. I mean, if we have data and that data is not symmetric, then that rule is no longer valid, so we have to use another rule, called Chebyshev's rule.

Any questions before we move to the next topic?

Chebyshev's rule says that, regardless of how the data are distributed, I mean even if the data is not symmetric or not bell-shaped, we can say that at least, instead of saying 68, 95, or 99.7, around (1 minus 1 over k squared) times 100% of the values will fall within k standard deviations of the mean. So k is the number of standard deviations, I mean the number of sigmas. So if the data is not bell-shaped, you can say that at least (1 minus 1 over k squared) times 100% of the values will fall within k standard deviations of the mean. In this case, we assume that k is greater than 1. I mean, you cannot apply this rule if k equals 1, because if k is 1, then 1 minus 1 is 0, and that makes no sense. For this reason, k must be greater than 1, so this rule is valid only for k greater than 1.
So you can say that at least 1 minus 1 over k squared of the data, or of the values, will fall within k standard deviations of the mean. Now, for example, suppose k equals 2. When k equals 2, we said that 95% of the data falls within two standard deviations; that is if the data is bell-shaped. Now what about if the data is not bell-shaped? We have to use Chebyshev's rule. So 1 minus 1 over k squared, with k equal to 2, is 1 minus one quarter, which gives three quarters, I mean 75%. So instead of saying that 95% of the data lies within two standard deviations of the mean, which is what we say if the data is bell-shaped, if the data is not bell-shaped you say that at least 75% of the data falls within two standard deviations. For a bell shape you are 95% confident there; here you are just 75% confident.

Now suppose k is 3. For k equal to 3, we said 99.7% of the data falls within three standard deviations. Here, if the data is not bell-shaped, 1 minus 1 over k squared is 1 minus 1 over 3 squared, which is 1 minus one ninth. One ninth is about 0.11, and 1 minus 0.11 means about 89% of the data, instead of saying 99.7%. So at least 89% of the data will fall within three standard deviations of the population mean, regardless of how the data are distributed around it.

So here we have two scenarios. One, if the data is symmetric, we use the empirical rule, the 68-95-99.7 rule. The other one is Chebyshev's rule, and that holds regardless of the shape of the data.

Excuse me? Yes. In this case, you don't know the distribution of the data, and in reality the data sometimes has an unknown distribution. For this reason, we have to use Chebyshev's rule. That's all for the empirical rule and Chebyshev's rule.
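For reference (an aside, not from the lecture), a tiny Python sketch of the Chebyshev bound; the function name is made up for illustration.

```python
def chebyshev_coverage(k):
    """Minimum fraction of values within k standard deviations of the mean,
    for any distribution shape (the rule requires k > 1)."""
    if k <= 1:
        raise ValueError("Chebyshev's rule requires k > 1")
    return 1 - 1 / k ** 2

for k in (2, 3):
    print(k, f"{chebyshev_coverage(k):.0%}")   # 2 -> 75%, 3 -> 89%
```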
The next topic is quartile measures. So far, we have discussed measures of central tendency, and we talked about the mean, median, and mode. Then we moved to measures of variability, or spread, or dispersion, and we talked about the range, variance, and standard deviation. And we said that outliers affect the mean much more than the median, and that outliers also affect the range. Here, we'll talk about other measures of the data, which are called quartile measures. Actually, we'll talk about two measures here: the first one is called the first quartile, and the other one is the third quartile. So we have two measures, the first and third quartiles. Quartiles split the ranked data into four equal segments; I mean, these measures split the data you have into four equal parts.

Q1 has 25% of the data falling below it; I mean, 25% of the values lie below Q1, which means 75% of the values are above it. So 25% below and 75% above. But you have to be careful that the data is arranged from smallest to largest. So Q1 is the value that has 25% of the data below it. Q2 is called the median, the value in the middle when we arrange the data from smallest to largest, so 50% of the data is below it and 50% of the data is above it. The other measure is called the third quartile: in this case, 25% of the data is above Q3 and 75% of the data is below Q3. So quartiles split the ranked data into four equal segments: Q1 has 25% to its left, Q2 has 50% to its left, and Q3 has 75% to its left and 25% to its right.

Before, we explained how to compute the median; now let's see how we can compute the first and third quartiles. If you remember, when we computed the median, we first located the position of the median, and we said that when n is odd it is at position n plus 1 divided by 2. This is the location of the median, not its value. Sometimes the value may equal the location, but most of the time that is not the case. Now let's see how we can locate the first quartile. After you arrange the data from smallest to largest, the location of the first quartile is n plus 1 divided by 4. The median, as we mentioned before, is located in the middle, so it makes sense that its location is (n plus 1) over 2. And for the third quartile, the location is (n plus 1) divided by 4 times 3, that is, 3 times (n plus 1) divided by 4. That's how we can locate Q1, Q2, and Q3.
So, one more time: the median, the value in the middle, is located exactly at position (n plus 1) over 2 for the ranked data; Q1 is located at position (n plus 1) divided by 4; and Q3 is located at position 3 times (n plus 1) divided by 4.

Now, when calculating the rank position, we use one of these rules. First, if the result of the location formula is a whole number, I mean an integer, then the rank position is that same number. For example, suppose the rank position is 4; then the value at position number four is your quartile, whether first, second, or third. So if the result is a whole number, that is the rank position used. Second, if the result is a fractional half, I mean if the rank position is 2.5, 3.5, 4.5, and so on, then average the two corresponding data values. For example, if the rank position is 2.5, take the average of the values at ranks 2 and 3: look at the value at rank 2 and the value at rank 3, then take their average. That is for when the rank position is a fractional half. So if the result is a whole number, just take it as it is; if it is a fractional half, take the two corresponding data values and average them. Third, if the result is neither a whole number nor a fractional half, round to the nearest integer. For example, suppose the location is 2.1; then the position is 2, just rounding to the nearest integer. What about if the rank position is 2.6? Round up to 3. So those are the rules you follow, depending on whether the result is a whole number, a fractional half, or neither.

Now look at this specific example. Suppose we have this data, an ordered array from 11 up to 22, and let's see how we can compute these measures. Look carefully here. First, let's compute the median, the value in the middle. How many values do we have? There are nine values, so the middle is position number five: one, two, three, four, five. So the median is 16.
This value, 16, is the median. Now look at the values around it: there are 4 values below and 4 values above the median. Now let's see how we can compute Q1. The position of Q1, as we mentioned, is n plus 1 divided by 4, so 9 plus 1 divided by 4, which is 2.5. A position of 2.5 means you have to take the average of the two corresponding values, at ranks 2 and 3: 12 plus 13 divided by 2, which gives 12.5. So Q1 is 12.5. Now what about Q3? For Q3, the rank position of Q1 was 2.5, so the position of Q3 is three times that value, because it is 3 times (n plus 1) over 4. That means the rank position is 7.5, so you have to take the average of the values at positions 7 and 8, which gives 19.5. So Q3 is 19.5.

Now, Q2 is located in the center because, as we mentioned, there are four values below it and four above it. What about Q1? Q1 is not in the center of the entire data, because with Q1 equal to 12.5 there are two observations below it and seven observations above it. So Q1 is not central. Q3 is also not central, because there are two observations above it and seven below it. That means Q1 and Q3 are measures of non-central location, while the median is a measure of central location. But if you just look at the data below the overall median, and focus only on that part, 12.5 lies exactly in the middle of it. So Q1 is the center of the data below the overall median: the overall median was 16, and for the data before 16, the median is 12.5, which is the first quartile. Similarly, if you look at the data above Q2, 19.5 is located in the middle of that part, so Q3 is a measure of center for the data above the median. Make sense? So that's how we can compute the first, second, and third quartiles.
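As an aside (not part of the lecture), a small Python sketch of this positioning rule. The ordered array is assumed to be 11, 12, 13, 16, 16, 17, 18, 21, 22, which is consistent with the quartiles just computed, though the slide's exact values are not listed here.

```python
def value_at_position(data, pos):
    """Apply the positioning rule to an ordered list (1-based position)."""
    if pos == int(pos):                       # whole number: use that rank directly
        return data[int(pos) - 1]
    if (pos * 2) == int(pos * 2):             # fractional half: average the two neighbours
        lower = int(pos)
        return (data[lower - 1] + data[lower]) / 2
    return data[round(pos) - 1]               # otherwise: round to the nearest rank

def quartiles(data):
    """Q1, median, Q3 at positions (n+1)/4, (n+1)/2, and 3(n+1)/4."""
    d = sorted(data)
    n = len(d)
    return (value_at_position(d, (n + 1) / 4),
            value_at_position(d, (n + 1) / 2),
            value_at_position(d, 3 * (n + 1) / 4))

# Assumed ordered array from the slide
print(quartiles([11, 12, 13, 16, 16, 17, 18, 21, 22]))   # (12.5, 16, 19.5)
```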
Any questions? Yes, about the whole-number case: a whole number means any integer. For example, suppose the number of observations we have is seven. Then the rank position of the median, n plus one divided by two, is seven plus one over two, which is four. Four is a whole number, I mean an integer, so in this case you just use it as it is.

Now let's see the benefit, or the feature, of using Q1 and Q3. So let's move to the interquartile range, or IQR. First, one more time about the 2.5: it is the rank position in the ranked data, so you take the average of the two corresponding values, at ranks 2 and 3, and the average of these two values is 12.5. One more time: 2.5 is not the value, it is the rank position of the first quartile. So in this case, 2.5 points to positions 2 and 3, and we take the average of the corresponding values, which are 12 and 13: 12 for position number 2 and 13 for the other one. So the average is their sum divided by 2, which gives 12.5.

Next, the interquartile range, which is denoted by IQR. The IQR is the distance between Q3 and Q1; I mean, the difference between Q3 and Q1 is called the interquartile range, and it measures the spread in the middle 50% of the data. Because if you imagine Q1 and Q3, the IQR is the distance between these two values. Now imagine that we keep just the data between them, which represents 50% of the values. The IQR, by definition, is Q3 minus Q1, which means the IQR is the maximum minus the minimum of that middle 50% of the data. So it is like a new range after you have excluded the 25% of the data to the left of Q1 and ignored the 25% of the data above Q3. That means you focus on the middle 50% of the data and take the distance between the two points, Q3 minus Q1. So you get a range, but not exactly the range; it is sometimes called the mid-spread, because we are talking about the middle of the data, the 50% of the data located in the middle.
Now, what about outliers in this case? Outliers are extreme values, the data below Q1 and the data above Q3. That means the interquartile range, Q3 minus Q1, is not affected by outliers, because you have ignored the smallest values and the highest values. So the IQR is not affected by outliers, and in the case of outliers it is better to use the IQR, because the range is the maximum minus the minimum, and as we mentioned before, the range is affected by outliers. So the IQR is again called the mid-spread because it covers the middle 50% of the data. The IQR is a measure of variability that is not influenced, or affected, by outliers or extreme values. So in the presence of outliers, it is better to use the IQR instead of the range. So again, the median and the IQR are not affected by outliers, so when outliers are present we use these measures, one as a measure of center and the other as a measure of spread. Measures like Q1, Q3, and the IQR that are not influenced by outliers are called resistant measures. Resistant means that in the presence of outliers they remain in the same position, or approximately the same position, because outliers do not affect these measures: they do not affect Q1 or Q3, and consequently not the IQR, because the IQR is just the distance between Q3 and Q1.

So to determine the value of the IQR, you first have to compute Q1 and Q3, then take the difference between the two. For example, suppose we have a data set with Q1 equal to 30 and Q3 equal to 57. The IQR, or interquartile range, is 57 minus 30, which is 27. Now what is the range? The range is the largest value minus the smallest value, which for this data set gives 58. Now look at the difference between the two: the interquartile range is 27, while the range is 58. There is a big difference between these two values, because the range depends only on the smallest and largest values, and those values could be outliers. For this reason, the range comes out larger than the interquartile range, which is just the spread of the middle 50% of the data.
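As an aside (not from the lecture), a small sketch of this resistance property using Python's standard library. The data set is the ordered array assumed earlier, and `statistics.quantiles` uses its own quartile convention, which may differ slightly from the positional rule above.

```python
from statistics import quantiles

def iqr(data):
    """Interquartile range Q3 - Q1, the spread of the middle 50% of the data."""
    q1, _, q3 = quantiles(data, n=4)
    return q3 - q1

clean = [11, 12, 13, 16, 16, 17, 18, 21, 22]
with_outlier = clean[:-1] + [80]     # replace the largest value with an extreme one

for d in (clean, with_outlier):
    print("range:", max(d) - min(d), " IQR:", iqr(d))
# The range jumps from 11 to 69, while the IQR stays at 7.0:
# the IQR is resistant to the outlier, the range is not.
```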
For this reason, in the presence of outliers it is better to use the IQR rather than the range. Does that make sense? Any questions?

The five-number summary consists of the smallest value, the first quartile, the median, the third quartile, and the largest value. These five numbers are called the five-number summary because, by using these statistics, smallest, first quartile, median, third quartile, and largest, you can describe the center, the spread, and the shape of the distribution. So by using the five-number summary, you can tell something about the center of the data, I mean the value in the middle, because the median is the value in the middle; about the spread, because we can talk about the IQR, the mid-spread range; and also about the shape of the data. Let's move to this slide, slide number 50, and see how we can construct something called a box plot.

A box plot can be constructed by using the five-number summary. We have the smallest value on one end and the largest value on the other; we also have Q1, the first quartile, the median, and Q3. For a symmetric distribution, I mean if the data is bell-shaped, the vertical line in the box, which represents the median, should be located in the middle of the box and also in the middle of the entire data. Look carefully at this vertical line: it splits the box into two halves, 25% to the left and 25% to the right. And this vertical line also splits the entire data, from smallest to largest, into two halves, because 50% of the observations lie below it and 50% lie above it. So that means that by using a box plot, you can tell something about the shape of the distribution. So again, if the data are symmetric around the median, the central line of the box is centered between the endpoints of the box, I mean between Q1 and Q3, and the whole box is centered between the smallest and the largest value; also, the distance between the median and the smallest value is roughly equal to the distance between the median and the largest value.
So you can tell something about the shape of the distribution by using the box plot. Look at the graph in the middle: here the mean and the median are the same, and in the box plot the median is in the middle of the box and also in the middle of the entire data, so you can say that the distribution of this data is symmetric, or bell-shaped; it is a normal distribution. On the other hand, if you look at the next one, you will see that the median is not in the center of the box: it is near Q3. So the left tail, I mean the distance between the median and the smallest value, is longer than the right tail. In this case, it is called left-skewed, or skewed to the left, or negative skewness. So if the data is not symmetric, it might be left-skewed, meaning the left tail is longer than the right tail. On the other hand, if the median is located near Q1, it means the right tail is longer than the left tail, and it is called positively skewed or right-skewed. So for a symmetric distribution, the median is in the middle; for a skewed distribution, either the median is close to Q3 and the distribution is skewed to the left, or the median is close to Q1 and the distribution is right-skewed, or has positive skewness. That's how we can tell the center, the spread, and the shape by using the box plot: the center is the value in the middle, Q2 or the median; the spread is the distance between Q1 and Q3, so Q3 minus Q1 gives the IQR; and finally, you can tell something about the shape of the distribution just by looking at the box plot.

Let's look at this example: suppose we have a small data set, and let's see how we can construct the box plot. In order to construct a box plot, you first have to compute the minimum, or smallest value, and the largest value; besides that, you have to compute the first and third quartiles and also Q2. For this simple example, Q1 is 2, Q3 is 5, and the median is 3; the smallest value is 0 and the largest is 27. Now be careful here: 27 seems to be an outlier. But so far, we have not explained how to decide whether a data value is considered an outlier; at this point, 27 is just a suspected outlier, it seems to be one.
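As an illustration only (not from the lecture), here is a minimal matplotlib sketch of such a box plot. The data values are hypothetical, chosen so that their five-number summary matches this example (min 0, Q1 2, median 3, Q3 5, max 27), since the actual values on the slide are not listed here.

```python
import matplotlib.pyplot as plt

# Hypothetical values whose five-number summary matches the example:
# min 0, Q1 2, median 3, Q3 5, max 27
data = [0, 2, 2, 2, 3, 3, 4, 5, 5, 9, 27]

# Whiskers extend to the last point within 1.5 * IQR of the box by default,
# so 27 is drawn separately as a suspected outlier.
plt.boxplot(data, vert=False)
plt.title("Box plot of a right-skewed data set")
plt.show()
```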
Sometimes you are 95% sure that a point is an outlier, but you cannot say so, because you need a specific rule that decides whether that point is an outlier or not. But at least it makes sense that this point may be considered an outlier. Let's first see how to construct the box plot for this data. Again, as we mentioned, the minimum value is zero, the maximum is 27, Q1 is 2, the median is 3, and Q3 is 5. Now, does the vertical line for the median lie in the center of the box? Not exactly. But if you look at this vertical line and its location with respect to the minimum and the maximum, you will see that the right tail is much longer than the left tail, because it runs from 3 up to 27, while the other one runs from 0 to 3; there is a big distance between 3 and 27 compared to the distance from 0 to 3. So this data seems quite skewed, not at all symmetric, because of that value. So perhaps by using the box plot you can already tell that this point is suspected to be an outlier: it produces a very long right tail.

So let's see how we can determine whether a point is an outlier or not. We can use the box plot limits to decide this. The rule is that a value is considered an outlier if it is more than 1.5 times the interquartile range below Q1 or above Q3. Let's explain the meaning of this sentence. First, let's compute something called the lower limit. The lower limit is not the minimum; it is Q1 minus 1.5 times the IQR, that is, 1.5 times the IQR below Q1. The upper limit is Q3 plus 1.5 times the IQR. So we compute the lower and upper limits by using these rules: 1.5 times the IQR below Q1, and 1.5 times the IQR above Q3. Now, is a given value smaller than the lower limit or greater than the upper limit?
Any value smaller than the lower limit or greater than the upper limit is considered to be an outlier. This is the rule for telling whether a point, or data value, is an outlier or not: just compute the lower limit and the upper limit. So the lower limit is Q1 minus 1.5 times the IQR, and the upper limit is Q3 plus 1.5 times the IQR; the 1.5 is a constant.

Now let's go back to the previous example. What was the value of Q1? Q1 was 2, and Q3 is 5. To determine an outlier you do not need the value of the median. Now, Q3 is 5 and Q1 is 2, so the IQR is 3; that's the value of the IQR. The lower limit is 2 minus 1.5 times 3, which is minus 2.5. The upper limit is 5 plus 1.5 times 3, which gives 9.5. Now, any point, any data value, that falls below minus 2.5, I mean smaller than minus 2.5, or greater than 9.5, is an outlier. If you look at the data we have, the values run from 0 up to 9, so none of these is considered an outlier. But what about 27? 27 is greater than, much bigger than actually, 9.5. So for that data, 27 is an outlier. So this is the way we can identify an outlier for the sample.

Another method: the z-score is another way to determine whether a point is an outlier or not. So, so far we have two rules: one using the quartiles, and the other, as we mentioned last time, using z-scores. For z-scores, if you remember, any value whose z-score lies below minus three or above three is considered to be an outlier. That's another way to figure out whether a data value is an outlier. You can apply the two rules either to a sample or to a population. If you have the entire data set, you can also determine the outliers for the entire data set, even if that data is the population. But most of the time, we select a sample, which is a subset or a portion of that population. Questions?

So, locating outliers again: an outlier is any value that is above the upper limit or below the lower limit, and we can also use the z-score to determine whether a point is an outlier or not.
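As a final aside (not from the lecture), a small Python sketch of the quartile-based check; the helper name and the short data list are made up for illustration, using the quartiles from this example.

```python
def quartile_limits(q1, q3):
    """Lower and upper outlier limits: 1.5 * IQR below Q1 and above Q3."""
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Example from the lecture: Q1 = 2, Q3 = 5, so IQR = 3
lower, upper = quartile_limits(2, 5)
print(lower, upper)                                    # -2.5 9.5

data = [0, 2, 3, 4, 5, 9, 27]                          # hypothetical values
print([x for x in data if x < lower or x > upper])     # [27]
```

The z-score rule works the same way in code: flag any value whose z-score, (x minus the mean) divided by the standard deviation, is below minus 3 or above 3.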
Next time, Inshallah, we will go over the covariance and the relationship between two variables, and I will give some practice problems for Chapter 3.