Then Google published a book called Site Reliability Engineering. It shed light on how Google had grown by gathering data from an exponential explosion in the number of websites in the world. Back when Yahoo! launched, there were about three thousand sites. Today, that number runs to ten figures. How did Google manage the systems that collected all that data? They realized that absolute reliability was impossible and abandoned the goal of 100% uptime. Instead, they asked themselves two simple questions:
1. How much more money would we make with an extra nine?
2. Does adding that extra nine cost more than we would make?
Because Google makes its money primarily from advertising, it could calculate how much it lost during a web outage down to a fraction of a cent. That, in turn, let the company work out its pragmatic limit. For the sake of simplicity, let’s say Google makes $10 million a month from selling ads. That’s roughly $240 a minute. Let’s further say they have 99.9% reliability, meaning their website is down for only about 43 minutes a month. At $240 a minute, that means they lose $10,320 a month.
A new manager takes over and says that’s not good enough. He needs to brag to his golfing buddies that his systems have 99.99% reliability. That would mean only 4.3 minutes of downtime a month. So, how much would that extra nine cost? If going from 99.9% to 99.99% uptime cost only $10,000 a month, that’s less than the $10,320 they were already losing, so they would net an extra $320 a month. In this case, Google would probably invest the time and resources.
However, if the cost of that extra nine were $100,000 a month, they would come out almost $90,000 a month behind. In this scenario, Google would not pursue 99.99% reliability. It’s simply not economical to make their website more reliable than it already is.
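If it helps to see that arithmetic laid out, here is a minimal sketch in Python. The dollar figures are only the illustrative ones from this example, and the sketch keeps the same simplification of weighing the cost of an extra nine against the revenue currently lost to downtime. The function names and the 30-day month are assumptions made for illustration, and the 43.2 minutes it computes rounds to the chapter’s 43.

```python
# A minimal sketch of the "extra nine" arithmetic above.
# All figures are the chapter's illustrative numbers, not real Google data.

MINUTES_PER_MONTH = 30 * 24 * 60     # ~43,200 minutes in a 30-day month
REVENUE_PER_MINUTE = 240             # "roughly $240 a minute"

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime per month at a given availability (e.g. 0.999)."""
    return (1 - availability) * MINUTES_PER_MONTH

def lost_revenue(availability: float) -> float:
    """Ad revenue lost to downtime per month."""
    return downtime_minutes(availability) * REVENUE_PER_MINUTE

current_loss = lost_revenue(0.999)   # ~43 minutes down, roughly $10,000 lost

# Simplified decision rule: is the extra nine cheaper than what we lose now?
for extra_nine_cost in (10_000, 100_000):
    net = current_loss - extra_nine_cost
    verdict = "probably worth it" if net > 0 else "not economical"
    print(f"Extra nine at ${extra_nine_cost:,}/month: net {net:+,.0f} -> {verdict}")
```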
Peirce realized that creating the perfect pendulum was a never-ending quest. Google realized that creating a perfect service was a never-ending goal. Shewhart realized that creating a perfect manufacturing process was a never-ending pursuit.
You have to find the pragmatic limits.
A Show-Me State of Mind
Let’s go back to C. I. Lewis’s two types of knowledge: a priori (drawing your conclusions beforehand) and a posteriori (drawing conclusions after the fact). Pragmatists dismiss a priori thinkers. In their view, you can’t know anything without starting from some evidence.
This ties neatly into non-determinism: you can’t be certain the apple will hit the ground until you drop it and watch it land. Pragmatists wouldn’t assume Schrödinger’s cat was either dead or alive; they’d simply open the box. Let’s look at another instance of a priori thinking: a typical survey in my corner of IT, the DevOps community. An annual survey might send out a multiple-choice question such as, “On average, how often do you deliver software?”
The choices might be:
• once a year
• once a month
• once a week
• once a day
• once an hour
Before getting the answers back, the analysts behind the survey have already concluded that organizations delivering once an hour or a day should be categorized as “high performers.” Those delivering once a week or a month are “medium performers.” Those delivering once a year are “low performers.”
The analysts draw their conclusions before even collecting the data. That’s a priori thinking.
However, a “low-performing” team might deliver only once a year because that’s when their space probe is within line of sight. That doesn’t make them low performing. You just can’t assume you know the answers ahead of time.
A pragmatist, on the other hand, might ask the same question: “On average, how often do you deliver software?” But instead of predetermining the responses, they might engage in a question-and-answer exchange:
• Respondent 1: “Every day, but it’s hard to do on Mondays and Fridays.”
• Analyst: “Why is it hard on those days?”
• Respondent 2: “Whenever it’s ready to be shipped.”
• Analyst: “Do you need approval to deliver it?”
• Respondent 3: “Only when my manager tells me I can.”
• Analyst: “And when is that?”
The pragmatic analyst would collect the data and then draw conclusions about which software teams were high, medium, or low performers. That is, a posteriori, or after the fact.
When Shewhart introduced Deming to the philosophy of pragmatism in 1927, the young protégé was amply prepared to receive it. Non-determinism had shown Ed that what “everyone” knew and accepted was either more nuanced or simply wrong. The only reliable source of knowledge came from empirical evidence. Pragmatism simply made sense. Ever thereafter, what Ed taught never came from tradition or hearsay but from hard facts and careful study.
The Shewhart Cycle & the Deming Wheel
Walter Shewhart used the philosophy of pragmatism to completely rethink manufacturing. Most know the result of this today as the PDSA cycle (plan, do, study, act), what Ed called the Shewhart cycle. It has become the template to improve virtually every type of system or process:
1. Plan: Gather evidence to create a hypothesis. What needs to change?
2. Do: Make the change.
3. Study: Review what happened. Is the process better or worse? Why?
4. Act: Decide where to go from here. Revert to before? Iterate further?
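For readers who think in code, the same loop can be written as a minimal, hypothetical sketch. Here, measure, apply_change, and revert_change are stand-ins for however a real process gathers evidence, makes a change, and undoes one that didn’t help, and a lower measurement is assumed to be better (fewer defects, a shorter commute).

```python
# A hypothetical sketch of PDSA as a loop. Assume a lower measurement is
# better (e.g. defects per batch, or minutes late to work).

def pdsa(measure, changes, apply_change, revert_change):
    """Run one pass of plan-do-study-act over a list of proposed changes."""
    baseline = measure()              # gather evidence before changing anything
    for change in changes:            # Plan: each change is a hypothesis
        apply_change(change)          # Do: make the change
        result = measure()            # Study: is the process better or worse?
        if result < baseline:         # Act: keep what helped...
            baseline = result
        else:
            revert_change(change)     # ...and revert what didn't
    return baseline                   # then plan the next round of changes
```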
By continually gathering data, you can continually improve your process. My software developer, Bob, didn’t have this mentality. If he had, he would have experimented with his process of getting to work. Perhaps he could have planned to catch an earlier train. He could have tried taking his breakfast to go instead of buying it at the train station. Maybe by laying his clothes out the night before, he’d have been able to leave his apartment sooner. He could have kept experimenting with all these variables to see if they helped shave some time off his commute and get him to work closer to 8:00 a.m. instead of 8:15 a.m.
Now, think of this on the scale of mass manufacturing. There were one hundred thousand different parts and components in a Hawthorne Works telephone assemblage. Shewhart formalized a way to continually make incremental changes. Perhaps the managers found that replacing a machine’s rubber gasket every month instead of every other month reduced the number of no-gos produced. Or maybe a certain component made on a certain line drifted slightly out of alignment over time and needed to be checked more frequently than on other lines.
Think about the shift in mentality. Before PDSA, the managers at Hawthorne Works would look at the huge pile of scrap at the end of the day and say, “It is what it is. Sure wish we could find better workers.”
With PDSA, managers had a formal tool to help them track defects, make a change, and see whether product quality improved. They had a way to make Hawthorne Works better and better over time. They no longer made changes based on gut instinct; they had actual numbers to guide them.
But Shewhart thought only in terms of manufacturing.
It would be Deming who saw the bigger picture.