willtheorangeguy committed on
Commit 8f54409 · verified · 1 Parent(s): 955a455

add all 2021 transcripts

Files changed (44)
  1. A monorepo of serverless microservices_transcript.txt +373 -0
  2. A universal deployment engine_transcript.txt +263 -0
  3. Assemble all your infrastructure_transcript.txt +249 -0
  4. Bare metal meets Kubernetes_transcript.txt +363 -0
  5. Cloud Native fundamentals_transcript.txt +241 -0
  6. Cloud-native chaos engineering_transcript.txt +215 -0
  7. Connecting your daily work to intent & vision_transcript.txt +317 -0
  8. Crossing the platform gap_transcript.txt +379 -0
  9. Docs are not optional_transcript.txt +263 -0
  10. Elixir observability using PromEx_transcript.txt +253 -0
  11. Find the infrastructure advantage_transcript.txt +375 -0
  12. Gerhard at KubeCon NA 2021 Part 1_transcript.txt +587 -0
  13. Gerhard at KubeCon NA 2021 Part 2_transcript.txt +561 -0
  14. Gerhard at KubeCon NA 2021: Part 1_transcript.txt +0 -0
  15. Gerhard at KubeCon NA 2021: Part 2_transcript.txt +0 -0
  16. Grafana’s "Big Tent" idea_transcript.txt +741 -0
  17. Grafana’s Big Tent idea_transcript.txt +407 -0
  18. Honeycomb's secret to high-performing teams_transcript.txt +381 -0
  19. Introducing Ship It!_transcript.txt +253 -0
  20. Is Kubernetes a platform_transcript.txt +309 -0
  21. Is Kubernetes a platform?_transcript.txt +1173 -0
  22. It's crazy and impossible_transcript.txt +213 -0
  23. Kaizen! Are we holding it wrong_transcript.txt +809 -0
  24. Kaizen! Are we holding it wrong?_transcript.txt +0 -0
  25. Kaizen! Five incidents later_transcript.txt +523 -0
  26. Kaizen! The day half the internet went down_transcript.txt +565 -0
  27. Learning from incidents_transcript.txt +247 -0
  28. Let's Ship It!_transcript.txt +21 -0
  29. Money flows rule everything_transcript.txt +271 -0
  30. OODA for operational excellence_transcript.txt +209 -0
  31. OpenTelemetry in your CICD_transcript.txt +313 -0
  32. OpenTelemetry in your CI⧸CD_transcript.txt +0 -0
  33. Optimize for smoothness not speed_transcript.txt +201 -0
  34. Real-world implications of shipping many times a day_transcript.txt +285 -0
  35. Shipping KubeCon EU 2021_transcript.txt +363 -0
  36. The foundations of Continuous Delivery_transcript.txt +315 -0
  37. What does good DevOps look like_transcript.txt +257 -0
  38. What does good DevOps look like?_transcript.txt +855 -0
  39. What is good release engineering_transcript.txt +265 -0
  40. What is good release engineering?_transcript.txt +725 -0
  41. Why Kubernetes_transcript.txt +323 -0
  42. Why Kubernetes?_transcript.txt +600 -0
  43. 🎄 Merry Shipmas 🎁_transcript.txt +683 -0
  44. 🎄 Merry Shipmas 🎁_transcript.txt +0 -0
A monorepo of serverless microservices_transcript.txt ADDED
@@ -0,0 +1,373 @@
1
+ **Gerhard Lazu:** So in 2019 we spent a bit of time together. I found out about this new startup which is doing some interesting things with serverless, and we worked together for some number of weeks. It was basically a day in a week for some number of weeks... And that was a great experience. I really enjoyed myself, I met these wonderful people - Alan, Saul, Wycliffe and Damien, and it was a great experience overall. And now, Alan, Saul and Wycliffe are joining me today to talk about what has happened since, because we've been out of touch since 2019. So between 2019 and today, what happened?
2
+
3
+ **Alan Cooney:** Good question. Obviously, the big thing is there's been a pandemic...
4
+
5
+ **Gerhard Lazu:** Right.
6
+
7
+ **Alan Cooney:** ...and essentially, for your listeners, Skyhook is a travel website, so a website where you can book adventure holidays. So obviously, this has impacted us quite hard, and it's been a challenge to get through that. But at the same time, we've taken this big opportunity to really rethink how we're doing things and really improve our product, so that we can come out of this - and are starting to come out of this now - with a much, much better product for customers.
8
+
9
+ **Gerhard Lazu:** \[04:16\] So Skyhook Adventure - what does it do as a company?
10
+
11
+ **Alan Cooney:** Essentially, at its heart, Skyhook is basically a website where you can book adventure trips, like hiking to Everest base camp... Really unique trips. Or canoeing all the way across Scotland. And when you do that, you're actually booking with a local guide. Not a big company, typically a one-man operation. We find that gives you a really, really unique, authentic experience.
12
+
13
+ So that's kind of from the guest side or the customer side, in a way... But we're also on the other side, we're a business product as well; we're a place where the guides can manage their trips and do all this kind of admin that you usually do with your trip, that can take a few hours a day, and just automate most of that away.
14
+
15
+ **Gerhard Lazu:** So from that, what you've just said, how did we end up with serverless? Because that's what's happening in the backend, right? What's the link between these adventures -- do you have to be really adventurous to choose serverless?
16
+
17
+ **Alan Cooney:** That's definitely true. It's probably one for Saul. Saul is the one who really introduced us first to serverless.
18
+
19
+ **Saul Cullen:** Yeah, it's a really good question, Gerhard. I think it probably goes back to when Alan got me along... So I initially got invited along to Skyhook to help out on the payments. Payments was causing quite a challenge. Payments in the travel world is quite a complex area. It's not as straightforward as your average e-commerce business. There's lots of rules and regulations that need to be complied with, and it's quite a specialized field of payments. And I'd got a bit of experience in the payments world before that, so Alan got in touch and said "Hey, can you come along and see what we could perhaps do?" So I spent a bit of time going through that with Alan, and we did come up with a solution at the end. It's still in place to this day, it's not all perfect, but it gets the job done for us for now.
20
+
21
+ What I said to Alan - we were using a Drupal system at the time... So this marketplace, the Skyhook marketplace was based on Drupal. And I said "Hey Alan, have you thought about using some different tech to do this? You know, there's lots of things out there, there's containers, and this new thing called serverless..." And I think where I was coming at it from was I was envisaging this travel business as -- I was thinking of it almost like a travel magazine, where it was made up of multiple parts.
22
+
23
+ One part would be the trips that you browse and you look at, and I could sort of envisage that as almost like a static website, with content not changing particularly much over time. And then all these other constituent parts - the payments, the booking side, accounting, and all of those kinds of aspects as being very discrete, specific tasks that nicely suited this serverless paradigm, where you would just create these Lambda functions or whatever it may be that you choose, and you set and forget, and you allow all of these discrete tasks to be handled very specifically.
24
+
25
+ That's how we then obviously joined the team slightly later on, and we went down this track of diving into the serverless world, and created the first iteration of the new Skyhook platform, which was a serverless monolith really of sorts, based on AWS... And we're using RDS as the database. We've then gone on from that journey from there, really.
26
+
27
+ **Gerhard Lazu:** \[07:54\] Right. So even though you had all these Lambda functions - that's what serverless means to you, that's actually what it translates to, Lambda functions running on AWS - they were all backed by the same RDS database. Is that right?
28
+
29
+ **Saul Cullen:** Exactly, yeah. So we were using AWS' Aurora database initially. It took us quite a while to design it, and you had to zoom out to see the whole thing, which was an interesting experience the further we got... But yeah, that's exactly right, that's how we started.
30
+
31
+ **Gerhard Lazu:** So who can tell us a bit more about that, that zooming out part? ...how that happened, and what did you discover as you started zooming out.
32
+
33
+ **Alan Cooney:** It's funny, because Saul actually had at one point printed out a version of our database schema, and the thing was huge...
34
+
35
+ **Gerhard Lazu:** Right.
36
+
37
+ **Saul Cullen:** Yeah, I'd obviously worked with SQL databases quite a lot in the past, and that was where my experience lay at the time. There were conversations when setting out, "What route do we go down here? Do we start looking at some of the more serverless-centric databases here, some of the NoSQL databases that are available that fit the serverless paradigm nicely? Or do we stick with what we know?" And at the time, the decision was to stick with what we knew.
38
+
39
+ After much consideration, you're learning so much at an early stage of going into a new, bleeding edge technology of something that's new to you, that there's this constant trade-off between picking new tools and actually getting stuff done and shipping it. And it seemed at the time that they were going down the route of choosing Dynamo, which would have been the obvious choice, given that we had elected to use the AWS platform... It seemed like there was quite a steep learning curve to that, and a lot of room for error, and we could have got ourselves into a bit of a hole that was tricky to get out of... So we didn't go for that initially, and we dove head-first into Amazon Aurora, which was quite young at the time, and evolving quite rapidly, but still quite young and missing some of the basic features and functionality that actually would have been quite nice to have.
40
+
41
+ So we continued down that route for some time. The database grew and the zooming out got further and further back, and we started to run into challenges - database migrations, and updating, and changing, and the schema lock-in that we had, all started to... You know, it was something that worked for us and it worked for a long time, but we did start to find that there were challenges there, and we then started to look at other opportunities out there and think "Hey, should we be using something that's more purpose-built for this?"
42
+
43
+ **Gerhard Lazu:** And did you?
44
+
45
+ **Saul Cullen:** Yeah, we did, in the end.
46
+
47
+ **Gerhard Lazu:** This was the beginning, right? Aurora, all those challenges, SQL-based single database, migrations were challenging, a couple of other things... So this was like, what - two years ago? Three years ago?
48
+
49
+ **Alan Cooney:** Two years ago.
50
+
51
+ **Gerhard Lazu:** Two years ago. So what else did they look like from a database perspective?
52
+
53
+ **Alan Cooney:** It's super-interesting... So we've actually moved to really splitting up our services, like Saul mentioned. We have a service for managing and displaying trips, and we have a completely separate one for managing bookings, another one for accounting... And actually, each one has its own database, which is almost entirely DynamoDB. For listeners who are not aware of it, it's very similar to MongoDB, or other document-based databases. At the same time, we've moved from having a REST API, which all the serverless functions interacted with, to using GraphQL...
54
+
55
+ **Gerhard Lazu:** Interesting.
56
+
57
+ **Alan Cooney:** ...AppSync in particular, which is sort of managed GraphQL as a service. Both the API and the data layer have changed, but a lot of the underlying logic is pretty similar and is often just kept the same, just with more testing, and things like that.
58
+
59
+ **Gerhard Lazu:** So what is better about the new setup?
60
+
61
+ **Alan Cooney:** \[11:56\] I can give you the business side, and it will be interesting to hear as well on the technical side. From a business side, it's way more reliable. And you know, you have these problems as a startup, but to give you the example of a host adding their trip - so the guest or customer experience of booking a trip has always been quite smooth... But in terms of adding trips and editing trips, it's been very clunky and very bug-prone, so we'd seen multiple support tickets every day. If you gave a demo to a host, there was a good chance it would break, which is obviously quite embarrassing... And that's basically gone away.
62
+
63
+ **Gerhard Lazu:** So the system is a lot more reliable today than it was two years ago.
64
+
65
+ **Alan Cooney:** Yes, and that's transposed into other metrics, like many more host sign-ups, and many more trips on the website, just from having this much more robust and easier to use set of tools.
66
+
67
+ **Gerhard Lazu:** What about developing the new setup? What is it like writing code for the new setup versus the old setup?
68
+
69
+ **Wycliffe Maina:** Yeah, \[unintelligible 00:12:50.02\] It's always easy - or easier - to keep focus, easier to deploy, and easier to know when you have that separation of concerns. You're able to know when you're updating something or not updating something, that is your code is very specific.
70
+
71
+ Another thing that \[unintelligible 00:13:14.17\] is tests, which - we have increased the number of tests we have. We have a lot of unit tests, we have always been having \[unintelligible 00:13:24.02\] go for the 100% test coverage... And we are looking to sort of like bring in some integration testing there, and some \[unintelligible 00:13:33.00\]
72
+
73
+ But all in all, the scope of this task we are doing actually is, for instance, over the last few months, we have been implementing a few services, and because you're working on a very specific area and it's a Lambda function, it's a little bit easier to work through it, be able to test it, and \[unintelligible 00:13:50.27\] service in general.
74
+
75
+ **Gerhard Lazu:** That makes a lot of sense... Rather than changing a part of this big whole, now you have discrete units where you can contain the blast radius, so to speak, so if there's a failure or a problem, it's limited to that specific service.
76
+
77
+ **Wycliffe Maina:** Yeah.
78
+
79
+ **Gerhard Lazu:** So did this change the deploy times, basically how quickly the code goes into production?
80
+
81
+ **Wycliffe Maina:** Yeah, this is actually the effect of that. The deploy times are like mostly three minutes. The tests are also--
82
+
83
+ **Gerhard Lazu:** Three minutes?
84
+
85
+ **Wycliffe Maina:** Yeah.
86
+
87
+ **Gerhard Lazu:** Wow...
88
+
89
+ **Wycliffe Maina:** Tests are also faster... And it's always easier to get feedback when you are doing deployment. You can even say like -- I would even have like a local \[unintelligible 00:14:29.15\] I would just push it to the CD and see \[unintelligible 00:14:33.03\] That's the one I'm using today, because having dealt with the old system, because I joined somewhere in the middle, it was always difficult to get a quick feedback cycle. Also, it took very long to deploy the whole service, because you'd have to deploy everything together.
90
+
91
+ **Gerhard Lazu:** How long did it use to take?
92
+
93
+ **Wycliffe Maina:** About 20 minutes? I'm not sure...
94
+
95
+ **Gerhard Lazu:** 20 minutes? Wow. And you think that's too long? Some people would say two hours is too long... So it's really interesting that you think 20 minutes is too long, which again, for some would be perfectly okay. So 20 minutes was too long, and now three minutes is just about right, would you say... Right?
96
+
97
+ **Wycliffe Maina:** I personally would like it to be a little faster... \[laughs\]
98
+
99
+ **Gerhard Lazu:** Wow. Okay...
100
+
101
+ **Wycliffe Maina:** Yeah. The faster I can see results of what I'm working on the better.
102
+
103
+ **Gerhard Lazu:** Speed is addictive, right? And first of all, as you mentioned, very important - the quicker you can understand your mistake in production, the quicker you can fix it... And if you can do it so quickly that people don't even notice it, isn't that the best?
104
+
105
+ **Wycliffe Maina:** Yeah, that's even better.
106
+
107
+ **Gerhard Lazu:** Amazing. So back to you, Saul - from an architecture perspective, how many services do you have? Do they interact amongst themselves, do they share anything? What does that look like?
108
+
109
+ **Saul Cullen:** That's a really good question. Gosh, I don't know what they were at the last count. We seem to add about one a week as we move over... And as Wycliffe says, we've been doing a lot of this migration. So we keep the services very specific to tasks - we have reviews-related services that handle everything to do with customer reviews, and bookings-related services... So probably you could count our number of services on your hands at the moment. But we anticipate that growing over time, and this new architecture allows us to very quickly add new services, test them... And like you were just saying, you get to that point of failure and find where your failure is much more quickly, and then you can iterate, correct, and get out what the customer actually wants. And I think that's actually an interesting area.
110
+
111
+ \[16:36\] So these feedback loops are something that -- when you came along, Gerhard, I remember sitting down with you, and you said "We've got to get this DevOps cycle going, and get these feedback loops going really rapidly, so that you can learn from what you put out there and feed that back into what you're working on." That sticks in the back of my mind all the time really, and we're constantly thinking "How can we get these feedback loops going faster and faster?" And this new microservices-based architecture really has helped us with that, and we're shipping at much, much higher velocity than we were previously.
112
+
113
+ Another thing we're starting to try as well is including things like feature flags. Instead of pushing out large chunks of code, we'll push out multiple new features every day and just flag them off, showing them to specific sets of customers, or to ourselves internally, and we'll test those. And all of these sorts of architectural choices actually do have a very direct impact on the customer, on how rapidly features reach them, on how rapidly we can improve those features, learn what the customer wants... So I think it's definitely something that I've put a lot of thought into, and as a team we've put a lot of thought into that as well.
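
As a rough illustration of the flagging approach Saul describes - shipping features dark and only exposing them to specific sets of customers or to the team - here is a minimal TypeScript sketch. The flag name, user shape, and allow-list values are assumptions for illustration, not Skyhook's actual implementation.

```typescript
// Minimal feature-flag gate: a feature ships "dark" and is only shown to
// allow-listed users (e.g. the internal team) until it is enabled for everyone.
interface User {
  id: string;
  email: string;
}

interface FlagConfig {
  enabledForAll: boolean;
  allowList: string[]; // emails that can see the feature while it is dark
}

// In practice this could live in config, a database, or a flag service.
const flags: Record<string, FlagConfig> = {
  "self-service-cancellation": {
    enabledForAll: false,
    allowList: ["ops@example.com"], // hypothetical internal tester
  },
};

export function isFeatureEnabled(flagName: string, user: User): boolean {
  const flag = flags[flagName];
  if (!flag) return false;
  return flag.enabledForAll || flag.allowList.includes(user.email);
}

// Usage: only render the new flow when the flag allows it for this user, e.g.
// if (isFeatureEnabled("self-service-cancellation", currentUser)) { /* new UI */ }
```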
114
+
115
+ **Break**: \[17:52\]
116
+
117
+ **Gerhard Lazu:** I would like to go back, Saul, to how those microservices talk amongst themselves. First of all, my understanding is that those microservices are just collections of serverless functions that get deployed as one unit. So it's just a grouping of serverless functions. They all have their own data store, which is DynamoDB... And what I'm wondering is how do they talk amongst themselves? Or do they even talk among -- I mean, is there any need to communicate between services?
118
+
119
+ **Alan Cooney:** I wish there wasn't... \[laughs\]
120
+
121
+ **Gerhard Lazu:** Wouldn't it be perfect if there was no network latency, nothing failed? Yeah, sure...
122
+
123
+ **Saul Cullen:** Absolutely... I mean, this is something that's an evolving area for us. There's lots of solutions that people tout out there... You know, people using gRPC to communicate between microservices... We're using AWS AppSync, and what we have is we have a separate API service. And that API service allows us to expose the AppSync service to each service. So everyone can use the API to query for whatever data it was.
124
+
125
+ \[20:07\] We're still at the early stages of running with this and using it, but at the moment it is working really very well for us, for the most part. Don't know if Alan wants to add anything to that, because it's an area where Alan has really pioneered a lot of that...
126
+
127
+ **Alan Cooney:** Yeah, so that's for synchronous communications specifically, which is actually quite a small part of total communication between services... And it's quite an unusual setup actually, in that the services are going back through AppSync -- because often you have a mutation to create a booking, and then the booking service will go back to AppSync, basically to the API, and say "How much availability does this particular date have? i.e. can we make the booking, or is it already fully booked?" But the majority of communication happens asynchronously via an AWS Event Bridge, which is -- we ended up trying a lot of different services for this, but AWS Event Bridge has gained loads of traction recently in serverless communities, because... It's great. That's the short answer.
128
+
129
+ For example, the booking service, when you make a booking, that will put some events onto this sort of central event bus, and then the trip service will listen to that and say "There's a new booking. Let's reduce the number of spaces." And all of that happens in a few seconds, but it's asynchronous, so we don't have to worry about any problems with the services communicating, \[unintelligible 00:21:34.26\] and all the rest.
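
As a minimal sketch of the flow Alan describes - the booking service putting a booking event onto a central EventBridge bus and the trip service consuming it to reduce availability - here is some TypeScript using the AWS SDK for JavaScript v3. The bus name, event source, detail type, and booking fields are illustrative assumptions, not Skyhook's actual values.

```typescript
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";
import type { EventBridgeEvent } from "aws-lambda";

const eventBridge = new EventBridgeClient({});

interface Booking {
  bookingId: string;
  tripId: string;
  spaces: number;
}

// Booking service: publish the booking onto the central event bus.
export async function publishBookingCreated(booking: Booking): Promise<void> {
  await eventBridge.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: "central-event-bus", // hypothetical bus name
          Source: "bookings",                // hypothetical source
          DetailType: "BookingCreated",
          Detail: JSON.stringify(booking),   // same shape as the API's booking object
        },
      ],
    })
  );
}

// Trip service: a Lambda subscribed to BookingCreated events reduces availability.
export async function onBookingCreated(
  event: EventBridgeEvent<"BookingCreated", Booking>
): Promise<void> {
  const { tripId, spaces } = event.detail;
  // Reduce the remaining spaces for tripId by `spaces` (persistence not shown).
  console.log(`Reducing availability for ${tripId} by ${spaces}`);
}
```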
130
+
131
+ **Gerhard Lazu:** Okay. What sorts of messages do the services put on the Event Bridge?
132
+
133
+ **Alan Cooney:** A lot...
134
+
135
+ **Gerhard Lazu:** But are they JSON, do you have a specific protocol, do you have any versioning? How do they know how to read those events? Do you have any schemas? How does that look like?
136
+
137
+ **Alan Cooney:** Sure. It actually uses the same schema as our API, which makes things very simple. So the booking object on our API is the same as the booking object, as the JSON object that's put on the event bus... Alongside a standard name, so we can create booking updates, something like that.
138
+
139
+ **Gerhard Lazu:** Okay. So event bus, not event bridge.
140
+
141
+ **Alan Cooney:** Event bridge, which is an event bus, sorry.
142
+
143
+ **Gerhard Lazu:** I see, okay. That makes sense. So in terms of number of transactions, volume, latency, anything like that - can you give us some numbers? What looks like a good latency? Do you have such a thing? Do you have any SLOs, any SLIs? Anything around how well services interact?
144
+
145
+ **Alan Cooney:** Definitely I think we can get better at this, is the short answer... Most queries respond within 10 milliseconds. From a web app perspective, that's very fast... And that hasn't caused us any issues at the moment.
146
+
147
+ **Gerhard Lazu:** And that's internal, right? So the services, when they talk amongst themselves, they can expect asynchronous responses to come back within 10 milliseconds...?
148
+
149
+ **Alan Cooney:** Yeah.
150
+
151
+ **Gerhard Lazu:** Okay. And what about the public world? Do you have CloudFront? What's in front of the API?
152
+
153
+ **Alan Cooney:** This is probably one for Wycliffe, but basically Next.js sits on the front of it... But maybe Wycliffe you can explain a bit about that.
154
+
155
+ **Wycliffe Maina:** For the frontend we're using Next.js, which is based on React for those who don't know about that... So most of our important pages, that is the trips pages, the homepage with your trips, hosts, and so on, are SSR-ed. So Next.js goes to the API, fetches the data, and then sends in a fully SSR-ed static page to the frontend. \[unintelligible 00:23:36.21\] so that the page gets the data and can behave more like a \[unintelligible 00:23:43.01\] application rather than a static application, which is normally what you don't want. You want to be able to provide a rich user experience for the user.
156
+
157
+ So essentially, what this involves is the fetching between Next.js \[unintelligible 00:23:58.02\] The application is hosted on Vercel, which is the parent company of Next.js, or the company that builds Next.js and open sources it.
158
+
159
+ \[24:10\] Essentially, that goes directly to AppSync. At the moment we don't \[unintelligible 00:24:13.05\] but that might be an option for the future. So AppSync directly goes to the individual services to get the requested data, and then it does that through Next.js, which sort of caches some pages that don't change that frequently, so that the users get some pages much faster than it would involve getting them directly through AppSync \[unintelligible 00:24:34.20\]
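
As a rough sketch of the setup Wycliffe describes - a Next.js page rendered on the server by fetching data from the AppSync GraphQL API - the page below uses `getServerSideProps`. The environment variable names, the query, and the API-key auth are assumptions for illustration.

```typescript
// pages/trips.tsx (illustrative path)
import type { GetServerSideProps } from "next";

interface Trip {
  id: string;
  name: string;
}

export const getServerSideProps: GetServerSideProps<{ trips: Trip[] }> = async () => {
  // Ask AppSync for the trips; the query and env vars are hypothetical.
  const response = await fetch(process.env.APPSYNC_URL as string, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": process.env.APPSYNC_API_KEY as string,
    },
    body: JSON.stringify({ query: "{ trips { id name } }" }),
  });
  const { data } = await response.json();
  return { props: { trips: data.trips } };
};

// The page receives already-fetched data, so the HTML arrives fully rendered.
export default function TripsPage({ trips }: { trips: Trip[] }) {
  return (
    <ul>
      {trips.map((trip) => (
        <li key={trip.id}>{trip.name}</li>
      ))}
    </ul>
  );
}
```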
160
+
161
+ **Gerhard Lazu:** That makes sense. So Next.js - I imagine that is a JavaScript framework, right? Based on React. That's my understanding. So how do you serve that to users? So if a user goes, for example, to SkyhookAdventure.com, I imagine they load this Next.js-based response...
162
+
163
+ **Wycliffe Maina:** Yeah.
164
+
165
+ **Gerhard Lazu:** Where does that get served from?
166
+
167
+ **Wycliffe Maina:** That's based on Vercel. Vercel.com I think is the website... Which basically is like Netlify. It's built on AWS as well, and uses Lambdas under the hood. Basically, each page we have is a Lambda function, but Next.js sort of abstracts that away from us.
168
+
169
+ **Gerhard Lazu:** So I still in my head am not understanding this... So the request comes in, the DNS... What does the DNS for SkyhookAdventure.com point to?
170
+
171
+ **Wycliffe Maina:** It points to Vercel servers.
172
+
173
+ **Gerhard Lazu:** Vercel, I've never heard of them. We'll need to put it in the show notes, because I've never heard of them. Okay... And they are similar to Netlify, right?
174
+
175
+ **Wycliffe Maina:** They are very similar to Netlify. In fact, they have a lot of similarities. Netlify is the more mature of the services, but Vercel has a zero config option, which means you give them an application, whether it's Next.js or any other framework, and you can get up and running without having to configure anything.
176
+
177
+ **Gerhard Lazu:** Interesting. And how do you give them this application?
178
+
179
+ **Wycliffe Maina:** They have a GitHub application. You connect your repository to their servers, and they determine which sort of application it is, and determine the configuration required \[unintelligible 00:26:04.21\] on their servers.
180
+
181
+ **Gerhard Lazu:** Right. That's really interesting.
182
+
183
+ **Wycliffe Maina:** Of course, we are using a custom deployment environment, because we need to pass in some environment variables from AWS, which means we have our own custom CI/CD environment to deploy that.
184
+
185
+ **Gerhard Lazu:** Okay. And where does the CI/CD run? What is CI/CD in your case?
186
+
187
+ **Wycliffe Maina:** In our case we basically use GitHub Actions. The first step is usually to get the secrets from AWS, that is SSM mostly... And \[unintelligible 00:26:35.02\] the URL of the API, AppSync API, and then passes that over to Vercel, so that it can \[unintelligible 00:26:46.04\] the build.
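
A small sketch of that first step - the CI job reading the AppSync API URL out of SSM Parameter Store before handing it to the Vercel build - using the AWS SDK v3. The parameter name and the way the value is exported are assumptions.

```typescript
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

// Read the AppSync URL the frontend build needs; the parameter name is illustrative.
export async function getAppSyncUrl(): Promise<string> {
  const result = await ssm.send(
    new GetParameterCommand({
      Name: "/skyhook/staging/appsync-url",
      WithDecryption: true, // in case it is stored as a SecureString
    })
  );
  return result.Parameter?.Value ?? "";
}

// The workflow would then expose this value to the Vercel deployment,
// for example by writing it to an environment variable before the build runs.
```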
188
+
189
+ **Gerhard Lazu:** That sounds really interesting. Okay... Again, I've never heard of this service. I definitely wanna check it out. I think you said why you chose it, because Netlify just requires more configuration, right? That was my understanding.
190
+
191
+ **Wycliffe Maina:** Yeah. With Netlify you have to do a lot of configuration for different environments. It too has improved over the last few years, but the zero config, and then also you factor in that Vercel is the parent company for Next.js, so it's their own product...
192
+
193
+ **Gerhard Lazu:** I see...
194
+
195
+ **Wycliffe Maina:** So it becomes a very good combination.
196
+
197
+ **Gerhard Lazu:** I see. That makes sense. Okay, that makes sense. And do you have multiple environments? Do you have like staging, or per-feature environments? Or is there just like a single production?
198
+
199
+ **Wycliffe Maina:** That's the beauty of using Vercel - another advantage is that each PR you have gets its own unique URL. So if multiple people build different \[unintelligible 00:27:42.14\]
200
+
201
+ **Gerhard Lazu:** That sounds really interesting, and I really like that idea. I know Netlify does something similar. But I've never understood... For a stateless service - great. You have a feature environment. But what about the data? How do you do the data migration for that? How do you solve that problem?
202
+
203
+ **Alan Cooney:** \[28:06\] We would love to have one complete backend built per PR as well, which is close to being feasible in the serverless world, because it costs pennies to run... And then really per PR you could have your own \[unintelligible 00:28:17.03\] your own environment. But we don't have that; it doesn't seem to be at least easy with AWS... So we have \[unintelligible 00:28:25.21\] but it goes to one staging backend which has a set amount of test data.
204
+
205
+ **Gerhard Lazu:** I see. Okay, and then I imagine that GitHub Actions does any migrations that it needs to do on the staging environment, so that the PR -- is that right? Or do you have like per-PR -- like, how does GitHub Actions know what to do on the staging environment based on the type of push or whatever action it is?
206
+
207
+ **Alan Cooney:** It's basically configured in a GitHub Action file, a workflow file, per microservice, or there's a separate one for the website, the frontend... And that defines a series of steps that it goes through specifically for that service or the website.
208
+
209
+ **Gerhard Lazu:** That makes sense. So I'm imagining that you have different repositories per microservice. Is that right?
210
+
211
+ **Wycliffe Maina:** No, we have a single repository for all our services.
212
+
213
+ **Gerhard Lazu:** Yes! I've got something right! Yes! \[laughter\] I'm a big fan of single repositories. Why? Because it keeps everything simple. Now, I know that's something we discussed in 2019... How well did it actually work in practice with all these changes? I'm so curious to hear about this.
214
+
215
+ **Alan Cooney:** This has been a super-pivotal thing in terms of leverage that you yourself have had on the company, Gerhard, in terms of saying "Both let's split it up into microservices, but also, at the same time, let's bring everything into one repo on GitHub." And the beauty of it is -- for example yesterday I pushed something which actually impacted every service, and you just see a list of 30-40 ticks as the different CD pipelines are running in parallel, and then you can push it out with a huge amount of confidence without worrying about synchronizing everything.
216
+
217
+ **Gerhard Lazu:** Wow... I'm getting so warm and fuzzy right now, and it's not the weather, I can tell you that... This feels great. I think this is the best feedback I received all week, all month... I don't know, this was like amazing. Okay... Wow, it makes me so happy; you have no idea. Great.
218
+
219
+ I'm wondering now, Wycliffe, what does a merge into the main branch look like? What happens between merging into main and the code appearing in production? Can you run us through that? You have three minutes, because that's how long it takes, right? \[laughs\]
220
+
221
+ **Wycliffe Maina:** Yeah, \[unintelligible 00:30:51.14\] so it's going to be a single commit... And the first thing it does is run a few tests, that is unit tests. And once that's done -- so that means also running the linting, and also trying to build it so that we can catch all errors that are \[unintelligible 00:31:13.27\] and tests written before. The reason we do this is sometimes something that happened on a PR and the test passed, the conditions might not be duplicated on production... So we have to be sure that all the tests are passing.
222
+
223
+ Then after that \[unintelligible 00:31:33.12\] The reason we do this is because sometimes we run integration tests \[unintelligible 00:31:39.01\] staging, which is the step that comes after deploying to staging. And \[unintelligible 00:31:44.26\] on the staging environment that is the integration tests, then the deployment to master takes place; that is the deployment to production.
224
+
225
+ So all in all, depending on the number of tests and the size of the codebase, that may take anywhere between one and three minutes. On our new smaller services it's even faster than that.
226
+
227
+ **Gerhard Lazu:** \[32:07\] Do you find yourself pushing changes at the same time to multiple services? Alan, you mentioned yesterday you made a change... What does that look like, I'm wondering, Alan?
228
+
229
+ **Alan Cooney:** Yeah. I don't do it that often, to that many services, for sure. That was actually a change for billing tagging in AWS. But basically how it works is - for example, you want to update the website and your backend service. You can push those through at the same time, especially if the website feature is feature-flagged, or not available to users yet; you can push them at the same time, and that lets you encapsulate maybe a small piece of code that's spread across several areas, and see the change very quickly.
230
+
231
+ There's actually another quite cool feature here, which is that we use GitHub Actions, which means we can have very specific tests for specific services. Saul mentioned our API service, which basically just has a GraphQL schema, and that checks for breaking changes to the schema on every push... Which is obviously super-important if you don't want to have to version your schema... And the accounting service spins up Puppeteer to run some tests, very specific tests, on our payments provider.
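
A minimal sketch of the kind of check Alan mentions for the API service - failing CI when a pushed schema introduces breaking changes - using the `findBreakingChanges` utility from graphql-js. The schema file paths are illustrative, and the real setup may well use a different tool.

```typescript
import { readFileSync } from "fs";
import { buildSchema, findBreakingChanges } from "graphql";

// Compare the schema on the main branch against the schema in the current push.
const before = buildSchema(readFileSync("schema.main.graphql", "utf8"));
const after = buildSchema(readFileSync("schema.graphql", "utf8"));

const breaking = findBreakingChanges(before, after);
if (breaking.length > 0) {
  for (const change of breaking) {
    console.error(`${change.type}: ${change.description}`);
  }
  process.exit(1); // fail the CI step so the breaking change never reaches main
}
```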
232
+
233
+ **Gerhard Lazu:** That's interesting... Okay. So you have all these tests, all these services... How do you configure them? How do you set everything up in the first place? Because there's quite a bit of things, like GitHub Actions, on AWS you have all those services... How do you set everything up? What do you use for that?
234
+
235
+ **Saul Cullen:** So before CDK there's Hygen that we use. When we first build a new service, what we'll do is we'll code-gen it using a tool called Hygen, and that sets up the basic template of each service. So the core things that we require in a service will be there, will be ready to use, and will be standardized across them. So anything that needs to be defaulted. And that's actually proven extremely useful for allowing us to go away maybe individually and create a new service that you can then pass on to another team member who will have an idea of what the service should contain, you can dive in and understand it at a high level very quickly.
236
+
237
+ **Gerhard Lazu:** Okay. Where do you store all this config?
238
+
239
+ **Alan Cooney:** Hygen is very interesting in that it actually stores your config in your repo, next to where it's being used. So that makes it super-easy to edit, because obviously -- so it contains things like testing and linting setups, so we can go in there and add something in very easily if we want to change how it's done for future services.
240
+
241
+ **Gerhard Lazu:** Okay.
242
+
243
+ **Alan Cooney:** So it's committed up, essentially.
244
+
245
+ **Gerhard Lazu:** Okay, that's great. It's version controlled. I love the sound of that. How does it get applied? How do all those changes get rolled out onto AWS?
246
+
247
+ **Alan Cooney:** When we were working together a couple of years ago, Gerhard, we were using CloudFormation...
248
+
249
+ **Gerhard Lazu:** Yup, I remember that. Oh my goodness me... \[laughter\]
250
+
251
+ **Alan Cooney:** And one of the things you said, which is obvious in hindsight, is "I really hate yaml, especially when it's 500 lines long for a service." And of course, our services - they're mostly not actually Lambda code; they're things like Simple Queue Service (SQS) queues, and lots of built-in AWS products to reduce the amount of work we have to do. So we now deploy that using AWS CDK, which lets us write infrastructure in TypeScript, and it also means that we can create separate node modules. They basically have some pre-built defaults in them, so if you want to stream from DynamoDB to Event Bridge, so take your data and stream it to the event bus, you can add in three lines of code - basically a custom CDK construct that we've had - that behind the scenes creates Lambda functions and queues and dead-letter queues, and alarms if it fails; all this complexity. But it's just three lines of code that says "I want this DynamoDB table to stream to my event bus."
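
To make the "three lines of code" idea concrete, here is a hedged sketch of what such a reusable CDK construct could look like in TypeScript. The construct name, its props, and the resources listed in the comments are assumptions about the pattern, not Skyhook's actual construct.

```typescript
import { Construct } from "constructs";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import * as events from "aws-cdk-lib/aws-events";

// Hypothetical reusable construct: stream a DynamoDB table to an event bus.
export interface TableToEventBusProps {
  table: dynamodb.ITable;
  eventBus: events.IEventBus;
}

export class TableToEventBus extends Construct {
  public readonly table: dynamodb.ITable;
  public readonly eventBus: events.IEventBus;

  constructor(scope: Construct, id: string, props: TableToEventBusProps) {
    super(scope, id);
    this.table = props.table;
    this.eventBus = props.eventBus;
    // Behind the scenes a construct like this would create:
    //  - a Lambda function subscribed to the table's DynamoDB stream,
    //  - an SQS dead-letter queue for failed batches, plus an alarm on it,
    // and forward each stream record to the event bus with PutEvents.
    // (The actual resource wiring is omitted in this sketch.)
  }
}

// The "three lines of code" inside a service's stack would then look like:
// new TableToEventBus(this, "BookingsToBus", {
//   table: bookingsTable,
//   eventBus: centralBus,
// });
```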
252
+
253
+ **Gerhard Lazu:** \[35:58\] Okay, that sounds like a very good setup, I have to say... And I also would like to add that my relationship with yaml went through different cycles. It's definitely a love/hate sort of thing, I have to say that... But I think my biggest distaste for abusing yaml came from seeing it being used in CloudFormation, where it'd literally do increment, like an inc - can you imagine the string inc put in a list, and then you had two numbers which had to be incremented? A variable would be generated out of that... So basically, you'd program in yaml, which I think was abhorrent. You should never do that.
254
+
255
+ I remember that moment, and I think I will remember it till the end of my days; that was horrible. Why would you do that? If you want to do that, then just use a programming language, like TypeScript. That makes a lot of sense. I remember that moment. So I'm really glad that you went down this path, because if you do have to do that, with any sort of templating, any sort of smart logic - don't do it in yaml, please. It's just horrible. So yeah, I'm very glad that that worked so well as well. Nice.
256
+
257
+ **Alan Cooney:** Yeah, that's had a big impact on us. One of the most depressing things is changing a small piece of CloudFormation and then waiting 15 minutes for it to tell you that there's something wrong in the CloudFormation.
258
+
259
+ **Gerhard Lazu:** That's right. I remember that. That was painful. Well, I'm glad that you're in a much better place now... And with that in mind, I know that things can always improve. It's one of my favorite things about this specifically - it's easy to improve. And the whole industry keeps improving all the time. So what I'm wondering now is what could be better about your current setup? Do you have some improvements in mind that you would like to do?
260
+
261
+ **Saul Cullen:** We're constantly looking for improvements. We're working with what I think a lot of people would consider as bleeding edge technology, and that means that some of the decisions that we make don't always pan out to be the best ones. It can take trying them out to actually realize that it isn't the best tool for the job for us. That was something we talked about previously. We went down the route of using RDS as a single database store, and found that actually it made more sense to go down the Dynamo route.
262
+
263
+ What we're always constantly struggling against though is how we use our time... So we can seek to change things like we are currently - we're doing a number of migrations of all the services, but what that means is we reduce the amount of business value that we ship. So there's this push and pull constraint that we're working against with what may be best for the dev side, and what we want to work on, what we want to chop out, change, improve, go back and fix, versus what we need to ship to customers to improve their lives and to release more business value and grow the company.
264
+
265
+ To answer your question, I think an area for me that is so important is development; the development experience is something where we still need to make quite a few improvements. You mentioned your questions about how our data is set up, and whether we refresh our data and things in our staging environment... And you know, we need to improve that so that the development experience is more fluid and more true to what we then push out to production. I know Wycliffe's got a few thoughts around this, so maybe you can jump in there Wycliffe.
266
+
267
+ **Wycliffe Maina:** I think what Saul also is talking about is more of like environment portability, so being able to create a whole bucket for a PR and being able to tear it down once the PR is done. One of the biggest challenges we have mostly when we're working is - if we're working with the same service, we end up having two or more different PRs. \[unintelligible 00:39:49.25\] so this can become a frustrating point whenever you're doing that.
268
+
269
+ So we are looking at technologies or solutions to help us in that area, so that we are able to \[unintelligible 00:40:06.23\] a little bit more, as different teams work on different solutions for different areas.
270
+
271
+ **Gerhard Lazu:** \[40:17\] I think that makes a lot of sense... Being able to experiment with data, being able to do things at maybe a larger scale, production scale... Like, how does this impact production without taking production down? That would be nice, especially if you have to do migrations or big changes... So if anyone is listening to this that has an idea of how to do this better, if someone knows \[unintelligible 00:40:36.10\] AWS that are solving this problem or are thinking about it, I'm sure that Alan, Wycliffe and Saul would love to hear from you. So don't be shy. We're all friendly, all of us.
272
+
273
+ **Break**: \[40:49\]
274
+
275
+ **Gerhard Lazu:** Is there any particular incident or war story that you would like to share? Something that you've learned from. It doesn't have to be tech-related - it can be business-related - but something that obviously impacted your users. Because at the end of the day, everything that we do, whether it's coding, shipping, has an impact on our users. And when we get things wrong, they are the ones that suffer the most. So it doesn't always have to be code changes, or migrations... Sometimes it can be providers that you depend on that fail you, and in turn, you fail your users.
276
+
277
+ **Saul Cullen:** Yeah, I'm sort of having to think through this... And obviously, as a young team, we come across a lot of challenges on a daily basis. I think perhaps an area that's been particularly challenging for us over the last year to 18 months has been payments, actually. When I originally joined to help out with payments, we went down the route of choosing our payments provider... And obviously, this global pandemic suddenly hit at the early stages of 2020. And in the travel industry, as I mentioned earlier, there were lots of these rules and regulations that you need to follow to take payments for future holidays. It creates a lot of intricacies when you're taking these future payments. You've got chargeback mechanisms and things, and Section 75 that comes into play, and travel instantly becomes what's termed a high-risk industry from the card providers and the merchant banks' perspectives.
278
+
279
+ We certainly came across an earlier incident where our payment provider disabled refunds for us, unbeknownst to us. And of course, in the early stages of a pandemic occurring there's all manner of changes to bookings going on, customers no longer able to travel... And that was something that had really a very significant impact on us from an operational perspective. Suddenly, the tasks that we were working on one day had to be immediately shelved, and the immediate issue jumped upon... Because at the end of the day you've got to look after your customers. I'm a strong believer in customer experiences. As you know, Gerhard, there's a great book out there - shout-out to Joseph Pine and James Gilmore on the Experience Economy. Great book, have a read.
280
+
281
+ \[44:14\] So at the end of the day, experience for customers comes first from our perspective, so we jumped on this and tackled it in our own way, and patched the holes as best we could... But I think it was quite a realization for us that rolling with single providers for third-party services definitely -- it's an obvious thing, but it comes with a lot of risk. And when it's a core service such as payments, it's something that you really need to think about what your options are, in the worst-case scenario. It's something we're still working on.
282
+
283
+ We've talked a lot about it actually since that occurrence, and we've got a lot of ideas of how we can fix it, there's tools out there like -- some of your listeners might have heard of things like Spreedly, where you're able to hook in with multiple payment providers rather than running a single provider like Stripe, or whoever it might be; you can maybe have two or more different providers that get selected, depending on various criteria that you can define, or allow Spreedly to define. They are a potential solution, but again, all the complexities in the specific travel world add another layer to solving that challenge. So it's a really interesting one, and I expect something a lot of people are experiencing right now.
284
+
285
+ **Gerhard Lazu:** That's a really good one, because it makes you realize how even in the tech industry, where it's all about code and shipping, you hit against business realities like payments. Real money has to flow somehow -- well, real... It's mostly virtual these days, but still, money has to travel somehow, and you start integrating with all these providers. So it's not just your technology partners or providers, it's also the payment providers. And for you that have to deal with trips... The trip agencies, I imagine, the travel agencies that you have to deal with as well - what were they like for you?
286
+
287
+ **Alan Cooney:** We deal with local guides rather than agencies. They're very small companies. So to take a step back, the way it works for payments on our website is you make a payment, and it actually gets protected in usually a trust fund. That's where the complexity comes from. So you can't just go through Stripe and send it on \[unintelligible 00:46:30.07\] straight to the provider. It gets protected until after the trip.
288
+
289
+ So from this perspective, there was not really any risk as far as we were concerned, in that all this money can simply be returned if the trip can't go ahead. It just so happens that at the time our travel payments provider, and indeed several others as well, prevented all automatic refunds across their API for all customers. So we were hit with this really challenging problem.
290
+
291
+ **Gerhard Lazu:** How did you solve it?
292
+
293
+ **Alan Cooney:** Yeah, it was actually a business solution in the end. We managed to convince our provider that we were a special case, that we were very safe, and they re-enabled at that time automatic refunds for us. So it took a few weeks to solve, which was obviously very stressful and we were really concerned for our customers, who are nervous and they want to see -- if they need a refund, they want to see that quickly. So they don't worry about the financial stability of the companies involved. But that was the solution in that case. We actually had a very related incident a while later, where - again, we had a system issue with payments, and we ended up solving it with a very interesting and unorthodox approach...
294
+
295
+ **Gerhard Lazu:** I like where this is going...
296
+
297
+ **Alan Cooney:** Yeah, it was really useful actually having this event system, because basically what happened was refunds were shown as succeeded and failed in various ways, and so we replayed our event stream, this time hooked up to a Lambda function which sent an email to the support team of our payments provider to resolve the issue, and triggered a to-do for us to check that it had been resolved.
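
A rough sketch of the replay consumer Alan describes - a Lambda that receives each replayed refund event and emails the payment provider's support team - using SES from the AWS SDK v3. The event shape, detail type, and email addresses are purely illustrative.

```typescript
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";
import type { EventBridgeEvent } from "aws-lambda";

const ses = new SESClient({});

interface RefundFailed {
  bookingId: string;
  amount: number;
  currency: string;
}

// Invoked for each replayed event; asks the provider's support team to fix the refund.
export async function handler(
  event: EventBridgeEvent<"RefundFailed", RefundFailed>
): Promise<void> {
  const { bookingId, amount, currency } = event.detail;
  await ses.send(
    new SendEmailCommand({
      Source: "ops@example.com", // hypothetical verified sender
      Destination: { ToAddresses: ["support@payments-provider.example"] },
      Message: {
        Subject: { Data: `Manual refund needed for booking ${bookingId}` },
        Body: {
          Text: {
            Data: `Please process a refund of ${amount} ${currency} for booking ${bookingId}.`,
          },
        },
      },
    })
  );
  // A follow-up to-do (e.g. a task in the team's tracker) would be created here
  // so someone verifies the refund actually went through.
}
```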
298
+
299
+ **Gerhard Lazu:** \[48:04\] That's very clever.
300
+
301
+ **Alan Cooney:** A bit of a complex solution, but you have to think outside the box with these, so... Much credit to the team for creating that.
302
+
303
+ **Gerhard Lazu:** That is really genius, because statements of facts - those things happened, and what you do about those things can change. And having the ability to replay and take a different course of action for things that happened is so powerful. Wow. So tech solves this specific problem. Interesting. And it's obviously bright minds, it's not all tech.
304
+
305
+ Okay... So we talked about this particular incident, this particular tricky situation. A company fighting for their customers - I wish that was the case more often. And I know that many companies do the right thing, but I also know companies that don't do the right thing, so this is admirable... And especially when the travel restrictions hit, I know that a lot of people were affected in many, many ways. So it's great to have a peek behind the scenes as to what that looked like, and companies fighting for their customers - the payment providers, the... It wasn't trust funds; how did you call them, Alan?
306
+
307
+ **Alan Cooney:** No, exactly, it's trust funds \[unintelligible 00:49:11.28\] usually.
308
+
309
+ **Gerhard Lazu:** Having those relationships coming into play, and you having to lean on those, and eventually the right thing happening weeks after the fact - there's a lot that goes in the background. And at that point, does it matter to ship code? Does it matter to add new features? Not really, right? Because the most important thing is doing right by your customers. I think many sometimes can get carried away in this world of tech and forget about that critical, critical aspect.
310
+
311
+ Okay, so - still thinking about your customers, the Skyhook Adventure customers, which feature that you've shipped in the last six months made you most proud, most happy? And you can go around, maybe you have different favorite features... Wycliffe, how about we start with you? Do you have a specific favorite feature? It doesn't have to be customer-facing, but it would be nice if it was.
312
+
313
+ **Wycliffe Maina:** I'd probably say hosts sign-up. I sort of consider that my baby. I worked on it for a long time, I made a lot of mistakes in the process, and learned a lot over the last few months. \[unintelligible 00:50:14.12\] an increase in host sign-ups.
314
+
315
+ **Gerhard Lazu:** How did it work before this feature was developed? How did hosts get signed up?
316
+
317
+ **Wycliffe Maina:** The whole process is that we are moving over to a new service. We sort of \[unintelligible 00:50:38.11\] each of us took an individual task. I focused on the hosts, I think Alan was working on the booking service... Essentially, the idea was to improve reliability so that the process of signing up would be much smoother. So we have an approach where we guide the hosts with a process of explaining to them what it entails, and then sort of like signing them up so they can have an account, and then creating a host profile, and then going on to our trips. \[unintelligible 00:51:13.15\] we also did some improvement on the UI. We also request a lot of information upfront; before, I think it was being sent over in a spreadsheet, or something.
318
+
319
+ **Alan Cooney:** I think it's important to emphasize that previously the hosts filled in PDFs and a spreadsheet...
320
+
321
+ **Gerhard Lazu:** Wow.
322
+
323
+ **Alan Cooney:** You know, these are very MVP things, so it's not just -- when we say "migration", okay, there was some migration involved, but actually it was changing an MVP into a brilliant experience for the hosts.
324
+
325
+ **Gerhard Lazu:** That sounds right.
326
+
327
+ **Alan Cooney:** And we've seen a fivefold improvement in the number of hosts signing up, so that's something to be proud of.
328
+
329
+ **Gerhard Lazu:** For sure, for sure. It just goes to show, there's many areas like that that you can always improve. Knowing which one to focus on, which is the most important one - that's where the business comes into play... And they say "You know what - this is what we need, because the company will be able to do these things if we do this thing first." This is the most important thing, because it unlocks other things...
330
+
331
+ \[52:11\] So that is a very nice business working well with tech, and working well with maybe marketing, who knows... I don't know -- I mean, even though you're four people, all of you wear different hats, I know that, and you're all hands-on. That's one of my favorite things about startups - everybody gets to do everything and grow in different ways that they never experienced.
332
+
333
+ So how about you, Alan? Which is your favorite feature?
334
+
335
+ **Alan Cooney:** Yeah, this is kind of a strange one, but cancellations. It's a bit different with the Covid pandemic, but they want to cancel or change dates or do something like that... And previously, they had to reach out to us, we'd get back to them within 24 hours, maybe they had some questions about availability, how they can do everything... And actually, this is something that happens the whole time - customers wanna switch dates, for example. Now they can just do that straight -- they go onto the website, click on their booking, they can click Cancel, they see all the details about what's gonna happen, they can choose the appropriate option to change date, or get a refund, or whatever they need... It's just a colossally better customer experience.
336
+
337
+ **Gerhard Lazu:** That's amazing. So let me guess - is there a cancelation service now?
338
+
339
+ **Alan Cooney:** It's actually in the booking service. That one is quite big, I have to confess. The backend code is pretty simple. It's just a really nice user experience, and I know from Damien on the team - who's not here today, and who works in operations - that it also produced a massive decrease in the number of support tickets during the pandemic, as you'd imagine... Because for those who want to use self-service, they can just do it instantly.
340
+
341
+ **Gerhard Lazu:** What about you, Saul?
342
+
343
+ **Saul Cullen:** There are a lot of new features that have been going out recently that are really exciting, I think, from both the hosts side and the customer side of the Skyhook marketplace... I think one that's been asked for many times by our customers, and internally, is the ability to discover new trips. As our number of trips and hosts on the platform has increased, so has that need to be able to find the right one to go on as a customer.
344
+
345
+ What we implemented recently, as we touched upon earlier, was utilizing an Algolia third-party site search tool to provide that functionality for us. Something that in the past may ordinarily have taken weeks or months to implement was done within 7-8 days, fully integrated, with lots of capability behind it. I was certainly really proud to see it go out, and we're starting to get metrics back on that now from customers showing a lot of them using it.
346
+
347
+ Also, we're starting to see areas where we need to make improvements to that from those metrics, where we can add features and functionality and where we can remove them.
348
+
349
+ It sort of takes me onto a slightly tangential point actually about third-party tooling. It's something that we in the last few months have started to use more of. As developers, we often think "Hey, I could build that." We've got this great thing called serverless that will take a week to build a solution to whatever problem it may be... And invariably, it ends up taking significantly more time to ship those features.
350
+
351
+ So what we started to do, given that we're a very small team at the moment, is to look for third-party tooling to give us rapid solutions that we can then -- you know, either they provide a long-term solution for us and they're really fully-featured and they do what we need without creating too many single points of failure or issues, or they can act as a proof of concept for us. "Is this something that customers really want, and should we invest team time in it?" Because as you mentioned earlier, when you've got a small team like this and you've got a pandemic going on, really prioritization is actually the crucial thing that we've got to get right. We've got a list of features as long as our arms that we could work on and we know customers would be asking for, but which one is gonna provide us with the most business value back and the most satisfaction for our customers?
352
+
353
+ \[56:15\] So that's an area where we're turning to these third-party tools to prove some of these ideas and concepts really quickly, and reduce those feedback loops that we talked about earlier.
354
+
355
+ **Gerhard Lazu:** Any tools that you would like to mention, Saul?
356
+
357
+ **Saul Cullen:** Certainly Algolia on the search is a great tool. I think they're probably a market leader at the moment, and that's been really positive; our experience was good. Third-party email services - you know, it's very easy to start linking services into AWS SES (Simple Email Service), and things like that... But you then find yourself building a lot of your own logic behind it, and actually it's easier to still outsource to the MailChimps of this world, the Drips, Customer.io or whatever may be your preference.
358
+
359
+ **Gerhard Lazu:** What do you use?
360
+
361
+ **Saul Cullen:** We're using Drip at the moment. Drip's a specialist e-commerce email marketing tool. Our integration is relatively light. It's primarily a frontend integration that we've done so far. We'd like to hook into more of the backend and some of the events that we fire off as well; that's something we will no doubt do in due course... But the key is they're a specialist e-commerce email provider.
362
+
363
+ So for us, choosing an email provider that offers all of the things that you need when you're essentially selling things on your site is pretty key, and actually a lot of the email services have gone down that route, to try and answer more specific customer questions... Whether they're the perfect one out there - it's one of the reasons we still maintain quite a light integration as well. We want to verify that it does everything for us that we need before we dive head-first in.
364
+
365
+ **Gerhard Lazu:** That's very good, Saul. Thank you for that. I have one last question... I think Alan is the one for this. If I've been listening to this for the last 45, 50, 60 minutes, however long it was, and if there was one key takeaway from this conversation, what would that be, Alan?
366
+
367
+ **Alan Cooney:** I think the big technical takeaway certainly is that we've really enjoyed working with all these serverless tools, and they've helped us ship code, and ultimately great features and experiences to customers much faster... So definitely, if you're a developer and you're looking at some of this stuff and maybe haven't used it yet, I'd really recommend checking out things like AWS Lambda, AWS EventBridge, or the equivalent tools with the other providers. It's really, really useful for improving velocity and ultimately what the customer gets to do with your product.
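For anyone who hasn't tried that combination, a minimal Lambda function written in Go that consumes an EventBridge event looks roughly like the sketch below. The `BookingCancelled` detail shape and the handler logic are illustrative assumptions, not code from Skyhook.

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// BookingCancelled is a hypothetical event payload; the real detail schema would be your own.
type BookingCancelled struct {
	BookingID string `json:"bookingId"`
	Reason    string `json:"reason"`
}

// handler is invoked by Lambda for every EventBridge event matched by a rule.
func handler(ctx context.Context, event events.CloudWatchEvent) error {
	var detail BookingCancelled
	if err := json.Unmarshal(event.Detail, &detail); err != nil {
		return err
	}
	log.Printf("processing cancellation for booking %s (%s)", detail.BookingID, detail.Reason)
	// ...issue the refund, notify the host, update the booking record, etc.
	return nil
}

func main() {
	lambda.Start(handler)
}
```

The appeal of wiring services together this way is that publishers only emit events; new consumers like this one can be added later behind an EventBridge rule without touching the service that produced the event.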
368
+
369
+ **Gerhard Lazu:** That sounds great. Alan, Saul and Wycliffe, it was my pleasure. Thank you very much for taking the time and for sharing so many good things with us. Thank you.
370
+
371
+ **Alan Cooney:** Thank you very much.
372
+
373
+ **Saul Cullen:** Thank you, Gerhard.
A universal deployment engine_transcript.txt ADDED
@@ -0,0 +1,263 @@
1
+ **Gerhard Lazu:** So I have heard of this new tool called Dagger a few months back... And I signed up, and some weeks later I got the invite. It took me a while to look at it properly, but when I did, I really wanted to share the conversation with the people behind it, because it felt special.
2
+
3
+ So I would like to start by thanking you for making time, Sam and Solomon, to join me today. We would have had Andrea as well, but unfortunately he's not feeling well, so... Get better, Andrea, and maybe it'll happen next time, all four of us.
4
+
5
+ **Solomon Hykes:** Thanks for having us.
6
+
7
+ **Sam Alba:** Thank you.
8
+
9
+ **Gerhard Lazu:** So I've learned so much about Dagger in the last few weeks, actually... But what I don't know, and I'm sure that other people would really enjoy learning more about, is how did the Dagger idea start, Sam?
10
+
11
+ **Sam Alba:** Yeah, it started a while back, actually. We spent a few years at Docker, all of us, and Solomon was the founder, and we worked for many years for Docker, and to make Docker successful... And around 2018 we all decided to do something different in our life. It took us a while to realize that the most important thing for us was to actually work together. So that was the initial goal before starting the Dagger project - it was about starting a company to work together on something that matters to us.
12
+
13
+ \[03:46\] So we did that at the end of 2018, went to YC, participated in the Winter '19 batch. Solomon was actually a partner, he can share more about that experience... And we spent all of this time talking to people and learning about their problems. We interviewed a lot of different companies... And what we learned along the way was all companies - small, medium and large - are building an internal platform for application delivery. Sometimes they call it CI/CD, a CI/CD pipeline, an internal platform, whatever the term is, but it's always the same thing. And what we realized is in building this they were missing something in the middle - which is still missing, actually, because Dagger is not well known yet... And this missing thing is a way to program everything they have to do in order to take code from local all the way to dev, staging and production. This is what we're trying to achieve right now with Dagger: a programmable environment for developers to develop their internal platform.
14
+
15
+ **Gerhard Lazu:** So how do you see, Solomon, Dagger fitting in this landscape where the companies are struggling to find solutions, and some of them succeed, but maybe it's not what they expected, maybe it takes too long... How do you see Dagger fitting in this landscape?
16
+
17
+ **Solomon Hykes:** I think actually companies looking to build a CI/CD system, to build a delivered platform - they find tons of solutions... And that's sort of the problem - they end up with too many solutions. Different teams using different solutions, different teams deploying differently, multiple CI systems co-existing, frontend teams, backend teams, machine learning teams, infrastructure teams all have their favorite tools and systems, and there's just so much offer. There's a new startup every day coming out with a new infrastructure management tool or CI pipeline tool. There's just so many. So there's tons of great tools and systems out there, but the experience of using them is very fragmented. So where we fit in is we try, we aspire to solve this fragmentation problem by unifying what you already have into a cohesive platform. So that's where we fit in.
18
+
19
+ **Gerhard Lazu:** I like that you are talking a lot about applications and about application delivery, and not about infrastructure, which I find interesting. Why is that? Why are you thinking about applications and not about infrastructure?
20
+
21
+ **Sam Alba:** It's actually -- the answer can take a long time, is my point. We could talk only about that point, actually. My opinion is the infrastructure is seen differently by different people. What I think is the infrastructure is a dependency for delivering the application, and it should be considered that way. We shouldn't see the infrastructure as its own thing. Seeing the infrastructure as a dependency drives you to make the right decisions, because it serves a purpose, and the purpose is delivering your application in a secure way, for your application to run reliably and be redundant etc. We know all the best practices.
22
+
23
+ **Solomon Hykes:** Yeah, I think that infrastructure is a relative term. Infrastructure as a term - it only makes sense relative to something above it. It's the infrastructure FOR something. It's below something. It's the structure below... But below what? In our case, it's below the application. Outside of that context it's just structure, I guess.
24
+
25
+ So from our perspective, the goal is the delivery application. It's a very complex application now, because it's the cloud, and things are complicated... And it's got dependencies, like Sam said. And some of those dependencies are things that you, the application team, cannot change. They're there, and you can use them, but you can't change them. That's the infrastructure. That line, between the stuff you can change, that you wanna deliver and update and push and all that, above - that's the application. And below, things that you can't change, but you need - that's the infrastructure.
26
+
27
+ Different groups of people will place that line in different places, and also, the line will move over time. When we started with Docker and all that container thing, infrastructure was either bare metal servers or VMs. Containers were not available to most teams as an infrastructure component. So containers were something that developers set up.
28
+
29
+ \[08:12\] Containers were a way to move stuff up, to escape the constraints of existing infrastructure that IT would typically lock down too much. For example, as a developer, if I wanna install a new package for image processing or something, then I have to ask permission from a sysadmin, and maybe it's available, maybe it's not... Maybe the version they install is not a branch I want, and it's whatever the Linux distribution -- whatever RHEL has. So that's just a pain to developers.
30
+
31
+ So containers started out as a tool above the infrastructure line, and then fast-forward five years, the infrastructure industry leveraged containers as a new means of delivering infrastructure, and so now it's largely below the line. So yeah, the point is the line moves over time.
32
+
33
+ **Gerhard Lazu:** Exactly, I was just gonna say that. I've heard many people talk about the value line, and how the value line is moving... And when you have a PaaS versus an IaaS, the value line shifts, and then the API changes. So the primitives, the building blocks are higher-level. If you use those higher-level building blocks successfully, you can do it a lot quicker, and then you can standardize, and then you have economies of scale, and a bunch of things come into play... Which don't when you are talking VMs and bare metal, and so on and so forth.
34
+
35
+ The truth is that we have all those building blocks, and there's so many these days, including containers. Thank you, Solomon. \[laughs\] So now, the choice is even more difficult of "Well, what do I choose?" And then you have serverless, and you have monoliths... So application - what does application mean to you? Because it's not a monolith. It's not like my container, is it? It's more than that? How do you think about application, Sam?
36
+
37
+ **Sam Alba:** So what I saw in the past few years is a lot of people try to take their own specification of their application, as it is internally in their company, and try to make a standard out of it. We saw a lot of different initiatives, including inside Docker at some point - is the application your Compose file, your Dockerfile, how do you introspect it... So we saw a lot of different application formats out there. What I think now, after spending some time talking to companies and working on a lot of different implementations of internal application delivery, is that the application format should be considered as everything needed to deploy this application. Basically, everything you need to do in order to take this code, build it, test it, publish it... Even including continuous deployment tasks like canary deployment, A/B testing - all of that. All of this I think is part of the application, and this application exists only in the context of a deployment. Otherwise, the application doesn't make any sense.
38
+
39
+ Something on your iPhone - it's easier to think about what an application is on your phone, although when you put your credit card into the App Store and install some applications, that's still an application deployment. So I think the same thing applies to any type of application out there. So to talk a bit more about those formats that were out there so far - I think they can be useful for some companies internally, like an application for a set of services, a Git repository, all of that... But this format is not portable. The only thing that can be ported is the way to deliver this application, which is really what Dagger is solving today.
40
+
41
+ **Gerhard Lazu:** So my understanding is that what Docker did for packaging code is what Dagger wants to do for application delivery. Is that a correct summary, Solomon? Would you agree with that?
42
+
43
+ **Solomon Hykes:** \[11:57\] In one dimension, yes. I mean, there's major differences, but there was something we were trying to do when we worked on Docker, and there were multiple opportunities, and we get to choose one. Docker made a choice to focus on being a next-generation runtime for applications. It's a new way to -- a specific build artifact, a specific runtime, and it has advantages over existing runtimes, higher-level language runtimes and high-level PaaS, building your own on top of a VM... You know, containers hit that sweet spot, and so with Docker we had no choice but to follow the market, what the market wanted out of Docker, and that was a new kind of runtime. And that eventually became an infrastructure concern.
44
+
45
+ But what we worked on that led us to Docker initially was a different goal - we were trying to standardize; we were trying to unify the industry around something, anything, so that we can all leverage at least one thing that we all had in common. But it turns out once you enter the arena as one possible runtime, you can win or lose in that arena, you can be very, very successful, and I think Docker as a runtime was very successful. Even more successful if we include the clones, and forks etc. But it was not ubiquitous, and it can't, because fragmentation is inevitable. So what we realized this time around is if we want to actually contribute something that can truly be ubiquitous, that anyone can use, regardless of their choice of runtime, and infrastructure, and language, and anything, then you have to give up on also wanting to be the runtime for the application. You have to choose. That's why Sam mentioned all these application standards, and Docker Compose through CNAB, which is - my understanding is it's kind of taking that model and trying to make it a ubiquitous standard... It will never be, because it can't, because it includes strong assumptions about what an application should look like. It gives one answer to the question "Where is the line? Where is the line between application and infrastructure? What is the shape of the line? How do you connect the two?" That's an answer... So if you're rooting for that standard, that implementation, you're rooting for everyone to adopt that answer.
46
+
47
+ What we're doing is we're rooting for everyone to define their own answer... The answer to "Where's the line between application and infrastructure, and how do you connect them?" It will be a different answer for each software team, we believe. There'll be patterns, commonalities that will come and go, but yeah, your delivery platform, the way you connect application infrastructure if you're a software-enabled business - it's strategic. It's unique to your application. And if your platform is generic, then that means your application is generic. It's not a realistic goal. So our goal is to -- and to answer your question, we're kind of picking up an original goal that we had while we worked at Docker, that we had to abandon, and now we're trying to achieve it in a different way by saying "We're not gonna run your application, we're not gonna tell you how to run your application. You tell us how you wanna run it and how you wanna deploy it, and Dagger can help you."
48
+
49
+ **Break:** \[15:07\]
50
+
51
+ **Gerhard Lazu:** What you've just told me makes a lot of sense. And the reason why it makes a lot of sense is because having spent a really long time in this space, I can see it. But what I don't know is how exactly does this solution that Dagger proposes actually work. So how do teams and application developers declare in Dagger what their application delivery flow looks like? How do they do that?
52
+
53
+ **Sam Alba:** So I'll start with this one... First of all, we --
54
+
55
+ **Solomon Hykes:** Sorry, I had to do it... \[laughter\]
56
+
57
+ **Gerhard Lazu:** No, that's good.
58
+
59
+ **Sam Alba:** Alright, we'll start again then... \[laughter\]
60
+
61
+ **Solomon Hykes:** Sorry. Go ahead.
62
+
63
+ **Sam Alba:** So we use a config language that you know and you probably mentioned already in the podcast, I'm not sure... It's CUE. I think you talked to Marcel. You interviewed him.
64
+
65
+ **Gerhard Lazu:** Yes.
66
+
67
+ **Sam Alba:** So people who are familiar with the show know the language already... Which is good for Dagger, to be honest, because the language is not well-known yet, and so it will help as the language progresses for onboarding with Dagger.
68
+
69
+ **Gerhard Lazu:** So just to add a little bit, a clarifying piece... This is Go Time episode 163, "CUE: Configuration Superpowers for Everyone." By the way, that's the reference. That's the exact episode that you can go and listen to to hear more about CUE. So CUE is one of the building blocks of Dagger.
70
+
71
+ **Sam Alba:** Yes, and so CUE provides a very compelling and powerful configuration language for platform engineers or application developers - I mention both roles because it's always someone different in a company who takes care of the CI/CD pipeline, and of the internal platform by extension.
72
+
73
+ So they use CUE to declare everything that they have to do in order to take the code from the code repository all the way to running the code live on any environment - dev, staging or prod. So Dagger right now offers through CUE a way to define all of that, everything. The way it works roughly is -- so you use the CUE language; Dagger does not change the CUE syntax, that's very important to us. We just add the ability in a CUE configuration to attach some steps to run, and we run them inside containers, and it's fully transparent for application developers. So the way it works is you define what you want to do by using packages that Dagger provides, basically. Or you can write your own.
74
+
75
+ **Gerhard Lazu:** \[19:59\] What is a package, by the way? Can you give us an example of a package? Because that's a fairly important concept.
76
+
77
+ **Sam Alba:** Yes. So CUE offers the ability to import packages, first of all... And Dagger piggybacks on these to provide a standard library of packages, of reusable building blocks. So one of these packages, for instance, is the ability to manipulate a Git repository, to deal with GitHub authentication, to integrate with Terraform, for instance, in order to rely on some infrastructure definition, or to provide some infrastructure resources along the way.
78
+
79
+ Recently we added a package for Argo CD. That was a contribution from someone in the community. The idea was to generate an Argo CD configuration from the application delivery pipeline and call Argo CD directly from Dagger. So there is a reusable package now.
80
+
81
+ Dagger also has the ability to define packages that you can share and import. So they don't have to be (all of them) inside the standard library. We only add packages that we think are general building blocks that people can reuse.
82
+
83
+ We also have packages that are cloud provider-specific - GCP, AWS... Inside GCP there is a package for GKE, dealing with the authentication. Then they all generate Kubernetes clients that you can reuse... So you can import those packages and use them pretty much like you would do in a programming language like Go, for instance. And then behind, once your configuration is live, there is a way to set inputs. Some of them can be secrets. For secrets, Dagger manages the encryption of secrets. Then you type "dagger up", as simple as that, and your application is live.
84
+
85
+ **Gerhard Lazu:** So Dagger takes some inputs. There's also outputs, I assume. The outputs are a result of the packages running, or the definitions that call packages? Would you call that a plan? Is that a plan in the Dagger language?
86
+
87
+ **Solomon Hykes:** Yeah, so the development flow that Sam described, what you're doing there is you're writing -- that's when you're writing a configuration for Dagger. So you're telling Dagger how to deploy your stuff. And without Dagger, you would typically do that in a bunch of places, which is one of the problems. We can't do it just once and then run it everywhere. You probably have to repeat and duplicate information and fragment it. It's gonna deploy from your laptop. You're gonna have shell scripts, a Docker Compose file maybe, a makefile, a custom Python script, some Ruby scripts, a custom JavaScript script...
88
+
89
+ There's a lot of custom and reusable tools out there for deploying from your laptop. Then sometimes you're gonna reuse the same scripts on a deployment server, a staging server, maybe you'll bring it into your CI... But then what happens in your CI is that CI wants to be CI/CD; because if you only do CI, there's not enough money to be made, because CI is infrastructure. A CI system is a runner for your scripts every time something happens on source control; a very valuable thing to do. But there's, what, like 100 of them. So now, what all these CI systems are doing is adding more and more sophisticated pipeline systems. And a lot of those are configurable in Yaml. You write a Yaml description that then says "Run this script" and "Run that script" and "Connect these things." They all have a different system, but they have in common that they use Yaml, which is an awful development experience... And also, it's not your shell script; it's different. So now you have two things.
90
+
91
+ And then what happens is your CI/CD process is in place. It uses CircleCI, GitLab, GitHub, Jenkins, whatever, and you master that Yaml thing, and you update it and you add -- now, sometimes there's an Action thing on GitHub, for example, a Docker container you plug in... So you start kind of adding things, and now all of a sudden that doesn't work on your laptop anymore. And also, you can't just look at the code for it. You're looking at a Yaml file, and that Yaml file says "Run this container." So you're running a container, but that's a binary thing. So now you're gonna go look for "How was that container built?" "Oh, here's the Dockerfile. I've found it. Oh, this one is a Python thing. It uses the APIs for this particular CI system, so it's not portable." "What if I have a different CI?" Well, you've gotta start over.
92
+
93
+ \[24:20\] So you have this fragmentation problem where the actual deployment logic is split up into lots of different pieces, using different languages... So you can't reason about it as a whole, number one. And each piece is tied to a specific runner, a specific piece of CI infrastructure. So when you're writing a configuration for Dagger, you're doing the same thing once more. You're writing a configuration that describes how to deploy. But the big difference is, 1) it's a better development experience, because it's a language that's better than a shell script, and also better than Yaml. It's sort of like the best of both: standard, imperative programming, and a declarative system like Yaml. So that's kind of -- like you said, it's a building block. 2) There's reusable packages. So if someone is really smart - maybe yourself - you wrote a piece that you need... It could be a pattern, it could be an automation, it could be the integration of a tool - Terraform, as Sam mentioned, Argo CD, whatever - and over time an ecosystem builds and you can reuse those. That seems obvious, even trivial for application developers. But as we know, in DevOps and cloud land, that actually does not exist, amazingly. Not fully. We're in the uncanny valley of delivery as code. It kind of looks like code almost, and it makes it weird, but it's definitely not as fun to do. Writing all these Yaml files, templating them, and copy-pasting shell scripts is not as fun. So it's a better experience, and you can run it anywhere.
94
+
95
+ **Gerhard Lazu:** I'm going to set the bar really high now, because you started with a very high bar...
96
+
97
+ **Solomon Hykes:** Okay...
98
+
99
+ **Gerhard Lazu:** So the way I hear it is that this is the best thing possibly since Docker.
100
+
101
+ **Solomon Hykes:** \[laughs\] Okay.
102
+
103
+ **Gerhard Lazu:** That's what I hear. Because Docker changed the way we package and we run applications... Or even code. It doesn't have to be applications. It can be stateful stuff as well. Services. Whatever. So if what you're telling me is true - and I have no doubt about that - then there will be a world, maybe a not too distant world, where Yaml will be an artifact, an output, a by-product. We will have a config language that has a runtime built in, and it has a type system built in, it has proper templating, proper secret management - all of that. It integrates with all the building blocks that we call infrastructure today, so that shipping code into anywhere (not just production) will be different.
104
+
105
+ **Solomon Hykes:** Yeah, I think that's true. Yeah. I mean, it has the potential to be that. I do think it has the potential to be as impactful as Docker. Certainly, I hope we'll be the ones to deliver on that potential, but I have zero doubt that someone will. And I don't see why not us, because we're doing it now and it's working. But yeah, it seems inevitable. It has to happen, because it's just too painful to keep doing things the way they're being done now.
106
+
107
+ **Gerhard Lazu:** It is. I can relate to that pain. I've been feeling it for years, but there hasn't been a solution that looked like it may work. And I think Dagger is the first thing that I have seen in recent years that may just work. It's a crazy idea, very ambitious, things can go whichever way... But the same thing was true for Docker. And by the way, Docker didn't start as Docker. Docker started as dotCloud. And the things that followed - I don't think many people could have foretold what was going to happen. The direction was great...
108
+
109
+ **Solomon Hykes:** Including us.
110
+
111
+ **Gerhard Lazu:** \[27:50\] Including you, exactly. That's exactly what I'm thinking. So you didn't know just how big and successful it's going to be. And when Docker came along, everything changed - for application delivery, for running systems... A couple other things happened, like Kubernetes, for example. I think that was an interesting one. But the container image and the container format, and even though the runtimes changed over the years, and I know that we use Docker Compose and Docker Swarm to run Changelog, and then we switched to Kubernetes... And it's okay, we like it; we like the container, we understand the value... But we still use a lot of makefile. We still write a lot of Yaml. And it's okay, because we've been doing it for years, but it's not great. And that's the point that you're trying to tell us - "Hey, there is a better way." So CUE is one of those amazing things. By the way, I looked it up - Configure, Unify, Execute. So it's in the name of the language.
112
+
113
+ But I know that Dagger has also another special component, and that is not Docker, even though it makes use of Docker. It's Buildkit. Can you tell us a bit more about that relationship, Sam? Because I think that's the other big, important component in Dagger, which is Buildkit.
114
+
115
+ **Sam Alba:** Yeah, absolutely. So Buildkit is indeed the other part that makes Dagger very powerful. Dagger has a -- it's in the name, the term "DAG", for Directed Acyclic Graph, which is pretty much the same execution flow that a makefile does, but in a more elaborate way, thanks to CUE and Buildkit, actually.
116
+
117
+ How Dagger works under the hood - that's really a bit technical, in the sense that it doesn't have to be understood by users. Even developers, platform engineers who are developing a Dagger configuration - they don't have to understand that. Exactly like when people use Docker Build, they actually use Buildkit behind the scenes. But they don't have to understand it. Same thing for Dagger; it's just that you don't have to write Dockerfiles - you can build from the Dockerfiles that you already have. You can actually call make if you want, and include your makefiles, or run your bash scripts if you want from your Dagger config... But the execution side of it happens within Buildkit, and Dagger calls to Buildkit directly through what's called LLB, Low-Level Binary, which is a binary of code that Buildkit implements.
118
+
119
+ So Dagger talks to Buildkit directly, and generates those instructions from CUE. And Buildkit offers a lot of different things - pretty much the same things that you know from a Dockerfile; it's just that, in my opinion, when you write a Dockerfile and you type "docker build", you'll probably use less than 10% of what Buildkit can offer.
120
+
121
+ So with Dagger today you can really step up the game by producing really fine-tuned execution from your configuration. It's a bit abstract when it's said like that, and I'm sure Solomon will explain it better than me...
122
+
123
+ **Solomon Hykes:** I don't understand it as well, so that frees me from explaining it.
124
+
125
+ **Gerhard Lazu:** This is what I propose... I was reading a blog post from Tõnis Tiigi introducing Buildkit. It's from 2017. And there's a link to a talk that Tõnis gave at DockerCon 2017, I believe. So I'll link it in the show notes. Watch that talk, which explains everything about Buildkit, including the LLB, how it works, there is the DAG... That's a great one to watch.
126
+
127
+ **Solomon Hykes:** If there's one thing to take away from our explanation of Buildkit - there's a lot to cover, but the main thing that you won't get from that presentation is that everybody else uses Buildkit to build, and Dagger uses Buildkit for much more than build. And I think it was already known that Buildkit is just an incredible low-level build system. It's vastly superior to almost anything else out there, because it's low-level, so it can focus and specialize. It's kind of like LLVM in the compiler world. Very similar.
128
+
129
+ \[31:49\] But we're taking it one step further and saying -- it's so powerful that the name is wrong. I don't know what it should be called. It's not just to build; it's a generic virtual machine for DAG computation. That's how we use it. And it turns out a great application of DAG computation - in other words, writing your program like a DAG, and running it like a DAG - is pipelines, especially when you have multiple pipelines and they're interconnected, and you need data to flow through them in one direction. So anything related to CI/CD will always be better when you program it as a DAG. So we leverage that. But it's a common point of confusion that we have to clear up often, I think... It's that, you know, if you dig into the internals of Dagger and you see Buildkit, you think, "Oh great, they have this built-in build capability" - which is true; as a nice side effect, anything that involves building stuff you can leverage through Dagger's APIs natively. You can reimplement your own Docker build natively in Dagger if you want, but it's not just build. Anyway, I'm repeating myself. It's important.
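To make the LLB idea a bit more concrete - this is not Dagger's own code, just a minimal sketch assuming Buildkit's Go client library (github.com/moby/buildkit/client/llb) - here is how a small DAG of container operations can be defined and marshalled into the low-level definition that a Buildkit daemon executes:

```go
package main

import (
	"context"
	"os"

	"github.com/moby/buildkit/client/llb"
)

func main() {
	// Each State below is a node in the DAG that Buildkit will solve.
	base := llb.Image("docker.io/library/alpine:3.15")

	// Run a command on top of the base image; the result is a new state.
	withGit := base.Run(llb.Shlex("apk add --no-cache git")).Root()

	// Clone a repository into the filesystem produced by the previous step.
	cloned := withGit.Run(llb.Shlex("git clone https://github.com/dagger/dagger /src")).Root()

	// Marshal the graph into LLB, the low-level definition Buildkit executes.
	def, err := cloned.Marshal(context.Background(), llb.LinuxAmd64)
	if err != nil {
		panic(err)
	}

	// Emit the definition so it can be piped into `buildctl build` for solving.
	if err := llb.WriteTo(def, os.Stdout); err != nil {
		panic(err)
	}
}
```

Piping the output into `buildctl build` against a local or remote Buildkit daemon would solve the graph; Dagger generates equivalent LLB from your CUE configuration instead of asking you to write it by hand.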
130
+
131
+ **Gerhard Lazu:** That sounds amazing. I can see how powerful that is. But the reality, at least for me - the implication of that is that I have to run Docker. And I uninstalled Docker on my machine about six months ago, and I don't have to update it anymore, I don't have to worry about licensing anymore, and there's a lot of contention around Docker recently. I sidestepped it in that I uninstalled it; I just use the runners in the CI/CD systems, or - and this is something recent - I'm looking at moving my development to remote hosts, where there will be Linux, there will be Docker... Not a problem. So from that perspective, obviously today Docker is a dependency of Dagger. And Dagger will not work without Docker. Correct?
132
+
133
+ **Solomon Hykes:** Not completely correct. If it's really important to you to use Dagger without any Docker engine, you can. Most people that we talked to choose to use it with Docker, because they have it... But yeah, not everyone has Docker installed, and not everyone must be forced to. Buildkit itself you could run as a separate daemon.
134
+
135
+ **Gerhard Lazu:** Interesting.
136
+
137
+ **Solomon Hykes:** But that daemon is going to need to talk to something capable of running OCI containers. And that something could be the Docker engine, it could be Containerd, which often is bundled with a Docker engine... But you can run it separately, if that's your preference. I think there's also calling runC directly, which is the even lower-level tool, so you can get away with that... Or you can do any of those - Docker Engine, Containerd - remotely. And you can even run the Buildkit daemon remotely.
138
+
139
+ So these projects - Buildkit, Containerd - can be run remotely, and there are well-documented methods for connecting to them and managing them remotely, via SSH for example, etc. So Dagger supports all of that. If you have your favorite custom Buildkit and/or Containerd infrastructure, either that you wanna build or that you already have, you can leverage it with Dagger, and it just works... Which is pretty cool.
140
+
141
+ **Gerhard Lazu:** It is very cool, yeah. It is very cool. I love that world... And I'm wondering, if I was to use that option today, are there issues around, for example, volumes? Because that always used to be a problem when you would mount local volumes... Or like if you had to copy lots of stuff, and then the Docker daemon would be remote. That, in my experience, didn't used to work as well... But maybe there's no such thing in Dagger.
142
+
143
+ **Solomon Hykes:** Yeah. So the job of Buildkit is you define DAGs - graphs of things to do, of operations - and then you run it; there's a beginning and an end, and then you get outputs. You provide inputs, you get outputs. Very powerful, very scalable, blah-blah-blah. But it does not use or depend on the concept of a Docker volume.
144
+
145
+ **Break:** \[35:36\]
146
+
147
+ **Gerhard Lazu:** This is something which I have to ask... I've been thinking about it and I have to ask it - how is Dagger different from Terraform?
148
+
149
+ **Solomon Hykes:** Terraform is a great tool for managing your infrastructure, and Dagger is a great tool for making your CI/CD pipelines more portable, so that you can deploy all the different parts of your application in the same way, and deploy them from anywhere - a local machine, CI, other CI etc. If your existing CI/CD pipelines involve Terraform in any way, calling Terraform directly, using resources provisioned by Terraform, then you should keep doing that with Dagger. Dagger will help you integrate with Terraform better if you're already using it. It does not replace Terraform. It shares some commonalities in how it works internally - there's a graph, there's a declarative language etc. A lot of people ask, because they know Terraform very well, and they -- it's mostly a matter of positioning. We have to be more clear what it's for, what it's not for... But yeah, the short answer is they're related, but complementary.
150
+
151
+ **Gerhard Lazu:** That is a great answer, thank you very much for that. So with that in mind, I love tools that use themselves, like dogfood them, and you're basically the first users, so that you can see what doesn't work. I'm a big fan of that. Does Dagger use Dagger? In what way, Sam? Tell us about it.
152
+
153
+ **Sam Alba:** So it's very important to us as well, because in my past career I worked on some products that we were not using internally, and it's always very difficult to just rely on your users to get feedback. You have to use your products. Honestly, sometimes it's not possible, somehow... But for us, we have to, and we need to, actually... Because as a software company, we also have problems and needs in terms of application delivery.
154
+
155
+ So there are a few areas where we use Dagger today, and a few areas where we'll use Dagger even more tomorrow. The first one that comes to mind is running the test internally. Right now we run all of our integration tests for the standard library, which is the reusable packages that Dagger provides. All of those tests are being run inside Dagger, which brings some advantages. First of all, you have CUE to define those tests. I don't know if you used any test framework in the past, but anyway, there is always some effort to implement those tests. Having CUE is really handy.
156
+
157
+ \[40:03\] Then another advantage is to make -- since all of your tests usually should run in your CI, defining all of the CI logic on your CI system is very challenging, because developing your tests is software development, and like any software development, it has to be maintained and evolve over time, and grow... And doing that on your CI always requires a very long and difficult development lifecycle. I don't know if you've --
158
+
159
+ **Gerhard Lazu:** Oh, yes.
160
+
161
+ **Sam Alba:** ...tried CircleCI, and GitHub Actions...
162
+
163
+ **Gerhard Lazu:** All of them. I know what you mean exactly.
164
+
165
+ **Sam Alba:** Yeah. And so replacing all of that with a Dagger app, so you can just develop your tests locally... And then your CI just needs to do the same, as a simple Dagger app. So that makes everything your CI does portable, including running your tests.
166
+
167
+ Then there is the deployment of our DAGs. There is even more that we need to move over to Dagger. An example right now - if people have access to the repo, they will see that we use Go Releaser to build the Go binaries... So Go Releaser is nice, because it can do one thing simply and well, but there are certain things you want to do in your release process that Go Releaser cannot do, and that we need to do with Dagger. So there is an effort in progress to move over to Dagger for that.
168
+
169
+ **Solomon Hykes:** And in that case we'll keep using Go Releaser; we'll just wrap it in a Dagger configuration that calls Go Releaser and other things. It's just that if you only call Go Releaser for everything, then there's no point in having the overhead of Dagger.
170
+
171
+ **Gerhard Lazu:** That is really cool. The first test of a successful tool or utility product is always whether it uses itself. And if it does - great, because you're the first one(s) to realize what isn't working. And it is in your best interest to fix it for you first and foremost, and then for everyone else. Of course. So I love that story. That's one of my favorite features of any product, the product using itself.
172
+
173
+ I'm glad that you mentioned the repository, Sam, because I know Dagger is still in a closed beta. So if someone wanted to start using Dagger today, how would they do that?
174
+
175
+ **Solomon Hykes:** Go to dagger.io, click Request Access, there's a short form... If you have a use case in mind, even if you're not sure, we ask extra questions about that. We love learning about use cases, hypothetical or real... And then we send the access relatively quickly. We're not actively hunting for people to join, because we have a fairly large pool of people already testing, and they have plenty of problems already, so we're focused on fixing them... But it's always exciting when someone joins; that's the best contribution anyone can make at the moment - it's a use case, and then you will have time to try applying Dagger to the use case and to tell us how it went... And especially what went wrong, so we can fix it.
176
+
177
+ **Gerhard Lazu:** Yes, I like that. And knowing a bit more about the story behind Docker's success and how challenging that was for the team, I think that this - at least in my mind - is one of the learnings that you took away from that. So rather than making it wildly popular, with everybody using it and getting so much feedback that you cannot even keep up with it - is it true that this is one of the things that you're doing differently, and in a way better?
178
+
179
+ **Solomon Hykes:** We actually did the same with Docker... It didn't last as long. No, you're right that we -- yeah, yes and no, because with Docker we did focus a lot on giving early access to people, even before it worked properly. Even before it built. In the first meetings we had privately, it couldn't compile yet; and then it ran, but it was fake. If you go into the history of Docker at the Docker repo, there's a file called fake.go that pretended things were happening, but they weren't...
180
+
181
+ **Gerhard Lazu:** \[laughs\] It's crazy.
182
+
183
+ **Solomon Hykes:** Yeah. We had little private meetups at our office - back when you did things in offices - before launching. People would come and talk about Docker, how they would use it... But what's different is that ended pretty quickly; it lasted a couple months, then boom, it was out... And we did feel like we were not ready.
184
+
185
+ \[44:11\] So Docker was very successful. It happened for a good reason, but we did pay a price in terms of preparedness, and especially on the business side, because we didn't have a plan for monetizing it, and we didn't really think it through, and then we had to think it through in a very different environment, where -- you know, you're in the tornado, when everything is happening very quickly. The whole industry wants everything from you, and you've gotta decide things right away. So this time we're designing a cloud product in parallel to the open source tool we mostly talked about. And when we launch, we plan on having a complete picture on how you're gonna use the open source tool, how you're gonna use the cloud product, how they'll connect, and how our business will be a sustainable one, so that the tool can continue to exist and improve... Because you don't wanna be in a situation where nobody pays for it, but everybody is pissed off at you for not investing money and making it better for them... Which is basically what Docker had to go through. And then they're angry when you start charging. "Why didn't you fix my Docker Desktop bug? Grrr! Why are you charging money for Docker Desktop? Grrr!" Pick one.
186
+
187
+ **Gerhard Lazu:** "I'm just gonna uninstall it!" \[laughs\] No, that's not how it happened for me, but I know exactly what you mean... So I'm glad that this is one of the things that you're doing differently. It makes a lot of sense. I think it sets up Dagger for success. One other takeaway from what you've just said, Solomon, is that a good idea, even if it doesn't work in practice right away, it's still a great idea. And people coming around it - that's amazing to see. People that think like you. People that see and feel the pain that you do... And I really like that about Dagger, which is what I also loved about Docker. That was a real pain, you addressed it... A couple of things you could have done better... As always. Even now...
188
+
189
+ **Solomon Hykes:** Just a few!
190
+
191
+ **Gerhard Lazu:** It will always be the case. Yeah, just a few. \[laughs\] Right. We're downplaying it. And I see this different approach in Dagger, and I think - again, without the benefit of the Docker hindsight - I can see it working. I really can see it working. I know that the community is very important. It's essential to the success of any product. So how are you thinking about that relationship, Sam, with the community?
192
+
193
+ **Sam Alba:** Well, for me the community in a project like Dagger should be seen as an extension of the internal team. On some projects you have some code that is open source, and external contributions are managed on the side, as a side thing, a side task. For us, we try to reapply what we did at Docker, and for that specifically I think it was a success. You know, at Docker it was really an extension of the team, and no matter whether you were an external contributor or an employee, it made no difference from a project governance point of view. You were able to open a pull request and propose something, discuss a design, propose an implementation, even participate in maintaining the project, from an outside point of view, without being an employee... And we apply the same thing right now on Dagger. It's not obvious yet, because the project is still in closed beta, but for the people who have access to it, I think they can feel this way.
194
+
195
+ Same thing for internal discussions - we have a Discord server with public channels, and we try to discuss everything related to development on public channels. We avoid side conversations internally, and we don't use Zoom. We use Discord and public channels for that. It's very important to involve everyone. That's what the community means to us.
196
+
197
+ I think without this aspect, you cannot build an ecosystem, really. You cannot really involve people and have external contributors feeling important to the project. It's not something that you can fake, basically.
198
+
199
+ **Gerhard Lazu:** I see it exactly the way you see it. I completely agree with everything that you've mentioned... So I'm wondering now, how can the community help you best? The mission is great, the idea is great, the way you're approaching the community is great... So how can the community reciprocate?
200
+
201
+ **Solomon Hykes:** \[48:20\] We have a strong opinion on open source contribution. I think it's misunderstood a lot, what open source contributors are relative to a project and product attached to it. I think when you participate in an open source project and you contribute to the code and the documentation or open an issue, any sort of contribution - that's the highest form of user engagement. An open source contributor is a power user. You know, there's users, there's power users, and then there's power users that are such power users that they actually contribute to the code and the docs of their own product. That's the ultimate level of achievement as a user.
202
+
203
+ But the word "user" is super-important, because what got you there is using the product, and liking it enough to use it more and become proficient in even how it works... And also, there's something that you disliked enough about it, or that was missing, that motivated you to contribute - maybe the code itself, maybe just the issue describing it, a detailed bug report... You know, there's a line you cross where you become basically the most valuable user possible, and it becomes a two-way exchange... And that's just what's really unique about open source, I think.
204
+
205
+ So basically, everything Sam said just before - either you build around a community of people like that, or you don't. Either you want it to be possible or you don't. With Docker, that worked really well, so we're doing it again. So that's indirectly answering your question, which is "How do you best contribute?" Just use it. Just keep using it. And if you try to use it and you fail to use it because you don't understand it, or it's broken, or maybe you're using it wrong, you're not sure - just any engagement around how you use it is automatically a contribution. Early in the funnel, where you're not really an active user, but you're trying - that's data on how can we help you and others like you get started. Just off the top of my head - the docs are incomplete, the explanation on the website is vague and confusing, the UX is pretty terrible, because it's ten layers of iteration that we haven't had time to clean up... We know all that, but from the inside, it's like a giant pile of work to do, an infinite pile of work to do... And every time you come in and contribute your priorities, your problems, your thoughts, your suggestions, you're helping us prioritize it... And that's how we'll make progress. So it's just immensely important.
206
+
207
+ That's our biggest test... There was a period at Docker when there was a lot of noise. I mean, it's still -- unfortunately, it's still kind of sticking in the history of Docker that at some point there was conflict with the community. We refused pull requests, and we argued with -- and you know, not a single time did that happen with actual users of our product that became our users. It was always with competitors and integrators that were not actually using our products. They just wanted to build another product, to get other people to use it, and they wanted the code. That's different; and it's part of the game of open source, and it's normal, but that's not the priority for us. The priority is someone who's using it, and using it so much that they wanna help improve it.
208
+
209
+ **Gerhard Lazu:** Focus is key, in everything, and knowing which are the things that you don't want to do is in my opinion more important than what you want to do. So from the perspective of focus, what are the things that you're focusing on? We've mentioned all the things that you would like to do, but you don't have time to do... What are the big items that you're focusing on in the next 3-6 months?
210
+
211
+ **Solomon Hykes:** \[51:56\] Well, we can tell you what our internal priorities are, and hopefully they align with the answer...
212
+
213
+ **Gerhard Lazu:** If you want... Yeah, I would love to do that. And I'm sure all our listeners would love to hear that. Yes, please.
214
+
215
+ **Solomon Hykes:** So we have a weekly team meeting and we talk about how everyone's doing, and what everyone's doing, whether they're stuck and need help etc. and then how everyone's work contributes to our priorities as a team, and then we update those as we go.
216
+
217
+ Right now we have a few priorities... One is a strong and engaged core community. So we need a core group of people that consider themselves Dagger developers, and they're actively contributing to the Dagger ecosystem. It's a very small one. So the number of those people is not important. 5 to 10 people that are continuously engaged and don't leave, so it's not a revolving door... It's 5 to 10 people with 100% retention, basically. You're no longer at zero... And you wanna make sure they're happy, and they feel really involved. So that's the first priority, the developer community.
218
+
219
+ The second is successful (we call them) accounts. Actual projects, actually doing something real with Dagger, being happy and continuing to do it. So we have lots of data, lots of people interacting with us, trying -- we have some analytics... So we see activity, but activity doesn't necessarily mean a successful project, a successful account. So we wanna make sure there are teams out there that are successful using Dagger, and we know who they are, and we understand why. Very important. That's the second.
220
+
221
+ And the third is that there already is a cloud product with great conversion and great retention - something we can actually sell. So that's the third priority.
222
+
223
+ **Gerhard Lazu:** I'm thinking of helping you with the second point, in that I do see the Changelog setup using Dagger. I really wanna try it out. I also think that I could help you with the first one, but time and priorities, as you've just mentioned - we all have them... So it's subject to that. And I also know that -- this was a tweet that went out several weeks back. I know that you're looking for people that want to join Dagger, to actually work on Dagger.
224
+
225
+ **Solomon Hykes:** Yes.
226
+
227
+ **Gerhard Lazu:** So which is the ideal candidate that you would like to apply for those very, very few roles?
228
+
229
+ **Sam Alba:** Well, I think it's a tough question, because everyone is different, so we don't have a type of people that we can describe really well in a blob of text. That said, there is this blob of text that was tweeted a few weeks ago that explains it as much as we can... And right now, we are looking to build a small team. A small team of what we call founding engineers; a founding team, not only engineers, actually... Really, a founding team that can participate in making Dagger evolve, but not only that - in building the company as well.
230
+
231
+ **Solomon Hykes:** Great company.
232
+
233
+ **Sam Alba:** And so, it'd be the first ten people - and I was part of them, actually. I was the first engineer hired at dotCloud back then, along with a few other people... And we were involved not just in building the product, but also in building the company, setting the culture for the next ten after us. So this is the stage we are at today. We are looking for people who are willing to build a product, for sure. That's the first thing. Are you convinced about the problem we are solving? That's probably the very first criterion. But the second one is really "Are you ready to participate in building the company?" and everything related to that - the company's culture, engineering processes, how you want to manage your day-to-day with other people... It's really about building the company, it's about defining all of those things.
234
+
235
+ Some of those things are defined today, but they are gonna change, and everyone participates in that. It's not the founders dictating how things should happen. It's really teamwork.
236
+
237
+ **Gerhard Lazu:** \[55:43\] I really like the way you're thinking about this, I really like the description that you put out there. I know it's really difficult to capture that ideal candidate; it's actually impossible, to be honest, because there's always wildcards and curveballs... But I really like the way you went about it, and I wanted to mention it, because it felt important and significant.
238
+
239
+ So now, as we are wrapping up, I'm wondering, as a listener, which is the most important thing for me to take away from this conversation? Solomon, would you like to start?
240
+
241
+ **Solomon Hykes:** Yeah. I think if you are involved in DevOps and CI/CD, life is painful, but it feels like the future is exciting, and we agree. So the pain is not mandatory; it's okay to solve it. \[laughs\] I think we're just in a temporary state as an industry where it's very early and very broken, but also very exciting. So we're trying to contribute, do our part in making it less painful. And we need help.
242
+
243
+ **Gerhard Lazu:** I see what you mean. How about you, Sam?
244
+
245
+ **Sam Alba:** Well, I totally agree with Solomon, and I will just add, as an extra piece of information, that although Dagger is in closed beta right now, it's really easy to get access to it. We watch the queue pretty carefully every day and make sure that people are not waiting too long to get access to it... So feel free to sign up, and you'll get access soon. And once you get access, it feels like an open community, one that will be widely open at some point soon, once we feel it's ready... And then we are available to talk about your use cases.
246
+
247
+ We are also allocating time every week with the team to make sure we are responsive to people's questions, and to people asking for help with building their internal platform as well. So we can help with writing and implementing internal platforms using Dagger, giving advice and all of that... So feel free to join the community.
248
+
249
+ **Solomon Hykes:** Also, if you've signed up and you're wondering why you didn't get access yet, check your Spam box... Because it's probably there.
250
+
251
+ **Gerhard Lazu:** As for my takeaway - we missed Andrea. We hope he gets better... And I'm looking forward to all four of us getting together soon, maybe after I have set up Changelog running Dagger, and there will be some learnings...
252
+
253
+ **Solomon Hykes:** Ooh...
254
+
255
+ **Gerhard Lazu:** I like the sound of that too, Solomon...
256
+
257
+ **Solomon Hykes:** Let's see if you're still happy to talk to us after that.
258
+
259
+ **Gerhard Lazu:** I'm pretty sure I will. There's something there. But thank you very much for joining me. This was a great pleasure, and I look forward to next time.
260
+
261
+ **Solomon Hykes:** Thank you.
262
+
263
+ **Sam Alba:** Thanks so much.
Assemble all your infrastructure_transcript.txt ADDED
@@ -0,0 +1,249 @@
1
+ **Gerhard Lazu:** This is another KubeCon 2019 follow-up, and that was episode (in Changelog) 375, when we talked with Dan and Jared about Crossplane; it was about two years ago, end of 2019. But Marques was here as well... So Jared, where is Marques?
2
+
3
+ **Jared Watts:** Marques is actually still within the Crossplane ecosystem, which is actually pretty awesome. Equinix, \[unintelligible 00:02:57.02\] so he's over there, and still contributing to Crossplane a lot. We don't miss him too much, because we still get to see him.
4
+
5
+ **Gerhard Lazu:** Should we have added him to this invite? Was it my fault for not adding him? I think it was, right? Marques, it's my fault.
6
+
7
+ **Jared Watts:** Yeah, we definitely miss him on this episode here, but you could probably get him on a podcast that's more focused on what Equinix is doing as well too, specifically.
8
+
9
+ **Gerhard Lazu:** Okay, it's great to know that I'm not the only one thinking that. That was like my follow-up thought; that's great, I love that. Okay, so many things happened since 2019... 2020 was a very interesting year, from so many perspectives... But let's just think about Crossplane. I would like to focus on that, and we'll explain why. Dan, do you remember which Crossplane version was out in November 2019, just before KubeCon? Let's see how good your memory is.
10
+
11
+ **Dan Mangum:** That is a hard question, and my memory must not be that good. I would guess somewhere around 0.10, but that could be way off.
12
+
13
+ **Gerhard Lazu:** Jared, what do you remember?
14
+
15
+ **Jared Watts:** Yeah, I think Dan you're pretty accurate there. It was either like 0.8, 0.9 or 0.10 or so. Definitely not 1.0 yet, I know that for sure.
16
+
17
+ **Gerhard Lazu:** \[04:09\] So before I checked, the only thing that I knew was that it was pre-1.0. That's the only thing that I remembered... Because I checked, I know that it was 0.5.0. You'd just cut that a few days before KubeCon.
18
+
19
+ **Jared Watts:** That early?
20
+
21
+ **Gerhard Lazu:** Yeah, that early. So what version are we on now, just so that listeners have a point of reference.
22
+
23
+ **Dan Mangum:** We're now on 1.3, and we have an official support policy as well for maintaining older branches... So our active branches right now are 1.1, 1.2 and 1.3.
24
+
25
+ **Gerhard Lazu:** I love that. We'll come back to that later... But I still want to continue with this train of thought, 0.5.0, and 1.3.0. I did a GitHub Compare to see how many commits there have been between 0.5.0 and the latest tag. Do you wanna guess how many?
26
+
27
+ **Dan Mangum:** Oh, I would say over a thousand, probably...
28
+
29
+ **Gerhard Lazu:** Jared, what do you think?
30
+
31
+ **Jared Watts:** I'm gonna go like 1,300 is my guess.
32
+
33
+ **Gerhard Lazu:** Dan, do you wanna readjust, or are you happy with over a thousand? That's a bit generic.
34
+
35
+ **Dan Mangum:** I'll go with 1,299.
36
+
37
+ **Jared Watts:** \[laughs\]
38
+
39
+ **Gerhard Lazu:** Okay. 1,838, across 24 contributors. That's a lot of changes. A lot of changes have happened since November 2019. Now, I would have loved to see how many lines have been deleted and added, but I couldn't, because the compare was just too big and I couldn't see that. Now, if we take a step back from the specific changes, and contributors, and versions, what do you remember changing about Crossplane in the last two years?
40
+
41
+ **Jared Watts:** A big part of the experience has actually changed as well, too. That's something I tell people a lot when I'm talking about some of the history and the evolution of the Crossplane project... It took us a good while to land on the final experience here around compositions and building your own platforms and abstractions... And that was not in 0.5. We had an earlier -- maybe some hints towards that; we were doing something that was more tightly modeled after storage classes in upstream Kubernetes... But now we have the ability to define your own compositions and abstractions that are much more flexible and much more powerful. That's one of the biggest things experience-wise that's changed over the past year and a half since we've talked.
42
+
43
+ **Gerhard Lazu:** So before we go into what are compositions and abstractions, Dan, what is Crossplane?
44
+
45
+ **Dan Mangum:** So Crossplane is a way for infrastructure platform teams to build their own platform. A lot of folks come to it, and it's interesting that we talk about this 0.5 release to 1.3, because I think in a lot of ways the experience now kind of reflects the maturation of the project as well... So a lot of folks come to Crossplane because they want to provision infrastructure using the Kubernetes API, the API they're already familiar with for deploying their workloads... And as they grow in their adoption of the project, they start to move into these higher-level concepts. Jared already mentioned composition; we also have a concept of different types of packages and extension mechanisms... And as they move through it, they kind of start to evolve from just deploying things on Kubernetes to actually building a platform for others to consume and deploy. And we really like to give you that experience of building a platform right off the bat... So if you go to the documentation for example, you create something generic, a database, and you can select whether that's provisioned on AWS, or GCP, or Azure, or anywhere else you'd like... And you can also select different configurations that can match that database type. You may want a VPC with your database, you may want to connect it to an existing one. So we try to give you that upfront, but a lot of folks still come in and provision their infrastructure and then grow into building a platform on top of that.
46
+
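+ To make the idea concrete: once a platform team has published a database abstraction, an application team's request can be a single namespaced object. The sketch below is modeled on the Crossplane getting-started docs; the group, kind and parameter names are whatever the platform team defines, so treat them as placeholders.
+
+ ```yaml
+ # Hypothetical claim an application team might create once a platform team
+ # has published a database abstraction. The schema is defined by that team,
+ # not by Crossplane itself.
+ apiVersion: database.example.org/v1alpha1
+ kind: PostgreSQLInstance
+ metadata:
+   name: changelog-db
+   namespace: team-changelog
+ spec:
+   parameters:
+     storageGB: 20
+   compositionSelector:
+     matchLabels:
+       provider: aws            # could just as well map to gcp, azure or linode
+   writeConnectionSecretToRef:
+     name: changelog-db-conn
+ ```
+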
47
+ **Gerhard Lazu:** So are the abstractions in your description a database? Would that be an abstraction? And then the implementation would be specific to a provider, or... Is that how that works?
48
+
49
+ **Dan Mangum:** \[08:18\] Yup. It could absolutely be specific to a provider, and if you're across the different clouds, or you're across on-prem and on the cloud, those could be different implementations... But also different combinations of resources. For any single kind of abstract type or composite type, you can have any number of managed resources, which are the granular things that actually represent external APIs, like an RDS instance, or a VPC. You can combine any number of those to satisfy the abstract type.
50
+
51
+ So it may be the actual destination for where the underlying requests are being made, or it may be the configuration of the different resources that make up that abstract type.
52
+
53
+ **Gerhard Lazu:** That makes sense. And the composition - it comes as explained, like these things being composable, and then having -- do you have stacks, or what do you refer to those compositions as, as a whole? Do they have a name?
54
+
55
+ **Dan Mangum:** So the general mechanism is referred to as composition, which is also an API type in our schema. I think the closest thing to what you're describing right now is configuration packages. This is a way to basically say "This is an abstract type definition, this is the schema for it, this is a set of compositions that can satisfy it, and these are the dependencies it has on providers", which are another type of package. The providers are things like provider AWS, provider GCP, provider Helm. And that configuration package, when you install it into your cluster, is gonna bring along those dependencies in the form of providers, and it's also gonna bring along those abstractions... And you can also declare dependencies on other configurations.
56
+
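+ As a rough illustration, the metadata for such a configuration package (often a crossplane.yaml at the package root) might look like the sketch below. The package name, provider names and version constraints are invented, and the exact fields vary between Crossplane releases.
+
+ ```yaml
+ # Sketch of a configuration package's metadata. Installing the package
+ # pulls in the declared provider dependencies alongside the abstractions.
+ apiVersion: meta.pkg.crossplane.io/v1
+ kind: Configuration
+ metadata:
+   name: platform-ref-example
+ spec:
+   crossplane:
+     version: ">=v1.1.0"
+   dependsOn:
+     - provider: crossplane/provider-aws
+       version: ">=v0.18.0"
+     - provider: crossplane/provider-helm
+       version: ">=v0.7.0"
+ ```
+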
57
+ We really like to see people doing -- what we do see really mature Crossplane users doing is composing compositions inside of each other. So if you describe an abstract type like a database, you may make that into a higher level type called an app, that may provision a VM in a database, or something like that. So you can kind of nest these and build them together, which gives you really powerful building blocks for constructing a platform.
58
+
59
+ **Jared Watts:** To add onto that too, if I can... Getting back to what I was saying about how the experience has changed drastically in Crossplane over the past year and a half, something that's quite relevant here is that earlier on, in the earlier versions of our experience we were building, we as a project were defining what the abstractions are, like a MySQL abstraction, a Postgres abstraction, a Kubernetes cluster abstraction. And we quickly found that that's gonna lead to a one-size-does-not-fit-all type of scenario, and we learned that the community really wants to define their own abstractions, so they have complete autonomy and they're empowered to define what is the shape of the API that's important to them, and what does MySQL even mean to them, because one size does not fit all, and you don't want a lowest common denominator problem... So enabling a lot of flexibility to define exactly what these higher-level abstractions mean to your organization, to your business, to your scenarios and needs was a huge part of upgrading and making the experience in Crossplane a lot more powerful.
60
+
61
+ **Gerhard Lazu:** I think that makes a lot of sense, and I really like the way you think about this. I love it. However, there's another element which I feel is very important to this flexibility... How do you discover all those abstractions? Is there a central place that you go to just to see them? How do people share these abstractions amongst themselves?
62
+
63
+ **Jared Watts:** Yeah, good question. So there's a couple different ways to do that, and Dan has done a lot of work on this, so you can jump in on that and add more, Dan, definitely... So we can package these abstractions up and share them in any OCI-compliant registry. So they have that sort of reuse, and the ability to make themselves available to a broader audience through any sort of registry.
64
+
65
+ \[12:01\] We at Upbound are building a registry that has a lot of those rich discoverability, search and sharing type features built into it - with a semantic understanding of these Crossplane packages - to make it easier to find them, share them, reuse them etc. But at the end of the day they're packaged into just an OCI, a regular old image, so they can be shared and reused fairly easily with any registry.
66
+
67
+ **Gerhard Lazu:** I think you mentioned discoverability there, which is really important... So it's great to share them, but how will users discover them, and how will users understand how this, for example, abstraction combines with something else? How do you link them together? How do they -- a tree-like structure, or some sort of relationships? When you said, Jared, that you built this experience, where, how?
68
+
69
+ **Jared Watts:** Yeah, this experience is being built in our Upbound Cloud service that we're building. Our startup, Upbound - they're the creators of the Crossplane project, and we're building a SaaS product and a whole experience, an enterprise-focused experience around Crossplane. So since we have a complete understanding of what is the package structure, what are the contents, what does it mean to be a Crossplane composite resource and configuration and all that domain-specific knowledge, we're able to build a rich experience with discoverability and sharing all those types of things in our Upbound Cloud service on upbound.io.
70
+
71
+ **Gerhard Lazu:** That makes perfect sense. Anything to add, Dan, to that?
72
+
73
+ **Dan Mangum:** Yeah, well I really liked that Jared pointed out that any OCI-compliant registry can host Crossplane packages. That makes them extremely portable, and that becomes really important when you're an organization that has potentially really high security concerns, or only run an on-prem setting, or something like that. OCI-compliant registries have become ubiquitous in the industry, kind of alongside the rise of Kubernetes... And the ability for folks to be able to build their private images and push them to their private registries is definitely a big win.
74
+
75
+ But I know you mentioned this notion of a graph, which I think is a really big part of the untapped potential of the Crossplane community and kind of the marketplace around that. So I mentioned before that those configuration packages can declare dependencies, and you can kind of infinitely compose those. What happens is when you install a configuration, it is going to resolve all dependencies in there. Crossplane will do that for you. It actually generates kind of a manifest that says "These are the lists of packages, and these are the relationships between them." And it can go through and actually resolve to the correct version of them. So when you create a dependency and a configuration package, you say something like "I need provider AWS, and it needs to be greater than v0.18", and Crossplane will make sure that provider AWS is present in your cluster like that. And if you have two configuration packages with a common parent, it can go through and resolve that there will be no conflicts.
76
+
77
+ So we actually generate a directed acyclic graph for all the packages that are installed, which gives you that powerful ability to create a reproducible platform, where you get to the point where if you just install that parent node, that top-level node in your DAG, then you're actually able to reproduce your platform in any Kubernetes cluster where Crossplane is installed.
78
+
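+ A small sketch of that nesting: a higher-level configuration can declare a dependency on another configuration as well as on providers, and Crossplane resolves the whole graph when it is installed. Names and versions here are invented.
+
+ ```yaml
+ # Sketch: a parent configuration depending on another configuration plus a
+ # provider; installing it brings in the whole dependency graph.
+ apiVersion: meta.pkg.crossplane.io/v1
+ kind: Configuration
+ metadata:
+   name: changelog-platform
+ spec:
+   dependsOn:
+     - configuration: example/platform-ref-example
+       version: ">=v0.1.0"
+     - provider: crossplane/provider-helm
+       version: ">=v0.7.0"
+ ```
+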
79
+ **Gerhard Lazu:** That answers so many of the questions which I didn't ask but was thinking about, so thank you, Dan. That's perfect. There's one more question which I'm thinking about, because I know that we answered the What fairly well, like what it is; how it works - we went into that a fair bit... But I don't think we answered the most important question, and this one I think is perfect for Jared - why Crossplane? Why is it important? Why does it matter?
80
+
81
+ **Jared Watts:** Yeah, great question. I think there's maybe two different branches of thought there to perhaps explore. The first one is that some of us that created the Crossplane project - we also created the Rook project as well, too. Rook is storage orchestration for Kubernetes. We found that in the early days of persistent storage for Kubernetes, the story needed to evolve a little bit there before people started to become more comfortable with running storage or data-persistent sort of things inside the cluster.
82
+
83
+ \[16:09\] So we found there that some of the work that the special interest group for storage in Kubernetes had done was really strong. Persistent volume claim, storage classes, things like that. And we found very early on that applying those same patterns for being able to dynamically provision storage would also work very well for other types of infrastructure platform resources such as databases, and buckets, and even clusters themselves.
84
+
85
+ So that was the original why of Crossplane, is "Hey, we've done great things with Kubernetes for storage... Let's do more infrastructure resources inside of Kubernetes and bring them in to being managed and provisioned and controlled by the control plane itself."
86
+
87
+ And then beyond that, we've found that there's a very strong story too for businesses that are starting to have their own shared services infrastructure platform teams as well, too. They have a responsibility to provision infrastructure and get new services up and running for a whole set of application teams around them... So being able to have some reproducibility, being able to enable self-service for the application teams is a really strong story to be able to make their jobs easier, and for the application teams to be able to get to production faster and have reliable infrastructure, and normalizing on the standards and practices for the whole organization. It just makes the software delivery story - which has a huge dependency on infrastructure - all the stronger.
88
+
89
+ **Gerhard Lazu:** So I can see how this Why is captured really well in the cloud control plane, which is the abbreviation or the short explanation for what Crossplane is. But I think the Why goes deeper into "Why I would want to use it, why is it important." And I really like this idea -- I think, this is my perspective... You take the best bits of Kubernetes and specifically the API, the unified API, the resources, and you make that available -- or actually, no. You make infrastructure available via that very simple API, and you bring all the cloud providers. When I say "all" - that's always a work in progress; there can always be more. But that's like a growing ecosystem. And thinking about infrastructure as just an API request to your Kubernetes - that's really, really powerful. So Dan, why is Crossplane important to you?
90
+
91
+ **Dan Mangum:** I think I have a bit of a unique perspective on this, as someone who's a younger individual in the industry. I like to say that I kind of grew up in the Heroku generation, in that it was always really accessible for me to be able to get access to hardware, and folks in my generation. I have this familiarity with AWS, and even these higher-level things like Heroku and other services where you just said "I'd like a database to run my app", or something like that.
92
+
93
+ One of the things that when I was experimenting with those different services I noticed really quickly is you always were operating at someone else's defined level of abstraction. So AWS you can think of as being pretty granular, and you have to understand a lot of moving parts to be able to use it effectively. So you have to understand a lot about networking to use almost every service on AWS. You have to maybe understand a little bit about how Postgres works or MySQL works to use RDS. On Heroku, at the other end of the spectrum, you get a database and you don't really get to tune it to your own liking, or anything like that.
94
+
95
+ So my interpretation of Crossplane when I first saw it open sourced (I believe) at KubeCon Seattle in 2018, I was finishing up my schooling and I recognized it as something that was gonna really revolutionize the way that organizations were able to provide a platform like that... Because if I as an individual college student was feeling the pain of these different services, you can only imagine what a large enterprise organization was feeling... So the ability to actually take that and build your own platform, but also use other people's platform... We've talked already about this marketplace, right? I envision a future, and I think others do as well, where you go and you get the bits of the infrastructure stack that you don't care about - you get those from other companies. They might publish them, individuals might publish them... Just like we consume libraries from GitHub, or something like that, and you're able to say "I wanna take these off-the-shelf bits and sprinkle in my own personalized touch", and then you get an infrastructure platform that's tailored to your needs, but also doesn't require a lot of effort for you to build out, which definitely sounds like a magical experience to me.
96
+
97
+ **Break**: \[20:32\]
98
+
99
+ **Gerhard Lazu:** You mentioned discovering all the things that made sense for other people - things they package and put out there. Jared, you mentioned the Crossplane cloud, where I imagine that some of this exists; people can discover it, people can get started... I also imagine that some of these building blocks you curate yourselves and you make available to the community... Do you need to, I imagine, create an account with GitHub...? I haven't tried it. I definitely do wanna try it, maybe (you're right) after we finish recording this... But what I'm trying to understand is how much do you get when you get started, in terms of the experience? What is the experience that you get to begin with, and at what point do you need to say "Okay, I like it, I'm serious about this. I wanna start paying for Crossplane cloud"? What does that onboarding and what does that early experience look like, Jared?
100
+
101
+ **Jared Watts:** Yeah, I think that since this has been an open source project for over two years now, we've always strongly believed in investing in the community as well, too... So we had to first build this experience and iterate on it to get to where we are today... But through that process we've gotten some great adoption and we've gotten folks that are heavily invested in the project themselves as well, too... So with the core of Crossplane, the open source upstream edition there, you can do a whole lot of this -- the core functionality is all there. So you can provision infrastructure in any cloud provider, or on premises. You can package your own abstractions and define your own platform and push those to a registry to share them with other people. So the core of the value proposition is there in the upstream project.
102
+
103
+ I think that when you start getting to enterprise scenarios and you wanna get maybe some more visibility and a richer experience around the core concepts, then that's when you can start getting more involved with what we're building in Upbound cloud as well, too. For instance, if you want to manage a bunch of different Crossplane instances, or you've got multiple of them, maybe one for each team, or one for each environment, having some functionality to be able to manage the teams around that, and the permissions, and auditing and all that sort of stuff starts becoming important... And then I think there's a whole bunch of really great experiences you can build that provide insight and observability and debuggability and all that sort of stuff into Crossplane as well, too... With a rich browser to see all of your infrastructure that's being managed, what are the relationships between them... I think that things like that, insight and observability, manageability of the platform start becoming quite interesting as well too, which is sort of some of the experience we're building in Upbound Cloud also.
104
+
105
+ **Gerhard Lazu:** \[24:05\] I think that makes a lot of sense, because once you reach a certain scale, then you start having problems that you just wanna pay someone else to handle, because that's not the value that you're adding... And that makes perfect sense. But I'm wondering more around that discoverability feature. For example, do I need to define my own abstraction to get started? And sure, I will, but what can I use out of the box quickly to understand how this fits together? How can I discover what's out there before I grow and before I am bought into Crossplane? What does that look like?
106
+
107
+ **Jared Watts:** Yeah, let me take a quick stab at that one, too... So I think there's a couple different depths that you can start diving into. One is the open source upstream crossplane.io docs; they're very, very useful. We have a Getting Started guide that kind of introduces you into what is a composition, what is a composite resource, how do you connect Crossplane to your cloud providers to start provisioning infrastructure... And it walks you through a very simple scenario where you're creating an abstraction around a database and providing that to your application teams so that they can self-service and get their databases...
108
+
109
+ So that's a great place to get started, and I think that anybody that walks through that getting started guide on Crossplane.io, the docs there, is going to start understanding the concepts and start being productive there.
110
+
111
+ To go another step further, something else that we've done in the open source is we've created a set of reference platforms. These are higher-level abstractions that start trying to show what are some scenarios, what are some use cases, what are some things you can accomplish that go a little bit deeper than just the Hello World, Welcome, Getting Started type of guide.
112
+
113
+ So we have a handful of them... Some of them are around creating clusters and data services in the different cloud providers, like in AWS, or GCP... And then we've got one for how do you create a multi-cloud Kubernetes, how do you create an abstraction around Kubernetes and be able to provision a cloud of your choice, and provide a set of services inside of that cluster for your applications and your workloads to consume...
114
+
115
+ And then I did one in a recent talk as well too for a cloud-native. So we've created a cloud-native reference platform as well too, that composes together a lot of different projects within the CNCF ecosystem and shows some of the more modern approaches such as using GitOps and having observability and service mesh and all sorts of things inside of your application cluster as well, too. So those reference platforms are a big help to take you from a getting started to "Oh, this is what I can do at a higher level and some of the more complicated scenarios that I can solve." So we try to kind of take you through a little spectrum of your journey with Crossplane.
116
+
117
+ **Gerhard Lazu:** I really like everything that you said so far, especially the reference architectures... And I would really appreciate having some of the links, and this is why - in 2022, for the Changelog.com setup, I see Crossplane being part of it. So I would like to have fewer makefiles, fewer commands to run locally, and more of having this control cluster, the C cluster, cluster zero, that then sets up all the other clusters, and composes the entire Changelog.com infrastructure. So that's one of my goals for 2022. And I think that Crossplane is at a point where it can enable that relatively easily. But I see some components missing, and this is where Dan will guide me through what those steps look like. For our Kubernetes provider - it's a managed Kubernetes, and we're using Linode. So the first thing that we'd need to do is somehow provision Kubernetes clusters in Linode. I imagine that would need to be a Linode provider, which I don't know whether it exists yet, but it definitely didn't exist when I last checked Crossplane about a year ago. That would be the first thing.
118
+
119
+ \[27:57\] The other thing - and this is more of a nice-to-have - is integrations with Fastly, the CDN. There's certain configurations that would need to happen... And I know that this is not the \[unintelligible 00:28:06.29\] that we were talking about, like AWS, GCP, Azure; this is a CDN. But I see Crossplane fitting there really, really well, declaring our CDN as a Crossplane resource. Because it's all part of the Changelog.com stack. And success in my eyes is being able to define the entire Changelog setup in these Crossplane abstractions. So how does that sound to you, Dan, and what are we missing that I don't know yet?
120
+
121
+ **Dan Mangum:** That sounds like a perfect use case, and I will admit that I listened to the 2021 infrastructure for Changelog.com episode, and you kind of enumerated that in the past - I think it was six months before that - you had kind of said "We're moving to Kubernetes to do this", and folks had said "You're running an NGINX server, a Phoenix web app, and a MySQL database", or Postgres database I believe you said... And you got a little bit of pushback on moving to Kubernetes, because folks said "You don't have a microservice architecture. Why are you doing that?" And I love how in that episode you went through and said "Well, you know there's all of these kind of hidden dependencies that we have." I believe you mentioned certificates, CDN, CI/CD, monitoring, all of those things. And as someone who's worked on Crossplane for a significant period of time now, that was really music to my ears. So getting to your specific question, I do know that there is a provider, Linode, that is very early on, but does exist, and I believe is usable. So that's one side of it.
122
+
123
+ Getting to things like CDN - that's absolutely in scope for Crossplane. That's a little different from other infrastructure-as-a-service. But we have providers for all types of things, and that cloud-native platform that Jared was just mentioning - it makes use of a really important provider that I wanna bring up, and also a newer provider that just landed, that I think would be useful.
124
+
125
+ The first one is provider Helm. So what Jared's talking about is provisioning Kubernetes clusters, and then provisioning Helm charts into them, but that being a single package. So you create your Changelog.com instance as a Kubernetes object; behind the scenes that spins up a Kubernetes cluster, maybe it installs Linkerd into it - I know y'all had some issues with measuring latency on some requests - it puts your Phoenix app in there, NGINX, whatever else you need... It also spins up your managed Postgres instance on your cloud provider of choice, unless if -- I know you mentioned that y'all might want to continue running that in a cluster, but as you alluded to, and many folks like Kelsey Hightower have said, that's definitely something that we would encourage you to look at managed offerings for; so you're gonna include that in a single package. And just recently, one of my co-workers, Jared and I's co-worker at Upbound and a contributor to the Crossplane project who's worked on a lot of provider Helm just created a new provider called provider Kubernetes. So if you don't wanna use Helm as your abstraction, you can actually now create Kubernetes objects directly into both the cluster that Crossplane is running in, as well as any tenant clusters you spin up.
126
+
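+ For a rough idea of the shape this takes, a provider-helm Release that installs a chart into a freshly provisioned cluster might look like the sketch below. The chart, repository and ProviderConfig names are placeholders, and the exact schema depends on the provider-helm version in use.
+
+ ```yaml
+ # Sketch of a provider-helm managed resource that installs a chart into a
+ # target cluster. All names here are illustrative.
+ apiVersion: helm.crossplane.io/v1beta1
+ kind: Release
+ metadata:
+   name: changelog-app
+ spec:
+   forProvider:
+     chart:
+       name: phoenix-app                      # hypothetical chart
+       repository: https://charts.example.com
+       version: "1.2.3"
+     namespace: changelog
+     values:
+       replicaCount: 2
+   providerConfigRef:
+     name: lke-cluster                        # points at the provisioned cluster
+ ```
+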
127
+ So I think altogether we're gonna have a lot of pieces for exactly what you wanna do, and something that would be really exciting to me is, you know, y'all might create a template or a configuration package for deploying a Phoenix web app, and someone else might come along and see that in the registry and say "Hm, I also have a Phoenix web app with these components. Let me just put in the other bits I need to be able to provision my website." And you can share that, and it can be verified, and go through our conformance testing, and that sort of thing, and be available to others. So I think you're on the exact right track with the direction you're going.
128
+
129
+ **Gerhard Lazu:** That's amazing. I knew that our journeys would meet at some point, and I think they were getting very close to that point where we start walking together, in a way... I'm very excited about discovering what it looks like. I'm imagining that your documentation has all the examples I need. I know exactly who to reach out to if I get stuck, so that's great... And I know that all of this happens in the open, so everybody will benefit from this and it'll be visible to everyone who wants to see how this is done...
130
+
131
+ \[32:09\] I'm also wondering - this is another component which I would like to introduce in our stack, which I feel will solve a lot of the runs on the "It runs on my computer" sort of thing, "It doesn't work on Jerod's computer." I mean different Jerods, Jerod Santo from Changelog. And I'm wondering, what is the relationship between Crossplane and Argo CD when it comes to deploying apps and keeping configuration in sync?
132
+
133
+ **Dan Mangum:** Absolutely. I'm really glad you brought that up, because a ton of Crossplane users we're seeing are using Argo CD, and that's definitely something both in the Crossplane ecosystem and Upbound Cloud that we're definitely in support of. Typically, when folks are using Argo CD, a lot of times with any sufficiently-sized architecture they'll move to this app of apps model. So you kind of have your initial app, which tells where to get your other Helm charts from, or whatever you're using to deploy...
134
+
135
+ So a big thing that can be enhanced is now alongside your applications the infrastructure is defined in the same repo... I know you mentioned you like monorepos, so we can definitely give you that experience. And you can start using GitOps to provision your database, or using GitOps to provision your CDN, or something like that... And it's tied to your deployment of your application. So you're moving from these nice packaging mechanisms for workloads to a nice packaging mechanism for an application.
136
+
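+ One possible shape for that wiring is an Argo CD Application pointed at a directory of Crossplane claims in the monorepo. The repository URL, path and namespaces below are placeholders.
+
+ ```yaml
+ # Sketch: Argo CD keeps the Crossplane claims stored in Git in sync with
+ # the cluster, alongside the application manifests.
+ apiVersion: argoproj.io/v1alpha1
+ kind: Application
+ metadata:
+   name: changelog-infrastructure
+   namespace: argocd
+ spec:
+   project: default
+   source:
+     repoURL: https://github.com/example/changelog-monorepo
+     path: infrastructure/claims
+     targetRevision: main
+   destination:
+     server: https://kubernetes.default.svc
+     namespace: changelog
+   syncPolicy:
+     automated:
+       prune: true
+       selfHeal: true
+ ```
+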
137
+ And because there's a standardization on the Kubernetes API, that means if you're running Crossplane on your Linode Kubernetes service, then Argo CD can target that; if you're running a hosted control plane on Upbound Cloud, where we actually run Crossplane for you on our own infrastructure, then you can target that with Argo CD, because we give you kubectl access to that cluster. So there are definitely a lot of benefits in going with this GitOps approach. We certainly encourage that kind of outlook.
138
+
139
+ **Gerhard Lazu:** What would you recommend, Jared? Would you recommend that we set up a Kubernetes cluster where we run Crossplane, and that controls everything else? Or would you recommend that we use Upbound Cloud?
140
+
141
+ **Jared Watts:** Yeah, I think it depends on what you're going for, I think. I think that the model in Upbound Cloud works really well if you want to have a single, centralized control plane that is going to be managing a lot of other control planes in other places, so it becomes kind of a central point of managing all of your infrastructure, and you can spin up new clusters, workload clusters, and deploy applications and services into them... I think that that's a really good model for Upbound Cloud, is having a centralized point there.
142
+
143
+ I think it's a perfectly relevant model as well, too... If you're running one cluster or you want to have it on premises, then you can run a Crossplane instance yourself there, and it'll have all the workloads, all the applications, all the services within one single place as well, too. It's a perfectly fine model for that.
144
+
145
+ One thing that we started doing as well too is that we actually have released a distribution of Crossplane that helps you run Crossplane on premises; even if you're not going to run the hosted Crossplane instance inside of Upbound Cloud, you can still connect it to Upbound Cloud and get all those observability and manageability features as well, too. Even if you're running everything on premises and having all your workloads in a single place that is under your control.
146
+
147
+ **Gerhard Lazu:** One of the biggest reasons why I think I would want us to use Upbound Cloud is because the most important thing that controls everything else is a managed service. So if there's an issue with Kubernetes - well, we don't know about that. Actually, we don't even care how you run that managed Crossplane service. All we care about is that it's always available. If there's a problem, you'll fix it... And we will always know that the thing which manages everything else is healthy. I think that's a very big value proposition. And we're not asking you to manage our entire infrastructure, because you don't even know what it is, it keeps changing, so on and so forth, so I really like this decoupling... But what I do expect to happen is whatever manages everything else, you take care of it, because you know it inside out. And to me, that is like "Yes, please. Sign me up." That's what I'm thinking. Would you disagree, Dan?
148
+
149
+ **Dan Mangum:** \[36:10\] Absolutely. And one of the things that I think is a really important distinction here from other ways to provision infrastructure - so you have your kind of legacy ClickOps, if you will, where you go into the console and you create it --
150
+
151
+ **Gerhard Lazu:** No, hang on... This is too good. Please say that one more time. This is so good... This is the first time I hear that, and I love that... I think others need to take notice. We can't just skim over it; this was too good. Please say that one more time.
152
+
153
+ **Dan Mangum:** I definitely can't take credit for the term, but the term is ClickOps, where you go in and provision your infrastructure by clicking around in the console. I don't know who to attribute for that, but it's certainly not myself. But hopefully, that's not what most organizations are doing... But kind of the next evolution of that - with things like Terraform or Pulumi or infrastructure-as-code tools. And those are really great, because you can version that config that you run to go ahead and provision your infrastructure. That's an awesome model.
154
+
155
+ One of the things that could be nice about that is that you don't have a service that you have to worry about to provision that infrastructure. You run it from your local machine. The drawback of that is that if you're not actively running something, then that infrastructure is free to change or be modified, and that's especially a big deal in an organization where you have lots of people provisioning and modifying infrastructure, and things like that. So having that hosted control plane, as you're saying, you can allow someone like Upbound to host that for you. And then you also don't have to worry about your infrastructure getting out of sync, because as soon as it is, you can get an alert for it. As soon as something goes down, we can bring it back up for you, according to how you would like that behavior to be reflected. So I think you're spot on with both the fact that having that hosted kind of central point of provisioning infrastructure is really important, but also just having something that's constantly evaluating your infrastructure is a big gain over what most organizations are doing.
156
+
157
+ **Gerhard Lazu:** I mean, even if -- we're not a big team, right? Changelog is a fairly small team, like 3-4 people... And some of us are spending very little time - myself included - on the actual infrastructure side; and I think people miss this.
158
+
159
+ Now, we wouldn't want this knowledge to be stored in a wiki or captured in some docs, or even captured in some code. We would want this to be automated so that you don't need to know much once you encode what you want to happen. And as long as the control plane is a managed service, which is very important, then things will just keep being applied, and everything will be healthy on the management side.
160
+
161
+ Now, if there is a problem in the integration with the providers - well, that's a separate problem. That will happen regardless. But at least, you don't need to be an expert in SRE, an expert in ops to run this thing; it just runs itself, literally. And that's a dream. You're literally automating yourself out of the job, and I think that's the best possible approach to this kind of thing. If you automate it all, it just takes care of itself. How amazing is that?
162
+
163
+ **Jared Watts:** Yeah, and I think that something you said there, Gerhard, is kind of interesting as well, too. A lot of folks say "automating yourself out of a job", but in reality, what you're doing is you're automating yourself into an ability to handle more important problems. There's so many services and components and things underneath the stack that people are building and delivering applications today. It's not really reasonable to know every single thing and worry about every single component as well, too.
164
+
165
+ So the ability to automate and to offload some of that into managed services, or well-founded processes around automation as well too is really nice to be able to free you up, to be able to worry about more things that are important, and I guess recording other episodes of awesome podcasts as well too, in your case there.
166
+
167
+ **Gerhard Lazu:** It's scary how well I could anticipate that. I was expecting one of you two to say what you've just said, Jared, and it's scary that I could anticipate that. It's like, wow. You're blowing my mind right now... Because you're right. What about rather than doing some tedious ops work, SRE work, what about trying new services out? What about trying to level everybody else up? What about helping the industry grow? How amazing would that be? What about trying things out and helping those things improve, such as Crossplane? Now, isn't that a much more interesting proposition than configuring load balancers and figuring out why your NGINX config is wrong?
168
+
169
+ \[40:30\] I mean, that's what I wanna see, and that's what I wanna promote... So thank you, Jared, for preparing everything so nicely for that mental picture. You're not automating yourself out of a job, but you're automating yourself out of tedious tasks... Which - they get old. I mean, if you've been doing this for 10-20 years - sure, things change slightly, but it's more or less the same thing. We are proposing a new model. Crossplane is proposing a new model, and that's what gets me most excited about it.
170
+
171
+ **Jared Watts:** Yeah, and you could see the same exact thing Kubernetes did for applications, of being able to - instead of dealing with running services on a particular VM or making sure they're up and running with Systemd, Systemctl, whatever, being able to run those completely across an entire fleet of VMs, and have machines that have redundancy and consistency, and just everything working overall, and being able to self-heal, and have all that reliance over time is such a nice model... And continuing to do that in other areas of the stack, in a broader scope as well too - I think it's just a really good way to keep going with all this.
172
+
173
+ **Break**: \[41:32\]
174
+
175
+ **Gerhard Lazu:** Is there anything else that I should keep in mind as I explore this Crossplane integration, Dan?
176
+
177
+ **Dan Mangum:** I would really appreciate if you keep in mind the pain points. There's a lot of really powerful technology in Crossplane, especially around designing compositions, packaging configurations... But the experience is still a little bit painful, in my approximation. Right now, to design your schemas for your abstract types, you actually have to write an Open API v3 schema in yaml and push that, which - obviously, that's a much lighter lift than doing something like writing an application, and writing some logic, and that sort of thing.
178
+
179
+ That being said, that's an experience that we really want to improve, both in the Crossplane community, and on the Upbound Cloud side as a product. We've definitely started to invest in some of those areas, particularly around editor support, being able to do things in the browser... We recently had a hack week at Upbound, where we worked on some of those things and made some big strides... But we definitely appreciate feedback from folks like yourself, who have that knowledge of what they want the experience to be... Because for us, this is a product we use ourselves within Upbound to manage our infrastructure, and that sort of thing. That being said, it's a bit of a small sample size within our own organization.
180
+
181
+ \[43:58\] So all types of input we get, whether it's folks coming in Slack, folks opening issues, jumping on calls with us or doing a podcast episode with us - those all help us make the product better. And the great thing that you've alluded to multiple times now - it's all open source, so if you want it to be different, you can come along and make it better as well. And I think we've developed a really strong community around that for new folks to come in and empower them to be able to add the features to Crossplane that they want, or work with us to add them as well.
182
+
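+ For context, the Open API v3 schema Dan mentions lives inside a CompositeResourceDefinition. A trimmed-down sketch, with illustrative names modeled on the getting-started docs, looks roughly like this; exact fields depend on the Crossplane version.
+
+ ```yaml
+ # Minimal sketch of a CompositeResourceDefinition (XRD). The openAPIV3Schema
+ # block is the part that currently has to be written by hand.
+ apiVersion: apiextensions.crossplane.io/v1
+ kind: CompositeResourceDefinition
+ metadata:
+   name: xpostgresqlinstances.database.example.org
+ spec:
+   group: database.example.org
+   names:
+     kind: XPostgreSQLInstance
+     plural: xpostgresqlinstances
+   claimNames:
+     kind: PostgreSQLInstance
+     plural: postgresqlinstances
+   versions:
+     - name: v1alpha1
+       served: true
+       referenceable: true
+       schema:
+         openAPIV3Schema:
+           type: object
+           properties:
+             spec:
+               type: object
+               properties:
+                 parameters:
+                   type: object
+                   properties:
+                     storageGB:
+                       type: integer
+                   required:
+                     - storageGB
+ ```
+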
183
+ **Gerhard Lazu:** You're ticking all my boxes right now. It's scary, literally. Slack, how to give feedback, GitHub, the experience, the focus... I'm hearing all the right things. It's scary how excited this makes me, so I have to dial it down a bit, because it's just like - again, you're ticking all my boxes. So - okay, that's good to know...
184
+
185
+ How does this sound to you, Jared? ...if we wanted to use more than just Kubernetes to run our Changelog app in - I'm thinking multi-cloud; if we wanted for example to try out Flyio, and Render.com, and Kubernetes on Linode - what would that look like in Crossplane? Is it even possible?
186
+
187
+ **Jared Watts:** Yeah, that's a good question. I am not super-familiar myself with at least Fly; a little bit with Render... But I think something that's really important to remember here is that the machinery and framework in Crossplane is all there, such that the support for lots of infrastructure that already exists, providers for all sorts of in-cloud, on-premises sort of things - those all exist, but anything that has an API can be managed by Crossplane. So we have an extension mechanism for anybody to write providers that can have a lot of coverage of a lot of different places, and with that base layer of "Hey, here's a simple provider", you can almost think of them as a driver for a Crossplane to talk to some set of infrastructure or some set of services, being able to write a provider for that. It then gives you the ability to plug it into the rest of your infrastructure, compose them together, have a consistent model for all of your infrastructure applications and services... So it's really nice to be able to extend Crossplane through anything or to anything that has an API, and there's a lot of examples around that.
188
+
189
+ And a lot of community people are building interesting things that we didn't expect as well either. For instance, the providers you expect in Crossplane to manage cloud resources like GCP and Azure etc. - those are all there. But then we've also got some community ones to manage things like GitHub and GitLab, to be able to manage repos and teams etc. in those places. And then one for SQL to be able to create users, or tables, or things as well, too. So really, literally, anything that has an API can be incorporated into Crossplane, and so giving you the ability to then start stitching and managing everything together with a very simple, normalized, consistent interface.
190
+
191
+ **Gerhard Lazu:** This reminds me a lot of how Terraform used to work, and how we used to use Terraform for many of these things, like managed DNS, for example; we used to have that integration. What is the comparison between Terraform and Crossplane, if any?
192
+
193
+ **Dan Mangum:** Yeah, I think that's a question that we get a lot, and I know even on the Crossplane blog we have some posts about "What are the differences between them?" We've already talked about a few of them, one of them being that active reconciliation. That's kind of the obvious one. The difference between a control plane and an infrastructure-as-code tool. We see that as a really big benefit.
194
+
195
+ You know, getting down into some more specific details... And this may not be super-applicable to Changelog, for instance, because you all have a small number of folks in your company - but you know, you may grow in the future. But one of the big parts that we think is really important about the Crossplane composition model compared to other infrastructure tooling broadly is this concept of bringing the level of permissioning to the level of abstraction. If I break that down a little bit... When you use something like Terraform, you can create modules and compose them into higher-level concepts to where you as the person actually executing it and requesting infrastructure - you don't have to understand all the underlying bits.
196
+
197
+ \[48:03\] That being said, whether you're executing on your local machine, or you have some sort of jump box that you log into that has the proper credentials - whatever gets rendered out at the end of that pipeline when all the modules are resolved and the conditionals are evaluated, you need to have those permissions, or the system you're using needs to have permissions to actually create those resources on AWS, or on Linode, or wherever you're actually provisioning that infrastructure. And that's fine if you as an infrastructure admin are gonna have those credentials anyway, if you're the only person doing it. However, when you move to a platform approach, what you want to be saying is "I'm giving you the ability to create the abstract type, and I define the policy and mapping behind that. I'm never giving you permission to create the granular resource..." And that abstraction that you create is going to be long-lived, right?
198
+
199
+ So one of the big aspects of composition, kind of getting into more of the technical implementation, is there's two flavors of every abstract type that you create, which you can optionally disable one of them, but - there's a cluster-scoped version, a Kubernetes cluster-scoped resource, and then a namespace-scoped resource. So you as a developer requesting infrastructure for your application would likely create something at the namespace scope. And you can have \[unintelligible 00:49:16.11\] to say "This developer and this team can create a database in this namespace", and then you control the mapping as an infrastructure admin to how that actually gets rendered out... And the provider controller that actually provisions the infrastructure is what is given the credentials to create that.
200
+
201
+ So you're never giving the app developer and their namespace credentials to even talk to AWS. You're giving them credentials to basically be able to provision what you've defined as an abstraction, which may go to AWS, may go to Linode, may go to your on-prem infrastructure... But that isolation is really important, and persisting that isolation. That database object continuing to exist in their namespace is a really important distinction from other infrastructure systems, which we believe as you scale and as you grow and as more and more folks are provisioning infrastructure using Crossplane, that becomes even more important.
202
+
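+ To illustrate "permissioning at the level of abstraction": the only right a team needs is to create claims in its namespace, which plain Kubernetes RBAC can express; no cloud credentials are involved. A sketch, reusing the hypothetical claim type from earlier:
+
+ ```yaml
+ # Sketch: a namespaced Role that lets a team request databases via claims.
+ # Provider credentials stay with the provider controller, never the team.
+ apiVersion: rbac.authorization.k8s.io/v1
+ kind: Role
+ metadata:
+   name: database-claimer
+   namespace: team-changelog
+ rules:
+   - apiGroups: ["database.example.org"]
+     resources: ["postgresqlinstances"]
+     verbs: ["get", "list", "watch", "create", "update", "delete"]
+ ```
+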
203
+ And from an Upbound Cloud perspective, we're giving you services around managing your credentials and getting a view into your global infrastructure picture, being able to have a view of that graph, of the relationship of requesting infrastructure and what actually gets rendered out, and what credentials are being used, and when you give someone the ability to create a database in their namespace, what does that mean in terms of their ability to create something on Linode? Those are all really important things that we think sets Crossplane apart from other infrastructure tooling systems.
204
+
205
+ **Gerhard Lazu:** That is a great answer, thank you very much for that. I'm sure this is something which I'll be referring back to, so I love having this recorded... Because I'm sure as I gain more experience with Crossplane, this will become more and more relevant, and even necessary to go beyond the getting started part.
206
+
207
+ You mentioned - I think either Dan or Jared, I can't remember exactly who, but you mentioned about the hack week that you recently had at Upbound... And I'm wondering, Jared, what other things came out of that hack week that you were excited about?
208
+
209
+ **Jared Watts:** Yeah, really good question. The hack week was something I was super-pumped about. I'm kind of more involved in engineering leadership these days than hands-on-keyboard technically focused... So that was something that for the team I was super-excited to make happen. So the whole guiding principle there was that people were going to be focusing on what was important to them, what's something that either they had been dreaming of, or something maybe that was a big pain point for them as well too, so just making themselves more productive, or collaborating with new teammates as well, too.
210
+
211
+ \[51:41\] I think in a hack week there's a lot of different ways you can take it, and it's really up to the individuals participating in it to get what they want out of it. For instance, some of the other things that came out of it - Dan had mentioned that provider Kubernetes had come out of it; so a brand new provider was one thing that came out of it. The ability to monitor and get metrics from on-premises instances of Crossplanes and being able to surface those up is something that came out of the hack week project. Some developer tooling around the way we build our client browser side apps as well, too; our frontend apps. Some strong developer tooling to get designers integrated more into the process and designers being able to change different UI values and styling of an app, and have that ship to production as well too was something that came out of it... And then also developer tooling for being able to have remote debuggability for clusters as well too was something that came out of it.
212
+
213
+ So it was just a whole spectrum of things... People working together on some things in open source, some things for Upbound... There's been a lot of people making progress, and people get really inspired when they get to work on something that's very important to them internally as well, too. So that was just a really cool experience.
214
+
215
+ **Gerhard Lazu:** I love the sound of that. I'm wondering, is there a blog post or something public that people can go to and see these specific tools?
216
+
217
+ **Jared Watts:** We just wrapped up the hack week recently... We had made a little bit of noise on social media about it on Twitter and stuff like that, and at the end of the week we did a demo session where we were kind of live tweeting information about it... So on Twitter there's a little bit of information, but I think we're gonna do a write-up to have a blog post about it coming up soon as well too, on Upbound's blog.
218
+
219
+ **Gerhard Lazu:** I would love to get that. So maybe by the time this episode goes live, I would love to have a link to put it in the show notes, so that others can see... Because there's a lot of cool stuff.
220
+
221
+ One thing which I haven't heard, and maybe I'm getting confused as to whether this came out of the hack week or not - it's the k8s container registry. Dan, what can you tell us about that?
222
+
223
+ **Dan Mangum:** Yeah, so k8s container registry was a project -- it's about a month old at this point... And for folks that aren't familiar, Crossplane (we've already said) uses OCI images for its packages... And it actually doesn't go through the Kubernetes node to be able to pull that. So on an individual Kubernetes node you have a container runtime which basically facilitates pulling images from various registries, and that's how you get an image to run a pod, or a deployment, or something like that.
224
+
225
+ Crossplane - our packages are very small, our OCI images are very small, because they actually just contain a stream of yaml in them... So we actually go directly from Crossplane to the registry and we pull that in. We have our own cache for those packages that are just stored in a volume and you can use whatever backing storage you want for that.
226
+
227
+ So one of the things that I saw as a pain point when people were developing new packages was that they were having to build their package and push it to the registry and then install it declaratively into Crossplane. And this is a good model, and it's definitely really useful when you're consuming packages from elsewhere... But if you're just trying to get a fast development loop, you definitely don't wanna be pushing your package to a registry just to use it in your local client cluster, or something like that. So what k8s container registry does is it actually utilizes the Kubernetes API server itself to push images through its proxy functionality.
228
+
229
+ Behind the scenes, the Kubernetes API server is just a REST API. And one of the endpoints for pods is the proxy endpoint. So k8s CR basically is just a CLI tool which will pull an image from your Docker daemon, or a tarball that you have on your local system, and it will push it to a registry running in your Kubernetes cluster through the API server, so you don't have to actually expose your pod; you don't have to create a service or a load balancer or anything like that as long as you have kubectl access and you have \[unintelligible 00:55:27.06\] to hit that proxy endpoint. You can actually just push straight into your Kubernetes cluster... Which means something like Crossplane that needs a registry has one running right beside it.
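+ A minimal sketch of what that proxy trick looks like in practice - not the k8scr tool itself; the namespace, pod name and port below are hypothetical, and it assumes `kubectl proxy` is running locally against a cluster that has an OCI registry pod inside it:

```python
# Hedged sketch (hypothetical names): reach an in-cluster registry through the
# Kubernetes API server's pod proxy endpoint, with nothing exposed externally.
# Assumes `kubectl proxy --port=8001` is running in another terminal.
import requests

API_PROXY = "http://127.0.0.1:8001"      # local kubectl proxy to the API server
NAMESPACE = "crossplane-system"          # hypothetical namespace
POD, PORT = "package-registry-0", 5000   # hypothetical in-cluster registry pod

# Everything after /proxy/ is forwarded to the pod, so /v2/ reaches the
# registry's OCI distribution API without a Service, Ingress or LoadBalancer.
base = f"{API_PROXY}/api/v1/namespaces/{NAMESPACE}/pods/{POD}:{PORT}/proxy"
resp = requests.get(f"{base}/v2/")
print(resp.status_code)                  # 200 means the registry answered
```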
230
+
231
+ \[55:39\] We've shown things where Crossplane is running and has a sidecar container which is an OCI-compliant registry. And importantly, a lot of this functionality was very easy to build, because there's a library that we depend on in Crossplane that a lot of folks are big fans of at this point, called go-containerregistry. This gives you kind of the low-level bits of how you actually construct an image. We've continued to kind of evolve our usage of that package.
232
+
233
+ And actually kind of alluding to that hack week, myself and my co-worker Michael - we worked on a way to actually build OCI images in your browser. Since we're just putting yaml in them, you can imagine putting an editor in a web page that you receive, and using actually Rust and WebAssembly we're actually able to build and push an image from inside your browser itself.
234
+
235
+ So lots of fun stuff around that, and lots of stuff that will help the developer loop - maybe it won't be used in most production settings, but getting folks to the point where they can have a package that's consumable and really useful to them as an organization as quickly as possible is definitely a goal for us.
236
+
237
+ **Gerhard Lazu:** So what I've heard is pushing container images, OCIs, straight into Kubernetes, with no external container registry, just using kubectl and the Kubernetes API. That sounds amazing. I love that. And I have ten follow-up questions, especially around the WebAssembly and the web browser... But we're running out of time. So the only way we can solve this is by having a follow-up, which we'll talk about next. But for now, just to wrap this up nicely, this is the last thing which I'm thinking about - if one of the listeners was to take away one thing from our conversation, what would that be, Jared?
238
+
239
+ **Jared Watts:** I think one of the biggest things here for me is that folks are starting to buy into Kubernetes and really understanding and seeing the power of a control-plane type of approach for many of their applications, that "Hey, you can do that for your own infrastructure as well, too", and that we have a super-welcoming community that loves to talk about these things, support people, get more people involved as well, too. We've been watching the community continue to grow, and the community helping itself, and building contributions for themselves as well, too... So the more, the merrier in that party. So if you wanna manage infrastructure in Kubernetes, come to Crossplane.io and join the community.
240
+
241
+ **Gerhard Lazu:** That sounds amazing to me. What about you, Dan? What would you want people as a listener to take away from this discussion?
242
+
243
+ **Dan Mangum:** I would say that my ask for folks is to think a little bigger with their infrastructure. And what I mean by that is envision a future that seems impossible right now. A lot of folks think this pie-in-the-sky vision of being able to just consume these infrastructure packages, build them into higher-level abstractions, and then have their own control plane, their own version of Heroku, seems kind of far-fetched, and it likely feels like it would be super-hard. There is a lot of tooling in place to be able to do that today in Crossplane, and folks going in and exercising that and saying "This is not quite there" or "This part is great" is gonna help us make that more of a reality. So if that's something that sounds of interest to you, which I imagine for most folks it would be, please come and try it out. It's free to try all these things, they're all open source... And see what you can build. Maybe you'll build the infrastructure package that large companies start to depend on, and that'll be useful for you both for managing your infrastructure and maybe for getting some stars on your GitHub as well.
244
+
245
+ **Gerhard Lazu:** This was too much fun. It was very difficult to contain myself and not be more excited. Thank you very much for this lovely conversation. See you next time.
246
+
247
+ **Jared Watts:** Thank you so much for having us. It's always a pleasure to talk with you.
248
+
249
+ **Dan Mangum:** Absolutely.
Bare metal meets Kubernetes_transcript.txt ADDED
@@ -0,0 +1,363 @@
1
+ **Gerhard Lazu:** Before the episode I mentioned about my history with Packet, that neither of you are aware of. I almost joined Packet in the summer of 2019, and do you know what happened?
2
+
3
+ **David Flanagan:** No.
4
+
5
+ **Gerhard Lazu:** Okay. That's the answer which I was expecting... \[laughs\] There's two people that know what happened - it's Zach and Dizzy. Zach said, "Emails got lost in the shuffle." That's exactly what happened. I didn't know what he meant at the time, because I knew nothing about Equinix Metal, so I couldn't imagine just how busy Zach and Dizzy were at the time. This was, again, the summer of 2019, and in January 2020 it was announced that Equinix was acquiring Packet. So we almost ended up working together... And these are not the reactions I was expecting. You're just looking at me like -- I'm not sure if it's disbelief...
6
+
7
+ **David Flanagan:** It is disbelief, because I can't believe what you're telling me, that you almost worked at a company, and the reason that you're not is because an email just was missed?
8
+
9
+ **Gerhard Lazu:** Emails got lost in the shuffle, yes. So I didn't follow up as much as I should have maybe... Actually, I think you're right, David. Maybe on paper I wasn't as good as I thought I was. So Zach, when he got my email, he replied, because Bruce was the head of engineering at the time. But Bruce had just left. And Dizzy - Dave Smith - I think he'd just joined. So there was a big shuffle, and in the background, I'm sure the Equinix stuff was happening, so he was too busy. And I was like, "Hey, can we talk? This is what I'm thinking." "Oh yes, sure. Let me connect you to Dizzy." And he said he'd get back to me, but he never did. Which was okay, because so many things were happening then, even for me, so I wasn't really insisting on it... But it never happened, and it could have.
10
+
11
+ **David Flanagan:** \[04:16\] You should just email tomorrow and be like "Hey, can we pick this back up?" And then just come and work with us. I think that'd be great.
12
+
13
+ **Gerhard Lazu:** Okay, I'll think about that. Thank you for that. That's one idea, for sure. But the thing which I wanted us to talk about is what attracted you to Packet in the first place? I'll go last. Marques, would you like to go first? What attracted you to Equinix Metal?
14
+
15
+ **Marques Johansson:** Sure. It's interesting, your setup there, because I hadn't realized that you have a strong engineering background before we got to know each other the last time... And I'm wondering if the role that you would have been looking for would have been in engineering, or would have been, say, on our team, in the dev rel team. And that question, or that answer (whatever it is) of yours, that's what I was looking for. So I kind of moved from this pure engineering role to this hybrid engineering/marketing/dev rel role, and that's what attracted me. I had other opportunities on the table that were more engineering-focused, and I really wanted to be able to have the freedom that goes along with not having the same sort of engineering, "You know, we need this sprint over in two weeks, with these PRs merged."
16
+
17
+ I liked what I did previously at Linode, where I was pulling together an ecosystem of tools, and I guess I wanted to relive that experience a bit with the learning of Kubernetes behind me. And there was a strong use and need for that kind of tooling at Equinix Metal.
18
+
19
+ **Gerhard Lazu:** That makes sense. What about you, David? What attracted you to Equinix Metal?
20
+
21
+ **David Flanagan:** It was all one huge misunderstanding, and I'm surprised that I'm still here.
22
+
23
+ **Gerhard Lazu:** So it's basically the opposite of me, right? \[laughs\]
24
+
25
+ **David Flanagan:** I thought I was joining the Metallica fan club, and now I'm writing code and doing dev rel for a bare metal cloud company. No, I think I've found an interesting career, and I think it's because I've always worked directly with bare metal for the last 20 years - you know, there was no cloud back in 2001 when I got my first role... And I worked with bare metal, I had to drive on-site to fix the bare metal, I did a cloud migration ten years later, but always ended up back at bare metal.
26
+
27
+ So when the opportunity came around to work for a cloud company that was allowing me to use an API called "Get a physical server in a rack, with networking, with GPUs, with CPUs, with RAM, and no other noisy neighbors, no virtual machines" - I mean, it just seemed like magic. And the team at Equinix Metal is just phenomenal, all the people that are there. So it was a combination of my background of appreciating and preferring working with the metal, but also just the team that Mark Coleman and Tom \[unintelligible 00:06:46.07\] were putting together.
28
+
29
+ **Gerhard Lazu:** I think that makes a lot of sense, because you're right, that's one of the things which attracted me to Packet at the time, in that you could get those really amazing machines, really amazing hosts, which you couldn't get anywhere else, via an API call. It was as simple as that: being able to make an API call and get a bare metal machine was new. We could get compute via API calls, as popularized by EC2 and AWS, but not bare metal machines. I think they came later to AWS. And even now, I'm not sure how they work; I think it's more complicated than if you just went to Equinix Metal.
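+ A hedged sketch of what that single API call can look like against Equinix Metal's public API - the project ID and token are placeholders, and the plan, metro and operating system values are only examples:

```python
# Hedged sketch: provision a bare metal server with a single HTTP call.
# PROJECT_ID is a placeholder; METAL_AUTH_TOKEN is your Equinix Metal API token.
import os
import requests

PROJECT_ID = "YOUR-PROJECT-UUID"
API_TOKEN = os.environ["METAL_AUTH_TOKEN"]

resp = requests.post(
    f"https://api.equinix.com/metal/v1/projects/{PROJECT_ID}/devices",
    headers={"X-Auth-Token": API_TOKEN},
    json={
        "hostname": "ship-it-demo",         # illustrative values only
        "plan": "c3.small.x86",
        "metro": "am",
        "operating_system": "ubuntu_20_04",
        "billing_cycle": "hourly",
    },
)
resp.raise_for_status()
device = resp.json()
print(device["id"], device["state"])        # the machine starts provisioning
```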
30
+
31
+ The other thing is the focus on networking. I could appreciate the focus that Packet at the time was putting on actual hardware networking, layer 2, layer 3 stuff - that is very, very rare, by the way. And the Equinix acquisition makes sense. Equinix - isn't it all about networking, data centers? That's how I know Equinix.
32
+
33
+ So what I'm wondering is, now that Packet is with Equinix, how is it different? Were you there before, or did you know Packet before? What has changed since Equinix, do you know?
34
+
35
+ **Marques Johansson:** \[07:57\] I came along after the announcement was out there. What hadn't changed yet was the name, Packet; it hadn't yet changed to Equinix Metal, just as a division, as a product of Equinix. What I've noticed is that going from an org of about 200 people to an org of about 10,000 people, there's a lot more going on, and sometimes there's overlapping products and overlapping teams... So finding the right people out there to help contribute to what you're doing, and getting input from other teams and other customers - I've noticed that that's really played out... A lot of the products that we've been delivering in the last year and a half - when I say "we", I mean like the broader Equinix Metal - have been delivered because that's what the Equinix customers are looking for, and it's what the Equinix Metal customers, who are also Equinix customers, are looking for. They have services in racks, and now they wanna be able to bridge those services together.
36
+
37
+ **Gerhard Lazu:** Okay. Do you remember much about Packet, David, before Equinix Metal?
38
+
39
+ **David Flanagan:** I had used it a fair number of times. I thought it was a really cool service. It had some limitations around availability of facilities. I think Packet before the Equinix acquisition was only available in 6-7 facilities... And when you look at it, that's a great acquisition, because Equinix literally are the backbone of the internet, to a certain degree. They've got over 60 sites around the world. In fact, I think the number is larger than that. And you know, direct fiber lines into AWS, and Azure, and Google Cloud, and all of these other providers. And it's been able to take what Packet did really well, which is the ability to just stick an API in front of a bare metal machine, and be able to expand that to those extra facilities all around the world, and just make it so that you can build ridiculous, low-latency services anywhere, using any \[unintelligible 00:09:46.08\] for instance... Something you can't do on other cloud providers, but possible through Equinix. I think it's just awesome. It was such a great acquisition. It was a really exciting time to be there.
40
+
41
+ **Gerhard Lazu:** So the one thing which what you've told me reminded me of was what bare metal servers used to be like before Packet... And I don't think people realize just how big Equinix Metal now is. So before, anyone used ServerBeach or ServerCentral?
42
+
43
+ **David Flanagan:** No...
44
+
45
+ **Gerhard Lazu:** IBM, they acquired some -- SoftLayer. That's it, that's the company; do you remember SoftLayer? Do you remember RackSpace, when you used to get servers from those companies? ServerBeach even precedes them... But there was that SoftLayer, there was also OVH in Europe; they were a very big bare metal hosting provider... Online.net, which I use a fair amount even today... And there's a few others. Ah, Leaseweb - that's another big one. So that's what getting a bare metal machine used to be like before Packet.
46
+
47
+ Packet came along and you thought "Well, this is neat. It's small, it's interesting, it's a crazy good idea, very simply executed..." And I think since becoming Equinix Metal, Packet grew a lot. And I don't think people can appreciate just how big Equinix Metal actually is.
48
+
49
+ So you mentioned 60 locations... What about the instances? What about the networks? Did anything change regarding the services that it offers?
50
+
51
+ **David Flanagan:** Well, you're asking what's changed from the Packet days to the Equinix Metal days, and I think there are a number of things worth highlighting - we're moving our hardware from our older, what we call legacy facilities that Packet owned, into these massive IBXes that Equinix has all around the world. By doing so, we are able to take advantage of their network capacity across global sites, and really, the capacity is not even the amazing thing; it's the \[unintelligible 00:11:41.04\] table. You know, Equinix having so many PoPs around the world, when you make a request, what you wanna see is something efficient, that's going to get you the minimum amount of hops to the destination that you need... And Equinix has the infrastructure. They have that \[unintelligible 00:11:53.29\] table and they have the ability to make sure that your request is the best that it can be.
52
+
53
+ \[12:00\] So by leveraging their backbone, using their network, moving our hardware into their facilities, we're getting access to all of that. And because Equinix have direct partnership with all of the major cloud providers, every workload on the internet is probably on AWS, GCP, Azure, maybe some Equinix Metal, maybe some DigitalOcean. But Equinix has all those connection. So by running your workloads on Equinix \[unintelligible 00:12:22.20\] Equinix Metal, when you've got to speak to other services and other clouds, you really are getting the most efficient route to that traffic... And I think that's a really important aspect of it.
54
+
55
+ And of course, there's the hardware component of it as well, something that Marques kind of touched on - there's a higher price on the actual server itself taking up the physical space within the business exchange... So we have had to let go of some of those smaller instances... But you know, that's a trade-off we're making just now, and hopefully something we can address in the future.
56
+
57
+ **Gerhard Lazu:** I think that's a great point, and I think this is most likely the best outcome, or at least one of the best outcomes, because you still keep the simplicity of provisioning these instances, of defining your network, but you also benefit from the scale of Equinix. And that is a great combination. So I don't think people realize just how much of the internet Equinix actually runs - the switches, the routers, the physical cables... There's so much of it worldwide.
58
+
59
+ **David Flanagan:** And why would people, right?
60
+
61
+ **Gerhard Lazu:** Exactly.
62
+
63
+ **David Flanagan:** Until I joined Equinix, I had no idea who Equinix was, and then I'm in the door three months and just overwhelmed by how much Equinix really is there across the internet. Really, really cool.
64
+
65
+ **Gerhard Lazu:** Exactly.
66
+
67
+ **Marques Johansson:** You kind of don't wanna know that, right? ...when there's some sort of status page outage, something wrong in an Equinix facility, it is a big deal, so it's good to not hear about those things, to have them not actually happen.
68
+
69
+ **Gerhard Lazu:** Yeah. One thing which hasn't changed, by the way - or at least I think hasn't changed - is that you still forgot to build Kubernetes. Do you remember that post from Zach? That was a great one. "Sorry, we forgot to build the Kubernetes platform", or whatever the title was, but that was a great one. I'll link it in the show notes. So you still don't have a managed Kubernetes service... Why is that, David? Why do you think that is?
70
+
71
+ **David Flanagan:** Yeah, that's a great question. I think Equinix Metal is probably one of the few clouds -- in fact, probably the one cloud that doesn't have a managed Kubernetes these days. I think what we're seeing is that Kubernetes is now becoming this ubiquitous API for deploying applications, and all cloud providers should make that easier for developers... But not Equinix Metal.
72
+
73
+ And there's some really good reasons for that. One is that Equinix Metal have no control over the hardware when it's provisioned to you. So you come along, you use the API or any of the providers and say "Here, I want some bare metal." Other than us stamping on a network configuration, that machine is yours. We can't get onto it, we can't modify it, we can't change it... So actually providing you a managed Kubernetes experience is something that's really difficult to offer, and we wanna give people the flexibility and the power -- that's why people come to bare metal as well, I think - you want something that you're not getting from virtualized hardware, and it comes down to just either pure CPU performance, networking performance, access to GPUs...
74
+
75
+ People want flexibility and the power of bare metal, which is why you come to \[unintelligible 00:15:15.24\] All of that is really important. You don't want Equinix Metal's provisioning steps or anything that we are doing to get in the way of that. The machine is, for all intents and purposes, your machine.
76
+
77
+ **Gerhard Lazu:** I think you want the purity, right? You want the purity of hardware. So keep that experience as pure as possible, without adding any of your daemons, or any of your agents, or whatever you wanna call them... So keep it as pure and pristine as possible from what you would get if you were to physically provision it in a rack... But then make it easy for people to add whatever they want in the best possible way, including Kubernetes. So what is the best way of providing Kubernetes on top of bare metal? That's where you come in; specifically you, David. Or at least that's my understanding.
78
+
79
+ **David Flanagan:** \[16:01\] That's where I come in specifically?
80
+
81
+ **Gerhard Lazu:** Yeah. Like, you with the Rawkode Academy, right? What is the best way that you can get the neatest Kubernetes on top of the bare metal infrastructure, as well as many other things? Because let's be honest, Kubernetes is an amazing technology, but it's just that. It's just one way of orchestrating containers. And a few other things - nice API, the ubiquity across all the cloud providers... But it's just software. And maybe five years from now there'll be something else even greater than Kubernetes. As difficult as that is to imagine, I'm convinced that's going to be the case.
82
+
83
+ So rather than pinning yourself to a specific technology, you're keeping the two separate, but still allowing users to mix it nicely, so they get the best of both worlds, without basically having the abstractions leak into one another, right? Because that's what tends to happen - CNIs, CSIs...
84
+
85
+ **David Flanagan:** I think that's a really interesting point that you mentioned about "Will we be running our workloads on Kubernetes in five years?" And I wanna kind of come back to that on a different tack. What is Kubernetes? It's a distributed system for running distributed systems. It is a distributed system made of multiple components. And this is where bare metal ties it together as well - you can build your own Kubernetes cluster however you want, through all these different interfaces that are available.
86
+
87
+ Of course, the Kubernetes project only makes certain things flexible right now, which is the CSI, the CRI and the CNI. So you have free rein to pick whatever plugins you want there. But I see that evolving over the next five years. I don't think we'll be running Kubernetes in five years as what Kubernetes looks like today, but more bespoke implementations, particularly the scheduler and the kube-proxy. I think these are components that people are having a lot of hurdles with, especially at scale, and especially for HPC. Nobody is running the standard Kube scheduler for high-performance workloads, especially on bare metal. You have to use your own custom implementations to make that work.
88
+
89
+ So I think we'll still always have the Kubernetes API in five years, I just think that the underlying components of that Kubernetes cluster won't look like a Kubernetes cluster today. And with regards to how you get Kubernetes on bare metal - we're seeing convergence on Kubeadm. I think being able just to run Kubeadm on a machine is the way to go. We're seeing other tools, like kcs \[unintelligible 00:18:15.23\] all offer that same release sample initialization onboarding component. Kubeadm has gone into that for Kubernetes without removing some of the constraints that we have in those other configurations... But yeah, it's a really interesting space, and I'm really excited for what's gonna happen in the next couple of years.
90
+
91
+ **Break**: \[18:35\]
92
+
93
+ **Gerhard Lazu:** So Kubernetes on bare metal sounds great... I'm not a Kubernetes expert myself, but I have been running it in production for a couple of years. I know most of the components fairly well. I know where to look when there's problems, I know how to fix many things - not all things - but still, Kubernetes on bare metal sounds daunting to me. What are you making, David, to make it simpler? Because I know that this is a space that you are passionate about and you're working towards.
94
+
95
+ **David Flanagan:** People that want their access to the \[unintelligible 00:20:08.10\] are typically power users, and they will have their own custom configurations for Kubernetes. One of the reasons that we don't offer a managed service. But we still wanna be able to make bare metal Kubernetes for people that are just interested in the performance a little bit easier. And I think Marques would be a great person to discuss some of those options that we have available.
96
+
97
+ **Marques Johansson:** As David was saying, Kubeadm - that seems to be the most popular way to deploy Kubernetes... And what you can do is you can layer things on top of that experience and express more opinions. We offer a bunch of Terraform modules that are essentially proof-of-concept integrations where you can run Terraform, define a few variables, give us the token that you want for your account, terraform apply, and then depending on the size of the nodes you're provisioning and depending on which integration, within a few minutes you'll have a cluster.
98
+
99
+ These take advantage of Kubeadm underneath, and we have others that take advantage of k3s, we have others that take advantage of Anthos, and the list goes on. There's OpenShift integrations... These are all Terraform modules, and there's a -- Pulumi is another example, where Pulumi takes advantage of Terraform drivers... I believe David has been working on some Pulumi integrations that provision a Kubernetes cluster... So there's lots of ways to get a cluster easy on metal, but the experience is generally going to wanna be tailored to what you're doing. So we don't have this managed one-size-fits-all solution; what we tend to find is that our customers are more varied, and have more precise needs.
100
+
101
+ One of the patterns that we're trying to promote is the Cluster API way of deploying, because Cluster API is an opinionated way to deploy Kubernetes. It is a Kubernetes resource, it takes some set inputs, and a few minutes later you have a Kubernetes cluster that is managed from another Kubernetes cluster... As David was pointing out with the CNI and CCN - Kubernetes has been taking on this responsibility of managing infrastructure, and another piece of that infrastructure is Kubernetes clusters themselves. So it's turtles all the way down.
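+ A small sketch of that "clusters as Kubernetes resources" idea - listing Cluster API `Cluster` objects from the management cluster with the Python Kubernetes client; it assumes a kubeconfig pointing at the management cluster and Cluster API's `cluster.x-k8s.io/v1beta1` API being installed:

```python
# Hedged sketch: ask the management cluster which workload clusters it manages.
from kubernetes import client, config

config.load_kube_config()                 # kubeconfig for the management cluster
api = client.CustomObjectsApi()

clusters = api.list_cluster_custom_object(
    group="cluster.x-k8s.io", version="v1beta1", plural="clusters"
)
for c in clusters.get("items", []):
    name = c["metadata"]["name"]
    phase = c.get("status", {}).get("phase", "Unknown")   # e.g. "Provisioned"
    print(f"{name}: {phase}")
```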
102
+
103
+ **David Flanagan:** I think where Cluster API fits in - I give a lot of credit to that project - is that if you were to provision a Kubernetes cluster on bare metal with Equinix Metal through \[unintelligible 00:22:28.05\] through Terraform, through whatever means (even Ansible), you're still solely responsible for operating that control plane. No one else is taking care of that. You have to nurture it, you have to feed it, you have to tuck it into bed and put it to sleep. You really need to take care of the Kubernetes control plane; it's a very temperamental bit of software.
104
+
105
+ But the Cluster API actually brings in that reconciliation from Kubernetes, to monitor and help nurture that control plane for you. It does remediation of control plane nodes, it can do in-line updates of cluster nodes, so it can spin up new ones, cutting out old ones when things are unhealthy, you can take it out of the pool, you can add it back in... There's some options right now called cluster resource sets, which allow you to automate deployment \[unintelligible 00:23:07.04\]
106
+
107
+ So Cluster API is literally single-handedly trying to make this experience easier for people that don't necessarily know how to operate Kubernetes... Meaning you can use a managed service, like GKE or EKS, to run Cluster API, but provision a bare metal cluster on Equinix Metal... Get all that performance and flexibility, but trust the Cluster API remediation to make sure your cluster is hopefully always healthy.
108
+
109
+ **Gerhard Lazu:** So that's really interesting... If I'm hearing it correctly, you can have a managed Kubernetes cluster to manage other clusters. Is that right? Is that what you're saying?
110
+
111
+ **David Flanagan:** Exactly what we're saying.
112
+
113
+ **Gerhard Lazu:** \[23:43\] Okay, that's very interesting. So how would you be able to visualize all the clusters that you're running? Because as we know, two leads to four, and four leads to eight, and so on and so forth; that's the way it just goes. So how can you keep that under control if you have one Kubernetes cluster, or even multiple Kubernetes clusters, which manage other clusters? How do you do that? It's an interesting problem.
114
+
115
+ **Marques Johansson:** It's an interesting problem that, in a sense, isn't our problem. There's a lot of tools out there, a lot of organizations that are trying to figure out that space. I mentioned Anthos as one, so if you have your GKE clusters and you want to run something on bare metal alongside that, you can use those integrations, so that you can manage your cluster that resides on Equinix Metal servers from within the GKE control panel. On cloud.google.com you're seeing our Kubernetes nodes. Rancher is another one of those tools where you can manage multiple clusters, and we have Rancher integrations...
116
+
117
+ We mentioned Kubeadm and k3s - they're yet another installer. You can take advantage of Docker Machine drivers to deploy their nodes. There's a lot of different solutions out there in the cloud-native ecosystem.
118
+
119
+ **David Flanagan:** One that I like the most probably is just using Flux or Argo, because those both have UIs. The Flux UI is quite early right now, and the Argo one is much more sophisticated... But because the cluster API is just declarative manifests, all your cluster definitions live in a Git repository that are applied in a GitOps fashion. And then you can just take advantage of the Argo UI to see all of your clusters. Those provide labels, whatever you need, and they're just there. And the same with Flux UI. And I think we'll see more tooling above in this space as well.
120
+
121
+ Because the Kubernetes Cluster API project is using the \[unintelligible 00:25:24.12\] on all of those objects within the control plane cluster -- no, the management cluster they call it... The Argo UI can also show you when you've got nodes that are unhealthy through a nice visual indicator.
122
+
123
+ You can use tools like Rancher, like Marques said, or you can use Argo. Once you're in the Kubernetes API, you've got this unlimited flexibility, which is both a good thing and a curse. There'll be dragons.
124
+
125
+ **Gerhard Lazu:** I really like that idea. I can see how that would work. So I used ArgoCD first... I think it was a few months back, with -- I think it was episode 3 or 4. I can't remember. The one with Lars, where we -- it was like the follow-up to "Why Kubernetes?" I think it was episode 5. And we looked at what it would look like for Lars' Noted app, which is a Phoenix Elixir app, to run on Kubernetes from scratch. And in that context we used ArgoCD, and it was really nice. We're still not using it for Changelog.com, but we will, very soon, I'm sure of that.
126
+
127
+ I'm wondering if you have an example, David, that you can share, of what it would look like - a management Kubernetes cluster, which is managed by ArgoCD, which in turn manages other Kubernetes clusters. Do you have such an example?
128
+
129
+ **David Flanagan:** Not specifically...
130
+
131
+ **Gerhard Lazu:** Not yet? Tomorrow. Yes, okay.
132
+
133
+ **David Flanagan:** If you go to my YouTube channel, the Rawkode Academy, there are videos of me deploying Kubernetes clusters on Equinix Metal with the Cluster API, in a declarative, GitOps fashion. I don't specifically load up the Argo UI, but for you, I will spend some time this week and we will make this happen.
134
+
135
+ **Gerhard Lazu:** Yes, please. I think that'll be amazing to see in the nodes, to see what it looks like. I'm a visual person, among many other things, but I just understand things better visually. Sound is great and audio is great, but being able to see it in one picture - I think it just lets you imagine things differently. Or at least that's what it's like for me. So I think it would help to be able to see what that looks like. Because until I've seen the ArgoCD and how well the UI works -- I mean, there are some screenshots, and sure, that would work, but what does it mean for this specific use case? I just couldn't visualize that. So having this I think would be very, very useful. I didn't even know that Flux is working on a UI, by the way... Flux CD.
136
+
137
+ **David Flanagan:** Yes, they have an alpha UI available right now. You can install it to your clusters and it works. \[unintelligible 00:27:44.02\] I subscribed to the project and I'm keeping an eye on it, because I really do like the simplicity of the Flux approach to GitOps. But the Argo UI is hard to pass up, because it's just -- as you said, for that visual representation of what is happening within a cluster or multiple clusters... It's spot on. So hopefully, Flux can catch up with that, too.
138
+
139
+ **Gerhard Lazu:** \[28:03\] Yeah, that's right. I haven't tried Flux CD, and one of the main reasons why I said "No, I think I'll go with Argo" is because of that UI, I'll be honest with you. Visually, it just makes so much sense. But now that Flux CD has a UI - interesting. Interesting. I think I need to speak to someone about that. But thank you, that was a great tip, David. Thank you very much.
140
+
141
+ So what is this Rawkode Academy? I think we might have mentioned it once... We definitely mentioned it just a few minutes ago. What is it?
142
+
143
+ **David Flanagan:** Yeah, I really need to work on my marketing skills. I should be saying it every 60 seconds.
144
+
145
+ **Gerhard Lazu:** No, no... Not on this show. \[laughs\]
146
+
147
+ **Marques Johansson:** Episode brought to you by...
148
+
149
+ **Gerhard Lazu:** Exactly... No. Go on...
150
+
151
+ **David Flanagan:** So yeah - in 2019 I spoke at 42 conferences. I loved being out there, meeting people --
152
+
153
+ **Gerhard Lazu:** Sorry, sorry - did you say 42, four, two?
154
+
155
+ **David Flanagan:** 42, yes. 42 conferences.
156
+
157
+ **Gerhard Lazu:** Oh, my goodness me.
158
+
159
+ **David Flanagan:** Because I love going out and meeting people and talking about problems, and technology, and how technology can help them... And I lost that with Covid. So when I joined Equinix Metal, I needed to kind of find a new outlet for sharing knowledge with other people, and I started a YouTube channel. So I've been streaming now for about 13 months, and the Rawkode Academy is what I've got to show for it. It's a livestream-focused technology, cloud-native and Kubernetes learning experience, all broken down into 90-minute livestreams.
160
+
161
+ Fortunately, it's not me doing most of the knowledge sharing. I'm smarter than that. I get really good maintainers and founders from cloud-native open source projects to come on. They show me their project, I ask them all the questions, we break it, we fix it, and we wash, rinse and repeat as often as possible.
162
+
163
+ There's usually 2-3 episodes every week, looking at all these amazing cloud-native projects that we have in the landscape. And the landscape is so vast. I'll probably never run out of projects to demo.
164
+
165
+ **Gerhard Lazu:** That sounds like a great idea to me, and I especially like how -- because you can't go to conferences anymore, since Covid, you did this. That's a great reason to do it. Okay... Obviously, it's not that. It's all the interactions and all the stuff that you can't share, or are limited in sharing, so you had to find another outlet for that, and this is it.
166
+
167
+ **David Flanagan:** Yeah. You've gotta try and work with people and help people. This is really a difficult time; technology is constantly evolving, and it can feel really difficult to keep up. And I think we should just encourage more people to share their stories through articles, through podcasts, through livestreams, because we just need all the help we can get. This stuff is hard. It's really hard.
168
+
169
+ **Gerhard Lazu:** I'm glad that it's not just me thinking that. As fun as it is, it's damn hard... And sometimes, the fun comes from the fact that it's hard. It's a challenge, and we like a challenge, and this is it.
170
+
171
+ So Marques, which is your favorite Rawkode Academy video or livestream that you watched?
172
+
173
+ **Marques Johansson:** I'm not sure which channel... I know David's face is prominently featured. "Klustered" is by far my favorite format that he has. And my favorite episode is Thomas Stromberg and Kris Nova going head to head, trying to wreck each other's clusters. That's a great watch, I recommend it.
174
+
175
+ **David Flanagan:** That is the biggest, best serendipity I ever had when \[unintelligible 00:31:01.09\] Klustered. I'll tell you what Klustered is if you don't mind, and then...
176
+
177
+ **Gerhard Lazu:** Go on.
178
+
179
+ **David Flanagan:** ...I'll tell you about that episode. So people are saying operating Kubernetes is hard, right? Nobody thinks that stuff is easy. We've got the \[unintelligible 00:31:11.16\] all these certifications from the Linux Foundation, that people want to go and get, and tell people that they know how to do this stuff. But the learning resources don't really go deep enough, and Klustered wanted to solve that.
180
+
181
+ I had this ridiculous idea of getting some of my Kubernetes friends to purposely go and smash, bash and crash some Kubernetes clusters. And I thought "I'll just go into livestream and see if I can fix it, and have some people join me and help me along the way." And we're now over 20 episodes in, over 50 broken clusters, most of them fixed, fortunately... And it just provides a really interesting way to see how the control plane works, how to debug it, how to fix it, what to do when things go wrong. Again, we don't have these resources available online. You really learn the hard way, and that can be challenging.
182
+
183
+ \[31:58\] What's really special about that episode that Marques mentioned is that Kris Nova is a kernel hacker, and one of the earliest Kubernetes contributors there is. Thomas Stromberg worked at Google for 15 years, being involved in forensic analysis of exploits, break-ins etc. in physical hardware. So by sheer luck, putting them together, we got this episode where Kris uses LD\_PRELOAD, kernel modules, eBPF to cover up all of the tracks, all of the breaks on this machine. No normal person would have ever been able to fix this cluster. But Thomas came on, and with his forensic analysis knowledge from Google, he used something called \[unintelligible 00:32:32.16\] which is apparently a tool that can give you a snapshot of all the changes on a file system within X amount of days or hours, and said "I'm just gonna leave that there in case I need it", and then went on to debug the cluster. Having this wonderful pot of gold at the side, with all of the answers \[unintelligible 00:32:45.28\]
184
+
185
+ So he tried to do it the hard way, by doing the work to see what's wrong, and debugging... All the answers were over there, just waiting for him. And it was just a phenomenal episode. So many tips, tricks, and things to learn from it.
186
+
187
+ **Gerhard Lazu:** I'm definitely going to watch that. That sounds like an interesting one. Thank you very much, Marques. I think I would like to have one more episode to watch, so David, which is your favorite Klustered or Rawkode Academy episode, which is not the one that Marques mentioned?
188
+
189
+ **David Flanagan:** Damn, that's a hard question. So there is a really early episode, and I think I like it most because of the technology perspective. It was with the team from MayaData, who were working on a CSI driver called Mayastor, written in Rust, using the \[unintelligible 00:33:28.12\] These are all really cutting-edge technologies. And the demo was fine, but the real awesome part of it was just their CTO talking about that storage space and what storage is going to look like over the next couple of years. And I think it just stuck with me this entire time; it's just one of those great episodes. Getting knowledge from someone with so much experience, that otherwise we would not have access to. So I really loved that episode as well.
190
+
191
+ **Gerhard Lazu:** Okay. Do you remember by any chance which one it was?
192
+
193
+ **David Flanagan:** It's "An introduction to OpenEBS."
194
+
195
+ **Gerhard Lazu:** Who was the CTO for \[unintelligible 00:34:02.04\] at the time, do you remember?
196
+
197
+ **David Flanagan:** The CTO was Jeffrey Molanus.
198
+
199
+ **Gerhard Lazu:** Alright. Okay. So not the person I have in mind. Episode 14, "Cloud-native chaos engineering", the one with Uma and Karthik; Uma was definitely on MayaStore before co-founding Chaos Native.
200
+
201
+ **David Flanagan:** Yeah, that whole team was on the OpenEBS project, working on Mayastor beforehand. The \[unintelligible 00:34:21.19\] a spin-off from the test suite that they wrote for Mayastor and OpenEBS, which I think is really, really cool.
202
+
203
+ **Gerhard Lazu:** That's right. It is really cool. Okay. Can you say the name again? I forgot it.
204
+
205
+ **David Flanagan:** Jeffrey Molanus.
206
+
207
+ **Gerhard Lazu:** Jeffrey Molanus, that's him. Okay.
208
+
209
+ **Marques Johansson:** When you're running through all of these episodes - the format has shifted from the beginning to what he's currently producing. The earlier episodes had individuals fixing multiple clusters, and one of the earlier ones had my manager, \[unintelligible 00:34:49.21\] just go on there and fix cluster after cluster after cluster... And these clusters didn't have one or two problems, they had layers of destruction. So I'm always impressed just to know that my manager, the guy who I tell when I have to take a day off, is able to fix all these clusters in a phenomenal way. What does that, I think, is just having this solutions architect background, and working with Kubernetes and working with clusters in that way.
210
+
211
+ **Gerhard Lazu:** That's interesting.
212
+
213
+ **David Flanagan:** Yeah, I think the breaks on clusters have evolved as well with the format. We started off with just two people on the stream, trying to fix a bunch of clusters. The breaks were, you know, someone stopped the scheduler, or someone broke the config. Now, 20 episodes later, we have \[unintelligible 00:35:36.04\] where you've got Container Solutions, and RedHat, and Talos, and DigitalOcean, all breaking these clusters and handing them over to the other team and going "Good luck." And it's become so fun and joyous and competitive at the same time, and the breaks are getting ridiculous. People are now modifying the Go code for the kubelet, recompiling it, publishing an image, and then shipping it to the cluster. The creativity in the way that people approach this now is just evolving so quickly. It's just so much fun to watch.
214
+
215
+ **Gerhard Lazu:** \[36:05\] So what I'm thinking is you should rename Rawkode Academy to "Break my Kubernetes" or "Fix my Kubernetes." You know, like "Pimp my ride." "Pimp my Kubernetes", something like that, I don't know... But this is a great idea, because I think there's a lot of good stuff coming out of this which is unexpected, and it's almost like a thing of its own, where -- "This sounds great." Imagine how small the problem that we're experiencing right now in Changelog.com... And that's, by the way, how this interview started... David asking about some debugging Kubernetes. Well, guess what - we have a problem in our \[unintelligible 00:36:40.22\] cluster, which I would love us to be able to debug... And I think a follow-up episode is in order, because there's nothing broken; but still, it just goes to show the complexity that goes into these things and you wouldn't even know. It's almost like every problem is unique.
216
+
217
+ You know that expression about distributed systems, how when they're happy, they're all happy the same way; but when they're broken, they're broken in individual ways, in unique ways. And I think a Kubernetes cluster is exactly like that. Every single one is different. Which makes me wonder - what hope do we have? If all our Kubernetes clusters are broken in unique ways, in weird and wonderful ways, what hope do we have for running them efficiently? What do you think, David? Would you agree with that? It surely can't be that dire.
218
+
219
+ **David Flanagan:** Unfortunately, I think you may be correct. Kubernetes as a system is distributed with infinite flexibility to swap out the container runtime. What I've seen over the many episodes is that the symptoms you see from one break to another can be completely different, and the break actually turns out to be the same. So you really have no idea when you're looking at the symptoms from the cluster what is actually going on... And I think that's why we're seeing this really strong push-through for observability these days. It's the hottest topic, we're getting more and more talks about it at KubeCon, and it's because people have realized that we need better; we need to monitor these systems better.
220
+
221
+ **Gerhard Lazu:** Yeah, that makes a lot of sense. Okay. So we talked a lot about Kubernetes, and this is interesting, because this was meant to be about bare metal infrastructure, real networking, API, stuff like that. But it just goes to show that it's everywhere... And I'm wondering if we are getting Kubernetes everywhere, or Kubernetes just really fits so many situations and so many places, and it just makes things easier, better? Easier to reason about, I don't know... Because what do you do if you have a thousand servers? How do you manage the workloads on them? I don't know anything better than Kubernetes. I mean, I'm sure there are things better than it, but I think many people realize as broken as it is, or as complicated as it is, what's better than it? I don't know... What do you think?
222
+
223
+ **Marques Johansson:** We didn't get here by accident. We started with our System V configurations in our scattered user local \[unintelligible 00:38:47.29\] config files, and we moved towards containers because it helped to keep all of the system components common, and the variability of those containers reduced. So we needed a way to manage all of our containers, and Kubernetes became the common solution for that. I think the real big gains in Kubernetes are things that we didn't have in all those previous approaches - we had too much variability, we had too much interaction between components. "Why isn't it running correctly? Oh, somebody's running Apache on the same port." That's probably unreasonable. But those are the kinds of problems that you had. And perhaps it's still possible to do that in Kubernetes now, but it's all stated in a common way. And having this stateful declaration of all of your resources in one place makes debugging a bit easier. It makes it easier to reason about what's running at any given time, and what's being exposed at any given time.
224
+
225
+ Kubernetes is better than where we were before, but it's also not, in a way, because we still have all that same underlying architecture, that same underlying OS configuration that can get in our way.
226
+
227
+ **David Flanagan:** \[39:54\] And \[unintelligible 00:39:52.24\] If we go back to applications ten years ago, we were writing monolithic applications that we scaled horizontally by just snapshotting the image and throwing it out. But those monolithic applications became exceedingly hard for large development teams to be able to cooperate and deliver and maintain any sort of velocity that kept a competitive edge. And as wise as we are as technologists, we picked microservices as a way to combat that, and push that complexity from the developers down to the operations stack... And Kubernetes is what we're stuck with now because of that, because we now need to be able to horizontally scale a wide variety of microservices written in different languages, deployed on containers. That's the trade-off we've made as developers to be able to move quicker, deploy faster, and keep our customer happy as quickly as possible, \[unintelligible 00:40:40.04\] feedback loops. And that operational complexity is just the outcome of it.
228
+
229
+ **Break:** \[40:45\]
230
+
231
+ **Gerhard Lazu:** Even though we do have a monolith at Changelog.com, we're still using Kubernetes, because it handles a lot of complexity that we would need to handle differently in other places, and it would just hide it. So for example, managing DNS now is a declarative thing that happens in Kubernetes. Not all the records, and that's like another problem; external DNS is not as mature as some of us, including myself, would like. For example, IPv6 records - it doesn't manage them. Multiple IPv4 records - they don't work very well. So there's like a couple of limitations to external DNS as a thing that you run in Kubernetes. But the way it composes, it's really nice.
232
+
233
+ So you have these baseline components, that's what I call them... But one component that works really well which runs right alongside it is Cert Manager. So we manage a certificate using Cert Manager; it works fairly well, it manages all our certificates. We have about ten domains; eight, nine, ten - somewhere around there. And not only that, but then within Kubernetes we run something which then keeps the certificate that Cert Manager manages synchronized with Fastly, which is our CDN. And all that complexity lives in a single Kubernetes cluster, including running the Changelog app... Everything is declarative, so even if you have a monolith, you may consider Kubernetes, because of all the other things that Kubernetes could manage for you, not just the app itself; there's all the other concerns - CI/CD. Guess what - Argo CD, Flux CD, Jenkins X... There's so many CI/CD systems that you could pick, and it works fairly well, I think.
234
+
235
+ **David Flanagan:** I'm so glad you said that, because I wrote an article in May, and the title of the article was "You may not require Kubernetes, but you need Kubernetes." And I think it's because we do get service discovery, we do get DNS, we get reconciliation and we get remediation... All of these things are just built into the control plane. And then there's the ecosystem. We have controllers of controllers of controllers; the ability of Cert Manager, as a controller, to provision TLS certificates, and then another controller to synchronize them to your Fastly CDN... \[unintelligible 00:43:39.16\] controllers and custom resource definitions in a declarative fashion. So it's really cool that you've got a monolith and you chose to run that in Kubernetes, because you're taking advantage of this ecosystem, this community and all of this software that is built to make certain applications easier. It applies to most applications.
236
+
237
+ **Gerhard Lazu:** \[44:00\] And this is where Marques comes in... So I know that David doesn't know this, and I know that very few (if any) listeners know this... But me and Marques - we started talking while Marques was at Linode. And at the time, we wanted to manage our Linode infrastructure for Changelog.com more efficiently using Terraform. And Marques was managing a few Terraform modules at the time, and I think he also started working at what would soon become Linode Kubernetes Engine. So it was like the beginnings of that.
238
+
239
+ Marques since went to Crossplane, by the way. That was a very interesting period, and I was thinking "Oh, hang on... Maybe this Crossplane is worth a look." I didn't have the time until recently... I will continue with that. And now Marques is with Equinix Metal. So if you think about it, this is where I stand by what I say, in that Ship It is about the people that make it happen. So we're having these conversations because, Marques, you've always been in this technology space. So my question to you, Marques, is - we use Terraform... We stopped using it, by the way. Everything is now running in Kubernetes. I'm thinking of using Crossplane, and I'm wondering, Marques, what else should we be using for the Changelog setup that you have known over the years; you've been fairly familiar with it since 2018, I think... So what do you think comes next, based on what David just mentioned?
240
+
241
+ **Marques Johansson:** So you've been moving your infrastructure from -- a term I first heard from you, I think, which was \[unintelligible 00:45:26.13\] You've been moving from that to some sort of stateful configuration where you can treat your entire deployment as \[unintelligible 00:45:34.23\] And I think that's probably come up a few times. You've probably hit some walls and just taken advantage of that kill switch and just rebuild... And Terraform was that answer, I think, for a lot of people, and it's still where a lot of people are. It allows you to just have however many components you need, have each one expressed as a few lines of HCL configuration, destroy the entire environment, reapply the entire environment.
242
+
243
+ One of the hurdles of that situation though is when things don't apply cleanly, or you need somebody to actually push that button, and that's where Crossplane comes in. Crossplane takes advantage of the Kubernetes reconciliation loop to bring these infrastructure components back to life, provision them the first time, sync things up. One thing is in a failed state, another is in a successful state... That failed state is eventually going to turn green. Your deployment is going to succeed, whereas in Terraform you're generally not gonna have that experience. You might have to destroy the entire environment and bring it back up, and you're gonna have to probably push that button to reapply it.
244
+
245
+ So what do we have on the Equinix Metal side that allows you to use Crossplane? We do have a provider, and that provider allows you to deploy devices, \[unintelligible 00:46:50.05\] and IP addresses. There are many more infrastructure components that we can introduce, but we started with the ones that are most relevant.
246
+
247
+ There are some other integrations with Crossplane that are useful to consider here, because when you are provisioning something -- if we take the Terraform model, you're provisioning infrastructure and then a lot of folks will rely on SSH-ing into that infrastructure to get it configured the way that they want it to be configured. We don't have an SSH provider in this Crossplane ecosystem, at least not a fully fleshed out one... So we have to take advantage of user data. And what user data allows you to do is when you're provisioning a device define all of the scripts that need to run on that machine on first boot, and that takes out all of the variability of SSH, of "Am I going to connect to this machine? Am I going to run the same script multiple times?" You're going to define with your user data what to run at boot-up. You will not require external access to that machine, because the cloud provider's API is going to make sure that that script is executed.
248
+
249
+ \[47:55\] In our environment, where we have layer two configurations, you cannot SSH into the machine to perform the actions that you want without going through a gateway node, or without going through a serial terminal. So the way that you execute code on those machines or execute scripts on those machines is through user data.
250
+
251
+ One of the formats that's popular for configuring your user data is -- well, cloud-init is the tool, cloud-config is the format. It's something like Salt, or Puppet, or Chef, where you have this declarative language to describe all the packages that you need installed, whether or not the system should be updated, describe files that need to be created, services that need to be running... And a common way to approach user data is to just provide a cloud-config file that declares all of that.
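+ A rough sketch of that pattern - a `#cloud-config` document handed to the Equinix Metal API as the `userdata` field of a device-creation request; the package list, file contents and device values are only illustrative, and the API call mirrors the earlier provisioning sketch:

```python
# Hedged sketch: declare first-boot configuration in cloud-config and hand it
# to the Equinix Metal API as user data, instead of SSH-ing in afterwards.
import os
import requests

USER_DATA = """#cloud-config
package_update: true
packages:
  - containerd
write_files:
  - path: /etc/motd
    content: "provisioned via user data - no SSH required"
runcmd:
  - systemctl enable --now containerd
"""

resp = requests.post(
    "https://api.equinix.com/metal/v1/projects/YOUR-PROJECT-UUID/devices",
    headers={"X-Auth-Token": os.environ["METAL_AUTH_TOKEN"]},
    json={
        "hostname": "cloud-config-demo",    # illustrative values only
        "plan": "c3.small.x86",
        "metro": "am",
        "operating_system": "ubuntu_20_04",
        "billing_cycle": "hourly",
        "userdata": USER_DATA,              # executed by cloud-init on first boot
    },
)
resp.raise_for_status()
```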
252
+
253
+ So one of the Crossplane providers that I worked on introduces \[unintelligible 00:48:44.02\] to Crossplane, which you can use in conjunction with the Equinix Metal provider, and you can stagger your deployment, in a way, to say "When this resource is fully configured, take some component of that, tie that into this cloudconfig script, and then when that script is ready, use that to deploy this Equinix Metal device." So you can get these complex compositions, taking advantage of Crossplane's compositions, and I think that that's where you're gonna wanna go with this complex deployment that you have with Changelog.
254
+
255
+ **Gerhard Lazu:** I'm thinking more along the lines of having very good hardware, knowing exactly what hardware we are getting. That'll be one thing from the Equinix Metal side. Do you know that actually we run a single-node Kubernetes, because it's more reliable than multi-node Kubernetes? We've had so many issues with a three-node Kubernetes cluster... Since we've switched to a single node, everything just works. People wouldn't think that. It may be the fact that it is a monolithic app, it may be the fact that it is using local storage... Sorry, not local storage. Block storage. And then you can only mount -- that's the CSI limitation, you can only mount that persistent volume to a single (obviously) app instance at a time... That's something we would like to change. And we just use PostgreSQL.
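That single-attach behaviour corresponds to the ReadWriteOnce access mode on a PersistentVolumeClaim. A minimal sketch, with a placeholder name, size and storage class rather than the actual Changelog manifest:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                 # block storage: attachable to one node at a time
  resources:
    requests:
      storage: 50Gi                 # placeholder size
  storageClassName: block-storage   # placeholder, provider-specific class
```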
256
+
257
+ The number of issues that we've had with three nodes was just embarrassing. You shouldn't need to have that. And this is like, you know, a certified Kubernetes installation, we always kept it up to date, nothing specific... Volumes not unmounting... All sorts of weird Kube-proxy issues. I know, David, you mentioned that it's a good component... I'm not so sure, based on the number of problems that we've found with it...
258
+
259
+ **David Flanagan:** Yeah, I think that Kube-proxy is one of those first components that's gonna be swapped out. I think we're already seeing that from Cilium. I don't know if you use Cilium as a CNI, but they have a kube-proxy replacement that uses eBPF to route all the traffic... That's what I'd go for by default now.
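Assuming Cilium is installed through its Helm chart, the kube-proxy-free mode is switched on with a handful of values along these lines; the exact value names vary between Cilium versions, so treat this as a sketch and verify against the chart's documentation:

```yaml
# Possible Helm values for running Cilium as a kube-proxy replacement (sketch).
kubeProxyReplacement: strict
k8sServiceHost: 203.0.113.10   # placeholder API server address
k8sServicePort: 6443
```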
260
+
261
+ **Gerhard Lazu:** Really? That's interesting.
262
+
263
+ **David Flanagan:** Yeah, I remove the Kube-proxy whenever possible.
264
+
265
+ **Gerhard Lazu:** I'm pretty sure we use Calico, and I wanted to go to Cilium because of that. I need to hit Liz up. I really wanna talk to her about a few things, including this...
266
+
267
+ **David Flanagan:** Well, I do have experience of doing an online CNI replacement in Kubernetes... \[unintelligible 00:50:51.29\] we could have a bit of fun with that.
268
+
269
+ **Gerhard Lazu:** Oh, that's a good one. Okay... So yes, I've just confirmed we're using Calico. Which version of Calico, you ask... I can hear you asking that. It's version 3.19.1. So I'm not sure if that's the latest one, but anyways. So let me describe the sorts of issues that we're seeing. The tail of HTTP requests is really long. What that means is that between the 95th percentile and the 99th percentile, some HTTP requests to the app, as far as Ingress NGINX is concerned, can take 30 seconds, 60 seconds... And they're random. So we have a very long tail.
270
+
271
+ Most requests complete really quickly, but some requests are really slow. There's nothing on the database side, there's nothing on the app side, there's plenty of CPU, plenty of memory... Everything is plenty resource-wise, but what we're seeing is that some requests which go via Kube-proxy are sometimes slow, inexplicably. So yeah, isn't that an interesting one?
272
+
273
+ **David Flanagan:** \[51:54\] Yeah. I think we can have a lot of fun digging into that and seeing if we can work that one out, for sure.
274
+
275
+ **Gerhard Lazu:** So that is the follow-up which I have in mind, by the way, and the livestream I think would go really nicely with that. That's what I'm thinking.
276
+
277
+ **David Flanagan:** Yeah, I'd love to do that. I think that'd be cool. Let's do it.
278
+
279
+ **Gerhard Lazu:** So I just have to set up another one in parallel... And this is to a comment that Marques made earlier - we always set up a new setup for the next year, so that first of all we do a blue/green, so if something goes wrong, we can always go back... We can experiment, so we can try just to improve things in a way that would be difficult to do in place... And we can also compare. So how does the new setup compare to the old setup? How much faster is it, or how many more errors do we have with the new one compared to the old one? And there's a period of time, typically a week or two weeks, where we shift the production traffic across, make sure everything holds up with real production traffic, and if we see any errors, we can always go back, because everything is still there for the old setup.
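In Changelog's case that shift happens in front of the cluster, at the CDN and DNS layer, but as one illustration of the same idea inside a cluster, ingress-nginx can send a weighted slice of requests to a new stack via canary annotations; the names below are hypothetical, not the actual setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-next-canary                               # hypothetical Ingress for the new setup
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send ~10% of traffic to the new stack
spec:
  rules:
    - host: example.com                               # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-next                        # hypothetical Service for the new setup
                port:
                  number: 80
```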
280
+
281
+ We do this so that we don't have to do upgrades in place, because we know how all that works... Not very well, by the way. Sometimes you can just run into weird issues and you wonder why you're the only one having this issue... And who can help you? Well, maybe an expert. And even then, it's a "maybe", it's not a definite. The point being, this stuff is hard, so that's why we just do another setup, and then we challenge a couple of assumptions... And it has worked well so far over the years. We've simplified a lot of things that we wouldn't have otherwise, and I think this is going to be the best one yet, 2022. That's what I'm thinking.
282
+
283
+ So David, where do you think that Equinix Metal would fit in Changelog? Or do you think that Equinix Metal is even a good choice for Changelog.com, considering it's just a monolith, and it doesn't need that much CPU or memory? It's mostly traffic, but the CDN handles most of it... So the app and the infrastructure see maybe 10% of the traffic, I think.
284
+
285
+ **David Flanagan:** Yeah, I think where Equinix Metal would come in is if you wanted to take it a bit further and build your own CDN. That is a really great use case, that takes advantage of the Equinix network, as well as the performance of the metal devices themselves. What I would encourage people to do is to augment their virtualized setups with metal for CPU-intensive tasks, or stream processing, ETL pipelines etc. Even continuous integration - if your development team can get their CI/CD pipeline from five minutes down to one minute by switching \[unintelligible 00:54:11.03\] that's probably time well invested, because you're gonna be shipping faster.
286
+
287
+ **Gerhard Lazu:** I really like that you mentioned that, because the one thing which I've noticed is that whenever you have VMs, virtualized infrastructure, you tend to suffer from noisy neighbors. Weird issues that only happen on VMs. People don't realize that this stuff is real, and the bigger your setup is, the more costly it is in terms of time... And you keep chasing bugs that are not real. They just happen because of how things are set up, and that's when bare metal will help, in that you just basically get what you pay for, like, for real.
288
+
289
+ **David Flanagan:** I don't think people have ever really dug into what a \[unintelligible 00:54:48.06\] across the different tenants on the cloud... And all these things add up. Even the \[unintelligible 00:54:57.11\] There's contention across all of this, because of the cloud provider's interest in maximizing the costs and the profits from each of those physical devices. So yes, it's cheap; you can get a single vCPU, you can get half a gig of RAM and you can go run some workloads on it, but the contention will always be a challenge... And when that becomes a problem and starts to cause you more problems than it's worth, you can start to look at augmenting and bringing in some metal for hybrid architectures.
290
+
291
+ **Gerhard Lazu:** That's a good point.
292
+
293
+ **Marques Johansson:** I'm concerned about your one-node cluster now...
294
+
295
+ **Gerhard Lazu:** Okay...
296
+
297
+ **Marques Johansson:** Your one node is going up against other nodes. I assume this is all in your VM-managed cluster?
298
+
299
+ **Gerhard Lazu:** Yes. It's LKE. We get a single worker node. The control plane, we just don't have access to it. We just use whatever the node provides; that one node where the workload actually runs... So we have the app, we have Ingress NGINX, a couple of pods basically in total... Again, let me tell you the exact number. It's 31 pods in total. 12 deployments, 38 replica sets, 2 stateful sets, 6 daemon sets. It's not like a big Kubernetes cluster.
300
+
301
+ **Marques Johansson:** Yeah.
302
+
303
+ **Gerhard Lazu:** \[56:04\] Now, we back everything up every hour. We can restore everything from backup, and we test this often - every 3 to 6 months. The last time I ran it, we could restore everything within 27 minutes, and everything's backed up.
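As a sketch of what an hourly backup like that can look like when it runs inside the cluster - the image, secret and volume names are placeholders, and this is not the actual Changelog job - a Kubernetes CronJob could drive a pg_dump:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup                          # hypothetical name
spec:
  schedule: "0 * * * *"                    # hourly
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:14           # placeholder image/version
              command: ["/bin/sh", "-c"]
              args:
                - pg_dump "$DATABASE_URL" | gzip > /backups/db-$(date +%s).sql.gz
              env:
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: db-credentials # hypothetical Secret
                      key: url
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: backup-storage  # hypothetical PVC
```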
304
+
305
+ Also, everything goes through the CDN. So if the backend - "the origin", as it's called in Fastly - is not available, it will serve the stale content. Not the dynamic stuff, obviously... But to our users we will still be up, just a bit stale (pun intended). That's exactly what they will see. So it's unlikely that we will not be able to serve content if our origin goes down, even for a few hours.
306
+
307
+ **Marques Johansson:** So you've just got 31 pods, and the scale here - you could probably have 31 large VMs running on bare metal, and each of those 31 VMs running 31 pods of its own, and then some...
308
+
309
+ **Gerhard Lazu:** Mm-hm.
310
+
311
+ **Marques Johansson:** Yeah, it is interesting to imagine how it would fit, and maybe what more could fit. David pointed out - having a CDN that's taking advantage of more nodes and more networking availability... Do you have any thoughts on what you might do with more CPU and storage?
312
+
313
+ **Gerhard Lazu:** I don't think we would need more CPU and storage. I honestly don't. The app itself is a single instance, because it is a Phoenix app. It's using the Erlang VM. So it scales really nicely when it comes to CPUs available. So a single machine can serve all the traffic many times over. Like, a hundred times over. It's extremely efficient. So we don't have to worry about that side of things. This was why WhatsApp was able to scale the way they did, because of the Erlang VM, because of the bare metal infrastructure... It's the same model in our case. And the CDN picks up most of the slack.
314
+
315
+ So let's imagine that if we were to have a CDN, if we were to run a CDN ourselves, I think most of the costs would be bandwidth. And then would we use the Erlang VM? Maybe... I don't know. Maybe we'd use something else, like Varnish, or something like that (I don't know), to just cache the content and just serve it like that. But do you know who has a good article on this? Kurt from Fly.io. Build a CDN in five hours. And I know that Fly.io runs on the Equinix Metal as well, which is something that I'm going to take a closer look at. I like that relationship, and I can see many things coming together.
316
+
317
+ But these are all great ideas, and I'm wondering, if someone has been listening to this, what is the one key takeaway that they should take, Marques? What do you think?
318
+
319
+ **Marques Johansson:** You've mentioned Fly... So that's one of the strengths of Equinix Metal - we have a lot of partners available. So there's a lot of services that are already running here... And if they're not running on Equinix Metal, they're running on Equinix. So it's kind of the combined strengths of different organizations.
320
+
321
+ Earlier I was talking about the Crossplane composition, and maybe that's what your solution looks like. I wanna make sure that I add a -- it's not a preface now; a suffix... That's not necessarily the direction that you should go. You did mention that you're doing the blue/green deployments, so that's excellent to hear... So try it. Try the Crossplane, try the Equinix Metal integration with that... You're going to run into some resources that haven't been implemented, you're gonna run into some providers that haven't been implemented...
322
+
323
+ So I would say that for the Changelog delivery, the whole content system - in general, for Equinix Metal I think the takeaway is that if you're doing something that only requires a handful of pods and you don't need a global presence, you don't need a lot of CPU, you don't need a lot of memory and disk, a VM might be the right place for you; a managed Kubernetes service might be the right place for you. But when you have a lot of bespoke, large workloads - monolithic or not; it could be a bunch of microservices - that need to be globally distributed, bare metal is something to investigate, and do the same sort of blue/green. Try it on bare metal, try it on some managed service and see how they stack up. I think you're gonna find that the performance metrics are gonna be heavily in favor of our setup.
324
+
325
+ **Gerhard Lazu:** I think this is meant to be controversial, this last part, so I'll make it even more so... I disagree with some of the things you've said... The direction is sound, but what I would say is that if you do use bare metal, you tend to have fewer problems just because you're using bare metal... Especially around latency, especially around performance, especially around things just mysteriously failing.
326
+
327
+ \[01:00:11.15\] I've seen fewer of those failures on bare metal, and more on VMs. And even more on specific cloud providers which I'm not going to name. I've used more than 20 over the years. I just like my infrastructure, I like my hardware, I like my networks. And -- oh, CCNA. I've just remembered that. That was an interesting one. Finding out more about BGP, and RIP, and all the other routing protocols... That was an interesting one. The point being, Equinix Metal, and Equinix, just made me think of that.
328
+
329
+ The point being, when you use bare metal, your CI tends to run better. You tend to see fewer flakes. Your app is just more responsive. And this is weird and unexpected, but that's exactly how it behaves once you reach a certain scale. So - worth trying, for sure, and it may not work out, but worth trying... Because I think there are many benefits which are hidden.
330
+
331
+ **Marques Johansson:** The layers of complexity are definitely different. On the Equinix Metal side you have control over the physical host. There's no virtualization layer, there's no virtual networking happening on the individual host hardware. All of our virtualization is performed on network hardware, the same as it would be if you were colocated in a physical space. When you're dealing with VM providers, there's virtualization up and down the stack, and there's sharing going on up and down the stack... So yeah, definitely, if you want to reduce your problem set, try the bare metal approach.
332
+
333
+ And the way that we deploy that bare metal infrastructure is open source, in a sense. So the underlying infrastructure provisioning chain that was used at Packet became the Tinkerbell project (Tinkerbell.org), and you can use that to experiment with bare metal in your own home lab, even with just Raspberry Pis, or you can deploy it in your colocated environment.
334
+
335
+ What's interesting about Tinkerbell is that it kind of takes some of these benefits that we're seeing in the cloud-native community on projects like Kubernetes and brings that same kind of scheduling of workflows to bare metal.
336
+
337
+ **Gerhard Lazu:** We've left all the good stuff to the end, haven't we, Marques? That's exactly what happened. But I want to hear from David, because I think he has the best one yet... So David, if someone was to take away something from this conversation, the most important thing, what do you think that would be?
338
+
339
+ **David Flanagan:** Well, I think it's the -- you have covered all the good answers. However, I kind of wanna bring a different perspective to what you both said there. Bare metal brings infinite flexibility, unrivaled performance, the ability to switch architectures, use ARM devices. I think that's a really great selling point. You know, we've covered the network. All these things are great. But you're right, in that running things in VMs - you have kind of opaque problems. I mean, you run them on the metal - those go away, because you have full visibility of everything on the stack, which is great. However, you then have to operate bare metal, and I think this is still a really challenging thing. Teams these days don't have dedicated ops teams anymore like they used to when we had our own data centers. We were all DevOps, and Agile, and Terraform; we just don't have that experience.
340
+
341
+ So you know, you've gotta be careful when you're adopting bare metal. Don't walk into it lightly, thinking "And with a little bit of Linux, this will be okay." There's a lot to learn there. Like you said, even BGP. How many people can tell you what BGP is in 2021? It's not that many anymore. And why would they? Because they've had convenience at their doorstep for so long.
342
+
343
+ So bare metal will remove a whole class of problems, like you've said, but it brings in different challenges that you need to navigate. Approach wisely.
344
+
345
+ **Gerhard Lazu:** Isn't that exactly where things like the Crossplane provider for Equinix Metal, Tinkerbell and Kubernetes come in? In that yes, all those things are still present, and you can have access to them, but maybe those higher-level abstractions give you everything you need to define all the things... Not to mention Equinix Metal. You have an API for bare metal. Who has that? I mean, not many companies do - and I'm not trying to sell Equinix Metal - but if you see the simplicity, how easy it is... Like, you don't have to click through menus to select the operating system, the data center... A bunch of other things. Networking... You just get that - not even via an API. You get it via the Kubernetes API, and I think that's the really amazing thing. It's this combination of the low-level and the high-level, and maybe removing a lot of the stuff that you would get in between, in the middle. So that's the value prop which I would like to try out for Changelog.com. How well does it work in practice? That's what I'm thinking.
346
+
347
+ **David Flanagan:** Yeah. Go spin up a few M3, S3, C3 boxes, just for ten minutes, run \[unintelligible 01:04:39.20\] See the cores, see the memory...
348
+
349
+ **Marques Johansson:** It's all yours.
350
+
351
+ **David Flanagan:** Yeah, all yours, there to do whatever you need it to do. And then shut it down; it costs you 50 cents. But that's just not something you're gonna get elsewhere.
352
+
353
+ **Gerhard Lazu:** What I'm thinking is "Watch a few Rawkode Academy videos, figure out how to get Kubernetes on bare metal, and then figure out how to recreate our setup"... Which, by the way, was meant to be the promise of Kubernetes - that we declare everything, we program against the Kubernetes API... So how do we get Kubernetes on bare metal? Well, David has the answer, in one of his videos, I hope.
354
+
355
+ **David Flanagan:** Well, Marques and I are constantly working on this experience. And I won't put words in your mouth, Marques, but I personally am not a fan of tools like Terraform and Ansible to a certain degree, because they require manual triggers. Someone has to initiate the action. What we need is something that can be deployed in a more autonomous fashion, with reconciliation. We can do that through user data and custom data, which is a really cool aspect of the Equinix Metal API, and Marques and I have spent a great amount of time over the last couple of months, and we will continue to explore how to make this easier through Crossplane and other cluster operators as well. So stay tuned... Good stuff is definitely coming.
356
+
357
+ **Gerhard Lazu:** This was a pleasure. I will try the good stuff out, by the way, and I will hopefully contribute a little bit to that. At least tell you what doesn't work. I can guarantee that that's what I'm gonna do, I'll tell you what doesn't work... \[laughs\] For us, for Changelog.com.
358
+
359
+ Thank you, Marques, thank you, David. This has been a pleasure. Looking forward to a next time, and a follow-up video, David. I have not forgotten.
360
+
361
+ **David Flanagan:** Thank you for having us.
362
+
363
+ **Marques Johansson:** Thanks for having us, Gerhard.
Cloud Native fundamentals_transcript.txt ADDED
@@ -0,0 +1,241 @@
1
+ **Gerhard Lazu:** I'd like to start with a story, and the story is how we met, because I thought that was a very interesting way... Do you remember how we met?
2
+
3
+ **Katie Gamanji:** The way we actually met was during the End User Partner Summit, which I was involved in at the time. This was an event only for CNCF end users, pretty much; everyone who uses cloud-native, but they don't sell it. And as part of that, we had a networking session, more like a breakout room, where we were just able to maybe interact a bit more with our people... So kind of still getting that networking vibe in a virtual space.
4
+
5
+ **Gerhard Lazu:** Yeah. What I remember is that during Priyanka's happy hour - that was like a session that we did at KubeCon, the European one...
6
+
7
+ **Katie Gamanji:** That's right, yes.
8
+
9
+ **Gerhard Lazu:** \[04:08\] That was it. We had breakout rooms, you're right. So 3-4 people would be randomly picked and they would have a breakout room and then they would chat. In one of those sessions we ended up in the same room, and there was like two more people, I think...
10
+
11
+ **Katie Gamanji:** Yes.
12
+
13
+ **Gerhard Lazu:** We were talking -- I know Splunk was mentioned, I believe... Is that right? Splunk, Redis Labs, something like that, in that conversation.
14
+
15
+ **Katie Gamanji:** There were many things mentioned, yeah.
16
+
17
+ **Gerhard Lazu:** Was Snyk mentioned as well...? So just a couple of technologies were mentioned, and what people use, and how it's going, just like a general one... And then a few weeks later after KubeCon I found out about this course, the Cloud-Native Fundamentals that you've just launched... And in that tweet - which, by the way, will be in the show notes - you wrote that after four months of very early mornings and very late nights and a lot of hard work, it's finally done, and you're very happy for it. And I was thinking "Finally. A cloud-native course that people can take." It's a practical one, one that takes a while, and takes people from nothing all the way to understanding not just the landscape, but how to use specific tools. So a very practical approach, which was sorely needed... Because the cloud-native landscape - let's be honest, it's so confusing, even to those that know it. There's like so many things there. And it's not a bad thing, because it's meant to be big, but how do you start? What is the first step that you take? I think that is less -- my perspective; maybe you disagree. Do you disagree? It's maybe less confusing to you, or...
18
+
19
+ **Katie Gamanji:** Well, let's put it this way - I do remember my journey when I started to use cloud-native tools. This was when Kubernetes had been around for two or three years. It was still very brand new. I do remember actually I had to set it up the hard way, before it was called the hard way... When I had to actually write the systemd unit files, and actually write all of the configuration there to actually make sure that the Kubelet is going to be up and running on the machine. And at the time, I managed to have a two-node cluster, but it took me a lot of Stack Overflow, a lot of waiting for the docs, and a lot of back-and-forth. It was not very concise.
20
+
21
+ Now, of course, the community has been working quite heavily on improving documentation. It's out there, it's splendid, it's in very good condition, it's maintained as well, up to date... But now the problem is that everything is overwhelming. Because when you talk about cloud-native, it's not just Kubernetes; you have so many other tools around it. So what I'm actually trying to outline with this course is more of like cloud-native is a practice; the tools can vary from one organization to the other, but once you understand the fundamentals, once you understand what it actually brings to the table, then you'll actually be able to choose the right tool for your use case. Then you'll be able to explore and maybe even advance some of the technologies further.
22
+
23
+ One of the things I wanted to provide here is the fundamentals in making sure that the cloud-native principles are understood, and everyone will be able to use them to build further.
24
+
25
+ **Gerhard Lazu:** That is a very good way of putting it, because you're right, some people, including myself - I think "Wow, there's so much choice. This is so confusing." But I'm spoiled. I'm spoiled for choice; there's so many approaches, and there's no one better than the other. It's all contextual. So I'm complaining about the choice, but really, a lot of hard work went into creating those choices to begin with. And the reason why there is so much diversity in choosing - not only the diversity of choice, but also the community is very diverse - is because there's so many approaches. So how do you surface all those approaches? And if anything, the cloud-native landscape does a really good job of highlighting and showing all these options, which I think is a great thing to have. So picking and mixing things is very interesting, and that in itself can be a job - curating these approaches.
26
+
27
+ \[08:10\] So in the Cloud-native Fundamentals, in the course, did you do any of this curation, or how did you pick basically the approach that you follow or that you recommend?
28
+
29
+ **Katie Gamanji:** So when I was actually building the course, I really had the audience in mind who is actually gonna take this course. I was trying to make it easier for them to navigate the ecosystem, because as I mentioned, there's a canvas full of different tooling, and you can pick and choose, you can make a great platform. But is it actually something that someone who wants to start with cloud-native needs to know? So I was trying to break it down to the bare fundamentals, and again, explain the principles, what cloud-native is. It is about being declarative, it is about the self-healing capabilities, containerization, you have interoperability that you've mentioned, you have multiple solutions for the same problem. That's why we have such a diverse landscape.
30
+
31
+ So when I was trying to choose the tooling, I was trying to make it easier for the students. At the beginning, I was trying to explain that you have an application. The only requirement a student will need is to have some programming experience, because based on an application, we're gonna move it forward to different phases, the deployment is going to be within a production cluster.
32
+
33
+ So having this application, what do we do with it? We start thinking about its architecture. Is it a good application to be containerized? So we're starting with those mindsets and perspectives around the service. Then we're thinking about "How can we containerize it?" We usually look at Docker. Docker has been there for a very long time; it has been given \[unintelligible 00:09:43.25\] so it's a very good kind of knowledge to have. Once you understand it, even if you use other tools which do this packaging of your application by default - for example we have tools such as Buildah or Podman; they package it for you quite nicely, without you even having to write one line of a Dockerfile. But for the students, understanding how to package it is quite important; that's why I went a bit more declarative when it comes to packaging an application, creating that artifact, the Docker image.
34
+
35
+ Kubernetes by itself was one of the focuses, as well. When you're talking cloud-native, there's gonna be an element of container orchestration, hence we talked about Kubernetes resources and how it actually schedules different resources in applications, how it can expose your application to the wide world using Ingress and services... So pretty much still trying to explain the basics, but not go too far beyond them. So the bare minimum that they will need to deploy an application.
36
+
37
+ The interesting thing, however, is when you choose a CI/CD pipeline, because there you have so much choice around the tooling. One thing that I've purposely done - I've split the CI and the CD into two different lessons... Because most of the people cannot really differentiate the stages within a pipeline. When I was trying to choose the tools, again, one of the things was it has to be open source. So when I was choosing GitHub Actions, again, it's something which is quite well maintained by the community; you have a lot of pre-built actions that you can use straight away... And with Argo CD - again, I wanted to make it easier for students; instead of being in the terminal all the time, I wanted that UI element of deploying and maybe visualizing your resources. So that's why I went with that approach.
38
+
39
+ Again, I was trying to put the bare minimum of "How can you have an application packaged and deployed, automate the deployment process, and have it running within a cluster?" I was trying to choose the tools purposely to kind of fit these fundamentals and make it very easy for them to move forward.
40
+
41
+ **Gerhard Lazu:** Okay, so you mentioned that you split CI and CD... And based on the tooling that you mentioned, the way I understand it is that you use GitHub Actions as the CI, and Argo CD as the CD.
42
+
43
+ **Katie Gamanji:** Precisely.
44
+
45
+ **Gerhard Lazu:** Okay. And why is that? Why did you split CI and CD? That is interesting.
46
+
47
+ **Katie Gamanji:** \[12:00\] I wanted to make it very clear what is continuous integration by itself and what is continuous delivery by itself. Because continuous integration usually focuses on the code - how can you actually integrate the latest features from your application within an artifact? So the end of the CI should be an artifact; I wanted to make that very clear.
48
+
49
+ With the CI you can have, for example, testing in different environments, you can build artifacts for different platforms. However, at the end of it, the result should be an artifact. So I wanted to make that very clear - within the cloud-native space that artifact is just gonna be represented by a Docker image, which will be able to run on any platform that runs containers.
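As a minimal sketch of a CI workflow whose only end result is a pushed container image - the repository, registry and secret names below are placeholders, not the course's actual workflow - GitHub Actions could look like this:

```yaml
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # placeholder secrets
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: example/app:${{ github.sha }}           # placeholder image name
```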
50
+
51
+ So I kind of separated that... And when it comes to continuous delivery, it's how you actually ship that artifact to different environments. So I wanted to make it clear that a pipeline, in a way, should do continuous integration and continuous delivery, should contain all of these stages, but I don't think there is a very good understanding of where exactly continuous integration finishes, and where the continuous delivery starts. I wanted to break it down into, again, bare fundamentals. You can still have two different tools, but you can still achieve a very functional CI/CD pipeline. You don't necessarily need to bring one tool to achieve the end result. You can actually kind of have this puzzle, put two pieces together. And this is something, again, I wanted to maybe accentuate the nature of cloud-native - you have different tools, you put them together, it works... So that's another thing I wanted to highlight.
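On the delivery side, an Argo CD Application is the piece that keeps shipping that artifact: it watches a Git repository of manifests and syncs it into a cluster. A hedged sketch, with placeholder repository, path and namespace values rather than the course's actual configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: store-app                                        # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/store-manifests  # placeholder repository
    targetRevision: HEAD
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: store
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```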
52
+
53
+ Again, it was not the main focus. I wanted, again, to make it clear, to differentiate it... But if we're looking from a different perspective, we can see this interoperability, we can see this diversity of the tooling. We can easily switch the Argo CD, for example, with Flux. We can use Spinnaker instead; we can use any other third-party provider. You can actually change that with GitHub Actions; you can completely \[unintelligible 00:13:50.21\] You can really choose different tooling here. But I wanted to maybe emphasize what are the stages and what is the result of all the stages, and make it simple.
54
+
55
+ **Gerhard Lazu:** That is really interesting, because even though we don't call it out like that to basically build and deploy Changelog.com the app itself, we do something very similar. We use CircleCI as the CI, where the end result of the pipeline is a container image. So if tests pass, if dependencies get built, we compile static assets, JavaScript, CSS, and the end result is just a container image. And for the CD part, it used to be five lines of Bash, which would be like "while true, update service". That was Docker Compose days. That's all it took, literally - a Docker update service with the latest image. And we replaced that with something called Keel.sh. I'm not sure whether it's part of the CNCF, but it runs in the context of Kubernetes, so you deploy it... And first of all, it receives WebHooks from Docker Hub; you can configure it so that when there's an update to the image, it will update the deployment with the latest version. I know that this goes against the GitOps mindset and the GitOps philosophy; I think that's a very interesting topic, which I would like us to dig into... But that's all it takes. And this separation, just to highlight what you've been mentioning - this separation works really well, in that you don't have to change your CI to change your CD as well. They're like two separate things. And whether it's five lines of code or whether it's GitOps, or whether it's something else, it doesn't really matter. The point is you have this choice to maybe change them independently and not have them coupled together.
56
+
57
+ So migrating, in our case, from Circle CI to GitHub Actions is easier than if we had this really huge pipeline that had all sorts of secrets, and it knew where the Kubernetes clusters were running, and it needed access to those... So it definitely works. I can definitely say that it works, and that's why I was trying to understand a bit more why you do that, because it's a very good approach; it's a really good approach to separate the two.
58
+
59
+ **Katie Gamanji:** \[16:06\] Absolutely. Again, here it's just preparing the mindset for -- you actually build your own pipeline. And again, it's more about the internal requirements you have within an organization. If you cannot choose open source tools, then probably you'll need to run your own Jenkins servers, and run your CI/CD pipelines there... And that's absolutely fine, because even if you use Jenkins, you still need an artifact; you still need to deploy it within an environment, be it in a data center... It doesn't necessarily need to be a Kubernetes cluster, but the fundamentals are there. So I'm trying to provide this information that they can reuse in different environments.
60
+
61
+ **Gerhard Lazu:** So I guess this philosophy -- I call it the Unix philosophy, with small utilities, and then combine and compose in infinite ways... Is that what cloud-native is to you? Or is it something else?
62
+
63
+ **Katie Gamanji:** I think it becomes that. What actually draws me to cloud-native is the diversity of tooling. And not necessarily the tooling, but the strategies... Because I've been interacting with many organizations, I've been talking with many engineers, and when you're talking about the infrastructure and their platform setup, not one platform is gonna be the same. Even if they use Kubernetes, the way they use Kubernetes, the way they deploy to Kubernetes, the way they bootstrap the cluster, maintain it - all of these answers are gonna be different for every single organization, pretty much. I haven't seen too much overlap from one setup to the other... And I think this is the beauty of this environment, of cloud-native - the diversity, this inclusivity of multiple solutions. You can actually leverage your product further. You can use very good fundamentals; you have a platform that pretty much can schedule your application, it can take care of it, it can restart it... You have higher abstraction layers, it'll maybe scale your application, and so forth. You have the fundamentals. What you need to think about is how can you further leverage your product, and maybe use the tooling that are right for your organization in terms of budget, in terms of the resources you have; sometimes outsourcing is gonna be the answer, sometimes building everything in-house is gonna be the answer.
64
+
65
+ So it really varies, but the interesting part about all of these platforms - they're different, but they leverage open source at the same time, and they try to contribute... I'm kind of amazed, because this small adoption, this small integration builds up to this organic growth of the entire cloud-native ecosystem, and open source tooling at the same time. This is maybe something very miraculous to observe and actually see how it grows.
66
+
67
+ Thinking about Kubernetes - it's been around for seven years; we're actually marking seven years now. But it completely changed the way we have these application deployment strategies. We had the VMs, and this was the buzz thing maybe ten years ago, but within seven years everything just changed completely. And we have a lot of data, a lot of surveys and reports showcasing this. We see enterprises actually thinking about Kubernetes; they're thinking about multi-cloud strategies. And because Kubernetes is agnostic, you can run it on any compute, it can be any cloud provider; as long as you have some compute, some networking components, and storage, of course, you will be able to build a cluster anywhere.
68
+
69
+ And this is the beauty of it, because we can leverage this and build with multiple clouds, and you can migrate applications quite easily using the same abstraction layer. So the beauty of it is it's a pluggable system; it's interoperable. It's diverse and organically growing, and I think this is something which is quite important, and maybe is something which is not easy to replicate. I don't think any organization will be able to maybe have the same success with an internal tool, kind of growing and gathering ideas from different communities, and build it up... So this is kind of, again, the beauty of the cloud-native, and maybe that's why I'm within this space still.
70
+
71
+ **Gerhard Lazu:** \[19:54\] Yeah. That is a very good answer, and sometimes I think if you take away Kubernetes, what are you left with? So Kubernetes definitely was the center of it all, and many things are being built on top of it and around it... But if you remove it, I think we're starting to see other scheduling platforms. I don't know whether they slowly emerge, but there are other options. I think people sometimes say "You know what, Kubernetes is too complicated." I say "Okay. Well, you can use something else, but the problems that you have to solve will still be more or less the same." So - sure, you can use something else... I don't know, I'm trying to think... If I wouldn't use Kubernetes, I think I would try maybe Nomad out... What would your choice be, Katie, if maybe let's say you couldn't use Kubernetes? What would you use?
72
+
73
+ **Katie Gamanji:** If I couldn't use Kubernetes... That's an interesting one; what is life without Kubernetes...? \[laughter\] I've been working with it for so long I'm quite biased here. But I think it really depends on what kind of applications I have here. I'm actually, again, biased from something that I would like to maybe deep-dive more... Serverless is something which has been extremely beneficial for many organizations. So if I have an application that has to be there for a certain amount of time, it's something which is timed, or timeframed, let's put it this way, then I think serverless is definitely something to have out there.
74
+
75
+ Again, it depends on the organization, but something in me would be like "I don't wanna go the data center way again", because I think -- actually, I wouldn't mind this, but I think this is a space which requires a bit of modernization as well, in terms of how we set it up. If the tooling is right, if the mindset is right, I think it can be a very good setup there as well. If you have an organization that is very restricted, and it has to deal with all the compliance, then definitely a data center is gonna be the way. But yeah, I think my answer here is I don't have one specific platform. It really depends on what I would like to deploy. I think the cloud providers have very good offerings around anything that you'd like to deploy; they have stacks built for you straight away, so sometimes you just need to put your application in and then you're gonna have it running somewhere, as long as -- for example, if you don't care which ones, you don't care about managing that infrastructure, they will be able to run it for you somewhere, and it's gonna be available to the customers.
76
+
77
+ So my answer here is - as you mentioned, there is a very good diversity to containers as well, \[unintelligible 00:22:23.01\] But it really depends on what you would like to deploy; based on that, you'd be able to choose maybe some more specific tooling, something which is right.
78
+
79
+ **Gerhard Lazu:** Yeah. A PaaS - now that you've mentioned, a PaaS may work, for sure... Because you're right, I know there are many efforts to abstract Kubernetes away. It's an implementation detail, as the load balancers - we used to configure them, and now they're just an implementation detail; it's just an Ingress. Same way, maybe you don't need Kubernetes. Maybe what you actually want is just a PaaS... And in some cases, maybe not even that. Maybe you want a serverless platform that you just push your functions and you define your integrations and inputs and outputs and all that stuff, and that's it; that's all you need.
80
+
81
+ So that is really interesting, because I think, again, going back to the choice comment - I think we're really spoiled for choice these days... And then if you're trying to cling on to your bare metal machines - that's okay, there's nothing wrong with that, but I don't think people can be as exclusivist anymore. It's not like "This is the only way and that's it." You have to be blind to not see all the other ways, which - by the way, in some case may work better. You may be spending less time toiling away at your infrastructure and maybe focusing more on the business... I don't know, that's just a thought. I know it works well for some.
82
+
83
+ **Break:** \[23:52\]
84
+
85
+ **Gerhard Lazu:** Imagine, Katie, that this is your first day in a new job. You're leading a team of five or seven developers; you're the lead developer here, and also you have a bit of an architect role... And the brief is to design an online presence, an online store for a supermarket that needs to service mobile applications, web browser, web applications as well... So more about web applications. But you're in charge of building the API, handling the data part as well, and then there are some other frontend teams which maybe build on top of it. And by the way, that may not be a good division, and you would point it out if it's not. The frontend team should be separate from the backend team, I don't know... But how do we build an online store, Katie?
86
+
87
+ **Katie Gamanji:** Well, it depends on which stage of building that store we're in... Because having a first MVP I think is more important than having the full architecture nailed down to every single service that is gonna be covered within this store.
88
+
89
+ So at the beginning, the first MVP - what is actually a store? Of course, we're gonna have a web interface. This is what the customers will interact with. We're gonna have a database, which is gonna be pretty much storing all of our items that we'd like to sell. We're gonna have a backend which pretty much will connect the two things together, so pretty much will take any request coming from the frontend, make sure that it has all the information from the database, and kind of provide back the response.
90
+
91
+ So these are kind of the three "major" things - usually, the frontend, the backend, and then you have a database component as well. This is gonna be the bare minimum. But the fun part starts when the backend actually is not just about getting a request and providing a response back, it's about maybe expanding to different functionalities as well.
92
+
93
+ So when thinking about a store, we have a shopping cart, we can have, for example, different currencies we'd like our prices to be displayed in. You might have different languages, we have different categories, maybe we have even different portals for different stores which are managed by the same company. One is gonna be the grocery store, one is gonna be the clothing... There are so many varied things. Maybe the interface is gonna be the same, but some of the functionalities can change, for example. So you can really go the extra mile and personalize the entire experience for your user.
94
+
95
+ But the starting point, I think, is to have that first MVP. You don't need to segregate everything across three repositories and have everything nailed down on an architectural level. Have it working - a basic frontend, a basic backend, and a database running maybe even locally, and that's it. This is gonna be the monolithic approach. So if this is just for testing maybe, or this is just at the level of the MVP, I think having a monolith is fine at this stage. However, as you'd mentioned, if I'm in a team with a couple of engineers, five or seven engineers, I might have a frontend team, or I might even have the ability to employ new people and create different teams across the organization. In that case, that's when we start to think about "Do we need to split up this application into different components?" And most of the time, the answer is gonna be yes, because you want your application to be resilient. If one component goes down, you don't want the entire application to be down.
96
+
97
+ \[28:12\] For example, if the shopping cart is not working, if that functionality specifically is not working, everything else on the store is still working; people will be able to visualize, maybe they will still be able to get an order confirmation, or do the payment and so forth...
98
+
99
+ So you really want to segregate things. You really want to reduce the blast radius if something goes wrong. And this is where, usually, if the question is "Are microservices good for me at this stage?", the answer is gonna be yes. But again, you have to think "How would you like to split those services?" We've been talking about the frontend, the backend, the databases, but then we can talk about the payment systems - you can go the extra mile there, because you have different payment methods nowadays. You can go with a shopping cart, with the currencies... Again, you can have different teams, different services for all of these applications.
100
+
101
+ The other thing that I want to mention here is that once you have a set of microservices, that is not the end. You always should consider and make sure that you reiterate on your application, on your codebases... Because if for example you've written one microservice using Java, because that was the main resource knowledge that you had within the organization at that time - it was working, it was perfectly fine, however, you would like to optimize.
102
+
103
+ For example, Java is gonna be very heavy on the CPU consumption. And then you realize, looking at your machines, that you would like to save some of that compute... And you're thinking "Is it the right time to rewrite this microservice using a different language?" Actually, there is a capstone project where students have to rewrite something from Java to Python, and they'll observe this difference in CPU consumption.
104
+
105
+ Having these maybe different languages and segregation of services allows you to have independent management of these applications, as long as you have a very well-documented API and you know how different products should call each other, you have a standard; you will be able to pretty much have this independent development on the services.
106
+
107
+ One thing that I wanted to mention as well - I've started to talk about this - you have to reiterate. Something you've been segregating in different microservices can be too granular, so you might think of maybe putting some of the currency and payment microservices together. If you find this too granular, there's too much management... Because once you split these services, you have usually a different codebase, you have maybe a different language, at one point you might have some other teams managing independently... Sometimes having them together is the answer as well. So merging two microservices - that's an operation where you can further exploit an application or a service to make sure that you have a better management of those services.
108
+
109
+ Some of these services are completely stale, so for example you've developed a very - I don't know, maybe a very personalized shopping cart experience that no one is using; that's one of the microservices that is not used, so you might mark it as stale and completely retire it from your cluster and from your application.
110
+
111
+ What I'm trying to say for all of these operations and microservices - it's dynamic. It's not set in stone. So you have an application, you might split it in microservices, but that's not the end. You always have to reiterate and consider what is the best for your application team, for your business and for your organization at the time, and always kind of try to optimize and improve. This is the answer to many of the technology advances that we have nowadays - how can we further optimize it?
112
+
113
+ So it's a journey, but again, one thing that I want to mention here - do try to understand the requirements of your organization. Everything is gonna be driven by those. So if you're an organization that really wants to scale up, and they have all of the resources in the world, maybe you're not gonna think about optimizing the application.
114
+
115
+ \[32:01\] For you, the primary thing is gonna be being available out there; you have enough scale, you have enough resources, so cost optimization is not gonna be something you're gonna look at very frequently. However, for an organization which is a startup, they will be trying to use all of the free tier resources that a cloud provides, for example. You'll have to be very thoughtful about the resources you use. You might choose some of the tooling that is free tier, just because it will get you in a position where you can still run, but be very efficient with the money you have. So really try to understand what you have at the moment and try to build with the resources that you have.
116
+
117
+ **Gerhard Lazu:** What I'm hearing is "It depends."
118
+
119
+ **Katie Gamanji:** Yes. \[laughs\]
120
+
121
+ **Gerhard Lazu:** It depends on your context, always.
122
+
123
+ **Katie Gamanji:** The short answer, yes.
124
+
125
+ **Gerhard Lazu:** Yeah, that's the first thing. But starting with a monolith as an MVP does sound like a sensible approach, especially if you're trying to prove a concept. And then based on that, it depends on how things go. You may decide to break it down into microservices. Where would you run this application or sets of microservices? Do you have a specific preference? We keep mentioning Kubernetes... Would you run it on Kubernetes, or would you use something else?
126
+
127
+ **Katie Gamanji:** I actually have many people asking me that... Some of my friends - they are actually developing products, startups, very small startup companies, and they're asking "Is Kubernetes the right thing for me at the moment?" And usually, what I answer in that circumstance is gonna be probably no, because it's just only two people, both of them developers, they don't have any infrastructure developer or cloud platform engineer within their team... So for them, managing and completely deep-diving into the Kubernetes architecture and management maybe is not the answer.
128
+
129
+ So in that circumstance, I would usually maybe suggest a cloud provider. Again, you have free tiers you can use from different cloud providers... So I would usually recommend that. However, if you are in a circumstance where you have enough engineering resources and you have enough expertise of maybe understanding how to run an infrastructure - not necessarily Kubernetes, but the basics of what exactly an infrastructure is composed of, then probably Kubernetes is gonna be the answer moving forward. Because when we're talking about Kubernetes, it's not just about containers, it's about how these containers are managed, and what kind of leverage you get in a production environment by using Kubernetes.
130
+
131
+ I've been mentioning the scheduling capabilities... For example, you have -- maybe I should introduce Kubernetes very briefly. So Kubernetes is pretty much an orchestration platform for containers that runs across a distributed set of machines. So you can have different instances, and all of them are gonna be put together to run your applications. Now, on which node, on which instance - that doesn't really matter. That's always gonna be abstracted by Kubernetes. So one of the capabilities is gonna be the scheduling.
132
+
133
+ So based on the requirements you have for your application - for example you can choose "This application should have this amount of memory and CPU at all times. This is the very minimum of the resources I need for it to be up and running." The scheduler would take these requirements into account and place the application on a specific node that has all of these resources available.
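Those per-application requirements are what resource requests express on a pod spec; the scheduler only places the pod on a node that can accommodate them. A small sketch with placeholder names and values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend                        # hypothetical service from the store example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example/backend:1.0.0 # placeholder image
          resources:
            requests:                  # what the scheduler uses for placement
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```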
134
+
135
+ Now, the thing is if that particular instance goes down, usually, if you'd be working in a data center, you will need to migrate the application, or you will pretty much need to trigger your load balancer to point to a different data center. Now, with Kubernetes all of this is automated. So Kubernetes will be managing or monitoring the state of your application all the time. It will notice if it's no longer up and running and it will go back to the scheduler, which will put it on a different instance, again, with enough resources and capacity to run your application. So all of these operations are automated... And this is just one of the functionalities it provides.
136
+
137
+ \[35:56\] We have scalability - we have resources which will allow you to scale an application based on different events. For example, if you reach the maximum of the CPU or memory your application can consume, you'll be able to scale further. But now you can actually scale on external events as well. For example, maybe there is a queue you have in -- actually, an SQS messaging queue in AWS would be able to take those metrics, and based on that, maybe scale further. It can really go the extra mile \[unintelligible 00:36:27.04\] personalized scale mechanism. And all of this, again, is automated. You have a declarative representation of your application, so your application is represented as code. It's YAML, it's not necessarily the most readable thing out there, but it's out there, and that's gonna be representing the desired state that you want to have within a cluster... And that desired state is always gonna be fulfilled. You have controller managers which will always make sure that what you want is actually gonna be in the cluster. And it's always in a loop to reach that ideal condition that you defined within the manifest.
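The CPU-based case described here maps to a HorizontalPodAutoscaler (scaling on external signals such as an SQS queue is usually added on top with something like KEDA). A minimal sketch, reusing the hypothetical backend Deployment from above:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU passes 70%
```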
138
+
139
+ So these are just some of the functionalities. I didn't even talk about Ingress and how you actually manage the reachability of your application, how you have this abstraction across a collection of pods with services, and how you can have granular control of how your application serves different HTTP endpoints with Ingress, or how your Ingress can actually point traffic to different services and different applications. So you have a lot of availability out there, and you have custom resource definitions, and you really can go the extra mile. It's a tool that is very customizable. But more importantly, again, it has some of the basics very well set, so you don't have to think about them anymore; you just take advantage of them straight away.
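A small sketch of that Ingress routing - different HTTP paths going to different Services - with placeholder host and service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store                        # hypothetical name
spec:
  rules:
    - host: store.example.com        # placeholder host
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend        # hypothetical backend Service
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend       # hypothetical frontend Service
                port:
                  number: 80
```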
140
+
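As a rough illustration of the Ingress routing just mentioned, here is a hedged sketch that sends two paths of one hostname to two different Services; all names are hypothetical:

```yaml
# Hypothetical Ingress routing two paths to two different Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /api            # API traffic goes to one Service...
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /               # ...everything else goes to another
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```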
141
+ So in that case, if you have a team that would like to run an application within a production environment, would like to kind of take advantage of all of these capabilities that Kubernetes provides, and they have enough resources within an organization to run it, then probably the answer is "Yes, do look into Kubernetes and start rolling it out."
142
+
143
+ **Gerhard Lazu:** I would definitely agree with everything you said from a practical perspective, because even though Changelog is a monolith, the reason why we chose Kubernetes is that it takes care of certain details in a very elegant way. We can declare everything from certificates, to DNS, to load balancers, to even cron jobs; it has even the concept of cron job. And these are just like the built-ins, never mind the specific custom resource definitions (CRDs). So you can enrich it in so many ways... And it's a really nice tool to work with, which seems to do very many things really well out of the box. Maybe some of them you won't even need. You have policies, you have the whole network policies, the built-in security model - I forget what it's called; I think OPA is one of them, the Open Policy Agent. So you can define certain constraints, certain requirements that need to be present...
144
+
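Since cron jobs came up as one of the built-ins, a minimal sketch of one follows; the image and schedule are invented for illustration:

```yaml
# Hypothetical CronJob; batch/v1 is the stable API, batch/v1beta1 on older clusters.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 3 * * *"          # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: example/report-generator:1.0.0
```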
145
+ So what I'm saying is it scales really well, so you can do so many things that would be very difficult to do on a different platform, and it just takes a lot of resources, a lot of knowledge, and a lot of time. Now, the good thing is once you learn it, it applies anywhere Kubernetes runs, because it's literally the same API; a few differences, but the same API. Maybe the persistent volume that you get is slightly different, and the load balancer has some extra things based on the platform that you choose. But the language is the same. So you have this unified API, and it just makes things happen, which is amazing. And not just once, continuously. So that's great.
146
+
147
+ So you're right, you can have a single app and still get a lot of mileage out of it if you want to or can afford to invest that time. Otherwise, maybe a platform-as-a-service. Maybe that's going to be all you need. Maybe something like Heroku, or Cloud Foundry, or I think Render - that's like the new version of Heroku... Different options like that. So there are options.
148
+
149
+ Now, you wouldn't start with microservices to begin with, would you?
150
+
151
+ **Katie Gamanji:** Probably... It depends on the scale. But if I have a running MVP that's a monolith, I'm happy to move it forward and create this automated pipeline for it, if needed.
152
+
153
+ **Gerhard Lazu:** \[40:15\] How would you get updates out into production? What would you use for that? Let's say that you have a monolith... What would you choose to get updates out into production?
154
+
155
+ **Katie Gamanji:** So here's where I would actually have this pipeline. I remember wondering what a pipeline is, because when I first heard about it, I was an intern at the time, and there was this magical pipeline that can push changes to production, and sometimes it can take days, because you have [freeze](https://en.wikipedia.org/wiki/Freeze_(software_engineering)) changes and so forth. But a pipeline pretty much is a mechanism that will be able to roll out changes that you have within an application to the production environment. Ideally, that's gonna be automated. And this is what is nowadays known as continuous integration and continuous delivery. So CI and CD.
156
+
157
+ With CI and CD you usually have different stages that you would like to go through. So once you have your application, you've developed a new functionality. The next thing is actually to have some tests. I think this is quite a natural thing to do if you want to have something secure in production. It's very often overlooked, but definitely do write your tests, and actually do think about what gaps you might want to catch before pushing to production. Some of them are quite easy. Maybe some linting is gonna be the answer, maybe syntax checking, and so forth. So there are tools that do that for you, so do look into integrating those tests.
158
+
159
+ So you have the application, you test it, it kind of passes everything you've been writing out there... The next stage is to package it, building that artifact. When we're talking about an environment where we have data centers, usually the artifact is gonna be a binary. And it can have different formats as well, depending on where we run it, on which operating system, and so forth.
160
+
161
+ When we're talking about cloud-native, there's gonna be a container image, so usually a Docker image. And what's very good about the Docker image is that you can have a set of instructions building your binary or your artifact. And that's something, again, declarative. You can reuse that, you can change it accordingly, or if you don't want to use Docker for example, you can - as mentioned before - use a tool such as Podman or Buildah, which will build the container image straight away for you.
162
+
163
+ And once you have this image, usually you will need to store it somewhere. That's gonna be, again, different environments; it's gonna be an Artifactory. With cloud-native you'll be able to use something like Docker Hub, you can use Harbor, you can use Artifact Hub currently available... So there are options to store your image out there.
164
+
165
+ So all these stages, like building your functionality, testing it, packaging it and distributing it - this is gonna be the continuous integration. So you've integrated a new functionality and your end result is gonna be a binary.
166
+
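A hedged sketch of those CI stages as a single GitHub Actions workflow - lint and test, then package the container image and push it to a registry. The `make` targets, image name and registry are assumptions made for illustration, not something from the course:

```yaml
# Hypothetical GitHub Actions workflow: lint/test, then build and push an image.
name: ci
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Lint and test
        run: |
          make lint
          make test
  package:
    needs: test                     # only package once the tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build container image
        run: docker build -t ghcr.io/example/demo-app:${{ github.sha }} .
      - name: Log in to the registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Push container image
        run: docker push ghcr.io/example/demo-app:${{ github.sha }}
```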
167
+ Now, the next stage of it is "How do I push this binary? How do I push all of these changes to the production environment?" And this is where we have the continuous delivery. With continuous delivery we usually have to pretty much propagate the application through different stages. When we're talking about an organization that, again, has resources, usually there are 3-4 environments that you'll need to pass it through. The first one is gonna be the dev environment, you might have a QA as well - debatable; some companies do, some companies don't - definitely have a staging, and then the final one is gonna be the production.
168
+
169
+ The reason you actually have all of these environments is to verify your changes along the way - and more importantly, they should be set up similarly. So the difference between them is just maybe the endpoint you use to reach that cluster, so the API endpoint. But everything else in terms of the internal setup is gonna be the same.
170
+
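One common way to keep these environments identical except for a few details is an overlay per environment; a minimal kustomize sketch under assumed directory names, where only the namespace and image tag differ from the shared base:

```yaml
# Hypothetical overlays/staging/kustomization.yaml; the base holds the shared manifests.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: demo-staging
resources:
  - ../../base
images:
  - name: example/demo-app
    newTag: "1.0.0-rc1"   # only the tag (and namespace) change between environments
```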
171
+ \[43:54\] So what you actually do within the continuous delivery process is propagating it from one environment to the other... So QA, staging, and production. Passing through all of these stages, the result should be the same; the application should be up and running. So you have at least two opportunities to verify your application, and how it responds to the other components within the cluster. It's not just about the application running, it's about how it affects other components within a cluster. And if it doesn't, and everything is fine, even better.
172
+
173
+ So once you reach the production stage, this is pretty much the continuous delivery process, and hopefully it's gonna be up and running all the way through. So these are pretty much all of the stages that we have.
174
+
175
+ **Gerhard Lazu:** Yeah. That's a good one. Argo CD is what you're thinking for CD?
176
+
177
+ **Katie Gamanji:** Yes. I've actually had to battle \[unintelligible 00:44:41.26\] project manager, because they wanted to use a more traditional tool here... But I was very set on maybe promoting, or maybe -- not necessarily promoting, but advocating for the GitOps strategy. It's something which is there, and it's been a buzzword for the last year for the practitioners and the experts within the industry. I think they are completely tired of hearing this term. However, for students that are just starting their cloud-native journey, I think it's important to maybe set up the fundamentals of what GitOps is.
178
+
179
+ Now within the cloud-native space we have Argo CD and FluxCD providing these capabilities. Both of them are currently incubating CNCF projects, and Argo CD is currently undergoing a \[unintelligible 00:45:27.01\] vote, which means it's stable, it's been used by hundreds of customers, it has a very healthy community, it has a very healthy development velocity and so forth, so it's a healthy project, it's out there.
180
+
181
+ Argo CD, actually - the reason I've picked it is mainly because it has this Web UI, so it will be easier for students to visualize their resources. Because once you have a cluster, the only way for you to interact with it is gonna be through the CLI, the command line... Which is still something not very comfortable for someone who is starting to understand Kubernetes. So I wanted to provide extra support, visual support, for them to visualize those resources.
182
+
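For readers who want to see what the GitOps piece looks like in practice, here is a hedged sketch of an Argo CD Application that keeps a cluster in sync with a Git repository; the repository URL, path and namespaces are hypothetical:

```yaml
# Hypothetical Argo CD Application: continuously sync manifests from Git into the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/demo-app-config
    targetRevision: main
    path: overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: demo-staging
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted in Git
      selfHeal: true   # undo manual drift in the cluster
```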
183
+ So that was the main reason, because I think it's gonna be easier for the intended audience here... But that doesn't mean that one is better than the other. It's pretty much - in the context, I think it's the best tool at the moment.
184
+
185
+ **Break:** \[46:15\]
186
+
187
+ **Gerhard Lazu:** That's really interesting, because you're right - between FluxCD and Argo CD, the reason why I also prefer Argo CD is that visual element. I think the UI is really nice. Not only that, but I'm seeing that other projects, like for example Kubeflow Pipelines, which is about machine learning - they also use Argo workflows behind the scenes... So now we start seeing that other projects are building on top of projects which were not meant to be used like that, but they're really flexible and they work really well, and they have nice UI elements; you just get the UI for free, so it makes sense for example for machine learning to use something where you can see a UI. I think that's really powerful.
188
+
189
+ \[48:10\] So it just goes to show that sometimes people use a tool in unexpected ways, which are good, and which many people like... And this is where something new and unexpected just happens. Nobody planned for this to happen, but it's a good thing. So yeah, another vote...
190
+
191
+ **Katie Gamanji:** Yeah, \[48:26\] responsive to customer feedback here. I definitely agree on this one, yeah.
192
+
193
+ **Gerhard Lazu:** ...yeah, another vote for Argo CD from here. And for CI, I think that's maybe less important... And the reason why it's less important is because, first of all, people have been doing this for such a long time, so you may already have a preference, so whatever you're using is fine. GitHub Actions is there, and I think that's what you're recommending in the course for the CI part, because it's just so simple...
194
+
195
+ **Katie Gamanji:** Yes.
196
+
197
+ **Gerhard Lazu:** I mean, you have to store the code somewhere, and then wherever you're storing the code, having the CI part as close as possible to that I think makes a lot of sense. So that's like fairly easy. And for those that use GitLab - well, you already know, but you're taken care of, so that, again, doesn't really matter; you know what to do. So that's really interesting.
198
+
199
+ Okay, what about monitoring, telemetry, logs, traces, events, any such things? Would you introduce that at these early stages, or would you just maybe mention a couple? How would you approach this?
200
+
201
+ **Katie Gamanji:** So this is a very good question, because one of the things that I'm trying to, again, advocate for is - as an application developer, you're still gonna need to understand your infrastructure; you need to know where it's gonna be pushed, or where it's actually gonna be running and executed. This is actually quite important - I'm talking within the Kubernetes context here. When you have an application, it's quite important to have these readiness and liveness probes, which can automatically restart your application if something goes wrong. So instead of someone waking up in the middle of the night and doing that, the cluster will be able to do that for you, as long as you have a health check endpoint out there.
202
+
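A minimal sketch of the readiness and liveness probes described here, shown as the container portion of a pod spec; the image, port and health check paths are assumptions:

```yaml
# Hypothetical container spec fragment with readiness and liveness probes.
containers:
  - name: demo-app
    image: example/demo-app:1.0.0
    ports:
      - containerPort: 8080
    readinessProbe:              # gate traffic until the app reports ready
      httpGet:
        path: /healthz/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:               # restart the container if this keeps failing
      httpGet:
        path: /healthz/live
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```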
203
+ So what I'm doing at the beginning of the course, and kind of making sure that everyone understands, is be aware of where your application is gonna be executed. There I'm talking about the health check endpoints. I'm talking about the metrics endpoints, if you want to export any application-specific metrics. I'm talking about logs - be sure that you're actually logging at different stages within your applications, different functions, when they are called and so forth. I'm even talking about traces as well, because some of the tracing tools out there, or some of the APMs at the moment - they require libraries to be used within the application to have this super-fine, granular representation of how a request is actually resolved and how a request is actually getting the response from all of the functions that are called, and so forth, so you can actually build all of these components together and have the full journey.
204
+
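As a rough illustration of exposing a metrics endpoint for scraping, here is a hedged pod template snippet using the common `prometheus.io/*` annotations; note that these only take effect if the Prometheus scrape configuration is set up to honour them, and the port and path are assumptions:

```yaml
# Hypothetical pod template metadata; the annotations are a convention, not a guarantee,
# and must match how Prometheus is configured to discover scrape targets.
template:
  metadata:
    labels:
      app: demo-app
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9100"
      prometheus.io/path: "/metrics"
```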
205
+ So I'm talking about all of these components and kind of making sure that the students understand them. However, these are gonna be covered even further in the next course - I think it's gonna be course three - which is gonna be focused solely on observability. They will talk about Grafana and Prometheus for metrics collection and visualization, they will talk about Jaeger for tracing, they will even touch upon how you can actually build these dashboards and panels, making sure that you have a very good representation of what's going on inside your application. These, again, I'm just covering as beginning fundamentals; it's about making sure that you understand what it is about... But they're gonna be used throughout the course, and in the capstone project, which is at the end - and this is something that I've developed as well - the students will really need to be very thorough in building their dashboards, because these are gonna be quite crucial for them to compare things like the CPU consumption of a microservice that has been rewritten in a different language. So they'll have plenty of chances to practice their observability skills as well.
206
+
207
+ **Gerhard Lazu:** \[52:01\] That's a very good one... Because people don't think about that, and I think based on the platform that you choose, it can be either very easy, or very hard. Once you're really well into your journey and you think you have it, and everything is looking good, then you discover, "Oh, hang on... So now I need to understand how my application behaves?" Yes, you do. That's something that you should keep in mind from the beginning. And based on the platform that you choose, it can be either very easy, or very hard.
208
+
209
+ So sometimes it can be straightforward, and even then, the straightforwardness is in the approach... Because there are those complexities associated with what you care about, what your application does, how it's structured, or your microservices, how they're structured, and it's all very contextual. So it's very difficult to build something that is generic. I mean, you could have an APM, and that may be good enough, and you would have some traces, but is that sufficient? Maybe it is, maybe it isn't. I don't know. That's where it depends. It basically starts to depend more on the context, because it's less generic. So that's the first thing - instrumenting your code starts becoming more important, and only you know where to put those instrumentation calls. Nobody can tell you, because it's your code. So there's that.
210
+
211
+ **Katie Gamanji:** I completely agree on this one. And one thing that I want to mention here is that this need of understanding where your application runs is kind of bringing this need for the DevOps practice. I know it's been completely consumed as a topic, but for students that, again, are on the journey of understanding cloud-native - maybe they have been a programmer, they have been training to be a software developer, but they look into cloud-native and they wanted to transition... I think it's important for them to understand that DevOps is not a tool, it's not something physical, it's a culture. It's pretty much as you've mentioned - as an application developer and as an infrastructure engineer, how do we leverage the product further?
212
+
213
+ So this collaboration, for example, using a particular tracing application, or you need to kind of run or integrate some libraries to collect those logs, so you actually can visualize them in -- I don't know, it can be for example in Splunk, or Datadog; it depends on the tool of choice that you have within the house... So this collaboration is all about making sure that you have this full transparency of what's happening at the application layer, and the infrastructure team will be able to leverage that with the tooling that they provide.
214
+
215
+ **Gerhard Lazu:** Okay. So as we're approaching the end, I hope that you really enjoyed what Katie's been saying, because I have... And the course is currently free, or will be free...? There's something like "free" in the tweets; I think it's not very clear. Some people are getting confused about that part, so can you help us, Katie, understanding it?
216
+
217
+ **Katie Gamanji:** Absolutely. This is something that I'm clearing up myself as well. So the way I've built the course, it's supposed to be free. However, it's part of a wider nano-degree - so I'm just kind of providing a fourth of the entire nano-degree. So there are four courses; the first one is going to be Cloud-native Fundamentals, the second one is gonna be Message Passing, so we kind of talk about gRPC and Kafka, the third one is gonna be Observability, and the fourth one is gonna be Security. So I'm teaching the first one.
218
+
219
+ So this is kind of a wider nano-degree put together... So my course is free, but at the moment it's not yet available as standalone free material. It is gonna be available later this summer; unfortunately, it's taking longer than needed. But this is because currently we have 15,000 students doing the first course that \[unintelligible 00:55:37.24\] Once they finish this course, they will be able to pretty much open it to the wider community as well.
220
+
221
+ \[55:46\] So it is gonna be free, it is intended to be free. I am not charging for it at all. I've built it purposely to -- part of my motivation to make cloud-native ubiquitous is making it accessible and available to everyone... Even someone who doesn't do any technology at all - I hope that with some programming experience beforehand they will be able to enroll in the entire course as well.
222
+
223
+ So currently it's not yet available, but once I have the links, I'm gonna share it and make sure that everyone will have it. I'm actually looking for further feedback from everyone as well.
224
+
225
+ **Gerhard Lazu:** Thank you. So that's great. Maybe by the time you're listening this, the course will be free, or just like a week or two weeks away from becoming available, so you can take it. And that is the free part. I'm sure that there will be a course that people will be able to pay you for it if they want to, right? Hopefully? No? There should be a paid version as well, right?
226
+
227
+ **Katie Gamanji:** Yeah, the entire nano-degree is paid. That's why people are currently asking "Where is the free material?" So the entire nano-degree actually has a price; so if you wanna take it as part of the nano-degree, maybe your organization already has an affiliation to Udacity, which means you don't have to pay for it... Which is great, because many organizations have these training programs internally, they have a lot of contracts with these vendors as well... So maybe you'll be able to take it completely free because it's already paid for.
228
+
229
+ However, if someone would like to do it, you'll have to take the entire nano-degree and pay for it. That's the only option. Once it's available as a standalone material, I'm gonna be able to share it free for you... Because the feedback so far I've been receiving - it's actually great, because I've been developing this course starting November 2020, and I finished it in January 2021... So it's been quite intense, I would say. It's been four months from the beginning to the actual end for me working on it, and now it's been half a year since and actually I can see that the students do find it useful. It's been difficult for me to realize the impact.
230
+
231
+ One of the motivations, again, for me, is to grow the next generation of cloud-native practitioners, to make it easier for them to transition within a role that has cloud-native elements. And I've been developing this, but I haven't seen any results. Now I'm actually starting to see those, and it's really delightful to see students from across the entire world sending me messages on LinkedIn, and requests, and being like "The material is great. I really understand everything you're saying. I would like to learn further from you, that's how I would like to connect." So it really inspires me to -- again, it's been a great work that I've been doing, and I really hope it actually reaches as many students as possible.
232
+
233
+ **Gerhard Lazu:** Thank you, Katie. That sounds wonderful. Thank you for putting in the time, for caring enough about this, because it is important, but maybe many people don't realize just how important it is... As time goes by, I'm sure this will become even more and more relevant, and people will enjoy that such great materials exist... So thank you, Katie, for taking the time.
234
+
235
+ If you have not heard of this course, go and check it out. It will be in the show notes. Give Katie feedback, what you liked, what you didn't like, how she can improve it, because there's always scope for that, to improve, to make it better... But I think you will really enjoy what's already out there. If you just look at the blurb, the description, there's a lot of very valuable content.
236
+
237
+ Katie, it's been a pleasure. Thank you for sharing this with us. I look forward to speaking with you sometime soon.
238
+
239
+ **Katie Gamanji:** Thank you for having me. There is one last thing I would like to mention - taking the course is just the first step. One thing that I'm actually calling out at the end of the course is "Do reach out to the community", and I'm expecting everyone in the cloud-native community. I think this is an action that I would like every student to take, if possible. Being part of the community does not necessarily mean writing a thousand lines of code and being out there. It's being present, it's sharing your experience of using cloud-native, it's reaching out to the community... So the community is out there, and we are expecting you as well, so please do reach out, and... Yeah, let's grow the next generation of cloud-native practitioners. That's the high purpose.
240
+
241
+ **Gerhard Lazu:** Thank you, Katie.
Cloud-native chaos engineering_transcript.txt ADDED
@@ -0,0 +1,215 @@
1
+ **Gerhard Lazu:** If the network is not reliable, and they thought that it is, they will be in for a surprise. Unless you've had some network outages, or some packet loss - even at your home - you always think it will work, that there won't be any problems. Well, it doesn't work like that in reality. Disks fail all the time. The more infrastructure you have, the more you realize just how often disks fail.
2
+
3
+ CDNs fail all the time. In episode \#10 we talked about how Fastly failed for a few minutes and half the internet went down. That was an interesting one. So how do you know that the beautifully-crafted code that you ship continuously, it's test-driven, it's beautiful, it gets out there - how do you know that it'll continue serving its purpose when failures happen? Because they will happen. I think, at least in my mind, that's where chaos engineering comes in. But what is your perspective, Uma?
4
+
5
+ **Uma Mukkara:** That's right, the chaos engineering - it was being viewed in a little bit different way earlier. It was purely for stopping the outages; SREs were tasked with the - you know, "I tried everything, but now can you do something? You are a super-SRE." "Okay, let me do chaos engineering." But I believe it's changing.
6
+
7
+ \[04:15\] Chaos engineering is a little bit more than a tough piece that is meant for super-SREs. Now chaos engineering is more of a good and easy and a must-have tool for DevOps, as long as you're trying to improve something on reliability. You're right, it's about reliability; nothing is reliable, not just networks. Almost anything that you deal with, different software is unreliable. It is reliable to some extent, but not a hundred percent.
8
+
9
+ So I believe chaos engineering is still an evolving subject, and it has evolved in the last few years from being purely a fascination for an SRE, as an expert subject for an SRE, into more of a must-have, good-to-have tool for all sorts of roles, ranging from SRE, all the way to developer. That's my observation at least.
10
+
11
+ **Gerhard Lazu:** Okay, chaos engineering is important. It's an evolving topic, it's a field that changes quite a bit... It's chaotic (pun intended). So why is it important, Karthik? Why is chaos engineering important for shipping code and writing code? What is the link there?
12
+
13
+ **Karthik Satchitanand:** I think you mentioned Fastly going down for a few minutes, and that took half the internet down with it... And I'm sure it cost a lot. Downtimes are extremely costly. You would want to avoid them dearly. And there is enough motivation for you to test how reliable your systems are.
14
+
15
+ Like Uma mentioned, it's not something that you only do in production, though that is, to be fair, where the benefits of chaos engineering have been most realized over the last decade or so... But it is important that you go ahead and test your systems, because there is so much changing in your deployment environment all the time.
16
+
17
+ In today's microservices world, the application that you're deploying in your deployment environment - it could be Kubernetes, or it could be somewhere on the cloud - there are so many other moving parts that you depend on to give you that wholesome experience for the user. Things that help developers, that support SREs, and things that are viewed by the end user. There are so many components in the deployment environment which cater to different audiences and help run the entire system... It's possible any of those components may go down, resulting in varying degrees of degradation in user experience. It could inhibit your support team from serving customers better, or the customers might have a direct impact, not being able to use your service. That is something we would like to avoid.
18
+
19
+ Chaos engineering is a lot about learning your systems as well. Many times we assume certain infrastructure aids while developing code, which turn out to be untrue when you're actually deploying it... And you really want to know what's going to happen when things fail in the infra side. So yes, I think that is really about chaos engineering, as to why that is important.
20
+
21
+ **Gerhard Lazu:** So taking that, how do you chaos-engineer a CDN? That's just one that you have in your system... How do you apply chaos engineering principles to test the resiliency of your CDN? Can you do that even?
22
+
23
+ **Karthik Satchitanand:** \[08:07\] I think ultimately you would host your CDN on infrastructure that you're either putting in your own data centers, or on the cloud. So ultimately, anything that's powering software is built on some platform... And you could go ahead and start off by checking what happens when something fails on the platform. It could be a disk, it could be a network, it could be some resource exhaustion that you're seeing on the platform that is hosting the CDN. And I'm sure when you're building the data network, you're still ensuring that data is spread across different machines, different regions or areas, and you're somehow building some amount of resiliency into how the data is served \[unintelligible 00:09:00.20\] end users as part of a CDN.
24
+
25
+ So you can check the extent of high availability that you have built in by targeting some very simple infrastructure faults. I would say that would be a good starting point.
26
+
27
+ **Gerhard Lazu:** Okay. What do you think, Uma? Anything to add?
28
+
29
+ **Uma Mukkara:** I mean, the CDN is a complex topic. Which part of a CDN are you talking about? Delivery, your networks need to be reliable, your supporting infrastructure needs to be reliable, and the software that runs the CDN needs to be reliable.
30
+
31
+ The idea of applying chaos engineering to your CDN is to improve something that's already mostly reliable. Today, a CDN is reliable. We all work on the internet. But it is services like Fastly going down once in a year, or even less often.
32
+
33
+ **Gerhard Lazu:** Yeah, the first time in five years for us.
34
+
35
+ **Uma Mukkara:** \[unintelligible 00:10:02.10\] "Hey, something has happened, even though I applied chaos engineering to it." In reality, it's not that simple, in my opinion. Site reliability itself is engineering. Chaos is engineering. So engineering comes with understanding what's going on, and there's no unique way of saying that this is exactly how I'm going to fix it. It's going to depend on what the problem is in that given situation.
36
+
37
+ So I would say you can apply chaos engineering not just only to a CDN - to any other system, but really looking at the way the services are architected or deployed. And look at the services and see, "Is there something that I can see as a low-hanging fruit that's either doubtful about reliability, or constantly causing me trouble? Let me go and attack that, debug that." Then the one way to debug that is "Can I actually introduce a fault on the scene?" So you need more ways of reproducing the faults, and then you go to your SREs. SREs generally go and try to fix stuff create quick recovery points, or try to avoid that dependency on that failure... But really, you need to go back to developers to really fix the root cause of it.
38
+
39
+ So if you ask me to summarize the whole of chaos engineering for a CDN, it needs to be at different levels, in infrastructure - and infrastructure again is storage and network. If I recollect some of the scenarios that I heard of, it's always about slow storage that all of a sudden caused a bigger issue, when that storage slowness had never happened before. Or networks - usually they are very tolerant in terms of faults, but still, double/triple faults can happen.
40
+
41
+ \[12:18\] So one is about verifying how reliable your infrastructure dependency is. Try to introduce some slowness intentionally and keep verifying that your CDN continues to work. That's one level. The other level is to take a look at your services and how reliable you are, and then if networks go slow, or storage goes slow, do you have software that is reliable enough to switch over to something else, or do something more proactive to continue serving the data.
42
+
43
+ So as I said, it's engineering, and that's why we need good tools for site reliability engineers. That's chaos engineering.
44
+
45
+ **Gerhard Lazu:** That makes perfect sense to me. So if I had to summarize what chaos engineering is in one short sentence, to me that would be the injection of artificial faults - they're not real, they're artificial; they're made, we make them - to see how the system as a whole reacts to those artificial faults. Would you refine that, add something more to that? What is chaos engineering to you in one short sentence?
46
+
47
+ **Uma Mukkara:** I'll probably take a crack at it, and I think Karthik can give probably a better answer. I usually separate chaos and engineering as two different words in my mind. People always think chaos engineering is chaos. To me it's easy to introduce chaos. Of course, you have now better tools. It's faster to introduce chaos, but I would give more preference or more importance to the engineering side of chaos engineering. It is always about what should happen when you introduce a fault. A very simple fault, a very simple service, if it fails, how you react to it is always well tested. Your devs, your SREs, user acceptance tests... We are living in the modern day; all those systems are now very modern.
48
+
49
+ But failures do happen because something unexpected, untested has happened, and now we are looking at chaos engineering as a way to unearth those faults in a willful manner. So what is chaos engineering? In that sense it's: when a fault happens, what should you look for? How do you actually search for a fault? So that's the steady-state hypothesis. I go and look at what my steady state is; you can look at just one service, or look at many services together. And if you define a steady-state hypothesis that is closer to your business or a business loss, then you will come to chaos.
50
+
51
+ The tools and the strategy design should go towards thinking more on the engineering side of "How can I avoid a certain loss?" or "How can I unearth a complex scenario or a complex faulty scenario?" and then I can split that scenario into multiple \[unintelligible 00:15:37.14\] And then that becomes easy, actually. So it's engineering, that's the way I look at it.
52
+
53
+ **Gerhard Lazu:** I'm really looking forward to Karthik's answer, but before that, I would like to ask you, Uma - how do you look at a system? How do you look at the steady state of a system? What do you use?
54
+
55
+ **Uma Mukkara:** \[15:57\] I would generally define the system in the minds of people who are the \[unintelligible 00:16:03.27\] and what keeps SREs and the management of the SREs up at night. So it's something that is closer to business criticality, the service. So that's what the system to me is. It is not really about the technical stack; technical stack comes later, and that's where we introduce chaos. But the system really is about service and service catalog, hierarchy of services, dependency of services. This is what the system is.
56
+
57
+ So I would go ahead and define that map, and identify the criticality points, and then start thinking - if I manually introduce a fault, what will shake up, what can loosen up or what can fail, and who will wake up first before the customers start screaming too much. So that's what the system is in my view, the thing you're going to apply chaos engineering on.
58
+
59
+ **Gerhard Lazu:** So what I'm hearing is that not only do you need to know all the services that make up a system, but also what it means for end users to be happy when it comes to using that system? So you define all the services that make up the system, and also what healthy means for every single component in the system. And that is your steady state. Steady state is defining what happiness means for your end users and capturing that somehow - I imagine dashboards, metrics, logs... No?
60
+
61
+ **Uma Mukkara:** Yes. Again, it depends on how evolved or structured the system is. It's really about good dashboards - if you have them, and you're using a good service-level objective scheme, then you have a system that you are looking at. And if you're only measuring how often the faults are happening, and if you are really depending on how happy your customers are as a general metric of how reliable your systems are, then you are in for a surprise. Yeah, you should have a good schematic of the service-level objectives.
62
+
63
+ **Gerhard Lazu:** That is a great answer, very complete. A lot more comprehensive than I was expecting, but it was very, very good... Which comes back to Karthik. The question was - I know we talked a lot, so let's restate the question... The question was "What is chaos engineering in one short sentence?"
64
+
65
+ **Karthik Satchitanand:** I think we are living in the times of the pandemic, so let's call it "Injecting harm to build your immunity." Just that instead of injecting harm into human beings, we're doing it on systems. So I would define chaos engineering as that. Uma made a good point about the steady-state hypothesis - I think when Netflix and Salesforce and Amazon, all these folks put together the principles of chaos a long time back, that remained the central piece of the discipline of chaos engineering, along with recommendations to try different kinds of faults, and run chaos continuously... Because you never know when the system behaves in what way, because of what change was induced into it.
66
+
67
+ So yes, I think chaos engineering is a lot about scientifically trying to understand or mapping user happiness to metrics and logs and events; steady state can be very diverse, and in today's age, that diversity has just increased. You could be talking about metrics, you could be talking about the availability of some downstream service, or it could be something on your clusters. So we are talking about resources in Kubernetes, it could be the state of a resource... And there are custom resources that extend the traditional Kubernetes capabilities to a lot of domain-specific intelligence, so being able to validate that info is also part of steady state...
68
+
69
+ \[20:17\] So I think yes, chaos engineering is about willful fault injection, like you mentioned, Gerhard. Artificially inducing faults in order to verify how the system is behaving, and have good means of identifying the \[unintelligible 00:20:30.16\] steady state, and checking whether it is within tolerable limits or no.
70
+
71
+ Then it's all about doing it continuously, then going back to the drawing board, fixing your application, business logic, or maybe your deployment practices, coming back and \[unintelligible 00:20:47.09\] proceeding with the next possible outage that you can think of.
72
+
73
+ **Break**: \[20:54\]
74
+
75
+ **Gerhard Lazu:** This doesn't happen often, but I was talking to one of our listeners, Patrick F. in Slack, and he has a question - more like a suggestion - which I think is a very good one to bring up in this interview, in this conversation, in this episode. Patrick is saying that he would love to hear about practicing inefficiencies or applying non-best practices in small doses. I know it's not exactly the chaos engineering that we discussed, but I can see an overlap between doing the wrong thing on purpose and chaos engineering. What do you think, Karthik?
76
+
77
+ **Karthik Satchitanand:** I think it makes sense, and I think this is especially true when you're trying to find out how good your security systems are. There's an entire new category, or a subcategory within chaos engineering for security chaos engineering, which people are trying to find out how reliable their systems are in terms of security by introducing some vulnerabilities deliberately.
78
+
79
+ I can relate a lot to Patrick when he says running things in the non-best practice way. You can run privileged containers, mount \[unintelligible 00:23:05.27\] and basically try and see how your system behaves; is it being called out? Do you have the right policies that restrict you from doing so? These are things that you would want to find out, and not just for security. I think that's probably one thing that comes to mind straight away... But even for other scenarios maybe. We talked about running single replicas of applications.
80
+
81
+ Sometimes you would want to see what is the recovery time of your app. Let's say you were not running multiple replicas of an application; you were just going with a single replica, and there was a failure. You might want to figure out how best or how quickly you're able to recover. Maybe reschedule and bring up once again, register \[unintelligible 00:23:52.02\] and then start serving data once again. How quickly does this happen?
82
+
83
+ \[24:00\] Sometimes you might want to run in modes that are not classified as the best practices. You would still learn a lot about your system by running that way. So that's something that should be done, but most probably on staging environments or development clusters, because you would not want to attempt this in production... Because these are things you would still learn anyways while you're running it even in a non-prod environment.
84
+
85
+ **Gerhard Lazu:** Anything to add, Uma, to that?
86
+
87
+ **Uma Mukkara:** Yeah, it's actually a very interesting question... You were saying Patrick is asking "Should we implement non-best practices or inefficient practices?" I'm saying the same thing when I say chaos is a best practice. It's a must-have. That really means that you in turn use non-best practices in production \[unintelligible 00:24:53.02\]
88
+
89
+ So your best practice is to do everything right. Chaos engineering says "Break something. Don't assume that everything will just work." So the best practice is to have chaos engineering. That means the best practice is not to always follow the best practices that you are asked to follow. And the result of breaking things on purpose, of willful fault injection, is that you will improve your best practices. That means you did follow some non-best practice, and that unearthed something, so you tuned your best practices.
90
+
91
+ So I would say he is 100% right, and he's just put it differently. We are putting chaos engineering as a more polished word, but it's an absolute thing. No one can tell everything will work well.
92
+
93
+ **Gerhard Lazu:** I always keep going back to how many learnings I personally used to take from fire drills, or even Red Team Thinking. That was a very powerful one. But taking a step back and summarizing this - you tend to learn more from failures than from successes. So when you fail, there's a lot of learnings there. When you succeed - sure, but maybe it doesn't feel as significant. Maybe also because of the loss bias, I think. When you lose something, it feels worse than when you win something. I think it's rooted in that loss feels bigger, like "Oh, what?! My database was deleted?! Oh, no!" Versus "The migration just worked. Sure, it's okay. No big deal." I think that's the way to think about it.
94
+
95
+ Okay, so that was a good one... Hopefully, Patrick got what -- well, not what he was expecting, but got something good out of this. Now, I would like us to go into a specific use case, and I keep bringing this one up... The Changelog.com application. We are in a unique position to be able to experiment and learn new things in the context of the app that runs all our shows, all our podcasts. That's pretty unique as far as I know... So Changelog.com is a monolithic, three-tier application. There's a frontend, a backend and a database. It's single instance, for various reasons. Episode ten has all the details. And I'm wondering, if we were to start using chaos engineering practices, which from what I'm hearing, they're mostly targeted towards microservices; I think that's where they shine... But what chaos engineering practices could we use for our application, just to see how resilient it is?
96
+
97
+ **Karthik Satchitanand:** I think chaos engineering is as applicable and important for monolithic applications as they are for microservices. Sure, I think its adoption has been increased because of all this paradigm shift to microservices, and the fact that you have more possible failure points; the surface area for failures is much more with microservices... But that's not to say that it cannot be applied in principle to monolithic applications.
98
+
99
+ \[28:01\] In spite of being a monolith, there are still some dependencies that you would have... Let's say infrastructural dependencies. We talked about databases being used as part of the stack; it's very much possible that the disks become slow, your writes become very slow, it's possible that you have space getting filled up, you don't have space anymore to write things. How are you going to behave as an application that's probably very read-intensive when you are having some problems - do you still have enough in place to keep the users happy until you are able to recover your systems manually?
100
+
101
+ So this is something that you would still check, even if you were running a monolithic application. And that's true for a lot of other infrastructure components as well. When you do chaos engineering, there are two ways of deriving the scenarios to get started with chaos. One approach is a completely explorative approach; you take a look at the system, you identify "These are the things that could go wrong", and then you start going out and doing those controlled failures and noticing how your system behaves.
102
+
103
+ The other way of deriving scenarios is to look for data, historic data of what has gone wrong before, and what is the most problematic area. How many times did I have to grow my volume? How many times did I have to increase the CPU cores on my system? When there was a lot of interest, a lot of reads, a lot of traffic, what was the component that I needed to be most careful about, which displayed - not erroneous characteristics, but characteristics that you would not identify as optimal behavior. And then you go ahead and derive the scenario from there and go ahead and do it.
104
+
105
+ So that pattern is common for both monolithic, as well as microservice applications... But the general concept of chaos engineering still applies here, too. It's just that the failures here might be more tied to the infrastructure, rather than something that you would think of in the case of a microservices world, where the dependencies and co-services that you are running along with your main business app offer just as much possibility of failures as the hosting infrastructure, I would say.
106
+
107
+ **Gerhard Lazu:** So what tools could we use to do all those things? Is there a tool that you would recommend that we pick up and try simulating these scenarios, or faults, whatever you wanna call them?
108
+
109
+ **Uma Mukkara:** Yeah, you were asking two creators of LitmusChaos project what they would use... Of course, we both recommend --
110
+
111
+ **Gerhard Lazu:** Maybe not LitmusChaos...? \[laughter\] It can happen... Unlikely, but...
112
+
113
+ **Uma Mukkara:** If you want to run into real chaos in chaos engineering don't use Litmus, but if you want to stay organized in chaos engineering, you might choose Litmus.
114
+
115
+ **Gerhard Lazu:** Okay.
116
+
117
+ **Uma Mukkara:** Yeah, the idea of Litmus Chaos is to make sure that we provide a platform, not just an experiment. As I mentioned earlier, chaos engineering is real engineering. You go through managing the experiments, you're managing the steady-state hypothesis logic, and you keep changing it. You're not happy with what you did the last time. So how do you manage it? In your system there are multiple versions of it.
118
+
119
+ \[31:42\] We needed a platform in our prior work life, that's when we looked for some good chaos engineering tools and started writing Litmus, and it became more widely adopted. I would say you can start with Litmus, and Litmus is just a chaos engineering platform... But for you at Changelog I would also recommend best practices as -- first of all, you need to play the role of a person outside the system; try to discover, don't assume too much about how your system works. Start with \[unintelligible 00:32:15.27\] apply the logic of "Something will break when I do something crazy." That's what is \[unintelligible 00:32:24.09\] and then that brings some good unknowns hopefully the first day, and then it shakes up your co-workers, and your management, and then you start putting a better, holistic approach.
120
+
121
+ Then I would also say as a prerequisite you need to have good metrics, or a dashboard, even before you apply chaos engineering. Do you have a good monitoring system? Because when you actually do apply, it breaks, but then you need to be able to take care of observing what has gone wrong and "What do I do now?"
122
+
123
+ So it all goes hand in hand - discovery, reliability metrics, an observability system - all those things need to be in place, and then start with probably the backend, the infrastructure. And even though it's monolithic, you can still apply some service-level chaos, such as pushing too much traffic into one of the services that you use less, which can cause stress on the overall system... And then there is a lot that you can do proactively in your pre-production environment. Try to start there and learn, and then go from there either right, or left, into production. You may find something that you can improve in your pipeline, so you can go on and introduce these failures into your pipeline. That might be a good place for the overall efficiency of your DevOps.
124
+
125
+ **Gerhard Lazu:** So when it comes to starting with the Litmus platform, I imagine we would need to have an account on this platform? It's not something that we would run, is that right? Litmus is a Kubernetes application. It's not SaaS. So it's a Kubernetes application, completely open source; it's a CNCF project. You take and install Litmus on Kubernetes. It's \[unintelligible 00:34:29.07\] you can log in and you connect wherever you want to run chaos. From there you connect to the chaos center, and you can then pick up a chaos experiment or a fault, and direct that fault towards your target or to the agent.
126
+
127
+ You can run it on your existing Kubernetes, or spin up a small Kubernetes cluster to run Litmus. It is quite thin, but it is a Kubernetes distributed application. You can scale it up. If your hundreds of QA SREs are using a single instance of Litmus, it can scale up easily.
128
+
129
+ **Gerhard Lazu:** Do you install it as a Helm chart? Is there like an operator that comes with its own CRDs? How does it get installed on Kubernetes?
130
+
131
+ **Karthik Satchitanand:** Yes, you're right about that. You do have a Helm chart that helps to install the control plane of Litmus. As part of the setup process of the control plane you would go ahead and set up the account. The account is most probably about the users, who's going to do the chaos...
132
+
133
+ The next part is about the agent infrastructure. This is the environment you're going to actually do the experiments in. This can be the same place where you have the control plane installed. Uma mentioned that Litmus runs as a Kubernetes app... Or you could have other clusters in your fleet, where you want to do chaos, so you would be registering those into the portal. And that is where the operators and CRDs get installed, as part of the agent setup, and you can then go ahead and construct scenarios, or workflows as we call them, in the Litmus center, the chaos center, and then they get executed inside a cluster, where the agent takes responsibility for applying the manifests, the custom resources, and then reconciling them, and then actually doing the fault injection and steady-state validation process.
134
+
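To give a feel for what those custom resources look like once the agent is installed, here is a hedged sketch of a ChaosEngine that runs a pod-delete experiment against an app labelled `app=demo-app`; the names, namespace and service account are hypothetical, and the exact schema should be checked against the Litmus documentation for the version in use:

```yaml
# Hypothetical ChaosEngine: inject pod-delete faults into a target Deployment.
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: demo-app-chaos
  namespace: demo
spec:
  appinfo:
    appns: demo
    applabel: app=demo-app
    appkind: deployment
  chaosServiceAccount: pod-delete-sa   # RBAC for the experiment (assumed to exist)
  engineState: active
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "60"              # run the fault injection for 60 seconds
            - name: CHAOS_INTERVAL
              value: "10"              # delete a pod every 10 seconds
```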
135
+ **Gerhard Lazu:** So I've seen somewhere - I don't remember where - Argo CD being somehow related to this as well... What is that relationship between Litmus and Argo CD?
136
+
137
+ **Karthik Satchitanand:** We use Argo workflows as part of the chaos scenario construction. We chose Argo workflows for its flexibility to order or sequence faults in different ways \[unintelligible 00:37:10.13\] We've instrumented the Argo workflows with some Litmus intelligence. The containers that carry out the steps within a workflow understand it as API. So they are \[unintelligible 00:37:23.01\]
138
+
139
+ The Argo CD part - I'm sure you might have heard of it more around the GitOps support that Litmus offers.
140
+
141
+ When we built Litmus, one of the things we wanted to do was somehow weave the chaos engineering aspects into the standard GitOps flow that people are beginning to use... And people are trying to use GitOps to ensure the applications and infrastructure maintain a single source of truth, that is Git, and ensure that what is in their deployment environments matches what is in their source. And there are controllers, also called GitOps operators, which ensure that your applications are upgraded whenever they change in the source, etc.
142
+
143
+ Oftentimes we see that people who've upgraded applications in their environment \[unintelligible 00:38:17.22\] or they have deployed new infrastructure want to verify its sanity. And one means of verifying sanity is by performing some chaos experiments, along with a specific expectation of what's going to happen. And they already have a hypothesis in mind that they burn into the experiment definition. The experiment has the ability to specify validation intent within it.
144
+
145
+ People want to do those sanity checks whenever they've upgraded their infrastructure or upgraded their applications, and it was done in a manual way, so we wanted to automate that and provide these users with a means to run chaos experiments automatically when something is changed via the GitOps operators. That's when we brought about the event tracker functionality within Litmus. It runs as a separate microservice in your cluster.
146
+
147
+ So whenever Argo CD upgrades your application on the cluster, you have the option of triggering a predefined or a presubscribed chaos workflow against it. That happens via a call to the chaos center from the event tracker service running in your cluster.
148
+
149
+ So that is the relation that we have with Argo CD, and it is true for other popular GitOps tools as well. It could be Flux, or Keel, or you might have built in something with your own -- you might have written some tooling by yourself, using Helm... So you have the option of triggering Litmus experiments or workflows as sanity checks post a standard GitOps operation.
150
+
151
+ \[39:59\] There's another angle to it... Litmus also supports GitOps for the chaos artifacts. When you construct chaos scenarios, these workflow manifests can also be stored in Git or committed into Git automatically. When you make changes to the chaos workflows in your source, you will have those changes reflect on your chaos center as well. So that is another aspect of our way of looking at Litmus with GitOps.
152
+
153
+ **Gerhard Lazu:** Okay, that makes a lot of sense. I'm starting to form this mental model in my head of how all this fits together in our setup. I can start seeing the integration points... But what I'm wondering now, Uma, is if someone doesn't have Kubernetes, how would they start even using this?
154
+
155
+ **Uma Mukkara:** So when you talk about Litmus, you need Kubernetes to run the chaos center, where the control plane of chaos engineering is put together, where the SREs and developers interact with it, and where you interact with the chaos experiments that are stored on a hub, or on your private Git repository - all that is running as a Kubernetes application. So if you don't have a Kubernetes environment, and your chaos engineering need is for a non-Kubernetes environment, you just need to spin up a small Kubernetes cluster to host the LitmusChaos center, and then you can still create chaos scenarios or workflows or experiments towards your monolithic legacy applications or the regular infrastructure chaos \[unintelligible 00:41:47.27\] in a cloud, or on virtual machines, all that stuff. So Litmus does not work only for Kubernetes, it works for everyone... But we've built it as a cloud-native application for all the good reasons.
156
+
157
+ **Break**: \[42:04\]
158
+
159
+ **Gerhard Lazu:** So this is a very special topic for me... The reason why it's special is because I disagree with Kelsey Hightower about running databases on Kubernetes, and I learned it the hard way (again, pun intended), that if you run databases on Kubernetes, the database needs to be built for a distributed system where things come and go very quickly, failures are intermittent and they can take milliseconds... That can mess up replication. That's actually what happened in our case when we ran a PostgreSQL cluster on Kubernetes. We tried Crunchy Data, and we also tried the Zalando operator - so we tried both - and in both cases our replica fell behind. The write-ahead log just stopped replicating, and then the master (or the primary, shall I say) disk filled up, crashed, couldn't resume, couldn't restart, because the disk was full, the write-ahead log filled the disk, the replication got broken... And we couldn't promote the follower to be the leader, because it was too far behind. So we had downtime, we lost some data.
160
+
161
+ \[44:22\] So what do you think about running databases on Kubernetes, Uma? I know you have a bit of experience in this area, that's why I ask you first...
162
+
163
+ **Uma Mukkara:** Yes. Litmus \[unintelligible 00:44:29.13\] trying to fix bugs when you're trying to run databases on Kubernetes. So I kind of have an opinion that you cannot have an option of not running databases on Kubernetes forever. Five years ago that was not a requirement; two years ago people thought it's very, very difficult. Now I think there are mixed opinions; there are people running databases on Kubernetes, and there's a good, active community - the Data on Kubernetes community... Things are improving, and it is an evolving subject, and tools are coming in. Databases are also changing, and StatefulSets are the root elements within Kubernetes that are enabling distributed databases. But at the same time, there are storage elements that are being built or improved for running databases on Kubernetes.
164
+
165
+ For example, my earlier project, OpenEBS, which is still a popular subject in this space, has the concept of containerized storage. So you consider the storage container as an element that is built for running data on Kubernetes. And similarly, there is the concept of local PV that was started by Kubernetes itself, and there are solutions being built on top of local PV. What happens when \[unintelligible 00:46:09.28\] goes down.
166
+
167
+ So I would say there are people who are running data on Kubernetes. Because the infrastructure also becomes a microservice, you need to understand that there are more failures that can happen. Storage is not guaranteed to be running in one place. It can \[unintelligible 00:46:30.01\] and how do you actually handle that situation, handle your application to do that? So just assume that it's not just your pod that can go off and come back in. Assume that your storage also can go off and come back in. So it's a natural thing. That's why your applications just need to be aware of such scenarios and be built for more resilience. Chaos engineering, with chaos-first as a principle, can definitely help in all these things.
168
+
169
+ So hopefully in a few years from now there will be questions like "Oh, we thought data on Kubernetes is not \[unintelligible 00:47:09.29\] but I see many people running it." That is what will happen, in my opinion.
170
+
171
+ **Gerhard Lazu:** I would agree with that. I think there is a process of -- as you mentioned at the beginning of the interview, it's evolving, so I think the storage, the data layer is evolving on Kubernetes... But also the networking I think is evolving. Because in our case, the one that I mentioned earlier, it was networking - high network latency, very high packet loss - which just messed up the replication in PostgreSQL. So it wasn't specific to any operator, by the way. It wasn't Crunchy Data's fault, it wasn't Zalando's fault - the operators themselves, that's what I'm referring to - it was just that the network was messing up the PostgreSQL replication. That's what the problem was.
172
+
173
+ \[47:59\] In other cases, for the app itself, when we had a three-node Kubernetes cluster - by the way, we have a single-node one; I know it's very contentious, but guess what, it works better. Reality and practicality say it works better. The point is, when we had three nodes, those volumes that should have moved around, the PVs - they didn't. They were stuck, and they couldn't get unstuck from the node that went away. And because they remained in these stuck states and couldn't detach, they couldn't be reattached to other nodes. So that was a bit of a problem as well, which hit us. I know that things improved and they evolved, but I don't feel they are there yet, especially if the database was not built to be a distributed one from day one.
174
+
175
+ What I'm wondering now, Karthik, is if there is such a stateful system, which was built to be distributed from day one, it understands that and it's in its DNA - is it easier to run it on Kubernetes? I'm thinking maybe a message broker that was built to be distributed. It still has some state, but it works as a distributed system. What do you think? Does that make it easier?
176
+
177
+ **Karthik Satchitanand:** Yes, I think to a great degree it does, but the network problems are not going away anywhere, Gerhard. If you take a look at the Litmus Slack channel on the Kubernetes workspace, network latency and network loss are probably the most popular discussion items. People are trying those experiments much more than they're trying other experiments... So it is something that will continue to be there. As the network also evolves, with storage and all the other concepts in the cloud-native world, we will still have to address these network problems once in a while.
178
+
179
+ Message brokers are a good example, and in fact, when we're trying to build some illustration for application-specific chaos experiments with Litmus -- so application-specific chaos is a category of chaos experiments in which the experiment business logic has some native health checks that are specific to an app, and they also consist of certain faults that are applied to a particular app. These could be just the standard faults applied within an application context, or they could be some faults that are very native or very specific to a given application type.
180
+
181
+ The first application-specific experiment that we considered was Kafka. We have some communities that are actually trying out Litmus against Kafka. Strimzi is one of the Kafka providers whom we are speaking with and trying to collaborate with, trying to find good scenarios that can be used as part of this.
182
+
183
+ What is relevant in the message broker world is - let us say you have some very intelligent message broker that is capable of handling message queues, and doing failovers, and doing elections, and things like that... Because here also there is some amount of state involved, so you have storage at play, you have network at play, you have all these things.
184
+
185
+ One of the scenarios that we got started with was killing a partition leader, which could also be a controller broker. Then you have a series of things happening. You have reelections happening, you're basically trying to speak to Zookeeper, and you're trying to ensure that the failovers happen quick enough so the consumer's message timers are not breached, or session timers are not breached. These are things you would still want to find out... These are good experiments you would still do in these kinds of environments. And this differs from infra to infra. When we did this Kafka experiment on AWS, with the standard EBS-based storage class, with the AWS ENI, versus when we did it against GKE, with the GPT-based default storage class and \[unintelligible 00:52:04.04\] we saw there was a difference in the recovery times, and we saw that we needed to set different timeouts at the consumer \[unintelligible 00:52:12.08\]
186
+
187
+ \[52:16\] This experiment was a simple \[unintelligible 00:52:17.02\] You will have the need for chaos engineering in these environments as well, both to learn about the system, as well as prove some hypothesis that you might already have around timeouts and such settings that you have. So to come back to the earlier question, will data on Kubernetes become simpler when application architecture evolves to becoming distributed? Yes, I think that will definitely help... And I'm just trying to tie together chaos engineering there.
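+ 
+ As a rough illustration of measuring the recovery-time differences just described, here is a small probe sketch. The assumptions: a reachable Kafka cluster at localhost:9092, a "chaos-probe" topic, and the leader kill injected separately (e.g. by a chaos experiment); it simply records the longest gap between acknowledged sends while the fault is running:
+ 
+ ```python
+ # Keep producing while a partition leader is killed externally, and record
+ # the longest gap between successfully acknowledged sends as a recovery estimate.
+ import time
+ from kafka import KafkaProducer  # pip install kafka-python
+ 
+ producer = KafkaProducer(
+     bootstrap_servers="localhost:9092",  # assumption: local/test cluster
+     acks="all",
+     retries=0,  # fail fast so gaps show up instead of being hidden by retries
+ )
+ 
+ last_ok = time.time()
+ max_gap = 0.0
+ 
+ for i in range(300):  # probe for ~5 minutes at 1s intervals
+     try:
+         producer.send("chaos-probe", f"msg-{i}".encode()).get(timeout=5)
+         now = time.time()
+         max_gap = max(max_gap, now - last_ok)
+         last_ok = now
+     except Exception:
+         pass  # send failed while the leader election was still in progress
+     time.sleep(1)
+ 
+ print(f"Longest gap between acknowledged sends: {max_gap:.1f}s")
+ ```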
188
+
189
+ The adoption of data on Kubernetes can be accelerated, much in the way general Kubernetes \[unintelligible 00:52:53.20\] can be accelerated through chaos engineering. There are folks in the Litmus community, and I'm sure there are other projects speaking to such users as well, where they want to use Kubernetes in production, but they are not really confident in doing so. And they want to set up staging clusters, test out a lot of failures; failures on the Kubernetes control plane itself. You have your schedulers or controller managers going for a toss. You have Etcd going for a toss. And then you're also trying to see what happens when you kill pods.
190
+
191
+ The multi-attach error issue, as we typically like to call it, the volume not getting detached from one node, and therefore it doesn't get attached to the other node - this is something we've found very early in OpenEBS using the chaos experiment. And something has come up in the \[unintelligible 00:53:49.06\] to fix it today. OpenEBS has taken those fixes on board.
192
+
193
+ So I think both the application architecture, the data architecture becoming more distributed, as well as evolving chaos engineering practices will ensure that the adoption of databases into Kubernetes, as well as the general Kubernetes adoption itself will increase.
194
+
195
+ **Gerhard Lazu:** I think the most important point that resonates with me that you've made, Karthik, is around the different platforms having different recovery times. I think that's really powerful, because if you are, for example, as we are, running on Linode, you cannot apply the same approaches as someone running on GCP, or someone running on AWS. Infrastructure matters a lot. So then how do you know how it behaves in your case? Well, one solution would be to apply LitmusChaos and see how it behaves in practice. Also, not to mention that you do upgrades to your Kubernetes. Things improve most of the time, but sometimes they get worse. So how do you find out what got worse before rolling it into production and everything failing over - hopefully failing over, and other times just failing in unexpected ways? So how do you preempt some of that?
196
+
197
+ \[55:14\] And we all know that as much as we want to be confident from our staging experiments, the best failures happen in production. So as much as you can try to preempt things in staging, until you go into production, you won't see it. So maybe trying to generate production-level load, if it's possible? It's not always possible. That would help.
198
+
199
+ So as a listener, if I had to remember one thing from this conversation, what would that be, Uma?
200
+
201
+ **Uma Mukkara:** Yeah, so the last stage of reliability is to be able to confidently generate random triggers after you apply every change to your system in production. So you upgrade it, you have a good CI/CD system, and you apply the change in production, but also \[unintelligible 00:56:11.16\] to create a random fault because of that change. And if you are still confident, that means you are testing well. And it takes time. Chaos engineering, starting in some form in pre-production or in QA, it all helps reaching that goal, but always remember that unless you are doing that confidently, breaking things confidently, your systems are not reliable. You can just assume that they are reliable, but they're not. So use chaos engineering as a friend.
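+ 
+ A toy sketch of that idea - the fault names and the trigger function below are placeholders for illustration, not a real Litmus API - might look like a post-deploy hook that picks one approved fault at random:
+ 
+ ```python
+ # Toy post-deploy hook: after every production change, inject one random,
+ # pre-approved fault and let your monitoring tell you whether the change held up.
+ import random
+ 
+ APPROVED_FAULTS = ["pod-delete", "network-latency", "network-loss", "node-drain"]
+ 
+ def trigger_fault(fault: str) -> None:
+     # Placeholder: in practice this would call your chaos tooling or CLI.
+     print(f"Triggering chaos fault: {fault}")
+ 
+ def post_deploy_hook(change_id: str) -> None:
+     fault = random.choice(APPROVED_FAULTS)
+     print(f"Change {change_id} applied; running a random fault as a confidence check.")
+     trigger_fault(fault)
+ 
+ post_deploy_hook("deploy-2021-10-01-42")
+ ```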
202
+
203
+ **Gerhard Lazu:** What do you think, Karthik? Do you agree with that?
204
+
205
+ **Karthik Satchitanand:** Doing chaos engineering in production is the ultimate stage, the Nirvana of a very mature practice that you've set up in your organization... So start small, and explore a lot of failures, and establish a culture of continuous chaos at all levels. Chaos has become more democratic, more ubiquitous nowadays. The philosophy of chaos has sort of percolated to all \[unintelligible 00:57:20.00\] like Uma said earlier, from developers, to QA engineers, to SREs.
206
+
207
+ So go ahead and perform chaos, and then you will be able to confidently deploy your applications and sleep better at night.
208
+
209
+ **Gerhard Lazu:** Thank you very much, Karthik, thank you very much, Uma. That was a great thought to end on. A very powerful one. So yeah, go forth and break things, that's what we're saying... In production, by the way. Because until you do that in production, it's okay, but it's not great. So for a proper challenge, the ultimate frontier some call it, go in production and break things and see how resilient your system really is... Because those are the real failures that matter, or the only failures that matter. You can learn from all the others, but the production ones are special. So the sooner you get there and the sooner you start applying these practices, as Uma and Karthik described, the better off you will be, the more resilient your system will be. And the system doesn't mean your stack, it means the value that you deliver to the people that use your system.
210
+
211
+ Thank you, Uma, thank you, Karthik. It's been a pleasure. I hope to see you again soon.
212
+
213
+ **Uma Mukkara:** Thank you, Gerhard.
214
+
215
+ **Karthik Satchitanand:** Thank you.
Connecting your daily work to intent & vision_transcript.txt ADDED
@@ -0,0 +1,317 @@
1
+ **Gerhard Lazu:** One of the problems that I've been thinking about in recent months is the disconnect between what we should be doing - the vision, or the thing that feels right, but may be really hard to do because of the constraints of reality, which is an interesting concept - what we actually do, day in and day out, and what we say we do. So how do we connect all these three things? You know, when you report and you try to tie them all together, they kind of don't add up, and you don't know why. These are some of the thoughts I was having when I first learned about EchoesHQ, and I knew that I had to talk to Arnaud and find out more, because there was something there.
2
+
3
+ The demo was great, thank you very much, Arnaud. I really enjoyed it. And now, after a few months, I would like to know about your Y Combinator demo. How did that go?
4
+
5
+ **Arnaud Porterie:** Yeah, so the Y Combinator demo was probably the most stressful minute of my life. I'm not really sure if you're aware of the setup, but it's actually just a one-minute speech, with a single slide, in front of hundreds of investors. So trying to condense the message into 60 seconds, and into a single slide, is actually extremely hard. There are necessarily things you are not gonna say, and that are extremely important to you, but you will have to omit... And this is an interesting exercise in getting the message as clear as you can.
6
+
7
+ **Gerhard Lazu:** \[04:11\] I would love to see that slide, and I would also love to see those 60 seconds. Do you have them recorded somewhere?
8
+
9
+ **Arnaud Porterie:** No, we don't. It was not actually recorded.
10
+
11
+ **Gerhard Lazu:** Okay. So if I asked you to do those 60 seconds for us, would you be able to?
12
+
13
+ **Arnaud Porterie:** I don't think I have them in mind anymore, and to be fully honest, it's not exactly the same pitch you do to an investor as you would do to potential customers or people who understand the field... You know, it's deliberately devoid of any jargon or anything like this, which makes it a bit bland... So I don't know if it would be very valuable to the audience.
14
+
15
+ **Gerhard Lazu:** It would be interesting to me... So if you would like to do that, I would be very curious to hear what that even sounds like. As close as you can get it - the 60 seconds sounds perfect. No jargon. I'd love that.
16
+
17
+ **Arnaud Porterie:** So let me try and do it again then. The 60-second pitch at Y Combinator sounded something like this... Hi, I'm Arnaud, and I'm the founder of Echoes. I've been an engineering leader for over a decade. I worked as a deputy CTO at VeePee; before that, I was running the core team and the open source project at Docker. One problem I have consistently encountered throughout my experiences is how organizations are held back by a lack of alignment and a lack of focus.
18
+
19
+ By measuring how engineering work contributes to business goals, Echoes helps bridge the gap between technical and business teams. For example, Echoes can show you how much time we're currently spending on each of our OKRs, which helps engineering managers make the right decisions, and communicate the activity to their CEO or business partners. That's pretty much it.
20
+
21
+ **Gerhard Lazu:** Okay... So let me go and get my checkbook. Is a blank check okay? \[laughter\] Because that sounded really, really good. Okay, that was really good.
22
+
23
+ **Arnaud Porterie:** Thank you.
24
+
25
+ **Gerhard Lazu:** So I would love to see, again, that slide, and if we can, I would like to include it in the show notes, if that's okay with you, so that people, as they listen to those 60 seconds, can look at the slide (the one slide) and decide whether they wanna bring their checkbook.
26
+
27
+ So while doing a bit of research in Echoes - that was only recent - I noticed that you also appeared on Product Hunt.
28
+
29
+ **Arnaud Porterie:** Yes.
30
+
31
+ **Gerhard Lazu:** I think that came before Y Combinator.
32
+
33
+ **Arnaud Porterie:** Yeah, that was during the Y Combinator batch. It's one of the things that Y Combinator really pushes you to do, it's to be public and to launch as fast as you can. That includes Product Hunt, that includes Hacker News, any other medium you can think of. And I think it's one of the most important lessons I got out of Y Combinator, actually. For some reason, the instinct of a founder is often to be stealth, to work in the shadows until something is ready... One of the lessons of Y Combinator is that it's never gonna be ready and it's never gonna be good enough for you... So you have to put it out and to see what potential customers think of it, because that is the only signal that really matters. And it's very uncomfortable at first, but I totally agree that it was the right thing to do.
34
+
35
+ **Gerhard Lazu:** So in my mind, that translates to "Ship it, and let's see what happens."
36
+
37
+ **Arnaud Porterie:** Yes.
38
+
39
+ **Gerhard Lazu:** Literally, that.
40
+
41
+ **Arnaud Porterie:** It is exactly that. It is shipping early, shipping often, and - you know, I had this conversation very early with one of the partners at YC about "I don't feel that I'm necessarily ready to launch", and what he responded I think was absolutely great. He asked "Do you remember the day that Airbnb launched?" No, obviously, I don't remember. And nobody does. And it doesn't matter, because the truth is there's not a single day that Airbnb actually launched. They launched a hundred times during several years, and that's the reality of any business.
42
+
43
+ **Gerhard Lazu:** That is a really good story. I'm gonna ask Adam to see if he managed to talk to the founders of Airbnb, and if he didn't, I would love to hear that... Because you're right - getting it out there, getting the feedback will change everything, many times over. So with that in mind, how did Echoes change since you've launched at Y Combinator, which was 2-3 weeks ago? ...very recently. September 1st, I think.
44
+
45
+ **Arnaud Porterie:** \[08:17\] We launched during July, actually... And of course, yeah, a lot of things change, because you get to talk to a lot of customers, you get to talk to a lot of potential users, and to see -- I would say you can test the messages that work and the messages that don't, you can see what is truly hitting a nerve for your potential users, and what are the pains that they are experiencing the most... And of course, as a business, it also helps you refine what is the prototype customer, what is the ideal customer, who are the people that you should be talking to more etc.
46
+
47
+ In this regard, I would say the clear thing that I have learned is that the problem we're trying to solve is absolutely universal. Any company that has, I would say, above 20 engineers, is struggling with too much to do, and a lack of visibility, especially in companies where the leadership is not necessarily technical, or doesn't necessarily have an engineering background, and for whom engineering work easily boils down to a black box when it's not very clear what everybody is working on, why there are so many engineers etc.
48
+
49
+ **Gerhard Lazu:** So why is connecting our daily work as engineers to intent important? Why is that important?
50
+
51
+ **Arnaud Porterie:** Because I think that ultimately, the only thing that all employees within a company share is that they're putting in effort for the success of the company. And my strong belief is that if there was a way to express that simply, and to represent simply how our allocation of efforts actually maps to the company's success and to our different intents, it would create this shared understanding of what everybody is doing that would help with conversations.
52
+
53
+ I'm really passionate about the boundary between technical problems and human problems, and I often feel that some of the biggest technical achievements are really solutions that have changed the relationship between different groups.
54
+
55
+ With my background, working on Docker, there's been a lot of debate about what was the true value of Docker - is it the image format? Is it the container runtime? Is it the API? But when I look back at what we did at Docker, for me the biggest success and the biggest takeaway is that we got groups that beforehand were not sharing so much together, the dev on the one side and the ops on the other, and we brought this shared understanding and this common artifact that actually made all discussions way smoother... And I think this is really the key value of Docker. This is the same thing that we're trying to reproduce with Echoes, with a whole different group, which is engineering on one hand, and I would say business stakeholders on the other... And yeah, that's creating this shared understanding and then seeing how we are collectively going in the right direction.
56
+
57
+ **Gerhard Lazu:** So let me see if I understood this correctly... What you're saying is that getting people to share more in a safe way, in both directions, from top to bottom and from bottom to top, is the key to success for Echoes.
58
+
59
+ **Arnaud Porterie:** I think it's one of the keys of success, and it's doing so by sharing the right level of information. Because you know, the thing we have with our industry is that we're doing tech, and tech is easily measurable, in that there's a lot of things we can instrument; there's a lot of things we can measure and we can put numbers on a lot of things. This doesn't make all numbers good to share, for multiple reasons. The first one is that not everything is actionable to the audience.
60
+
61
+ One of the numbers that we have talked about a lot, and that obviously at this point I think everybody knows is a bad idea, but just to illustrate as an example, is the number of lines of code. Everybody knows at this point that sharing the number of lines of code written is not a good proxy for anything meaningful. But I think it's also true for tons of other things that we're measuring on a daily basis that are not necessarily actionable to a CTO, to a CFO, or even to your PM. And I think we have to be very thoughtful in drawing the line between the things that are the operational details of a team, and that should belong to the team and only to the team, and the things that are about our direction, our success, our impact, which are of course the things that we are being challenged on, and that we are ultimately responsible for.
62
+
63
+ **Gerhard Lazu:** \[12:24\] So this is really interesting, and I know that many people agree with that. When it comes to visualizing work, this is one of the things which people fear the most - fearing that they will be judged on the number of lines written - or deleted, because that happens as well - or PRs closed, PRs merged, issues answered and closed... But it doesn't say anything about the quality of the response; like, if you can do it in one line, why would you write ten lines? And if deleting more lines than you add makes the thing - whatever that may be, project, product, utility, tool, service - better, why wouldn't you do that?
64
+
65
+ So it's almost like the things that we are measuring - they are wrong, and maybe the way we think about measuring from a technical perspective just doesn't work at these levels. So what does?
66
+
67
+ **Arnaud Porterie:** Yeah, so that's really the bulk of the reflection behind Echoes, is to say that there's a lot of things we could be measuring, but the only thing that truly matters is whether we are actually creating value for our business in a sustainable way. And this is why we believe that measuring the intent is so important, because everything starts with the intent. You cannot say anything about success if you don't know why you were doing things in the first place. And interestingly, why we're doing things in the first place is not something that is easily captured in a structured way in any of the systems that we're using today... And that's the difference that we're trying to make as a starting point, on which we want to build a lot of things that are going to give visibility and actionable insights into the activity of the teams. Yeah, sorry, I lost my thought...
68
+
69
+ **Gerhard Lazu:** That's okay... I have a follow-up question, which maybe we'll juggle the important part... How do you measure intent?
70
+
71
+ **Arnaud Porterie:** Well, what we're doing with Echoes is to give you a central way to define what are the things that we are working toward as an organization; what are the things that you value and that you are working for as an organization. By default, what we suggest is just a set of three outcomes that are extremely generic and applicable to pretty much any organization out there, which is to say that as an engineering organization, there's probably things that you're doing to create customer value, things you're doing to mitigate risk, and things you're doing to maintain your own throughput, which is your own ability to deliver software repeatedly and safely in the foreseeable future.
72
+
73
+ And then, this is entirely up for customization by the user. It's up to you to figure out whether you wanna map your OKRs, whether you wanna capture the work that is planned vs. the work that is not planned... There's no good or bad answer here; it's really depending on what you value as an organization and what is it that you want to capture. Then we're gonna publish those outcomes to different systems and give an easy way for engineers to express why they're doing what they're doing in the first place.
74
+
75
+ So in the very simplest case, this is just gonna be materialized as labels in GitHub, and for engineers it means that they can add a label to their pull request with why they're doing things... Because we're using this central definition of intent, that is applicable across teams, regardless of the operational details of each and every team.
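+ 
+ As a rough sketch of what can be built on top of such labels (the repository name and label names below are made up for illustration; the GitHub search API calls are standard), one could count merged pull requests per intent label:
+ 
+ ```python
+ # Count merged PRs per "intent" label to approximate where engineering effort went.
+ import requests
+ 
+ REPO = "acme/widgets"  # hypothetical repository
+ LABELS = ["intent: customer-value", "intent: risk-mitigation", "intent: throughput"]
+ 
+ for label in LABELS:
+     query = f'repo:{REPO} is:pr is:merged label:"{label}" merged:>=2021-09-01'
+     resp = requests.get(
+         "https://api.github.com/search/issues",
+         params={"q": query},
+         headers={"Accept": "application/vnd.github.v3+json"},
+     )
+     resp.raise_for_status()
+     print(f"{label}: {resp.json()['total_count']} merged PRs")
+ ```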
76
+
77
+ There's a trend today that I think is here to stay, which is that individual teams are getting more and more autonomous, and they want flexibility and autonomy in the tools they use, and in the way they operate on a day to day basis. The reality at this point is that most organizations that are beyond 50 engineers - they have a diversity of tools and processes; you're gonna have one team using JIRA, you're gonna have one team using Linear, and you're gonna have one team using GitHub issues, and that's perfectly fine. But of course, this is a challenge for management, because then how do you get a consolidated view on what is everybody working on, and on the activity? That's what we're trying to overcome by having the central definition of intent, and then giving an easy way for engineers to express the intent behind work.
78
+
79
+ \[16:06\] And then, regardless of the operational details, the operational diversity of the teams, you keep having a central view of exactly in which direction we're going and what is it that we're executing against.
80
+
81
+ **Gerhard Lazu:** So capturing the Why - very important, and we'll come back to that. Is it true that an issue or a ticket or whatever the unit of work may be, is a step towards a specific Why? Is that how you think about it?
82
+
83
+ **Arnaud Porterie:** Yeah, absolutely.
84
+
85
+ **Gerhard Lazu:** Okay.
86
+
87
+ **Arnaud Porterie:** And you know, when we're talking about managing towards outcomes, it's usually open-ended, in that, you know, when you're trying to influence -- let's say an onboarding time, a transformation rate, a number of active users, whatever... There's no ending to this; it's not time-bound, it's not scoped. You may have an objective that goes with it to reach a certain threshold, but in general, those are pretty much the North Stars of the company, and it's not gonna change on a daily basis... And there's no end to it.
88
+
89
+ **Break:** \[17:04\]
90
+
91
+ **Gerhard Lazu:** So I see intent as a direction which we agree to head towards. In my mind, the destination is the vision. Something that we'll never reach, by the way, but we will do our best to get as close as possible to it. If this sounds familiar to you - you being the listener - it's because Simon Sinek talks about this a lot. He's one of my favorite authors... And I'm wondering, how does this definition or mental model about intent and vision match yours, Arnaud?
92
+
93
+ **Arnaud Porterie:** Oh, it's actually a great point. I'm also a big fan of Simon Sinek, and of course, one of his major books is called "Start with Why", which is exactly what we're trying to do. There is an interesting point about this in how we designed Echoes. When I started with this product, I got feedback from people who told me "You shouldn't ask engineers to label pull requests with why they're doing things. You should have some kind of machine learning thingy that is going to guess why we're doing the things we're doing, so that we don't require extra action." I think this is a mistake for two reasons. The first one is that I don't think we can infer from code why we're doing what we're doing. Ultimately, we're just gonna do guesswork on the file name, or the habits of the developer, and I don't think this is gonna be relevant, and I don't think this is gonna be a reflection of the reality.
94
+
95
+ \[19:53\] The other thing is - I think it's actually great to ask people why they're doing what they're doing in the first place, as long, of course, as it doesn't take them ten minutes, on an hourly basis, to respond to the question. But I think it's a good forcing function: on the managers' part, to make it clear enough that this is the set of objectives that matter, and to communicate it well enough that there's little ambiguity about why we're doing what we're doing... And then I think it's a good forcing function for engineers to also put their work into context and to remember that yeah, what they're doing is actually a part of a bigger whole, and that it's the big picture that matters.
96
+
97
+ I do strongly believe, actually, that the best engineers that I've worked with - and to be fair, most of the engineers - they deeply care about the impact that they are having on their business. And I think it's a good thing to make this relationship between their work and the business goals explicit, rather than implicit.
98
+
99
+ **Gerhard Lazu:** That makes a lot of sense to me... And I wanna ask you why do you do what you do. I'm pretty sure you've already answered it, but I wanna make it explicit. So why do you do what you do?
100
+
101
+ **Arnaud Porterie:** I do what I do because I'm passionate about developer empowerment. This is what drove me into engineering management in the first place, this is naturally what got me to work at Docker and to contribute to open source... And when I started thinking about what is the next thing that I could do for developer empowerment, for me the answer was not to try and build a developer productivity tool. For me, the answer is to try to help companies outside the one that I'm working at try and have a better context for an engineer to deliver their best work. And I strongly believe today that most organizations are dysfunctional in some way, and that engineers often deserve a better context... And I hope that I will help with Echoes in this way.
102
+
103
+ **Gerhard Lazu:** That is a great answer. It makes perfect sense to me, and I'm rooting for you; I really am. That was before I knew that you're a Simon Sinek fan; now that you're a Simon Sinek fan - oh, yes. Go, Arnaud, all the way! It's super-powerful, the Why.
104
+
105
+ Okay, so I'm thinking... What happens when you disagree about the Why? There's a fundamental difference of opinions about why we do what we do. What do you do in that case? What would you recommend?
106
+
107
+ **Arnaud Porterie:** I think this is an important point, because the truth is most of the organizations that we're talking to that started to adopt Echoes, the starting point to using it is to actually agree on capturing the Why and agree on why are we doing what we're doing. It's interestingly not always an easy discussion in all companies.
108
+
109
+ But I think here, again, is a great forcing function. If as a management team you cannot agree on the objectives that are actually important for the company, how do you expect that anybody is gonna execute properly? Then when there's a mismatch between the Why as it was defined in the company vision, and the people executing on that vision - well, naturally, this is the kind of mismatch that means there's a disconnect, and potentially a wrong fit, between the employees and the vision... But I think this is probably a way deeper problem that needs to be addressed, and that of course is not gonna be addressed by any tool.
110
+
111
+ We don't have the pretension that we're gonna help companies build a vision. What we promise is that we're gonna help them close the feedback loop between the vision they have and what is actually happening on the field with the teams, and how the efforts are actually being invested.
112
+
113
+ **Gerhard Lazu:** In my experience, usually when people disagree about the Why, it's when some number of people think about making money, and it's an example of Why which in my opinion is very poor... Because that's almost like a side effect, if you're doing it right. Or for example getting more followers. You shouldn't be focusing on getting more followers. That should be a by-product of you doing the right thing.
114
+
115
+ So in your experience, Arnaud, what are good examples of good Why's, the right Why's, which are the opposite of the examples that I just gave?
116
+
117
+ **Arnaud Porterie:** \[24:02\] As long as you're working for a company, the ultimate goal and the ultimate Why is always to make money. However, of course, there is hopefully a deeper mission and something that people believe in, and believe they're doing for the greater good.
118
+
119
+ On a daily basis, you're not gonna motivate anybody by telling them that what they're doing is strictly to create shareholder value. This is not a motivational engine for any employee I've ever known. Thankfully, there's way more proxies to this that are actually actionable by engineers, and that are actually related not so much to making money, than to customer satisfaction. And I think one of the trends that we're seeing right now that I think is extremely positive is with a product team organization model, with this idea that teams should be autonomous and should have a complete control over a vertical of the business, and complete control on one aspect of the business - that helps put engineers closer to their customers, which I think is absolutely key. And this is true whether the customer is external or internal. And I think this is the major Why. In my mind, what should be the most important engine to any engineering team is truly the empathy and the customer satisfaction.
120
+
121
+ **Gerhard Lazu:** I 100% agree with that. 100%. Honestly. If this was a bull's eye, you've hit it. So I can see how shipping, getting it out there, whatever "out there" means, by the way - whether it's a product, whether it's a service, whether it's a utility, it doesn't really matter... Getting it in front of customers, hopefully paying - that's what we all want; nobody wants to work for free, because things aren't free... So typically what happens is you get all types of layers between the engineers, the ones that are getting it out there, the actual code, the value, hopefully, the business value... It's not just code; there is some value being delivered. The customer is using it, and then the customer is feeding back what is not working, or what could be improved, or stuff like that.
122
+
123
+ So how can teams and companies resolve this tension between having the different layers, like sales, marketing, product management, or just product, engineering, and the customer? Because the bigger you are, the more complicated these interactions become, the slower the information flows, and the more difficult it is to do anything. It can take many months, many years, and it's just a function of the number of people involved. Too many actors, too much chatter... Everything moves slowly.
124
+
125
+ Autonomous teams - I can see how that would help, but what does an autonomous team mean? Does it mean like a group of ten people, one of them being a salesperson, a manager, a product person, a designer? What does that mean? What does an autonomous team mean to you?
126
+
127
+ **Arnaud Porterie:** In my mind, the autonomous team is really the definition from Marty Cagan, the author of "Inspired" and other great books... It's really about having full autonomy on the delivery of value to the customers. So in a sense, you're right that it's not all-encompassing. It's not a small company that has its own marketing, its own sales team, or its own customer support... But the truth is, it could. It could, in some way. The problem is that, of course, there are economies of scale, and there are organizational challenges that make this extremely hard.
128
+
129
+ Still, when we see modern engineering organizations adopt the product team model in its simplest form, which is to say that at least in a single team you're gonna find frontend expertise, backend expertise, QA expertise, potentially SRE expertise, a product owner etc., then you're gonna get something that at least is autonomous in how it delivers value to the customer. Then the feedback loop, of course, is gonna get more complex as the organizations get bigger... But I think this is really a matter of making sure that -- you talked about layers... Well, making sure that it's not so much layers, and that there are actually verticals within those layers, with easy access to the other functions within the organization.
130
+
131
+ \[28:08\] I've never had the chance to work at one of the highly scalable engineering organizations such as Amazon or Microsoft or others, but they seem to have figured this out, in a way.
132
+
133
+ **Gerhard Lazu:** Okay, okay... I know that many good books came out of different orgs, on different topics, including management... But it always comes down to the fact that what people report on or what they paint - it's usually not the reality. The reality is slightly more skewed, and it's very contextual. Maybe for this person it was, but for the company as a whole - not really. And we all have good intent, but then the reality kicks in, all the reporting, and you realize you actually can't talk to customers. In some companies you just can't. Unless they have a problem, you can't talk to them... Because why would you? You know, the customer doesn't pay you... And this is -- we talked about this with Ian Miell - it's just basically how money flows. In big companies, money flows differently. And the money flow kind of rules what needs to happen; you have budgeting, allocations that are different, and so on and so forth. So things are really complicated.
134
+
135
+ But I do like that Echoes is trying to simplify some of this. It's going to maybe drive some uncomfortable truths and get people to have some uncomfortable discussions, and make them realize what is important, and more importantly, why those things are important. And as you mentioned, making money - well, that is almost like breathing... Like, sure, you have to breathe... But what else? What else is there?
136
+
137
+ This reminds me -- I'm not sure whether you've read this book, "Drive" by Daniel Pink.
138
+
139
+ **Arnaud Porterie:** Nope.
140
+
141
+ **Gerhard Lazu:** I started reading it, I haven't finished it, but he touches on this aspect of what drives us - hence, Drive. Money? Yes. I mean, there's a couple of basic needs, and some maybe a bit more sophisticated, depending on where you are in life, and what your reality looks like... But there always has to be more. What is the thing that I'm working towards? Can I see it, first of all? Do I agree with it? And then the intent, we're going that direction, and how do we measure whether we're making progress in the right direction. I think that's where Echoes comes in. That's the way I understand it.
142
+
143
+ **Arnaud Porterie:** Yeah, absolutely. That's correct. And to your point also about how money flows, and the complexity of the relationship with the business - what we're seeing also today is that the most successful companies don't make a distinction between business and tech, and are starting to understand that actually this is the same thing more and more, as companies become software-powered. And this is also what we're trying to do - trying to get those two groups closer, helping... You know, I hate this word, but this mythical digital transformation that is all about bringing those two groups together, but never actually succeeds in doing so... Why? Because usually, it's somebody pointing a finger toward tech and saying they have to change.
144
+
145
+ But no, collectively we have to change. It's not tech that has to change. And that's also what we're trying to get to with Echoes - trying to get both engineers to take a step towards business by making sure that their work is well connected with the intents of the company, and separately, helping the business take a step toward engineering by understanding and also having empathy with their day-to-day, the reality of their work, the complexity of keeping the lights on besides the 10,000 other projects that we have to do... And yeah, again, creating this shared understanding that hopefully will make everybody come closer and make better decisions together as a group.
146
+
147
+ **Gerhard Lazu:** That is a good one. I can see how intent is very closely related to communication. You can have the best of intents, but if you don't communicate it, or if you don't communicate it well, it doesn't really matter, or it matters very little. So how may we structure information differently, for different layers? Because as information travels, it has to be more condensed, or more compressed, so that it makes sense... Because lines of code, or small changes because of X, Y and Z mean very little to our CTOs or CEOs, especially the more layers there are.
148
+
149
+ \[32:04\] So how may we structure this information so it's more efficient as it travels up to the top... And then obviously, the inverse is true - how do we make it clearer? Because it's fuzzy at the top. The intent is meant to be vague on purpose, so that people can fill in the dots. You don't wanna dictate, "Now we have to do this." We're going in that direction, and how do we make it happen - well, it's up to us all to define that.
150
+
151
+ So what are your thoughts about communicating this information, structuring it in a way that makes sense, and then expanding it or compressing it based on which layer it's at?
152
+
153
+ **Arnaud Porterie:** Yeah, I think there's inspiration to take from the OKR model, and basically accepting the fact that organizations kind of have a fractal structure, where you can consistently zoom in and see more details appearing. But that doesn't change the fact that you can always zoom out and abstract away the details to see the bigger picture... And that's basically the approach that we're following - to say that ultimately, this is a deep tree of proxies that contributes to company success, from this individual feature that we're shipping, contributing to that bigger picture about the segment that we're trying to please, contributing to the fact that we're gonna have a better retention, contributing to the fact that we're gonna make more revenue at the end of the month. But ultimately, it's really these levels of zoom that you have to look into, and make sure that you capture each of those proxy metrics and each of those actionable things that we're working on in our day-to-day.
154
+
155
+ **Break:** \[33:27\]
156
+
157
+ **Gerhard Lazu:** How did your experience with Docker influence Echoes?
158
+
159
+ **Arnaud Porterie:** A lot, in a ton of different ways. The first thing is that Docker was obviously about developer experience... And it made me realize a lot about how to build products developers love... And also about building a community, and creating enthusiasm about tech in a way that hopefully was positive, was human, and was friendly. I'm very proud, I think we've built a very welcoming and friendly community with the Docker project.
160
+
161
+ \[36:01\] The other thing is I was running the core team and the open source project for many years, and of course, the activity on the project was so high, and the number of maintainers comparatively was so low... We needed some tooling. And this is where I started building some analytics and some dashboards about the activity, about the health of the open source project, purely to help us manage the flow. At the peak of the project, we probably were, I would say, 20 maintainers working on the project, with more than 400 contributors on a weekly basis. So the asymmetry is really extremely high... And you know, the tools that we're using on a daily basis, such as GitHub, they're just not built for this, which is totally fair... But that means that there was a gap we needed to fill, and this is what really got me into building reports, getting data about how the activity was evolving, and how we could be better as a group... And it's, of course, a very strong inspiration behind Echoes.
162
+
163
+ **Gerhard Lazu:** So those tools that you've built - I imagine they were proprietary, internal to Docker, that you never made public. Is that right? That's my assumption.
164
+
165
+ **Arnaud Porterie:** No. Actually, one of them was open source... And it's probably still on GitHub, but of course, I'm not maintaining it anymore... But you know, to run an open source community, you wanna look at things that are significantly different than for running a company.
166
+
167
+ **Gerhard Lazu:** Okay.
168
+
169
+ **Arnaud Porterie:** In the case of an open source community you care about things such as being welcoming to new contributors, you care about things like fairness... For example, making sure that every contributor has somehow an equal chance of getting their pull request merged within the project, regardless of the fact that they are members of the broader community, or a friend company, or an employee of Docker itself. This was the kind of things that we were looking for to make sure that we were actually fair, and that you were not advantaged in any way by your employer, your position or your reputation.
170
+
171
+ **Gerhard Lazu:** And how did you do that? That sounds like a hard problem, by the way.
172
+
173
+ **Arnaud Porterie:** It is a hard problem, but it's super-interesting, and it's a lot about just measuring, for example, how many review comments we're gonna get on a pull request, how fast it's gonna get merged, what is the likelihood of a pull request getting merged based on its size, and making sure that, for different groups of a community, it's not actually biased toward any particular group, especially not the employees, not the maintainers, not the broader contributors etc.
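+ 
+ A toy version of that kind of fairness check (the records below are made up purely for illustration; real data would come from the GitHub API) could compare merge rate and time-to-merge across contributor groups:
+ 
+ ```python
+ # Compare merge rate and median time-to-merge between contributor groups.
+ from statistics import median
+ 
+ prs = [  # made-up records purely for illustration
+     {"group": "employee",  "merged": True,  "hours_to_close": 18},
+     {"group": "community", "merged": True,  "hours_to_close": 60},
+     {"group": "community", "merged": False, "hours_to_close": 200},
+     {"group": "employee",  "merged": True,  "hours_to_close": 30},
+     {"group": "community", "merged": True,  "hours_to_close": 45},
+ ]
+ 
+ for group in ("employee", "community"):
+     subset = [p for p in prs if p["group"] == group]
+     merged = [p for p in subset if p["merged"]]
+     rate = len(merged) / len(subset)
+     med = median(p["hours_to_close"] for p in merged)
+     print(f"{group}: merge rate {rate:.0%}, median hours to merge {med}")
+ ```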
174
+
175
+ **Gerhard Lazu:** That is really interesting. So how does this differ from a company, from a product which is commercial?
176
+
177
+ **Arnaud Porterie:** The biggest difference is that the intent of an open source project cannot be easily captured. A company has a vision, a company has a mission, and ultimately, of course, a company's goal is to make money. An open source project, depending of course on the nature of the governance, doesn't have this kind of shape. It is typically gonna have a vision, it is typically gonna have a purpose, but even when it does, it is possible that the community will pull it in a different direction. And then, depending on the governance model, depending whether you have a BDFL, depending whether you have a company backing the project, you may have some constraint with regards to what is gonna be considered acceptable or not for the project. But the reality is that you cannot really plan for what direction it's gonna take. And even more important than this, with the people coming in on the project and contributing to the project, you don't necessarily know what are their intents; you don't know whether they're contributing as a hobby. You don't know whether they're contributing as passionate users that are excited about the project. Or perhaps they're contributing because there's a commercial interest from their employer on this particular project, and they have a roadmap or an idea in mind that is not necessarily transparent... And that's also fine; this is what open source is about.
178
+
179
+ **Gerhard Lazu:** Okay. Fascinating. How do you reconcile a company that is built on open source, has an open source project, and also has a commercial product which is built on top of the open source project? How do you reconcile that? How did Docker reconcile that, by the way? ...which I know is a very hard question.
180
+
181
+ **Arnaud Porterie:** \[40:08\] That's the question that everybody is trying to answer, basically... It is an extremely tricky question. Did Docker figure it out? I'm afraid to say it didn't, because the truth is the commercial success of Docker didn't match the community success of the project, and the industry-wide adoption of the project. I wish I knew the answer, but I don't think there is a clear one. A lot of it is about making sure that, as a company, your intent in doing open source is very clear, and that you're doing it for a very deliberate reason, not for the sake of saying that you are open source. You have to be very clear, whether you're trying to build a marketing channel, whether you're trying to build a community, whether perhaps it's also just a requirement of your segment, because all the alternatives are open source, and it wouldn't make sense commercially to try and go to market with a closed source product. And you know, that's all questions that are very business-specific.
182
+
183
+ To be fully transparent, at this point, when I'm discussing with startup founders and people who consider having bits of their product being open source, I tend to potentially push back more to ask "Are you really sure you need this? Are you really sure you know why you're doing it?" Because doing it for the wrong reason can be extremely detrimental to the business. And that's not to say that I'm not pro open source, of course. I'm very much pro open source. But I think it has to be done for the right reason, and it has to be understood that it's a significant challenge, and it's a significant challenge to do right.
184
+
185
+ **Gerhard Lazu:** So is Echoes open source?
186
+
187
+ **Arnaud Porterie:** No, in our case there's literally no good reason to make it open source.
188
+
189
+ **Gerhard Lazu:** Interesting. Okay... Of course, this is a deliberate decision, based on what you've just said... Do you imagine yourselves going open source? Is that even an option that you may want to go back to? Or are you pretty set that it's just going to be closed source?
190
+
191
+ **Arnaud Porterie:** No, nothing is set... Especially not for a company at our stage. You never know, but again, we would need a good reason for having bits of the product be open source... And even when it is, it's never gonna be 100% open source. Again, you need to have a commercial strategy around this, and it begs the question of where do you draw the line, what is the commercial value of the product, where does the project make sense, and being clear about this distinction. At this point, I don't see a reason why we should... But you know, time will tell whether --
192
+
193
+ **Gerhard Lazu:** Things change all the time, yeah. Of course. You always learn something new, then it stops making sense, and then you just do the right thing, whatever that may be. Okay.
194
+
195
+ **Arnaud Porterie:** Yeah. In the case of Docker, Docker would not be as successful as it is today if it hadn't been open source in the first place, obviously. What I think is extremely interesting with Docker - I think the open source model of distribution worked so well that we all got caught up in how successful it became, and how fast it became successful.
196
+
197
+ There was a situation where the project was so successful that companies would call to ask how they could buy, but we didn't have anything yet to sell... And that is the whole paradox about this thing.
198
+
199
+ **Gerhard Lazu:** Yeah, that's an interesting one. So I know that there is this InfoWorld article written by Scott Carey titled "How Docker broke in half." And you wrote "Docker was a life-changing experience for me, and I wish things turned out differently." Is this something that you'd like to expand on?
200
+
201
+ **Arnaud Porterie:** It ties back to what I was answering before - of course, I would have hoped that the commercial success of Docker would have matched the success in terms of the impact it had on the industry. What can I say on this...? There's so much emotional aspect to the Docker story that it's always complicated to be very clear about my thoughts there... But it was a fascinating experience, because we were really caught in the middle of a tornado where the project was massively successful, but you have to imagine that internally this was a startup that was extremely young. We had literally 50 engineers when I joined, we were getting pulled in every direction...
202
+
203
+ \[44:08\] Just to give you an example, the week that I joined Docker was my first week in the U.S. I moved from Europe to San Francisco to join Docker. On my first day I was told "We're leaving for Redmond tomorrow. We have to meet with Microsoft for a partnership."
204
+
205
+ **Gerhard Lazu:** Wow.
206
+
207
+ **Arnaud Porterie:** That was the unreal situation in which we worked...
208
+
209
+ **Gerhard Lazu:** Your first day... Wow.
210
+
211
+ **Arnaud Porterie:** Yeah, that was my first week - flying out to Redmond to talk to Microsoft employees about Docker... which I had literally joined two days ago. The whole thing was a tornado. And of course, you start thinking "Yeah, this might actually be a significant business. This might actually make a significant and durable impact on our industry..." Which it had. But the commercial success was not there, because we were probably not there fast enough compared to the adoption of the project... And yeah, of course, I wish that things had turned out differently, because it was a very intense human experience; probably one of the best of my career, because I don't see myself having this kind of impact again in my lifetime. I can hope for it, of course, but it's still a pretty unique opportunity.
212
+
213
+ It's too bad that it didn't get the commercial success it deserved. But that's what it is. And again, I also said that this is not the end of the story. The company still exists, there are still good people working in this company, and for whom I wish, of course, the best possible success. They have a better focus now than we had, also because somehow the hype is gone, and that is not necessarily a bad thing. That means that they can focus on their customers, without the heat of the spotlight, without the heat of getting all the industry attention, and all the competitors, and all the cloud providers being super-interested about what you're doing... So that might actually be still a good opportunity.
214
+
215
+ **Gerhard Lazu:** I know this is impossible for our listener to see, to experience. I'm going to try and do the best job I can of conveying this... But I could see the spark in your eye when you were talking about Docker. I could see a shift in the body language... And this is, again, impossible to capture. But I can tell that it meant a lot to you, and I can tell that there were some great moments, maybe some unique moments which were created, which will be impossible to redo... But it was special. It was special, and you cherish it as such, which is great to see.
216
+
217
+ I'm wondering, what would you have done differently? Do you wish you would have done something differently?
218
+
219
+ **Arnaud Porterie:** I don't wish I would have done anything differently. I've done my best, and that's absolutely the best I could do, and that's pretty much it. The only thing I could regret - but of course, that would have been a different world - is having joined Docker at a time where I had more experience, and maybe more weight in the organizations, to try and influence it in a different way. But you know, that's just wishful thinking, assuming that I would have done better, which is of course not proven, and that we'll never know.
220
+
221
+ I think at this point truly the only thing that I care about is hoping to make an impact on our industry with Echoes in the same way that we managed to make an impact on our industry with Docker. And I know that may seem like a stretch, given that we're talking about two wholly different topics, but the truth is I think the underlying motivation is not so far, and I think the underlying point is really the same when you look at it.
222
+
223
+ **Gerhard Lazu:** So what makes Echoes special, in your mind?
224
+
225
+ **Arnaud Porterie:** I think what makes it special is the fact that we're actually trying to do something that is virtuous for everyone - virtuous for companies and virtuous for the engineers. And I think this is really the key here - not all tools are positive. And you know, there's this motto that tools are not gonna define your culture; I don't actually really agree with this. I think that the tools you choose say a lot about your culture, and I deeply care about doing something that is positive, that is positive for everyone who is exposed to it. Of course, the engineers, but not only...
226
+
227
+ \[48:05\] And that I think is really why I'm putting my soul into this project, is to make sure that yes, we're actually gonna be able to do something that improves somehow the quality of life of our peers working in this industry that is fascinating, that is challenging. I'm really attached to all those people, who I know are extremely talented, and who sometimes deserve an organization that is better at letting them deliver value.
228
+
229
+ **Gerhard Lazu:** So how would you like the world to help you with Echoes? In what way would you like to receive that help, and what sort of help would you need right now to make Echoes a success?
230
+
231
+ **Arnaud Porterie:** We're super-early. We're at a point where any feedback is good to take. Clearly, what I would love is for people to check our website and send over any questions, feedback, doubts they may have. I'm of course happy to give as many demos as I can, and get people started on the product if they wanna give it a try... But yeah, I'm deeply convinced that we have built something special; we're extremely early on the journey, so it's of course extremely hard for me to tell if this is gonna be a success or not. What I can tell for sure is that, again, doing my best, putting my soul into this, trying to have a positive impact on our industry... And yeah, time will tell if I'm correct or not.
232
+
233
+ **Gerhard Lazu:** Those are very good reasons, and I'm sure that there are many like-minded people who would like to contribute to that vision in some shape or form. So that's what I'm thinking about.
234
+
235
+ Who would you recommend Echoes for? And by the way, it's Echoeshq.com, not Echoes.com. I checked. There's a great story behind that... Do you wanna share the story behind Echoes.com and Echoeshq.com?
236
+
237
+ **Arnaud Porterie:** I don't know about the story...
238
+
239
+ **Gerhard Lazu:** Okay, okay. So I did a bit of research, and I was typing Echoes.com... Apparently, Adam Stanley is the owner of that domain. And that domain was registered in early January 1995.
240
+
241
+ **Arnaud Porterie:** Yeah, I saw it was a pretty old website. At some point, if the company is successful, probably we'll try and buy the .com. This is not my priority right now.
242
+
243
+ **Gerhard Lazu:** Of course.
244
+
245
+ **Arnaud Porterie:** I haven't told you, by the way, about the reasoning behind the name, but... I'm a big Pink Floyd fan, so a lot of the projects that I've worked on are actually named after Pink Floyd songs or Pink Floyd albums. This is what brought the name Echoes. There's actually an Easter Egg in the logo that refers to Pink Floyd, but this one is pretty well hidden... So far, only one person has figured it out.
246
+
247
+ **Gerhard Lazu:** Wow, okay. Challenge accepted. Anyone else that's listening, if you wanna get into this... Okay. I like this game. I like riddles like these. Which is your favorite Pink Floyd song, by the way?
248
+
249
+ **Arnaud Porterie:** I think it's Dogs, but Echoes is not far behind. Naming the company Dogs would not have been great, so... Echoes was a better fit.
250
+
251
+ **Gerhard Lazu:** \[laughs\] Yeah, I would agree with that. Even though it's D and Docker, I can see the resemblance... But Echoes, I like it. And the signal, right? The signals that you're trying to send - I think it fits really well. That's fascinating, how things just kind of make sense in weird and wonderful ways... And you wonder if there's more than just a coincidence. I always do that - is there something more? Am I meant to do this? It just feels like so -- it just fits.
252
+
253
+ **Arnaud Porterie:** It feels the same in software. You know when things are on the right track. There is a magical thing, it's the foresight that just happens when pieces fit together, and you can tell that it all makes sense and you're actually on the right path.
254
+
255
+ **Gerhard Lazu:** That's something which I'm constantly looking for... Like, when does it make sense? And you know it. It's like a gut instinct. And when it feels right, and there's so many -- you have the signals, you have the echoes... But maybe you're not picking up on them, or you don't even know where to look. But we all know it when it happens, and we're trying to do it again and again... And sometimes it's elusive, but it's so worth going for that. Totally, totally behind that.
256
+
257
+ Who would you recommend Echoes for, by the way?
258
+
259
+ **Arnaud Porterie:** \[51:49\] Honestly, pretty much any engineering organization that I would say is above 20 engineers, and needs to have visibility into how they operate. I don't think there is a bigger discriminant behind this. What's very clear in our conversations is that we're not a good fit for engineering managers who are looking for surveillance. If you're looking to see if people are busy, that's not gonna help. We're not gonna show you if people are busy; we're actually not gonna show you anything about individuals. We're only looking at things on a team level.
260
+
261
+ I actually think that users of Echoes have to be comfortable about the fact that this is gonna show transparency about how engineering is operating, and also about the quality of the management itself. Because you know, it tells more about the quality of the management organization than it says about the quality of the engineers themselves... Which I think is well overdue.
262
+
263
+ **Gerhard Lazu:** That's the one thing which I really liked. Actually, one of the many things which I liked. First of all, Echoes never accesses the code.
264
+
265
+ **Arnaud Porterie:** Yes.
266
+
267
+ **Gerhard Lazu:** It can never count lines of code contributed or deleted, and that's done on purpose... And I think that's a great, great thing. The other great thing is about making this a shared field, where we all meet, and we can see how well the different orgs, groups, units interact between themselves. You're trying to capture the interactions, which is really valuable, and I haven't seen it done before. Performance reviews, and grading managers, and grading employees - that's totally not what you do. How well do the different parts interact? Very valuable.
268
+
269
+ And I like how you combine all the things, the Why, the What is happening, the day to day... You're literally solving, I think, a problem which I had, and that's why when I reached out to you, I thought "Well, this may be it. This just may be it."
270
+
271
+ So even though at Changelog there's just a handful of people, would you still recommend Echoes for small teams? Three, four, five people.
272
+
273
+ **Arnaud Porterie:** Yeah, of course.
274
+
275
+ **Gerhard Lazu:** Okay.
276
+
277
+ **Arnaud Porterie:** We're using Echoes internally, of course, to manage our own roadmap. You know, there's this thing that if you're very small, you are extremely resource-constrained, and that means that you have to be laser-focused. And one very easy way to be laser-focused is to challenge yourself and to measure how you're spending your time, to make sure that you're actually spending it on the right things. That's what we're doing internally at Echoes, that's what other small companies are doing with the product...
278
+
279
+ I would say it's somehow a different use case, because in this case you don't have the uncertainty that a larger group has about what everybody is doing. But still, you get this data-driven aspect of your work that helps you reflect on how you're truly allocating your efforts. And again, there's only 24 hours in a day, and we're all doing our best.
280
+
281
+ At the end of the day, the best thing you can do is just be deliberate about the things you do and the things you don't, and this is how we can help.
282
+
283
+ **Gerhard Lazu:** Intent. Coming back to that - intent, vision, impact. All those important things; very good ones. I don't know if this is possible, but I would love to see a screenshot of Echoes for Echoes. I would love to see that.
284
+
285
+ **Arnaud Porterie:** Yeah, I think we could do it. I don't think there's anything really hidden there.
286
+
287
+ **Gerhard Lazu:** Yes, please. That would be amazing - to see that, to have that visibility into how Echoes is using Echoes, to see what is important to you, what is the progress that you're making towards that... Because I remember the screenshot that you had; I think this was in -- actually, it was one of your tweets, photo 2, it's a recent one, I'll link it in the show notes. But to see the real thing one day, what it looks like - that would be fascinating.
288
+
289
+ So what comes next for Echoes? The next six months, what are like the big items on your list? I think list is a poor word, but you know what I mean.
290
+
291
+ **Arnaud Porterie:** I mean, there's literally an infinite number of things I have in mind. Again, we're trying to be focused, and that means we're trying to listen to our users more than we listen to ourselves.
292
+
293
+ The most important bits right now are things such as being able to tie outcomes to metrics. We've talked a lot about how we allocate our efforts and the Why behind work, but if you want to measure impact, then you have to tie this back to observable results. That's what we're working on right now, to make sure that you can bind those intents to actual measurable things, and observe whether your team is having an impact.
294
+
295
+ \[56:05\] Other things that we're working on are the very gritty, boring details of any early company; things like integrating with HR systems. Super-fancy, super-shiny... They're the thing that every engineer is dreaming of doing... But still, it has to be done, because this is what the life of a company is.
296
+
297
+ And then where we're gonna go from there in the future - again, tons of ideas; where the market is actually gonna take us remains to be seen, but we're collecting data that didn't exist anywhere else, which is why we're doing things in the first place, and I think there's a lot of potential in this in a variety of different contexts. And the future will tell if there's a market for it or not.
298
+
299
+ **Gerhard Lazu:** I'm really excited about Echoes itself - about the idea, about the person, now that I got to know you a little bit more... I can see so many similarities and so many challenges, and so many lessons learned... But also, I see the intent behind Echoes, and that attracts me. I wanna see what happens next. Six months from now, twelve months from now, where will Echoes be? Because the direction - it's amazing. I love it. It has all the right ingredients... So what will actually happen? What delights will we have from you and your team? That's what I'm looking forward to.
300
+
301
+ **Arnaud Porterie:** Well, me too.
302
+
303
+ **Gerhard Lazu:** I really want you to be part of Changelog. I would like to use Echoes within Changelog, so that we can gain visibility into how we do things, and what we do... Because as you mentioned, our time is super-constrained, and then my time is the most constrained on... I can count the hours that I can dedicate on Changelog infrastructure, on Changelog code in a month, so I have to be super-super focused. And what does that mean? What does my hour, when I spend one hour - what did I spend it on? And it's not like the units... It's whenever I did something, what did it contribute towards? Was it throughput, was it customer values, outcomes, whatever...? I forget the exact namings, but I'm sure that you'll help me define them, and there are good examples to understand what is important and why those things are important.
304
+
305
+ And as a listener, if I had to take away one thing from this conversation, what would you like them to take away?
306
+
307
+ **Arnaud Porterie:** I think the biggest takeaway for me, and the thing that I hope listeners will agree on is that - I think engineering management overall is still in its infancy. Our industry is young, we're still trying to figure out what are the recipes that work and the recipes that don't. The one thing we know for sure is that as engineers, we're working within companies to make them successful, and we care about having an impact. And this is way more important than those five minutes of developer productivity we can gain with this or that tool, or this or that thing.
308
+
309
+ And yeah, truly, I hope that the takeaway is really that the future of most businesses depends on us, and it's up to us now to make it more efficient, and more pleasant for our industry.
310
+
311
+ **Gerhard Lazu:** I'm really looking forward to that. I'm really looking forward to what you do next, Arnaud. Seriously.
312
+
313
+ **Arnaud Porterie:** Cool.
314
+
315
+ **Gerhard Lazu:** And on that note, thank you very much. It has been a pleasure, and I'm looking forward to next time. Thank you.
316
+
317
+ **Arnaud Porterie:** Thank you very much for having me.
Crossing the platform gap_transcript.txt ADDED
@@ -0,0 +1,379 @@
1
+ **Gerhard Lazu:** So today I'm joined by my favorite startup team: Chris Hedley, Colin Humphreys and Paula Kennedy. Welcome!
2
+
3
+ **Colin Humphreys:** That's a warm welcome, Gerhard. You must only know one startup team...
4
+
5
+ **Gerhard Lazu:** Well, you're not very far from the truth... \[laughter\] Because - why do I say "my favorite", right? So seven years ago I had a platform talk with Colin, which convinced me on the spot to join the team, to join the CloudCredo startup. And that's why I say you're my favorite startup team... Because in the last seven years I haven't known a better startup than CloudCredo.
6
+
7
+ **Colin Humphreys:** Oh, thank you, Gerhard. That's very kind... Can I say you are unequivocally my favorite podcast host?
8
+
9
+ **Gerhard Lazu:** And this is your first podcast, so yes... \[laughs\]
10
+
11
+ **Colin Humphreys:** You are absolutely my favorite. You are also my least favorite at the same time, because you are the only one I know... But yes, you are absolutely, unequivocally - like, canonical truth - you're my favorite one, Gerhard.
12
+
13
+ **Gerhard Lazu:** \[03:59\] Thank you, Colin. I appreciate that. Thank you. But I'm wondering, do you remember the platform talk that we had seven years ago, Colin? Do you still remember it?
14
+
15
+ **Colin Humphreys:** We were together for a long period of time, for those seven years, which in particular -- because let's be straight here, Gerhard, I talk a lot about platforms, to many people's great cost...
16
+
17
+ **Gerhard Lazu:** I know you do.
18
+
19
+ **Colin Humphreys:** ...in terms of time. But I do speak a lot about platforms. So which particular platform talk are you talking about?
20
+
21
+ **Gerhard Lazu:** The one that convinced me to join CloudCredo. This was the OpenCredo office, you met me the first time, and you were sharing the vision that you had about platforms, and why you thought Cloud Foundry at the time was amazing...
22
+
23
+ **Colin Humphreys:** I couldn't have been at the CloudCredo office, because we squatted in with --
24
+
25
+ **Paula Kennedy:** No, the OpenCredo one.
26
+
27
+ **Colin Humphreys:** Oh, the OpenCredo office. Yes, we definitely squatted there.
28
+
29
+ **Gerhard Lazu:** The OpenCredo office, yes.
30
+
31
+ **Colin Humphreys:** That was seven years ago, so that was arguably kind of a midpoint in a journey that we've all been on for a really long time... And it's been very interesting. I think, I could have had that self-awareness and that reflection; we know we're near the end of that journey. There's a huge swelling, a rise in platforms at the moment. Talk about platforms is going through the roof. Everyone thinking about "How can we enable applications teams to deliver more?" So I think - yes, we're talking about seven years ago, but I have been building platforms for 20 years odd now. I'm sure Chris, as well as Gerhard - we've all worked in this area for so long now, and Cloud Foundry was amazing. I have learned so, so much from it. I say "was amazing." It is amazing. It handles a particular set of needs incredibly well, and I've learned so, so much from it. It's gonna be interesting to think about, in the context of this conversation, where were we seven years ago. What had we learned that took us to that point seven years ago, and what did we learn in those seven years, what's the future trajectory, what's the next seven years... Will we be talking in seven years, Gerhard, about where we are now, and looking back on this podcast, and saying "Do you remember when we said that stuff about that thing? Do you remember when Kubernetes was a thing?" It will be interesting...
32
+
33
+ **Gerhard Lazu:** I can see that happening. I can definitely see that happening. I also am very glad that we're recording this conversation, so we can listen back to it seven years from now... And I wish we had recorded our conversation seven years ago. But what I do remember is a talk that you gave at the OpenCredo office. The talk was about your journey when it comes to deploying apps... And it starts with building a data center. That really got me, because you were saying "That's how not to do it. I've done it. And I wish others wouldn't repeat my mistake." So that really caught me.
34
+
35
+ Now, we'll come back to this, but you asked me to do something in that talk, which I haven't done, and I will do during this recording. And I'll explain it to the others a bit later. So seven years ago you asked me to do something; I didn't forget, I just couldn't do it at the time. It's coming; just be prepared. \[laughs\]
36
+
37
+ **Paula Kennedy:** So you've waited seven years to do this thing?
38
+
39
+ **Gerhard Lazu:** I think, yes. \[laughs\]
40
+
41
+ **Colin Humphreys:** That's impressive patience.
42
+
43
+ **Paula Kennedy:** Wow...
44
+
45
+ **Colin Humphreys:** I'm intensely curious about what it's gonna be... And I wonder if it's gonna be worth the wait.
46
+
47
+ **Gerhard Lazu:** It will be.
48
+
49
+ **Chris Hedley:** They do say revenge is best served cold. Seven years is a long time to cool off, so I'm looking forward to this.
50
+
51
+ **Colin Humphreys:** Seven years is absolutely zero. Zero degrees Kelvin revenge, so I wonder how this is gonna go. But Gerhard, you need to let us know when you do the thing that you promised seven years ago, and haven't been able to do in the intermediate period. I'm somewhat terrified now; slight sense of trepidation... But I'm looking forward to it.
52
+
53
+ **Gerhard Lazu:** Trust me, it will be great. It will be great.
54
+
55
+ **Colin Humphreys:** You mentioned the talk, and you mentioned how I started off the talks a while back, talking about projects I'd worked on the past... And I use that word "project" very specifically, where people were wasting huge amounts of money. And the particular one we're talking about here was a 12-million pound project in which I worked for three years with a number of people in the order of hundreds, to deliver a system that was canceled approximately a month after it was delivered, because the users hated it. And I think one of the biggest trends we've seen in the past ten years is really shifting from projects to products, and product thinking. And that's like a massive, massive shift in the industry. Thinking that things aren't just like once and done. This notion of continuous iteration, small batches, fast feedback, those kind of things.
56
+
57
+ \[08:13\] That product orientation, learning about your users, iterating towards their needs, and the "you write it, you run it" kind of mentality with the teams that deliver those products - that's been a change that swept through every level. And people commonly talk about that in terms of the application teams. "Oh, our application is no longer a project. This is gonna be a product, and we're gonna have a team that are there for the life of the application, while it's delivering value." But I would say the key thing that I've honed in on - and I know you have as well, the other people on the call - I know you've all honed in on this notion of platform-as-a-product, and taking that product methodology, taking that product thinking, the user centricity, taking that and bringing it to the platform layer. And that's kind of the key thing that our company stands for, and that we stand for, is taking that product-orientated thinking, and the entire team composition, you write it, you run it, the ethos, user-centric design, lean product management, extreme programming - that whole set of patterns, and bringing those patterns to life at the platform layer of the stack.
58
+
59
+ **Gerhard Lazu:** I'm really glad that you mentioned that, Colin, because that ties in really nicely with something that Paula did recently. I think the equivalent of "Let's build a data center" seven years ago today means "Let's build a platform." And two years later you still haven't shipped anything; you've just built a platform. So I can see history repeating itself. And Paula, you gave a talk about a month ago at the DevOps Enterprise Summit, is it?
60
+
61
+ **Paula Kennedy:** I did. Yes, DevOps Enterprise Summit, yes.
62
+
63
+ **Gerhard Lazu:** And the talk was about crossing the platform gap. Can you tell us about it?
64
+
65
+ **Paula Kennedy:** Yeah. So I was very lucky to have my talk selected. And what it was really about was - as Colin mentioned, it was very much about what we've seen in the last few years. And even though we have been talking about the challenges of teams trying to get from kind of the infrastructure layer to deploying applications - we've been talking about that for a long time, and DevOps was kind of supposed to help where things could get shifted from apps, and people could collaborate together. But what we've seen is that if the "you build it, you run it" mentality is what our teams are being asked to do, they're being asked to take on more and more things. And when they are trying to build particularly on something like Kubernetes, which has lots of pieces that have to be wired together, it means the gap to get from infrastructure to delivering the actual value is just getting more and more complicated, and there's more things to manage. And we're big fans of team topologies. I don't know if you've read that book; fantastic book. But they talk about cognitive load as being a big problem for those app teams, because they're being asked to manage more and more components in the stack, they're trying to juggle more and more things, and it just means that they can't focus on delivering the application. So that's what we describe as the platform gap.
66
+
67
+ There's a fantastic blog about it on our website, and I talked about it as well, as you mentioned... But it's basically like "How can we make it easier for teams to cross that platform gap?" And the thing I talked about at DevOps Enterprise Summit was there's two parts to it. One is about organization change, which team topologies handles really, really well. It's about having application teams able to focus on their core value, and then have a platform team that can provide the supporting platform, and then there's a kind of enabling teams and specialist teams that can also support... And it's a way of organizing to get fast flow across the business. That's really what team topologies is about.
68
+
69
+ And then the second part that I talked about specifically, which Colin mentioned, was platform as a product. So when you have your platform team in place, what should they be doing? What are the skills that they need? What are the things that they should focus on? Treating their platform as an internal product, making it useful, making it compelling, making it like the right platform for the app teams... And how do they go about it. That was what I talked about. It seemed to go well... It was kind of odd though; the experience of pre-recording the talk was quite interesting. The whole experience of doing that for the DevOps Enterprise Summit was something I hadn't done before. It was quite interesting. But it seemed to go well...
70
+
71
+ **Gerhard Lazu:** \[12:17\] So if it's pre-recorded, does it mean that I could watch the talk? Is it online?
72
+
73
+ **Paula Kennedy:** It is. So there's a whole video library... I think what you need to do is -- I think you can get two weeks free. You can sign up -- you sort of sign up for a membership and you get two weeks to watch as many talks as you want for free. So you could go watch it.
74
+
75
+ **Gerhard Lazu:** Right. Okay.
76
+
77
+ **Paula Kennedy:** What was interesting was I did so many takes of it... This is just a weird story, but - I did so many takes, because I wanted to get it perfect. And I think when you give a live talk, you have at it, and if you mess it up, it's done. When you're pre-recording, you're like "Oh, I messed that up", and you do it again. And I did multiple, multiple, multiple takes. By the time I nailed the perfect recording, it was 1 o'clock in the morning. But it was perfect. No mistakes, I said everything I wanted, awesome. And then when I watched it back, it was kind of low-energy... Because by the time I did it, it was 1 o'clock in the morning, so my energy level was quite flat...
78
+
79
+ **Gerhard Lazu:** Yeah, it makes sense...
80
+
81
+ **Paula Kennedy:** ...but yeah. I enjoyed it.
82
+
83
+ **Gerhard Lazu:** Okay. I know what you mean. I used to do things like that before I discovered video editing and editing my talks. That just changed my life. \[laughter\] So you're right... You have to also -- like, voice-over, that's amazing, especially for showing something... But that also takes time.
84
+
85
+ You're right, I think not having conferences in-person, and having talks pre-recorded, it just makes certain things difficult. And this definitely is one of them. But I'm sure it's better than giving the talk from the U.K, on a U.S. timezone, and be awake at 1 o'clock or 2 o'clock in the morning... So yeah, at least there's that. Okay.
86
+
87
+ Where do you stand when it comes to platforms, Chris? What is your perspective, and how do you see this space?
88
+
89
+ **Chris Hedley:** That's a very interesting question, Gerhard. I guess I used to be, if you go far enough back in time to my pre-CloudCredo days, I guess I was on the application teams. I was an application developer. I was writing business-facing applications to serve industries, the banking industry, the sports betting industries, government projects... And I was lucky enough to be in the U.S., working on a project with VMware at the time. This would have been circa 2010-2011, working with big banking clients out of the U.S., and VMware with them. And through my connections on the ground there with our clients, I got to see Cloud Foundry for the first time. Cloud Foundry was kind of the first on-premise platform as a product, you could call it that if you liked. It provided that PaaS-like experience.
90
+
91
+ I looked at Cloud Foundry, I saw an app being pushed into it, I saw cf push for the first time, I saw cf create-service and cf bind-service work for the first time... And this was a very, very early version of CF. I don't even think it'd been open sourced at this time. I think I looked at that platform and I said to myself - I probably said it out loud - "I will never work on a project ever again that doesn't use this technology or one like it." And from that moment onwards, I kind of got sucked into the CF ecosystem, I got sucked into the platform ecosystem... And actually, unfortunately, I never got to work on a project as an application developer that got to push code into a Cloud Foundry on a live project. From that point on, I only ended up working on Cloud Foundry itself, or on platform teams standing up Cloud Foundries for other developers... Because anybody who was interested in CF at that time kind of got sucked into the ecosystem. And on the back of that experience, CloudCredo came along, we became that kind of Cloud Foundry open source consultancy, very small, out of London, and then eventually we were acquired by Pivotal... So I just went on the journey of building the platforms, and that's kind of where it's led me.
92
+
93
+ But to answer your question directly, I think it's the force multiplier that I've observed great platforms can have on organizations and on development teams, to be able to build a platform and offer it to a set of users and just reduce the number of things that those users have to worry about, just to remove that kind of organizational friction, if you like, so people can get stuff done. I think it's that specifically that kind of keeps me motivated. I don't see the problem as being solved yet; there are still so many opportunities, there are so many organizations that need help... K8s has come along, it's fantastic... K8s is arguably a platform building technology, rather than a platform you can offer to end users in a kind of meaningful, consumable way... So the opportunity there to kind of continue the journey and try and build some abstraction in and around K8s to kind of continue to help and continue pushing that kind of platform movement forward is, again, the thing that continues to motivate me. And here we are, trying to do something about that, hopefully.
94
+
95
+ **Gerhard Lazu:** Yeah.
96
+
97
+ **Colin Humphreys:** I wonder if we should maybe all think about the elephant in the room... So for those people who are listening, four of us are all actually standing in one room and we all have one hand each on an elephant, and that elephant has Cloud Foundry written on it. So we've touched on it a few times... And it's worth actually covering, like - what did we learn from that journey? Where is our thinking now, and what have we taken on and moved on? Because I think it's really important to talk about that, given that we've all got a great degree of history working with Cloud Foundry.
98
+
99
+ I think, from my personal perspective, I think the thing that I really didn't understand enough at the time, and now I've grown to understand, is the notion of -- in fact, maybe it's sitting right in front of me... You have application teams, and you have platform teams. And we talk about those in terms of, in glowing terms, you write it, you run it. Prior to that, we would have development teams, and operations teams. We had dev and ops, and we said "Yeah, this doesn't work well", people are throwing things over the wall. So then we flipped that sideways on, to instead say "Your application teams now write their code and look after their code." And then low down the stack, the platform teams build the platform, develop the platform, and also operate the platform. So everyone's looking after their layer, as it were, and providing APIs to the layer up. You may have an infrastructure team, one or more platform teams, many application teams, and everyone's looking after their part. But the part that I think that we've learned it didn't work out so well is the notion that any one vendor or group of vendors can provide a platform that's fit for purpose in all organizations. Now, I put my hand up, I was 100% on board at Pivotal, building something called One Platform, which was gonna be the one platform to rule them all.
100
+
101
+ Now, this, in hindsight, was short-sighted, because we actually learned that the 80/20 rule really fit well. Nearly everyone was doing 80% of stuff that's kind of normal, and 20% that's differentiating. But over time, you start to talk to enough customers, and we spoke to -- you know, at Pivotal we had hundreds, like 350(ish), and we spent time with them, we learned about them... But the 20% that was different about each customer was different with each customer. So to develop a common one platform that would suit all of them - it became impossible. And what we'd actually done is violate the understanding we had about the world, in that your platform team shouldn't just be operating somebody else's platform. They're also developers of the platform. And to me, at the moment, everyone is fixated on app devs, developer experience, this whole set of things. And then you get this emerging set of patterns around app ops; everyone's talking about app ops, how do we make that work...
102
+
103
+ And then you've got vendors still trying to build the one platform to rule them all, and no one's addressing platform developers, or platform development responsibilities of a platform team. How do we build and curate a great platform? How do we develop that, how do we operate that, how do we monitor that, how do we measure that, how do we do all the responsibilities necessary to build a great platform. And I feel that nobody out there is addressing what it means to be a good platform developer, and to bring that set of responsibilities to a platform team. Everyone thinks platform teams just take off-the-shelf software and run it, and I think that's where Pivotal went wrong; I think that's where all the big vendors go wrong. No one's trying to help people develop great platforms. And that's where we come in. We at Syntasso have built a framework called Kratix which is all about helping you to build the platform relevant to your organization, and I think that's a really challenging set of concerns that no one's looking at. And the framework that we've built is about enabling you to build the framework that's for your organization, not saying to you "Here is a whole platform. Take it and use it in your organization." Because we've learned that platform, if handed to everybody, will not fit everybody's needs.
104
+
105
+ **Gerhard Lazu:** \[20:24\] I'm really glad that you mentioned that, for a couple of reasons. The primary one - and I think it's the only one which I'm going to mentioned - is that people want that one Kubernetes experience. They look at the cloud-native ecosystem and they say "This is too confusing. Give me the version that I need." That thing doesn't exist. And it doesn't exist because you need to know what is important to you.
106
+
107
+ So what are the principles that you're trying to convey in this platform, that you're trying to embed in this platform? So once you know what those principles are, with what is important to you - and this, by the way, is different across different industries, across even different teams.
108
+
109
+ So once we establish what those things are, how do you build that one platform, which by the way, it's only going to be your platform; I don't think anyone else will be able to use it... Maybe your competitors, but they're busy doing other things, by the way...
110
+
111
+ **Colin Humphreys:** Agreed, yeah.
112
+
113
+ **Gerhard Lazu:** So do you think about this differently, Paula?
114
+
115
+ **Paula Kennedy:** Well, it's interesting, because there's the Kelsey Hightower tweet, which is basically around everybody wants a PaaS; they just want to build it themselves. And I think that's where we've tried to learn the lessons from Cloud Foundry. People loved the cf push experience. They loved being able to write some code, have an idea -- like, the promise for Cloud Foundry was "Write some code in the morning... Have an idea, write some code, push it, Bob's your uncle, you've got a running application in production." People love that. And I think developers like it, and business owners like it, and customers like it... That's what people want. But, to what Colin said, the PaaS experience that they're looking for - everybody's actual platform as a service needs to fit them, needs to fit their bespoke needs. And Cloud Foundry tried to put all the wiring in a box, and say "Here's the box. Just use it." It just didn't fit. There were too many edge cases, and they were all different for different customers. And so it's kind of challenging.
116
+
117
+ I think people want the simplicity of a Cloud Foundry experience, but they want the composability of Kubernetes. They want to be able to wire together the thing they want. But wiring together the thing they want is really hard, so people are looking for abstractions. Maybe they're looking for the vendors, or the cloud providers, to say "Just give me everything that I need, and make it really easy for me to use it." But I think where we're trying to see our place in the market is we want to give people that opportunity to have a simpler experience, but they can build it themselves in an easier way; Syntasso is trying to really focus on "Platform team needs to build the platform for your business." It's the only way you're gonna get the right platform, is if you tailor it for your organization. And 80% of that you can get from the cloud-native landscape or from different pieces, but platform teams are gonna have to put it together.
118
+
119
+ We talked about cognitive load for the app teams - we're trying to reduce that by shifting things to the platform team. But where Syntasso is now trying to help is "How do we help the platform team?" Because the more stuff we pile on them, and the more pressure we put on them to say "You need to build the right platform, you need to choose the right pieces, and you need to wire them together, and you need to make sure it all works for those app developers who are really precious", who's gonna help the platform team? And that's where we're trying to focus... Because it's like, they need to be able to build the right platform, and give a PaaS experience to their customers.
120
+
121
+ **Gerhard Lazu:** That's actually a really good point. I really like how you're thinking about that, and I would love to hear how Chris is thinking about the how part. So that sounds amazing... How do you actually achieve that platform builder?
122
+
123
+ **Chris Hedley:** Yeah, it's fascinating. So just to extend on what Paula has just said - I think there's often an assumption that goes unsaid, that platform teams have one set of customers, i.e. the application developer teams... I think in our experiences we've realized they quite often don't. They have internal audit teams as a customer, they have internal security teams as customer, they have finance in the form of billing, tracking as customers, and I don't think we've seen any platform tooling or technologies or frameworks or whatever you wanna call them out there that has enabled the platform team to service all of those customers.
124
+
125
+ \[24:29\] So CF was brilliant at just serving the Twelve-Factor App use case; not so great if billing came along and said "Tell me how much that particular application is consuming in compute resource, so we know how much to charge the team." Not so great if security came along and requested certain runtime security scanning features to take place within the platform. All of that stuff got very difficult in the PaaSes that were out there. I think we've taken on those learnings that we've picked up through our seven years of experience in and around the CF world, Pivotal, VMware and CloudCredo, and we're trying to break that open a little bit, and we're trying to provide tooling and technologies that first of all allow the platform teams to provide great consumable APIs to their consumers, so people can get frictionless access to the software they need to build their own software, to serve their customers.
126
+
127
+ We're also trying to figure out abstractions that are meaningful to the platform team, so they can also service their other customers, so they can inject what they need at runtime into the software to make sure the billing box is ticked, the audit boxes are ticked, the continuous secure software supply chain events are all taking place when they should be. That the right monitoring stack is injected into the software that's required. We could go on all day listing the needs of a platform, and I think Kratix tries to encapsulate that learning and provide almost like a lifecycle -- not just the ability to provide the API, but to provide a lifecycle for the request of a piece of software, so that platform teams can add the custom things they need into a request for software. And it's specifically that that we're thinking about.
128
+
129
+ And then once that happens, how do you even distribute that software across the kind of infrastructure estate, so that users can start using it? I think once you've rolled all of those things up, that's quite a gnarly problem to have to grapple with.
130
+
131
+ **Break:** \[26:25\]
132
+
133
+ **Gerhard Lazu:** We mentioned Kratix and Syntasso a couple of times, and as you know me, Chris, I like my whats. "What is JSON?" That was a very interesting question that I used to put during interviews, when we used to interview at CloudCredo. So --
134
+
135
+ **Chris Hedley:** Has anybody managed to answer the question yet, Gerhard? \[laughter\]
136
+
137
+ **Gerhard Lazu:** Yes, they did, actually. People -- you know, they just don't get flustered in the moment; they just take it at what it is... They're just "What is JSON?" The acronym, what does it stand for. So what is Kratix, Chris?
138
+
139
+ **Chris Hedley:** First of all, I think the meaning of Kratix we should probably call out to the top there, just on the back of that JSON conversation. So all credit goes to Paula here...
140
+
141
+ **Paula Kennedy:** Or blame. One or the other...
142
+
143
+ **Chris Hedley:** We're all about git praise... Unless it's Colin, and then it's git blame.
144
+
145
+ **Gerhard Lazu:** Yeah. \[laughs\]
146
+
147
+ **Paula Kennedy:** So the name Kratix came from a Greek word... So there's a bit of a tie-in to Kubernetes being a Greek word, so that's how Syntasso -- I'm going through the whole naming. That's how Syntasso got its name. So Syntasso came from a Greek word, which means to compile things in an orderly fashion. When we were thinking about what we were trying to do, and the complicated cloud-native landscape, and how can we wire together a good platform experience - that's where Syntasso came from.
148
+
149
+ And then Kratix kind of -- we kept on the theme. Kratix comes from a Greek word which means to keep -- like, in Greek, I'm not gonna say it, because I can't speak Greek... But in the phrase "to keep a promise", the Greek word for "keep" is something that looks a bit like Kratix, and that's where we were like "Oh, let's make it sound a bit more technical", and therefore we came up with Kratix.
150
+
151
+ **Gerhard Lazu:** Okay.
152
+
153
+ **Chris Hedley:** I think that to keep the promise is kind of -- that leads us quite nicely into a little bit more of what Kratix actually is... I think we looked -- because we were doing a lot of our investigation into the state of the K8s ecosystem; we were looking at operators as a technology to build and distribute software. And operators are great, they do a tremendous job, but they are also somewhat limiting when you start looking at some customers' infrastructure, when they've got like one-to-many (maybe hundreds) K8s clusters in their data centers.
154
+
155
+ **Colin Humphreys:** The first problem I think Kratix is really trying to solve, in terms of "How do we do this?" - you have to imagine that most people we speak to live in a landscape where the infrastructure is many, many Kubernetes clusters. It's the first thing I wanna say; this is all about multiple Kubernetes clusters. We've seen this as a trend... I think most people starting off in Kubernetes would do the one big cluster pattern. That's very much how -- you know, Red Hat's OpenShift, that's where they started. They were like "Yeah, we're gonna do one big cluster, everything's gonna go in there", but then in order to make one big cluster work properly, you have to put guard rail after guard rail after guard rail on Kubernetes. Then you can't do Helm charts, you can't do the operations you wanna do, and it becomes a bit of a nightmare. So much fuss. And then governance and compliance come in, and you say "Hang on a minute... These things can't all share this cluster." Even if you were just gonna do dev, stage, prod, you end up with many Kubernetes clusters. Typically, customers we've worked with end up in the order of hundreds, if not thousands of Kubernetes clusters. This has been -- I wouldn't say exactly accelerated by GKE, AKS, EKS... You know, public cloud, freely available, clusters as cattle, which is somewhat unkind, but... Many, many clusters freely available.
156
+
157
+ So the landscape is infrastructure nowadays - let's just be straight and direct about this. Infrastructure nowadays is multiple Kubernetes clusters. That's your infra, what you're gonna do next. So you're trying to get from that, to building a meaningful platform API for your organization, and that's what you have to do. So in your own platform team, you have to get from Kubernetes clusters to a meaningful platform API. So this is where Kratix gets involved to help you make that happen. It's a framework, you lay it down firstly onto a platform API cluster, where you install it, with all its controllers, its CRDs, that kind of stuff goes down into the platform API cluster. And then you tell it about either some static worker clusters, or you tell it how to create new clusters, so then you have it in charge of your Kubernetes topology. The way in which it then starts to send out messages and give instructions to those worker clusters - we use the GitOps toolkit for that.
158
+
159
+ So effectively, when you deploy Kratix, you get a complete GitOps topology for free. Everything is kind of auditable, traceable etc. via your Git repository of choice, be that GitHub, or if you wanna actually use S3 as a repo... Whatever you wanna use, we use the GitOps toolkit for pushing these things out.
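(For listeners who want to picture the hand-off Colin describes, here is a minimal sketch. It is not Kratix's real code or schema - the paths, API group and kind below are invented - but it captures the idea: the platform side writes declarative documents into a per-worker location in a Git repo or S3 bucket, and Flux, the GitOps toolkit running on each worker cluster, applies whatever lands in its location.)

```python
# Illustrative sketch only -- not Kratix's real code or schema.
# The platform cluster writes declarative documents into a per-worker
# location; Flux on each worker reconciles whatever appears there.
import json
from pathlib import Path

STATE_STORE = Path("state-store")  # stands in for the Git repo or S3 bucket


def schedule_to_worker(cluster: str, document: dict) -> Path:
    """Write one declarative document into the given worker's directory."""
    target_dir = STATE_STORE / cluster
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / f"{document['metadata']['name']}.json"
    target.write_text(json.dumps(document, indent=2))
    return target


# Hypothetical document the platform might produce for worker-2:
java_server = {
    "apiVersion": "example.org/v1",   # made-up group/version
    "kind": "JavaAppServer",          # made-up kind
    "metadata": {"name": "payments-api"},
    "spec": {"size": "small"},
}
print(schedule_to_worker("worker-2", java_server))
```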
160
+
161
+ \[32:16\] So now when you've got the assets up, you have your platform cluster, you have your worker clusters, either static or dynamically created. So what you've got now is no platform API; you're just ready to build one. From there, we then have this concept which Paula was talking about, about promises, which is where the name Kratix comes from, to keep a promise. And your job on the platform team is to collaborate with the application teams, start thinking about their needs... We've talked about platform-as-a-product before, product thinking. Thinking about the needs of the application teams, and prioritizing those needs such that you build the most important promise first. And when we say a promise, that is to deliver something as a service, from your platform, to the customers of your platform. So imagine that your teams are all spending all their time configuring and monitoring and maintaining Java application servers.
162
+
163
+ The first thing you wanna do in your platform team is make Java application servers available, as a service, from your platform API. So you would build a promise for that; you would talk to them about the API they want... When they are building this application, what do they care about? Is it heap size, is it Java tunables? Do they want small, medium and large? What is it they care about? Build out that contract with them, and encode that in a CRD that forms a large part of the promise.
164
+
165
+ Then you take that and you add into that all of the needs that your business has, so the platform-level concerns, such as billing, metrics, monitoring etc. Those things are all encoded into the promise as a pipeline. You take that promise and you add it to Kratix, and then Kratix now is able to offer Java application servers as a service to those teams. And when they ask for one of those Java application servers, the pipeline will fire to take care of all the business needs that need to happen... And then the definition of a secure, compliant Java application server will get sent to one of the remote workers, and then that will be available for the application teams to consume.
166
+
167
+ And then you go back to them: "Okay, is that working well for you? Can we develop it? Are there other promises you need?" And then you iteratively and incrementally build out a platform as a product, as a series of promises in Kratix.
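(As a compact restatement of Colin's walkthrough - and only as a mental model expressed in plain Python data, not the real Kratix Promise schema; every field name here is invented - a promise bundles three things: the API the app teams agreed to, the pipeline that encodes the organization's concerns, and what ends up declared on a worker cluster.)

```python
# A rough mental model of the three parts of a promise, as described above.
# NOT the real Kratix Promise schema -- field names are invented.
java_app_server_promise = {
    # 1. The API agreed with the app teams (the CRD): what they may request.
    "api": {
        "kind": "JavaAppServer",
        "fields": ["name", "size", "heapSize", "javaVersion", "team"],
    },
    # 2. The pipeline that fires on every request, encoding business needs
    #    such as billing, monitoring and security scanning.
    "pipeline": ["register-in-billing", "attach-monitoring", "security-scan"],
    # 3. What gets declared onto a worker cluster once the pipeline has run.
    "worker_resources": ["java-app-server-deployment", "service"],
}


def handle_request(promise: dict, request: dict) -> dict:
    """Sketch of a request's lifecycle: run the pipeline, then schedule."""
    return {
        "request": request,
        "pipeline_ran": promise["pipeline"],
        "to_schedule_on_worker": promise["worker_resources"],
    }


print(handle_request(java_app_server_promise,
                     {"name": "orders", "size": "medium", "team": "payments"}))
```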
168
+
169
+ **Gerhard Lazu:** So just for me to understand this... If I was going to link the concept of a promise to something that I'm familiar with, I think I would choose a template. So we have templates of how things should look like, and you gave the example of Java applications. So what are the things that Java application developers care about, and then we encode them; we have some sane defaults, and maybe we have some sizing, and rather than having to worry about this every single time, across N clusters, as you mentioned, there'll be these promises which will have sane defaults, and it's super-simple to deploy your Java app. Is that the experience that you imagine, Chris? Is this what that looks like?
170
+
171
+ **Chris Hedley:** I think it's exactly that, yes. The promise is providing that abstraction for the platform teams to bring the complexity into the promise that they do not want to or need to expose to the end user. So an application developer team just asks for a small Java stack, whatever that means for them, and then the platform team can encapsulate what that means in reality, the kind of JVM sizing, the technologies that hang around outside the JVM to enable that team... And I think that that's exactly that. And then on top of that, that platform team can then also inject into that running software whatever other tools they need to enable that platform, be it security, compliance, audit, monitoring, you name it. And again, that's a set of complexities that the end-user team do not have to worry about. They know that when they ask for a piece of software, they get the software that they want, but they also know that the software that they are getting is compliant with the needs of the entire organization, and they can just get on with whatever it is that they are developing, without worrying about all of that other complexity.
172
+
173
+ **Gerhard Lazu:** \[36:12\] So Paula, if we were to link these concepts that we've talked about - the promises, the different teams, the application developers... If we were to link these -- actually, no. I'm thinking more about the Kubernetes primitives. So the promise is something that Kratix brings; but there's also all the operators; they still exist there. So how does this map to Team Topologies, the book that you mentioned, and what is left out? Because I know that the Kubernetes operators is not something that I think fits with team topologies, because it's just too much detail, and then everyone gets to do their -- but maybe I'm misunderstanding this.
174
+
175
+ **Paula Kennedy:** That's an excellent question. So for the concept of a promise - you're right in saying that it's essentially an abstraction above operators. We're not trying to get into the space of building our own operators, or writing good operators. Operators exist; that's already a space that people are in. What we are trying to do with the promise is an abstraction above operators, that allows - as Colin and Chris enumerated - the platform team to offer things as a service. And that's the link between Team Topologies and what we've been talking about. So Team Topologies -- as well as having the different team types, which I've mentioned... So the platform team, the application or stream-aligned teams, and the enabling and kind of sub-system teams.
176
+
177
+ They also talk about interaction modes. And the key ones that I talked about in my conference talk were collaboration, and then x-as-a-service, which Colin described briefly. So really, where we think about platform as a product, tying the whole thing together. When we think about platform as a product, your platform needs to be a product, an internal product that you think about; you think about it as customers, you think about it as product lifecycle, you treat it like a product. If step one of that is "Who are your customers?", you need to go and talk to them and you need to understand their user needs. And that's the collaboration part from Team Topologies. They're very clearly defined.
178
+
179
+ The first interaction mode - collaboration. In our world, when we think about Kratix, that looks like this kind of promise framework; you're gonna go talk to the app team, you're gonna figure out with them a custom resource definition, what things do you care about, what things do you wish to define? You agree on that in the collaboration mode, and then the next step is delivering this thing as a service. As Colin mentioned, it could be kind of a whole Java stack, it could be Jenkins as a service. It could be as big or as small as the needs of the team, and you only find out what those things are by that collaboration mode.
180
+
181
+ So you go talk to them, you define what they need, you define the custom resource definition, and then the platform team creates that in this promise abstraction, and then presents back to the application team "Hey, here's the five things that you care about every time you want to ask for a Java stack. So fill in these five things, and we will magic you up one, on-demand, whenever you need it."
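+ To make that concrete, here is a minimal sketch - purely hypothetical, and not Kratix's actual promise format - of the kind of narrow, Kubebuilder-style API a platform team might expose after that collaboration, where the "five things" an application team fills in become typed fields on a custom resource (all field names are illustrative assumptions):

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// JavaStackSpec captures the handful of fields the application team agreed
// they care about; everything else is encoded by the platform team.
type JavaStackSpec struct {
	TeamName        string `json:"teamName"`
	JavaVersion     string `json:"javaVersion"`     // e.g. "17"
	HeapSizeMi      int    `json:"heapSizeMi"`      // JVM sizing, chosen from a small menu
	ExpectedTraffic string `json:"expectedTraffic"` // "low" | "medium" | "high"
	Environment     string `json:"environment"`     // "dev" | "staging" | "prod"
}

// JavaStack is the custom resource an app team creates to request the stack.
type JavaStack struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec JavaStackSpec `json:"spec,omitempty"`
}
```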
182
+
183
+ That's the abstraction, and that's how we are thinking of tying all of these concepts together. Platform as a product, being able to talk to customers, collaboration, and then taking all of that collaboration, codifying it into something as a service. And then you deliver it as a service, and that's the product.
184
+
185
+ Another thing that Team Topologies mentions is this ongoing lightweight collaboration. Because as Colin mentioned, the difference between project and product. Another difference is projects get started, and then they get finished. Products are long-lived, ongoing, so for your platform you need to not just -- it's not building the platform and then it's finished. It's a product that needs maintenance, it needs looking after, it needs continuing to be fit for purpose. So this lightweight, ongoing collaboration that Team Topologies talks about is also an essential part.
186
+
187
+ \[40:00\] Are the promises still the right ones? Are they still meeting the team's needs? Are there new promises that they need? Do they need to end-of-life some promises? That whole product lifecycle that you have with a normal product applies to the platform. That was a long answer, sorry...
188
+
189
+ **Gerhard Lazu:** No, that was actually very good, because it helped me visualize all the interactions, all the teams, how they map to the promises, those technical components, whether that's at a technical layer... So that was very helpful for me, thank you.
190
+
191
+ And you mentioned something really important, because I know step one is always the easy one. Like, let's just get this up. So you get your platform cluster, you get some worker clusters, and you define some basic promises... And then what? Well, that's when actually the hard work starts, the collaboration that you mentioned. What about upgrading operators? What happens when those operators need to upgrade the resources which they manage? How does that actually work? Also, how do you test that the promises that you've defined or that you've changed, how will they interact with the promise that already exists out there? I don't know who wants to take this, because it's a really meaty question, and you can answer like a subset of it... But it's up for grabs.
192
+
193
+ **Colin Humphreys:** Sure, I'll take that. I'm feeling --
194
+
195
+ **Gerhard Lazu:** Confident. \[laughs\]
196
+
197
+ **Colin Humphreys:** No, confident isn't the right word. Some type of trepidation. This isn't the question I was waiting seven years for...
198
+
199
+ **Gerhard Lazu:** No. But it's coming. \[laughs\]
200
+
201
+ **Colin Humphreys:** I also wanna take a brief step back to something you asked there, Gerhard, that was super, super-interesting to me... You asked about templating. That's really interesting. But again, if this is a simple templating system, why are we not just using Helm? Why are we not just using any of the innumerable templating languages that are out there? Because there is obviously a lot more value than that. We can actually use Helm within the system, but it offers far more than that. Day two - which is actually the question you just asked - is really exciting for our customers, because you start thinking about "What happens if I don't just offer a Java app server? What happens if I offer multiple Java app servers, and the CI system to deploy to them, and all of the scanning and everything else?" So the app team just need to say "You know what - I'm gonna start working on a Java app", and then they get everything they need to make that happen. They get the whole, complete setup to make that come to life in a very meaningful way. So they're just saying "I'm working on an app and we expect to have this type of traffic", and that's what's in the CRD for them. And then when they ask for that, everything comes to life beneath. You have all the environments, the pipelines, everything that goes with it - it's all delivered for them, and it's all security scan compliant, it's registered in the right billing system... All of that complexity, all of that cognitive load they would previously need to be exposed to and feel the burden of - that's all now encoded in the promise. And they're getting something that is relevant to their organization.
202
+
203
+ Now, there's no off-the-shelf platform out there, either SaaS, or vendor software, that can get you that. You have to build that yourself. Your platform team do have to make that come to life. But when it comes to life, it unlocks the power of your application teams, because they're now getting everything that's there.
204
+
205
+ You then raised a great question as well, about "Well, that's great, because that sounds like a really useful experience for those organizations." But what happens day 2, day 3, day 100, day 1,000? How does that journey look?
206
+
207
+ Firstly, Paula said this specifically, and we said this to everybody that we have talked to about this... Effectively, we're taking high-level user requests, we're breaking it down into a series of documents via the pipeline and everything else, we're pushing out those documents via the GitOps pipelines to multiple servers. When they hit those multiple servers, our system makes sure that the operators will be there, and the operators themselves are kept up to date, so we push out those definitions. But if your operators that you choose to use in those promises aren't able to do upgrades, Kratix isn't gonna magically fix that for you.
208
+
209
+ As Paula mentioned, we are a level above operators in terms of the abstractions here. You need to create or choose off-the-shelf great operators, put out to the workers, so that when somebody asks for a Java app server, they get a Java app server. And also, when they try to upgrade a Java app server from one version of Java to the next version of Java, everything doesn't fall apart.
210
+
211
+ \[43:54\] Now, our promise would enable you to push down versioned documents to the workers, to say "You should now be in this state, you should now be in this state." And this is arguably the beauty of Kubernetes; it's a fantastic API server, and it's declarative and convergent. But the controllers you put in there and the operators you put in there need to be able to converge. If they can't, you've got a fairly big problem. That's not the problem we're tackling. We are basically saying "We get you the high-level APIs, declarable by your platform team, we get you the pipeline that encodes all of your business processes, we get you the ability to take those resources and push them out to a complex topology of Kubernetes servers, and keep all of that up to date." Everything southbound of the operator - the operator itself that's pushed out, and everything that happens after that - is down to the operator that you choose to put out there. And there are loads of frameworks for putting operators out there; we are compatible with all of them. Any operator will work in our system. But what we aren't saying is that our system will fix bad operators.
212
+
213
+ **Gerhard Lazu:** Okay. So Colin, you don't know what you're talking about, and I'm leaving this talk. That's what you asked me to do seven years ago. When you were giving the OpenCredo talk, you asked me to just say "Colin, you don't know what you're talking about", and just leave the talk, towards the end. Obviously, you know what you're talking about... \[laughs\]
214
+
215
+ **Colin Humphreys:** That's not the kind of thing I would say, by the way...
216
+
217
+ **Gerhard Lazu:** Yeah, it was meant to be like a riff... So that was all very accurate, Colin, and very well put, so thank you.
218
+
219
+ **Break:** \[45:18\]
220
+
221
+ **Gerhard Lazu:** One thing which you mentioned, Colin, that I really get, and where I start seeing how things are coming together, was the versioning that's built into the Kubernetes API. So I can see how you can have multiple versions of the same promise, at the same time, easily, because the platform - I'm doing air quotes, because Kubernetes is not a platform, but there are some primitives there that you can use and get really far with - supports that.
222
+
223
+ **Colin Humphreys:** Yeah, we've really seen a lot of value in that API. And as much as I publicly say Kubernetes is a waste of time - and it is a waste of time if you are on those application teams really trying to get to the end goal - if you're a platform team and you're trying to build a platform, it's absolutely stellar. For platform builders it's a superb, superb tool. I love it as an API server. I actually think the scheduling and the pods are almost irrelevant. As an API server with pluggable CRDs, and its dynamic nature - it's truly, truly superb, and I adore it for that reason.
224
+
225
+ \[48:14\] And that's why we, with a small company, have been able to build what I believe to be a really, really impactful, meaningful framework, Kratix, so quickly - it's because we're just leveraging the best of Kubernetes, platform as a product, and bringing it to most Kubernetes clusters. We've taken the best of Kubernetes, we've combined it with the best of GitOps, and we've produced this framework that I'm really confident is gonna have a huge impact.
226
+
227
+ **Gerhard Lazu:** I have seen this link in the past, when we were rocking on Cloud Foundry. And there was BOSH as well. Awesome piece of tech. I think the combination didn't quite work, and I'll get to that in a minute... because we had Cloud Foundry, which had a scheduler; BOSH, which was kind of doing a lot of the same stuff with agents, and how it was scheduling jobs, and the lifecycle of jobs and managing that. And then we had Concourse, which again had a scheduler. So we had three types of schedulers, slightly different, with templating languages, and their own rules, and their own lifecycle management... And then Kubernetes came along, which for me was the perfect combination of the three different types of schedulers, and it had some extras. So finally, we could unify those three things. And we have seen CI/CD systems like Concourse - Tekton, I'm thinking - built on the Kubernetes API, exposing the jobs, and the pipelines... And Argo CD as well. And I'm sure that Flux as well. And this is the intriguing part... Because I don't know anyone that is using Flux at the scale and for this purpose. So Flux, the way I understand it, is part of your GitOps toolkit, which is a core component of Kratix. And I'm really intrigued by why Flux, and not Argo CD. So what is in that Flux ecosystem that attracted you to it? Who can answer that?
228
+
229
+ **Chris Hedley:** I think we were looking at the problem of how do we get pieces of software deployed on K8s clusters that could be distributed across many different logically-discrete K8s clusters. And the GitOps toolkit has done a tremendous job of that. It has very powerful tools that allow it to listen to a message store, be it Git, or a bucket, or Docker, for example, and just pull down something when it sees a change, and it will apply that quite happily to the cluster that it's deployed on.
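+ For listeners trying to picture what Chris is describing: in Flux, the component that watches Git repositories, buckets and OCI artifacts is the source controller, and separate controllers apply what it fetches. Below is a rough, conceptual Go sketch of that watch-and-apply pattern; the interface and function names are invented for illustration, and this is not Flux's actual API:

```go
package gitopspattern

import (
	"context"
	"log"
	"time"
)

// Source is anything that can report its latest revision and hand over the
// manifests for it - a Git branch, an S3/GCS bucket, an OCI artifact, etc.
type Source interface {
	LatestRevision(ctx context.Context) (string, error)
	FetchManifests(ctx context.Context, revision string) ([][]byte, error)
}

// Applier applies rendered manifests to the local cluster.
type Applier interface {
	Apply(ctx context.Context, manifests [][]byte) error
}

// ReconcileLoop polls the source and applies anything new - the same shape of
// behaviour as a Concourse resource trigger feeding a job.
func ReconcileLoop(ctx context.Context, src Source, applier Applier, interval time.Duration) {
	var lastApplied string
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			rev, err := src.LatestRevision(ctx)
			if err != nil || rev == lastApplied {
				continue // nothing new, or a transient error - try again next tick
			}
			manifests, err := src.FetchManifests(ctx, rev)
			if err != nil {
				log.Printf("fetch %s: %v", rev, err)
				continue
			}
			if err := applier.Apply(ctx, manifests); err != nil {
				log.Printf("apply %s: %v", rev, err)
				continue
			}
			lastApplied = rev
		}
	}
}
```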
230
+
231
+ **Gerhard Lazu:** Do you mean like the Concourse resources that have triggers? Do you mean in that way?
232
+
233
+ **Chris Hedley:** That feels like a trick question, Gerhard. I'm not an expert on Concourse or Concourse's triggers, so can you explain back to me how you mean that, and maybe we can find a pattern there?
234
+
235
+ **Gerhard Lazu:** So you know how we had those resources, like for example GitHub repositories, and then a new version would trigger that resource? Then that trigger could be the input to a job, and you could have multiple inputs. So to me, when you describe this component of the flux - I don't know what exactly it's called - to me it sounds like that primitive resource which triggers based on a new version. And the version could be a Git SHA, or in S3 that's like a new version for an object... There's a version if you have, for example, a semver resource... You had all those triggers. And to me, this sounds very similar, in that you trigger on certain outside events. What is that component called in Flux, do you know?
236
+
237
+ **Chris Hedley:** I don't know what the component is called, but to go back and maybe answer your original question of why Flux or the GitOps toolkit versus a non-K8s-native technology... I think it is the K8s-native way that the GitOps toolkit has been engineered from the ground up. I think you mentioned earlier a whole suite of technologies. You mentioned BOSH, you mentioned Cloud Foundry and the Diego scheduler, you mentioned Concourse, which has its own scheduling technology built in... All of them have their own APIs, they have their own templating engines, and a way of getting software up and running in the way that you need to.
238
+
239
+ \[51:51\] I think K8s is genius, and I think Colin touched on this earlier - the thing that really sold Kubernetes to me was the custom resource definition pattern that they've come up with, and then sitting the controllers and operators behind that as an API, to then control the thing that you're trying to control. And I think the GitOps toolkit, Kratix itself, operators to some extent - they're all using that consistent API, and I think it's that leveler. As a platform developer, you're only having to learn one set of patterns, and those patterns are transferable across multiple different technologies. I think that's where the technology choices really come to the fore.
240
+
241
+ So if you learn that kind of CRD/operator pattern, those learnings are transferable. Whereas Concourse may well have the patterns already, it may be as powerful, but it's yet another learning curve, it's yet another technology that you have to orchestrate on top of K8s.
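+ As a rough illustration of the pattern Chris is describing - a CRD fronting an API, with a controller reconciling behind it - here is what a minimal controller-runtime reconciler looks like. The `Redis` resource and its API package are stand-ins invented for this sketch; only the controller-runtime calls themselves are real:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	// Hypothetical API package generated by Kubebuilder for a Redis CRD.
	examplev1 "example.com/redis-operator/api/v1alpha1"
)

// RedisReconciler converges the cluster towards the state declared in a Redis resource.
type RedisReconciler struct {
	client.Client
}

// Reconcile is called whenever a Redis resource (or something it owns) changes.
func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var redis examplev1.Redis
	if err := r.Get(ctx, req.NamespacedName, &redis); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Here the operator would create or update StatefulSets, Services, etc.
	// to match redis.Spec - the "convergence" Colin talks about.

	return ctrl.Result{}, nil
}

// SetupWithManager registers the reconciler for the Redis kind.
func (r *RedisReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&examplev1.Redis{}).
		Complete(r)
}
```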
242
+
243
+ **Gerhard Lazu:** Yeah. How do you test Kratix? I'm really interested in that. How do you test a platform builder? Do you just build many platforms? Do you use property-based testing? How does that even look like? I wouldn't know where to start... What do you do today? Let's do that - what do you do today? How do you test Kratix?
244
+
245
+ **Chris Hedley:** How do we test Kratix... So I can give you the very blunt, honest, business-focused answer to that, Gerhard... We're currently three people. We've spent the last nine months getting a business off the ground. So we've been coming up with a business narrative, looking at the problem space... Kratix was the technology that we built to tackle that. We think it's a great technology, we think it has huge amounts of power, but certain dials have been turned through the development of Kratix, and that continuous testing that I think you're hinting at is maybe not quite where it needs to be in Kratix right now. We've been focusing on other problems.
246
+
247
+ **Colin Humphreys:** Chris is confusing the last two questions there. So firstly, we do test Kratix; we have a set of sample promises. We have a Redis promise and a Postgres promise which we inject, testing it's all working, testing we're doing all the right things with those promises, managing the lifecycle of the promises. I think Chris was conflating that maybe with the Argo CD things you were touching on, and with, effectively, what we use for our CI server internally. Because right now, effectively, we have one pair, myself and Chris, working on the code. We code, we test on our laptops, and we commit. We don't have CI, because you don't need to integrate, because it's just stuff coming from the two of us. So that's maybe how those things come together.
248
+
249
+ I did want to touch on that as well - Argo CD... Like, why use Flux rather than Argo? Argo, I think, is more specific for applications in Kubernetes; it's a CD server. Whereas Flux is almost like an agent you put out in all of your remote clusters, specifically to pull from repos and stay up to date with those repos. Flux as a whole is very much fit for the purpose of what we're trying to do, and it's focused on that sole responsibility.
250
+
251
+ But I think when you said "Your GitOps toolkit" - I wanna be very clear here, it's not ours. I need to thank lots of people for their contributions. We're standing on the shoulders of giants here.
252
+
253
+ **Gerhard Lazu:** Of course.
254
+
255
+ **Colin Humphreys:** Thank you so much to the many people that contributed to the GitOps toolkit. We haven't put anything back in yet. We're a tiny company. I wanna say a huge, huge thanks to those people - for the GitOps toolkit, for Kubernetes itself. Everyone out there that's part of the ecosystem - a huge thank you to all of you, because you help companies like us be able to come to life and get value going quickly.
256
+
257
+ So as mentioned, we have a Ginkgo-based test suite for Kratix, which you can just try out; it's all there, github.com/syntasso/kratix. You can run the test suite. It does require some Kubernetes testing infrastructure around KinD, but it's all there for you to run, should you choose to run it.
258
+
259
+ We did actually start Kratix entirely test-driven from the outset, as a set of behaviors defined in a BDD-style syntax. So we very much started test-first, and that way of thinking about things. But I think -- it's actually fairly straightforward to test, I would say, because promises are not actually that clever, if that makes sense. They're not complex, is the way we look at it. We are taking Kubernetes, we are injecting CRDs and controllers into it, and we are asserting the behavior of them. So it really doesn't get too wildly complicated, but the power of that system, because you can inject something of that nature, is tremendous. So yeah, our testing setup - effectively Ginkgo plus KinD, that set of tools - comes together to give us a good feedback cycle.
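+ For a feel of what a Ginkgo/Gomega assertion against a real (KinD or similar) cluster looks like, here is a sketch of the general shape - not a copy of Kratix's suite. The promise object, the CRD name, and the suite-level client are assumptions made for illustration:

```go
package system_test

import (
	"context"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

var (
	ctx             = context.Background()
	k8sClient       client.Client              // initialised in BeforeSuite (not shown)
	promiseManifest *unstructured.Unstructured // the promise document, decoded in BeforeSuite (not shown)
)

var _ = Describe("installing a promise", func() {
	It("registers the promise's API (CRD) with the cluster", func() {
		Expect(k8sClient.Create(ctx, promiseManifest)).To(Succeed())

		Eventually(func() error {
			var crd apiextensionsv1.CustomResourceDefinition
			// Hypothetical name of the CRD the promise is expected to install.
			return k8sClient.Get(ctx, types.NamespacedName{Name: "redis.example.org"}, &crd)
		}, "60s", "1s").Should(Succeed(), "expected the promise to install its CRD")
	})
})
```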
260
+
261
+ \[56:07\] I think as we move towards being able to assert complex suites of upgrades, in terms of like "If this promise changes, then that promise changes", what happens in any interactions between them across multiple clusters... So our cost of testing is gonna go through the roof. So I get that. But right now we're not at that stage.
262
+
263
+ **Gerhard Lazu:** So first of all, I think I need to give a bit of background. We have worked for so many years together, that when Chris answers something, I understand what he's not saying... And I don't think the listeners are getting that same experience... And for that I have to apologize, that I can't convey that. Colin actually understood what Chris was not saying, so Colin said what Chris wasn't saying. So thank you, Colin, for that as well. It makes a big difference when a group of people like this comes together. The downside is that there's a lot being said not explicitly... So that's what happened here.
264
+
265
+ Yes, I looked at the code, I was thinking of Ginkgo as well; Colin, thank you very much for that. What I was thinking and what I was trying to hint to is the complexity of these types of tests. Because they're integration tests, right? Like, how does this CRD, when it's set up, actually behave in practice? Does it do what it's supposed to do? How do you test that? That to me sounds like an expensive test to run from the beginning. KinD makes it easier, in that you can run the whole Kubernetes in Docker, but still, it is an expensive test to run. It's not like you're unit-testing. And you can only get so far with that, because you're really generating CRDs... And how they interact with the Kubernetes API. So your primitives are already high-level and expensive. So you can't really simplify that. Or at least I wouldn't know how. So what you've done - that's exactly how I would approach it.
266
+
267
+ **Colin Humphreys:** Thank you. That's good to hear. It's a little slower than we'd like, so I think as we move on to -- as it gets more complex, we're gonna have to find ways of doing things. But thank you again to the Kubernetes community. There's some awesome tooling out there around just running like API server in memory. Problem is a lot of our stuff does require the controllers to run as well, but - I mean, we use Kubebuilder; another huge thanks to the Kubebuilder community. It's awesome. Kubebuilder v3 - absolutely loving it. It's really good.
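+ The "API server in memory" tooling Colin mentions is controller-runtime's envtest, which Kubebuilder scaffolds by default. A minimal sketch of how it is typically started in a test - the CRD path and surrounding test are illustrative assumptions:

```go
package controllers_test

import (
	"path/filepath"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func TestAgainstLocalAPIServer(t *testing.T) {
	// Start etcd + kube-apiserver locally, with our CRDs pre-installed.
	testEnv := &envtest.Environment{
		CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},
	}
	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("starting envtest: %v", err)
	}
	defer func() { _ = testEnv.Stop() }()

	k8sClient, err := client.New(cfg, client.Options{})
	if err != nil {
		t.Fatalf("creating client: %v", err)
	}
	_ = k8sClient // create/get custom resources and assert behaviour here

	// Note: envtest runs only the API server and etcd - controllers and
	// operators still have to be started separately, which is the caveat
	// Colin raises about needing the controllers to run as well.
}
```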
268
+
269
+ So I think the hard work of everyone else in the community is what's enabled us to go fast and keep ourselves sane, and do all the things we need to do. And again, testing - I think it's gonna get more complex. We've seen this for years, Gerhard; as the testing suite slows down and grows, you put effort into the testing suite, and you improve it, and you continuously tread that tightrope of investment into testing versus trying to keep things going fast. And if your speed slows down, you think "We need to invest a bit more in testing to get it back up again", and you tread that tightrope.
270
+
271
+ As Chris was saying earlier, right now we're so far biased towards going fast with a small team that we don't even have a CI system, because we run all the tests locally, as one pair, and then commit. So that is CI.
272
+
273
+ **Gerhard Lazu:** That makes sense. I remember doing that, and people looking at me saying "Are you crazy? Is there Jenkins running on a Mac?" Yes, I have a spare one. Why not. It's just me and two other people in the same office. We don't have a big team. So it just makes sense. Especially when you're taking things off the ground... I think, again, this is something that maybe people don't (or can't) appreciate, because we haven't done a good job at explaining it - is the company is you three. So you have a CEO, COO, and CTO, or VP of Eng... Again, why not CTO, Chris? Why aren't you a C-suite? Explain. \[laughs\]
274
+
275
+ **Chris Hedley:** Why am I not C-suite... I think the honest answer to that question, Gerhard, is if you form a company, you do it because you want to do the thing that you want to do in life. As you've probably gathered from this podcast, I'm not a great public-facing person. I'm not the person to stand on the rooftops and shout to the public all the time. I'm very much an internally-focused person. You know my strengths, we were together for a few years...
276
+
277
+ **Gerhard Lazu:** I do.
278
+
279
+ **Chris Hedley:** I enjoy working with individuals, I love running teams, I love getting heads-down into the code. And as the company grows and changes, that's the role I want to do. So you start a company to do the thing that you want to do...
280
+
281
+ **Gerhard Lazu:** Yeah. That makes a lot of sense, actually.
282
+
283
+ **Chris Hedley:** \[59:53\] ...not get pushed into a role that you're uncomfortable with and don't want to do. There's more to it than just the label, right? So that's the honest answer. That, and Colin won't let me.
284
+
285
+ **Gerhard Lazu:** \[laughs\] He's keeping CTO for someone else... Is that what it is? No, it's not that.
286
+
287
+ **Chris Hedley:** Keeping it for himself.
288
+
289
+ **Gerhard Lazu:** For himself. \[laughs\] CEO and CTO. I don't think that has happened before.
290
+
291
+ **Chris Hedley:** He knows he will be ousted as CEO eventually, and he's keeping CTO around.
292
+
293
+ **Gerhard Lazu:** His left bicep is CEO, and his right bicep is CTO. And they're massive. \[laughs\] Okay. So what's coming next for Kratix and for Syntasso in the next six months, for example? Do you have anything on the horizon? Growing the team, developing Kratix... What do the next six months look like for you?
294
+
295
+ **Paula Kennedy:** That's a great question. What does it look like...? So since we've been going - we are a small company, as I think Chris and Colin mentioned. Some might say that we have over-engineered some of our processes, because we have come from a background of Pivotal, a big product company, and VMware, a huge product company, to three people. But we are very focused on the OKR framework, we have our objectives and key results, we have our board meetings, we have our OKR progress meetings, we have our retrospectives...
296
+
297
+ **Gerhard Lazu:** I see where you're going with this, yes.
298
+
299
+ **Paula Kennedy:** You can see where I'm going?
300
+
301
+ **Gerhard Lazu:** I see, yes.
302
+
303
+ **Paula Kennedy:** We pour a lot of our learnings from the last seven years into our tiny three-person company... So we are regularly having objective and key result meetings to review, and we have regular board meetings to plan what's the next three months, what's the next three years. The interesting thing is that, you know, plans change; the plan is the plan, that's why the plan changes, as we like to say...
304
+
305
+ **Gerhard Lazu:** Yeah. You need to have it, but it will change, so don't worry about it. Yeah, I know what you mean.
306
+
307
+ **Paula Kennedy:** Yeah. It's quite open right now.
308
+
309
+ **Chris Hedley:** You hint at a lot of things that -- like, we would be lying if we said we weren't thinking about some of the things you've hinted at there, Gerhard. So we are three people, I think we've run that message home. As we start to work with more and more customers - and believe it or not, they're awesome - that's starting to constrain us even further. So as we work with customers, that means we have to potentially slow down on some of the product development side, for example... And that's something we're not comfortable with. So then you look to "Well, what levers can we pull to grow?" Which might mean bringing more people into the company to scale the engineering team, for example, or to scale the consultancy side. As we focus on those two things, perhaps that means we take our eye off marketing, and then all of a sudden you've got that problem.
310
+
311
+ So we're constantly reflecting on what our constraint is within the business, and we're constantly looking to address that. And I certainly speak for myself... I'm sure Paula and Colin won't object to this too much... We're probably not going to stay at three; as we start to work with customers and continue the product development, something's gonna have to scale somewhere over the coming months... And we are thinking about all of those options. That's the fun part of a company that's smaller. It's exciting, and there's always something new.
312
+
313
+ **Colin Humphreys:** What's been awesome, but also really scary, has been that the customers we've been talking to have said to us, effectively - we've had this directly from a few of them... What we've built here with Kratix - it's a system, it's not a tool. It's very easy to talk about small, sharp tools and to build these small, sharp tools, and many of our customers are really good at doing that... But then they have their organizational challenges, structural challenges, these kinds of things. Because Kratix encodes the opinions from Team Topologies and makes them real via software, our customers are taking Kratix and using it to perform a reverse Conway maneuver, where they're saying "Okay, this thing is gonna help me build a great platform team. I'm gonna help that platform team have great interactions with the application teams, and that's the setup I want in my organization. So I'm gonna help my platform team deploy and get value from Kratix, and load it with the promises we need in my company, and that will help my organization move towards the structure I want to have."
314
+
315
+ So they're saying to us "This is awesome. You have this system that will help my company become a better system." But that also then scared us, because we're like "That sounds great... And you're from this company with 10,000 people, and we're from this company with three." And therefore, we will be taking investment, we will be hiring people, we will be scaling up to meet that demand, because what we're building here is not just a small, sharp tool, as I say. This is about organizational change via people and software together, and that's non-trivial to deliver. But as I say, the three of us - we're trying to do it, we're gonna scale, we're gonna grow, we're gonna make it happen, hopefully. Fingers crossed.
316
+
317
+ \[01:04:23.18\] But yeah, so the path forward for us is very much continuing to work with those customers, continuing to build on the success of Kratix, taking and scaling Syntasso as a company, so we can build out around Kratix. If anyone out there that's listening would like to contribute to Kratix or to try it out, or to give us feedback... It's Apache 2 licensed; github.com/syntasso/kratix. Please do try it out, please do give us feedback, please feel free to contribute... Whatever you can do to help us out. We'd greatly, greatly appreciate it. Even if you just wanna try it in your org, and say "Actually, this wasn't for us. Here's why." That will be greatly appreciated by all of us here on the team.
318
+
319
+ **Gerhard Lazu:** And that is the Colin that seven years ago convinced me to join CloudCredo. You've just listened to him. Colin in like two minutes. That was it. That was great.
320
+
321
+ **Colin Humphreys:** Thank you. I have to do my best, because so far we've had Chris say we don't do testing...
322
+
323
+ **Gerhard Lazu:** \[laughs\] That's not true, by the way... As I said, the things --
324
+
325
+ **Colin Humphreys:** ...and Paula say that we've got way too much process.
326
+
327
+ **Paula Kennedy:** We love process. We love a bit of process.
328
+
329
+ **Colin Humphreys:** So we don't do testing, we've got way too much process... \[laughter\]
330
+
331
+ **Chris Hedley:** We hate testing. We love process.
332
+
333
+ **Colin Humphreys:** I'm just like, "Where do I even start here?" I have to try and put the best foot of the company forward before somebody shoots it. \[laughter\]
334
+
335
+ **Gerhard Lazu:** So my feedback to everything that you've said, all three of you, is the blog doesn't lie; go check out the blog. Go and watch Colin's crazy talks through the years; they're amazing, you'll have so much fun. Go and watch Paula's talk; I haven't seen it, but I've seen your other talks, Paula, and I know it's going to be good. Chris is cold and grumpy on the outside, but he's really warm and fuzzy on the inside, and he will really look after you as a manager. I had Chris as a manager for many years, and that's what actually happened. So if it would have been as bad as you have thought at a certain point, I don't think we would have worked together for like 5-6 years... So it's much, much better than it sounds from the outside. And the GitHub repo never lies; go and check the code, it's all public... And see what you think.
336
+
337
+ So as we are prepared to wrap this up, for someone that's been listening to this, hopefully all the way to the end...
338
+
339
+ **Chris Hedley:** If they're still listening... Yeah, fair play...
340
+
341
+ **Gerhard Lazu:** \[laughs\] Yeah, if they're still listening... What is the key takeaway? And we can maybe start with Paula. What do you think is the key takeaway, Paula?
342
+
343
+ **Paula Kennedy:** \[01:06:41.17\] I think for us the thing that we've learned is people need platforms to help them go faster. And we've seen the pattern. It's interesting how -- I feel like I've been talking about this platform as a product, this platform gap, for quite a long time, and I think it's a problem that is ongoing... And with all the new tooling, and the cloud-native landscape, and vendors coming out with new things - it's all still a problem. The actual challenge of trying to build the right platform to be able to go faster is still a problem that everyone's facing... So if there are people who've made it to the end of this podcast and are still here listening to us, and they are having these challenges - if they are either in a platform team and they're struggling because there's too much load being put on them, or if they're in an application team and they can't get anything from their platform team because they're somehow delivering too slowly - anyone who's got those kinds of challenges, those are the people we'd love to talk to, we'd love to help, we'd love to learn from... That's what we're here to do.
344
+
345
+ **Gerhard Lazu:** Thank you, Paula. Over to you, Chris.
346
+
347
+ **Chris Hedley:** I would +1 what Paula said. I think the key takeaway, as I reflect on what Syntasso can add to the industry - if it's on the platform developers to reduce the cognitive load on their customers, the application developers, for example, then it's on Syntasso and Kratix to help reduce the cognitive load on those platform team developers. It's like, we're here to help. We've been there, we've felt the pain, we've created some of it in the past, let's be honest, and we're here to really help in that space. We want to help the platform teams.
348
+
349
+ **Gerhard Lazu:** Thank you. Colin?
350
+
351
+ **Colin Humphreys:** Yeah, I think I'm actually gonna reflect some words back at you, Gerhard... Solomon Hykes, founder of Docker, on your show said "If your platform is generic, then your application is generic." So we know that people wanna build differentiation and value into their apps, and therefore you're gonna need a differentiated and valuable platform within your organization. With Kratix, we try to make it easier for platform teams to build out the platform that their organization needs. It sounds a bit corny, but we're trying to build Rails for platform development.
352
+
353
+ **Gerhard Lazu:** Or Phoenix. It's less corny, Phoenix... Changelog runs Phoenix, and it's great. But yeah, I know what you mean.
354
+
355
+ **Colin Humphreys:** So yeah, a framework for building platform as a product is what we're trying to build, and we all know the future of infrastructure is gonna be more and more clusters of Kubernetes. So we are at the intersection of those two technologies, and if you wanna help with building out platforms as products on Kubernetes, if you wanna talk to us about it... I mean, the real thing we'd like you to do is to take a look at Kratix, give us feedback, reach out and talk to us. That would be absolutely wonderful.
356
+
357
+ So if I had one takeaway for people to do, please do reach out to myself, Paula, maybe not Chris as much... \[laughter\] I'm joking, of course.
358
+
359
+ **Gerhard Lazu:** Yeah, he's inwards-facing, you already established that. \[laughter\]
360
+
361
+ **Colin Humphreys:** It's become quite clear in the whole podcast. You don't wanna talk to me, you wanna talk to Chris.
362
+
363
+ **Gerhard Lazu:** He's too polite, yeah.
364
+
365
+ **Chris Hedley:** If you're in the marketing pitch, speak to Colin. If you want the honest stances about what really goes on - yes, speak to me. \[laughter\]
366
+
367
+ **Colin Humphreys:** That is actually true.
368
+
369
+ **Gerhard Lazu:** Yeah, definitely. So thank you very much for today. I had great, great fun. It's been too long since we hung together. This was good... And I'm thinking six months from now... First of all, Team Topologies - I have to add it to my queue. There's eight already, but that's okay, I can manage one more. One more book.
370
+
371
+ **Paula Kennedy:** It's very good. Very, very good. Not too many pages. Very practical advice.
372
+
373
+ **Gerhard Lazu:** Thank you. And trying Kratix out. I love playing with tools, and - what did you say? Systems. That's what Colin said; it's not a tool, it's a system. And that's intriguing. There's so much more happening there. So there's one to follow. Thank you very much for today. See you next time.
374
+
375
+ **Paula Kennedy:** Thanks!
376
+
377
+ **Chris Hedley:** Thanks for having us, Gerhard.
378
+
379
+ **Colin Humphreys:** Thanks, Gerhard. Take care.
Docs are not optional_transcript.txt ADDED
@@ -0,0 +1,263 @@
1
+ **Gerhard Lazu:** The way I found out about the work that you do, Kathy, is by Jerod logging a news item, and the title was "Maybe it's time to re-think the docs", and I was thinking, "Yes, it is time to re-think the docs."
2
+
3
+ And one of the things that I really liked about what you wrote - this was a blog post, I think on GitHub, if I remember correctly; I can't remember exactly which part. GitHub was really big. I think -- well, now I know that you were at GitHub at the time. Since, you've joined Vercel. And this is like an interesting behind the scenes we were meant to record, I think, a few weeks back, but you'd only just joined, things were a bit crazy, so we just delayed it by a few weeks. It was the right call, I'm happy we did that.
4
+
5
+ And the one thing which really resonated with me from what you wrote was that when you're stuck on a problem and you turn to the docs, there's a moment of magic when you find the solution; you try it out and it all works. And that really clicked, because that's exactly what you would want the docs to do. When you're stuck, you'd like to reference something; you read it and then you get it.
6
+
7
+ **Kathy Korevec:** Yeah.
8
+
9
+ **Gerhard Lazu:** What made you capture it so well? Because it was perfect.
10
+
11
+ **Kathy Korevec:** Well, thank you. I think probably what made me capture it so well is that I've been working with a team of writers for the past couple of years, and they have taught me a lot about how to write. I noticed that my writing got a lot better and I became a much more of a stickler for good structure and documentation. And just like even in my Slack messages or my emails and things like that, the writers really rubbed off on me, and that was really nice, because being really detailed about writing is not my strong suit. That might sound kind of weird coming from somebody who led a documentation team, but I know that about myself; I actually leaned on the writing team pretty heavily to help me edit things and make things sound a lot better.
12
+
13
+ But I think for me, I'm really passionate about enabling any developer anywhere in the world to build and ship world-class software, regardless of where they are or what machine they're on. And I think that unlocking that moment of like, "I have built something and now it works" is really, really important to me. Not only because -- like, just for me, I remember that feeling of coding something really simple when I was getting into working on the web, and it just felt so cool; it felt so much like, "Oh my God, this is magical. I can create stuff."
14
+
15
+ My family - I am from a family of musicians and artists, and I was the scientist. And I was always kind of like, "What I'm into is not what they're into." I'm into coding and mathematics, and I was actually really into a lot of like animal science and things like that. So I was very different.
16
+
17
+ So when I figured out like, how to make websites, it was kind of at this moment of like, "I feel like I, in a way, belong", because I was making something that could be considered a work of art. I was never that good, but that moment of magic of making something and seeing it work was really important to me.
18
+
19
+ And I've been blocked before, and even on just simple things that are rudimentary, that I do all the time, that I don't always remember what the syntax is or whatever, going to the documentation and having it very clearly laid out for me, was really important just to get me back to the business of coding, basically.
20
+
21
+ I think docs are very, very important, and I used to subscribe to the belief that docs are a crutch to enable poor user experience, or things like that... But that definitely was a long, long time ago. I believed that, and I've definitely grown out of that thinking. I think docs are part of the product. And the more they become part of the product, the more they unlock that magical moment for people.
22
+
23
+ **Gerhard Lazu:** I think that's really important, because as someone that writes code, gets out there and just has to answer, "Does it work? Does it do what I think it does? Does it address the problem that our users have?" That's great, but to go beyond that; there's so much more. And that's almost like level one. Well, what about level 10? And there's like all these layers, and I'm not sure exactly where docs fit, but it's definitely part of the whole story. You can't just be writing code and you can't have just great automation on its own.
24
+
25
+ **Kathy Korevec:** Yeah.
26
+
27
+ **Gerhard Lazu:** Docs play a very important role. And it doesn't mean that your software or your system or whatever you have is not self-explanatory. It doesn't mean that. The crutch - you've put it perfectly. It's not that; it's another layer, another perspective to your products, to what you build, to what you believe in. So how do you capture that?
28
+
29
+ **Kathy Korevec:** I really strongly believe that docs are an API to the product, and we need to treat that -- like, there are several different interfaces... There's the SDK, there's the web, there's the application and then there's documentation; it's an interface for the code. And when you think about it that way, it really helps you connect the docs to the code in a way that is just going to empower more people to use your product. And that's what you want.
30
+
31
+ At GitHub we saw a lot of our documentation was used by people who were very new to the platform. And that makes sense. I think a lot of documentation teams probably see similar kinds of traffic. And I think for those kinds of users, it's like it's on us to connect the dots for them, and make sure that they're onboarding sufficiently and in a way where they don't feel like they're like, "Okay, well, I'm using this new thing."
32
+
33
+ Take Vercel -- I’m at Vercel right now. Take Vercel as an example. We could have people joining and signing up for Vercel all day long. But if they get frustrated and they can't deploy their site, they're probably not going to come back, and they're probably going to go to one of our competitors. Now, that's totally fine, because that means that, they're getting their answers somewhere. And I would rather have them on the web than not on the web, not deploying their site. So I'm fine if they can't figure it out with us and they go somewhere else. But I do want to know what their frustration was. And if it was documentation - that hurts, because that's one of the places that we can easily update and we can easily help them through that experience, and I think it's our obligation to do so. Because it is so simple. And we do get so many signups coming through. And it's kind of like, when I was at GitHub, you'd see this kind of like, people sign up, and then some of them churn off of the platform. And I kind of have this belief that docs can be a part of helping them through and helping them stick around and helping them use GitHub in a really cool way. Not in a selfish way, but we want to obviously keep our users using our products, but I want to make sure that they're using them to fulfill their dreams, and that's where docs can really help them.
34
+
35
+ **Gerhard Lazu:** That makes a lot of sense to me. And I really get it. Through and through, for years, same page, definitely same page. But I'm wondering, for someone that is focused 80-90 percent to just shipping code, what would you tell to that person when it comes to the documentation
36
+
37
+ **Kathy Korevec:** Yeah, so one of the pieces of feedback I hear a lot is that developers don't like to context-switch between working in their IDE, getting stuck looking at code, going back and forth to all these different tools; they're working in the CLI, they might need to go to github.com or vercel.com for things... That can cause just a lot of churn, I think... And so when you're spending a lot of your time in all of these different tools, you get a lot of that whiplash. And so I think where docs can help you, and where we can take docs, is creating systems that bring docs to where the developer is a lot more. And so one of the things -- I can't remember if I talked about this in that article that you referenced at the beginning of the conversation, "Maybe it's time we re-think docs"... But I do think we can create better APIs for documentation, so that developers can bring that documentation to where they are - maybe they want to have access to the functionality in a certain way because they're building tutorials, or they need access to the documentation where they are, in the IDE, things like that. I think we can do a much better job at bringing the content and bringing the docs to where you are, even inline as you're working with a CLI, for example. And so I think a lot of documentation, especially for developer tools, is very, very web-focused, and I think we can really start to think of it as a system that can work with the developer toolset in the workflow, as they're going through developing, looking at staging, and then actually deploying it.
38
+
39
+ **Gerhard Lazu:** I think that makes a lot of sense. And I'm almost wondering, if someone that really cares about code, shipping good code - would tests be optional? I think the answer is no for the majority. What about actually getting the code out in production? Is that optional?
40
+
41
+ I would like to think that for most of our listeners it's not, right? You want to see your code in production, you want to understand the behavior at production scales, so on and so forth. So the natural question is, why are docs optional? Why do you think they're optional? So why do you think docs aren't optional?
42
+
43
+ **Kathy Korevec:** Well, I think they're not... Yeah.
44
+
45
+ **Gerhard Lazu:** Yeah.
46
+
47
+ **Kathy Korevec:** There's a little bit of philosophy here, and I think it probably depends on -- the answer probably depends on who you talk to. But my philosophy is that the product doesn't exist until it's documented. And the reason for that, and I think you can probably connect the dots, is that I really feel that documentation is part of the product itself. And if you don't have documentation for people to get their answers, or for people to give you additional feedback about how they may be interacting with the docs, or with the product itself, then I think you've kind of failed.
48
+
49
+ But I do think that the reason why a lot of people maybe think docs are optional, or maybe haven't thought about this question to the extent that I think would really benefit them if they did deep-dive into it, is that a lot of times you're up against a timeline and you want to get stuff out. And docs typically fall after you have finished a feature or finished a part of the product... And then you're up against "We've got to ship this today", and docs often just get cut from the launch plan. And I think that's unfortunate, because there are so many opportunities... If you think about docs as code and you start to structure how you work with documentation, you can actually document things really easily along the way. And especially if you're working really closely with writers and developers who -- at GitHub, we always talked about the EPD, the Engineering, Product and Design team. And right before I left, we started talking about the WEPD - the Writers, Engineering, Product and Design team - and bucketing them all in one, because really, the writers can help a lot with end user testing, they can help with feedback, they can help you make sure that your entire feature actually makes sense for the end user.
50
+
51
+ If you think about writers almost as designers, but at the end of your iteration, they can really help you improve a lot of those details over time. And if you do that, then you're constantly kind of writing documentation alongside, as the software is being developed, and iterating on that documentation to make it better and better and better, because you're treating it as part of the product within the same system that you're developing. And I think that's probably not something that a lot of teams -- if you're lucky enough to have a team of writers and think about documentation, then oftentimes you're thinking about it at the end of the cycle, and then it is at risk to get cut before it goes out.
52
+
53
+ **Gerhard Lazu:** I think that's exactly right. And that's how \[unintelligible 00:14:37.22\] even started, when you said that we need to bring the docs closer to where the code writing happens. And having writers involved with that process is one way, but what happens when you have a smaller team? I won't say a one-person company, but let's say a five-person company, where you don't have a dedicated writer. What would you recommend that team does in that case?
54
+
55
+ **Kathy Korevec:** Yeah, I mean -- we're in this situation right now at Vercel, we're hiring technical writers, but we don't have any on staff. We have documentation, we have great documentation for Next.js and Vercel. How did that happen?
56
+
57
+ So I think that the people who know the most about a product are the people who are building it. And the benefit of this kind of situation is that you can get those people to write the documentation. And that's often what happens, but you kind of have to have this ethic that I talked about, which is that documentation can never be an afterthought; you have to think about it as you're building, and also celebrate it throughout. So that's been my experience so far at Vercel.
58
+
59
+ And when I was at Heroku, we were in the same situation. We had documentation, but we didn't have writers. And the product team -- a lot of people would say it falls on the product team's shoulders to write the documentation, or it falls on the engineers' shoulders. I think it's an opportunity to express what you have built, and also take a step back and almost go into the same mindset that you had when you were writing the spec, when you were approaching the problem. Now you're at the end of the problem and you're about to ship it; you can go into that same mindset of like, "Let's review and make sure that what I've built actually matches what I intended." And that's what this writing process can help you with.
60
+
61
+ That said, we do have people in both situations at Vercel and at Heroku, we do have people thinking about the documentation system and thinking about how to write good articles, and providing that information to the rest of the team in a form of a system - system notes, system documents etc. So that when you do go to -- if you're not a writer like me, when you do go to write documentation, you have the resources you need, to know like, "Okay, well here's how the system works. Here's how the taxonomy works." There's somebody kind of thinking about that. And sometimes that falls to a product manager or somebody who's doesn't have a documentation background. And I think that's awesome. I think having people thinking about the system in terms of the way that it's designed, who are outside of our writing discipline, is really great, because they're thinking about the product of the docs itself, and I think that that really helps.
62
+
63
+ **Gerhard Lazu:** I think that makes a lot of sense.
64
+
65
+ **Kathy Korevec:** If you don't have any of those people, have a champion.
66
+
67
+ **Gerhard Lazu:** Yes, have a champion. Okay, so someone that reminds you of the importance... Someone like you, right? Hey, docs is important. This is why it's important. This is what that looks like, so on and so forth.
68
+
69
+ **Kathy Korevec:** Yeah.
70
+
71
+ **Gerhard Lazu:** So that makes a lot of sense. Product people helping with the documentation, if you don't have writers. It helps to have writers for the taxonomy, for the structure, for like the higher-level concepts that are specific to writing, what makes good writing, which is very important... And you mentioned how much it changed your outlook on good documentation having worked with good writers, so that is important.
72
+
73
+ But what about -- I'm just a developer, okay? I have to write my code, write my tests, preferably first, commit, push, get it out there, get it in production." At what point should I, the developer, write the docs?
74
+
75
+ **Kathy Korevec:** You hit on a good point, which is -- I don't want to underestimate the work that it takes to write good docs. I think technical writers are -- and this may be controversial to say, but I think they're undervalued in the industry. I think especially technical writers, for very technical tools, for developer tools, for the space that I tend to work in, we demand a lot from our writers, and what happens is they're undervalued, so they're overstretched. So we only hire like one or two of them to document an entire product. And that means that where you might hire one or two technical writers, you have five or six product managers who are shipping things, a team of 100 engineers who are shipping things... So those two technical writers then have to know everything about the system, whether it's existing legacy, or new and upcoming; and that's a lot of demand on them to not only understand what a technical writing system looks like, but also really fully understand what the product is.
76
+
77
+ So when you find these people who work on these small teams like this, they're brilliant and they can do a ton of things; not just writing, but most of the time, they want to do writing, because that's what they've chosen to do for their profession. So I definitely don't want to undervalue them and say that their job can be done by somebody who's not a technical writer.
78
+
79
+ But I think if you don't have a technical writer on your team, you can do things like document as you're going, and that's kind of what I was talking about before, where it's like, treat documentation and writing as part of your development process. You know, that introduces another step in a lot of ways, and something that you can forget about, but it helps you in the end, especially looking back... Like, okay, you now have a record of the decisions you've made and how things have changed, and that can help your development process in the long run.
80
+
81
+ **Gerhard Lazu:** So the way I understand it, there are about three stages. The first stage, the incipient one, is docs don't exist. Now, that's stage zero, and that's a bad one to be in. That's maybe the worst one to be in - just ignoring docs altogether.
82
+
83
+ **Kathy Korevec:** Yeah.
84
+
85
+ **Gerhard Lazu:** The first stage, the first proper stage is to write some docs the best you can as a developer, as you build a feature. Don't leave them last, because you'll forget; think about the user. If anything, it forces you to think about the end user more as you write the docs. How will they be consuming this? What problems might they have? ...so on and so forth.
86
+
87
+ Then, encourage the product team to help write the docs, because they have a higher-level perspective - a deeper, longer, wider, however you want to take it - just a larger point of view into how this feature fits in the product, and that helps. But what you should really be doing - and this is like level three, or the last one - is have a technical writer; have this cross-functional team which includes a technical writer, which is super important, and maybe is no longer optional, because our products are growing more complex, things are changing faster, and you forget the human element. You think it's just about slinging code, just get it done. It breaks. "Okay, we'll fix it. Let's move on." Well, maybe what you should be doing is slowing down and investing in good documentation.
88
+
89
+ **Kathy Korevec:** Yeah, definitely. You know, you mentioned the human aspect of it, and I think that's really important. One of the things that I talked about in that article is writing documentation that can be flexible for how people learn; people learn in many different ways. I am a very visual person, and so I like to pair a lot with people, I like to see their screens, and I like to see and watch other people doing things. So videos really help me learn. And other people like to get hands-on, and that really helps them learn.
90
+
91
+ Some people - and I envy them - can learn just by reading. And so I think it's really important, if you can, to think about documentation in a way that meets people's needs, and helps them learn in different ways. And one of the things that we started to look into at GitHub was - you know, just take the simple article template, and say we're looking at a guide or a tutorial. That guide can be presented on one page in three different ways. You can have the text on the page for those people who read through it, you can have an interactive element, where people can play around with the code. You could actually manipulate that code and take it with you if you needed to. That can be really powerful too, because then you're merging the learning and the development in one place, and you have that context in your head as you approach your project.
92
+
93
+ And then you can also - maybe in the same place where you have the interactive element, you have a toggle between interactive or video. And then if you're watching a video, have it be somebody just literally showing their screen, walking through the steps of that article, because the point isn't necessarily to talk about something else or introduce other ideas. The point is to learn what's on the page, and that can be very valuable. And if you do have a team that can introduce content in various different ways, I think it'll pay dividends to your goals for signup churn or for engaged users and things like that.
94
+
95
+ **Break:** \[23:25\] to \[25:40\]
96
+
97
+ **Gerhard Lazu:** This is exactly what I was thinking about while hearing you talk about documentation itself - most think it is just text; walls and walls of text, man pages. They have their place, but that's not the type of documentation that we are thinking about. And that's actually the second thing, which I really loved about that post, which is maybe it's time to re-think docs. I'm going to link to it in the show notes... Where documentation is not just text - it's videos, it's the interactive elements. Because documentation - and these are your words, Kathy, and I love them - they're learning experiences. That's what it is. You're trying to learn something; you don't understand something, you're blocked. And being blocked means you're missing a piece or maybe multiple pieces. So how do we get those pieces? Reading text - most people are fine with that. But I think we're seeing -- I don't want to say like a new age of developers or like a new trend, but I think in today's age, video, I don't think it's optional anymore. I think it's something that people expect to have. Also the interactive element - super-important. So if you stop thinking about docs as text, but more as learning experiences, then you start realizing, "Well, hang on, do you mean all my demos and all my pitches, and all my things are actually docs?" In a way, they are. "Are my blog posts docs?" In a way, they are.
98
+
99
+ I mean, sure, you can do that at the end. I know that Amazon, with starting -- what was the name of the book? Hang on, let me just pick it up. Working Backwards. I forgot about it, there's just too many books. Working Backwards. That's a great way to start with a feature. Imagine it finished, imagine it in front of users, imagine the press release. Maybe that is the final doc that you create, and then you just work backwards from that, maybe.
100
+
101
+ So I really like this about your article Kathy, where you, first of all, give an example of what GitHub docs look like in certain sections, and you give an example of what they should look like. I love that; it's so clear, even in the example itself. How did you come up with the idea?
102
+
103
+ **Kathy Korevec:** The idea to introduce interactive elements?
104
+
105
+ **Gerhard Lazu:** Yes, interactive elements to expand people's horizons, that "Hey, it's not just text, it's actually videos. It's actually interactive elements."
106
+
107
+ **Kathy Korevec:** Yeah.
108
+
109
+ **Gerhard Lazu:** Just open people's eyes to re-thinking docs, literally. That's the essence in my mind of that blog post.
110
+
111
+ **Kathy Korevec:** Totally. So when I was at Heroku, there was a rule that we were never allowed to put any marketing into our documentation. And I really subscribed to that; like, yes, don't put marketing, don't get in a developer's way with trying to sell them things. And I think a lot of people take that to heart and they think like, "Okay, well marketing means slide decks and demos and videos and things that actually make a page come to life." And so in order to not have marketing in your page, you have to make it really dull. And that really sucks for people like me, who learned in different kinds of ways.
112
+
113
+ And so I was just thinking - you know, I constantly want to revisit, "Was this a good idea?" Just because we're doing it, it doesn't mean that we should be doing it. And you kind of fall into this trap of like, "Oh, well this is the way that it's been done forever." And so I really like to question that, question my own thinking, and question my team's thinking a lot, of like, "Well, just because that's the answer doesn't mean it's the right one."
114
+
115
+ So I kind of took a step back and said, "Well, what if we introduce -- what if we talk about the value of what GitHub Actions is and introduce just a little bit of marketing?" And I realized, it wasn't marketing that I was introducing, I was introducing a story. And we should be able to tell the story of this feature, of this product, and who it's for, and why you might use it on your project in the documentation; not just this one to one "Here is the feature, here's the UI" or "Here's the CLI component" or whatever. And then just like have that one to one, UI-to-text document. We should show people using it. And then I just got thinking like, "Okay, well, what if we add a video?"
116
+
117
+ I looked around -- as a product manager, I looked around and did a bunch of competitive research. I saw that Netlify was adding videos. Why can't we add videos? CircleCI is adding tutorials that are not written by anybody on their team, they're written by the community. Why can't we have that in our documentation?
118
+
119
+ So it's kind of this FOMO moment, of like, "Well, other people have this thing. I want this thing." And then I started to put it together... You can actually incorporate all of these elements into one screen, if you think about it like a system. And then it can start to help that product or that feature really come to life for people. And then you go through -- like, I went through my typical product development process, which is like sketch something out, show it to somebody, get some feedback and then go a little bit further.
120
+
121
+ But what's really interesting is -- you mentioned something that I really like, which is you're new to this project, you're new to a product, you want to test it out, and documentation is really important for people who are kind of new to something... I think that's very, very true, and in a lot of ways, I structured -- I didn't talk about this in the article, but in a lot of ways, I structured the way the pages were laid out based on our most trafficked user, which was the person who is new to the screen. I wanted to prioritize their use case.
122
+
123
+ That said, we get a lot of -- at GitHub, we got a lot of traffic from people who are not new to GitHub, but were new to the documentation in certain ways, where they were like using GitHub for very predictable things... But then when GitHub Universe happened and we introduced a brand new feature, everybody would get really excited about it and turn to the documentation right away to go learn about it. And so those folks we definitely want to cater to as well. So it's kind of like, you've been here for a while, we've just introduced a new feature, and so we want to be a little bit splashy with it. So we want to introduce some cool things.
124
+
125
+ I mentioned GitHub Actions... One of the things we did was we revamped the GitHub Actions page to where we weren't just documenting like a one to one, like I said, "Here's the UI. Here's the UI in text form." We were starting to incorporate a lot more of the community.
126
+
127
+ So GitHub Actions - one of the coolest things about it is that the community writes these Actions and workflows. And they sometimes are hard to find... Especially if you're just getting started with GitHub Actions, you have to kind of like troll through a bunch of repositories, and go in and read somebody else's -- we could talk about repository documentation too, but go and read their documentation about what this thing does... So the discoverability was hard.
128
+
129
+ So what we did was we actually built a component within the GitHub Actions page that pulled some of that information into the GitHub Actions documentation itself, and it allows you to search different code examples for how to use GitHub Actions. And I think that unlocked a lot of the a-ha for people, because they were like, "Oh, well, GitHub Actions - I can go and try it right now."
130
+
131
+ That component was really easy to put together. It was literally like, we added some frontend code, and then it was me updating a YAML file with -- like, manually updating the YAML file with links to all of these repositories. I think I wrote a bunch of the description text myself, just to get it out there and see if people were using it, and they loved it. So there was just this kind of like -- you went from having FOMO to participating in the FOMO. That's what the documentation unlocked.
132
+
133
+ And the final thing I'll say about this is wanting people to have that magic moment, and then seeing them have it is super-gratifying for anyone who is building a product. That was really fun, and that was very motivating for me to keep going.
134
+
135
+ **Gerhard Lazu:** This makes a lot of sense in my head now. So I remember using GitHub Actions when it first came out... And I remember a lot of things being primordial, and just a lot of questions not being answered. And over time, it got better and better and better, to the point that I really like it. I mean, at this point, I've tested all the CIs that there have been in the last 15 years, all the popular ones, by far. So anything that you have used as a listener, I have tried out myself, and even used myself. And I can say that GitHub Actions as a product was brilliant, and it continues being brilliant. Documentation plays a big part in it. It's still YAML, and there's still that dissonance between, "Well, what do I put in this file?" And then you go to the docs and you have all the reference, and "Okay, so what do I actually want?" You have some examples to get you started. It's okay, it gets you there, and then it keeps hand-holding you through the entire process. And you can build some pretty good pipelines with GitHub Actions, just by following the docs.
136
+
137
+ You look at some examples, more advanced ones, all the different GitHub Actions you get from the marketplace, they also help and they have their own documentation, so that's good as well. The marketplace makes a big difference, but overall, it's a nice experience. And even though GitHub Actions itself, it has various limitations... As everything, by the way. Nothing's perfect, stop looking for it, it doesn't exist, by the way. It's really, really good, and I liked it.
138
+
139
+ And as a whole, it felt more human than any of the other CI systems that I used. And as I said, I went through many of them; look at the show notes, I'll drop a few, the most popular ones. But there's something to be said about the product. There's something to be said about the experience, whether it's a learning one, or using one, it doesn't matter where you stand. That experience is really important. And documentation - it contributes to that. You can’t have --
140
+
141
+ **Kathy Korevec:** It really helps.
142
+
143
+ **Gerhard Lazu:** ...a good product experience without good documentation.
144
+
145
+ **Kathy Korevec:** I think what you're getting at is -- there's kind of like two things. Documentation is a detail that you should not overlook. When we first released GitHub Actions - this is a little bit of inside baseball, but when we first released GitHub Actions, and I think a lot of people who release a big product will empathize with this... It was definitely awesome, and it was like "Oh, my gosh, this is going to change a lot of workflows." And it connects a lot of workflows right inside of GitHub, so that was super powerful. But, it wasn't perfect. Not that we were seeking perfection, but it needed some improvements. And one of the things the team did was they took a couple of the iterations and they said, "Okay, what details matter here? What should we focus on?" And so they shipped the thing, and then they revisited it again, and again and again. And so even in the UI, like -- and they also pulled the documentation into the UI, which I love, but they perfected those details over time. And I think what you're saying is a reflection of that hard work that went into that.
146
+
147
+ **Gerhard Lazu:** Yes.
148
+
149
+ **Kathy Korevec:** And I think even the documentation - we released the initial documentation for GitHub Actions, and it was okay, but we needed, especially as the product evolved, we needed to iterate and keep up. And when I first started on the team, improving the documentation at GitHub as a whole was a big mandate of mine... But I picked a white whale, which was GitHub Actions, because we had been getting a ton of feedback about the GitHub Actions documentation specifically. So I said, "Well, if I'm going to do it, I'm going to jump into the deep end and I'm going to go after what we get the most amount of feedback from."
150
+
151
+ And one of the things I learned in that experience was that it is really powerful seeing how other people do things. And not only for myself, but for other people. It's like, you go to Stack Overflow and you ask for an example, you're seeing how somebody else might solve your problem, and it's the same thing that we wanted to apply, bringing these code examples into documentation. It's like, how are other people working with GitHub Actions? How are they building their workflows and their pipelines? What are the other examples that I could then apply to what I'm doing? There's a lot of power in that kind of thing. It goes beyond a template. It is actually how somebody is using it; that can be really eye-opening.
152
+
153
+ **Break:** \[37:58\] to \[39:03\]
154
+
155
+ **Gerhard Lazu:** The one thing which I really liked about GitHub Actions is seeing how it changed week by week, month by month, and it's that journey that I was on by using GitHub Actions that felt comfortable. I knew that the shortcomings which it had will be addressed. I knew that the frustrations, the small ones, with the documentation, with whatever it may be, eventually will be addressed. You had a very good GitHub community where I could ask questions, people could join, and it felt like it's part of the GitHub experience, of the GitHub Actions experience specifically. And that really helped. The marketplace kept growing, people kept building more awesome stuff... So it wasn't just GitHub, it was the whole GitHub Actions community contributing. And that felt great. Seeing things improve constantly, at a comfortable pace, the trust element growing - it was amazing. I really liked, and I still like and enjoy being part of that journey. How much did you have to do with that, Kathy?
156
+
157
+ **Kathy Korevec:** How much did I have to do with that?
158
+
159
+ **Gerhard Lazu:** Yes.
160
+
161
+ **Kathy Korevec:** Specifically on the GitHub Action side... So I was not on the GitHub Actions team. I worked closely with them, in that my direct colleagues were running that team. But one of the things that -- I can talk about it from a documentation standpoint, if that's what you mean...
162
+
163
+ **Gerhard Lazu:** Yes.
164
+
165
+ **Kathy Korevec:** But one of the things that you're touching on, which I think is really, really important, is that when you started using GitHub Actions, it was a product that was in beta, basically. We had just shipped it, and there were some things that we needed to improve, and the team was really interested in collecting a bunch of feedback, so that we could improve those things.
166
+
167
+ At that moment, you are taking a huge leap of trust, because you're talking about putting a lot of probably your critical developer workflow infrastructure onto GitHub Actions. And being able to use CI/CD, and to use that integration, is really, really important for the success of your product long-term. So you're trusting GitHub to update the product in a way that it's going to work for you; that's a ton of trust. So I take a couple of steps back, and I'm thinking like "Okay, why did you do that? Why did you make that decision to trust GitHub in that way?" And I think that trust is earned.
168
+
169
+ And if we go back a couple years before, or like a year before I joined, GitHub had lost a lot of that trust with users. And I think very famously with the Dear GitHub letter. So the Dear GitHub letter, if people don't know what that is - it was a letter that was from the community, asking for certain features and certain products to be updated in a way that could really, really help maintainers. And maintainers are a big part of GitHub, maintainers of open source projects. So there were a lot of kind of like low-hanging fruit things, there were a lot of bigger projects that people were asking for.
170
+
171
+ And so kind of in response to that, in response to losing the trust of some of our maintainers and some of our users, we decided to put together a project that I called \[unintelligible 00:42:02.18\] And we talked about this publicly, we talked about this -- we can link to it in the show notes... We talked about this in the Dear GitHub letter; it exists in a GitHub repository, so you can go in and see openly what the conversation is around all of this stuff.
172
+
173
+ But we decided to take some of those requests and rapidly iterate on them, rapidly iterate on our platform, to not only win the trust back of some folks who we consider very near and dear to the heart of GitHub and in the DNA of GitHub, but also to focus on the details, because the details are what matter for developers. And if you don't focus on the details, those are things that add up and end up getting in your way, and end up turning a trickling stream into a raging gorge that you can't cross.
174
+
175
+ **Gerhard Lazu:** That's right.
176
+
177
+ **Kathy Korevec:** And we wanted to fix these small details. And over time, they add up to a huge win for our community and for these maintainers, and they paid off in dividends. And one of those dividends is trust. And so when you come to GitHub Actions for the first time and you're thinking, "I'm going to now put a critical piece of my workflow onto this new in-beta, just-shipped product", you can trust that GitHub is going to focus on those details over time, and that's exactly what we did.
178
+
179
+ And the GitHub Actions team shipped this thing, and then they said "Okay, well, there are certain parts of it that aren't perfect." And we are, like -- you're always striving for perfection, you're never going to hit it, and so it's good motivation to keep on fixing things as you go.
180
+
181
+ And on the documentation side, we felt the same thing. We shipped the first set of documentation for GitHub Actions, and it wasn't great. We got a lot of feedback about it very publicly, about how we could improve GitHub Actions.
182
+
183
+ And so when I started on the documentation team, I just kind of said "Well, one of the biggest frustrations from our community that I can see publicly and in the feedback we're getting just directly to the team is that the GitHub Actions documentation can be improved." So I picked that as my first thing to focus on when I was re-imagining how documentation could look at GitHub.
184
+
185
+ And one of the things I learned right away was that people had a hard time finding examples in the wild of what other people are doing, and they wanted to see that in order to -- there's a little bit of like "Let me see what other people are doing so I can help contextualize it for myself." And then there's a little bit of like, "Let me see what other people are doing so I can trust that system more." You know, get proof that this works for other people. And so we wanted to incorporate that a little bit.
186
+
187
+ Also, at the time that we started revamping a lot of these docs, we introduced a new documentation type tutorial into our system, which exists in a lot of different documentation systems... But we spent a lot of time thinking about and working with a team of technical writers, about 20 of them, who were working across GitHub, but a couple of them who were working specifically on GitHub Actions... But we embedded those technical writers on the GitHub Actions team, to work with the engineers and designers on these tutorials, so that we could kind of like get closer to the metal, I guess, and ship tutorials that made a lot more sense than they did in the past.
188
+
189
+ **Gerhard Lazu:** The thing which I'm thinking about now is writing documentation and getting documentation out there, part of the same repository. I know that a lot of what we talked about is bigger teams, larger organizations... But if you're a smaller team - again, just a handful of engineers - and you're trying to ship stuff and do everything else that you do, as you would expect when you're part of a small team, would you recommend to have a single repository and put all your documentation there? Or would you recommend focusing on the code first, on how everything behaves, and then having documentation separately to your code? What do you think?
190
+
191
+ **Kathy Korevec:** I mean, I think it depends on how you work. I think if it's useful for you to have it separate, then do that. If it's useful for you to have it in the code itself, then do that. I think, if you're just starting out, you're probably going to private beta or like ship something in public, but ask for a lot of feedback... And so it can be really helpful at that time to have the documentation in mind with the code.
192
+
193
+ The way that I would think about it, and this is one thing that I do when I'm just starting a new project, is that I think about my documentation like I would release notes. And when I get to the end of my day or whatever, I'm kind of thinking about like "Okay, what does this release look like for people? What are the bullet points that I really want to highlight?" And I just start there, and then that helps me; I can fill in the gaps later on, but at least I have the skeleton. If I think about it in terms of like, how are people going to be consuming this once I released this package, or something like that.
194
+
195
+ **Gerhard Lazu:** The reason why I ask this is because I can see a lot of things coming together in a single GitHub repository, and I think that in itself is a very powerful concept. Not only do you have code, you have the readme, you have GitHub Actions in the same context... So you don't leave your repository; it's just another tab. You have discussions, which I love to see... And there was also the wiki concept for many, many years, but I don't think that's quite worked as well as it could have. It was okay, but I don't think it brought the concept of documentation close to where the work was happening. I know that there's also the project concept, and I think that's slightly separate, but then you have issues and PRs, which are kind of linked with the project.
196
+
197
+ **Kathy Korevec:** Yes, issues and PRs.
198
+
199
+ **Gerhard Lazu:** I mean that's what a project is, right? All the items that you're doing are the issues and the PRs. So I'm wondering, in your opinion, Kathy, what would a better documentation implementation look like in the context of a GitHub repository, alongside all the other things which I just mentioned?
200
+
201
+ **Kathy Korevec:** We often talked about whether we should have introduced a docs tab in the repository, and it would kind of take -- you know, like you mentioned, the wiki, and this is kind of what a wiki is supposed to be... But I think the word wiki is a little overloaded, and the wiki product didn't take off, I think, in the way that it could have for -- the teams that I talked to, that are using wikis, it really, really works. But you kind of have to make a commitment to -- wikis can be kind of messy, and they're intended to be kind of messy... Whereas documentation is something that you're making a commitment to it being part of the product for the end user. And while you're updating it, you're constantly thinking about, like what I said, you're thinking about, like "What if the release notes look like this? And how do I want my users, once I open this up, to consume it?" And wikis can be a little bit more organic... And that's just my opinion. I'm sure what I'm saying is possibly controversial with some people who really, really love wikis. I used to love them; I don't really use them that often anymore.
202
+
203
+ But I think there's something really powerful about having the documentation right there with the code, in the repository, especially like -- the documentation I was working with was documentation that was external, and also accessible without a login or anything, so external to the product. And so you almost have like the product up on one screen and the documentation up on another screen while you're using it, and that's like product documentation.
204
+
205
+ But what we're kind of talking about now is repository or code documentation, and having that all in one place is really, really useful.
206
+
207
+ Having the readme document where you have installation information, and you have update information, and how to navigate this repository - all of that's really, really powerful, and that's a piece of documentation. And so introducing something similar to where you could potentially have the documentation or a docs tab is really cool, because you can also then tie that into how people are pushing code to the repository. And so you could say in every single pull request, or you have a PR template, or you have an issue template - you could have a portion of that write directly to the docs tab, documents in that tab if you wanted to. You could use GitHub Actions to pull that content in if you wanted to.
208
+
209
+ So there's something really powerful about that kind of a workflow, where not only are you getting the context of having the docs in a tab for the end user, but you're also thinking about the published flow, and automating that published flow in kind of a cool way. And we've been talking a lot about like -- if you're one developer on the team, you don't have writers, this kind of a workflow could actually help you. It's something we talked about a lot on the documentation team, like how do we go beyond just product documentation and think about improving code documentation?
210
+
211
+ **Gerhard Lazu:** I'm a big fan of everybody meeting in a single place and then seeing what happens, and a GitHub repository - to me, that's what it is. The discussions are happening there, the automation is happening there via GitHub Actions, the code is definitely happening there, your issues, your pull requests, community contributions are happening there... The docs, I think, should happen there, too. GitHub Pages - it works; it works well for you. The wiki - sure. The wiki was a really weird one, because it was a repo inside a repo, and I don't think many people knew that.
212
+
213
+ **Kathy Korevec:** Yes.
214
+
215
+ **Gerhard Lazu:** You could just clone the wiki, and then you would have a repository of your wiki. And that was a bit like, "What?!" That was just a bit awkward. But again, it worked. And people that knew it loved it, and used it, and so on and so forth.
216
+
217
+ Once you start having many repositories, unless you need them -- and if you're like thousands and thousands of people, you definitely need them. Well, I don't know... Facebook, single repo; Google, single repo. I mean, maybe, I don't know. I think there are extremes, but let's not get bogged down in this.
218
+
219
+ **Kathy Korevec:** You know, it's really interesting... One of the things that we had to solve for when we open-sourced our documentation, speaking of "Do I have one repo or do I have multiple ones?", is that we had to actually -- because we were the documentation team, we were documenting products that were under development, and so sometimes, we could do that out in the open, because the community knew about those things that we were working on, but sometimes we wanted to keep it a secret. Say GitHub wants to make a big splash or a big launch, or we're not ready to accept feedback or whatever. And so we had to document those things in private. But we had an open source documentation project going on, and so we actually created two different repositories for documentation. One was internal and one was external. And we used GitHub Actions to bi-directionally sync the two every 15 minutes, and we would actually only sync the PRs merged to the main branch. Everything else was different on both sides, but the code was a mirror, which was pretty cool. I thought it was a nice workaround for us.
220
+
221
+ **Gerhard Lazu:** Okay. Okay. Are those GitHub Actions public, by any chance?
222
+
223
+ **Kathy Korevec:** Yes, the one that we used - it's called Repo Sync, and that's an open source project.
224
+
225
+ **Gerhard Lazu:** I'll check it out, because that sounds really interesting. That sounds really interesting.
226
+
227
+ **Kathy Korevec:** Yes.
228
+
229
+ **Gerhard Lazu:** We're approaching the end of this... I know that some people, some listeners may be confused. The reason why they're confused is because when I started talking to Kathy, she was at GitHub; but when we recorded - she's at Vercel now. So I think that the only logical next step is to do another interview with Kathy from Vercel, because this one sounds a lot like Kathy from GitHub, right?
230
+
231
+ **Kathy Korevec:** \[laughs\] Yes.
232
+
233
+ **Gerhard Lazu:** That's, at least, what I think. So --
234
+
235
+ **Kathy Korevec:** Yes, we can totally do that.
236
+
237
+ **Gerhard Lazu:** Yes, one focused on Vercel and the amazing work that you do there, and the amazing work that the team does, I think would be well deserved. Let's put a pin in it for now. We didn't even talk about Kathy's philosophy page, kathy.pm/philosophy, that's a great one. You have to check it out right now. Put on pause, go and check it out; it's so good. I was going to ask you about your three favorite items from that page, but I think -- I mean, if you have a quick answer, we can do that...
238
+
239
+ **Kathy Korevec:** My three favorite... Well, I've just added one that I think -- maybe I'll just add my one favorite.
240
+
241
+ **Gerhard Lazu:** Okay. Yes.
242
+
243
+ **Kathy Korevec:** It's at the very, very top, I just added it. It's about embracing failure, and I think that's really, really important. I think a lot of people in this industry, myself included, have a lot of imposter syndrome. I have a ton right now, because I just started at Vercel, and I'm working with very, very smart people, and I have a job that is -- it's the biggest job I've ever done. So I think the way that I combat kind of imposter syndrome is to embrace failure as much as I can. And if I do that, then I'm constantly thinking like a scientist, because I'm trying to prove myself wrong in order to ship the right thing for the customer. And learning that was huge, because it gave me this huge out for failure.
244
+
245
+ **Gerhard Lazu:** This is exactly what I was talking about with someone - I couldn't remember their name - two days ago. And I was talking to someone else, I think it was Patrick, two weeks ago, about learning from failure. Brian Lyle - I remembered him.
246
+
247
+ I was not talking - I was replying to a tweet; let me be specific and clear. I was replying to a tweet about how much more we learn from failure than from success. And my opinion is that it has to do with the bias for loss. People feel losing a lot more harshly than winning. They have stronger emotions about loss than success. And I think when you feel that you've failed, it triggers stronger emotions and stronger reactions within you. And it feels like a more meaningful experience, but it doesn't need to be a negative one, by the way. I mean, okay, it depends on the type of failure. I don't want to like go down the rabbit hole.
248
+
249
+ But if your experiment failed, you've learned something. And if you learn from failure - well, is there a better thing? I don't know what you could learn better or what source of learnings is better than failure. And if you start looking at it like that, the world is your oyster.
250
+
251
+ **Kathy Korevec:** Yes, totally. I mean, the fastest way to being right is to admit that you were wrong.
252
+
253
+ **Gerhard Lazu:** Exactly.
254
+
255
+ **Kathy Korevec:** For me, it helped. It helped me embrace this fear of failure. I think it's something that you have to embrace and use to help you just get better.
256
+
257
+ **Gerhard Lazu:** Yes. I would ask you, if a listener had to remember one thing from this conversation, what would that be? But from my perspective, it would be just what we discussed, learning from failure. Is there something else that as a listener I should take away from this other than that, which I think is the top one, the top item?
258
+
259
+ **Kathy Korevec:** You know, I think that because we were talking about documentation, probably one of the biggest things I would take away from this conversation is that if you are shipping things specifically for developers, documentation is going to unlock that magic moment for them, no matter what. And that's why documentation matters the most - helping people get to that "A-ha! Oh my God, I got it. And it works. And I don't have to sit here and bang my head against the wall anymore, because I learned from the docs." That feeling is so gratifying.
260
+
261
+ **Gerhard Lazu:** That sums it up so nicely, there's nothing more to add. Kathy, this has been a pleasure. Thank you very much.
262
+
263
+ **Kathy Korevec:** Yes, thank you for having me. This has been really fun.
Elixir observability using PromEx_transcript.txt ADDED
@@ -0,0 +1,253 @@
1
+ **Gerhard Lazu:** Hey, welcome to the show. We have Alex today with us, Alex Koutmos. Some of you may know him from Beam Radio, for those that are listening. Elixir - you have Elixir Tips going; tip \#100 landed not long ago, right?
2
+
3
+ **Alex Koutmos:** I do indeed, yeah. And then I'm taking a small hiatus from Twitter tips regarding Elixir; but I will be back into it shortly, don't worry everyone.
4
+
5
+ **Gerhard Lazu:** Yeah. So Alex has been around the Erlang/Elixir community for some years now; I don't know how many...
6
+
7
+ **Alex Koutmos:** I think it's gotta be like six years now. I read Saša Juric's book "Elixir in Action" back in 2015, and I was hooked on the Beam since then. Yeah, I guess since 2015 I've been working on the Beam.
8
+
9
+ **Gerhard Lazu:** That sounds awesome. So the way I know you, Alex, is from the work that you've been doing on the Changelog app, which happens to be an Elixir/Phoenix/Erlang behind the scenes app. You've been doing some fantastic optimizations, especially with those N+1 queries. Thank goodness for that, because the website would be much slower without them...
10
+
11
+ **Alex Koutmos:** Oh, yeah.
12
+
13
+ **Gerhard Lazu:** Yeah. And those things didn't happen in a void, right? So you had this amazing library, which you just happen to have; I don't know how many libraries you have, but I'm sure you have a few... But this is prom\_ex, or Prom E-X, as I like to pronounce it, because of that underscore... PromEx - can you tell us a bit more about that, what that is, the library?
14
+
15
+ **Alex Koutmos:** \[04:14\] Sure thing. I guess the elevator pitch for PromEx is that you drop in this one library, you add it to your application supervision tree, and then you do some slight configuration, kind of like in an Ecto repo, where you slightly configure your repo, you slightly configure your PromEx module, and then you say "Hey, I want a metrics plugin for Phoenix, a metrics plugin for Ecto", I also have one for Oban, and LiveView... So you kind of pull in whatever plugins you want that are applicable to your project... And then that's literally it. That's all you have to do. And then you have Prometheus metrics for all the plugins that you configured, and then for every plugin that I write that captures Prometheus metrics there's also a corresponding Grafana dashboard that PromEx will also upload to Grafana for you if you choose to have PromEx do that. That's kind of like an end-to-end solution for monitoring. You can set PromEx up and get dashboards and metrics in five minutes.
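To make that concrete, here is a minimal sketch of the setup being described - the app and module names (`:my_app`, `MyApp`, `MyAppWeb.Router`) are hypothetical, and plugin options can vary between PromEx versions, so treat this as the shape rather than an exact recipe:

```elixir
# lib/my_app/prom_ex.ex - a hypothetical PromEx module for an app named :my_app
defmodule MyApp.PromEx do
  use PromEx, otp_app: :my_app

  @impl true
  def plugins do
    [
      # BEAM and application-level metrics
      PromEx.Plugins.Application,
      PromEx.Plugins.Beam,
      # library-specific plugins - pull in only what the project actually uses
      {PromEx.Plugins.Phoenix, router: MyAppWeb.Router},
      PromEx.Plugins.Ecto,
      PromEx.Plugins.Oban
    ]
  end
end

# lib/my_app/application.ex - start PromEx alongside the rest of the app
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      MyApp.PromEx,
      MyApp.Repo,
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```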
16
+
17
+ **Gerhard Lazu:** I really like that part, especially the Grafana dashboards. Sometimes it's just so difficult to integrate it just right, get the correct labels, get the correct things... What happens when there's an update? Then you'd have to update the Grafana dashboard. And the one really interesting thing that PromEx - I'm pronouncing it the way you're pronouncing it, Alex; it's your library, so you're the boss here... So PromEx - I like how it manages all aspects of metrics, all the way from the Erlang VM, all the metrics, not just Erlang metrics, but as you mentioned, all those libraries, all those components of an Elixir/Phoenix app... And end-to-end, including when you have new deploys.
18
+
19
+ **Alex Koutmos:** Exactly.
20
+
21
+ **Gerhard Lazu:** I felt those annotations were so sweet, because it basically owns the entire chain. It will annotate your Grafana dashboards when there are deploys. I felt that was amazing. Like, never mind managing them, which is super-cool, you also got annotations as to who deployed, and which commit was deployed. That was so cool.
22
+
23
+ **Alex Koutmos:** Oh, yeah. These have been pain points for me personally probably since like 2017, because I've been using Prometheus and Grafana for some time now... And I feel like every project I was doing the same boilerplate every single time, with the annotations and stuff like that. But even after I set up that boilerplate, I'd still have problems where it's like "Oh, look, a library maintainer updated their Prometheus package" and you've got some slightly different metrics. Now I have to manually know about that and then go pull down their JSON definition for the Grafana dashboard, and then I have to go onto Grafana, copy and paste it... Lo and behold, there's some slight label discrepancies... This churn all the time - there had to have been a better way.
24
+
25
+ I've been playing around with these ideas for probably a couple years now. PromEx is kind of that materialization of all those ideas. It's slightly opinionated; I feel like a good tool should have some opinions... If those opinions align with the library consumers, that's great. Else, maybe look elsewhere and see if some other solutions fit your problems better.
26
+
27
+ **Gerhard Lazu:** That's right. I remember your early days - I would say maybe the beginning of PromEx, when we were trying to figure out what dashboards were missing, and whether we could improve them slightly... So I remember us working together, a little bit - it wasn't a massive amount; just enough to make them nice. The integration was really nice. I remember when you added support for custom dashboards, which we do make use of, by the way... So we have some custom dashboards as well, that PromEx can upload for you. That was a great feature... So now we store our Grafana Cloud dashboards with the app, and PromEx updates them. So we have nice version control going on.
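A rough sketch of that dashboard side, with made-up file names - the built-in dashboards ship with PromEx, while the custom one would live in the app's `priv/` directory and be version-controlled with the code:

```elixir
# Still inside the hypothetical MyApp.PromEx module from above
@impl true
def dashboards do
  [
    {:prom_ex, "application.json"},
    {:prom_ex, "beam.json"},
    {:prom_ex, "phoenix.json"},
    {:prom_ex, "ecto.json"},
    # a hand-written dashboard, stored alongside the application code
    {:my_app, "custom_overview.json"}
  ]
end
```

And the upload itself is driven by config - the values below are placeholders, and the exact option names are worth checking against the PromEx docs for the version in use:

```elixir
# config/runtime.exs - point PromEx at Grafana / Grafana Cloud
config :my_app, MyApp.PromEx,
  grafana: [
    host: System.get_env("GRAFANA_HOST"),
    auth_token: System.get_env("GRAFANA_AUTH_TOKEN"),
    # push the dashboards above to Grafana when the app boots
    upload_dashboards_on_start: true,
    # draw app start/stop annotations on the uploaded dashboards
    annotate_app_lifecycle: true
  ]
```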
28
+
29
+ \[07:49\] And you heard that right, we do use Grafana Cloud. We used to run our own Grafana, but then it was much easier to set up Grafana Agent, scrape all the metrics, scrape all the logs from our apps, from all the pods, from everything; we have even the Node Exporter integration in the Grafana Cloud Agent. We ship all those things to Grafana Cloud, PromEx handles most of the dashboards for us, which is really cool, and we have that nice integration going from our infrastructure, which is running Kubernetes (implementation detail, I suppose). We have a really nice setup, all version-controlled, and PromEx handles a lot of the automation between the Grafana Cloud and our app... Or should I say the other way around - between our app and the Grafana Cloud.
30
+
31
+ So just to backtrack a little bit, all this was possible -- I think the beginning was the application. So Changelog.com, it's publicly available, freely available source code; it's a Phoenix application. That was an excellent idea, Jerod. I don't wanna say it's one of the best ones you've had, but it was a genius idea to do that. It was so good. And what that meant is that we were exposed to this whole ecosystem which is Erlang, Elixir, Phoenix, and there's so many good things happening in it.
32
+
33
+ So the app, Changelog, is running Phoenix 1.5 right now, Elixir 1.11, but 1.12 came out, so I'm really excited to try that out... And Erlang 23. But as we all know, Erlang 24 got shipped not long ago, and that is an amazing release. What gets you excited about Erlang 24, Alex?
34
+
35
+ **Alex Koutmos:** I think the biggest thing is probably the most obvious one, which is the just-in-time compiler that landed in OTP 24. That has some big promises in store for everyone running Elixir and Phoenix applications. I think a few months ago I was actually playing around with the OTP 24 release and I had a dummy Phoenix app... And I just hit it with an HTTP stress tester. It was a very simple app; I don't even think it had a database backend to it. It was literally just pass some JSON, get a response back. And there were measurable differences between the OTP 24 - I think it was release candidate 1 I was running at the time - and OTP 23. I was pretty impressed that even with a very simple Hello World style REST endpoint, you still saw some pretty big performance gains.
36
+
37
+ So I'm really curious to see people taking measurements in production with actual live traffic, and see what the performance characteristics look like for applications with the changeover.
38
+
39
+ **Gerhard Lazu:** Yeah. I mean, Changelog can definitely benefit from that. It would be great to measure by how much; I think that's one of the plans, to try -- now that OTP 24 is properly out, we had the first patch release land, and we also had just today, a few hours ago, thanks to Twitter and thanks to Alex, ARM support. ARM64 support for OTP 24 with the just-in-time compiler.
40
+
41
+ So for those that have tried it or would like to try it, and are wondering why - the performance increases are between 30% and 50%. So whatever you're running can be up to 50% faster, simply by upgrading to 24. And yeah, depending on how your code was compiled, it could be even higher. It depends on which optimizations you're picking up from OTP 24.
42
+
43
+ Okay, so how would someone using PromEx - how would someone figure out what is faster? So you have your app, your Phoenix app or your Elixir app... I'm imagining that PromEx works with Elixir as well; I don't have to have Phoenix. Is that right?
44
+
45
+ **Alex Koutmos:** Yeah. And the idea was to decouple the two. Because you might wanna grab Prometheus metrics on your application, but maybe it's like a key worker. There's not gonna be a Phoenix component there. But as we all know, Prometheus needs to scrape something over HTTP, unless you're using remote write. We'll get into that a little bit later.
46
+
47
+ So PromEx actually does ship with a very lightweight HTTP server, and it'll just serve your metrics for you. So you could very easily run PromEx inside of like a key worker, expose that one endpoint and have your Prometheus instance come and scrape it at its regular interval.
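As a hypothetical example of that standalone mode - the option names here are from memory of the PromEx docs and should be double-checked against the version in use:

```elixir
# config/config.exs - have PromEx start its own small HTTP server, so a
# Prometheus instance can scrape /metrics even when there is no Phoenix
# endpoint in the application.
config :my_app, MyApp.PromEx,
  metrics_server: [
    port: 4021,
    path: "/metrics"
  ]
```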
48
+
49
+ **Gerhard Lazu:** Yeah, that's right. And you expose metrics. Just metrics.
50
+
51
+ **Alex Koutmos:** \[12:10\] Yeah, for now it's metrics. Earlier you mentioned Grafana Agent, and the idea is to eventually ship that as part of PromEx. It will be like an optional download. So as PromEx is starting, if you configure it to push Prometheus metrics, you can have PromEx download the agent, get it up and running in a supervision tree... Then you don't even need to have PromEx serve up an HTTP server. You can push metrics directly.
52
+
53
+ I've actually used Grafana's cloud offering. It's quite nice, and it makes the observability story super nice, especially if you're running in Heroku, or Gigalixir, places where maybe you don't own the infrastructure end-to-end, and it's tough to have a Prometheus instance scraping your stuff over the public internet. So remote write, Grafana Agent - all super-exciting things, and hopefully coming soon to PromEx.
54
+
55
+ **Gerhard Lazu:** That's really interesting. So this is such an amazing piece of information, which I don't know how I've missed, but I'm glad that you've mentioned this... Because we were thinking a couple of weeks back, "How can we run the Changelog app on Render and have all the metrics and all the logs ship to Grafana Cloud, without having to set up something else that scrapes the metrics, and tails the logs, and then forwards them?"
56
+
57
+ So this is super-exciting, because you have metrics already. I am feature requesting logs, please, so that we can ship the logs as well using the Grafana Cloud agent, which I know supports them. And then the only thing remaining would be traces, which by the way, it also supports.
58
+
59
+ So we have metrics, logs and traces. That is a very special trio. Can you tell us a bit more about that, Alex? What are your thoughts on that special trio?
60
+
61
+ **Alex Koutmos:** We could start with the abstract and then we can work down into the technical nitty-gritty. So those three that you mentioned just happen to be the pillars of observability. All three of those are the pillars of observability. It's theorized that if you have all three of these pillars in your app, you've achieved the coveted observability, and all your SREs and your DevOps people in your organization will come and shake your hand, and all will be well in the world.
62
+
63
+ But jokes aside, the idea is that these three different types of observability tools yield different benefits for your application. So with logs, if you're capturing logs in your applications or your services, you can see in very nitty-gritty detail what's happening on every single request, what's happening if there are errors, if there are warnings, if you're having trouble connecting to other services... You get very fine-grained detail as to what's going on. This is super-awesome, and it's very helpful to have this very in-depth information.
64
+
65
+ The problem is that you can kind of be inundated by too much information, and it's very difficult to extrapolate higher meaning out of all this nitty-gritty detail. Then, if you've ever run like an ELK Stack and had to administer that, you know the pains of trying to index all this data.
66
+
67
+ Then you might say "Okay, let's only log what's important", and I'm sure people with production apps have had their DevOps people come to them and say "Hey, let's dial back the logging. It's a little too much, and Elasticsearch is just keeling over."
68
+
69
+ Then you reach for other tools, like metrics. Metrics eventually find their way into some sort of a time series database, and they're usually pretty efficient in comparison to logs, because they're more bounded. You have a measurement, you have a timestamp, and you have some labels associated with it. A little asterisk there, because that kind of depends on what your time series database of choice is. But that's kind of roughly speaking what goes into capturing time series data.
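In the Elixir world that "measurement plus labels" shape usually arrives as a `:telemetry` event, which is what PromEx and Telemetry.Metrics build on; a tiny, hypothetical example:

```elixir
# The measurements map carries the numeric values, and the metadata map is
# what typically ends up as labels once a reporter turns the event into a
# Prometheus sample; the timestamp is attached when the sample is scraped.
:telemetry.execute(
  [:my_app, :repo, :query],
  %{duration_ms: 42},
  %{source: "episodes", result: :ok}
)
```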
70
+
71
+ \[15:57\] So given that you've pared down what information you're capturing, you can store it a lot more efficiently, and it's a lot easier to query, and you can keep these for way longer periods of time. But the problem there is that you've now traded off high-fidelity logs for explicit metrics that you're capturing over time. Again, a trade-off, and there are different tools for the job, and you kind of reach for what's best at that particular point in time.
72
+
73
+ And then traces are kind of like a merger of the two, logs and metrics, where you can see how long your application is sitting in different parts of the application; if you're making external service calls, how long are you waiting for those external service calls... If you have something like Istio set up and you can track requests across services, you can see how long it takes to bounce across service A, B, C and D, and how long it takes to unroll and go all the way back to the original caller... And then again, you get some metadata associated with those traces, and timestamps, and stuff like that.
74
+
75
+ Again, all three of these are different tools, they have some overlap, but it's really a matter of picking the best tool for the job. It'd be nice if you have all three of those in your company or application, but in the real world it is tough to get all three of these stood up and running efficiently, and running effectively.
76
+
77
+ **Gerhard Lazu:** I really like the way you think about this, I have to say... There is something pragmatic about it, something like - you can have this within five minutes... But I also am very wary, because I've been following Charity Majors' Honeycomb and those perspectives for many years, and my understanding is that the only thing you should care about is events. And if you have a data store that understands arbitrarily-wide events, something that can query them just in time, at scale, then you don't have to trade off the cardinality constraints that metrics have, versus the volume of logs that is just too much, and the indexing, and how basically that happens behind the scenes. So it's the implementation that limits how you use those logs.
78
+
79
+ So I think that perspective is very interesting, and I will definitely follow up on that some more in the context of this show, of Ship It. But I'm also aware of where we are today - and when I say "we", I mean the Changelog app - what we have already set up, and that ideal, which is that everything is an event. I think whether we want to or not, I can see how we are going on the journey, maybe some are more frustrated, others are more enlightened, but I can see how events potentially have the answer to all these things. But right now, the reality is that we still have to make this choice between metrics or logs. Traces as well. They're like separate components. And I think that Grafana Cloud is doing a pretty good job with Cortex, which is a Prometheus that scales, basically, Loki, which is for indexing logs, and it's great to derive insights out of that, and Tempo, which I haven't used yet, which is for traces. But these are the three components in the Grafana Cloud that serve these three different functions.
80
+
81
+ I think it's very interesting to get to that tool which unifies them all, and Grafana Cloud could be it, but there are others as well. Now, I'm not going to go through all the names, because that's boring, but what is interesting is that we seem to be going in the same direction. And we may argue between ourselves whether the pillars of observability are a thing, or are just a big joke - different perspectives - but I think ultimately what really matters is being able to understand what is happening in your application, or what is happening with your website, or your service, or whatever. Unknown unknowns. I'm not going to open that can of worms... But the point being is "Do you understand what is happening?" It may be imperfect, it may be limited, but do you have at least an idea of where to look, where the problems are?
82
+
83
+ \[19:59\] And I do know that PromEx helped us or helped you with the N+1 queries. It was very obvious: "Hey, we have a problem in Ecto, and this is what that problem looks like, and this is how we fix it." And yes, we fixed it. "Does Erlang 24 improve things compared to Erlang 23, and in what way?" And we can answer those questions as well.
84
+
85
+ So I think that monitoring is not going anywhere, and I think everybody respects it for what it is... But we also are aware that there are better ways, and we should improve this. So with that in mind, where do you see PromEx going? What are the hopes and the goals for the project?
86
+
87
+ **Alex Koutmos:** Yeah, sure thing. So I'm gonna first address a couple points that you've made, and then I'll answer the question.
88
+
89
+ **Gerhard Lazu:** Sure.
90
+
91
+ **Alex Koutmos:** And this is just my own personal opinion. I don't see everything rolling up into one solution. I just don't think it's feasible at the moment. Like, would it be nice if everything was an event, and we could easily search it, and everything is hunky-dory? I think everyone would agree that yes, that would be great. And I think we've tried this in the past - stuff everything in ELK, write some nice regex expressions, and extrapolate metrics from those regex expressions from your Elasticsearch database. For organizations that have gone down that route, it's extremely painful.
92
+
93
+ I think for now, for the foreseeable future, having those explicit tools for explicit purposes I think makes sense, just because they're very different problems that are trying to be solved, and trying to have one unifying tool that does all the things I don't think will pan out well.
94
+
95
+ But I do like the approach that Grafana is taking, and the observability community in general, where they're trying to provide bridges from one pillar to another. A perfect example is exemplars in Prometheus, where your Prometheus metrics can have an exemplar tag on them, and it'll effectively say "Hey, this metric data point is applicable to this trace." And you can kind of jump and say "Okay, something weird is happening here in the metrics. I'm getting a ton of 500's. Let me look at an exemplar for that 500." You can click through and you can kind of shift your focus from metrics and go to traces, but still have that context of that problem that I was having 500s.
96
+
97
+ So I like that approach better, where you can bounce between the different pillars of observability, but still have the context of "I'm trying to solve this problem. What is going on at this moment in time?" I like that approach. Again, that's just my personal opinion.
98
+
99
+ And to that end - and I'll go back to your original question now - I would like to get PromEx to a point where it does take into account things like traces, and you could use exemplars... And if Grafana Agent's incorporated into PromEx, you could very easily use Syslog and export logs from your application via Syslog to Grafana Agent, and then those find their way to Loki... So I don't wanna tailor PromEx solely to Grafana, but I do see that Grafana is offering a lot of tooling that is very powerful, and I would love to leverage it. Hopefully that answers the question there.
100
+
101
+ **Gerhard Lazu:** I think that's a very interesting perspective. I love that.
102
+
103
+ **Break:** \[23:11\]
104
+
105
+ **Gerhard Lazu:** That was a really interesting point that you've made, Alex, just before the break, and I would like to dig into it a little bit more. I would like to hear more about PromEx, the hopes and goals, because I think there's more to unpack there... But I find it very interesting how the exemplars that you have in metrics, how they link to traces. You've mentioned something very interesting about logs, and how a lot of information can be derived from them if the logs are in the right format.
106
+
107
+ In our Changelog app, just to give that example, we have a lot of logs - actually, most logs are still in the standard, unstructured format. So you have long lines of text, and that's okay, but that's where the regexes are needed, to extract meaning from those lines.
108
+
109
+ So the thing which I've found to work a lot better - for example with Ingress NGINX, which we also run - is to use JSON logging. So we put all the different information, which you can think of as metrics, in that one very wide event which is the log line.
110
+
111
+ For example, status 200, how many bytes, how long it took, what the referrer was, stuff like that. And that information, when it ends up in Loki, writing LogQL queries, which are very similar to PromQL queries, makes it easy to derive graphs, which we would typically get from metrics, from your logs.
112
+
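+ As a rough sketch of that idea (the field and label names here are made up, not the actual Changelog configuration) - a JSON access log line, and a LogQL query that derives a per-status request rate from such lines once they are in Loki:
+
+ ```
+ {"status": 200, "bytes_sent": 5120, "request_time": 0.043, "referrer": "https://changelog.com/podcast"}
+
+ sum by (status) (
+   rate({app="ingress-nginx"} | json [5m])
+ )
+ ```
+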
113
+ So then the boundaries between metrics and logs are blurry. You don't really know whether "Was this a log, or was this a metric?" Does this really matter? It's what your understanding is from metrics and logs.
114
+
115
+ So that makes me wonder, how are logs and metrics different if you use logs as JSON, and you have this arbitrarily wide metric, if you wish - because it's a kind of metric, right? You have all these metrics like status, as I said, bytes, time taken - all those are metrics, and they all appear in a single line. So what is the difference then between the metrics that you get in Prometheus, which have a slightly different format, and the value is at the end, and then you have many metrics that you may put together, like for example for samples or summaries... But in logs they're slightly different, and yet the end result is very similar. What are your thoughts on that?
116
+
117
+ **Alex Koutmos:** Yeah, I think in the spirit of just-in-time/JIT, I think that's effectively what we're doing with logs when we try to extrapolate the metrics out of them - we throw this event into the ether with a whole bunch of data associated with it. Maybe we don't know what we wanna do with it at the end, but given that that event is in the database, we can extrapolate some metrics out of it. So we're just-in-time kind of getting some metrics out of that log. You could go down that route.
118
+
119
+ I think that for some scenarios that may be your only option. Let's say you're running an external service, and all it's giving you is structured logs out. There's no way to tie in maybe an agent inside of there, or get internal events and hook in your own Prometheus exporter... For some scenarios, that may be your only option. And then I think that's a valid use case. Read the structured logs, and generate some metrics out of them.
120
+
121
+ But for when you can control those things, I think storing them in a time-series database will be beneficial for the team, because it's less stress on the infrastructure, it'll be far more performant... So that's, again, a bit of a trade-off there as to what route you go down.
122
+
123
+ **Gerhard Lazu:** That's interesting. Okay. So PromEx - big on metrics. Maybe logs? Are you thinking maybe logs?
124
+
125
+ **Alex Koutmos:** \[28:07\] Perhaps... I think the extent of the log support out of PromEx will be just the shipping mechanism, given that the plan is to have Grafana Agent as part of PromEx's optional download. You can target that Grafana Agent for exporting logs to Loki. But I don't think PromEx will transform into a library where it also provides structured logging mechanisms. I think there's some good stuff already built into the Elixir logger on that front... But that's not a problem I'd like to tackle in the PromEx library.
126
+
127
+ **Gerhard Lazu:** Okay, that makes sense. What about events?
128
+
129
+ **Alex Koutmos:** So like traces, for example?
130
+
131
+ **Gerhard Lazu:** I'm thinking events we have from the Erlang library and the Erlang ecosystem. It's very rich, in that it can expose all sorts of events, and I think this is where we are touching on the OpenTelemetry and the sort of things that the Erlang and Elixir ecosystem have going for them, which I think is a very good implementation, a very good story around telemetry.
132
+
133
+ **Alex Koutmos:** Yes, yes. So let's rewind a little bit out of PromEx and talk about what you're hinting at here... So there are a couple projects in the Elixir and Erlang ecosystem. OpenTelemetry as far as I understand right now is an implementation of the OpenTelemetry spec. I think it's solely just for tracing. I think even that library, so OpenTelemetry, builds upon another Elixir and Erlang library called Telemetry; that lives in a GitHub organization - I think it's beam-telemetry. But that library, Telemetry, offers library authors a way to surface internal library events to whoever is using that library. It's completely agnostic as to how you structure these things, aside from you capturing some measurements associated with that event and some metadata. That's pretty much it.
134
+
135
+ So every library can surface events, and you as the consumer of that library can say "Okay, I wanna pull out these measurements from the event, and maybe this metadata from the event." A perfect example would be the Phoenix web framework, which will surface an event when it's completed a request, when it's serviced a request. And inside of that event it'll have a measurement for how long it took to service that request, so that'll be your duration... And then the metadata may be the route that the person hit, or the response status code, the length of the response payload etc. And then if you choose to hook on to that telemetry event, you can use all that data. If you don't hook on to that event, it's effectively like a no-op. So you're not losing any performance per se here.
136
+
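+ A minimal sketch of hooking into that Phoenix event directly with the telemetry library (the handler id and the printed fields are arbitrary; PromEx does something similar, but turns the measurements into Prometheus metrics instead of printing them):
+
+ ```elixir
+ :telemetry.attach(
+   "log-request-duration",           # unique handler id
+   [:phoenix, :endpoint, :stop],     # event emitted once a request has been serviced
+   fn _event, measurements, metadata, _config ->
+     ms = System.convert_time_unit(measurements.duration, :native, :millisecond)
+     IO.puts("#{metadata.conn.request_path} -> #{metadata.conn.status} in #{ms}ms")
+   end,
+   nil
+ )
+ ```
+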
137
+ That's effectively how PromEx works. All these libraries that I attach to are emitting these telemetry events. I just so happen to hook into all these telemetry events, and then generate Prometheus metrics out of them.
138
+
139
+ I think the story there in Elixir and Erlang is very unique, because the ecosystem has kind of said, "Okay, we're all gonna use these foundational building blocks." And I think -- the last time I looked on hex.pm, I think there were like 140 libraries using telemetry, which means now across the ecosystem we have this ubiquitous language for how do we surface internal events in our libraries... Which is very powerful, because now I don't need to learn how Phoenix exports events, and how Oban exports events, and how Ecto exports events... It's all the same thing; I just need to hook into an ID for what that event is, and I'm off to the races at that point, and I can capture any information that I like.
140
+
141
+ **Gerhard Lazu:** \[31:45\] That explains why PromEx was such a -- I wouldn't say straightforward, but almost like it was obvious how to put it together. It was obvious what users want and need, because you have all these libraries that expose these events; they're there, you can consume them. So Ecto this week, Oban next week... I'm simplifying it, a lot, but roughly, that's how you were able to ship support for all the different libraries, because they all standardized on how they expose events. Is that a fair summary?
142
+
143
+ **Alex Koutmos:** Yeah, that's exactly right. It is quite a bit simplified...
144
+
145
+ **Gerhard Lazu:** It's an oversimplification, of course.
146
+
147
+ **Alex Koutmos:** Because a lot of times I'll sit down to write a PromEx plugin, and as I'm writing the plugin, I'm like "Hm, I need some more data here." So I'll make a PR to the library author, and say "Hey, I think we need some additional metadata here, some additional measurements", and then we have to go through that PR cycle, and I have to wait for a new release to get cut, and then I have to make the Grafana dashboard... So there's a good amount of work. But yeah, effectively, that's it - see what events that library emits, hook into them, convert them into meaningful Prometheus metrics, make the Grafana dashboard, and then ship it.
148
+
149
+ **Gerhard Lazu:** That's a good one, actually. I like that, especially the last part. Especially the ship it part.
150
+
151
+ **Alex Koutmos:** Yeah, I thought you'd like that.
152
+
153
+ **Gerhard Lazu:** Okay. So you have all these events... So I'm wondering if - you're ingesting events, you're translating them into metrics... Is there a point where you could just expose those events raw, and then something like for example Honeycomb, which loves events, could just consume them. I think that's how the Honeycomb agent, in some languages, works. They just expose the raw events.
154
+
155
+ **Alex Koutmos:** I'd have to play around with that and see... Some of these events have a lot of metadata associated with them. Again, let's say that Honeycomb is infinitely scalable, and it doesn't take any compute time - yeah, sure thing; just dump a couple thousand lines of metadata per event into Honeycomb. But yeah, I'd have to play around with Honeycomb specifically to see if that's even possible.
156
+
157
+ **Gerhard Lazu:** I'm also fascinated by it, because I think the take is very interesting, and I can see the uniqueness, I would like to understand it more, how they make that possible, for sure... And the challenges -- I mean, if they pulled it off, which apparently they have, that's impressive. And I think it takes an understanding of how complicated these layers are, just to understand what a feat that is in itself. So that's interesting...
158
+
159
+ So we have telemetry, we have PromEx, you mentioned plugins... Is there anything specific that you would like to add to PromEx next, anything that users are maybe asking for, anything that you would like to ship, which you know would be a hit?
160
+
161
+ **Alex Koutmos:** Yeah, so aside from Grafana Agent, which I think some people are excited about...
162
+
163
+ **Gerhard Lazu:** I am. Big fan. Please...
164
+
165
+ **Alex Koutmos:** \[laughs\] So one thing I forgot to mention was -- so in addition to supporting all these first-party plugins and Grafana dashboards (and you kind of hinted at this before), users of PromEx are encouraged to make their own PromEx plugins and their own Grafana dashboards... And those plugins and dashboards are treated identical to how the first-party things are. So you're able to upload those dashboards automatically on application init, your events will be attached automatically... So all those first-party plugins are kind of dogfooding the architecture. I wanted to see how easy it was to create plugins and dashboards and have them all kind of co-exist together.
166
+
167
+ So the idea is that you use PromEx for all the shared libraries in the ecosystem, and then you write your own plugins and Grafana dashboards for things that are specific to your business, that obviously are not gonna be supported in PromEx. So that's one thing I forgot to touch on. And then what was the original question?
168
+
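+ In practice that mix of first-party and custom plugins ends up in the application's PromEx module - roughly like this, going by the PromEx docs (the app and the custom plugin module name are made up):
+
+ ```elixir
+ defmodule MyApp.PromEx do
+   use PromEx, otp_app: :my_app
+
+   @impl true
+   def plugins do
+     [
+       # first-party plugins shipped with PromEx
+       PromEx.Plugins.Application,
+       PromEx.Plugins.Beam,
+       {PromEx.Plugins.Phoenix, router: MyAppWeb.Router},
+       PromEx.Plugins.Ecto,
+       PromEx.Plugins.Oban,
+       # a hypothetical business-specific plugin with its own dashboard
+       MyApp.PromEx.Plugins.Orders
+     ]
+   end
+ end
+ ```
+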
169
+ **Gerhard Lazu:** I was asking if there are any specific libraries that you are looking to integrate with. And I'm looking at the available plugins list, and I can see which ones are stable. This is, by the way, on github.com/akoutmos/prom\_ex. And there's a list of available plugins. A bunch of them are stable: Phoenix, Oban, Ecto, Phoenix-Bream, and the application... And then some are coming soon, like Broadway, Absinthe... I'm not sure whether I'm pronouncing that correctly...
170
+
171
+ **Alex Koutmos:** Yeah, yeah. Just like the booze.
172
+
173
+ **Gerhard Lazu:** Right. I don't know... I really don't know.
174
+
175
+ **Alex Koutmos:** \[36:17\] Yeah, me neither.
176
+
177
+ **Gerhard Lazu:** Okay.
178
+
179
+ **Alex Koutmos:** So Broadway - that plugin is more or less done. I've made some changes to Broadway itself, and those changes were accepted and merged into the Broadway project. I don't think there's been a release cut as of us recording right now. So that plugin is kind of on hold until a release gets cut, and then I can kind of say that PromEx depends on this version of Broadway, if you choose to use the Broadway plugin... Because I added some additional telemetry events.
180
+
181
+ **Gerhard Lazu:** The idea is to get Broadway wrapped up. For those who don't know what Broadway is - it's a really nifty library where you can drop it into your project and you could read from various queue implementations, and it takes care of a lot of the boilerplate in setting up a concurrent and parallelized worker. So you can read from Rabbit, and you can configure "Hey, I want 100 Beam processes reading from Rabbit at the same time and processing the work from there." I think it supports Rabbit, Kafka, and I think Redis as well.
182
+
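+ A rough sketch of what dropping Broadway into a project looks like (the module, queue name and concurrency value here are made up):
+
+ ```elixir
+ defmodule MyApp.Pipeline do
+   use Broadway
+
+   def start_link(_opts) do
+     Broadway.start_link(__MODULE__,
+       name: __MODULE__,
+       # read from a RabbitMQ queue...
+       producer: [module: {BroadwayRabbitMQ.Producer, queue: "events"}],
+       # ...with 100 concurrent Beam processes doing the work
+       processors: [default: [concurrency: 100]]
+     )
+   end
+
+   @impl true
+   def handle_message(_processor, message, _context) do
+     # process the message here
+     message
+   end
+ end
+ ```
+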
183
+ But yeah, Broadway is on the list... And then Absinthe is on the list after that, because that's the Elixir GraphQL framework. So that seems to be pretty popular. Yeah, after those two are wrapped up, I'm just gonna go on hex.pm, see which one has the most downloads after that, and just -- think of that as a priority queue. Whatever libraries have the most downloads and are the most popular, just make plugins for them, as long as they support telemetry.
184
+
185
+ **Gerhard Lazu:** That makes so much sense. Of course. The way you put it, it's obvious. What's the most popular? That thing. Okay... Well, that will have the most users and will be the most successful, and people will find it the most useful. So yeah, that makes perfect sense. I like that. Very sensible.
186
+
187
+ **Break:** \[38:00\]
188
+
189
+ **Gerhard Lazu:** So one of the things that we wanted to do - I think we were mentioning this towards the beginning of the show... We were saying how Erlang 24 just shipped. It was a few weeks ago, the final 24 release. We have the first patch release... And we wanted to upgrade the Changelog app to use Erlang 24. So here's the plan... By the time you're listening to this, either next day or a few days after, we will be performing a live upgrade on the Changelog.com website, from Erlang 23 to Erlang 24. We have PromEx running, we have all the metrics, and we will see live what difference Erlang 24 makes to Changelog.com.
190
+
191
+ \[39:40\] PromEx is obviously instrumental, all the metrics and all the logs get shipped to Grafana Cloud, so that's how we will be observing things, and we will be commenting on what is different, what is better, what is worse. So with that in mind, I'm wondering if there are any assumptions or expectations that we can set ahead of time. What are you thinking, Alex?
192
+
193
+ **Alex Koutmos:** Yeah, so I've been thinking about this for a little while... Because measuring things before and after changes - it just excites me, to see that you've made a change and you have some measurable differences between how it was before and how it is afterwards. So I've been thinking about this, and some of my hypotheses are that memory usage will go up slightly, because that interpreted code that was compiled to native needs to be stored somewhere. So memory usage will go up slightly... And then I imagine most things CPU-bound will be sped up. So serializing and deserializing from JSON, serializing and deserializing from Postgres database - all these things, we should see a considerable change in performance. Those are kind of top of mind at the moment. How about you?
194
+
195
+ **Gerhard Lazu:** I'm thinking that the end result that the users will see, because of those serialization speed-ups, is a lower latency. So responses will be quicker. Now, if you have listened to the Changelog 2021 setup, you will know that if you're accessing Changelog, you're going through the CDN. So every single request now goes through Fastly. And what that means is that the responses are already ten times faster, or maybe faster still. So your responses are served within 50 milliseconds; that's what the Grafana Cloud probes are telling us.
196
+
197
+ So the website is already very fast, because it's served from Fastly. What we will see, however - we have probes that also hit the website directly. So expect the response latency, if you go directly to the backend - or to the origin, as the CDN calls it - it will be slightly lower. I also expect the PostgreSQL - maybe not the queries necessarily, but the responses, as you mentioned, Alex, because of the serialization, to be slightly faster. So I would expect the data from the database to load quicker. And that will also result in quicker response time to the end users.
198
+
199
+ I'm very curious what happens with context switches. Are we going to have fewer context switches, so less work on the CPU, or more? Obviously context switches are not just like the work the CPU does, but I think things will be a lot less work to do, so fewer context switches. CPU utilization - I think it will go slightly down, but right now we don't have to worry about that because we have 32 CPUs. All the AMD EPYCs, the latest one - thank you, Linode; those are amazing. Everything is so much quicker. And we have the NVMe SSDs... Everything is super-quick. But yeah, for more, listen to the 2021 Changelog setup where we cover some of these. And I think the blog post will come out.
200
+
201
+ That's what I expect to see... So will it make a difference for the users? I don't think it will, because they have the CDN. So everything is already super-quick, as fast as it can be. You have TLS optimizations, you have data locality of all the good stuff, because the CDN just serves requests from where you are.
202
+
203
+ For the logged in users, because obviously those requests we can't cache, things will be slightly quicker. So for Adam, for Jerod, whoever is working on the admin, those things will be quicker.
204
+
205
+ Another thing which I do know that we do - we do background processing on some of the S3 files, the logs and stuff like that... So expect those to be quicker. But I don't know by how much. I think we're using Oban for that, aren't we, Alex?
206
+
207
+ **Alex Koutmos:** Yeah, we're using Oban. I think Oban was set up just to send out asynchronous emails. I don't know if there was any other work being done by Oban. But now that you mention those things, we probably should have metrics in place to capture those S3 processing jobs, see how long they take pre and post OTP 24.
208
+
209
+ **Gerhard Lazu:** Yeah, that's right. That's a really good one. That'll be a great one to add. Okay, I'm really looking forward to that. And if you've listened to this, you can watch it live. And if you haven't, that's okay; you'll see it on Twitter. We will post. Maybe we'll even do a scheduled livestream. Does that make sense for you, Alex? What do you think?
210
+
211
+ **Alex Koutmos:** Yeah, it works for me.
212
+
213
+ **Gerhard Lazu:** \[44:06\] Okay. So no impromptu. We'll schedule it and we'll say "On this time, at this day, at this hour." Okay, I like that. That's a great idea, actually. So we'll have like at least a few days of heads up, and then you can listen to this, and then you can watch that, how we do it. Great. That makes me very excited. Okay.
214
+
215
+ So we're approaching the end, and I think we need to end on a high... Because it's Friday when we're recording this, it was a good week, and the weekend is just around the corner... So what do you have planned for this weekend, Alex? Anything fun?
216
+
217
+ **Alex Koutmos:** This weekend... I think I have one thing I wanna do in PromEx, but then I'll be building a garden. So I'll be outdoors, using the table saw, and the miter saw, and the nailgun, and putting together some nice garden beds.
218
+
219
+ **Gerhard Lazu:** Okay, well that sounds amazing. You have to balance all the PromEx and all the Erlang/Elixir work somehow, right?
220
+
221
+ **Alex Koutmos:** Oh, yeah. You need to find a healthy balance between open source work, the full-time job, and a little bit of fun for yourself.
222
+
223
+ **Gerhard Lazu:** Yeah, that's for sure. So building a garden - that sounds amazing. You must be either very good or very brave, I'm not sure which one. Either a great DIYer, or very brave, you'll figure it out. Which one is it?
224
+
225
+ **Alex Koutmos:** I don't wanna be arrogant or anything, but I think I'm a decent DIYer. I also used to tinker around with cars quite a bit before I had a family... When it was okay to be financially irresponsible and buy a $3,000 motor just because I felt like it. Nowadays you can't do that... \[laughter\]
226
+
227
+ **Gerhard Lazu:** Okay, different times... Right?
228
+
229
+ **Alex Koutmos:** Yeah, exactly.
230
+
231
+ **Gerhard Lazu:** Different world.
232
+
233
+ **Alex Koutmos:** I could buy a motorcycle anytime I wanted to. I didn't have to worry about providing for my kiddos. I go with safe hobbies, like building garden beds or doing some woodworking.
234
+
235
+ **Gerhard Lazu:** Okay, that sounds great. So I hope the weather is going to be great, because for me, the weather has been rubbish for the whole week. Windy... I wouldn't say it's cold, but it's not nice; it's been raining all day every day, we had some downpours as well... So it hasn't been really great. And right now I'm looking at it like -- I was going to do a barbecue; I love barbecuing, the proper charcoal one... But the weather is not good. Maybe we get the parasol out, so it doesn't rain on my barbecue regardless, maybe... I don't know. But what we have to do is post the pictures. Because how can people appreciate how good of a DIYer you actually are if they don't see your work?
236
+
237
+ **Alex Koutmos:** Well played, sir. Well played. I'll have to take some selfies. I usually stray from the selfies... \[laughs\]
238
+
239
+ **Gerhard Lazu:** And videos. Those are very important, because if you don't take videos, someone else could be doing the work and you just take pictures. No... That would never happen, right? Only in movies. \[laughter\]
240
+
241
+ **Alex Koutmos:** Never, never.
242
+
243
+ **Gerhard Lazu:** Alright, Alex. Well, it's been a pleasure to have you on the show. I really enjoyed this. I'm looking forward to doing what we said we will do. That's super exciting. Shipping Erlang 24 for Changelog.com - that'll be great. And which version of PromEx are we at now? Do you know which one is the latest?
244
+
245
+ **Alex Koutmos:** I don't remember... I think 1.1.0 is the latest... And I think the Changelog is on 1.0.1.
246
+
247
+ **Gerhard Lazu:** Right. So not that far behind, but...
248
+
249
+ **Alex Koutmos:** Yeah, we'll bump it up.
250
+
251
+ **Gerhard Lazu:** That's great, okay. So we shipped that. That is exciting. Ship a garden in the meantime as well; maybe a barbecue. We'll see. This has been tremendous fun. Thank you, Alex. Looking forward to the next time.
252
+
253
+ **Alex Koutmos:** Likewise, thank you.
Find the infrastructure advantage_transcript.txt ADDED
@@ -0,0 +1,375 @@
1
+ **Gerhard Lazu:** Well, hi, Zac. I've been looking forward to this for a really long time... Summer of 2019, specifically. Welcome, and thank you for making this happen.
2
+
3
+ **Zac Smith:** Well, Gerhard, it only took us a year and a half, but we're ready now.
4
+
5
+ **Gerhard Lazu:** Yeah. The last year didn't count. It was a crazy one, right?
6
+
7
+ **Zac Smith:** Exactly.
8
+
9
+ **Gerhard Lazu:** I have many questions, but I'll start with this one... Were you at KubeCon, by any chance?
10
+
11
+ **Zac Smith:** This past -- in October was it?
12
+
13
+ **Gerhard Lazu:** Yeah, the North America one.
14
+
15
+ **Zac Smith:** No, I wasn't there. We have a great team there, and we were doing our cloud-native cookbook. I'm not sure if you've got a copy.
16
+
17
+ **Gerhard Lazu:** I didn't, no.
18
+
19
+ **Zac Smith:** Yeah, we decided to organize an open source cookbook that we all did during the pandemic, which was - you know, we were all stuck at home, doing something, and so we got, you know, I'm gonna call it cloud-native luminaries to give us their favorite recipe, and we made a physical cookbook. It's available on GitHub, so if you wanna add your recipe...
20
+
21
+ **Gerhard Lazu:** Okay...
22
+
23
+ **Zac Smith:** So that was the big giveaway. I couldn't go, I was unfortunately busy with something else... But we did have a pretty good team there; it was a great turnout. Really nice to see people come together.
24
+
25
+ **Gerhard Lazu:** Right. Okay... So that cookbook sounds great. The fact that you weren't there - it's okay; I missed things, but I didn't miss you, so I'm feeling better not being there in person... That would have been a disappointment.
26
+
27
+ Do you typically go to KubeCons, by the way? Do you have time for KubeCons?
28
+
29
+ **Zac Smith:** I mean, I used to in the past. Now it's a little bit different. I've gone from being -- let's call it a CEO of a startup, a company called Packet, which I ran for many years... To now being a busy executive at a Fortune 500 company, which - you know, I have a little bit different set of responsibilities, and part of that is with customers in the field, but a lot of that is also internal and part of our strategy... And you know, I'm gonna call it corporate functionality.
30
+
31
+ \[04:02\] So I haven't been to any conferences over the past few years, but I used to go regularly. I was on the road 2-3 weeks a month, including conferences. To me, conferences have always been -- especially things like KubeCon, and earlier I remember being at DockerCon in Barcelona in 2015... And so the best part about these conferences to me is the hallway track; I just love seeing and meeting and hearing from people -- you just get that pulse on what's going on when you can go around that hallway track and see what people are talking about. So to me, that was always my favorite part of going to conferences.
32
+
33
+ What was my other favorite one...? CoreOS Summit, Monitorama, that was a good one... KubeCon, yes... You know, I didn't ever go to like the Gartner IT Summits those things weren't my gig.
34
+
35
+ **Gerhard Lazu:** Okay. So you're right, that's one of the things which I missed the most about being there in person... So even though I did attend this KubeCon, it was a virtual attendance... But I know what you mean; your responsibilities change, and things are a bit different. You're trying to be there as much as you can in spirit, if not in person... But you're there, because I've seen some pictures you were retweeting from KubeCon. That's why I was thinking maybe you were there.
36
+
37
+ **Zac Smith:** Oh, we had all of our spies. I think about maybe 15 people from the Equinix team went to KubeCon... So it was good. And you know, my favorite conference ever - I don't know if you have a favorite... My favorite was always the ARM Tech Conference.
38
+
39
+ **Gerhard Lazu:** Interesting.
40
+
41
+ **Zac Smith:** The reason why I love the ARM Tech Conference - because it was 100% hallway track. So they would do one kick-off meeting in the beginning, some kind of keynote thing, and then they make you all go away, and they would take over Clare College in Cambridge, and they would take over the professors' rooms, and you would each have a minder from ARM, and then they would just set up speed dating between all of the attendees, and you would do half an hour or 20-minute meetings, and then you'd all switch. And then you'd switch again. So it was basically just hallway track. It was so cool.
42
+
43
+ **Gerhard Lazu:** That's amazing. I think that sounds a little bit like Priyanka's happy hour at KubeCon... But yeah, I really enjoy that format. I know what you mean.
44
+
45
+ Okay, so I have been a fan of Equinix Metal for a really long time... And actually, it's been so long that it was called Packet. So it's been many, many years. And I've already shared my perspective why in episode 18, with Marques Johansson and David Flanagan, Rawkode... So there's nothing else to add from my side. But I'm wondering, how was the transition for you from Packet to Equinix Metal, besides you not being able to go to conferences which we already got? \[laughs\]
46
+
47
+ **Zac Smith:** Without having to change my T-shirts?
48
+
49
+ **Gerhard Lazu:** Yes, that as well. \[laughs\]
50
+
51
+ **Zac Smith:** Yeah, that's a great question. There's a lot of emotion built into that for me. As a founder, you spend years kind of thinking of something, dreaming/working on it, putting your soul into it, and then in my case - we were acquired by a great company, Equinix, and your role changes. It's no longer this thing, especially as a founder/CEO, which I was kind of the leader of that, along with my colleagues, and obviously the whole team... But you know, there was a lot of personality built into Packet. Packet was very much a reflection a little bit of things and values that the founders cared about. So that is different when you then go into a much more established business, and you have to figure out -- it's certainly a totally different challenge in how to meld values systems, culture, in a brand, obviously... You know, your customer make-up and how you engage and whatnot, and just the pulse of how you run your business.
52
+
53
+ So for me, that was one of the bigger shifts, was just going from -- I mean, Packet was 150 people at its biggest, and we were very much focused and built around speed... How do we find product fit, how do we service our customers, how do we listen...? Because we weren't market-leading anything; we were just trying to prosecute a vision and a mission around making hardware automated for developers. That was as simple as it got. And where that would take us, we weren't even exactly sure.
54
+
55
+ \[08:11\] And Equinix is a much different business; we're well over 10,000 people, we have 23 years in business, 10,000 customers... It is big, and it has a robust and strong culture of its own. So that was a big shift, just moving from kind of -- I'm gonna call it the upstart, forward-thinking, future-driven startup, to a market-leading Fortune 500 business, and then figuring out my role within that.
56
+
57
+ And then of course there was -- I'm gonna call it personal/emotional ties. I've mentioned this to a few other people recently, but this is the second business that I had sold. The first business of mine - I joined a gentleman by the name of Raj Dutt in the early 2000's, a company he had started called Voxel, which we then sold to a public firm called Internap, back in 2011. And I was much younger at the time, and I had never done that before, and frankly, I didn't deal with it very well. You know, you're taking something that you had so personal, and then suddenly we sold the business, and then I still was taking it very personally. And I wasn't ready to deal with it.
58
+
59
+ Raj went off and founded a company called Grafana. I started Packet... And what I did is during this transaction, when we knew we were gonna sell the business, the first thing I did - I talked to my brother Jacob and I said "Man, we're gonna have to get ourselves a therapist." Because a lot of this is just dealing with the emotions of a founder... Because I knew we would change our name, and things that were special to us would not be important or the right things for Equinix, and things like that... And sometimes you could take that very personally.
60
+
61
+ So I think my experience, the first kind of go-around helped me to, in one way, be a little prepared for it, and in the other way just know that I was gonna go through it. So last year, when we changed from Packet and rebranded the business as Equinix Metal, it was still a journey, and I kind of take a little bit of pride that, you know, I'm gonna call it thousands of people throughout the industry still call it Packet, and won't be able to replace the words in their mouth... So I was like, "Okay, fine...", because our brand was meant for something that is important to people.
62
+
63
+ That's the other thing... So one, your role changes dramatically from what you're doing and where you're at, and two, you've gotta deal with some stuff as the founder, around maybe the mission that you're on, or the reflection of that for yourself, and help channel that energy in a positive way. So those were, I would say, the two biggest things.
64
+
65
+ Of course, there have been other things, which are both opportunities and not related to our product and our capabilities, and our scale, and all kinds of other things... But those are the ones that are most personal to me.
66
+
67
+ **Gerhard Lazu:** Okay. So there's a follow-up question, but first I have to ask another question, which is linked to what you said. It was very comprehensive. Thank you very much. The precursor is "Why do you do what you do?"
68
+
69
+ **Zac Smith:** Like the big Why, or the little Why?
70
+
71
+ **Gerhard Lazu:** The big Why.
72
+
73
+ **Zac Smith:** Yeah, I mean - my wife asks that to me pretty often... So a few things. I can't give you one answer, but I would say that I love creating things, for sure. I love being involved in that, I love leading it, I love tackling unsolved problems... Just building. So I am a native builder, and you can kind of tell with my personality type, I'm pretty action-oriented, I'm curious, I wanna kind of unravel and understand, and then I wanna do something about it. If I identify a challenge or a problem -- my wife hates it, because every time she complains to me about something, I try and fix it. And she's like "I'm not trying to have you fix it!" I'm like, "I know, but--"
74
+
75
+ **Gerhard Lazu:** "Just listen to me, damn it!" \[laughs\]
76
+
77
+ **Zac Smith:** Yeah, she's like, "No, I just wanna --"
78
+
79
+ **Gerhard Lazu:** I know how it goes, yeah. I know. I can relate to that.
80
+
81
+ **Zac Smith:** So that would be, I think, why I'm an entrepreneur, and why I have that spirit to create things... And it involves -- I invested in some companies, I like to help other founders... You know, I always am interested in that creation aspect, and that really kind of -- I'm gonna call it "satiates" a need within my own mind, which I'm always just a very curious person.
82
+
83
+ \[12:05\] And then the other one, which is like "Why this?" After Internap, I kind of vowed not to play in the world of internet infrastructure. I was like, "I'm gonna go get myself a real job, something that isn't 24/7, with all the challenges of our plumbing world of the internet..." And of course, two years later I started Packet, and I said "Nah, I wanna work on internet infrastructure, and build a better internet." \[laughter\] And why did I do that? First and foremost, I really believe in the foundational capabilities that we were able to provide, and I think still provide... And why do I believe that? Because I think technology really does best, and our innovation around technology does best with diversity. And that's not just diversity in people, and diversity in thought, but diversity in businesses that can take advantage of technology. So I kind of saw that there was a real need to provide access to fundamental technology to an incredibly diverse set of users and projects and companies; that if we didn't work on making it easier to consume hardware no matter what it was, where it was, or what you put on top of it, that part of that -- you know, that messy part of innovation, where the magic of software and hardware come together would go away... Or at least would become much more bespoke, and I'm gonna call "unavailable" to most people. So that was kind of one of the Why's.
84
+
85
+ And then the other big Why is that I really firmly believe in -- basically, you could kind of say, with all the challenges we have, and I think this week is a big climate change summit going on in Europe, and you can kind of say that some of our biggest challenges today can be looked at as something where we could go back and reduce and maybe even think up a world where we use a lot less technology... Technology being computers, but technology even being like cars, or something.
86
+
87
+ Or we could think of a forward way, where we figure out how to use technology in better ways, more sustainable manners, and use that as kind of lean in to the technology side, versus lean out... And I've always been of the latter, how to lean in. That was one of my -- I believed it from just a pure resources perspective we have to create the right software and the right hardware together. It's not about making it 10% more efficient, it's about making it 10,000 times more efficient...
88
+
89
+ **Gerhard Lazu:** Oh, yes...
90
+
91
+ **Zac Smith:** ...it's the only way that happens... And I'm sure we'll get to the chips and the things at some point, but--
92
+
93
+ **Gerhard Lazu:** We will.
94
+
95
+ **Zac Smith:** ...to me, that was really one of the imperatives. And the other imperative which is why I'm so excited to be where I am here at Equinix is to change the business model that we fundamentally have around the distribution of technology. When you look at a computer, like a server, 70% of the Carbon impact of the computer happens in making the computer, and only 20%-30% of it is in using the computer for its whole lifetime. And then of course, there's some residual effect of actually recycling - if people actually even do that - of the computer.
96
+
97
+ I really firmly believe in moving to a more circular economy, and when we think about the big movements we had over the last 10 or 20 years, I grew up in the movement from IT to cloud... Which was really not a technology shift as much, in my opinion, as a consumption model shift. It was aligning outcomes of the provider; instead of "I want to sell you this server", it says "I want to help you use this thing", which is close to "I want you to have the outcomes that you want." And without addressing, frankly, the OEM and the silicon business models, which currently are in the business of -- if they don't sell you another chip, they don't make any money. Now, that's just not a sustainable business model for the future of our world.
98
+
99
+ **Gerhard Lazu:** That's right.
100
+
101
+ **Zac Smith:** \[15:46\] So the same thing is happening with OEMs, who are early -- like, they're starting to make that shift with as-a-service. Just a little bit. But in reality, still - if they don't build another server and sell you another server, they don't get paid. And to me, these are massive, multi-hundred-million-dollar businesses. And frankly, especially with the silicon, the main control point for intellectual property - which doesn't have a way to monetize the intellectual property in a sustainable manner. So to me, that's the other reason - is that I started Packet with the idea that we could impact and change the way that technology was distributed and operated... Because we're gonna go through these kind of fundamental shifts in what the things are, and who's using them, and how the expectations are asked. Well, if there's gonna be a new operating model, now would be the time to introduce a sustainable business model to the whole thing as well. So that's the other reason why -- that's the other big Why.
102
+
103
+ **Gerhard Lazu:** My follow-up question to this was "How did Equinix Metal change your Why, coming from Packet to Equinix Metal?" But I think we can draw the conclusions, we can draw the parallels, because some of the things that you were alluding to is scale, first of all; the complexity of the whole supply chain when it comes to infrastructure.
104
+
105
+ I think that people can kind of imagine how Equinix Metal makes it easier... Not to mention the interconnectivity, all the data centers. So all that - basically, you have a lot more leverage to use in delivering that Why. Anything to add?
106
+
107
+ **Zac Smith:** I mean, there are some positives and negatives to it, right? Back in Packet days, we were -- I'm gonna call it an "arms dealer" for transformation, as it pertained to this level of the stack... Which was just like -- I always thought, like - hey, there's real estate e.g. data center as a service, which is a pretty scaled model. It's for the efficient capital, there's many providers, of which Equinix I would say is the leading provider, definitely by market share related... But there's like scaled business for like -- if you wanna access one of the world's best data centers, you don't have to build one of the world's best data centers.
108
+
109
+ And then there was this thing called IaaS, which was everything from the computer, and the network, through databases and load balancing. It was a whole thing. It had a ton of verticalized software opinion built into that. So what Packet was doing was we were trying to democratize that lowest layer. I called it hardware as a service. How do we enable the consumption of hardware and make it able to touch software? And as an independent, we had some advantages, which were hard to realize at our tiny scale... But in theory, we had some advantages, which we could help do this anywhere. That was an interesting concept; we had put out edge deployments, with tower companies, we had done private deployments with some large enterprises... We were really trying to figure out "Hey, where would this be needed?"
110
+
111
+ Now, obviously, as Equinix, we were a little bit more focused on our own large-scale real estate portfolio... But with that come some incredible advantages. No longer do we have to think "Wow, how could we change this?" I'm like, "Well, we have 240 buildings in 65 markets around the world. We could start there." And we can also help our existing 10,000 customers, who were really struggling, frankly, with the breadth of our platform, and also just figuring out how they are going to react and make a difference related to their ESG initiatives around climate change. And what they're doing is they're saying "Can you help?" And what I've found is that's been a really, really collaborative engagement. And what I've been so pleased about working with Equinix versus just (I'm gonna call it) another large company is that Equinix has the word "ecosystem" everywhere. And in fact, Equinix stands for Equality in the Internet Exchange. That was the name of the business.
112
+
113
+ **Gerhard Lazu:** Hm, interesting. I didn't know that, actually.
114
+
115
+ **Zac Smith:** Yeah. So it was created as a neutral place for the internet to grow. So what I've been really pleased about is the support I've gotten throughout the leadership, from our CEO Charles, throughout the rest of the organization and whatnot... In doing a lot of this innovation in the open. So I get to share the Open 19 project within the Linux Foundation, which is where we work on sustainable liquid cooling, power and distribution models for computers... And also in the Linux Foundation for the CNCF for some of our provisioning capabilities and whatnot... So that's been a real joy, as well as looking at the scale and breadth of what we can do to impact and really start a flywheel, moving way faster than what we could have done on our own.
116
+
117
+ **Break:** \[20:12\]
118
+
119
+ **Gerhard Lazu:** So why do I keep getting drawn back to Equinix Metal? And you're right, it's hard for me to say that, but I have to. I just wanna say Packet, but... So what keeps me getting drawn into Equinix Metal - the reason why I keep coming back is because you have the best hardware, hands-down. So I have been running bare metal servers over my entire career, with LeaseWeb, OVH, Scaleway (they used to be called Online; now they merged), Rackspace, SoftLayer... Remember them? Remember SoftLayer?
120
+
121
+ **Zac Smith:** Of course.
122
+
123
+ **Gerhard Lazu:** And in my opinion -- so having all these data points, I think that Equinix Metal does it best. What's the secret?
124
+
125
+ **Zac Smith:** That's a good one. I don't know. First and foremost, I randomly landed in this industry and figure out I love -- I love this industry, by the way. I love the community around -- the plumbers, the builders. It's just one of these unique places of the internet that is so apprentice-driven and collegial, which is, I think, really special in terms of people who build the lower layers of the stack. It's not something that - I go to cocktail parties or whatever, and people are like, "What do you do?" and I was like "I work at Equinix." They're like, "The gym?" \[laughter\] So most people don't know what we do... But underneath the hood, I think everybody really works well together. So that's kind of just like a fun part... You have to have passion, and I have a strong amount of passion for this part of the innovation stack. So I think that's it.
126
+
127
+ But when we started Packet, we had this super, super-clear vision... And I think I've already repeated it once here, but it was "How can we automate hardware, no matter what it is, no matter where it is, and no matter what software you put on top of it?" That was the thing. And what we knew is we knew our place in the world. That if we could enable a highly programmatic way to interact with hardware, no matter what it is - and that's a deceptively simple statement; it is actually extremely hard.
128
+
129
+ \[24:07\] And so I was like, "This is what we're gonna be the best in the world at. We're gonna figure out how to enable hardware no matter what it is", to this massive world of software, call it 40 million developers in the world, who wanna use all the stuff, right? And they need to make their amazing software work with this amazing piece of hardware... Which, by the way, what is "this piece of hardware"? I think it's one of the most misunderstood things that goes out there. People are like, "Oh, computers are a commodity..." You know, except if you're trying to do something special that changes the world, like make your car go left and right, or talk at the walls all day long, or carry around a supercomputer in your pocket called a cell phone all the time, that has all the widgets, right? Like, that's where hardware and software come together and create this really magical thing.
130
+
131
+ So I think our focus on that just pure mission, which was - we knew that we had enough there to prosecute, and we could spend the vast majority of our careers trying to make hardware accessible for software, knowing the pace of hardware is changing so rapidly. We're like in the golden age of hardware right now, with the kind of competitiveness between the silicon manufacturers, the business model changes, the hyperscalers, the demand and volume driven by mobile interacting with the ending of Moore's Law in its own regard has just created this huge place for innovation, I think, in software. You also kind of have this natural thing which I'm happy to play a little bit of a part on, where now multi-arch in the data center is a reality...
132
+
133
+ **Gerhard Lazu:** Oh, yes...
134
+
135
+ **Zac Smith:** When 5-6 years ago that just wasn't right? It was x86 or you're done. And now you've got serious languages that have been made multi-arch and have the build capacity and the CI pipelines and the related ecosystem to make that continue, and "build upon itself." That's happened faster than I expected, where the software has met the hardware... And the hardware is also changing so rapidly that there's just so much to do.
136
+
137
+ So I envision that over time we'll create a much better what I'm gonna call HCL (hardware compatibility list) for the internet, that effectively can be an idempotent view of every single piece of hardware ever, and that would allow all the software to be able to choose and understand how to work with it. We're pretty far off from that... But I think we can get somewhere there.
138
+
139
+ But I think that's where I'm gonna give your answer, is just like being super-crazy, laser-focused on what we do... And I've spent a lot of time in my first few years at Packet -- I'm not gonna say fending off revolutions, but a lot of people... The clearest one on my mind was I almost had most of my management team walk out because they said we had to launch load balancers, otherwise nobody could use our thing. I said, "No... I think software will figure it out. Let's just provide really easy, smaller hardware instances, and they'll figure everything else out", and they're like "No, it's too hard. We've gotta do load balancing." And then look a couple years later... You've got Ingress controllers, and service mesh, and all these kinds of different -- BGP control in Golang... It's cool. I mean, it's not for everybody, but software solved the problem.
140
+
141
+ So a lot of that was just staying super-focused on what we did, and I think some of those other providers that you mentioned, of which I'm huge fans, and know the founders of most of those businesses, that moved our industry in their own way... But they became (I'm gonna call it) all-purpose platforms in a lot of ways. And that's probably right, in some regards... A lot of the industry has moved to that direction, especially with hyperscale clouds, having these just robust software catalogs and ecosystems... We've been fortunate enough to have venture backers at Packet who really saw our vision for what it was, which is staying fundamental in the primitives business... And frankly, here at Equinix, which really knows that it is a builder of physical infrastructure that can move at software speed. That's our job. Our job is not to do all the things; our job is to enable an ecosystem so that they can do all the things. So that has allowed us to continue to focus in on just like "Let's be the best at this, in the whole world." That's it.
142
+
143
+ **Gerhard Lazu:** \[28:12\] I can see the importance of that, and I can see many decisions which were controversial, such as "Let's not build a Kubernetes." Like, what?! No, everybody's building a Kubernetes. What are you on about?!"
144
+
145
+ **Zac Smith:** Oops... I forgot to build a Kubernetes service...
146
+
147
+ **Gerhard Lazu:** That's exactly the title, yes... That was one of the great blog posts which I had the pleasure of reading... And it shows in the small things, as well as the big things. But for me, one of the reasons, again, why I loved Packet, and now I love Equinix Metal, is that I could provision an instance type, the c3.small, with the highest CPU clock speed ever. You can't get a faster CPU clock speed anywhere. It turbos to 5 gigahertz. Now, that creates other problems... But my Erlang benchmarks run fastest on Packet. It's unreal; like 20%, 30% faster... And you can't reproduce that anywhere else. You can get a dedicated instance in AWS and it will not be faster. And that was surprising... And that was like four years ago, or three years ago.
148
+
149
+ **Zac Smith:** \[laughs\]
150
+
151
+ **Gerhard Lazu:** So not much has changed since then. But there is this problem... You wrote a little bit about this, the liquid cooling imperative; that's another great blog post. By the way, do you know that one of my favorite downtimes is to read your blog posts?
152
+
153
+ **Zac Smith:** Uh-oh... \[laughter\]
154
+
155
+ **Gerhard Lazu:** No, they're really good. They're short, they're well thought through, and you convey a lot of information in a very good way - compressed information... They're great.
156
+
157
+ **Zac Smith:** Well, we have a term for that here at Equinix...
158
+
159
+ **Gerhard Lazu:** What is it?
160
+
161
+ **Zac Smith:** Well, we started it at Packet and it's due to my twin brother Jacob, which is "Craft, not crap." So we don't ship any crap content. Only crafted content. So... Craft, not crap.
162
+
163
+ **Gerhard Lazu:** It shows. It shows. So the hot chips, coming back to the 5 gigahertz one - there is the cooling problem. Can you give us the TL;DR on that? Because you thought quite a bit -- again, I don't want you to reproduce a whole blog post...
164
+
165
+ **Zac Smith:** Sure.
166
+
167
+ **Gerhard Lazu:** But as a summary, as a TL;DR, why is that important? Because there's another big initiative that is linked to this, the Open 19, and I see a link there... And I can see you being the innovators behind this. But tell us more about that.
168
+
169
+ **Zac Smith:** Yeah, so the TL;DR is that chips are getting hotter. Why are they getting hotter? Mainly, we're getting dense, the nanometers are getting smaller on the fab processes. That's how you kind of stuff more transistors in. In order to then do that, you need to push way more power through these things, and we've created innovative ways, like what Lisa and team have done at AMD about chiplets, and having lower yield requirements, and putting multiple chips on a single die. But in the end, we're just running into a physics barrier here. You add it by adding more layers onto it. So suddenly, you've got multiple layers in the thin fab or whatever they call it. Even with memory and NVMe. So everything is having denser transistors, with more power going through them, and you have this kind of movement towards the way, as you kind of get rid of the nanometers, your only way to make things go faster and more efficient is to push more power through them.
170
+
171
+ So that's one in the general-purpose, large-scale silicon trends that we're dealing with. And the second thing is we have way more sophisticated purpose-built technology at this point, like GPUs, or accelerators. We have things that are very, very specific at doing one thing very well, and you then keep them busy, so you just use a lot more heat. There's an electricity problem that we have there, and certainly, as we shift to a more renewable energy footprint, instead of just buying credits and offsets actually generating things like green hydrogen, so you can offset demand and use it, exposing -- there was a great panel with the Intel team last week or the week before about how to expose to the world of software reliable metrics on "Well, that would not be a good time for you to reindex all your data stores. Maybe you should do it at noon, in our Texas data center, instead of at 2 AM in our Frankfurt data center, where we don't have any renewable energy." We don't have a way to even express that in our industry, a standardized way, let alone to do something about it. We desperately need that...
172
+
173
+ \[32:20\] But anyways, getting back to it - accelerators and purpose-built technology are getting hotter... So you have this electricity thing, more juice into the rack, and denser, effectively... And then you have the other problem, which is cooling. We're kind of getting to the upper barriers of two things. Number one, we're getting to the upper barriers of how we can air-cool this stuff. A lot of the times -- and you can see simulations, that about 20%-30% of the energy in a data center is just fans. If you ever walked into a data center, they're very loud. They're loud because there's all these little tiny, 20-millimeter fans running at the back of every server, just sucking the air through, just to create airflow on individual computers, to pull it over those chips and those heat sinks.
174
+
175
+ So in big data centers you've got 20%-30% of the energy just using fans to pull air around... And then we're getting to this density level where you just can't cool it if there's not enough air flow to be able to do that... And especially in a mixed data center. In a hyperscale data center is where you can build around one specific thing, you can kind of purpose-build some of the stuff around it, you can (as I like to say) build your data center around your computers... You can't do that at some place like Equinix, where every enterprise service provider has different things. I also kind of believe that we're gonna have a future of compute that's more heterogeneous, versus homogenous... So we're gonna have a few of a lot of things, versus a lot of one thing. So I kind of think that we have to solve this in a more scalpel-driven manner.
176
+
177
+ So moving to liquid -- I'm not gonna go into all the things, but just think of it like your car radiator or air conditioning. Pulling a liquid that turns into a gas over the hot part - the chip, the plate, whatever - being able to do that does a few things. Number one, it can be way more efficient. You can stop all those fans, you can stop pushing air around, which doesn't go in the right place at the right time, and start to put the right cooling at the right place.
178
+
179
+ The other thing you can do is create a much, much higher differential between the intake and the output. What that allows is - you've probably heard of things like heat pumps. You can actually turn that back into energy. So you've got a natural thing, a giant turbine called "thousands of computers creating heat." That sounds kind of like a power plant to me, right? Right now we literally just exhaust that, we're just trying to get rid of it. But if you can create a differential and actually capture (I'm gonna call it) hot enough liquid, you can actually turn that back into energy, or sell it to the grid for municipal purposes, or whatnot. You can use that energy if you can capture it.
180
+
181
+ And then the most important part of that process is actually today most of our data centers and most of the data centers in the world use evaporative cooling, and that takes millions of gallons of water per day to evaporate this heat. And that is simply not sustainable. So we need to move into a closed system, where we can keep the water and the liquid and not evaporate it all along with it.
182
+
183
+ So there's these momentous challenges and opportunities... I think, like what I've touched on earlier, some of the business model changes are gonna be necessary for that... But as we -- like, for example at Equinix we have a goal of being carbon-neutral by 2030, using science-based targets... We have to explore all of these options with not only ourselves, but our ecosystem partners - the silicon partners, the OEMs, our customers etc.
184
+
185
+ \[35:44\] I think one of the biggest challenges we have right now is that in an enterprise data center, with this diversity of technology that's going on, everything from Dell servers, to NVIDIA DGXes, to boxes that you brought in from your -- you know, "This is a ten-year-old server I've got... Let me bring it into the colo." Still useful, and actually that's probably one of the best things you could do, is continue to use that server, so we don't have to make a new one.
186
+
187
+ **Gerhard Lazu:** Reuse, yeah.
188
+
189
+ **Zac Smith:** Reuse is the best thing we could possibly do... And luckily, software is getting sophisticated enough to deal with that. At least until we get a more robust recycling program built in with the silicon manufacturers, where we can recapture that and put it back into use.
190
+
191
+ Well, one of the problems with this diversity is that there's no current standard for how racks get put together... So if you've ever built a PC - I grew up building PCs...
192
+
193
+ **Gerhard Lazu:** Oh, yes.
194
+
195
+ **Zac Smith:** You used to have the ATX case.
196
+
197
+ **Gerhard Lazu:** Recently. Even servers. I still have it upstairs. 2011 - that's the last server which I built, with Supermicro. It's still up there, in the loft.
198
+
199
+ **Zac Smith:** There you go. \[laughs\]
200
+
201
+ **Gerhard Lazu:** Not liquid-cooled, unfortunately...
202
+
203
+ **Zac Smith:** In the PC world we had a standard called ATX. So you had an ATX case - ATX mini, whatever. And the cool thing about that was if you got an ATX case and somebody else made an ATX motherboard, and on the back of it you had an ATX cut-out on the pins, you could kind of make anything, and you didn't have to reinvent the logistics around the computer, like sheet metal, and power supply, and fan, and all those other things. Well, we don't have that in the rack today. Every single rack is bespoke designed; as people plug in these servers - well, where are the power cables? Where's the fans? Where is all the power supplies? None of it is standardized, and it's extremely hard to build these things. Just imagine putting in liquid cooling. Now we're putting water everywhere. This is mechanically complicated, and potentially -- not dangerous per se, because you're usually gonna use some non-conductive liquid... But whatever. It's complicated. We've already got hundreds of cables going into the back of these racks, and now we have like water tubes going in and out? Like, oh my God, right?
204
+
205
+ So that presents a huge logistical thing where we need to create a standard for the rack. And not like "Everybody build this computer", but "Everybody build to this standard mechanical form factor, so we can all connect." Almost like Nespresso capsules; they work in the machine. Or like how many things in construction, like outlets - all look the same, right? Well, it's because you create a standard so that the whole industry can work. Maybe Nespresso is a bad example... Whatever.
206
+
207
+ But the other thing is actually related to -- you know, in terms of creating this ability to go into racks easier, is so that we can actually design a system where we can take the thing out of the rack. Today it's so expensive and complicated to put things in the market. We never think about how to move them from it. So it's all like a one-way street. And then we just try and get rid of it. And then what we do is we throw away all the stuff - we throw away the sheet metal, we throw away the cables, and the copper, we throw away the power supplies, we throw away all this infrastructure around it, just because we want a newer CPU.
208
+
209
+ **Gerhard Lazu:** That's crazy.
210
+
211
+ **Zac Smith:** That's crazy. So we've gotta do something about that, from a sustainability perspective, but also so we could do things like put the right technology in the right place, at the right time. For example, imagine if we put your coolest c3.small processors into Ashburn. They're great, and whatever, but it turns out we need some of them in Atlanta. Right now it's so expensive and so heavy of a lift to move anything from anywhere. It requires specialized people, and logistics... Where are the boxes? We don't have any standard boxes in our industry. Well, better go get brand new boxes... You know, things like that - there's a massive amount of waste. Well, what if we could standardize, and we could pull out a server sled and for $10, via a standardized FedEx box, put it into Atlanta? Well, holy cow. I can just imagine the defrag that we could do on our data centers for our customers.
212
+
213
+ **Gerhard Lazu:** Oh, yes.
214
+
215
+ **Zac Smith:** But that's not a possibility right now, without creating some sort of a standard in-rack ATX case, so that way things can go and move, yet innovation can still happen within it.
216
+
217
+ **Gerhard Lazu:** Is this where Open 19 comes in? Is this it? Basically, Open 19 is what you've just described, this standardization?
218
+
219
+ **Zac Smith:** \[40:01\] I would say that's the vision, which is instead of kind of dictating the technology, it's around creating an open standard for the mechanical form factor, and that's it. And that's really important, because allowing innovation to occur in both proprietary and open manners is very important for hardware supply chains... If you've ever made hardware, it's really expensive to go and do. Spinning motherboards isn't cheap; inventing chips isn't free. And so we need kind of a robust set of options for the intellectual property model of the technology that goes inside... But if we could then start standardizing as an industry, especially as our challenges around power and cooling and heat capture become front of mind for most companies, and are imperative for all of us, that's gonna provide a really amazing outlet for all of us to work together - OEMs, customer supply chain, data center operators etc.
220
+
221
+ We've chosen to invest in Open 19. It has a special kind of blind mate connector design. So the idea is that you shouldn't ever have to go to the back of the rack. You basically have a sheet metal kind of cage, and on the back you have blind mate power, blind mate data, and soon blind mate liquid cooling loops... So if you have a server that can mate with those, it automatically engages. But you never have to go to the back of the rack and do all this complicated stuff. So my vision would be that that FedEx driver can literally come in, walk it in, slot it, and walk away, and it would work. That would be the dream. We're not there yet, but if we can get there - wow. That would change how we use technology.
222
+
223
+ **Gerhard Lazu:** Yeah, I would get two of those, please. Can you send this FedEx guy up in my loft and slot two of those in? I would definitely want two of those as well.
224
+
225
+ **Zac Smith:** And especially if we could do it with -- I mean, you just think about places like Equinix, or whatever... We could do it with reusable packaging. Like, "Okay, it's a brick. It's of this size, we've got a package... Like, plop-plop, here's your thing. We'll come back with a brick if you wanna move it. We'll come back with a reusable box." And I think that that in and of itself is a huge reducer of waste, but it could enable this movement of technology to the right place, at the right time.
226
+
227
+ **Gerhard Lazu:** Yeah. This sounds an awful lot like containers for software. That's exactly it - this is the standard...
228
+
229
+ **Zac Smith:** Stuff it into here...
230
+
231
+ **Gerhard Lazu:** Create the standard, and then everything is going to slot in. Like, spin up the container, and that's it. Well, okay.
232
+
233
+ **Zac Smith:** That's a good phrase, and we can use our physical infrastructure at software speed. Then we need to create -- Kubernetes is to the containers as something is to the physical hardware mover
234
+
235
+ **Gerhard Lazu:** I was too busy creating the hardware equivalent of Kubernetes... That's why I didn't create Kubernetes. \[laughter\] Okay...
236
+
237
+ **Zac Smith:** There you go... You've uncovered the secret.
238
+
239
+ **Break:** \[42:45\]
240
+
241
+ **Gerhard Lazu:** So you've mentioned about multi-architecture becoming a thing, a big thing in the data center... And I have seen at least four developer workstations, like when it comes to the new Apple M1 chips... I think they're amazing. I don't have one, but I'm looking forward to it. I know that Intel has always been great for single-core clock speeds. That's why I was mentioning the 5 gigahertz... But if you need lots of cores, AMD - I think especially with the Rome architecture, really had a home run this year. I was following it, and it was just amazing. My dev workstation - it's an AMD EPYC Rome. Rome is the second generation of AMD EPYC. You know, but I'm not sure whether all listeners know that.
242
+
243
+ **Zac Smith:** Don't worry, here comes Milan. It's coming out soon.
244
+
245
+ **Gerhard Lazu:** Right. So how do you see this, between Intel, AMD, ARM - the whole chip play... I know that you provide ARM servers, but I haven't seen them publicly... But what does this multi-arch look like from an Equinix Metal perspective? And from your perspective, from chips... Because you love chips even more than I do. And I mean the CPUs, I don't mean the chips that you eat.
246
+
247
+ **Zac Smith:** \[laughs\] So let's see... The best way I can answer is we've always -- I loved investing in the ARM ecosystem because it really pulled us as a company back in (I think it was) 2016 when we launched the Cavium ThunderX, which was the first 64-bit server capable ARM processor that you could buy. There was a few before that, but they didn't really come to fruition enough to be able to -- kind of general-purpose.
248
+
249
+ And the reason why we did that - which a lot of people questioned me about at the time, especially at our company... They were like -- we're a very open and transparent business, which I was always very proud of, and people hopefully felt that they could say what they needed to say, or whatever they thought... And some people said "Why are we doing this ARM stuff? There's just no money in it." And I said "It's because we need to force ourselves --" kind of like how cloud-native... We found a lot of cloud-native developers developed on Packet or on Metal because it forced them to be not reliant on cloud provider services... Because we didn't provide any. You couldn't get stuck on our load balancer, because there wasn't one.
250
+
251
+ **Gerhard Lazu:** \[laughs\] Not our problem...
252
+
253
+ **Zac Smith:** But that's what I wanted to do with moving the ARM, was making sure that we could be really agnostic around what the technology was. And I always pushed people internally and said -- whether it's Intel, or ARM, or some other thing that somebody invents, which I'm sure they will, we wanna be really good at turning it on and off in a repeatable, secure way for our customers, and then helping the world of software to touch it.
254
+
255
+ And so ARM was a really great opportunity to push that envelope, because nothing worked. Like, everything you thought would work -- like, "Oh, we'll boot." "Well, not really." UEFI - oops, that's a little different. Oh, iPXE... All kinds of things throughout the bootchain process and whatnot had to be worked on until you could do it... And I remember sitting with Syed, who's the CEO of Cavium, and I was like, "We're gonna need to provision and delete these things like thousands of times a day, until it is boring. It's just not boring right now. And then we'll get like Debian, and CentOS, and Ubuntu, and some other things working on it every day, boring, with all the build things, and all the repos, and all the things that needed to get rearchitected in multi-arch for that."
256
+
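+ That "provision it thousands of times a day until it's boring" loop implies constantly checking that a machine really is what you asked for. As a tiny illustration (a sketch, not Equinix's actual tooling), a CI or provisioning step might guard a supposedly native ARM build like this:
+
+ ```python
+ # Fail fast if a build job that expects a native ARM64 machine landed somewhere else.
+ import platform
+ import sys
+
+ EXPECTED = {"aarch64", "arm64"}  # Linux reports aarch64, macOS reports arm64
+
+ def main() -> int:
+     machine = platform.machine().lower()
+     if machine not in EXPECTED:
+         print(f"expected an ARM64 builder, got {machine!r}", file=sys.stderr)
+         return 1
+     print(f"native ARM64 builder confirmed ({machine} on {platform.system()})")
+     return 0
+
+ if __name__ == "__main__":
+     raise SystemExit(main())
+ ```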
257
+ And I remember one of the first ones we did is I called up -- what's his name? He used to be a client of mine way back in like the early 2000's, but he was the maintainer of the build infrastructure for Golang. And I remember calling him up through a friend and basically being like "Yo--" He worked at Google. "Can I give you access to our works-on-ARM ecosystem so you could start doing builds of Golang natively on ARM?" He's like, "Well, you could always compile it yourself." I'm like, "Yeah, but that's a lot of work for everybody to do every time they wanted to try--"
258
+
259
+ **Gerhard Lazu:** Oh, yes...
260
+
261
+ **Zac Smith:** So we just kind of slowly built that up... And that was a really cool way for us to make sure that we're being agnostic on architecture. Now, of course, later Intel was challenged by AMD with their chiplet architecture, and Lisa's kind of forward-thinking vision... Mark Papermaster and whatnot creating a purpose-built (I'm gonna call it) technology or chip architecture for the cloud era... Just provided a huge amount of competition and an alternative in the marketplace... But now you've got this -- you know, we'll see, but like NVIDIA is buying ARM, or at least attempting to... I'm not sure what's the state of the regulatory approvals or whatnot...
262
+
263
+ \[48:11\] But now you have these three really good, really competitive players... Now Pat's back at Intel, and he is moving hard, from what I can tell on the outside... And it's great to see three giant, pretty consolidated chip companies, all fighting it out. This is good. This is really great. And in the meantime, you have people like Amazon creating Graviton and pushing the limits there and showing what's possible, or Apple doing M1... And now even developers -- I mean, I was on a podcast with a developer friend of mine recently, and he was talking about how much he loved developing on ARM. I was like, "You wouldn't hear that three years ago..."
264
+
265
+ **Gerhard Lazu:** Oh, yes.
266
+
267
+ **Zac Smith:** He's like, "But it's so much faster to do it natively." I was like, "Whoa...! ARM laptops, here we go...!" \[laughs\] It's like, Ubuntu on your desktop, right? It's gonna happen one day.
268
+
269
+ So I think if you've just got this nice, healthy, competitive silicon environment, you've got a bunch of different technology tracks that people are going off of... And frankly, the software world has become multi-arch because of (I think) the two critical ones: Apple having moved to ARM for its own chip - that's gonna help make a lot of developers experience native ARM architecture. Obviously, the mobile world has carried that through... And then the second one is with cloud providers like Amazon even adopting their own ARM technology. I think that really will just cement a multi-arch world, which will prepare us for whether it's OpenPOWER, or RISC... You know, things that are truly open ISAs. And that's the difference there, is that whether it's ARM, which is still a licensed instruction set, or x86, similar - and then maybe we'll see Intel will also start licensing it... But you know, RISC, with RISC-V, and OpenPOWER are truly open, and they have no intellectual property ties to them beyond their licensing regime, but it's an open source license... Which is really neat. Because in my opinion, that's where the next phase of super-bespoke chips comes out of, where you can use an architecture really liberally... And I think we haven't seen that yet. I think RISC-V is on the radar, but it's not here... So whether it's SiFive or whatnot coming to market - some way where we could see companies of all sizes, maybe even pretty early companies, developing and having their own chip that just did their workload. It'd be pretty cool. More Apple M1's, but for different companies.
270
+
271
+ **Gerhard Lazu:** So you're the second person that I know of who speaks very passionately about RISC. Dan Mangum, from Upbound Crossplane, he's the first one... And I know that he's really passionate about RISC-V. Besides the open source model, is there something more to it? Is it like the potential what RISC-V could become, and the chips that could be built with that instruction set? Is that what gets you really excited? Is there anything else beyond that? Because right now, it's very nebulous; it could be an amazing thing. But if you were to use it -- like, you can use ARM today. Can you use RISC-V today? I don't think you can. There's no implementation of RISC-V as far as I know.
272
+
273
+ **Zac Smith:** There's a great podcast that I listened to, maybe it was last year, from NPR, about RISC-V. But it was great. People are using RISC-V just within their own proprietary silicon... For example, some of the big machine learning products and whatnot, they use a ton of RISC-V. And I think where it comes down to is although the licensing model will be good, and certainly, I'm gonna call it, a liberating tool, that will just kind of create competitive and licensed models. And I think it's really just gonna be the overarching assembly -- like, RISC-V is a pretty new language, or a pretty new ISA. This is an architecture built recently. That's kind of cool.
274
+
275
+ **Gerhard Lazu:** So modern, is what you mean by that.
276
+
277
+ **Zac Smith:** Yeah.
278
+
279
+ **Gerhard Lazu:** Okay.
280
+
281
+ **Zac Smith:** \[51:58\] And I think that's powerful. I'm not smart enough to even understand what that means, I just kind of have to believe that there's some pretty big advancements we've all made in 20 years in terms of how we can build architectures... So I think that that's gonna be the fun part, is to see what comes out of that and where people can take -- as it gets more mature, and there's a line on chip factories for that, from the silicon fabs... Like, okay; well, that would be cool. What if you could produce a chip that just did this one thing that your software needed? And that's where you get into the "Oops, I did it 10,000 times faster and more efficient" thing, versus anything else.
282
+
283
+ **Gerhard Lazu:** I see, I see.
284
+
285
+ **Zac Smith:** And maybe the barrier to that just goes way down... Kind of how ARM did it for certain parts of the market, but maybe for the next phase.
286
+
287
+ **Gerhard Lazu:** So I'm going to mention now the third article, the third blog post that you wrote, "Five predictions for hardware in 2021." I really enjoyed that. I would ask you how they played out, but let's leave that for another time, if ever... I'm more curious about your two predictions in hardware for 2022. Do you have any?
288
+
289
+ **Zac Smith:** Oh, that's a good one. I haven't thought about it yet, man...
290
+
291
+ **Gerhard Lazu:** Well, you have to... Because I'm looking forward to that blog post, and you have to start writing it... \[laughs\]
292
+
293
+ **Zac Smith:** Well, don't hold me to 2022, but the two most interesting things I think of related to hardware right now... First and foremost, we're gonna have to solve the sustainability problem. This is just not gonna work. So whether it's because people come out with licensed CPUs, like "Sign up for your subscription to technology from whoever" versus "Buy this thing", and also the related kind of (I'm gonna call it) surround sound stuff around the cooling, the power, whatever. We're gonna have sustainability. Silicon is at the heart of that. Hardware needs to become a sustainable, circular economy; it is not currently today. So that's probably not gonna be done in 2022, but --
294
+
295
+ **Gerhard Lazu:** At least the beginning of that, yeah.
296
+
297
+ **Zac Smith:** I think we're gonna make progress on it... We already see it happening throughout our industry, which is regulatory impacts customers... All of our biggest customers bring sustainability as their number one issue now. It didn't use to be there.
298
+
299
+ **Gerhard Lazu:** That's a good one.
300
+
301
+ **Zac Smith:** Even 18 months ago it wasn't even on the radar. Now - right at the top. So... Okay, that's great, because now we see business drivers... I think people are gonna pay for this too, which is really important... Because you don't just get sustainability for free. We don't get to just do "Oh, we did green power for you. It was a good marketing thing." No, no, no. We invested tons of money to make meaningful impact to change our world; that is going to cost money. We are going to invest together. So I think that's an important -- that's number one.
302
+
303
+ I think actually number two is that at some point, if we can solve this distribution of technology - right thing, at the right place, at the right time, so that way you could pull up on your iPhone and see the tracking of your cool computer to the right market, and then just turned on... And if we could snap our fingers and -- let's say you figured out just the right technology that you needed to use for your platform, and then you clicked a button or hit an API call, and somebody like Equinix got it into 50 to 60 markets around the world in like a couple weeks... That would be rad. \[laughs\] And I think we would see disruption in content delivery, and CDNs, and edge computing, and all kinds of things that we would do, and networks, and all things it could run on - I'm gonna call it hardware and software moving at their own pace.
304
+
305
+ So pending we solve this distribution thing, I think the big -- and this is, again, probably... Now, you'd have to ask me what's the 2025 predictions. That's way more my style. But 2022, I'm not sure.
306
+
307
+ 2020-something - the other thing I think is gonna be security. Right now, people just try and get the hardware or the thing in the right place, at the right time, and they're lucky to have it. That is not going to be our long-term challenge. We'll solve that. Then we need to solve a way different approach to security, and that has to start at the hardware level. So I think our enablement of hardware-level security has barely begun. Most people don't think about it on the software enablement side. They think about "Oh, I'm gonna encrypt my stuff, I'm gonna get my TLS going, I'm going to do all those things..." But really, even things like basic time protocols, basic boot processes... Is this machine the thing I think it is? Who touched it in the supply chain?
308
+
309
+ **Gerhard Lazu:** \[56:14\] Oh, yes.
310
+
311
+ **Zac Smith:** You know, I always say "Why hack the app, when you can just hack the one-dollar chip at the factory?"
312
+
313
+ **Gerhard Lazu:** Oh, yes.
314
+
315
+ **Zac Smith:** So I think we've gotta start thinking about like a zero-trust approach to hardware, and that will allow us to increasingly move these very important parts of our life into hardware we never touch or operate ourselves. Do we have to trust a third-party? I don't know... You shouldn't trust a third-party. But right now we don't have a mechanism for third-party hardware to be zero trust... And I think that's the next big wave.
316
+
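+ At its core, the "is this machine the thing I think it is?" question is a comparison between what a device reports about itself and a known-good record. The sketch below shows that shape only - the serial and hashes are made up, and real attestation (TPM quotes, signed measurements, supply-chain records) involves much more than a dictionary lookup:
+
+ ```python
+ # Deny-by-default check of a reported firmware measurement against a golden value.
+ import hashlib
+ import hmac
+
+ KNOWN_GOOD = {
+     # server serial -> expected SHA-256 of its approved firmware image (hypothetical)
+     "EQX-12345": hashlib.sha256(b"approved firmware image").hexdigest(),
+ }
+
+ def is_trusted(serial: str, reported_digest: str) -> bool:
+     expected = KNOWN_GOOD.get(serial)
+     if expected is None:
+         return False  # unknown device: zero trust means no benefit of the doubt
+     # constant-time comparison, so the check itself doesn't leak anything
+     return hmac.compare_digest(expected, reported_digest)
+
+ if __name__ == "__main__":
+     reported = hashlib.sha256(b"approved firmware image").hexdigest()
+     print(is_trusted("EQX-12345", reported))  # True only if the measurement matches
+ ```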
317
+ **Gerhard Lazu:** So I know, like supply chain security...
318
+
319
+ **Zac Smith:** 202x.
320
+
321
+ **Gerhard Lazu:** 202x, yeah. I can see that one, even like in software, where we have been doing it for long enough... When it comes to containers, when it comes to various CI/CD systems, when it comes to two different platforms even, how software moves with those different platforms and you shouldn't trust any of them - how do you ensure things are secure? How do you ensure things remain signed? I can see this being a big thing coming.
322
+
323
+ Coming back to what you mentioned earlier about sustainable hardware, and how we cannot throw away hardware. We have to replace the parts which are broken, or where there's obviously an advantage to upgrading them, like the CPU, without upgrading everything else, and making it so simple that the FedEx guy or gal can come into a data center and just plug it in.
324
+
325
+ **Zac Smith:** FedEx robot. FedEx robot.
326
+
327
+ **Gerhard Lazu:** Yeah, that as well. It may happen. So it can happen, and maybe should happen, because this is the whole more sustainable hardware, more sustainable economies of scale - because they have to be big for them to work... And you're right, it is top of mind for many people, especially this week.
328
+
329
+ So I can see a very nice link - and I'm sure that you can see it as well - between what you've just mentioned and Equinix Metal. So how does this map to the Equinix Metal priority for 2022? I know that you promised priorities in a few weeks in your last blog post, on November 4th...
330
+
331
+ **Zac Smith:** You're trying to get a teaser, you can't do that... \[laughs\]
332
+
333
+ **Gerhard Lazu:** Yes... You promised a few. I just want one. So can you give us one?
334
+
335
+ **Zac Smith:** Well, I think we'll make meaningful progress on the distribution capabilities. I always like to tell people that Equinix Metal is not a bare-metal cloud. We're a hardware distribution platform, an operator for fundamental infrastructure... So we'll enable more places where you can do that. We've been really fortunate to be able to invest heavily and put Equinix Metal in 18 markets around the world. I think we'll expand that and go to more. I think though that what we'll do is we'll move this -- my prediction is that we'll move some of these things which are kind of loosey-goosey right now... Like, we're going to do field trials of our pluggable liquid cooling. We've already been doing it for about a year in one of our data centers. We're gonna move there with customers in the coming months, using some prototypes that we've been building...
336
+
337
+ **Gerhard Lazu:** Interesting.
338
+
339
+ **Zac Smith:** So we'll move out there, and we're gonna do that with some partners... OEMs, supply chain partners etc. So I think that'll be really important, because as Equinix, we're fortunate that we're always building data centers. I can't remember from our last earnings call how many are under construction right now, but it's a lot... So we have this opportunity to really optimize and change what we're putting into the ground around some new hardware delivery model...
340
+
341
+ So my hope is we'll make progress on that, and hopefully with customers and in the open, so that everybody can learn, and we can try and (let's say) exit 2022 with a super-clear path to disruptive sustainability from a power and cooling perspective.
342
+
343
+ **Gerhard Lazu:** I love that. That's something I can get behind... Oh, yes. Yes, please.
344
+
345
+ **Zac Smith:** \[01:00:00.10\] I haven't found somebody who can't get behind that. Everybody is like "That makes a lot of sense, and I wanna be part of it." So I think making sure we do that in an open way is gonna be really important.
346
+
347
+ And the second thing is I think we're gonna see the OEMs - Dell, HP, Cisco, Lenovo etc., even NetApp, and F5, and Pure, and the people who make purpose-built technology in hardware - I think we're gonna see just massive business model shifts. The cat's out of the bag. People want aligned business models, as a service... And that's gonna be a really, really big turn for these aircraft carrier style companies; they're big businesses that are really used to shipping you the technology and you doing everything, and now they're gonna turn it and run it for you somehow... We're gonna feel the ripple effects of that. But I'm so excited about it, because that's the first leading indicator of how we can make the business models more aligned.
348
+
349
+ And people sometimes -- you know, they originally inferred that Equinix Metal was kind of in conflict with cloud providers... I don't think so. We've recently enabled things like Amazon EKS, and Anthos... Because I see cloud providers as software companies, that when at the right scale, run aggregated infrastructure for you. But when not at the right scale, call it pretty much anything beyond multiple megawatts - they don't really wanna run the technology for you. They just wanna sell you the software and services. And I think that that's a pretty aligned model with Hyperscale that we can help support.
350
+
351
+ And with OEMs, as they move into this as-a-service model, I think we can be super-helpful with Equinix Metal to help them be the best in the world at that. It's one of the main reasons why, since 2014, we've been making it so that we can automate hardware, no matter what it is, and where it is, and what runs on it. We might wanna add one other thing - or who owns it... Because it doesn't really matter, right? Your server, my server, Dell's server... It's just a server. And can we make it consumable and usable? That just requires an adjustment. That's a startup guy talking. It just requires a business model change. \[laughter\]
352
+
353
+ **Gerhard Lazu:** But that's simple, right? We'll figure it out...
354
+
355
+ **Zac Smith:** Let's figure that out... \[laughter\] Pull request on version 1.2 of the business model.
356
+
357
+ **Gerhard Lazu:** Exactly. Or send me your pull request and I'll consider it. I'll merge it. Who knows, maybe...
358
+
359
+ **Zac Smith:** I'll consider it...
360
+
361
+ **Gerhard Lazu:** Okay. So as we are about to wrap this up, I'm wondering that -- like, from a listener perspective, if there was one thing that I would take away from this conversation, what would you like it to be?
362
+
363
+ **Zac Smith:** Well, I would like more people, and especially software-minded people, to be interested and open to (I'm gonna call it) the disruptive innovation that can happen when you pair magical software with the right hardware. I think it's not only super-cool, I think it's an imperative for us long-term to be good at that. Not everybody, but I think that that's an open place, and I'd love people to come away excited about the opportunities of making a difference with technology, about doing so in a sustainable way... And not just because it's good for the granola country planet, it's like... Because it's both good for you, and doing good, but -- what is it? Doing well by doing good...
364
+
365
+ Another one of my blog posts a year or two ago, which is about creating a bigger tent... An ecosystem-driven way, where we can create more value by solving these problems together, instead of (I'm gonna call it) a siloed way, where we take the value. It's like the Carbon industry right now, where instead of pulling in raw resources and extracting them for ourselves, kind of like drilling for oil, we can create new technologies like renewable solar, or even Carbon capture, what Stripe is doing... That's a way that you can do well, but also create a bigger opportunity tent. I think that's the other powerful part that I'd love to impart to software-minded users, is that we can really work together between software and hardware to solve some of the biggest challenges in the world, but we cannot do it on our own. Together, that's a pretty powerful combination, and I'd love to be part of that ecosystem.
366
+
367
+ **Gerhard Lazu:** That's a really good one... And I know that Tinkerbell OSS is a great example of what you've just said. So if you're wondering, like, "This sounds a bit handwavy..." Well, no, because there's actual projects that you can go and check out, and they look really good... Which shows the investment and commitment to those technologies. The otel-cli is another one... And there's a couple of other examples in the Equinix labs, which is a great way to see some of the ideas which float around, and I'm sure new ones will appear next year.
368
+
369
+ **Zac Smith:** Yeah. Tinkering with hardware and software together? Come on by.
370
+
371
+ **Gerhard Lazu:** Tinkering, I love that. Like, where did Tinkerbell come from? Tinkering. There you have it. Let's tinker with hardware. I love it.
372
+
373
+ Zac, thank you very much for indulging my curiosity. I had a great conversation about hardware, and you gave me some crazy ideas for 2022, and I would love to have you back at Ship It. Thank you very much.
374
+
375
+ **Zac Smith:** I appreciate you having me here. Thank you.
Gerhard at KubeCon NA 2021 Part 1_transcript.txt ADDED
@@ -0,0 +1,587 @@
1
+ **Gerhard Lazu:** One of my favorite talks from KubeCon in May, the European one, was Overview and State of Linkerd, and you all did a fabulous job... But I have to say, between you and Matei, I'm not sure who was the better one. I think it was a great, great talk. No, seriously, how is Matei doing?
2
+
3
+ **William Morgan:** He is doing great. He is doing really fantastic. He's kind of a rising star in the CNCF. He was a Community Bridge participant as a student, just (I think) a year ago... And then he has already risen to the levels of Linkerd maintainer. So yeah, he's really fantastic.
4
+
5
+ **Gerhard Lazu:** I really love that story, like him shipping code... Going from nothing to shipping code for Linkerd - that was amazing to see. And the enthusiasm, and the fresh perspective - all that's been great.
6
+
7
+ So in May we heard many good things, many great things about Linkerd 2.10. I know that Linkerd 2.11 is out, so what is new, in the new version?
8
+
9
+ **William Morgan:** Yeah, great question. So 2.10 was a big step, and 2.11 is even bigger. This is the first time where we have introduced policy into Linkerd, which means that you can now control which services are allowed to connect in to communicate with each other. Prior to 2.11, whenever you told Linkerd "Hey, I'm service A, and I wanna talk to service B", Linkerd has done its best to make that happen. It'll do retries if there's a transient failure, it'll do load balancing, it'll do all this stuff. And now with 2.11, for the first time you can say "No, A is not allowed to talk to B, unless these conditions are met."
10
+
11
+ **Gerhard Lazu:** \[04:20\] Okay.
12
+
13
+ **William Morgan:** So that's a big -- you know, for anyone who's in the security world, this is the idea of micro-segmentation, and this sort of thing becomes very important.
14
+
15
+ **Gerhard Lazu:** How do you declare that? Do you have a UI, do you have a configuration? How does that work?
16
+
17
+ **William Morgan:** Yeah, so a lot of our design principles in Linkerd are to allow you to do powerful things with as little configuration as possible. And the way we do that typically is by sticking as close as we can to Kubernetes primitives. So rather than inventing some new version of a service - well, we just use regular Kubernetes services; rather than inventing an abstraction layer on top of these other things - we just give you those Kubernetes objects directly.
18
+
19
+ So we've tried to avoid introducing CRDs, and I think prior to 2.11 we had two CRDs I think in total, from two years of development, or five years of development, or however you wanna count it. But with 2.11 we introduced two new CRDs.
20
+
21
+ The way that it works is you express policy by using a set of annotations that you can set at the cluster level, at the namespace level, at the workload level... Or, in addition to that, you can add these CRDs that basically specify the types of traffic that are allowed. And that combination together is really elegant, because it means you can express a wide variety of things, from either a very open cluster that only has certain exceptions, like this sensitive service - you can only talk to it under these conditions, all the way to "Everything's locked down, and the only traffic that can happen is traffic that I've explicitly allowed to happen", and everything kind of in between.
22
+
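+ As a rough sketch of what that can look like in practice - assuming the 2.11 policy pieces are the default-inbound-policy annotation plus `Server`/`ServerAuthorization` resources, which is my reading of the release and worth double-checking against the Linkerd docs - a namespace can deny by default and then explicitly allow one mTLS-authenticated caller. The manifests below are built as plain dicts and printed as JSON, which `kubectl apply -f` also accepts; the names and labels are invented for the example:
+
+ ```python
+ import json
+
+ namespace = {
+     "apiVersion": "v1",
+     "kind": "Namespace",
+     "metadata": {
+         "name": "payments",
+         "annotations": {
+             # workloads in this namespace refuse traffic unless something authorizes it
+             "config.linkerd.io/default-inbound-policy": "deny",
+         },
+     },
+ }
+
+ server = {
+     "apiVersion": "policy.linkerd.io/v1beta1",
+     "kind": "Server",
+     "metadata": {"namespace": "payments", "name": "billing-http"},
+     "spec": {"podSelector": {"matchLabels": {"app": "billing"}}, "port": "http"},
+ }
+
+ authorization = {
+     "apiVersion": "policy.linkerd.io/v1beta1",
+     "kind": "ServerAuthorization",
+     "metadata": {"namespace": "payments", "name": "billing-from-checkout"},
+     "spec": {
+         "server": {"name": "billing-http"},
+         # only the checkout service account, over mutual TLS, may call billing
+         "client": {"meshTLS": {"serviceAccounts": [{"name": "checkout"}]}},
+     },
+ }
+
+ for manifest in (namespace, server, authorization):
+     print(json.dumps(manifest, indent=2))
+ ```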
23
+ **Gerhard Lazu:** Yeah. Okay. That makes perfect sense, especially from the Kubernetes' primitives side; I really like how you're thinking about that. But one thing which I really loved about Linkerd was the visual element - the dashboards, the graphs, all that stuff. That was amazing. So I'm wondering, from that perspective, do you also allow some customization via the UI, which then gets translated to those native Kubernetes primitives?
24
+
25
+ **William Morgan:** Yeah, so one thing we've never done, and probably never will, is allow you to create those objects through the UI. So we've always wanted the UI to be a read-only tool that allows you to understand the state of the system. But once you get into like -- you know, you're dragging a slider, or you're pressing buttons to implement YAML... It just gets very hairy very quickly. And the security concerns, and permissions, and all that stuff. So we've kept the UI totally read-only.
26
+
27
+ **Gerhard Lazu:** That sounds great to me. That is a very wise decision, and I'm sure we'll come back to this later, another time, not today... But that sounds great. So which is your Linkerd top-of-the-mind item? And this can be something that you will be working on, or something that is like a hard problem that you've been working for some time, or something that you're excited about Linkerd, which is outside of this release or outside of the features... Which is your top of your mind?
28
+
29
+ **William Morgan:** Yeah, so for me I think it's a theme more than anything else... And it's a theme that we didn't really expect when we were first starting to develop Linkerd, but it's one around security, around especially security of the traffic in your cluster.
30
+
31
+ So we came into Linkerd in the early days of the project very reliability-focused. Our background was at Twitter, and Twitter was constantly down, at least at the time... So our vision for what we were doing was we were gonna have load balancing, and retries, and blue-green deploys, and all these reliability techniques. And what we learned early on was that a lot of the use -- I mean, some people love that stuff, but a lot of the use of Linkerd was for Mutual TLS. Why? Because people wanted to encrypt the traffic in transit. Why? Because either they had these regulatory concerns - "Oh, we work with financial data, and the government basically says we have to do this", or they just have security concerns. "We're running in the cloud. We don't have any control over the network. The best practice is we should maintain confidentiality."
32
+
33
+ \[08:09\] So that was like our foray into the world of security, and that theme has continued to develop through the policy features, the micro-segmentation, and onto other features, more types of policies... You know, there's a lot more we can do in this area of "How do you secure the traffic in your cluster?" And it's a blossoming area, because everyone I think is becoming a little more comfortable with Kubernetes, so the operational concerns are -- I wouldn't say they're taken care of, but they're understood. And now they're in the world of "Oh, crap... Now I can run it, but how do I secure it? How do I make sure that if one node gets hacked, that everything doesn't fall apart?" Or more likely, if someone deploys a mistake, it can't accidentally delete our users, or expose sensitive information to the outside world.
34
+
35
+ So that theme has been just developing for us over the past couple of releases, and it's gratifying not just because things like that are cool, but because people are using it, and they're getting a lot of value out of it, which is kind of like the end goal of Linkerd; if no one's using it -- I don't know, to me that's a little unsatisfying.
36
+
37
+ **Gerhard Lazu:** Yeah. I know that that is a very big, complicated, meaty problem to tackle, which you're not going to solve in a patch release, maybe not even in a major release. It'll take many, many cycles to get it right... And it's changing as well, with all the new rules and regulations. I know that this is something which you are passionate about, because I've seen your blog post. I've only skimmed it, the one about MTLS in Kubernetes. I intend to go back and read it properly. That's a good one, so thank you for that. There's a lot there.
38
+
39
+ My top-of-mind is "Can Linkerd 2.11 still do linkerd install | kubectl apply -f?" Because that was amazing. You can install Linkerd in your Kubernetes with Linkerd? That just blew my mind when I first saw it, and I'm wondering, does it still work?
40
+
41
+ **William Morgan:** Yup, that still works. We've maintained that. That's not typically the production deployment, because people are moving into repeatable deployments, and Helm charts, and config-as-code... But yes, that still works. I think that's still really important, because a lot of people -- believe it or not, Linkerd has been around for six years at this point, or something... It was the first service mesh project ever. But people are still coming into it fresh face, like "Never heard of a service mesh before, I'm trying to understand this thing, I've just learned Kubernetes..." So there's a big audience to Linkerd every day, where you're not ready to like Helm it up; you're just trying to play around with this thing and understand it... So yeah, that still works.
42
+
43
+ **Gerhard Lazu:** How would you recommend someone that installs Linkerd in production? So this is a very nice getting started, which I find very valuable, especially when I'm trying things... I love when tools are really easy to use, and this is in my perspective one of the ways in which Linkerd is super-easy to get started with... But how would you recommend that someone installs Linkerd in production?
44
+
45
+ **William Morgan:** Yeah, so what we've seen basically is people using Helm, or Terraform, or tools that allow you to do it in a programmatic and repeatable way... And I think that's probably the best practice for production. You wanna be able to - especially if you're in the world of spinning up multiple clusters, or starting to treat your clusters as cattle and not as pets, you want those deploys to be repeatable, and you wanna know exactly how things were set up when you come back to it three years later. So you don't want it to be in someone's terminal window, and they closed their laptop three years ago, and then they left the company, and now you're like "Hm, I wonder how this was involved" So that's the best practice.
46
+
47
+ **Gerhard Lazu:** Okay. One of the things which I've seen and I quite liked, especially when it comes to some projects which can be a bit more involved to set up, is there's an operator which is just meant to install things, and then you apply a thing, and the operator knows how to install itself... Because then the thinking goes the operator can also automate upgrades, which I think is an interesting proposition.
48
+
49
+ **William Morgan:** \[12:02\] Yeah.
50
+
51
+ **Gerhard Lazu:** So does Linkerd have something like that, or is Linkerd thinking about something like that?
52
+
53
+ **William Morgan:** It's certainly something we've discussed in the past, and I don't think there's a reason why we wouldn't do it. Easing upgrades especially is something I'd love to do. The upgrade to 2.11 is actually pretty easy, but going from 2.9 to 2.10 was painful. Some of the configs changed, and stuff like that. I don't know that that would have been 100% automatable, but it would have been something we could assist, at least. And there's other operations too, that I think an operator would be helpful with. So yeah, we're open to it. PRs welcome.
54
+
55
+ **Gerhard Lazu:** Nice. Very smooth, very smooth. Okay. So the upgrade from 2.10 to 2.11 - is it just apply the Helm upgrade? Is that all it takes?
56
+
57
+ **William Morgan:** That really should be it. We didn't change -- there was one or two breaking changes around the mechanics of some of the multi-cluster stuff, but the majority of 2.11 is really additive... Which, again, is a theme that we try and stick to with Linkerd. So all of the policy stuff, which was a new feature - that's all built on top of all the MTLS stuff. And all the MTLS stuff is built on top of the Kubernetes primitives of service accounts, and mutating WebHooks, and whatever else. It just kind of compounds, and you get these very nice situations where the moment you install Linkerd - I mean, it's awesome that you can install it really quickly, but what's even more awesome to me is that when you install it and you mesh your pods, you actually have MTLS working out of the box there, without doing any config.
58
+
59
+ If you read that long, long MTLS guide that you talked about - the vast majority of that is complicated stuff, and at the end I'm like "But you don't have to do any of that, because you can just install Linkerd and it does all that stuff for you." And that means that all the policy stuff can then be built on top of the identities that MTLS provides, that are cryptographically-secure identities, and it's all done in this zero-trust fashion, where the enforcement point is at the pod granularity, it's not at the firewall or the edge of the cluster... So all this nice stuff happens.
60
+
61
+ **Gerhard Lazu:** Okay. Do you have any dependency on something like cert manager, or maybe a specific Kubernetes version? What does that look like?
62
+
63
+ **William Morgan:** So for Kubernetes versions we basically try and support the most recent three Kubernetes versions... And often we'll have support for earlier ones, but it's not -- really the policy is like "Okay, the most recent three." Now, if you really have to do something with an older release, maybe we can make that work.
64
+
65
+ In terms of dependencies on cert manager - there's not an explicit dependency, but one thing you do have to figure out when you're running Linkerd is the certificate rotation; not of the pods themselves, but of the cluster-level issuer certificate. We have some docs on how to have that automated with a cert manager... Or you can just remember to do it. But by default, if you run that Linkerd install command, that generates a certificate that's only valid for a year. So you have a year then to figure out, "Okay, here's how I'm gonna rotate it."
66
+
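+ Since that issuer (and the trust anchor) quietly ages out, a small expiry watchdog is a cheap safety net alongside whatever rotation you set up. A sketch using the third-party `cryptography` package - the file path and threshold are just examples, not anything Linkerd ships:
+
+ ```python
+ # Warn when a PEM certificate (e.g. a mesh trust anchor) is close to its notAfter date.
+ import datetime
+ import sys
+
+ from cryptography import x509
+
+ WARN_DAYS = 30
+
+ def days_until_expiry(pem_path: str) -> int:
+     with open(pem_path, "rb") as f:
+         cert = x509.load_pem_x509_certificate(f.read())
+     return (cert.not_valid_after - datetime.datetime.utcnow()).days
+
+ if __name__ == "__main__":
+     path = sys.argv[1] if len(sys.argv) > 1 else "ca.crt"
+     days = days_until_expiry(path)
+     if days < WARN_DAYS:
+         print(f"WARNING: {path} expires in {days} days - rotate it now", file=sys.stderr)
+         raise SystemExit(1)
+     print(f"{path} is valid for another {days} days")
+ ```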
67
+ **Gerhard Lazu:** Right. That's a good one. Yeah, that actually catches quite a few people.
68
+
69
+ **William Morgan:** It does.
70
+
71
+ **Gerhard Lazu:** They don't think about that.
72
+
73
+ **William Morgan:** Yeah.
74
+
75
+ **Gerhard Lazu:** But maybe if you upgrade, does it get rotated part of the upgrade? Because that would solve the problem... No, it doesn't.
76
+
77
+ **William Morgan:** No, it doesn't, because -- I don't believe it does. Actually, I'm not sure.
78
+
79
+ **Gerhard Lazu:** Okay.
80
+
81
+ **William Morgan:** But in relation to this, there's also the trust certificate or the trust route, which definitely doesn't get rotated as part of an upgrade... And that also has a one-year expiration. So you know, it is easy to install and it's easy to make things work, but like with any sophisticated piece of technology, as you push it into production, there's stuff that you need to be aware of. We actually wrote a production runbook for Linkerd on Buoyant.io. So if you want our advice as the company that has installed Linkerd and helped people operate Linkerd in a lot of different places - and in fact, we operated it ourselves - if you want our best advice for how to install it, you can read through the runbook; we talk about certificate rotation, and some other things you wanna be aware of.
82
+
83
+ **Gerhard Lazu:** That's a good one. Okay, I didn't know about that. Thank you, that's a great, great tip.
84
+
85
+ **William Morgan:** \[15:55\] You've gotta make sure you don't have clock skew between the nodes, because all these TLS certificates have time components, and if you've got a big clock skew, then things are not gonna be able to connect, even though they should. There's details. It turns out computers are complicated; as much as we try to simplify them, there's details.
86
+
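+ The clock-skew point is easy to check for ahead of time. A sketch of a per-node sanity check, assuming the third-party `ntplib` package and an arbitrary 30-second tolerance (pick whatever margin matches your certificate validity windows):
+
+ ```python
+ # Compare the local clock against an NTP server and fail if the skew is large enough
+ # to start interfering with TLS certificate validity checks.
+ import sys
+
+ import ntplib
+
+ MAX_SKEW_SECONDS = 30.0
+
+ def clock_skew_seconds(server: str = "pool.ntp.org") -> float:
+     response = ntplib.NTPClient().request(server, version=3, timeout=5)
+     return abs(response.offset)
+
+ if __name__ == "__main__":
+     skew = clock_skew_seconds()
+     if skew > MAX_SKEW_SECONDS:
+         print(f"clock skew of {skew:.1f}s - mTLS handshakes may start failing", file=sys.stderr)
+         raise SystemExit(1)
+     print(f"clock skew {skew:.3f}s - within tolerance")
+ ```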
87
+ **Gerhard Lazu:** So I'm wondering, what are you looking forward to the most when it comes to KubeCon? This KubeCon which is --
88
+
89
+ **William Morgan:** Oh, for me that's easy, and it's actually not really project -- well, it's kind of project-related... It's just being there in person with other human beings. For me, that's so gratifying. I think open source can be a little isolating, because of what a lot of your interactions with people are. They come into the -- you know, in our case, the Slack channel, they're like "Hey, I have this problem", and then you help them fix it, and they're like "Thanks" and they leave. And then the next person comes and presents you with another problem, and you develop this kind of transactional relationship. And what you don't see in that, which you do see in person, what you don't see on Slack, is people then go off and they deploy Linkerd and they're really successful, and their company is thankful, and everything is working well... They don't come back to the Slack to say -- well, sometimes they do... But usually, they're like "Okay, cool! Now I can do the rest of my job."
90
+
91
+ But in-person, when you talk to these people, you realize there actually are a ton of people who are running Linkerd, it's solving big problems for them, and now they have an opportunity to come up and tell you about that. So that aspect has always been really amazing for me. And the virtual conferences, as much as I like the convenience of not having to hop on an airplane, they don't quite have those same thing. So... That's a long answer to a short question. I'm looking forward to the human interaction...
92
+
93
+ **Gerhard Lazu:** Oh, yes. Don't we all. Don't we all. I wish there wasn't a screen today, yeah. \[laughs\]
94
+
95
+ **Gerhard Lazu:** Another human that's not part of my family! Isn't that nice?! \[laughs\]
96
+
97
+ **William Morgan:** They're sick of hearing about it.
98
+
99
+ **Gerhard Lazu:** Right. Okay. So if someone's listening to this, and you are using Linkerd, and especially if it works, and you don't think you need to get back to William and the Buoyant team and the Linkerd community - that's actually wrong. Go and show a sign of gratitude. Say "Hey, thank you. This is great." Share your use case, share what you like about it. Even if everything is perfect, sharing that is worth it. People will appreciate it. And you've heard this from William, so... Do as William says, that's what I say.
100
+
101
+ **William Morgan:** Yeah. At a minimum, swing by -- if you're at KubeCon, swing by and say hi.
102
+
103
+ **Gerhard Lazu:** Yeah, that as well. That as well. I wish I could swing by, but I can't. Next one.
104
+
105
+ **William Morgan:** Next one.
106
+
107
+ **Gerhard Lazu:** If you come to Europe... Because that's where the next one will be. So anyways... For the people that can't attend KubeCon, like myself, and they'll be catching up on videos - any advice that you have for those people? How can they make the most out of it, even though they can't be there in person, and some of them are just catching up on the videos. What can they do?
108
+
109
+ **William Morgan:** Yeah. So you know, I don't know if I have great advice. My relationship with virtual conferences is not a great one... It's just a different experience. I don't know, I think like many of us, I sit in front of a screen all day, and it's really hard to wanna keep doing that in any other form... But I will say, we have a Buoyant virtual booth, and we're trying to make that as fun and as interesting as possible. I'll be hanging out there... You know, even though I'm in-person at the event, I'll also be spending time on the virtual booth. We've got the runbook, and a bunch of other Linkerd stuff... We've got an opportunity for you to get -- I think we're raffling off Linkerd swag... So if you visit us, you've got a chance that we'll actually ship you a hat and some shirts, and stuff.
110
+
111
+ So I don't know about the rest of the conference, but I think the Linkerd booth at least will be interesting.
112
+
113
+ **Gerhard Lazu:** Okay. Did you have time to check the talk schedule? Anything interesting, any talks that you're looking forward to?
114
+
115
+ **William Morgan:** Well, now I'm gonna seem like a bad person, because I only looked at the Linkerd talks.
116
+
117
+ **Gerhard Lazu:** That's okay, that's fine. That's perfectly fine.
118
+
119
+ **William Morgan:** Yeah, we have one --
120
+
121
+ **Gerhard Lazu:** My kids are also the best, you know what I mean? \[laughs\]
122
+
123
+ **William Morgan:** \[19:43\] So there are two talks at KubeCon that I'm particularly excited about... Actually, one of them is gonna be a ServiceMeshCon, which is a day zero event, which I have mixed feelings about as a conference... But there is a really cool talk there from the folks at Elkjøp which is the largest retailer in the Nordics... And it's like a multi-billion-dollar business that everyone in that region knows about, about how they use Linkerd and Kubernetes to replatform their entire company. So that one's really cool... That's Frederic, who is also a Linkerd ambassador, and is heavily involved with the project, so it's really awesome to see him be able to talk about what he did with it.
124
+
125
+ And then the other one that I'm really excited about is from (I guess) the other part of the world, which is the folks from Maintain Australia; they have this amazing story where they basically 10x-ed the throughput of their entire system using Linkerd. They have a really big deployment, through a combination of load balancing and some other stuff. So again, they talk about that at KubeCon proper. I think that's on Friday.
126
+
127
+ So those two things I'm really excited about, because I've been talking to these people, to both of them, for a long time, and... Yeah, I'm just really excited to get their story out there. They're both really exciting stories.
128
+
129
+ **Gerhard Lazu:** Okay. I will make sure to check them out as well. I will put them in the show notes for people to check them out if they'll be available... But that's great, thank you for sharing that.
130
+
131
+ When it comes to the people that you're most looking forward to meeting - anyone in particular that you wanna shout out?
132
+
133
+ **William Morgan:** Oh, boy... Actually, I'm meeting a ton of people there... But is there any one I wanna shout out? No, I don't think so. \[laughs\]
134
+
135
+ **Gerhard Lazu:** That's good. It's too many. Let's pretend it's so many - like, no particular name comes to your mind. That's okay. That works, too.
136
+
137
+ **William Morgan:** You know, one thing that's weird is I'm gonna be meeting people who have worked on Linkerd for a long time, who I've never actually met in-person. That part is exciting. I'm gonna be meeting people who work at Buoyant who I've never actually met in-person... Even though I'm the CEO, I've never actually met them in-person, so I'm gonna meet them for the first time at KubeCon. I mean, that's just a sign of the crazy times we live in.
138
+
139
+ **Gerhard Lazu:** Well, I hope everybody shows up, and everybody will be just as excited as you to meet them.
140
+
141
+ **William Morgan:** I hope so.
142
+
143
+ **Gerhard Lazu:** And happy afterwards. Like, they all want to do it again.
144
+
145
+ **William Morgan:** Yeah. Everyone will be smiling behind their mask...
146
+
147
+ **Gerhard Lazu:** Exactly, yeah. You can't see it. So yeah, if they're frowning -- well, actually, if they're frowning, you can see. But anyways, anyways... Anything interesting happening in the next six months for Linkerd that you want to share? Anything coming up?
148
+
149
+ **William Morgan:** Oh, boy. Gosh, I feel like we've just had all the interesting things happen at once. We had graduation happen just like a few months ago, 2.11... And now we're planning 2.12, and 2.13... Do we have anything specific beyond some really cool releases coming up? I don't know... A lot of what I've been focusing on recently has actually been on Buoyant Cloud, which is our SaaS kind of complement to Linkerd... And there's a free tier, so you can check it out and you can use it without having to actually swipe your credit card, at least at small scales... And there, a lot of the exciting stuff we've been working on is how do we take all the cool stuff that's in Linkerd and actually extend that out, so that - you know, yes, you're getting metrics, but can we just post those metrics for you? Yes, you're getting data about which services are talking to which ones, can we draw that in a nice topology map for you? Yes, you're getting mTLS... Can we break down that traffic into these different categories? So there's a lot of cool stuff happening on the Buoyant Cloud side.
150
+
151
+ But yeah, I think from Linkerd - you know, a couple more releases... We're gonna keep going down the path of policy... The other big thing we wanna focus on is a mesh expansion, which means running the data plane, the proxies themselves, which are these ultra-light Rust proxies - running them outside of Kubernetes... Control plane is still gonna be in Kubernetes, but that way you can extend your mesh out to a VM, certain non-Kubernetes environments. Apparently, people run code outside of Kubernetes, so I hear...
152
+
153
+ **Gerhard Lazu:** Mm-hm. There is a world outside of Kubernetes. Sometimes for me it's hard to believe as well.
154
+
155
+ **William Morgan:** It's scary.
156
+
157
+ **Gerhard Lazu:** William, this has been everything I imagined it would be. Thank you very much for making the time. It's been my pleasure, thank you.
158
+
159
+ **William Morgan:** It's been an absolute pleasure to be here, and thank you for having me.
160
+
161
+ **Break:** \[23:32\]
162
+
163
+ **Gerhard Lazu:** So the first time and the last time that we spoke it was two KubeCons ago; that's how I measure it. And when I say KubeCons, I mean KubeCon North America. That was Changelog episode 375; we had a discussion with the Prometheus core maintainers, and you were one of them... And that was 2019, as I mentioned. So what is new with you, Frederic, since then?
164
+
165
+ **Frederic Branczyk:** So yeah, actually, since 2019, a lot has happened. I guess I can go chronologically from that point on. So in 2019 I actually did give a keynote at KubeCon in Barcelona - so that was the other KubeCon that was happening that year - about the future of observability. That was together with Tom, who I believe you spoke to at the same KubeCon as well.
166
+
167
+ **Gerhard Lazu:** Yes.
168
+
169
+ **Frederic Branczyk:** So we were talking about a couple of predictions that we felt were going to happen to the observability space, and one of my predictions was that I felt like continuous profiling was going to establish itself as an area within observability. And for that keynote I had put together a proof of concept that I very creatively called ConProf (you know, continuous profiling). It got some traction, but I never really had enough time to work on it beyond the proof of concept. Yeah, I guess at some point the pandemic probably had some part in it... Half a year into the pandemic I felt like there still wasn't enough being done in that space... So I thought to myself it's kind of now or never, and at the end of last year I decided to make it my full-time job, and I founded Polar Signals.
170
+
171
+ I guess because of the history of when I worked at CoreOS, and we got acquired by Red Hat, I had quite a lot of interest from investors pretty much immediately... But at the same time, I didn't feel like we had explored the space enough to take on VC money immediately, and raise money that we wouldn't know what to do with. I guess it's just me personally, the kind of person I am. I wanted to understand what I would do with money if we raised it, so...
172
+
173
+ **Gerhard Lazu:** I would like to stop you there, because this is really important, and I don't think listeners know this... Having looked at what you're about, it's not enough to observe, you have to understand. I think this understanding runs very deep for you, and I can see the connection to "You have to understand. You have to really know what you're doing", and I would like to connect these two dots, because they're important, and they will keep coming back. But please, carry on.
174
+
175
+ **Frederic Branczyk:** Yeah. Thank you for making that point. I think I know where you're going. So we started the company, and a really good friend from my CoreOS days - many years ago, at a GopherCon, he told me "If you ever start a company, I wanna be the first person to work with you." And he kept his word. In November 2020 he joined Polar Signals. Since then, a couple more people have joined, and in February of this year we launched a private invite-only beta of our product for continuous profiling... And I guess we should talk a little bit about what continuous profiling is.
176
+
177
+ \[27:56\] Essentially, profiling itself has been around ever since programming has. When we did our research, we found it had gone back at least to the '60s and '70s, because everybody, as soon as they started programming, needed to understand what was happening with the code that they had been writing. What was using the CPU time. And especially in the '60s and '70s it was so much more precious to have CPU time. So profiling has been around for a while. There had been two problems with it - one was for the longest time profiling was incredibly expensive to do in production. You would only do it to specific processes, on-demand, because you didn't wanna create too much additional overhead.
178
+
179
+ There were a couple of things that kind of led to us being able to do this in production and always on, and one of those things is what we call sampling profiling. So instead of tracing absolutely everything a process does, we only look a hundred times per second at what the program is doing at that particular moment in time, and capture the stack trace... Because essentially, the stack trace represents what the program is doing, right?
180
+
181
+ So for some hyperscalers this was already enough to build continuous profiling tools for them to consume internally, because they could do it always-on in production now.
182
+
183
+ Now, as it goes with so much cloud-native technology and developments, that wasn't necessarily accessible to everyone... And one of the really amazing things that also have happened somewhat recently has been eBPF. eBPF allows us to capture this data at an even lower overhead, because we can already capture it in the form that we are going to consume it afterwards. We don't need to use some pre-baked format that may have a ton of information that we don't need, a ton of detail we don't need. We can produce exactly the data that we want, and make that exportable to user space, and then ingest it into our storage.
184
+
185
+ So that was definitely also a really big part of what created a movement... But this doesn't really have to do with overhead. There's also another aspect, which is just kind of Kubernetes unifying the observability space, in a way. And I think we might have talked about this in our last session, actually... The way that Prometheus also, and Kubernetes have kind of standardized a lot of terms in our industry. It just makes us all speak the same language.
186
+
187
+ This is super-powerful, because all of a sudden, when I say pod, and you say pod, we immediately know what we're talking about. So this is much more cultural than it is technological, but it means that our knowledge is transferable. So this is incredibly powerful... And then the last piece is putting all of this together, eBPF with Kubernetes now allows us to automatically discover all of the containers that are running in our infrastructure, and be able to look at all the CPU time that is being consumed in our infrastructure at once. And the reason why this is so powerful is because all of a sudden we can now say "This stack trace in this binary is what's causing 20% of our CPU time." If we optimize this stack trace away, we're now saving 20% of CPU time in our infrastructure. That's huge. Think of the banks, automotive companies, any company that has a large cloud bill - they can save millions of dollars with these kinds of measurements. It's just, the reality is they can do these measurements today.
188
+
189
+ **Gerhard Lazu:** \[31:38\] And it doesn't really matter what language you're using, right? Because everything runs as a pod. It doesn't matter whether it's Java, whether it's Go, whether it's Erlang... It really doesn't matter. The point being, you run this agent on your Kubernetes worker node, where all these pods are being scheduled, and you can see - out of the pods which are being scheduled, out of the containers which are running within those pods - which are the ones that consume the most CPU. And I imagine this goes beyond CPU. It goes to memory, disk operations, network operations, I/O operations, all that nice, important stuff that the kernel knows about, and it presents to you via eBPF in a way that makes sense to you, and it doesn't matter what language is making that call... Whether you use a serverless framework... It really doesn't matter. It's really powerful.
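+ As a rough, hypothetical sketch of the deployment model Gerhard is describing - an agent on every node, with enough host access for eBPF - the shape is roughly the DaemonSet below. The names and image are illustrative only; this is not Parca's actual manifest (see parca.dev for that):
+
+ ```yaml
+ # Hypothetical node-agent DaemonSet - illustrative names and image, not Parca's real manifest.
+ apiVersion: apps/v1
+ kind: DaemonSet
+ metadata:
+   name: profiling-agent
+   namespace: observability
+ spec:
+   selector:
+     matchLabels:
+       app: profiling-agent
+   template:
+     metadata:
+       labels:
+         app: profiling-agent
+     spec:
+       hostPID: true                  # see every process on the node, regardless of language or runtime
+       containers:
+         - name: agent
+           image: example.org/profiling-agent:latest   # placeholder image
+           securityContext:
+             privileged: true         # eBPF programs typically need elevated privileges to load
+ ```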
190
+
191
+ I like the way you're thinking about this. I was going to ask you, Parca.dev is the thing that you're opening up to the world at this KubeCon, and I was going to ask you why Parca. But I think the answer is "To cost-optimize." But maybe there's something more to it...
192
+
193
+ **Frederic Branczyk:** First of all, I think -- and we said this in our announcement as well... I think just the people that we are and the company that we are building - I think we needed to have an open source piece to be ourselves. So even if there wasn't anything else, that would probably have already been enough of an argument for us... But I think more importantly, continuous profiling is still really young - even though there are now several vendors and several projects out there, all of that has happened in the one year that Polar Signals has existed, right? Several companies have sprung up, several vendors have created products... But it's still a really young space, and it's still not very well understood. So in a way, the open source project is also about democratizing this for the community and educating the community about continuous profiling, so that when we talk about continuous profiling hopefully in a year or two everyone understands it, like when I say distributed tracing.
194
+
195
+ **Gerhard Lazu:** So if I understand correctly, it's your need to understand what the system does, and the itch that you're scratching is you wanting to understand what is happening on those nodes. So that's why you did it. As simple as that.
196
+
197
+ **Frederic Branczyk:** Absolutely.
198
+
199
+ **Gerhard Lazu:** I love that. I love that.
200
+
201
+ **Frederic Branczyk:** The back-story actually goes a little bit further than where I started. The reason why I even went into putting together that proof of concept with ConProf was because I read a paper by Google where they described these methodologies, how they used these kinds of methods to cut down on infrastructure costs every quarter by multiple percentage points. And I was just amazed by that for several reasons. One, I just wanted to have this tool while I was working on Prometheus. And the other one was I had worked on Prometheus - at least I thought to myself, "I think I know a thing or two about working with data over time." So I think that's kind of what ultimately created the circumstances of me wanting to create a tool like this.
202
+
203
+ **Gerhard Lazu:** So I got the tool up and running in seconds. That just shows how easy it is to get started. This was just local. I didn't want to venture into our production Kubernetes cluster, because I have something else in mind for that... But in a few seconds, I could access the UI, I could see the CPU time, and the UI - what surprised me is it's better than the first Prometheus one that I remember. And I think the secret to this is your coffee machine. \[laughter\] Let me explain, okay? Let me explain. \[laughs\]
204
+
205
+ When I first heard of Parca a few weeks back, I checked it out, and it was looking good, but it wasn't as polished as it is today. Just in a matter of a few weeks, I was astounded by how fast you're iterating on it. And I think that it's your new coffee machine. Is that it? What's the secret?
206
+
207
+ **Frederic Branczyk:** I would say it has a part in it... \[laughter\]
208
+
209
+ **Gerhard Lazu:** Okay...
210
+
211
+ **Frederic Branczyk:** \[35:41\] I think the UI is actually an evolution of several attempts at it. The very first one was actually within our closed source beta product, where... You know, when we launched it in February this year, we used this to work really closely with a couple of early users to understand what is it that they -- beyond the UI even, what is it that they want from an experience from a tool like this... But then also, of course, with ourselves using this software, like how do we wanna use it.
212
+
213
+ I think there's so much dogfooding that was going on from basically day one, because this is a tool that we've built for ourselves. We wanted to put that work into it.
214
+
215
+ **Gerhard Lazu:** What do you use the tool for? This is really interesting. I love this story. I mean, there's a theme here... Every great product dogfoods itself. And the developers, and the product, and the entire team that works on it, uses it on a daily basis, understands the shortcomings, and fixes them, maybe even before users see those problems. I think there's a theme here. But how do you use Parca for Parca? This question, and the answer, fascinates me.
216
+
217
+ **Frederic Branczyk:** Yeah, so actually, this is a cool topic that I think we even wanna run blog post series about, because I think there are just so many aspects to this that I would love to talk about...
218
+
219
+ **Gerhard Lazu:** Can we have a short answer? Because this is a short piece... But it's obvious that we need a much longer one.
220
+
221
+ **Frederic Branczyk:** Yeah. Basically, boiled down, Parca itself is a really performance-sensitive software. It has a specifically designed storage and query engine, so that we can actually do all these amazing things with continuous profiling. So we use Parca to optimize Parca... This is kind of a vicious cycle, because we keep creating this more and more performant software to create more and more performant software, to do even more powerful things, to optimize it even further.
222
+
223
+ **Gerhard Lazu:** Oh, yeah.
224
+
225
+ **Frederic Branczyk:** So it's really addicting almost.
226
+
227
+ **Gerhard Lazu:** I love that. I love that. We do the same thing; I'm a big fan of that. That's it. That loop is one of my favorite loops. Amazing. So just to switch gears a little bit, and think about the KubeCon and what's going to happen this week... What are you looking forward the most at this KubeCon? Is there something you're looking forward to?
228
+
229
+ **Frederic Branczyk:** I think - of course, this probably reflects my own interests quite a lot, and what we do with Parca as well... But I'm really excited about how the eBPF space is evolving into more of a production-ready state, if that makes sense. I feel like it's very similar to the first hype wave of service mesh that we had, where everybody was talking about it, but no one was using it... And then one or two KubeCons after that, suddenly there were all of these great stories about how people were actually running it and using it in really useful ways.
230
+
231
+ And so I feel like we're kind of at a turning point with eBPF as well, where so many people have gotten their hands on it that we're suddenly seeing all these really incredible applications for it. So I'm really looking forward to a bunch of the eBPF talks that are coming out.
232
+
233
+ **Gerhard Lazu:** Any specific talks?
234
+
235
+ **Frederic Branczyk:** There's one by Derek Parker who works on the Delve debugger, which is kind of the de facto debugger in the Go community. I think he's doing some really interesting things. There's even some integrations into the debugger with eBPF. I find that really interesting... But the really cool thing about eBPF is almost its unpredictability of what you can do with it. Because it allows us to do such wild things anywhere in the kernel, attached to any kind of event, people have come up with super-innovative things that we were able to do in the past with kernel modules, but let's be honest, nobody really enjoyed the user experience of that. And now, all of these things are being productionized, and I'm just really excited about all the possibilities.
236
+
237
+ **Gerhard Lazu:** Hm. That sounds interesting. So anything eBPF-related, that's where your interest is... You and Derek Parker, did you say?
238
+
239
+ **Frederic Branczyk:** Yeah.
240
+
241
+ **Gerhard Lazu:** \[39:51\] Okay. I've heard Derek Parca... Derek Parker, okay... \[laughter\] That's a good one. Park- everywhere, right?
242
+
243
+ **Frederic Branczyk:** That was completely unintentional... \[laughs\]
244
+
245
+ **Gerhard Lazu:** Yeah, that's what happens... And I'm imagining that you're not going to attend the conference in-person, right?
246
+
247
+ **Frederic Branczyk:** Yeah... As much as I would have wanted to, unfortunately travel restrictions are still in place for traveling from Europe to the U.S. But you know, there's always another KubeCon.
248
+
249
+ **Gerhard Lazu:** Yeah, it was the same for me. You're right. I really wanted to be there in person. So what advice do you have for those who couldn't attend, and will be attending virtually, and some will be catching up on the videos, because they won't be able to attend virtually because of the time difference.
250
+
251
+ **Frederic Branczyk:** Yeah, I mean - look, it's like, half of the world that's not able to attend this KubeCon, so you're not alone... I know there are several folks that are doing just local meetups, or local virtual meetups, or just going for lunch or something, find your local group... Or if not, just watch the recordings. The platforms have become so much better since the first time we've done these virtual conferences. Just try to be a part of it as much as you can, given the circumstances...
252
+
253
+ You know, we've got KubeCon EU coming up next year, it's at the end of the winter, so no matter what happens, that's the time when Covid cases went down anyways. I feel like the next KubeCon in EU is gonna be great. A lot of us are gonna be able to attend that one, if not this one.
254
+
255
+ **Gerhard Lazu:** Those are some great tips. Is there anything interesting happening in the next six months for Parca that you want to share?
256
+
257
+ **Frederic Branczyk:** I think in a way a lot what we're -- we shared it really early intentionally, to understand what the community also wants from a project like this. We intentionally did not immediately release multiple types of visualizations, or we didn't immediately go all-in on a query language, or stuff like that. We do think these things are on the horizon, but it's just so much -- you're gonna create something so much better when you work with a community and talk to a lot of people. It's just like creating any product... But we just feel like we owe it to the open source community, because really, the open source community has made us who we are today... So if we can give back a little bit of that, then we've achieved our goal.
258
+
259
+ **Gerhard Lazu:** Wow. That's amazing. I wish everybody thought like that... And I think most people think like that in the CNCF space, and it just goes to show... That's it - this right here is the reason why the CNCF is as successful; because people think like you do. It's amazing to see that.
260
+
261
+ The one thing which I would like to do as we are wrapping this up is I want to congratulate you on the hiring page, which I think is a baseline for others to follow. It's simple, it's to the point, it's inviting... It makes me want to find out more, and that is saying a lot. So I would like to congratulate you once again... Like, well done for striking such a great balance... And I'm sure that it's so simple because a lot of thinking went into it, and a lot of refinement... And again, I'm seeing a trend here, which I really like; that's been great to see. Thank you.
262
+
263
+ **Frederic Branczyk:** Thank you.
264
+
265
+ **Break:** \[43:00\]
266
+
267
+ **Gerhard Lazu:** So the first time that I've heard about COSI was at KubeCon EU in May... And in that COSI talk, Andrew and Steven did an amazing job. My concluding thought was that it made me reconsider the operating system that I want for Changelog.com... And I do have to say that while I didn't get there, I'm really glad that we have this opportunity to talk with your amazing microphone, Andrew.
268
+
269
+ **Andrew Rynhard:** Yeah, I have since upgraded, since KubeCon EU. I think that was with my Blue Baby Bottle; this one is the Sennheiser MKH 416. This one is made for, like, voiceover, so... Yeah, I'm loving it.
270
+
271
+ **Gerhard Lazu:** It's an amazing sound, I have to say, and there's also something natural there, so I really like it. You know, listening to that talk, and seeing the visuals that Steven produced - they were amazing. That was a great one.
272
+
273
+ **Andrew Rynhard:** Awesome.
274
+
275
+ **Gerhard Lazu:** So since KubeCon EU, which is about five months now, what is new in the world of COSI?
276
+
277
+ **Andrew Rynhard:** So COSI proper - as far as what it is, and the GitHub org, and outside of Talos - doesn't look like it has changed much. But in Talos itself, we've been implementing a lot of the ideas, and kind of using that as a proving ground, if you will, for the idea. And it's actually working out phenomenally well. We have since rewritten the entire networking stack of Talos on top of the concepts of COSI, and it's really, really cool -- I mean, where do I even start... When you submit your configuration to Talos, the controllers just pick it up; they know when to set up bonding, they know the order in which you should set up the interfaces to get bonding going, there's validation on whether or not a particular combination of options for an interface will even work... Tons of validation around things.
278
+
279
+ We've since launched a product called KubeSpan, which we could probably get into more later, but it's basically a way to do automated WireGuard. And in Talos, all you really do is set two little configuration options to enabled: true, or something to that effect, and then, all of a sudden, all these nodes know how to reconfigure themselves reactively... And this is all really because of the ideas around COSI. Otherwise, we'd be stuck with SSH, and going in and manually executing classic UNIX utilities... And sure, it would work, but it would not feel clean; it would feel very hacky. So I'm pretty proud of what the team has been doing.
280
+
281
+ **Gerhard Lazu:** First of all, when I looked at Talos, it looked really interesting. The getting started part - I struggled a little bit. And you know, Sidero came along, and that made some things easier... COSI was really interesting, because the concepts - they were not specific to an implementation, but they were like a standard that you were trying to build, and I really liked that.
282
+
283
+ I do have to say, since trying Sidero - the first one I think was 0.1, when I struggled... I haven't tried it since. I know it's 0.3... So even though I would love to start with this, how would I start? Where would I go with Talos? Which is the first thing that I would do? What would you recommend?
284
+
285
+ **Andrew Rynhard:** Yeah, so we have the ability to basically spin up Kubernetes clusters right there on your laptop, built into our CLI. I'd say that that's the easiest way. If you wanna get a feel for what it's like to interact with an operating system that's API-driven and has a CLI, and doesn't have SSH, and all these things, that is the easiest way - you just do a simple command: talosctl cluster create.
286
+
287
+ \[47:55\] The good news is this kind of translates really well into, say, running it on bare metal. You could literally grab that configuration file, maybe modify the networking section a little bit, turn on a machine with an ISO file, and submit the configuration file that you had running from your mock environment - which, by the way, runs in Docker or QEMU. Those are probably the two easiest ways. One has the benefit of being more developer-friendly... Let's say that you're developing an application and you want something to represent your testing or production environments closely; that's when talosctl cluster create is really nice, because you could just spin up a Kubernetes cluster, you've got one a minute or two later, and it matches, at least API-wise, everything that you're gonna run in production. And then getting that to work in actual bare metal - that's another story. Typically, that just involves networking, and that's where 90% of all the problems happen.
288
+
289
+ So at that point, it's really just crafting the networking section, as we just talked about. COSI is gonna roll those out for you; well, Talos using COSI... The easiest way to get started on bare metal, I would say, is using the ISO. After that, PXE booting. PXE booting is a whole other level, and that's where we have our Sidero product, which aims to streamline that whole process and really own it for you... But that's the natural progression that I would go towards. Of course, you have the cloud in there somewhere, and right after you -- you know, that's where they diverge, right when you're talking about using the ISO or not.
290
+
291
+ In the cloud it's a little bit different. You have to have some image that's been uploaded, and all of our documentation goes through how to upload the image. In our releases we have the assets already prepared for you. You follow the documentation to upload the image into your particular cloud, and all you do really is turn it on with the correct user data.
292
+
293
+ So what I'm getting at really at the end of the day is - it just really boils down to "How do I get Talos just simply installed, or running somewhere?", whether that's a VM, or containers, or bare metal... And then it's just knowing the configuration file. In the same way that with Kubernetes I know that I have Kubernetes; do I really care where it's running? I know that I can describe my application and how it should run using declarative Yaml. We're bringing that same experience into the operating system.
294
+
295
+ So getting started - it's really just grasping the idea that you just need to turn Talos on, however that may be and wherever that may be, and get comfortable with the configuration file and being able to submit and update the system.
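+ To make "the configuration file" concrete, here is a heavily trimmed sketch of the shape of a Talos machine configuration. The field names follow Talos' v1alpha1 config, but the values are illustrative, and a real file (which talosctl generates for you) contains much more, including certificates and tokens:
+
+ ```yaml
+ version: v1alpha1
+ machine:
+   type: worker                        # or controlplane
+   network:
+     hostname: worker-1
+     interfaces:
+       - interface: eth0
+         dhcp: true                    # the "networking section" you may need to tweak on bare metal
+ cluster:
+   controlPlane:
+     endpoint: https://10.5.0.2:6443   # illustrative endpoint
+ ```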
296
+
297
+ **Gerhard Lazu:** I can see where I've been going wrong, because I usually start in the cloud, and I usually start with PXE booting. And I think that is possibly the hardest way... So if you start there without knowing the lay of the land, you went in extreme mode, so good luck trying to figure all those things out. I think this was actually even before COSI, like six months ago, nine months ago, somewhere around there. And I know that you've made strides since then, and things are clear, things are better, as you would expect. So I think that I know what I'm going to do next... And I'm someone that doesn't ever run Docker locally - I just like everything in the cloud, because if it's on my machine, well, how do I know it will run in the cloud? But I know that Talos makes this slightly different; usually, even though most things run locally, they will not work the same in the cloud, and that's always a friction.
298
+
299
+ **Andrew Rynhard:** I wanna touch on that, because I actually think that that's really important to point out, and that was actually a huge motivating factor behind Talos - because I was managing Kubernetes clusters, and at the first place where I was doing this we were debating, "Should we do this with bare metal? Can we run CoreOS?" Well, typically we ran CentOS, but we were also running this up in AWS... And I wanted this consistency story. And then we also had our developers that were saying "Hey, I wanna be able to actually spin this up on my local laptop and not depend on anything that you guys have set up." Even though we went to great lengths to give them testing environments, they still ended up just creating their own.
300
+
301
+ \[52:02\] So Talos is really beautiful in that sense, because it's literally the same image. The same image that runs right there on your laptop can be rolled out to anywhere - Raspberry Pi's, the cloud, bare metal... Anywhere that you can imagine. And the experience is going to be consistent, more or less. Obviously, when you're running in containers, you have the element of a kernel being the host operating system's kernel, and networking, and stuff like that... But that's minor. Those are things that you can kind of craft after the fact.
302
+
303
+ **Gerhard Lazu:** I feel that you've shared a secret with us, or at least with me... And now I know what I need to do next, so thank you very much for that.
304
+
305
+ **Andrew Rynhard:** Of course.
306
+
307
+ **Gerhard Lazu:** The next thing which I'm thinking about is why would someone want to pick Talos over, let's say, Debian or Ubuntu? What would you say to them?
308
+
309
+ **Andrew Rynhard:** Yeah, so this is a question we usually get. One of the main reasons that you really would consider Talos over, like you said, something like Debian, is because these things simply come with way too much at the end of the day. They come with package managers, they come with an extra set of packages that you simply don't need if all you're concerned with is running Kubernetes. In some cases you even have to do upgrades of the nodes for things completely unrelated for the purposes of running Kubernetes. And this is just unnecessary, to put it simply.
310
+
311
+ So the first point is the minimalism that you're gonna get with Talos. It's only about 15 MB. At the end of the day, you're gonna get something extremely small compared to everything else out there. You're gonna get no package manager. We don't even have SSH or Bash. And the reason why we did things like that - or why we removed those - was because if you've ever operated Kubernetes at any scale, you've found yourself constantly duplicating work. You had to manage users, you had to manage hardening, you had to manage automation... But at two different layers. You had Kubernetes itself that you had to worry about, and then the operating system itself.
312
+
313
+ So the whole goal with Talos is to just remove that Node element entirely, so that you can focus on just the cluster. We like to tell people that we want them to look at the cluster as one giant machine; and then nodes simply as more compute to that. So it's just more CPU and RAM to a bigger machine. We can't really look at it like that if we have to concern ourselves with who's logging on there, what if they changed permissions, automating it... This overhead simply should go away. And that's first and foremost one of the reasons why you should consider Talos.
314
+
315
+ And secondly, we have a really strong security emphasis. We recently just went through a whole exercise of actually securing our supply chain. So now everything's completely reproducible, you can get all of the checksums and make sure that you're actually running the intended version of Talos. The file system is read-only. As I mentioned, Talos is only 15 MB; what I didn't mention is that it's delivered as a SquashFS, which is only read-only, and there is no other way to run it. It is also completely ephemeral.
316
+
317
+ Now, Kubernetes of course needs places to write things, and there's only one place in Talos that's writeable; it's /var. At least writeable in the sense that it's going to be persisted across reboots. Of course, we have /tmp and things like that, but that is completely ephemeral and only Talos uses those places.
318
+
319
+ So you're gonna get a much more hardened experience. You're gonna get people that can't -- you're gonna completely eliminate the possibility of people going on there and making a node a snowflake. It's really just Kubernetes that can change. So that's a huge benefit when you're talking about running anything more than ten nodes.
320
+
321
+ **Gerhard Lazu:** I know that everybody's thinking about supply chain attacks, and security of everything - software, developers, signing... Can you sign everything, from your commit to the release, to the artifact, to what it runs, when it runs, so that you can trace it all the way back to the origin of the code being written? That's really, really important. I really like this minimalist story, not just from a security perspective, but that you only run what you absolutely need, and you run it with the least privileges; that is very powerful, and I think it somehow has been forgotten in the age of containers and Docker, because it was like the Wild Wild West for a long time... And I'm really glad these concerns are now coming back, because I know how important they were 10-15 years ago. So I can see the cycle; we're back where we started.
322
+
323
+ \[56:28\] So from that perspective, I know that these minimalist systems, one of the things that they replace - and I wonder if Talos does the same thing - is glibc, with something like musl. And what then tends to happen is glibc is a lot more hardened, battle-hardened, battle-tested, so the performance of anything on glibc tends to be better. So what I've seen is weird crashes, weird degradations, weird I/O performance when you don't run glibc. So what does Talos use?
324
+
325
+ **Andrew Rynhard:** We actually use Musl, and we haven't seen that at all. And I think that may largely be due to the fact that the only reason that we run musl - let's see... We only have a handful really on the rootfs. We have Containerd, and we have xfsprogs, and maybe some LVM tooling, and then Talos itself. So the actual C libraries that are running in Talos are practically negligible; it's practically zero. We don't even have systemd. In fact, our init system is a new init system that we're building for the purposes of these style of operating systems, API-driven operating systems. So that is written in Go. Practically, everything that we do is in Go.
326
+
327
+ So I think maybe that can be attributed to the fact that we are running musl, but we haven't run into any issues. And then, Kubernetes itself, since it's delivered in containers, those containers have glibc. So the role that musl really plays in our ecosystem is very small.
328
+
329
+ **Gerhard Lazu:** Yeah. I always had it the other way around. Usually, the host would run glibc, and the container would run musl. And then that combination, from that direction, always seemed to create problems. This was about two years ago I remember, when we were looking at the RabbitMQ image, in the context of running it at performance, at scale, what's the most you can get, and then you have the Erlang VM, so it's slightly different... But I do remember the Alpine-based images had all sorts of weird issues that the Ubuntu ones never had. But then again, this is the container image, this is not the host. So I'm really curious to try it out for myself and see what is different. Who knows, performance could even be better. Which kernel version are you using, by the way?
330
+
331
+ **Andrew Rynhard:** We are on the latest LTS. I think it's 5.10.62.
332
+
333
+ **Gerhard Lazu:** Nice. Okay, okay.
334
+
335
+ **Andrew Rynhard:** We used to run the latest Linux kernel, and we still kind of go back and forth on what we should do... And I think we're now leaning more towards LTS, because the changes that Linux introduces sometimes just cause us more headaches, especially when you're on the bleeding edge versions of it. But the latest LTS so far has been really great for us. We've been playing with the idea of maybe having LTS-style releases of Talos, which are pinned to LTS versions of the Linux kernel, and then having more edge versions, which are running the latest stable kernel. But today, we're still playing with the overall strategy that we wanna take long-term, and we just kind of settled on LTS for now, because that's kind of a safe play.
336
+
337
+ **Gerhard Lazu:** So speaking about LTS' strategy and roadmaps - anything interesting coming in the next six months? So between this KubeCon and the next one for Talos and COSI?
338
+
339
+ **Andrew Rynhard:** Yeah, I'd say the biggest one is this week we're announcing KubeSpan, which is -- I mean, I'm just super-excited about this idea, and I haven't even explained it yet...
340
+
341
+ **Gerhard Lazu:** Okay... Yes, please... That sounds very interesting. Please.
342
+
343
+ **Andrew Rynhard:** \[59:50\] Yes... The idea is that, since Talos can run practically anywhere, we're finding people want to bridge, say, bare metal clusters with instances running in the cloud. And so far, there haven't been any good solutions for this. With Talos we're kind of uniquely positioned; since it's API-driven, we own the whole stack, we've got COSI managing the network... And so what we did is we went ahead and actually wrote tooling to basically automate the key distribution and peer discovery of the WireGuard VPN. So I can spin up a cluster right here in my closet that's running on Raspberry Pi's, and extend that out to AWS really simply, really easily... And the latency is somewhere like -- I think the latency that WireGuard adds is somewhere around a millisecond... So it's negligible. But you get this consistent experience network-wise, regardless of where you're running that particular node. Even the pod traffic can be routed over it. Kubernetes can actually be configured to purely talk over the WireGuard network.
344
+
345
+ So the idea with this long-term, the vision is that we're gonna have users/customers running in the data center, bare metal, which is a large part of our user base - all of a sudden they have an influx in traffic, and they need to expand the cluster. But they don't have the resources. Okay, fine. Let's just expand out to AWS momentarily, and when things calm down, we'll scale it back down to our core infrastructure. Or even another data center, spill it over to another data center.
346
+
347
+ Now, a completely different use case, but very similar, is maybe the edge. I have some Raspberry Pi's that I actually wanna join up to a cluster at the core, which is hosted in AWS. But maybe these Raspberry Pi's are running in shipping trucks, and they have intermittent network connectivity. That's kind of troublesome when you're talking about running the Kubernetes control plane... But a worker - you know, it can kind of go in and out, and I think the story there could be better on the Kubernetes side... But at least using WireGuard, as soon as they get any kind of networking, whether it's some WiFi when they pull up to a store, or mobile data - they can join the cluster with WireGuard, and everything just seems as if they're right there, on the same network.
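+ For reference, the "two little configuration options" Andrew mentioned earlier map - as the Talos documentation describes it - to enabling KubeSpan on the machine and discovery on the cluster, roughly like this (check the Talos docs for the exact fields):
+
+ ```yaml
+ machine:
+   network:
+     kubespan:
+       enabled: true     # automated WireGuard between nodes
+ cluster:
+   discovery:
+     enabled: true       # lets nodes find each other's WireGuard peers
+ ```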
348
+
349
+ **Gerhard Lazu:** That's really interesting... So let me see if I understood this correctly. You're saying that you can scale out your Kubernetes clusters on-demand, wherever, whether it's your closet, or whether it's on the data center, or the cloud... You can maintain the same privacy of the network, everything is encrypted, the data on those workers - you think it's ephemeral data, so that you don't store any state there, so that you can scale back in... And KubeSpan makes this seamless. Is that what you're saying?
350
+
351
+ **Andrew Rynhard:** That's exactly what I'm saying. Of course, there's little caveats, like -- the way WireGuard roughly works is you need at least one direction of communications. So in the case of, say, my private cluster running right here in my closet, it needs to be able to at least reach the workers. The workers don't necessarily need to reach it. It can establish the channel that way.
352
+
353
+ So there are some limitations within the system that you can find in the documentation; this stuff is over my head when it comes to networking. Something around cone NATs, and stuff...
354
+
355
+ **Gerhard Lazu:** Is it IPv4, or IPv6? What network does it lay down? Or dual stack?
356
+
357
+ **Andrew Rynhard:** Either.
358
+
359
+ **Gerhard Lazu:** Wow. Okay, I wanna try it out.
360
+
361
+ **Andrew Rynhard:** Yeah, you should.
362
+
363
+ **Gerhard Lazu:** I wanna try out how it all actually works.
364
+
365
+ **Andrew Rynhard:** It's pretty neat. In fact, one of our engineers - he just created a video of him just spinning up Talos right there in QEMU, right there on his laptop, and then joined an AWS Graviton instance to it.
366
+
367
+ **Gerhard Lazu:** Wow, okay.
368
+
369
+ **Andrew Rynhard:** So it's pretty neat. I'm super-excited about it.
370
+
371
+ **Gerhard Lazu:** I will put that link in the show notes, because that sounds like something which I would want to try out. That sounds amazing. Okay, okay... So - shifting focus a little bit towards KubeCon and what's happening this week. First of all, will you be attending in person?
372
+
373
+ **Andrew Rynhard:** I will, I can't wait.
374
+
375
+ **Gerhard Lazu:** You will. Okay. Amazing. What are you most looking forward to? Meeting people, let me guess...
376
+
377
+ **Andrew Rynhard:** \[01:03:56.07\] I just wanna see another human. That's exactly what it comes down to. No, actually - that is true, but more specifically, the thing that I'm really looking forward to is meeting everybody that works at Sidero Labs. We've been fully remote for two years now. I think I've only met a couple of people that are currently at the company, and I just can't wait for us all to get together and just have a dinner, go to the bar, whatever. Just have a good time and actually not have to worry about seeing each other over pixelated streams and audio issues. So just seeing another human is gonna be really nice, and especially meeting everybody that's a part of the company.
378
+
379
+ **Gerhard Lazu:** Wow. So this is two out of three people -- actually, both people that go to KubeCon in person, you and William Morgan... You're both looking forward to the same thing. William Morgan from Linkerd, from Buoyant - he was saying the same thing. Meeting the rest of his company, meeting the community, and meeting another human being. He's really looking forward to that.
380
+
381
+ Okay, I think everybody's on the same page... And I have to say, those that couldn't make it in-person, myself included, we wish we could be there... But by the time EU comes along, I'm sure things will be easier, and then next year, for the next KubeCon North America, I hope to be there in-person, and meet all the great people that -- you know, KubeCon is so big; you can never meet everybody that you want to, but at least there will be fewer people this year, so it'll be a bit better for meeting in-person.
382
+
383
+ **Andrew Rynhard:** Yeah. And speaking of EU, we will be there as well, too... So maybe we could see each other then.
384
+
385
+ **Gerhard Lazu:** Amazing. Okay, yes. Tick! So what advice do you have for the people that can't attend the conference in-person? Anything that you recommend to them?
386
+
387
+ **Andrew Rynhard:** You know, nothing that you're not gonna get from the CNCF as far as their recommendations go. Attend their virtual booths... I would say join the CNCF Slack. That was really fun when I did KubeCon EU; just talking to people, and all kinds of random channels... That was a blast. It did a decent job of giving me that camaraderie that you're looking for when you go to KubeCon. So I'd say that you should sign up for that immediately.
388
+
389
+ **Gerhard Lazu:** Okay. And then what about the people that want to do catch-up videos? Because for example, it may be too late in the night for them and they can't be up all hours... Anything you would tell them?
390
+
391
+ **Andrew Rynhard:** Set aside enough time, because there are a lot of really cool things... And just try to prioritize. Because you're not gonna get through all of them; figure out the ones that probably are most applicable to you, things you're most excited about, and just have fun watching them.
392
+
393
+ **Gerhard Lazu:** Speaking about that - which talks are you excited about? Anything in particular?
394
+
395
+ **Andrew Rynhard:** I've noticed my taste has changed ever since I've become into a role where I'm playing more of a management role and business role. I do get hands-on technically, but less and less over time... So I'm finding myself gravitating more towards things like building community... There's a particular talk on how to make contributors maintainers... Building your brand, stuff like that.
396
+
397
+ Technical stuff - there is one on supply chain that I wanna go look at... But I am reserving a lot of time for just talking to people as well. So I'll maybe grab a few, but they're gonna be less technical.
398
+
399
+ **Gerhard Lazu:** Okay. Well, Andrew, this has been a pleasure. I'm really glad that we had this opportunity. KubeCon EU just flew by and I didn't have time, but now I'm so glad that we had this time together. I'm really looking forward to trying Talos, to trying Sidero, and seeing KubeSpan, how well does it work in practice. Thank you very much for sharing all these amazing things with us.
400
+
401
+ **Andrew Rynhard:** Yeah, thank you for having me. It was a blast.
402
+
403
+ **Break:** \[01:07:29.11\]
404
+
405
+ **Gerhard Lazu:** So KubeCon is my favorite time to catch up with the cloud-native community, with the people, with the events, new features, new products... It's such an eventful time, KubeCon. I love it. But also new beginnings. So we only spoke -- was it like a month ago? It wasn't that long... Episode \#18.
406
+
407
+ **David Flanagan:** Yes, it was around 4-5 weeks ago I think it was...
408
+
409
+ **Gerhard Lazu:** And you have been really busy in this one month, right? So tell us about it. What happened in the last month?
410
+
411
+ **David Flanagan:** Well, we brought a new person into this world, which has been rather time-consuming...
412
+
413
+ **Gerhard Lazu:** Okay...
414
+
415
+ **David Flanagan:** I can't remember if we spoke about this during the last one...
416
+
417
+ **Gerhard Lazu:** We didn't.
418
+
419
+ **David Flanagan:** My wife was pregnant, and now we have a beautiful baby boy who's entered this world. His name is Caleb; he is two weeks and five days old... And because that wasn't enough change in a short period of time for me, I also decided "You know what - let's change jobs as well." So the last time we spoke, I was working at Equinix Metal, and I am now a developer advocate for Pulumi.
420
+
421
+ **Gerhard Lazu:** So I think that this is going to be my favorite announcement from this KubeCon, which is the newest and youngest member of the cloud-native community, Caleb. He's - what, two weeks? Three weeks?
422
+
423
+ **David Flanagan:** Two weeks and five days, yeah.
424
+
425
+ **Gerhard Lazu:** Well, I don't think there is a younger member of the cloud-native community. So two weeks and five days, you said... That's just crazy. Okay...
426
+
427
+ **David Flanagan:** Well, he will be watching some of the KubeCon festivities and talks remotely with me, as obviously in the U.K. we are travel-banned until November 1st... So I will be participating as much as I can through my laptop and through the video material... And I'm sure Caleb will be throwing up on me for a good few of those sessions.
428
+
429
+ **Gerhard Lazu:** \[laughs\] Or falling asleep, I would like to think... Like, during those boring sessions... Boring to him, obviously. Like, "Kubernetes what?!" He'll just fall asleep to "SPIFFE this" or "SPIFFE that". Yeah, that sounds like a nice nursery rhyme. Anyways... I just thought about this - this is maybe the best strategy to shift your body clock to the West Coast timezone without actually traveling, right? Because a new baby will keep you awake through the night, so you can watch all the talks; you'll be awake. I hadn't thought about this, but this is genius, David.
430
+
431
+ **David Flanagan:** Yeah. And being up around the clock like that, you know, I'm not getting my regular 7 or 8 hours of sleep at all anyway. So why not spend some of those times awake catching up with some great cloud-native material, and stuff like that. It'll be good. And of course, it's KubeCon; it's been remote for the last four editions. I think this is the fourth remote one since the pandemic.
432
+
433
+ **Gerhard Lazu:** Yeah.
434
+
435
+ **David Flanagan:** So you know, the hallway track on Slack, and Discords, and Twitter... Twitter is always very active, so there's always something to keep you company during those late nights.
436
+
437
+ **Gerhard Lazu:** So which is your process of joining remote KubeCons? Tell me about it. And then I can share with you my process and see how it compares to yours. How do you do it?
438
+
439
+ **David Flanagan:** Well, I wish I could say I was really methodological about it, and I knew exactly what talks I was gonna watch each day, but I don't. I really just kind of show up and log into the platform and see what's happening then and there. I definitely watched a lot of it after KubeCon, so that I could do the 2x on YouTube; I'm very guilty of 2x-ing a lot of these sessions and slowing down as required... But I do try to catch a few things live as much as possible, and it's really just -- especially with having a young one right now, my method is gonna be slightly different from previous KubeCons. So I'm really just taking it day by day. We're on a DC event right now, I'm logging on, I'm going "Okay, I've got 40 minutes. What can I catch right now?" and just try to do as much as I can in the moment. But it's not as well-planned as I would expect. I'm sure you've got it down to the letter, right? You must know exactly every session you're gonna check out.
440
+
441
+ **Gerhard Lazu:** \[01:12:05.17\] Yeah, something like that... Actually, I try to drop in on all of them. I'm making use of three monitors, plus an iPad... I have a picture from the last KubeCon that I attended... And then I just like watch three sessions and I mute, and I just pick one, listen for a minute, then switch to another one, switch to another one... And that's how I just like consume three at the same time. And then when something is -- I mean, it's interesting, but maybe there's something more interesting, I just switch to another one. But I can consume three, that's my max. I think four will be a bit challenging.
442
+
443
+ When it comes to the sessions, I don't pick them ahead of time, because the titles and the descriptions can be misleading. I try to drop in on them as you would do, and then I just pick and choose. But I try to drop on all three of them, which is impossible if you're in-person... So I think this is the best way to do it virtually when it comes to consuming the talks. But what about interacting with the KubeCon, the rest of the attendees? How do you do that? Or do you even do that?
444
+
445
+ **David Flanagan:** Yeah, I do try and remain active. I'll go back to the schedule first, actually, just a little bit on that... So you're not the first person I've heard who has multiple talks running at the same time. There's that community member NoJarJ who does four or five talks at the same time as well, and Slacking between them. I don't know how yous do it. I'm a complete single-tasker. I don't have the focus or attention span to do multiples, so - mad respect there. But I'm always following the operations track mostly this KubeCon, as I was the chair of that track. I helped select all of the talks that you're going to see.
446
+
447
+ **Gerhard Lazu:** Oh, wow. I didn't know that...
448
+
449
+ **David Flanagan:** Yeah, I don't think I've actually told anyone. I didn't really talk about it on Twitter either, but I did chair that...
450
+
451
+ **Gerhard Lazu:** That's amazing. Wow...
452
+
453
+ **David Flanagan:** I helped pick all the talks. If you don't like them, it's sadly my fault, and one other person... But it should be a pretty good KubeCon.
454
+
455
+ **Gerhard Lazu:** Right. Okay... Well, then that means that you know all the talks that are -- well, not all the talks... Like, you have a good idea of the talks that are coming, the themes, the speakers... That's amazing. Anything that you would recommend in particular? Something that resonated with you from that track?
456
+
457
+ **David Flanagan:** Yeah, I think my bias definitely helped work some of the selection. I've got an affinity for GitOps and infrastructure as code, so there's a lot of really good sessions that feature using Argo for deployment... You know, we've talked a bit about that last time; I think we're both fans of the project... And we're just seeing more and more sessions submitted on Argo every single year, and it's just because of the demand. People wanna be able to do this automated GitOps dev-based deployment. So you'll see a lot of sessions there.
458
+
459
+ A lot of sessions on infrastructure as code, with Terraform and Crossplane really popular this year... We've seen a lot of submissions talking about Crossplane...
460
+
461
+ **Gerhard Lazu:** Interesting.
462
+
463
+ **David Flanagan:** ...which is great to see. And of course, there's a few preliminary sessions on there from some of my new teammates from Pulumi as well.
464
+
465
+ **Gerhard Lazu:** I see, okay. So when it comes to GitOps, Flux or Argo? What do you think?
466
+
467
+ **David Flanagan:** Oh, I'm so on the fence... I actually use both. I really love the simplicity of Flux; it just seems to work. But I love the Argo UI, and I wish I could merge them together sometimes. Previously I mentioned that Flux are working on the UI; it's still super-early. I don't recommend people use it yet. There are many, many bugs. But I do tend to use Flux, but I'm getting more familiar and comfortable using Argo.
468
+
469
+ I think the challenge with Argo is the custom resources are slightly more complicated, especially when you have to adopt the App of Apps model, which is an app to deploy an app, which has sub-apps... And I haven't really got my head around that completely; I'm not as fluent with it as I am with Flux, but I definitely think both tools are really great. I don't think you can go wrong with using either. I think it comes down to just whichever one you've used first, whichever one you're comfortable with. They're both great projects.
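+ For readers who haven't met the App of Apps pattern, here is a minimal sketch of the parent Application - expressed with Pulumi's Kubernetes provider so that the examples in this transcript stay in TypeScript; the repo URL and paths are hypothetical. The parent's only job is to point Argo CD at a directory that contains the child Application manifests:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Parent "app of apps": Argo CD syncs this one Application, which in turn
// pulls in every child Application manifest it finds under apps/ in the repo.
const appOfApps = new k8s.apiextensions.CustomResource("app-of-apps", {
    apiVersion: "argoproj.io/v1alpha1",
    kind: "Application",
    metadata: { name: "app-of-apps", namespace: "argocd" },
    spec: {
        project: "default",
        source: {
            repoURL: "https://github.com/example/gitops-config", // hypothetical repo
            targetRevision: "HEAD",
            path: "apps", // directory holding the child Application manifests
        },
        destination: {
            server: "https://kubernetes.default.svc",
            namespace: "argocd",
        },
        syncPolicy: { automated: { prune: true, selfHeal: true } },
    },
});
```

+ Each file under apps/ is itself an Application pointing at a real workload, which is where the "an app to deploy an app, which has sub-apps" nesting comes from.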
470
+
471
+ **Gerhard Lazu:** Yeah. So it's a matter of trying them, I suppose, and seeing what works for you. That's one of my favorites.
472
+
473
+ **David Flanagan:** Well, we got this announcement - was it two years ago? Maybe you'll remember... But the teams -- was it Intuit who were the original creators of Argo? -- and Flux had this big joint announcement where they said "We're gonna consolidate both of the tools to get this one GitOps tool to rule them all", and it was gonna be called GitOps Toolkit... It never really happened, and now we're back to this divergence again, where we have multiple tools kind of trying to fulfill the same thing.
474
+
475
+ **Gerhard Lazu:** \[01:16:07.02\] Yeah, 2019... I was actually there, at that KubeCon, and I was so excited. That was also the North America one. And I would like to dig more into that to see why that happened. I didn't get a chance to speak to the Flux team. They're on my list, they really are... But it's like -- you know, too many things happening. But the day will come. There's a GitOps Days, I think; there's like a summit coming, like next week, I believe...
476
+
477
+ **David Flanagan:** Maybe. I can't remember exactly.
478
+
479
+ **Gerhard Lazu:** Yeah, something like that. It's happening as well, and that will be an interesting one to watch. But I would really like to understand what happened there with Flux and Argo, and what are the strengths and the weaknesses of one versus the other. The UI - that's like a good one.
480
+
481
+ I do have to say, even though I have tried Argo, I haven't tried Flux. So this GitOps summit which is coming, I'm hoping, I'll be able to try it out in that context. I'm looking forward to that. Okay...
482
+
483
+ **David Flanagan:** I think the Flux toolchain is a bit better when it comes to being a bit more agnostic on the tools you wanna use to actually generate the Yaml. You know, not all of our GitOps repositories are straight Yaml manifests; we're using tools like Kustomize, or the Carvel dev tools, or we're using Kapitan. There's so much choice there. Decision fatigue is real, especially in the cloud-native landscape...
484
+
485
+ Flux makes it a lot easier to say "I wanna use a tool to generate the manifest before we do the apply stage." With Argo I think it's a little bit more convoluted. There has to be a concept of a provider, if I remember correctly... And they're not all supported. But that could have changed since the last time--
486
+
487
+ **Gerhard Lazu:** Cool. I definitely have to follow up on it, so thank you for that. Thank you, I really appreciate it. In the context of KubeCon, coming back - so there's the operations track. Any other track that you're excited about?
488
+
489
+ **David Flanagan:** All of them, I think. I'm in a really unfortunate position, which you probably are as well - we need to really stay on top of a lot of this, as well as our day jobs... And we have our extra-curricular activities where we need to be knowledgeable on a lot of these domains. So I really am watching all of the tracks as much as possible, and 2x-ing all the talks on YouTube. But anything to do with continuous integration and delivery is something that I'm really keen on following, talks about infrastructure as code, of course... I definitely love tools that are doing this.
490
+
491
+ One of the reasons I joined Pulumi is just because it directly is everything I love doing with platforms, which is taking the primitive tools that we have, like Flux, and Argo, and Kubernetes, and cloud providers, and being able to give developers a platform to deploy their application. My interest and Pulumi's interest are just the same there. Infrastructure as code, continuous integration, continuous delivery - those are the main things I wanna see from KubeCon this year.
492
+
493
+ **Gerhard Lazu:** I would like to dig into that a bit more, because that's like the other big thing that changed in the last month for you, the new job with Pulumi. I think Kat Cosgrove - is she there as well, at Pulumi, I believe?
494
+
495
+ **David Flanagan:** Yes. Matt Stratton, Kat Cosgrove and Laura Santamaria - they are my teammates. They're the developer advocacy team at Pulumi, and I'm joining in with some great people there, definitely.
496
+
497
+ **Gerhard Lazu:** Yeah, a big shout-out to them. That was the first thing which I wanted to do. And the second thing is ask you, as I asked you before, "Why Pulumi specifically?" Why Pulumi? Could you see this one coming? Let's be honest, did you see this one coming?
498
+
499
+ **David Flanagan:** Of course, of course.
500
+
501
+ **Gerhard Lazu:** Okay. \[laughs\]
502
+
503
+ **David Flanagan:** I always look back at my career, and I've always worked for relatively small shops. Every time I write a line of code, I've always been responsible for the deployment to production. And I've never had that throw over the wall scenario.
504
+
505
+ So infrastructure as code and continuous integration and deployment - these are just things that I've always had to do. I've never been able to dodge that bullet, unfortunately. I think I cut my teeth like the rest of us using TerraForm and HCL, and I think -- TerraForm is a fantastic tool. No one's ever gonna say otherwise. But it has some really rough edges when it comes to programmatically defining some elements... Like defining the nodes in a cluster, or doing loops or conditionals... These things get a little bit tricky because of the constraints of the HCL language.
506
+
507
+ \[01:19:54.29\] Now, I know with TerraForm 0.10 they started to bring in some of these primitives, but these primitives already exist in high-level programming languages, which is where Pulumi shines. It comes down and it says "Well, you can just define your resource graph using the language that you're familiar with." I'm a big fan of Go, I'm a big fan of TypeScript; they're both options available to me... But Pulumi also supports any of the .NET languages, it supports Python, and I'm sure there's other things coming, and there's some really cool announcements that I managed to find out just yesterday, coming to KubeCon...
508
+
509
+ So there's all these languages that already have loops, conditionals, the ability to provide a single function - that's my favorite thing about Pulumi. So I'm going off on a bit of a tangent here, but being able to say I want a Kubernetes cluster on GCP, and I want different node pools that look like this. And I want a load balancer, and I want some applications deployed to that cluster as part of the bootstrap process. Now, I could do that as an HCL TerraForm module, but as a TypeScript Pulumi application I can actually make that a function call, publish it to npm, and then anyone can pull that out. You can literally do an npm install of the rawkode Kubernetes cluster package, call that function as many times as you want, and get all these clusters with everything encapsulated in that way... And I just think that is a super-power. And I think once you see that and you start to use that approach, looking back at abstractions like HCL or Yaml, you're just like "Why? Why am I constraining myself to the opinions and the subjective nature of other people that think that that's the best way to do it, when my experience may be slightly different?" Programming languages are the best way to encapsulate that knowledge.
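+ A rough sketch of what that function-call-plus-npm idea can look like, with hypothetical package, resource and parameter names throughout - a single exported TypeScript function that provisions a GKE cluster and a node pool with Pulumi's GCP provider, which a consumer could then install from npm and call as many times as they need:

```typescript
// cluster.ts - a reusable "good Kubernetes cluster" building block (illustrative only).
import * as gcp from "@pulumi/gcp";

export interface ClusterOptions {
    location: string;      // e.g. "us-west2"
    nodeCount?: number;    // nodes in the pool
    machineType?: string;  // node pool machine type
}

export function createCluster(name: string, opts: ClusterOptions) {
    // Control plane only; the default node pool is removed so pools are managed explicitly.
    const cluster = new gcp.container.Cluster(name, {
        location: opts.location,
        removeDefaultNodePool: true,
        initialNodeCount: 1,
    });

    const pool = new gcp.container.NodePool(`${name}-pool`, {
        cluster: cluster.name,
        location: opts.location,
        nodeCount: opts.nodeCount ?? 3,
        nodeConfig: { machineType: opts.machineType ?? "e2-standard-4" },
    });

    return { cluster, pool };
}
```

+ Published to npm, a consumer's whole program can then be a couple of calls, such as `createCluster("staging", { location: "us-west2" })` and `createCluster("production", { location: "us-east1", nodeCount: 5 })`.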
510
+
511
+ **Gerhard Lazu:** So this is really interesting, from multiple perspectives. I see a couple of products, tools, however you wanna call them, enter this space in recent months... One which is top of my mind, which is, by the way, an episode that's going to ship, I think, this week. I mean, by the time you're listening, it'll be like a few weeks back... Dagger. And I really like how they're making use of CUE and BuildKit. So CUE as a language to define these things sounds really interesting... So I'm wondering, how does CUE compare to HCL and Pulumi? Pulumi in the case of Pulumi being the actual programming language... Versus something like Crossplane, which is supposed to be your control cluster, which then you define your compositions and your -- there's something else we have called that I forget, Compositions, and - it's not an abstraction... Do you remember what it is? There's a composition in Crossplane...
512
+
513
+ **David Flanagan:** Yeah, the XRDs. So you can actually have a single resource, but then create multiple sub-resources below it. I think they're called compositions, or XRDs.
514
+
515
+ **Gerhard Lazu:** Yeah. And there's like another name... So it's two things. The compositions are the things that you can bind them in... But they have these providers that interact with all the IaaS's. You can declare your Yaml, so you declare your GKE cluster in Google, and it just makes it happen, along with all the other things that you want within that IaaS, and it works across IaaS's. So I'm wondering, how does Pulumi compare with Crossplane? Let's start with that. And how does Pulumi compare with Dagger, which is using CUE, rather than a programming language?
516
+
517
+ And CUE - I mean, it is kind of a programming language, but it's more like a data language; that's the way I see it. And I know that you know a bit more about CUE, with Brian Ketelsen. You have CueBlox?
518
+
519
+ **David Flanagan:** Yeah, Brian Ketelsen and I are the creators and maintainers of CueBlox. We're both huge fans of CUE. We think it's just a great language for defining schemas, applying constraints, and even doing some basic comprehensions and mathematics with them.
520
+
521
+ So it's not a Turing-complete programming language, but they are starting to add more query APIs and other things to bring it in line with some of that.
522
+
523
+ So I really like Dagger... I have done an episode with Solomon on Rawkode Live, where we dug into Dagger and we did some deployments. I think it's a really good tool, and I love seeing CUE used in this way. It's very similar to TerraForm, in the regard that you have to have something that understands the abstract form. The HCL, the Yaml, or even the CUE is just compiling down to Yaml at the end of the day anyway.
524
+
525
+ You're still constrained in that you can't do a lot of conditional logic. Loop logic does exist in CUE, and you can do some things like that... But then modifying things within the loop gets a bit difficult, because you've only got access to that array count. So it depends on your use case. But I think Dagger is great, and that they're moving beyond, into like where Boundary is as well. I'm not sure if you're familiar with HashiCorp's Boundary...
526
+
527
+ **Gerhard Lazu:** \[01:24:12.25\] No.
528
+
529
+ **David Flanagan:** But I think that second step is like "Okay, we provide the platform or the infrastructure, but what about the applications that then belong and live on that infrastructure?" And that's where Boundary comes in, fulfilling the continuous delivery component of your application. And Dagger kind of moves right into that and provides like a single interface to all of it, which I think is really cool. But the constraints are still there, very similar to HCL.
530
+
531
+ With Crossplane, things get really interesting. Crossplane is still defined -- you're still constrained by Yaml. You can only do so much there; it's not programming, so you're not gonna be able to provide a function that does a thing, but you can provide a composite resource that does a thing.
532
+
533
+ What I really love about Crossplane is that continuous reconciliation. That's something Pulumi doesn't do yet, and it's the first thing I wanna change; I'm gonna be like "We need to get into this space."
534
+
535
+ **Gerhard Lazu:** Oh, yes.
536
+
537
+ **David Flanagan:** We have to control the actual reconciliation, and not just the client-side reconciliation. So I think Crossplane is killing it there. I don't think any other product is as good as Crossplane in that regard. The fact that I can have that controller running on my Kubernetes cluster means that if I delete an S3 bucket, it's gonna be recreated. Of course, there are things that can happen there that are bad; there could be data in the S3 bucket, and we'd have to build workflows onto it to restore it from a backup... These are not things that really happen yet, but Crossplane is working on it, and I know they are because they're a great team. Crossplane is great, they have great reconciliation with the Kubernetes event model, which is gonna be familiar to a lot of people, and they're gonna be really happy with that approach. I want to see Pulumi do more of that - control the execution of Pulumi, and not just have a client-side.
538
+
539
+ And Dagger is great. Solomon and the team are fantastic. It's still not a programming language, but you can still do some really cool things with CUE. I think where Dagger is going to excel is that if something is difficult to do with TerraForm, and even difficult to do with Crossplane, you have to have the provider first. Dagger has made it really easy to provide really simple providers by just taking the CUE and saying "this is what I need to do." It's a very small amount of Go, there's not a lot of boilerplate, and I think we'll see a lot of adoption because of that... But hopefully, Pulumi is now well-positioned to try and help on both of those fronts as well.
540
+
541
+ **Gerhard Lazu:** The other tool that I've seen take a similar approach is CDK from Amazon, where you get to declare your infrastructure using a higher-level language. TypeScript - I know that's something which is pushed at Amazon, which makes sense, with CDK... I've used it briefly; it was okay. Way better than using the Yaml alternative. That was like the most horrible Yaml I've seen in my life, where you get to do like "inc", which is the function, and then get two arguments which are defined like in an array, and then you get an operation, you capture the result, and then you reuse that result as a variable. That was horrible, and all defined in the Yaml. That was crazy. That was the craziest Yaml I've seen. CDK was better in that respect, so I can see some similarities there.
542
+
543
+ It's interesting that you run it client-side. And when you say client-side, I imagine the CI could run it as well if it has all the secrets, but still, it's not built into the product. So that's interesting. Maybe there's a Pulumi cloud? I don't know. I don't know enough about Pulumi is what I'm getting at; and also, what I'm getting at is I would like to find out more... So you know what the follow-up is, right?
544
+
545
+ **David Flanagan:** Yeah. CDK is a really cool tool, and it's very similar to Pulumi. It doesn't have the provider support, and it doesn't support the TerraForm providers out of the box... You know, like what Pulumi tries to do with their generators.
546
+
547
+ The CDK is awesome, and I think what really excels here is that both Pulumi and the CDK shine when you're using TypeScript. I think it's such a great language for infrastructure as code, because it's strictly-typed. You can define interfaces for the different properties; anything you need to expose, you're just using the export keyword. All of these things just -- TypeScript is just great. I think if you haven't tried to do any infrastructure as code using TypeScript, or CDK, or Pulumi, you should just go try it. It's so cool.
548
+
549
+ \[01:27:56.28\] And the way that the Node ecosystem and TypeScript allow you to pass functions around - functions are first class: they can be exported, they can be renamed, they can be bound, they can be higher-order, you can pass functions into functions... The flexibility there is phenomenal, so I encourage everyone to try TypeScript first, before going to any of the other languages.
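+ As a contrived but runnable sketch of what "functions as first-class values" buys you in an infrastructure program - a naming convention captured as a plain function and handed to a higher-order factory; every name here is hypothetical:

```typescript
import * as gcp from "@pulumi/gcp";

// A naming convention is just a function value...
type Namer = (resource: string) => string;
const teamNamer: Namer = (resource) => `platform-${resource}`;

// ...and a higher-order factory accepts it, so every bucket created through
// the returned function follows the convention without repeating it anywhere.
export function bucketFactory(namer: Namer) {
    return (name: string) =>
        new gcp.storage.Bucket(namer(name), { location: "US" });
}

const bucket = bucketFactory(teamNamer);
const assets = bucket("assets");   // logical name "platform-assets"
const backups = bucket("backups"); // logical name "platform-backups"
```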
550
+
551
+ **Gerhard Lazu:** But not you. You're Go, right?
552
+
553
+ **David Flanagan:** I do most of my Pulumi in TypeScript.
554
+
555
+ **Gerhard Lazu:** Really?
556
+
557
+ **David Flanagan:** I have started doing it in Go, and I just -- it's not as nice. Error checking all of the time is still very present in Pulumi Go, so I just stick to TypeScript, actually. When I was working at Equinix Metal, I handled all of the Tinkerbell CI/CD infrastructure using Pulumi with Go, and it was super-painful.
558
+
559
+ **Gerhard Lazu:** Oh, interesting.
560
+
561
+ **David Flanagan:** I actually opened an issue going "Please let me do this in TypeScript."
562
+
563
+ **Gerhard Lazu:** Okay... And how did that go? Is it still open, the issue?
564
+
565
+ **David Flanagan:** We closed the issue and left it in Go just because the work was done, but TypeScript has first-class functions, support for higher-order functions, being able to pass them around, being able to publish to npm... There's just so many convenience factors there. That ecosystem is great. Dependencies in Go - does anyone love them? Probably not.
566
+
567
+ **Gerhard Lazu:** Yeah, I know... Things are better now. I mean, I still have nightmares from 6-7 years ago. Early Go, when it was just released. It was amazing as a language, but oh my goodness, the whole dependencies... I keep forgetting, there were all these tools which were being invented, which were like half-working, and mostly not working. I even forget the names of those tools, and they were so annoying. They were trying to be helpful, they were trying to address the pain, but I think they were causing more pain in the process. So I remember that, and that's actually a good point.
568
+
569
+ **David Flanagan:** Yeah, we used to vendor everything and commit them to our own Git repositories, which was terrible. And then we had that semi-official Dep, which just magically disappeared, because GoMod came out, with 1.10, or 1.11... 1.11 I think it was. And it's been better, I've gotta say. Since more projects are now running GoMod, my life is easier, but it's still definitely challenging.
570
+
571
+ **Gerhard Lazu:** Okay. So as we are getting close to wrapping this up, I have one more thought which I wanna share with you... And it's more like a question, really. What happens with Rawkode?
572
+
573
+ **David Flanagan:** Oh, that's not stopping. I've been taking a nice break, spending time with my family for the last couple of weeks... But Rawkode Live will be back in anger in November, with just more -- you know, the cloud-native ecosystem is not standing still. There are so many projects out there. I think what we will see changing in Rawkode is I'll probably move away from just high-level introductions to all these tools.
574
+
575
+ It's great having the founder there and just showing people how to get started, but I really wanna get into use case-specific stuff. I've been talking to more people in the community and going "What are you actually doing with this tool? What problem is it solving for you?" So I think we can show people not just the Getting Started guides from all these projects, but "Here's a real use case that this organization has, and here's what they're doing with this tool", to give people a bit more inspiration, and hopefully to remove some of that cognitive -- what did I call it there...? Fatigue. Decision fatigue. We wanna try and remove some of this. If you're standing there and you're like "What GitOps tool do I use?" or "Which CI do I use?" Like, okay - what is your use case? Is it similar to this organization, or this one? Here's the one they use, and how they're going about it and what they do with it. So yeah, you'll see more use case-driven stuff in the next few months.
576
+
577
+ **Gerhard Lazu:** That's really exciting, because I'm thinking exactly the same way. I mean, it's great to have all these conversations to get people interested, and to get people steered into what resonates with them, so that they know what's out there... And there's so much out there, as you mentioned. But once you do that, you kind of start -- I don't know, you feel which way you'd want to go, which way gets you most excited, and then the next natural step is to explore that space. You don't want to stay shallow all the time. I mean, breadth is very important; there comes a point you wanna go a bit deeper than the first hour or the first two hours, which is just the very early beginning of any tool, really.
578
+
579
+ **David Flanagan:** Yeah. Everything we do is difficult. Software development is not easy, it doesn't matter how long you've been doing it. In fact, it probably gets harder the longer you've been in it. But I think having that breadth of knowledge of what the tools are, when to use them and roughly what they do is really important for everyone. But at some point, you do need to go down and actually use it in anger. You have to be able to solve real problems with the tool. You might even actually be a consultant, and you can jump from company to company and just say "Oh, use this tool, and use that", and then move on and never actually have them implemented. But at some point, you do need to use these tools in a real use case-driven fashion... So yeah, I wanna tackle that and make that easier for everyone.
580
+
581
+ **Gerhard Lazu:** Well, I'm really looking forward to the new and better Rawkode Live, and I'm looking forward to what you do next... But I encourage you to take these couple of weeks, months, however long it's going to be, to make sure everything is nice and smooth, that the transition into the new job is smooth... The onboarding is very important, and very often it's skipped. You just get thrown straight in the middle of it. And that can be okay; it's not always bad. But sometimes it's better to just go slower, go smoother, get the lay of the land and enjoy it... Because we keep moving too fast through things, I feel... It's like an acceleration of the next thing, the next thing, and there's not enough time for enjoying or appreciating the present.
582
+
583
+ **David Flanagan:** I couldn't agree more. I've definitely taken another couple of weeks just to spend time with the family, and then I'll come back in November, hopefully do some Rawkode stuff. I've got big plans for Klustered, big plans for Rawkode Live, and with Pulumi being my new role, I think it's the first time in a long time that my particular interests in technology are directly aligned with the work that I'll be doing. So again, lots of great stuff.
584
+
585
+ **Gerhard Lazu:** And we're looking forward to that, David. Thank you very much for joining us. This was great, thank you.
586
+
587
+ **David Flanagan:** Thank you for having me, it's a pleasure.
Gerhard at KubeCon NA 2021 Part 2_transcript.txt ADDED
@@ -0,0 +1,561 @@
1
+ **Gerhard Lazu:** So I've attended the last KubeCon virtually - this was KubeCon EU - and I got the impression that the biggest trend then was eBPF. Everybody was talking about it, and some were calling it "the JavaScript for the kernel", "Kernel 2.0", all sorts of references. How do you think about eBPF, Liz?
2
+
3
+ **Liz Rice:** So I've also heard that idea of it being -- it's expressed as eBPF is to the kernel what JavaScript is to an HTML page, in that it makes it programmable. Kind of interesting analogy, but it kind of makes my brain hurt, so I find it easy to just think about the kernel.
4
+
5
+ So what eBPF allows us to do is to run custom programs that we load into the kernel and we associate them with events. And because there are so many different types of events that we can attach our programs to, and because they're in the kernel, there's only one kernel per host, so these programs have access to pretty much everything that's happening on the entire machine, and that makes them incredibly powerful and incredibly useful for observing what's going on, security, and of course, networking as well. So yeah, I'm very excited about eBPF.
6
+
7
+ **Gerhard Lazu:** \[04:15\] That was exactly my impression as well. I really like this idea where we have all those containers running on this host, and then you have many hosts... But still, when it comes to the hosts, why is this particular set of containers struggling? What is going on there? Networking is such a big issue, even today. I think things are getting better, but I remember 3-4 years ago it was like the Wild Wild West in the world of Kubernetes. IP tables? Oh, my goodness me. Don't get me started.
8
+
9
+ **Liz Rice:** \[laughs\]
10
+
11
+ **Gerhard Lazu:** Yes... So I think eBPF is making things a little bit more visible, a little bit more understandable, and that helps.
12
+
13
+ **Liz Rice:** And we can skip past those IP tables by just --
14
+
15
+ **Gerhard Lazu:** Yes...
16
+
17
+ **Liz Rice:** ...we'll just ignore that. We'll just use eBPF instead, and that does lead to some genuinely measurable performance improvements, which is really nice.
18
+
19
+ **Gerhard Lazu:** So when it comes to the end users, what is the eBPF helping them with? Understanding things, networking? Is there something more to it? I mean, that's at the surface. If we peel back the first layer, what do we have underneath?
20
+
21
+ **Liz Rice:** So I think one thing to be clear about is that although a lot of us as engineers are getting very excited about eBPF programs, and I love to talk about "Hey, let's write an eBPF program", in reality, most people are not going to need to write eBPF programs themselves, much like most of us aren't involved in kernel programming, but we use the kernel all the time. And I think we're increasingly gonna see tools that build on eBPF primitives, if you like, and offer us really useful abstractions. There's lots of different projects in the CNCF that are starting to do that, and I'm sure we're gonna see some more coming forward.
22
+
23
+ There's a history of observability in particular using eBPF. Brendan Gregg has been doing amazing work for several years with all these different command line tools you can use to measure, get metrics on pretty much everything that's happening across your system. But until recently, that's all been very command-line driven, quite low-level. How many TCP packets have been dropped is a very useful question to be able to answer, but sometimes you want a higher-level abstraction, and I think that's why we're seeing a lot of the innovation, in this bringing eBPF power and capabilities into tools that are at the kind of levels that answer questions for end users.
24
+
25
+ **Gerhard Lazu:** Okay. So I know that one tool that you're very familiar with is Cilium, and I'm wondering where Cilium and eBPF meet, because end users - I think they would know more about Cilium features and what that helps them do, see and understand, and less about eBPF specifically, the technology that Cilium makes use of.
26
+
27
+ **Liz Rice:** Yeah, so Cilium has always made use of eBPF. It was originally created as a networking project that uses eBPF to create that network plumbing between different endpoints in your system. And I think probably a lot of users just know it as a Kubernetes CNI. But it's actually a lot more than that. It's also offering sort of a CNI with lots of bells and whistles, so things like observability, being able to look at network flows, network policies, so giving you security enforcement at a network level... And increasingly, some of our roadmap features take it to the next level with things like eBPF-based service mesh, which -- I think service mesh is a really great example of something where, by running code in the kernel, we don't have to instrument each individual application, and that's a big benefit; it's gonna make things much simpler for people to deploy.
28
+
29
+ **Gerhard Lazu:** \[07:58\] So does Cilium -- I know that it exposes all these metrics and all this visibility into what is happening under the hood, especially from a networking perspective and from a communication perspective... But Cilium - what are the components in the Cilium product, or project? ...I'm not sure how you wanna call it. Because obviously, there's the CNI, and there's other things. What are the big components that make Cilium?
30
+
31
+ **Liz Rice:** Yeah, so when you run Cilium, you install a Cilium agent on every node, and if all you want is networking capabilities, then that gets you going. You probably want to start being able to see those network flows, and to do that, you'd install a component called Hubble, which collects this network information along with the Kubernetes identity associated with it.
32
+
33
+ So if you look at Hubble flows, you can see traffic flowing between different Kubernetes pods. And then there's also a Hubble UI which pulls that flow of information, brings it into a much more sort of human-readable form... For example, showing you a service map, and showing you how traffic is flowing between these different Kubernetes services. And maybe there are issues. You can see the packets that are being dropped within that UI. So that's very useful in terms of debugging a network issue.
34
+
35
+ **Gerhard Lazu:** What about when it comes to alerting, monitoring, that side of things? ...when there is a problem, you being informed that "Hey, there's a problem." Is there such a component, or would you integrate Cilium with something else for that capability? What does that story look like?
36
+
37
+ **Liz Rice:** Yeah, you'd integrate that with something else. I think a lot of people will push the flow data into some kind of SIEM, for example.
38
+
39
+ **Gerhard Lazu:** But I'm thinking about, for example, packet loss. There's a lot of congestion, or lots of retries, whatever the case may be. Is there a way to monitor or to consume the Cilium metrics, I'm assuming, and then have alerts?
40
+
41
+ **Liz Rice:** So you can absolutely get the metrics into Prometheus, or show them in Grafana... There's some beautiful screenshots in the -- I can't quite remember where I saw them recently, but just this whole series of amazing Grafana graphs that you can use to diagnose your network.
42
+
43
+ **Gerhard Lazu:** Okay. I'm not sure whether you can tell by now that I'm really interested in trying Cilium out for real, in a production environment. I really am. And I'm trying to figure out what the components are.
44
+
45
+ So my next question would be "Where would you recommend that I start? Do I take the Helm chart, is there an operator? What does the Getting Started look like?"
46
+
47
+ **Liz Rice:** So there's a few different options... There is a Helm chart, there's a command-line tool, the Cilium CLI, which makes it as simple as installing the CLI, and then "cilium install", and "cilium hubble install" if you'd like to add that...
48
+
49
+ **Gerhard Lazu:** I like that Getting Started. Oh, yes.
50
+
51
+ **Liz Rice:** It does really make that Getting Started experience nice. Also, if you want a helping hand, we're just about to start a series of weekly install fests. So the idea is to have a session with someone who's experienced in Cilium, they're kind of guiding you through the process, and it'll be interactive, so that if people have issues and questions, they can get help along the way. So that's kicking off -- I think our first one is either this week or next week, but there's a new feature on the Cilium.io website to book your place on one of those install fests.
52
+
53
+ **Gerhard Lazu:** I love the sound of that. I wasn't expecting for that answer, but that's amazing. That's exactly what I'm looking for, so thank you, Liz, for thinking ahead of time...
54
+
55
+ **Liz Rice:** \[laughs\]
56
+
57
+ **Gerhard Lazu:** This is perfect. Okay, I really love where this is going. So I'm thinking of watching you code live, which is at the top of my list for this KubeCon. It's one of the must-do for me at this KubeCon, to watch you code live. Can you tell us a little bit more about that? Where the idea came from, how do you intend to do that, what are you intending to cover...
58
+
59
+ **Liz Rice:** Yeah, so I've done a few talks about eBPF programming... There's lots of different frameworks and libraries that you can use, and you can write your user space code in different languages, like Python, and Go, and Rust now as well. My Rust isn't quite up to doing live-coding in that myself, but... \[laughs\]
60
+
61
+ **Gerhard Lazu:** \[12:10\] What do you use for live-coding?
62
+
63
+ **Liz Rice:** I typically use either -- Go is kind of my go-to language, but for ease of demonstrating a lot of eBPF capabilities, I'll quite often use the BCC framework, which supports Python. That's also very easy to read in a live coding environment. And occasionally, I've done some in C. Because the kernel programs - the eBPF programs that you're actually running in the kernel - are typically written in C, though they can now be written in Rust, so I'm gonna have to up my Rust game... But because the kernel part is often written in C, a lot of eBPF programmers are also comfortable in that language, and I'm therefore writing the user space part in C as well.
64
+
65
+ **Gerhard Lazu:** Okay. And what would you like to cover in those sessions? Which is your step number one, step number two? What do you tend to cover in those? I haven't watched one, but again, top of my list, as I mentioned.
66
+
67
+ **Liz Rice:** Yeah, so the kind of step one is usually Hello World. I think that's step one in any programming scenario. And running a little program in the kernel that will just trace out Hello World in response to perhaps a system call, or perhaps a network event. And that's very easy to set up.
68
+
69
+ Then maybe we go down the direction of "How do we get information in and out of the kernel?" So there's a concept in eBPF called maps, which are shared data structures, so that we can pass information between eBPF programs, or into user space, between kernel and user space. Or maybe we go in a networking direction. I did a virtual office hours yesterday, where I did some -- live-coding maybe is... I made my life easier in yesterday's virtual office hours by having some pre-prepared code and sort of commenting and uncommenting things out... But it's all running live, so... \[laughs\]
70
+
71
+ **Gerhard Lazu:** I think that's the best way to approach it, if you think about it, because live coding is about going through it and explaining to users "This is what this does." It's less about typing. I think that's the least interesting part. And it's how we think about things, and how we start structuring things... I think that is really, really helpful. But yes, I will -- do you have a live coding session today?
72
+
73
+ **Liz Rice:** Yeah, I've got one 6:30 UK time today, and another one tomorrow that I think is a little bit earlier, but I've gotta check my calendar...
74
+
75
+ **Gerhard Lazu:** Okay, okay. Great. Thank you for that. Okay. So I know that this KubeCon - one of the things that you do is you have a talk, Cloud Native Superpowers with eBPF. I know that it's really late for you - 12:30 you said? I was looking... So I intend to join in and keep you awake.
76
+
77
+ **Liz Rice:** Oh, thank you...
78
+
79
+ **Gerhard Lazu:** How are you with heckling? Do you like heckling?
80
+
81
+ **Liz Rice:** I love heckling. I love questions. \[laughs\]
82
+
83
+ **Gerhard Lazu:** Okay, great. So that's what I intend to do. That sounds good.
84
+
85
+ **Liz Rice:** Fantastic.
86
+
87
+ **Gerhard Lazu:** I know that Duffie Cooley recently joined you... What's it been like working with him? And by the way, hi, Duffie, if you're listening.
88
+
89
+ **Liz Rice:** Yeah, Duffie is great. We're so pleased that he's joined us at Isovalent. He's in L.A. at the moment, so he and Dan, our CEO, are our kind of on-site presence, and then most of the team are kind of involved more remotely. But yeah, we're super-excited to have Duffie; he's such a great -- he's got so much experience in networking as well. I've always sort of known him more from a security and obviously Kubernetes background. It turns out he has loads of networking experience as well, so... He's fabulous to have on board.
90
+
91
+ **Gerhard Lazu:** Okay. Do you get to pair with him, or just bounce ideas off? What does working with him look like? I know that you have live shows with him, I know about that... What happens outside of that?
92
+
93
+ **Liz Rice:** Yeah, so we are eight hours different, so that makes it a little bit more difficult to collaborate than ideal, but... Yeah, we're definitely figuring out some of the ways that we want to tell stories, doing eCHO, which is our livestream; that's a lot of fun to do together... Yeah, it's a delight to have him in the company.
94
+
95
+ **Gerhard Lazu:** \[16:05\] That sounds great. Speaking about KubeCon - I know that you'll be remote, virtual... I've seen even your Twitter tagline change; I'm thinking of doing the same - it's a great idea. "I'll be at KubeCon, but virtually. So I'll be there, but you won't see me, unless it's online." What are you looking forward to the most at this KubeCon?
96
+
97
+ **Liz Rice:** Well, I'll be completely honest, I'm very much looking forward to the project update announcements about new projects joining the CNCF. That's only about an hour away from now, so... Keep your eye out for a project that we know and love becoming a CNCF project.
98
+
99
+ **Gerhard Lazu:** Hm... I'm looking forward to that. Okay. By the way, this goes live, I think, in about two weeks, so any announcements that you want to make, you can, because it's going to be post-KubeCon, so... If there's anything like that, it's fine.
100
+
101
+ **Liz Rice:** In that case... \[laughs\]
102
+
103
+ **Gerhard Lazu:** Go on.
104
+
105
+ **Liz Rice:** I'll trust you. It's only an hour away, anyway. It's not even secret, but we officially announced today that Cilium is becoming a CNCF incubation-level project, so... I'm excited about that...
106
+
107
+ **Gerhard Lazu:** Yes...!
108
+
109
+ **Liz Rice:** ...as a Cilium person, and I'm excited about that as a TOC person, because it means we've got networking finally on the landscape. We've got a couple of sandbox projects, but we didn't have anything that was really production-hardened filling that kind of CNI box on the landscape... So I feel like that's a really nice box that we're ticking from a CNCF perspective, and obviously, hugely exciting from a Cilium community perspective.
110
+
111
+ **Gerhard Lazu:** That sounds amazing. Wow, okay... Right. I mean, you've just added another big reason why I wanna do certain things... But okay, okay. Let me not get ahead of myself. I always do that, I get too excited. I mean, this sounds great; I'm really looking forward to that, by the way. So for the people that can't attend KubeCon in person, like you and me, what would you recommend? How would you recommend that they feel part of it without actually being there in person?
112
+
113
+ **Liz Rice:** For me, I find that the interaction, even if it's chat, is what makes me feel connected to people. And also, if you're attending a talk and there are speakers, speakers love getting questions. It shows that you're paying attention. So don't be shy, type those questions in. Or if you are able to be there in person, ask questions. And I also think although it can sometimes be a little bit difficult to take the leap into turning your camera on in some kind of hallway track event, but if you're tempted, even slightly tempted, it can be so rewarding to get into a video chat. Sometimes there'll be virtual office hours...
114
+
115
+ I think for us, in our timezone, most of the first social hallway track events are likely to be in the middle of the night, so maybe I'll be getting less of that this time, but...
116
+
117
+ **Gerhard Lazu:** Yes, that's right.
118
+
119
+ **Liz Rice:** And get into Slack... There'll be loads of people watching. Every time I go on Twitter and I see a photo of someone, I'm thinking "They're in L.A, and I'm kind of jealous." But I also know there are lots of us who aren't able to be there... So we're all in the same boat, and I'm sure we all chat to each other, whatever timezone we're in.
120
+
121
+ **Gerhard Lazu:** That's right. Slack does help, I have to say. KubeCon EU, I know just in our timezones, that made some things easier... But it was still virtual, so we had to adapt to that. So having Slack helped. Happy hours, the impromptu, ad-hoc sessions, where a bunch of us would get together, whether it was four or five of us, and we would talk - that would really help to meet people that you wouldn't normally meet. And it was like, I never had a bad conversation, even with people that I'd met for the first time. So that was a good experience.
122
+
123
+ I think the virtual office hours - that is a great idea. Conversations like these help, and more of this happening live would help for sure... But I think we're all trying to figure this out, and we don't expect it to be permanent. I mean, it's now -- I think this was an unfortunate situation, because from November I know that U.K. and much of Europe can go to the U.S, so it was just bad timing, I suppose.
124
+
125
+ **Liz Rice:** \[20:05\] Yeah... Although I hope that we do keep some of this virtual element going, because I see there were a lot of people who, for financial reasons or commitment reasons - you know, there were many reasons beyond Covid why people can't necessarily make it to an event. So I think if we can maintain some of the virtual elements, I do think that brings more people in. And it won't ever be quite the same as being there in person, but it is still an opportunity to connect.
126
+
127
+ I was actually gonna say, the platform that they're using this time around seems quite good for... Certainly, when I did the virtual office hours yesterday - it works. You can have conversations with people. So yeah, we're getting there.
128
+
129
+ **Gerhard Lazu:** That is a very interesting perspective, and I do have to say, it makes a lot of sense, especially for, as you mentioned, people for which traveling is difficult; it is a considerable financial investment for many attendees, and it just opens up. We have so many more people joining this wonderful community that I don't think they would have the opportunity otherwise. So in a way, it is a blessing in disguise, and I think I did talk about this at some point... But I forgot about that, and you're right; so thank you for reminding me.
130
+
131
+ **Liz Rice:** \[laughs\]
132
+
133
+ **Gerhard Lazu:** So as we are preparing to wrap up, I'm wondering if there is anything interesting happening for eBPF, or Cilium in the next six months that you would like to share.
134
+
135
+ **Liz Rice:** Well, I guess we've started off with those weekly install fests so that's our kind of initial -- I mean, I think from a feature roadmap perspective there are some pretty interesting things coming down the pipeline, and in particular I think kernel service mesh... In general, I think the whole service mesh space is pretty confusing right now, and I think we are all seeing some evolution in the different products that are out there, and Cilium is definitely gonna be a big part of that story.
136
+
137
+ **Gerhard Lazu:** Okay... Well, I didn't need any more reasons, but I got them - to watch this even more closely and try it out myself, and try running it in production, just to see what it's like with some significant traffic demands, to see how it holds up, to see what it shows us. I'm really excited about that, so...
138
+
139
+ **Liz Rice:** And if you do have any questions or issues, the Cilium Slack community is super-helpful, so jump in there and let us know how you get on. We wanna hear.
140
+
141
+ **Gerhard Lazu:** That's another great tip. Thank you, Liz. Thank you very much for making the time; it's been an absolute pleasure, thank you.
142
+
143
+ **Liz Rice:** Thank you for having me.
144
+
145
+ **Break:** \[22:32\]
146
+
147
+ **Gerhard Lazu:** So out of everyone that I've spoken to so far, you're the first one who's at KubeCon in person... So tell us what it's like, for everyone that couldn't make it.
148
+
149
+ **Dan Mangum:** Yeah, absolutely. Well, first of all, it is incredibly nice to be able to see folks that I haven't been able to see in a number of years... And also some folks that I'd never met in person before. So regardless of the whole situation with Covid and all, I definitely feel very privileged to be here, and I don't take that lightly.
150
+
151
+ In terms of comparing to previous KubeCons, I've actually been mostly to virtual KubeCons, just because we've been in this pandemic stage for so long. I did have the opportunity to go to KubeCon in San Diego in person, which obviously you remember, because we recorded a great podcast episode there... And it definitely feels different from that. The CNCF has done an incredible job of making this a very safe environment with their health and safety protocols, so that's been very impressive in terms of spacing, in terms of making sure everyone's comfort levels with being close to people or being in proximity of others is adhered to. That's been very impressive. There's absolutely less attendance than there has been at past KubeCons, and one of the things I've noticed is there's a lot more just community members here, rather than end users, I'd say, which has pros and cons. It's always really nice to talk to end users, because they're the folks that really motivate product roadmaps, and CNCF project roadmaps, and that sort of thing, and it's really valuable to hear from them... But it's also really nice to be able to collaborate with other projects. I've been spending a lot of time talking to other maintainers, talking to other companies, seeing what they're up to, talking about different integrations that could be possible... So it's a different feel, but its unique atmosphere I think is really advantageous in some respects.
152
+
153
+ **Gerhard Lazu:** That sounds great. So how did you make it work, Jared? Because I know that you're remote, but you have the virtual office hours... How did you make those work? Did they help? How did that feel for you?
154
+
155
+ **Jared Watts:** Yeah, it's actually kind of interesting... I was just kind of thinking about it and reflecting a little bit while Dan was answering... I live in San Diego, so I'm actually fairly close in proximity to where KubeCon is being held, in Los Angeles... But then my schedule ended up getting booked up with so many virtual commitments that it didn't make it super-possible to go up there and then do everything all at the same time.
156
+
157
+ So the CNCF did a good job in organizing this and making all the virtual events possible, to kind of be inclusive and make sure that as a hybrid event people are getting opportunities to participate either in person, but also back at home, wherever that may be.
158
+
159
+ So the virtual office hours that we ran yesterday were quite successful, with a lot of people joining in, and a lot of questions being asked also... So the ability to connect with people virtually and not feel left out from the in-person event going on is working quite well, and everyone's still feeling, as far as I can tell, pretty connected and getting lots of chances to participate, which is really good.
160
+
161
+ **Gerhard Lazu:** Were there any questions that really stood out to you? ...something really memorable, that made you think, or something really interesting that you weren't expecting?
162
+
163
+ **Jared Watts:** There were a lot of good questions yesterday. One of the things that I realized too is that while I'm presenting and questions are flooding in, it's really good to have multiple people there, to be able to support and answer questions and do that asynchronously in addition to the ones we answer on camera... Because there's just too many questions to answer on camera, and also get through all the material.
164
+
165
+ So I was trying to focus on delivering the material, while everyone else was attacking all the questions. Dan, do you remember any specific ones that you were jumping on while I was presenting?
166
+
167
+ **Dan Mangum:** \[27:33\] Yeah, absolutely. Like you said, there were a lot of really great questions. The ones that really stuck out to me... And this is something that's kind of been a point of interest for folks throughout all of Crossplane's lifecycle - that's handling of sensitive data. So with Crossplane we have two major sources of sensitive data, one of them being credentials to talk to cloud providers or external APIs, and the other one being credentials to communicate with the infrastructure that you're provisioning using those external APIs. And so some of the progress we've made around being able to supply external API credentials via secret stores like Vault, and injecting those into the file system of our providers, and that sort of thing, as well as some of the proposals around how we publish connection details to that infrastructure that comes up - it's always really exciting when you go from one conference to the next iteration of it, and you have some solutions for the folks that had questions about that the previous time, or you at least have something where you have a design for what it's gonna look like... So those kind of topic areas around security, and credentials and that sort of thing was something that really stuck out to me in the questions that we got.
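+ To make the first of those two concrete - and sketched with Pulumi's Kubernetes provider purely to keep this transcript's examples in one language, with hypothetical names throughout - supplying external API credentials to Crossplane's GCP provider typically means an ordinary Kubernetes Secret referenced by a ProviderConfig, roughly like this:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const cfg = new pulumi.Config();

// A Secret holding a GCP service account key, sourced here from Pulumi's
// encrypted config; in practice it might instead be injected from Vault.
const creds = new k8s.core.v1.Secret("gcp-creds", {
    metadata: { name: "gcp-creds", namespace: "crossplane-system" },
    stringData: { "credentials.json": cfg.requireSecret("gcpCreds") },
});

// Crossplane's provider-gcp reads those credentials through a ProviderConfig
// that points at the Secret.
const providerConfig = new k8s.apiextensions.CustomResource("default", {
    apiVersion: "gcp.crossplane.io/v1beta1",
    kind: "ProviderConfig",
    metadata: { name: "default" },
    spec: {
        projectID: "my-gcp-project", // hypothetical project
        credentials: {
            source: "Secret",
            secretRef: {
                namespace: "crossplane-system",
                name: "gcp-creds",
                key: "credentials.json",
            },
        },
    },
});
```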
168
+
169
+ **Dan Mangum:** There was also a question that really stuck out in my mind, and now it just popped back in... Somebody asked "I can just go into the GCP console and in the UI and create infrastructure. Why do I need Crossplane at all?"
170
+
171
+ **Gerhard Lazu:** Ha-ha! That's a good one.
172
+
173
+ **Dan Mangum:** So the thing that really stuck in my mind is 1) hey, we could probably improve our educational content, and messaging, and really make it more clear to people what the value is. So that's an improvement we can make on our side, there's no question about it. But you know, a big point of the project is that a lot of times you most certainly don't want to be giving direct access to the cloud provider consoles to your developers and have them being able to willy-nilly create resources on their own. You want to be able to have a separation of concerns, and kind of gate the access that they get to resources there... So that is a big value selling point of the project. That kind of stuck to me, that - hey, maybe we need to be messaging that a little bit better.
174
+
175
+ **Gerhard Lazu:** Here's an idea for you... Next time someone asks you this, I think you should introduce Dan as the CCOO, Chief ClickOps Officer... \[laughs\] And say "We created a role." That was such a good thing. So ClickOps is real, and we have just the right antidote for it, and he's called Dan Mangum. \[laughs\] So yeah, that's a good one.
176
+
177
+ Okay, okay... This is actually something which I've been thinking about as well. I started using Crossplane to manage all my GKE clusters. It works great, I never wanna go back... And not even to the CLI, which is really weird, because CLI is great, but Crossplane is better from that perspective, so I really enjoy that. And in that world, I was wondering - how can we handle secrets better? Because you know, secrets in Kubernetes by default, base64-encoded - well, sorry, that's not really secret. Anyone can get it. So that's a great one. I will definitely want to follow up on that...
178
+
179
+ But I have another thing on my mind, because San Diego was mentioned a couple of times... And I had an amazing run around the San Diego marina... So I'm wondering then - was the run in L.A. better than your San Diego one? What can you tell us about it?
180
+
181
+ **Dan Mangum:** So you're catching me at a good time... Right before this podcast I got back from the SigRun event we had this morning, where there was about 15 of us or so that ran through L.A. And I can say absolutely that running in L.A. is not as good as running in San Diego. There are a lot of stoplights...
182
+
183
+ I had one run out to Dodger Stadium earlier this week, and that was pretty nice, but overall I would not recommend coming to Los Angeles as a destination spot for getting your runs in.
184
+
185
+ **Gerhard Lazu:** Right. So next KubeCon I'm thinking of a place where we can all enjoy running a lot more, right? Because that's the most important criteria for choosing the KubeCon location... \[laughter\] That's a good one. Do you run, Jared? I never asked, and I don't know. Do you run?
186
+
187
+ **Jared Watts:** I'm more of a person who likes to do their exercise in combination with a goal, like a direct activity... So surfing and ice hockey are my big exercise things. I just had an ice hockey game last night, so I'm having a little bit of trouble waking up this morning and feeling a little sore, a little banged up from some of the violence out there... So Dan saying he was getting back from his run this morning when I -- it's not the same morning that I have had so far.
188
+
189
+ **Gerhard Lazu:** I see. \[laughs\] Right. That's interesting, surfing. I've never tried it. I think out of the two activities, that sounds a very interesting one that I would be up for trying. So let's see where KubeCon happens next in the U.S. Is it Detroit? I've heard Detroit being mentioned. Is that real?
190
+
191
+ **Dan Mangum:** Yup. They announced yesterday -- KubeCon EU I believe is in Valencia, and KubeCon North America is gonna be in Detroit, which is... I'm pumped about it, coming to the Midwest. I think that's kind of exciting, because we sometimes miss out on some events in the Midwest.
192
+
193
+ **Gerhard Lazu:** \[32:10\] I see, okay. No surfing there, I'm imagining, in Detroit, being in the Midwest...
194
+
195
+ **Dan Mangum:** I don't think so...
196
+
197
+ **Jared Watts:** I haven't heard of it as a surfing destination...
198
+
199
+ **Dan Mangum:** Concrete surfing maybe. \[laughter\]
200
+
201
+ **Gerhard Lazu:** Yeah. Or Valencia. That's a good one. Okay... Yeah, that's more for like yachting, I suppose, or something like that. Okay. So let's talk about the big news. Crossplane was announced for incubation status, was it a few weeks ago before KubeCon? That is really big, and I'm wondering what changed for you? What changed for Crossplane day to day, as a project, with it entering the incubation phase? Jared, what do you think?
202
+
203
+ **Jared Watts:** Yes, the incubation thing is definitely something that I put a lot of effort into, with the due diligence, and making sure that the proposal is really covering all aspects of the project... So I've got a good finger on the pulse in terms of the project growth, and the maturity, and all that sort of stuff.
204
+
205
+ So one thing that's kind of interesting is that it is a bit of a long process, so the vetting and diligence is pretty thorough... Which is a good thing, because that's how you -- you know, projects that make it to this level are given a stamp of maturity, and the ecosystem as a whole can have confidence in them that they're mature, and that they're reliable, and they check a certain set of criteria...
206
+
207
+ So the process was a long one, and it was a bit of a rolling experience there, where the project kept maturing while we were almost at incubation, but not quite... With the announcement itself though, we absolutely saw a new influx of adopters and users coming in to check out the project. Looking at some of the metrics and stats, the graphs for GitHub stars, Slack members etc. went vertical for about a week or two, which was really cool to see - you know, hey, we've made some inroads and we've built a community. But there are more people out there to reach, and the CNCF is helping us do that by declaring the project more mature, and making a lot of noise about it...
208
+
209
+ Day to day how the project is run is not changing, because the governance is there, and the project release processes and all these sort of things are pretty healthy and really well done... So that doesn't change. But the influx of people coming, and more people to try it out, and the community continuing to grow because now they feel it's mature enough to do that is really encouraging to see.
210
+
211
+ **Gerhard Lazu:** Right. What about you, Dan? What makes you most excited about Crossplane reaching incubation status?
212
+
213
+ **Dan Mangum:** Absolutely. Well, Jared touched on a bunch of great things there, and Jared absolutely led this effort, and a ton of work went into it... So we're very appreciative of all the work he put in, letting us sit back and work on the project. But kind of building on some of the things that he already mentioned, one of the things that I really love about Crossplane being an incubating project is that a lot of folks that I talk to now, new folks that I'm meeting, at least have some sort of baseline knowledge of what our mission is, which allows us to kind of get to more advanced conversations faster.
214
+
215
+ I absolutely love talking to folks who don't know anything about Crossplane and wanna hear about the big-picture vision, and that sort of thing... But we can really kind of get down to brass tacks and talk about more tangible things when folks come in and already have a little bit of an idea of what we're trying to do... And that gives us ideas as maintainers about "What do folks need to take this to the next level?", and that sort of thing. I think just that visibility has been a huge boon for us already.
216
+
217
+ **Gerhard Lazu:** It's crazy that -- I remember 2019, when we started talking about Crossplane, this new thing... Some people had heard of it, but it was still very new. It took -- I'm not sure at what stage you were at then, but now you're incubating... There was a sandbox stage? Were you in sandbox back then, two years ago, 2019?
218
+
219
+ **Jared Watts:** We weren't even part of the CNCF at that point, in our first conversation...
220
+
221
+ **Gerhard Lazu:** Okay... When did you join the CNCF, by the way?
222
+
223
+ **Jared Watts:** In June 2020.
224
+
225
+ **Gerhard Lazu:** Okay... So it took about a year and a bit to go from sandbox to incubation.
226
+
227
+ **Jared Watts:** Yeah, exactly. We started the process to apply for incubation probably March of this year, so it was about nine months or so that we started getting serious and putting the proposal out there, and then the process itself took about six months.
228
+
229
+ **Gerhard Lazu:** \[36:06\] Yeah. I think that in my mind explains a lot about the level of busyness that I've seen, and the level of activity... Because even before then, I can imagine this must be a really thorough process, as you mentioned, for a good reason... And it's great to see this journey that you're on. I mean, 2019, as you mentioned, not even part of the CNCF, but you were there... Almost like, "Oh yes, that was there, and I wanted to use it since then." I'm finally using Crossplane, and I love what I see there, so I have so many questions... And I'm sure that many more people will have many more questions.
230
+
231
+ What is the best way of, first of all, finding out about Crossplane and starting to use it? And then, once you get a bit more intermediate in your Crossplane usage, what do you do next? What does that trajectory look like in your mind, Dan?
232
+
233
+ **Dan Mangum:** Yeah, so a lot of folks start off with just coming to our getting started guide, and getting introduced to what that looks like... And one of the decisions we've made in our Getting Started guide is to incorporate some of our actual more advanced concepts early on. And when I'm talking about more advanced concepts, that's mostly our composition engine and our packaging. And despite introducing these earlier, because they are tools that are used to build abstractions, folks actually get a nicer interface to using Crossplane right off the bat. They're able to use these advanced concepts without actually understanding all of the little bits of it.
234
+
235
+ So usually, folks will go through that process, and in our Getting Started guide we have an abstraction of a database and show how that can create an RDS instance on AWS, or a Cloud SQL instance on GCP, all from the same spec, from the actual resource that you're creating in your Kubernetes cluster.
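To make that concrete, the kind of claim the getting started guide walks you through looks roughly like this - a minimal sketch, where the `database.example.org` group, the `PostgreSQLInstance` kind and the parameter names come from the example XRD the guide defines, not from Crossplane itself:

```yaml
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-db
  namespace: default
spec:
  parameters:
    storageGB: 20
  # The composition that satisfies this claim decides whether an RDS instance
  # (AWS) or a Cloud SQL instance (GCP) gets created behind the scenes.
  compositionSelector:
    matchLabels:
      provider: gcp
  # The resulting connection details land in this namespaced secret.
  writeConnectionSecretToRef:
    name: my-db-conn
```

Swapping the `provider` label between `aws` and `gcp` is what selects which composition turns this one claim into the cloud-specific database Dan mentions.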
236
+
237
+ So generally, what folks will do is they'll go through that process and they'll start to kind of see the bigger picture. And then honestly, a lot of the way that folks continue to dive into the project is, number one, looking at some of the content that we've put out there on YouTube, and that sort of thing... Victor joined Upbound and the Crossplane community and has been putting out some great content around that... And then also just our Slack workspace has exploded over the past six months or so, and there are countless folks in there just asking questions, learning more about it.
238
+
239
+ One of the really rewarding things to see as a maintainer is community members helping other community members. Because you know, earlier on it was mostly community members coming along and asking maintainers questions, and then answering those, and that didn't scale super-well. Now that we have end users helping each other use Crossplane and talking about what features they'd like to see, what things worked for their organizations, how that would affect others - that's really where we see folks really get into the weeds of Crossplane and start to understand how they can extend it for their specific use cases.
240
+
241
+ **Gerhard Lazu:** Yeah, building that community is super-important. I know that is such a huge and important part of what you do every day. I mean, I see everywhere - Twitter, YouTube, Slack... So much activity, and now that will only pick up. And you're right, there's a point where people have to start helping one another out, because it can't be on you, the project maintainers.
242
+
243
+ So I think that is one important thing for people listening to this, to try and help others. If you're into Crossplane and you know something - help your friend that you may not know yet, but get to know him/her, and see how you can help one another out.
244
+
245
+ One thing which I would like to say is that the GCP provider - there was a very recent version, I think 0.18 or 0.19, I can't remember exactly... That upgrade was very interesting. When you deprecated the GKECluster resource in favor of the new Cluster one, there was an export to be made, and then a re-import to be made... That was a fairly involved process. So I'm wondering, going forward, is that something that you're thinking about, Jared, in terms of how to make it smoother for users? Because people will keep spending a lot of time on figuring that out, or even performing it... To be honest, what I did - I just didn't bother with the upgrade. I deleted all the clusters, removed, re-installed, because it was too involved. I tried it, but at step number five or six I said "Hm... This is just too much work." So I'm wondering how you're thinking about the continued usage and the upgrades going forward, so that users' lives are easier.
246
+
247
+ **Jared Watts:** \[40:11\] Yeah, that's a really good question. There's a couple of thoughts that come to mind. First is that there was a lot of thought put into that. It wasn't an easy decision of "Oh, hey, let's just make this change here and roll that out." Dan drove that effort to begin with, so he made a proposal about it, explained it very thoroughly, and gave the entire community a sense of what the situation is, with GCP having kind of a beta API that some people may wanna depend on, and then a stable API which other people may wanna depend on... So kind of supporting two different APIs from the cloud provider itself, with different varying levels of guarantees around breaking changes and things like that...
248
+
249
+ So Dan did a really good job laying all that out, putting it out to the community, and then spending a couple of months actually getting feedback and kind of understanding it. So that was a good thing there. And then Hassan did a really good job of writing up a migration guide.
250
+
251
+ So something I learned from the Rook project - you know, storage orchestration for Kubernetes - that I'm also involved with, is that migrations are one thing, but if you don't provide any path at all for people, then that could be a failure. So there are some manual steps with that upgrade, or the migration, and having the guide to do that, to give people the opportunity - I was definitely proud that we paid attention to that, had some empathy for the community, and went ahead and invested in it.
252
+
253
+ And then the last comment I'll make there is that, you know, there's different levels of maturity and guarantees within the Crossplane ecosystem itself also. So Crossplane as a core project - you know, the functionality and machinery and tooling to build your own custom platforms etc, that is at a 1.0 or 1.5 almost now. That's stable, the API there. There are some guarantees around breaking changes and backwards compatibility and things like that... So we don't anticipate and haven't done any difficult migrations in core Crossplane in quite a while, and we're gonna stick to that... Unless we do like a 2.0, and that will be very explicit as well. But for the providers - they are not at that same level of stability yet, so they're still in an alpha/beta sort of phase, where there are gonna be some of those breaking changes perhaps as things are being figured out and matured along the way... But we shouldn't see that in core Crossplane.
254
+
255
+ **Gerhard Lazu:** It's very nice that you've laid out all that background, because I remember looking at the issue that Dan opened - it was really good, really well thought out. There wasn't a lot of engagement on the issue. Maybe that happened on Slack, or elsewhere... But I really like that I could follow the trail all the way to the source, and see "Well, this has been happening for a while. Thought has been put into this." You're right, that guide was really good; I followed it, it worked, but I was thinking "Do I really wanna do this? There's like too much stuff here, and I have to always -- like, step number 3 or 4." Then I still have to continue, there's like four others, or something like that... So I was like halfway through, and I thought "You know what - it would be easier to do that."
256
+
257
+ What I want to say is that having gone to the end, having gone to the latest version of the GCP provider, everything that I thought it would have, it had. So the new cluster resource behaved a lot better than the GKE Cluster one. So it was worth getting there. And once I had that, I found the extra properties, especially around auto-scaling, very useful. So I love seeing that. That was a great end state to get to.
258
+
259
+ So as we are about to wrap this up, anything coming in the next six months that you'd like to share with us?
260
+
261
+ **Dan Mangum:** So I'll talk a little bit about some of the future things that we have planned for Crossplane. And some of this -- you know, Crossplane, as we all know here, is a CNCF project, so when I talk about what I want to see in Crossplane, that doesn't necessarily mean it's gonna happen... It's my personal desire for what happens, and my contribution to the roadmap as a maintainer. So we'll see how other maintainers and other community members feel about my proposals.
262
+
263
+ \[43:56\] One of the things that I am really interested in is our provider deployment model. Right now, the way provider packages work is it's essentially a stream of YAML, which is a bunch of different CRDs, and then it's a reference to an image that lives on a registry somewhere, or is already in your cluster, that you run, that runs the controllers for all of those different resource types that you're installing.
264
+
265
+ Now, the way that we actually set up that controller for you when you install a provider is we create a Kubernetes deployment, and that's the only way we do it right now. That doesn't have to be the case, right? The deployment is one way to manage your workload within a Kubernetes cluster; you could also create a Knative function, you could create something external to your Kubernetes cluster; it could be a Lambda function on AWS that had access to your Kubernetes cluster, and you can also start to think of things as more granular than our monolithic providers we have right now, where you can think of just custom logic that you need to run that's kind of the glue between your different providers.
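To make that concrete, installing a provider today is itself just another Kubernetes object - a rough sketch, with the package reference and version tag used only as an example:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp
spec:
  # An OCI image containing the provider's CRDs plus its controller binary;
  # Crossplane resolves it and spins up a Deployment to run the controllers.
  package: crossplane/provider-gcp:v0.19.0
```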
266
+
267
+ So those are a lot of different options, but essentially, what you can imagine there is an interface for different provider deployment models, and you can say "I'd like to install my provider, and I want Crossplane to use this deployment engine for it to set that up, and I can manage it in a certain way."
268
+
269
+ What that also gives you the ability to do is you may not manage your core Crossplane control plane, but you may manage some of the custom logic that you wanna introduce into it. Obviously, thinking of a hosted control plane model you can think about that in a -- an external organization could run your control plane for you, but you kind of do that last-mile API interaction where you supply credentials, and that sort of thing, on your own infrastructure, in your own AWS account.
270
+
271
+ So thinking about some flexibility around that, and some partitioning as well. Right now, when you install a provider like the AWS one, you get all of that provider's types installed. You really shouldn't have to do that... So customizing and making provider installs and API extension mechanisms more granular is something that's gonna be top of mind for me over the next six months to a year.
272
+
273
+ **Gerhard Lazu:** I have so many questions to that... We are out of time, but I really wanna hear what Jared is thinking about for the next six months.
274
+
275
+ **Jared Watts:** Awesome, yes. Quick thing for Dan there - you mentioned it's a community-driven project, and he has his own proposals etc.; the community can always weigh in and see if they are good ideas... Historically speaking, Dan's proposals tend to be pretty well accepted and good ideas... So what he's saying there probably will be something the community likes.
276
+
277
+ So for me - I'll quickly throw in two things that I think are really exciting over the next six months. The provider coverage and the custom compositions. Provider coverage - we'll have a lot more to share about that pretty soon, but basically doing code generation to automatically generate Crossplane providers for the full surface area of a cloud provider's API. You know, AWS has almost 700 resources... So being able to have a Crossplane provider to do all of those resources and have very full coverage is very exciting, and that's coming along pretty soon.
278
+
279
+ And then the other one - custom compositions. The composition engine is fairly powerful, where you can compose together all of your resources and infrastructure, and then provide those as the high-level abstraction to developers... It's a powerful model, but there are some things we could do to improve it. If you wanna do some custom logic, or templating, or flow control, or anything like that, we're enabling a way to do that, in the language of your choice. So you'll be able to extend the composition engine and write, however and in whatever language you want, the logic for generating custom compositions at runtime, which will kind of open the door to really any scenario that anyone can think of. So that'll be a nice kind of last-mile thing for scenarios that aren't really covered by the default machinery right now.
280
+
281
+ **Gerhard Lazu:** Well, all I can say is please continue blowing my mind the way you are. There's a very special way that you blow my mind every single time I talk to you. This is amazing. Thank you very much.
282
+
283
+ \[47:58\] The other thing which I'd like to say is stay cool. Crossplane is really cool. Just keep doing what you're doing, and keep reconciling. I'm enjoying KubeCon, but especially the reconciling. So thank you Dan, thank you Jared. This has been a pleasure.
284
+
285
+ **Dan Mangum:** Awesome.
286
+
287
+ **Jared Watts:** Right on. Thank you so much for having us again, Gerhard. I always love to be on this show.
288
+
289
+ **Dan Mangum:** Absolutely.
290
+
291
+ **Break:** \[48:14\]
292
+
293
+ **Gerhard Lazu:** So I know, David, that this is your first KubeCon, and I am very curious to hear what it was like for you.
294
+
295
+ **David Ansari:** It was very interesting. I really enjoyed the hybrid format of this KubeCon, because unfortunately I couldn't be there in person. I would like to go there in person, but unfortunately there was still a travel ban for most Europeans... So it was still very interesting to participate virtually, to listen to talks, and to be able to reach out to people and ask questions.
296
+
297
+ **Gerhard Lazu:** Okay. Did you slack? How did you reach out to people? Zoom? How did that work for you?
298
+
299
+ **David Ansari:** Yeah, so mainly over the MeetingPlay platform. When I was attending a talk, I could just ask my questions and they got live-answered; so that was a nice experience. There was the possibility to reach out via Slack, but I didn't use Slack too much.
300
+
301
+ **Gerhard Lazu:** What about Zoom? Were there any Zoom sessions that you attended? I know that Priyanka used to do happy hour... I don't know whether she did this KubeCon, but that was one of my favorite sessions at a previous KubeCon EU, which was also a virtual one. No Zoom sessions?
302
+
303
+ **David Ansari:** To be honest, I missed all the Zoom sessions. I wasn't aware that those Zoom sessions existed. Did you attend some?
304
+
305
+ **Gerhard Lazu:** Yeah. That's what I said - not this one; I attended the previous one, and that was actually my favorite part of the conference, at that KubeCon EU. This one -- I was going for a different experience. I was going more for talking to people, like I'm talking to you... I attended a few talks, there were some specific ones that I really enjoyed, and I just wanted to get a bit more involved... There were virtual office hours, and I participated in a few... So I had a slightly different experience, closer to what I would have had if I had gone there in person, which I also couldn't do... So I tried to make it less of a virtual one for me, and more of an in-person one without being there, which sounds a bit weird, but I enjoyed talking to people as much as I could, which is what happens when you're there. It's less about the talks and more about the interactions, so that's what I did. I know that this was not just your first KubeCon, it was your first KubeCon as a speaker.
306
+
307
+ **David Ansari:** That's correct.
308
+
309
+ **Gerhard Lazu:** How was that for you? Tell us about it.
310
+
311
+ **David Ansari:** It was a lot of fun, and the experience was very good from start to end. I first applied (I think) a few months ago, directly after KubeCon Europe. I was actually listening to Ship It episode 2, where some tips were given on how to submit an abstract... So I submitted my abstract I think just two days after the episode came out, and it worked. I was lucky. And from then on, the communication went very well.
312
+
313
+ \[51:53\] There was very good content being given to the speakers on how to prepare, with checklists and deadlines, and the communication was very good from start to end. Especially Cody - he was answering very quickly, so that was nice. I pre-recorded the talk and I submitted it one month before the conference; that was the beginning of September. Then afterwards I was very relaxed, because once I submitted the talk, I knew nothing could really go wrong. So I would just be there, the talk would play, and then I'd jump in for the Q&A... So it was a very relaxed and nice experience overall.
314
+
315
+ **Gerhard Lazu:** I attended the talk, I have to say, and I really enjoyed it, especially how quickly you were answering questions... And I think there is something very unique about pre-recorded talks. Maybe the interaction isn't -- obviously, it's not the way you would interact if you were giving it in-person and you had a connection with the audience, because... Well, you're not there and you can't see the audience... So in that case, I think a pre-recorded talk makes sense. But the highlight of that is that you can answer questions as they come in. And it was great to see you answering some of those questions. I mean, some of them were tough ones. And not only was the talk really polished - because you could take your time to record, re-record, and get it just right - your video editing skills are really good, too. I know that you edited it yourself, and it was great. I really, genuinely enjoyed watching it. So from my perspective as a viewer, it was great.
316
+
317
+ **David Ansari:** Thank you very much.
318
+
319
+ **Gerhard Lazu:** You're welcome. During the talk, what was it like when you could -- basically, you were attending your own talk, and also you were answering questions. What was that like?
320
+
321
+ **David Ansari:** So the experience was very good, and I think the talk being pre-recorded has many advantages both for the speaker and for the attendees. Because for the attendees it is just frictionless. They have a better experience. They can ask questions live when they don't understand something, and I can directly answer via live chat. So that was good.
322
+
323
+ And as you mentioned, you can just pre-record the videos, you have multiple tries, you can edit it if you want... And to be honest, there were even some parts of the video which I had to edit and re-record five times, just because the demo didn't work, for example... And it just results in a better end version, which you can then also share.
324
+
325
+ So the questions came in, and I could just answer them during the talk, and as the video was playing, I couldn't even pay attention to the video itself, because I was focusing on the Q&A part, and also the conversation thereafter was great.
326
+
327
+ My problem was a bit that my video was 32 minutes, and I had 35 minutes in total, so just 3 minutes left for Q&A. That was a bit short. But you can always continue the conversation after the talk.
328
+
329
+ **Gerhard Lazu:** So are you saying that you wish the talk was shorter, so that you would have had more time for Q&A?
330
+
331
+ **David Ansari:** Yes. So if I had to do the talk again, I would shorten it by probably 3-4 more minutes, just to leave enough room for questions at the end... Because I think that's one of the most valuable parts of the talk, so that you have a lively discussion. That's the most important part of a talk, the discussion at the end. It's less about us telling something to people, or teaching a certain concept; it's the discussion which is valuable, because you get feedback from the users, you see which parts they don't understand, you see what they are interested in, the questions they ask around certain topics, or which topics come up more often than others, for example.
332
+
333
+ And you even see how advanced your users are... I was a bit surprised, because people that joined didn't even know what RabbitMQ is, which made me think that maybe I should have introduced RabbitMQ even better at the start of the talk.
334
+
335
+ **Gerhard Lazu:** So I think the level the talk was at was intermediate-to-experienced, I believe. It wasn't a beginner talk. I also think - you're right, making it shorter is great, because there are two rules. Don't give out all of the information. And I won't tell you the second rule. That's it.
336
+
337
+ **David Ansari:** \[laughs\] I'm curious now... Do you hold it for the end? \[laughter\]
338
+
339
+ **Gerhard Lazu:** \[56:09\] No, I mean - there's two rules, and you only say one, right? Like, don't give out all the information; that's it. \[laughs\]
340
+
341
+ **David Ansari:** Okay, now I get it. Now I get it.
342
+
343
+ **Gerhard Lazu:** Okay. \[laughs\] So the idea being that you want the audience -- I mean, that's basically what prompts the questions, right? If you tell them half the story... I mean, there's so much more that you could tell them, but what do THEY want to know? And then they come to you asking questions that you haven't thought about... It's more about telling a user what's possible, getting users excited, making them imagine things, and then seeing what they do with that. I mean, was it exciting enough? What are they thinking? What do they wish you had told them that you haven't, for time reasons, for conversation reasons... As you mentioned, it's about the discussions. And the way you generate discussions is by making it interesting and short, and letting them decide "Well, what shall we do next?" It doesn't always work like that, obviously. You have to know your audience... But I think that's what happened here. So it was very compressed, it was very condensed, many concepts were introduced, and that's what it was meant to be. You know, "I'll give you a taste of many different things, and then you tell me what you would like to know more about."
344
+
345
+ I'm wondering if, had you spent more time in Slack, you could have continued some of those conversations there. I don't know. But what I do know is that in another talk which I attended - that was Liz Rice's talk on eBPF - the Q&A didn't work. We could ask questions, but she couldn't answer, and then we moved into Slack, and we had a good conversation between the different people there. It was mostly question answering, but someone - I forget their name - also added some extra information, and it was good to see that conversation in the Slack channel.
346
+
347
+ So I think that's a good idea, to say "Hey, if you wanna know more, if you wanna talk to me, I'll be there. Let's hang out." Again, it's just an idea. Who knows if it works out until you try it?
348
+
349
+ Okay, so what I'm hearing is that for first-time speakers having the talk pre-recorded may be a better experience, because there's stage fright, being there for the first time, being overwhelmed by emotions, being overwhelmed by what's happening... There's too much stuff happening, right? Especially at a big conference like KubeCon. So it can be a bit overwhelming. So I'm wondering if this is a good way of starting your KubeCon experience. How did you feel? Did you feel relaxed? What was the predominant feeling as you were giving this talk, and as you were preparing for it?
350
+
351
+ **David Ansari:** So as I started the talk, I was very relaxed, because I knew that everything was pre-recorded, so nothing could go really wrong. I know that it can be intimidating when you go on stage, because if you do a live demo for example, many things can go wrong. So the talk being pre-recorded is just much more comfortable for the speaker. So I could fully focus just on the questions part, and that was very valuable to the attendees.
352
+
353
+ **Gerhard Lazu:** So what are you thinking about the next KubeCon? Will you attend it in person, virtually, will you give a talk? Would you prefer to give a talk virtually, or would you like a pre-recorded one? Or do you prefer to give a talk in person? What are you thinking?
354
+
355
+ **David Ansari:** So if I have the chance to go to a conference in person, I would go there in person, because it's really about meeting the people. For me, a conference has two sides. The first side is really learning something, hearing talks, and having technical conversations, and the second part is meeting people and getting to know contributors to other projects. And that second part fell a bit short for me this KubeCon, just because it was virtual. So for the Europe KubeCon next year, I'll try to go there in person.
356
+
357
+ **Gerhard Lazu:** Are you thinking of giving a talk, or submitting one?
358
+
359
+ **David Ansari:** I would like to.
360
+
361
+ **Gerhard Lazu:** \[01:00:04.16\] So if the talk - let's imagine that it gets accepted. Are you thinking of giving it in-person, or pre-recording, as you have this time?
362
+
363
+ **David Ansari:** The next time I would give it in person, just to also practice.
364
+
365
+ **Gerhard Lazu:** Yeah. So what I'm looking up here - I just wanted to confirm, because I sometimes get his name wrong. So there's this person that I admire when it comes to public speaking. His name is Matt Abrahams, and he gave a couple of talks about memorable communication, and even wrote a book, a very good one - a small one, but an important one - Speaking Up Without Freaking Out. He had a TED talk, and he has a couple of great talks online, on YouTube, about how to make your communication memorable, how to deal with anxiety while speaking publicly... And there are different types of talks - ones where you prepare, and ad-hoc ones that just happen... And that really helped me become a more confident speaker. So it may not work for you, but I would recommend checking it out and seeing if there's something valuable there that relates to you. So that's what I would say. It helped me, and it may help you.
366
+
367
+ Cool... So what did you enjoy the most about this KubeCon?
368
+
369
+ **David Ansari:** I enjoyed most that there were so many different tracks that I could choose from. The whole ecosystem is very wide. I think there were around 8, 9 or even 10 tracks in parallel. There were a lot of topics and talks to pick from, so that was a very good experience.
370
+
371
+ **Gerhard Lazu:** So it was a variety. Yes, it is a big conference, you're right. It's one of the biggest ones I know, and it's just so diverse. I love the diversity of KubeCon. I'm not aware of any conference that gets diversity better... And I mean diversity from all perspectives.
372
+
373
+ Any favorite talks, anything that stood out, that was memorable? ...because we spoke about memorable communication.
374
+
375
+ **David Ansari:** I didn't watch --
376
+
377
+ **Gerhard Lazu:** There were too many. \[laughter\]
378
+
379
+ **David Ansari:** So for me it was quite late, since I'm in Europe... On Wednesday my talk started at 11:30, so before that I just watched one talk, to see how things work with the platform, and afterwards I was too tired to continue watching talks at 1 AM. The next day I watched one which was about a new generation of NATS, just to see how the NATS messaging system works compared to RabbitMQ. So I enjoyed watching that.
380
+
381
+ **Gerhard Lazu:** You do know that all these talks - first of all, you can watch them on-demand, in the platform, before they become available on YouTube. So what I tend to do - and this KubeCon that is what I've done... While I haven't watched the talks as they happened - only a few - what I've done is go back to the previous day, to what I've missed... Because you're right, staying up until 3-4 o'clock - it doesn't really make sense. Well, at least not to me. So the next day I would go over the previous talks, go over all of them, see if there's something that resonated with me, and if it did, I would watch parts of it, the parts that really stood out. And that was a good experience, because I could consume the talks much quicker, before they become available on YouTube. So I could consume a lot more content, and content that was relevant for me. That is actually one of my favorite parts of the virtual conference - all of this is recorded, and it's available as it happens. So I enjoy the platform. I think the platform enables you to consume and to connect to the conference in a different way. I thought that was good.
382
+
383
+ **David Ansari:** What was the most valuable content to you?
384
+
385
+ **Gerhard Lazu:** I really enjoyed eBPF, I have to say. Something like the whole eBPF ecosystem - super, super-interesting, Liz Rice's talk, "Cloud-native superpowers with eBPF." Because I just love the kernel, I just love that observability, understanding what's happening inside the kernel... That's the talk that really resonated with me; it's something that I picked up at the last KubeCon. But this one I could focus a bit more on the eBPF ecosystem.
386
+
387
+ \[01:03:59.03\] I didn't even know that there's actually an eBPF foundation. I learned about that at this conference. And yeah, it's just really interesting - networking, and the kernel, and performance and metrics, and that sort of thing.
388
+
389
+ My most important take-away about eBPF is that it's all about kernel events. And events - I mean, I love eventing. It's a great concept. And I think the way it's implemented - the underpinnings are really solid. I can see some amazing things coming out of this.
390
+
391
+ **David Ansari:** Have you used eBPF in the projects you're working on?
392
+
393
+ **Gerhard Lazu:** Not yet, but all that is going to change in the next few months. So Parca.dev - that's one of the first things that I'll be setting up. And the next one will be Cilium. Cilium, with Hubble, and a couple more things... I think the level of observability from a kernel perspective is unique. I haven't seen anything like that before. And now that you mention that, I think the only utility that I've used that uses eBPF under the hood was Netdata, but not extensively. Only at a brief, superficial level. It's good, and it's not much different than it was before eBPF, or since its eBPF integration, but that's the first one that I have used with eBPF, now that I remember. What else would you like to talk about?
394
+
395
+ **David Ansari:** One good experience was the speaker support. So there was a dedicated Slack channel, and support was answering with a response time of less than a minute. When we asked a question, it got flagged, and someone would say they'd look up the answer or get in touch with us. That was really great support.
396
+
397
+ **Gerhard Lazu:** Well, that sounds like VIP speaker support to me, and I'm glad that it worked so well in practice.
398
+
399
+ **David Ansari:** It was, it was.
400
+
401
+ **Gerhard Lazu:** Yeah. I'm really happy when ideas like that work out well in practice, because you never know what's going to happen... But it just goes to show that KubeCon is a really well organized event. And there are so many moving parts to it... It's just crazy how much happens behind the scenes. And big props to all the organizers and to everyone that made it happen. It was difficult, because it was both in-person and virtual, and I think the combination worked really well. But next time, I'm also thinking of going in person. So Valencia, next year - I would very much like to be there. And who knows, maybe we'll meet. Wouldn't that be nice?
402
+
403
+ Okay, David. Well, thank you for making the time. This was an absolute pleasure. Looking forward to meeting you at the next KubeCon.
404
+
405
+ **David Ansari:** Thank you for having me.
406
+
407
+ **Break:** \[01:06:32.18\]
408
+
409
+ **Gerhard Lazu:** I'll ask the question that Steven was afraid to ask - and afraid, I'm doing air quotes. What even is Sigstore? \[laughter\]
410
+
411
+ **Dan Lorenc:** So that's a funny story, actually... That question came from a chat between me and Steven, and we were just messing around a little bit. So I was actually the one that asked that question to Steven.
412
+
413
+ **Gerhard Lazu:** I see... \[laughter\] That's historic there...
414
+
415
+ **Dan Lorenc:** Yeah, he has a funny habit of dropping my name off and then posting our conversations, which I love to read on Twitter. He's great. \[laughter\]
416
+
417
+ **Gerhard Lazu:** Okay, so what did he answer when you asked him that? He just didn't...?
418
+
419
+ **Dan Lorenc:** Yeah, so Sigstore is an open source project that is part of the Linux Foundation. It's not like a lot of traditional open source projects, because there's a bunch of awesome code on GitHub and in the community, but it also has some production infrastructure that that community is operating as a public benefit for the rest of the open source world. So there's a bunch of code, which is awesome - you can fork it, you can contribute to it - but we also maintain a running copy of that code, for people to use day to day and use in production.
420
+
421
+ So it's a couple of different components, but overall, the goal of the Sigstore project is to make it easy and free to sign and verify open source software.
422
+
423
+ We were heavily inspired by the LetsEncrypt model. So if you're familiar with LetsEncrypt - what LetsEncrypt did was operate a free certificate authority for web browsers... They made it so all of the web traffic became encrypted over a couple of years. Certificates have been around since the early '90s, but we just weren't seeing movement in the percentage of web traffic that was encrypted; all the websites still had that red X at the top years and years ago, if you remember what it was like before LetsEncrypt... And then they solved the problem by making it free, easy and automated to do. So now, with one line in your Kubernetes YAMLs, you can just get free certificates for everything... Not overnight, because a ton of hard work went in from the LetsEncrypt people... But compared to the overall timeline the internet's been around, the shift was almost immediate. We're trying to do the same thing for open source software.
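The "one line in your Kubernetes YAMLs" is presumably something along the lines of cert-manager's ingress annotation - a hedged sketch, with the issuer name, hostname and service name all made up:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    # This single annotation asks cert-manager to obtain and renew
    # a free Let's Encrypt certificate for the hosts below.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts: ["myapp.example.com"]
      secretName: myapp-tls
  defaultBackend:
    service:
      name: myapp
      port:
        number: 80
```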
424
+
425
+ **Gerhard Lazu:** How is this different from PGP?
426
+
427
+ **Dan Lorenc:** Yeah, it's a great question. So PGP has been around for a while. PGP is a bunch of open source standards for cryptographic operations. This includes things like signing, verification, but also things like encryption of files, of messages, of all of these different things. So PGP is kind of like a huge cryptographic kitchen sink, and it also provides some basic primitives for PKI and key distribution and things like that, that are pretty opinionated. I don't know if you've ever heard of key signing parties, and the PGP web of trust, and stuff like that. It's a really cool, really elegant model, that just unfortunately hasn't caught on too much today.
428
+
429
+ Sigstore takes a slightly different approach. It uses some different encryption standards, some slightly more modern ones. Particularly, we really rely on things like transparency logs, which weren't really around back when PGP got started; they'd really taken off across the browser ecosystem in probably the last decade... I think it hasn't even been quite that long. But they have a lot of benefits. PGP is completely decentralized, transparency logs are slightly more centralized, but they provide some cool guarantees where there's a central operator, but you don't actually have to trust them. So you get a lot of the benefits of both worlds, where somebody can run a service for you, which is easy, everybody can find it, everybody can use it, but you don't actually have to trust that operator. The only thing you have to trust is that they'll keep the thing running. And people can make back-ups and mirrors, but they can't tamper with that log, which eliminates a lot of the problems with centralized infrastructure.
430
+
431
+ **Gerhard Lazu:** Okay. So one of the things that I always use PGP for is signing my git commits. So I'm wondering, what else should I be signing, and what should I be using from the Sigstore ecosystem to sign things?
432
+
433
+ **Dan Lorenc:** \[01:12:00.12\] Yeah, so signing git commits is a pretty important topic. There's like the git commit -S flag, which uses your PGP keyring, which is set up in your computer, to sign those commits. That integration is actually baked pretty heavily into git. There's dozens of different ways to sign things, Sigstore isn't the only way either... But git is pretty coupled to PGP today. And there's actually a bunch of ongoing work with some of the Git core maintainers and some other contributors to start refactoring that, and making it so that Git can use other techniques to sign things... We're helping with that work to hopefully make Sigstore also kind of like a first-class citizen in the git signing world.
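The traditional flow Dan is describing looks like this, assuming you already have a PGP key configured for git (the commit message is just an example):

```shell
# Sign a commit with your configured PGP key
git commit -S -m "add feature"

# Show and verify the signature on the latest commit
git log --show-signature -1
```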
434
+
435
+ **Gerhard Lazu:** Okay.
436
+
437
+ **Dan Lorenc:** But separately, you wanna sign everything. It's kind of where we're going here in this world. Signing commits is great; they can be used to back-up and provide other guarantees about who actually authored those commits as they travel from your computer, to GitHub, to forks across GitHub, to package managers and everything like that. But that's just one link in the software supply chain. Security has been a huge hot topic over the last couple of years, and signing commits is kind of the first step. You're on a computer, you're typing code on your keyboard... That is the birth of software, as that code gets entered into your editor. So signing that makes a lot of sense.
438
+
439
+ As it gets pushed up to a repository and it gets tagged, you wanna sign those tags too, so somebody knows that the release was authorized. As those tags get pulled down and compiled into artifacts, it makes sense to start signing those, too. And that's where Sigstore is starting to see the most adoption right now, in signing various release artifacts. It could be zip files, or tarballs, or more commonly today we're starting to see container images used for generic package management artifacts. One of the projects in Sigstore called Cosign is dedicated to signing container images.
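For a sense of what that looks like in practice, signing and verifying an image with Cosign is a couple of commands - a minimal sketch, with the registry and image name as placeholders:

```shell
# Generate a key pair (cosign.key / cosign.pub)
cosign generate-key-pair

# Sign the image; the signature is pushed to the registry alongside the image
cosign sign --key cosign.key registry.example.com/myapp:1.0.0

# Later, anyone with the public key can verify it
cosign verify --key cosign.pub registry.example.com/myapp:1.0.0
```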
440
+
441
+ And the kind of cool thing is because the container image standards have gotten so pervasive, we're starting to see people cram all their things into container images that aren't even container images.
442
+
443
+ **Gerhard Lazu:** Oh, yes.
444
+
445
+ **Dan Lorenc:** So the WebAssembly modules have a little specification for how to store those in a container image without having a whole new package manager. So all these artifacts that come out from your build system, from your CI/CD system are very important to sign too, because there's tons of different attacks that could happen when you lose that link between an opaque binary blob in the source code repository it actually came from.
446
+
447
+ **Gerhard Lazu:** I think Go has possibly the best time when it comes to signing, because you can build from scratch, and then you don't worry about where the scratch comes from... I think it's FROM scratch; it's just empty, there's nothing there... But what about when you build, for example, FROM ubuntu? That still happens quite a bit. Can you use Cosign to check that FROM ubuntu - not just that layer, but everything underneath - has been signed? Does that exist today?
448
+
449
+ **Dan Lorenc:** Yeah. We're talking about kind of base images and image hierarchies and stuff here when it comes to containers, but... Yeah, a couple things there. Go has some awesome support for static compilation as a Go binary, which means you can throw it into a container image without any of the other operating system runtime stuff. So if you do it from scratch, that's awesome; there's no base image to check, the only thing in there is your binary and some configuration. So you can sign that resulting image, and in that case there's no base image to check. And you can actually look at a container and prove that it was from scratch later, after it was built. There would only be one layer inside of that, you don't have to worry.
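A generic sketch of the "from scratch" pattern Dan describes - a statically compiled Go binary copied into an otherwise empty image (the Go version and paths are illustrative):

```dockerfile
# Build a fully static Go binary
FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# FROM scratch: no base image at all, so the only layer is your binary
FROM scratch
# (or: FROM gcr.io/distroless/static, if you also want CA certs and tzdata)
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```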
450
+
451
+ There's been some other recent work in the OCI (Open Containers Initiative) to start propagating a lot more meta data around. One of the issues that's been around for a while is that if you did it from Ubuntu and you threw a Go application into there, it's really hard to figure out after it was built that it was actually from Ubuntu, or which Ubuntu that was. But a couple months ago, one of my colleagues, Jason Hall at Red Hat, finally got a new field approved in the OCI specification for a standard base image annotation. So build tools can start setting that in these JSON manifests to indicate which Ubuntu was used, where it was found, what the digest of that was at that time... And you can actually check that later, so you don't even really need to trust that tool. So it's all about kind of leaving these breadcrumbs around.
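That base image metadata ends up as ordinary annotations in the image manifest - roughly like this, where the values shown are made up and the keys follow the OCI image-spec annotation names:

```json
{
  "annotations": {
    "org.opencontainers.image.base.name": "docker.io/library/ubuntu:20.04",
    "org.opencontainers.image.base.digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
  }
}
```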
452
+
453
+ Now that we have that new breadcrumb (you know, from the fairytale), you can follow that back and you can find the Ubuntu image and you can check to see if that was signed by the original publisher. So this is something that just in the last couple of months has started becoming possible.
454
+
455
+ **Gerhard Lazu:** Yeah, that's really cool.
456
+
457
+ **Dan Lorenc:** \[01:16:02.01\] A good use case there, if you wanna see that in practice, is actually something that fits right between from scratch and Ubuntu, which are the distroless base image suite, if you're familiar with those...
458
+
459
+ **Gerhard Lazu:** Yes.
460
+
461
+ **Dan Lorenc:** Yeah, so they're way closer to from-scratch. They have just a couple of other files you might need, even if you have a static Go application. Things like CA certificates, timezone data... A whole bunch of small text files that your application might need or expect to be in certain places. And those are actually built and signed with the Sigstore tooling...
462
+
463
+ **Gerhard Lazu:** Interesting.
464
+
465
+ **Dan Lorenc:** ...and they have a bunch of other cool properties, like they're reproducible, so we have a whole bunch of different build systems reproducing those builds and publishing kind of proofs that they reproduced them... So you can look all of that up in our transparency log, and verify it all the way back to the from-scratch.
466
+
467
+ **Gerhard Lazu:** As far as I know, distroless is a concept that comes from Google... And I'm wondering, is that something that you're involved with, distroless?
468
+
469
+ **Dan Lorenc:** Yeah, so I started that project with one of my co-workers, Matt Moore, years and years and years ago. We kind of did it as a proof of concept, to show what some of this stuff looked like. We were playing around with the Bazel toolset at that time, and we got reproducible container builds working, and it was pretty cool. He even talked about it at a conference - I think it was a JFrog swampUP - and we just kept playing with the repository. We didn't really expect much to come out of it.
470
+
471
+ Then a couple of years later, as it happens in open source, the Kubernetes release engineering team - Stephen Augustus and his crew - moved all of the Kubernetes base images from Debian, or something like that, to distroless, without even really telling us. So all of a sudden, overnight, what started as a little hobby project became a piece of critical infrastructure for the entire container ecosystem.
472
+
473
+ **Gerhard Lazu:** Wow. I'm connecting some very important dots right now... We don't have the time to go into this. You have no idea how relevant this is to many of the topics and threads that I have in the background. I intend to come back to this in a few months; maybe a few weeks, but I'm thinking months.
474
+
475
+ **Dan Lorenc:** Perfect.
476
+
477
+ **Gerhard Lazu:** But I'd like to talk about the big news right now...
478
+
479
+ **Dan Lorenc:** Sure.
480
+
481
+ **Gerhard Lazu:** ...and that is the Chainguard About page.
482
+
483
+ **Dan Lorenc:** \[laughs\]
484
+
485
+ **Gerhard Lazu:** That's one of my favorite About pages... Can you tell us the story about that? First of all, let me explain how it works, because I love that. So if you go to Chainguard.dev/about, and you click on the faces of the different people that are part of Chainguard, something amazing happens... And I'll let you discover that. \[laughs\] But can you tell us the story behind it, Dan?
486
+
487
+ **Dan Lorenc:** Sure. My version of the story is that we announced Chainguard, our new company, a couple of weeks ago. Scott Nichols, one of our co-founders, was working very hard on that website to get it set up. I can't do any kind of design at all; I'm terrible at frontend stuff, and everything like that, so I hadn't even really been paying too much attention to it... And the website went out, and it was awesome. And then everybody on Twitter just started laughing and telling all these jokes about the About page, and I had no idea what was happening. They were talking about all of these Easter eggs... And it took me a couple of days before somebody finally showed me what was happening.
488
+
489
+ Scott put in a really funny Easter egg about my hair here that we're talking about now. If you click on anybody's faces on the About page, you get a pretty cool effect.
490
+
491
+ **Gerhard Lazu:** Okay... That is your hair. \[laughter\]
492
+
493
+ **Dan Lorenc:** I think it's a photoshopped, exaggerated version of my hair... But yeah.
494
+
495
+ **Gerhard Lazu:** Has the pandemic something to do with it?
496
+
497
+ **Dan Lorenc:** Yeah, so my hair has had a couple of phases in the last few years. But yeah, I basically haven't gotten it cut since the pandemic started. There was a brief phase where -- I have very curly hair, and it was just kind of going out like this for a while... But as you can tell now, it kept growing, and it has now collapsed under its own weight and fallen down... So those pictures -- that hair is a little outdated, but it did look like that at one point in time.
498
+
499
+ **Gerhard Lazu:** That's crazy. That is my favorite part, by the way... \[laughs\]
500
+
501
+ **Dan Lorenc:** I think he was looking at the analytics stats for our page - because we've put an analytics thing on there - and the About page has more views than anything else on the website right now.
502
+
503
+ **Gerhard Lazu:** \[laughs\] So we'll just pile on top of that. Alright. The effect on Kim - I think it looks the best. I tried all the faces, but I think it suits her the best. \[laughs\]
504
+
505
+ **Dan Lorenc:** Yeah, I didn't even realize he did it for all of the faces at first. I thought it was just mine.
506
+
507
+ **Gerhard Lazu:** For all of them, yeah.
508
+
509
+ **Dan Lorenc:** Yeah, it took me a little bit to realize the full extent of the Easter egg.
510
+
511
+ **Gerhard Lazu:** \[01:20:05.03\] Any other Easter eggs that you're aware of, that we should check out?
512
+
513
+ **Dan Lorenc:** Not that I'm aware of... You can ask Scott Nichols. He probably has more.
514
+
515
+ **Gerhard Lazu:** Yeah, Scott, please... Stop working on features. Give us more Easter eggs.
516
+
517
+ **Dan Lorenc:** Perfect.
518
+
519
+ **Gerhard Lazu:** So why do you think that the world needs Chainguard then?
520
+
521
+ **Dan Lorenc:** Yeah, I think we need something here. So I've been working on software supply chain security for probably the last three(ish) years, kind of full-time almost. I got worried about it a little bit before then... But yeah, I've been doing kind of nothing but that for about the last three years. The most of that time was at Google.
522
+
523
+ So yeah, three years ago nobody even understood it, the term wasn't around, nobody cared about it... We were kind of running around, telling everybody "You should be paying attention to what goes into these containers", and everybody said "Oh, we have other problems. This is fine." Until probably a year and a half ago, when things started turning around. We started getting all these reports of different open source libraries being attacked, or taken over by malicious actors... Companies started having internal attacks, insider threats... And finally, the huge one, the tipping point, was the famous attack on SolarWinds back in December of last year, the SUNBURST attack... and the downstream effects that all of the other customers of SolarWinds had. The impact led to the whole shift, kind of overnight, with people saying "Yeah, we haven't been paying attention to this for years. What's going on? Let's go try to fix this."
524
+
525
+ **Gerhard Lazu:** Yeah.
526
+
527
+ **Dan Lorenc:** It led to government regulations, the EU is working on new standards, the U.S. government put out an executive order, calling for institutions to start figuring out what to do, and kind of change the way that we build software to fix all of this and make it more secure, leave a lot more of those kind of verifiable breadcrumbs (as we talked about) around, to make a lot of these attacks harder.
528
+
529
+ **Gerhard Lazu:** I'm really glad that the world is taking this seriously; it was high time. And thank goodness nothing worse happened... But it is obvious that we have to act fast on this, and I'm glad that you, first of all, are a small team of crazy people that really believe in this. I think that is the best way of driving change, and I'm glad that many other companies are paying attention.
530
+
531
+ I'm sure over the next year, next two years, this will just grow in popularity and importance, and I'm glad that someone like you is steering this. And when I say you, I mean Chainguard.
532
+
533
+ **Dan Lorenc:** Well, thank you...
534
+
535
+ **Gerhard Lazu:** So I know that you're back from KubeCon now... KubeCon is over for you, at least in person. What was it like to be there in person?
536
+
537
+ **Dan Lorenc:** It wasn't as weird as I thought. I hadn't been around big crowds of people in a while, it's been a long time... I was at one smaller conference a couple of weeks ago and sort of warmed back up to it. It was just awesome to see the energy; I could tell the whole community needed this to get back together, set aside some time to talk about open source, and relax a little bit, as things start to get back to normal. It was exhausting though, I'll say that. It's a long week; those conference sessions were long days, and I think I just forgot how tiring these conferences can be.
538
+
539
+ **Gerhard Lazu:** I know that you also had Supply Chain Security Con. I almost called it Supply Chain Con. Crazy. \[laughs\] No, Supply Chain Security Con. You even referred to it as a day -1 event, which I think is important in relation to KubeCon. I really like that. How was that?
540
+
541
+ **Dan Lorenc:** Yeah, so Supply Chain Security Con was a day -1 event - I think I kind of made up that term. KubeCon has had a long history of day zero events, or co-located events the day before the conference. There have just been so many topics to cover, and it's been so long since we've had a KubeCon, that the organizers decided to have two of those days. So the Monday of this week - the conference officially started Wednesday, but on the Monday of this week we started off with a day -1 event called Supply Chain Security Con, which the Continuous Delivery Foundation and a bunch of other companies helped sponsor and put together.
542
+
543
+ **Gerhard Lazu:** Okay. So this makes me think of the coolness wall at Top Gear. I don't know if you remember that, I don't know if you watch Top Gear, but they had a wall and they used to rank cars... And sub zero were the really cool ones. Sub zero was like the coolest car category they had. So -1 sounds a bit like sub zero to me... I think there's a link there.
544
+
545
+ So as we are preparing to wrap this up, I have two more questions. Your favorite KubeCon moment, and what is coming in the next six months.
546
+
547
+ **Dan Lorenc:** Oh, my favorite KubeCon moment was the talk from Jon Johnson Jr. and Dan Mangum, on crazy things you can do with OCI registries.
548
+
549
+ **Gerhard Lazu:** Oh, yes.
550
+
551
+ **Dan Lorenc:** I can't wait until that recording gets posted - you might have seen some of the buzz around it on Twitter... They actually built a chat application that worked inside of OCI-compliant container registries. That was just awesome. They answered the actual Q&A for the talk using this chat application. So the audience was there, asking questions, and layers and container images were getting thrown around to make it all work... But that was awesome. That was my favorite moment.
552
+
553
+ **Gerhard Lazu:** Amazing. What's happening in the next six months for Chainguard, for you...? Anything interesting? Are you getting a haircut? \[laughs\]
554
+
555
+ **Dan Lorenc:** Probably, probably... It's getting a little long at this point. But yeah, for Chainguard we're figuring out what we're gonna be doing. Getting our feet under ourselves, and just trying to stay focused and double down on the awesome momentum we've had in Sigstore, and continuing to push that forward across all the different language ecosystems, and package managers, and container images around the world.
556
+
557
+ So yeah, I look for hopefully even more Sigstore adoption than we're already seeing, and then starting to figure out what we're doing as a company.
558
+
559
+ **Gerhard Lazu:** Dan, thank you very much for making the time. This has been an absolute pleasure. I'm looking forward to next time, and I hope it won't be that long before we meet again. Thank you very much.
560
+
561
+ **Dan Lorenc:** Sure. Thanks a lot for having me.
Gerhard at KubeCon NA 2021: Part 1_transcript.txt ADDED
The diff for this file is too large to render. See raw diff
 
Gerhard at KubeCon NA 2021: Part 2_transcript.txt ADDED
The diff for this file is too large to render. See raw diff
 
Grafana’s "Big Tent" idea_transcript.txt ADDED
@@ -0,0 +1,741 @@
1
+ [0.00 --> 4.78] Hey, how's it going? I'm your host, Gerhard Lazu, and you're listening to Ship It,
2
+ [5.04 --> 9.96] a podcast about getting your best ideas into the world and seeing what happens.
3
+ [10.28 --> 16.04] We talk about code, ops, infrastructure, and the people that make it happen. Yes,
4
+ [16.26 --> 20.62] we focus on the people because everything else is an implementation detail.
5
+ [21.06 --> 25.98] I last spoke to Tom in Changelog episode 375 when I went to my first KubeCon.
6
+ [25.98 --> 31.38] So many things changed since then. The one thing that didn't change is me using Grafana on a daily
7
+ [31.38 --> 38.30] basis. But what is this new thing called Loki? And what about Tempo? While the 2021 Changelog.com
8
+ [38.30 --> 44.18] setup uses Grafana Agent with Prometheus and Loki via Grafana Cloud, we don't use Tempo. Yet.
9
+ [44.58 --> 48.78] By the way, are you curious to know how Grafana Cloud can offer such a generous free tier?
10
+ [49.26 --> 54.90] Tom has a really good answer. The solution is built into the Cortex architecture. And yes,
11
+ [54.90 --> 58.98] Cortex is the reason why we have a VP of product on Ship It in the first place.
12
+ [59.38 --> 63.76] Anyways, would you like to watch me and Tom pair and build Grafana dashboards like pros?
13
+ [64.12 --> 69.04] Tom has this really interesting approach that I would like to learn too. We can either live pair
14
+ [69.04 --> 74.18] or record and then publish the video. Let me know your preference via our Changelog Slack
15
+ [74.18 --> 79.62] or just plain Twitter. Otherwise, I'll just pick one at random. I recommend that you listen to this
16
+ [79.62 --> 85.20] episode in combination with episodes three and 11. That's the best way to get a more complete
17
+ [85.20 --> 90.72] picture of the topics that we discussed today. Big thanks to our partners Fastly, LaunchDarkly,
18
+ [90.84 --> 96.88] and Linode. Our bandwidth is provided by Fastly, learn more at Fastly.com, feature flags powered by
19
+ [96.88 --> 103.70] LaunchDarkly.com, and we love Linode. They keep it fast and simple. Check them out at linode.com
20
+ [103.70 --> 104.90] forward slash changelog.
21
+ [110.90 --> 116.64] What's up, shippers? This episode is brought to you by our friends at Fly. Fly lets you deploy your
22
+ [116.64 --> 123.98] apps and databases close to your users in minutes. You can run your Ruby, Go, Node, Deno, Python,
23
+ [124.48 --> 130.48] or Elixir app and databases all over the world. No ops required. Fly's vision is that all apps should
24
+ [130.48 --> 135.02] run close to their users. They have generous free tiers for most services, so you can easily prove
25
+ [135.02 --> 139.68] to yourself and your team that the Fly platform has everything you need to run your app globally.
26
+ [140.10 --> 144.74] Learn more at fly.io slash changelog and check out the speedrun and their excellent docs.
27
+ [145.14 --> 148.50] Again, fly.io slash changelog or check the show notes for links.
28
+ [151.68 --> 155.60] We are going to shift in three, two, one.
29
+ [155.60 --> 174.62] Last time that we spoke, Tom, was at KubeCon 2019 North America. That was actually my first
30
+ [174.62 --> 180.50] KubeCon in San Diego, and it was an amazing one. I loved it. This was actually changelog episode
31
+ [180.50 --> 187.98] 3.75. And again, it was one of my favorites. That was almost two years ago. I know that a lot of
32
+ [187.98 --> 193.86] things have changed. First of all, Grafana was at version 6 back then. Now it's at version 8,
33
+ [194.10 --> 198.70] which was a massive improvement from version 7, which was a massive improvement from version 6.
34
+ [199.58 --> 204.14] What other things changed in the last two years, almost two years since we spoke?
35
+ [204.38 --> 208.68] Oh, wow. Yeah. I mean, two years. How do we cover two years in five minutes?
36
+ [208.68 --> 215.40] I think working backwards, we've launched Tempo, the tracing system from Grafana Labs, which is
37
+ [215.40 --> 222.02] kind of cool. Slightly different take on distributed tracing, focusing on very efficient storage of the
38
+ [222.02 --> 228.12] traces itself and very, very scalable. We've done Loki 2.0, our log aggregation systems,
39
+ [228.64 --> 232.96] over two years old now. And with Loki 2.0 came a much more sophisticated query language.
40
+ [232.96 --> 238.16] That's really cool because now you can start to use Loki in anger and really kind of
41
+ [238.16 --> 243.52] extract metrics and really dig into your logs with it. That was a really exciting design process
42
+ [243.52 --> 248.56] for the language as well, because we always wanted it to be really heavily inspired by Prometheus,
43
+ [248.82 --> 253.24] but it's logs in the end. It's different to time series. So we actually collaborated with
44
+ [253.24 --> 258.98] Frederick from the Prometheus team. And he really influenced the design. I remember one of the calls,
45
+ [258.98 --> 264.32] we came up with one of the things that I think makes LogQL really cool, which is you've got the
46
+ [264.32 --> 269.88] pipeline operator for filtering logs. So you use pipelines to filter your logs. And we kind of stuck
47
+ [269.88 --> 274.08] with that for everything in the log space. And then the minute you start working with metrics,
48
+ [274.36 --> 278.54] you start using brackets and it looks like PromQL, like the Prometheus query language.
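As a rough illustration of that split - with a made-up label selector, endpoint and credentials, so this is purely a sketch rather than anything from the changelog.com setup - here is how the two query shapes look when sent to Loki's HTTP API:

```python
# Sketch only: the Loki URL, tenant credentials and labels below are invented.
# What matters is the query shapes: pipe operators filter log lines, and the
# PromQL-style rate(...)[5m] wrapper turns the same stream into a metric.
import time
import requests

LOKI_QUERY_URL = "https://loki.example.com/loki/api/v1/query_range"  # hypothetical endpoint
AUTH = ("tenant-id", "api-key")  # placeholder credentials

log_query = '{app="ingress-nginx"} |= "GET" != "healthz"'        # logs: a pipeline of filters
metric_query = 'sum(rate({app="ingress-nginx"} |= "GET" [5m]))'  # metrics: PromQL-like brackets

end_ns = time.time_ns()
start_ns = end_ns - 3600 * 10**9  # last hour

for query in (log_query, metric_query):
    resp = requests.get(
        LOKI_QUERY_URL,
        params={"query": query, "start": start_ns, "end": end_ns},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    # Log queries come back as "streams"; metric queries as "matrix" or "vector".
    print(query, "->", resp.json()["data"]["resultType"])
```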
49
+ [278.84 --> 282.68] And it just means you look at a query and it's really obvious that that part of the query deals
50
+ [282.68 --> 287.68] with logs and that part of the query deals with metrics. Working backwards more exemplars in Prometheus
51
+ [287.68 --> 292.66] and in Grafana. So you can link from metrics to traces. You put little dots on the graphs and
52
+ [292.66 --> 296.24] the dots indicate a trace and you can click on it. And that whole kind of experience works.
53
+ [296.66 --> 302.74] And you bring up KubeCon 2019, right? I think that was the year Frederick and I gave a keynote address
54
+ [302.74 --> 311.36] on the future of observability. And in that keynote, we predicted that linking metrics and logs and
55
+ [311.36 --> 316.52] traces and correlating and building experiences that combine them would be the future. Now, of course,
56
+ [316.52 --> 321.22] it's like a bit tongue in cheek because I have the great opportunity and I'm very lucky to be able
57
+ [321.22 --> 325.72] to influence what we do at Grafana Labs. So, you know, we've kind of spent the last two years making
58
+ [325.72 --> 331.78] that keynote happen and making it possible to combine those metrics and logs and traces in a
59
+ [331.78 --> 335.12] single development experience, in a single on-call kind of instant response.
60
+ [335.66 --> 340.96] I could go on. Like there's so many things that have changed, right? We've grown hugely at Grafana Labs.
61
+ [340.96 --> 348.18] We're now over 400 people, which is just like I joined when we were about 25, 26 people three and
62
+ [348.18 --> 354.68] a half years ago. So we launched a GEM, Grafana Enterprise Metrics, which is our kind of self-managed
63
+ [354.68 --> 360.38] enterprise version of Cortex, the scalable version of Prometheus, the other CNCF project.
64
+ [360.74 --> 364.60] Yeah, there's so many. And I'm really still only talking about kind of the second half of last year.
65
+ [364.60 --> 368.88] And I guess, you know, when you ask that question, everyone always responds with pandemic as well.
66
+ [369.26 --> 373.28] I kind of glossed over that, but we had a global pandemic. I think what's really interesting,
67
+ [373.34 --> 378.80] obviously, is huge impact, but Grafana Labs was set up from day zero to be remote first.
68
+ [379.44 --> 384.66] And so I think we've been super lucky that the impact has been less than it has been on other
69
+ [384.66 --> 389.08] organizations. Yeah, like I could go into any more of those, but I'll stop there.
70
+ [389.08 --> 393.70] Yeah. I think I remember that the future of observability keynote that you gave,
71
+ [393.94 --> 399.86] that was a really good one, inspirational one. And I could see it. I could see it just as like
72
+ [399.86 --> 405.78] the vision that you shared. And I remember thinking, wow, if they pull it off, this is going
73
+ [405.78 --> 410.28] to be amazing. And guess what? You did. And even more so.
74
+ [410.34 --> 413.34] I can't take all the credit, right? Like we did, I did the keynote with Frederick.
75
+ [413.54 --> 417.70] When I say you, I mean Grafana Labs, like, you know, the whole org, right?
76
+ [417.70 --> 422.96] That you're part of the whole team that you're part of. But you like, you know, you were there,
77
+ [423.44 --> 427.90] you had this vision, you shared it. I'm sure everybody contributed to it. And then everybody
78
+ [427.90 --> 434.12] made it happen. And I really love that journey, seeing how things have been happening with Loki.
79
+ [434.20 --> 440.32] I remember when Loki version one came out and I thought, wow, this makes so much sense. I was so
80
+ [440.32 --> 445.14] keen to start using it. And we did, even for changelog. We used Grafana for a long time,
81
+ [445.14 --> 451.32] Prometheus. Then we went to Loki and that was great. And then we thought, hmm, if only we could
82
+ [451.32 --> 456.84] delegate this problem to someone else. And guess what? Grafana Cloud came along, the hosted managed
83
+ [456.84 --> 462.32] service. You had some very generous tiers. Once that changed, everything changed. So all of a sudden,
84
+ [462.38 --> 465.88] we no longer had to run our own Grafana and Prometheus. Not that it was difficult,
85
+ [465.88 --> 472.94] but it's much easier to just run the Grafana agent. That's all you need. Send everything to
86
+ [472.94 --> 479.80] Grafana Cloud and it just works. And with the last changes of the alerts, like I think that was the
87
+ [479.80 --> 484.72] weak point of Grafana for a long, long time. And now you saw that as well. So there are all these
88
+ [484.72 --> 490.98] things just falling into place naturally and being able to know what's coming and seeing it happening
89
+ [490.98 --> 496.30] every six months, right? There's like more and more and more. It's like, we know what to expect.
90
+ [496.76 --> 499.86] You're delivering. Please carry on. That's what I'm thinking.
91
+ [500.54 --> 504.52] Thank you very much. Yeah. You know, I miss so much out of my what's happened because yeah,
92
+ [504.66 --> 511.42] unified alerting is a huge step in the Grafana story. I'm really pleased as the way the company
93
+ [511.42 --> 515.22] came together. We used to have two alerting systems, right? We had the Grafana alerting system
94
+ [515.22 --> 519.06] and the Prometheus alerting system. And they were worlds apart. You know, on one hand,
95
+ [519.06 --> 523.22] the Grafana alerting system is probably the easiest one that exists out there, right? It's very
96
+ [523.22 --> 527.50] accessible, very easy to get started with. And on the other hand, the Prometheus system is probably
97
+ [527.50 --> 532.20] one of the most sophisticated and powerful ones. And so I think it was really exciting, right? How
98
+ [532.20 --> 537.64] the team could combine the power of the Prometheus system, right? With multi-dimensional alerts,
99
+ [538.22 --> 543.66] with alert managers, routing, grouping, and deduping, and silencing, and bundle all these features
100
+ [543.66 --> 549.52] into Grafana in a way that makes them easy to use and gives you that level of user experience
101
+ [549.52 --> 554.66] that people have come to expect. And best of all, like we haven't duplicated any features,
102
+ [554.72 --> 559.08] right? We're just using Alert Manager under the hood. We're using the same API as Prometheus
103
+ [559.08 --> 564.38] under the hood. So it's true to our open source routes as well. And that's like, the team did a
104
+ [564.38 --> 569.98] fantastic job with unified alerting. I think the thing you say about cloud, right? The generous free
105
+ [569.98 --> 574.40] tier, for instance, we launched that in January, I think. We've always had a kind of free tier.
106
+ [574.66 --> 579.62] We've always allowed you to have a free Grafana instance, for instance. The work that goes into
107
+ [579.62 --> 584.32] actually being able to offer a free tier, there's so much going on behind the scenes,
108
+ [584.38 --> 588.86] right? Just at a very architectural level. The point I'd always make here is that
109
+ [588.86 --> 595.80] you need the marginal cost of a new Prometheus instance, or of a new Loki instance, or a new
110
+ [595.80 --> 599.96] Tempo instance. You need it to be effectively zero, right? You can't offer a free tier unless
111
+ [599.96 --> 604.38] the cost of the thing you're offering is as close to zero as possible. So this means
112
+ [604.38 --> 609.00] behind the scenes, right? We can't be spinning up a new Prometheus pod, or a new Loki pod,
113
+ [609.10 --> 613.80] or a new Grafana pod, or a new Tempo pod for every customer that signs up, right? That would
114
+ [613.80 --> 619.54] get too expensive for us to offer it. We're not that big a company yet. And so fundamentally,
115
+ [619.68 --> 623.52] the architecture of all of these systems has to be multi-tenanted, right? And we've built,
116
+ [623.86 --> 626.62] and this is where Cortex comes in, right? We've built this horizontally scalable,
117
+ [626.62 --> 632.40] multi-tenant version of Prometheus, which means provisioning a new instance in that multi-tenant
118
+ [632.40 --> 636.56] cluster is basically free. It doesn't really cost us anything. I mean, once you start sending
119
+ [636.56 --> 641.26] metrics, there's some cost incurred, but because it's multi-tenanted, we can start to take advantage
120
+ [641.26 --> 645.82] of kind of statistical multiplexing techniques and really get to a point and really drive down
121
+ [645.82 --> 649.82] the cost of offering that service, which allows us to make the free tier so generous.
122
+ [649.82 --> 655.68] And that architecture has been replicated in Loki. Well, not replicated. It uses the same code.
123
+ [655.76 --> 660.52] It uses the same module system, the same ring, the same architecture, and the same techniques
124
+ [660.52 --> 668.22] in Loki and in Tempo. And that consistency across the offerings just also carries over to the kind
125
+ [668.22 --> 672.80] of operational and cognitive burden of running this because it's the same, because you scale it in
126
+ [672.80 --> 677.70] the same way and you do instant response in the same way. So yeah, it's incredibly exciting to
127
+ [677.70 --> 683.60] finally feel like you're in the last mile of delivering on a vision that's been in progress
128
+ [683.60 --> 688.00] for kind of five or six years. So everything that you've said makes a lot of sense to me,
129
+ [688.16 --> 694.90] but I know that many people will be confused because you are VP of product. How on earth does a VP of
130
+ [694.90 --> 700.90] product know so many things about code and how things actually work? And I know that you're one of
131
+ [700.90 --> 705.52] the Cortex co-authors, right? You've started Cortex. I don't know who the other author is.
132
+ [705.52 --> 711.30] It was Julius, actually - the chap who was one of the original founders of the Prometheus project.
133
+ [711.68 --> 712.30] Julius Volz?
134
+ [712.64 --> 713.24] Julius Volz.
135
+ [713.56 --> 721.94] Right. Okay. So you and Julius, you started Cortex, which went on to grow. And I think it's a
136
+ [721.94 --> 727.22] very important component of Grafana Cloud as an engine, an inspiration for Loki, which I think
137
+ [727.22 --> 731.24] you also had something to do with, right? Like when you started the code base. So how does that work?
138
+ [731.24 --> 736.68] How can you be VP of product and code go at a very advanced level? How does it work?
139
+ [737.10 --> 742.92] Titles in the abstract, pretty meaningless, right? So yes, my title is VP of product. And I do have a
140
+ [742.92 --> 747.74] lot of kind of product management responsibilities in the company, but my background is a software
141
+ [747.74 --> 753.84] engineer. I've been a software engineer now for 15, 16 years. I've always worked on open source code
142
+ [753.84 --> 758.52] bases, you know, straight out of university. I was kind of tangentially involved in the Xen
143
+ [758.52 --> 763.38] hypervisor project. And so I worked a little bit on the kind of control tools there. I started a
144
+ [763.38 --> 768.76] company that got involved in the Cassandra distributed database. And then, you know, then
145
+ [768.76 --> 774.54] worked on Prometheus and Cortex. I've just always been a software engineer. I took a brief stint as
146
+ [774.54 --> 779.42] doing some engineering management at Google, also some site reliability engineering, where I kind of
147
+ [779.42 --> 784.34] learned a lot about the whole monitoring side of things. But yeah, at the end of the day, I've always
148
+ [784.34 --> 789.76] been a software engineer. I've always been passionate about this kind of thing. And it's just, you know,
149
+ [789.76 --> 795.58] I don't get to do as much software engineering now as it perhaps seems. You know, I have a large team
150
+ [795.58 --> 800.08] of software engineers who do that and really should take a lot more of the credit than perhaps I do.
151
+ [800.52 --> 805.48] But yeah, I still, you know, I was doing, I did a few PRs yesterday. That was mostly on some kind
152
+ [805.48 --> 810.72] of continuous deployment for some internal SLO dashboards. But I still, you know, I still try and
153
+ [810.72 --> 815.26] write bit of code. We had a hackathon recently internally where everyone in the company took a
154
+ [815.26 --> 820.70] week to kind of just code on whatever their imagination had been, you know, noodling over
155
+ [820.70 --> 825.66] for the past few months. And I took part. That was like, that was pretty cool. I managed to get a
156
+ [825.66 --> 829.04] couple of days of solid coding in. I'm not going to tell you what the project was though, because
157
+ [829.04 --> 834.68] that might become a future product. Who knows? Interesting. I was just going to ask that if any of
158
+ [834.68 --> 840.30] those projects are public, but I'm sure the good ones will be, right? Oh yeah. No, no. Some of them are,
159
+ [840.30 --> 846.14] right. So Bjorn and Dieter and Ganesh were working on one of their hackathon projects was
160
+ [846.14 --> 850.38] high definition histograms in Prometheus. And Ganesh has already tweeted about that and will
161
+ [850.38 --> 854.60] be putting out more information and the codes out there in public. I've seen that. There's a few of
162
+ [854.60 --> 859.62] them that are public and a lot of them are going to form future projects and potentially even future
163
+ [859.62 --> 864.92] products. I can give you a bit of a hint what the project I was working on was. So not a lot of
164
+ [864.92 --> 870.60] people know Grafana Labs, actually its first kind of time series database that it built for Grafana
165
+ [870.60 --> 875.78] Cloud. It's called Metric Tank. Metric Tank is Graphite-oriented, still written in Go,
166
+ [876.30 --> 880.44] still using a lot of the same techniques from modern time series databases like the Gorilla
167
+ [880.44 --> 886.48] encoding and so on, but mainly focused on building a kind of scalable multi-tenant cloud version of
168
+ [886.48 --> 891.44] graphite. And that's what kind of bootstrapped Grafana Cloud before I joined the company.
169
+ [891.44 --> 896.86] And then I joined and brought Cortex in with me. And since then, of course, the architecture has now
170
+ [896.86 --> 901.58] kind of moved towards a Cortex style architecture. The Metric Tank team within Grafana Labs for the
171
+ [901.58 --> 908.04] past year or so have actually been working on putting a graphite query engine on top of Cortex.
172
+ [908.72 --> 912.12] And we've actually, I think the launch of that, you know, it'll be seamless launch. Customers
173
+ [912.12 --> 917.34] shouldn't notice, right, that being moved off of Metric Tank and onto Graphite V5. That's actually
174
+ [917.34 --> 922.12] happening very soon. And that's kind of to give you a bit of a hint in the direction we're going. Now,
175
+ [922.56 --> 926.56] Grafana Enterprise Metrics and Grafana Cloud is a single time series database that you can query
176
+ [926.56 --> 931.56] through multiple different query languages. That's fascinating. And now you reminded me
177
+ [931.56 --> 938.84] the link between Acunu Analytics, the company that you were part of at some point, and the startup that
178
+ [938.84 --> 942.92] I was working for at the time, which was GoSquared, which was like real-time visitor analytics.
179
+ [942.92 --> 948.90] So GoSquared, we were using, I think, MongoDB heavily, and we were starting to look into
180
+ [948.90 --> 952.92] Cassandra. There was a Cassandra conference, and I thought you were presenting the analytic
181
+ [952.92 --> 960.56] side of things. And at the time, I was heavily invested in Graphite. Ganglia was there as well.
182
+ [960.72 --> 960.90] Yeah.
183
+ [961.02 --> 965.88] And I thought like, wow, this Graphite and scaling, those like fun days, challenging days.
184
+ [966.54 --> 970.72] And I looked at Acunu, I thought, wow, this is interesting. So they're using Cassandra
185
+ [970.72 --> 974.52] for the metrics, and it works really well. I remember even the demo that you gave.
186
+ [974.88 --> 978.48] I forget the conference name. This was 2012, 2013.
187
+ [979.04 --> 980.28] Yeah, I don't remember that then.
188
+ [980.28 --> 985.90] A long time ago, something like that. Yes. And so Graphite, right, was a great system,
189
+ [986.04 --> 990.76] but it didn't really scale. It was very problematic. And then Grafana came along,
190
+ [990.86 --> 995.32] but Grafana on top of Prometheus. So Prometheus had something new with it. But Prometheus in its
191
+ [995.32 --> 1001.32] incipient phase was, again, like single process, single instance. How do you scale that? Well,
192
+ [1001.40 --> 1008.56] it's not as easy. And Cortex, as far as I know, scales the way anyone would expect, right? You can
193
+ [1008.56 --> 1013.32] shard those metrics, you can replicate them, you have different backends for them. That was really,
194
+ [1013.48 --> 1019.54] really nice. So I can see history in a way repeating itself with Prometheus and Graphite.
195
+ [1019.54 --> 1024.62] And now I can see the link, right, where it's actually part of Cortex, or it will be part of
196
+ [1024.62 --> 1028.24] Cortex. That's really fascinating. Well, so it's interesting you mentioned that, right? Because
197
+ [1028.24 --> 1031.58] one of the things Acunu did, one of its contributions to the Cassandra project
198
+ [1031.58 --> 1036.42] was a technique called virtual nodes, right? Which is where in the earlier versions of Cassandra,
199
+ [1036.60 --> 1040.78] each node basically owned a single range in its distributed hash ring. I remember that.
200
+ [1040.94 --> 1044.36] The technique that Acunu added, and has been in Cassandra for ages now,
201
+ [1044.66 --> 1048.32] was the ability for a node to own multiple ranges, right? And the whole principle there being,
202
+ [1048.32 --> 1053.00] once you can own multiple ranges, like hundreds, like you then just pick them at random,
203
+ [1053.36 --> 1057.88] and you achieve a very good statistical kind of load balancing. What's maybe particularly
204
+ [1057.88 --> 1063.42] interesting is exactly the same techniques in Cortex, in Loki, in Tempo. And that's the ring I was
205
+ [1063.42 --> 1068.70] referring to earlier. This is like, it's basically just an almost identical copy, just in Go,
206
+ [1069.20 --> 1070.36] of the Cassandra hash ring.
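A toy version of that idea, just to make the principle concrete - this is not the Cortex or Cassandra code, only an illustration of giving each node many randomly placed tokens so that keys spread out statistically:

```python
# Toy consistent-hash ring with virtual nodes (tokens). Purely illustrative;
# real implementations (Cassandra, the Cortex ring) add replication, health
# checks and gossip on top of this idea.
import bisect
import hashlib
from collections import Counter


def _hash(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)


class Ring:
    def __init__(self, nodes, tokens_per_node=128):
        # Each node owns many pseudo-random points ("tokens") on the ring,
        # so load balances out statistically across nodes.
        self._tokens = sorted(
            (_hash(f"{node}-{i}"), node)
            for node in nodes
            for i in range(tokens_per_node)
        )
        self._points = [point for point, _ in self._tokens]

    def owner(self, key: str) -> str:
        # A key belongs to the first token clockwise from its hash.
        idx = bisect.bisect(self._points, _hash(key)) % len(self._tokens)
        return self._tokens[idx][1]


if __name__ == "__main__":
    ring = Ring(["ingester-1", "ingester-2", "ingester-3"])
    load = Counter(ring.owner(f"series-{i}") for i in range(10_000))
    print(load)  # roughly equal counts per node
```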
207
+ [1070.98 --> 1074.86] This makes me think of the old GoSquared team, because I remember Cassandra and how they were like,
208
+ [1074.92 --> 1078.30] so excited about this. And this was mentioned, like, wow, this is amazing.
209
+ [1078.32 --> 1085.36] Like MongoDB, I think rather Cassandra. I remember that. And it wasn't even like version one at the
210
+ [1085.36 --> 1091.06] time. I know that Netflix were big on it as well. And Adrian Cockcroft had like a great talk about it.
211
+ [1091.20 --> 1096.92] And like in that context, the AWS cloud came in. So many threads connecting in my head right now.
212
+ [1097.32 --> 1104.08] Wow. Okay. So let's take a step back from all these, I want to say rabbit holes, but like reminiscing
213
+ [1104.08 --> 1110.40] specific things, which are a thing of the past. And let's come back into the present with a question,
214
+ [1110.40 --> 1115.74] which I know very many people are, I'm not sure what they're struggling with, but they are, you know,
215
+ [1116.30 --> 1122.82] there are two sides to them. What is observability? Some say that it is not the three pillars, which is
216
+ [1122.82 --> 1128.10] metrics, logs, and traces. Some say that's not what observability is. What do you think? What is
217
+ [1128.10 --> 1133.08] observability to you, Tom? I mean, it's definitely a bit of an industry buzzword right now. The three
218
+ [1133.08 --> 1137.92] pillars definition is not that useful as a definition, right? It doesn't really describe
219
+ [1137.92 --> 1142.18] what you're trying to do or what the problem you're trying to solve. It more describes maybe
220
+ [1142.18 --> 1147.48] how you're solving some other problem, right? So whilst I don't necessarily think it's wrong,
221
+ [1147.74 --> 1153.36] like in a lot of places, in a lot of situations, observability does revolve around metrics and logs
222
+ [1153.36 --> 1159.10] and traces. It's not an answer to the question, what is observability? I've always really liked
223
+ [1159.10 --> 1166.12] the definition of observability is, you know, the name for the movement that is like helping
224
+ [1166.12 --> 1172.18] engineers understand the behavior of their applications and their infrastructure. It's about
225
+ [1172.18 --> 1178.50] any tool, any source of data, any technique that helps you understand how a large and complicated
226
+ [1178.50 --> 1185.80] distributed system is behaving and helps you analyze that. That's really my preference. I don't
227
+ [1185.80 --> 1189.28] necessarily think I speak for many people though when I say that. I've been thinking about this for
228
+ [1189.28 --> 1193.68] a couple of years. I had a couple of interesting discussions. Even the episode before this, that's
229
+ [1193.68 --> 1198.14] a really interesting one. If this is the first one that you're listening to, check that out, see,
230
+ [1198.28 --> 1205.80] you know, how the two compare for you. But I also agree that being curious about how things behave,
231
+ [1205.80 --> 1209.82] I think that's like the first requirement for observability. Are you curious? Do you care?
232
+ [1210.38 --> 1216.08] And if you care, great. So what are you going to do to understand your production or your system?
233
+ [1216.14 --> 1219.48] It doesn't have to be production, but it typically is because that's where the most interesting
234
+ [1219.48 --> 1226.42] things happen. So how do you do that? How do you take all those metrics, logs and traces or events,
235
+ [1226.66 --> 1230.10] whatever you call them, it doesn't really matter, to understand how the system behaves?
236
+ [1230.50 --> 1234.74] It's an interesting kind of way of phrasing it, right? Because what I think, what we really
237
+ [1234.74 --> 1241.36] internalize at Grafana Labs is kind of avoiding a one size fits all solution, right? So I know there
238
+ [1241.36 --> 1245.76] are some incredibly powerful solutions out there that are incredibly flexible, but at the end of the
239
+ [1245.76 --> 1250.38] day, we internally call it this kind of big tent philosophy, right? Where we try and embrace multiple
240
+ [1250.38 --> 1255.06] different solutions and multiple different combinations of solutions and really kind of focus
241
+ [1255.06 --> 1260.72] on helping users get the best out of a wide variety of techniques. Because really, you go into any
242
+ [1260.72 --> 1266.04] sufficiently large organization, it doesn't even have to be thousands of people, like even just hundreds
243
+ [1266.04 --> 1271.36] of people. And there's going to be one team over there that uses one monitoring solution and a team over
244
+ [1271.36 --> 1276.48] there that uses a different logging solution. And they're all going to be stuck in their own little silos, and they're
245
+ [1276.48 --> 1281.80] all going to have their own, you know, tools to use to analyze their data. And really, what we're trying to do at
246
+ [1281.80 --> 1286.82] Grafana is bring them all together into a single place and give them all the same experience. The way I've always
247
+ [1286.82 --> 1291.46] thought about it is, you know, when you get paged in the middle of the night, I don't want a system to
248
+ [1291.46 --> 1295.08] tell me necessarily what's wrong, because the reality is, if the system can tell me what's wrong,
249
+ [1295.34 --> 1298.66] it should probably be able to fix it for me. And I probably should have thought of it ahead of time,
250
+ [1298.78 --> 1302.64] and it probably should never have paged me. I only ever really want to get paged for things that I
251
+ [1302.64 --> 1307.42] wasn't expecting, right? And therefore, you know, I want to engage that kind of creative part of my brain.
252
+ [1308.06 --> 1314.18] And I want to come up with hypotheses as to why it's broken, right? And I'm going to, and then I want tools
253
+ [1314.18 --> 1320.44] that help me test those hypotheses and develop new hypotheses. So really, I'm not looking for a tool
254
+ [1320.44 --> 1326.06] that claims to automate kind of root cause analysis, or, or tell me exactly what's broken,
255
+ [1326.06 --> 1329.88] because, you know, if it can do that, it probably shouldn't have broken in that,
256
+ [1330.06 --> 1335.24] in that particular way. I'm looking for a tool that helps me test theories that I've got. Oh,
257
+ [1335.72 --> 1339.46] is it broken because of this? Oh, I can, I can correlate some metrics and some logs,
258
+ [1339.46 --> 1346.28] and I can see if that's the case. Is it broken because there's a tiny little service running on a
259
+ [1346.28 --> 1350.14] computer under someone's desk that's gone down? Oh, I can go and look at a distributed trace and
260
+ [1350.14 --> 1354.72] it will tell me if that's the case. Like I want a tool that helps me access data and test hypotheses.
261
+ [1355.22 --> 1360.52] And the nice thing I think about that as a guiding principle is it doesn't say, well,
262
+ [1360.62 --> 1364.54] the best way of doing that is with logs. It doesn't say the best way of doing that is with events.
263
+ [1364.54 --> 1369.78] And it doesn't say the best way of doing it is with metrics. It says the best way of doing it is
264
+ [1369.78 --> 1373.54] situational and depends on the problem and depends on the tools you've got available.
265
+ [1373.92 --> 1374.68] That's great.
266
+ [1374.68 --> 1394.90] This episode is brought to you by our friends at LaunchDarkly, feature management for the modern
267
+ [1394.90 --> 1400.32] enterprise, power testing in production at any scale. Here's how it works. LaunchDarkly enables
268
+ [1400.32 --> 1405.54] development teams and operation teams to deploy code at any time, even if a feature isn't ready
269
+ [1405.54 --> 1410.24] to release to users. Wrapping code with feature flags gives you the safety to test new features
270
+ [1410.24 --> 1415.48] and infrastructure in your production environments without impacting the wrong end users. When you're
271
+ [1415.48 --> 1419.68] ready to release more widely, update the flag status and the changes are made instantaneously
272
+ [1419.68 --> 1424.82] by the real-time streaming architecture. Eliminate risk, deliver value, get started for free today
273
+ [1424.82 --> 1428.32] at LaunchDarkly.com. Again, LaunchDarkly.com.
274
+ [1430.32 --> 1443.76] I really liked your last answer. And I think now is a great time to start looking at the Grafana
275
+ [1443.76 --> 1451.04] ecosystem, the Grafana Labs Cloud, just because Grafana means many things. How would you solve
276
+ [1451.04 --> 1457.30] specific problems with the tools that you have available in Grafana? So let's take a specific
277
+ [1457.30 --> 1465.30] example. Let's imagine that every now and then, my website, some of the requests are slow. What
278
+ [1465.30 --> 1470.74] would I do to understand why certain requests are slow? Let's imagine this is a monolithic application,
279
+ [1470.74 --> 1476.42] changelog.com. I'm winking right now. It's a Phoenix app. So what would I do?
280
+ [1476.42 --> 1478.18] Actually, I don't know what Phoenix is.
281
+ [1478.74 --> 1483.46] It's a framework similar to Ruby on Rails, but it's based on Elixir, which is
282
+ [1484.18 --> 1488.02] syntax is similar to Ruby, but it's really all running on the Erlang VM.
283
+ [1488.74 --> 1489.22] Oh, wow.
284
+ [1489.22 --> 1490.66] So it's like Ruby on Rails.
285
+ [1491.22 --> 1496.02] Is that a particularly large user base? It seems very nice. I've not heard of that before. Cool.
286
+ [1496.02 --> 1500.82] Right. So not necessarily. I mean, depending on what you mean by large,
287
+ [1500.82 --> 1503.06] but it scales really well because it's the Erlang VM.
288
+ [1503.06 --> 1504.26] Yeah, because it's Erlang. Yeah.
289
+ [1504.74 --> 1506.10] Everything is message passing.
290
+ [1506.10 --> 1506.58] Sweet.
291
+ [1506.58 --> 1512.34] You can have a cluster. It clusters natively. It forms a cluster. It starts sending messages.
292
+ [1512.34 --> 1518.74] I think one of the more popular apps that uses Erlang is WhatsApp. Everybody knows. Everybody uses.
293
+ [1519.30 --> 1524.02] And RabbitMQ is another messaging queue that also uses the same Erlang VM.
294
+ [1524.02 --> 1531.30] And I think the last one is Riak. It was like the database. I think it still exists. And it was by
295
+ [1531.30 --> 1531.78] Basho.
296
+ [1531.78 --> 1532.34] By Basho.
297
+ [1532.34 --> 1535.78] I remember it was like in the same quadrant, right? Where Acunu Analytics was there.
298
+ [1535.78 --> 1541.70] Manu was there. I think he was their managing director for the EU team. And he was at Acunu a
299
+ [1541.70 --> 1542.34] long time ago. Yeah.
300
+ [1542.34 --> 1544.58] There you go. So it's a small world, isn't it?
301
+ [1544.58 --> 1548.74] I think he's now at one of the cryptocurrency companies, but yeah, sorry, unrelated.
302
+ [1548.74 --> 1552.58] So coming back to this like Phoenix app. So the reason why I mentioned that it's a monolithic
303
+ [1552.58 --> 1556.82] app. It's important because it's not microservices, right? You don't have HTTP calls or
304
+ [1556.82 --> 1561.86] GRPCs. There's no such thing. It's a single app. It's a monolithic app. It talks to a database. It
305
+ [1561.86 --> 1566.98] has an Ingress Nginx actually in front. There's like a load balancer. And then in front of that,
306
+ [1566.98 --> 1571.62] you have a CDN. So the request comes, and this is like very specific, and maybe this will help.
307
+ [1571.62 --> 1577.94] The request goes through a CDN fastly. It hits a load balancer, which is a managed one,
308
+ [1577.94 --> 1584.42] like your ELB, whatever, the equivalent of that. Then it goes to Ingress Nginx. And then from Ingress
309
+ [1584.42 --> 1590.26] Nginx, it gets proxy to the right pod. Well, service pods, I don't have to start decomposing
310
+ [1590.26 --> 1594.74] this. And eventually it hits the database and then it comes back in again. At any one point,
311
+ [1594.74 --> 1601.22] it could be cached. Sometimes requests are slow. Why? How would we find out with the tools that exist
312
+ [1601.22 --> 1606.58] in the Grafana ecosystem world? No, it's a great question. So you already know that requests are slow.
313
+ [1606.58 --> 1610.90] So that's kind of interesting. I'm going to guess, or for the sake of this discussion,
314
+ [1610.90 --> 1615.38] that you've been told by your users that your requests are slow. So I would actually say,
315
+ [1615.38 --> 1620.34] first things first, let's kind of confirm that. We want to instrument the system. We want to get as
316
+ [1620.34 --> 1627.22] many useful metrics as we can out of it. You mentioned in ELB there, for instance, we put the
317
+ [1627.94 --> 1632.18] CloudWatch exporter on there and get the ELB metrics out into Prometheus. Now you can do that with the
318
+ [1632.18 --> 1638.26] open source exporter. We're also working on a service in Grafana Cloud where effectively we run
319
+ [1638.26 --> 1642.74] and manage that exporter for you just to reduce the number of things you need to run. This will give
320
+ [1642.74 --> 1647.46] you access to some rudimentary metrics, but generally I don't find CloudWatch metrics to be super useful.
321
+ [1647.46 --> 1652.26] I'm sorry, that was a bad example. So I gave an analogy. It's actually a Linode node balancer. I'm
322
+ [1652.26 --> 1655.70] pretty sure you don't think to agree with that, but it's like a managed HA proxy.
323
+ [1655.70 --> 1661.54] I wouldn't underestimate the Prometheus ecosystem. There's probably an exporter for Linode metrics
324
+ [1661.54 --> 1666.26] that import them into. And if there isn't, there will be by the time you finish this recording,
325
+ [1666.26 --> 1667.14] I imagine. I hope so.
326
+ [1667.14 --> 1670.58] Yeah. So I get metrics on the load balancer because it's always good to start at the very edge.
327
+ [1670.58 --> 1672.18] The CDN is first. What about the CDN?
328
+ [1672.18 --> 1677.46] Yeah. I don't know enough about Fastly, I'm afraid to really comment, but I'm sure there's some way of
329
+ [1677.46 --> 1683.22] getting logs or metrics from that. Okay. So we've hit something which I wasn't expecting to hit,
330
+ [1683.22 --> 1689.06] but let's just go with it. Okay. I looked at integrating Fastly logs with Grafana Cloud.
331
+ [1689.70 --> 1695.22] To do that, it only supports HTTPS, right? Because that's what Loki exposes, but we have to
332
+ [1695.78 --> 1701.78] validate the HTTPS endpoint that we're going to send logs to. The problem is how do you validate
333
+ [1701.78 --> 1707.94] that we own Grafana Cloud Loki? We can't do that. So what I'm saying is there's not a native
334
+ [1707.94 --> 1712.50] integration between Fastly and Grafana Cloud. And I would really like that. Actually,
335
+ [1712.50 --> 1715.62] there's something which we discussed in the previous episode, episode, no, two episodes
336
+ [1715.62 --> 1722.10] ago, episode 10. So that's the first part. How do we get from Fastly sending logs to Grafana Cloud?
337
+ [1722.10 --> 1726.66] It's not supported. What Fastly is telling us, you will need to have some sort of a proxy
338
+ [1726.66 --> 1732.90] that you can authenticate and then forward those logs to Grafana Cloud, to Loki specifically.
339
+ [1733.62 --> 1737.30] It's okay. Not great. I would like just to send those metrics directly. Sorry,
340
+ [1737.30 --> 1743.46] I keep saying metrics. I mean logs. Send the logs to Grafana Cloud. So that will be the first step.
341
+ [1743.46 --> 1749.86] Great. So let's say we understand the part between the CDN and the load balancer. Let's say that we
342
+ [1749.86 --> 1755.14] understand that path and we have some logs to tell us something. What do we do with those logs?
343
+ [1755.14 --> 1761.46] So this is, yeah. I mean, logs in and of themselves aren't seldom useful. So Loki in LogQL that I
344
+ [1761.46 --> 1766.34] referenced earlier would be able to turn those into some usable metrics, right? You'd be able to turn
345
+ [1766.34 --> 1773.38] them into request rates, error rates, and latencies if the log contains a latency. And you do that all
346
+ [1773.38 --> 1777.86] with Loki. And you can even, with the more recent versions of Grafana and Loki, you can build dashboards
347
+ [1777.86 --> 1782.18] out of those. And some of the cool stuff is like behind the scenes, there's a lot of caching going on
348
+ [1782.18 --> 1788.18] so that those dashboard refreshes don't overwhelm the Loki. And I always say with metrics, it'll tell you
349
+ [1788.90 --> 1793.86] when it happened. It'll tell you how much it happened. Maybe if you've got the granularity,
350
+ [1793.86 --> 1798.10] it might tell you where, which service or which region it happened in, but it won't actually tell
351
+ [1798.10 --> 1803.62] you what happened. It will just tell you that something was slow. So at that point, we start
352
+ [1803.62 --> 1809.22] digging in and there's a couple of techniques we can use. So firstly, I would instrument everything
353
+ [1809.22 --> 1812.98] in the stack. We talked about getting metrics from the CDN. We talked about getting metrics from the
354
+ [1812.98 --> 1817.86] load balancer, getting your Ingress Engine X is running on Kubernetes.
355
+ [1817.86 --> 1823.22] So it's trivial to deploy Promptail as a daemon set and get logs from every Kubernetes pod into
356
+ [1823.86 --> 1827.94] Loki. So you've got the Engine X logs, which again, Loki can extract metrics from,
357
+ [1827.94 --> 1834.42] really straightforward. Ward has a fantastic set of dashboards and examples of how to do that already.
358
+ [1834.42 --> 1839.30] Then you've got your application, the Elixir application. Now, I don't know enough about that,
359
+ [1839.30 --> 1843.38] but I'm going to assume there's a Prometheus client library out there. And so I would instrument
360
+ [1843.38 --> 1847.46] that. And I would follow whenever I'm instrumenting my own application, I tend to follow
361
+ [1847.46 --> 1852.66] a very simple method. If you've heard of Brendan Gregg's use method, then somewhat tongue in cheek,
362
+ [1852.66 --> 1857.78] I coined this phrase called the red method, which is request rate, error rate, and request duration.
363
+ [1857.78 --> 1862.18] Right? Red. Everything comes in threes and it's really easy to remember. So I would just try and
364
+ [1862.18 --> 1868.34] export a Prometheus histogram from the application with request rate, with error rate, and with duration.
365
+ [1868.34 --> 1872.74] And the histogram will capture all three. Finally, you mentioned a database. Let's just for argument's
366
+ [1872.74 --> 1877.22] sake, assume it's MySQL. They don't tend to actually export very good metrics. There is an exporter for
367
+ [1877.78 --> 1882.98] it in Prometheus. And we actually bake that into the Grafana agent to just to simplify and make it
368
+ [1882.98 --> 1888.10] easier and have less stuff to deploy. And so I would wire those up and get whatever metrics I can,
369
+ [1888.10 --> 1891.62] but I'd also gather the logs because the database logs tend to be a little bit more interesting.
370
+ [1891.62 --> 1897.06] Mm-hmm . So finally, this hasn't really caught on very much, but you see it in a lot of the dashboards that
371
+ [1897.06 --> 1901.86] my team and I have built. I tend to always kind of traverse the system from top to bottom.
372
+ [1902.42 --> 1909.22] I always have request rates on the left in panels on the left and durations like latency graphs on the
373
+ [1909.22 --> 1914.10] right. Just as a quick glance in the dashboard, you can typically see where the latency is being
374
+ [1914.10 --> 1919.62] introduced. Do you have a good dashboard that exemplifies this? Because what you say makes a lot
375
+ [1919.62 --> 1924.02] of sense. Is there a good dashboard that we can use as a starting point?
376
+ [1924.02 --> 1928.58] Mm-hmm . The Cortex ones are the ones that I've probably spent the most amount of time.
377
+ [1929.38 --> 1935.30] We ship, again, a bit of work we did with the Prometheus community was this standard called
378
+ [1935.30 --> 1940.42] Bixins, right? Which is a packaging format for Grafana dashboards and Prometheus alerts.
379
+ [1940.42 --> 1946.66] Mm-hmm . So we've built, there's 40 or 50 different mixins now from a lot of popular systems,
380
+ [1946.66 --> 1951.94] but one of them is Cortex. And it's just a versioned set of dashboards and alerts that are very flexible,
381
+ [1952.90 --> 1957.54] very easy to extend, which is kind of key, and very easy to kind of keep up to date with upstream.
382
+ [1958.34 --> 1962.34] Actually, the most popular mixin would be the Kubernetes mixin. I would wager that virtually
383
+ [1962.34 --> 1967.38] every Kubernetes cluster in the world is running the set of dashboards from the Kubernetes mixin,
384
+ [1967.38 --> 1971.30] which is kind of cool because I helped write a lot of those in the very early days, at least.
385
+ [1971.30 --> 1976.10] There's now a whole community that maintains and has taken them far beyond anything I could ever
386
+ [1976.10 --> 1983.54] imagine. So dashboards, you'd have a row per service, and then you just do error rate and
387
+ [1983.54 --> 1988.26] request rate and latency. And this will help you at a very quick glance. When you get used to kind of
388
+ [1988.90 --> 1992.66] looking at dashboards in this format, and every service kind of looks the same, is in the same
389
+ [1992.66 --> 1999.22] format, that consistency really helps reduce that cognitive load. You get to kind of pinpoint very
390
+ [1999.22 --> 2003.14] quickly where that latency is being introduced. It's a very simple technique. It's not universally
391
+ [2003.14 --> 2007.78] applicable, but it does help you know, well, this is coming in my application, or this is coming in
392
+ [2007.78 --> 2012.42] my load balancer, or this is coming in my database. Is there a screenshot of such a dashboard that we
393
+ [2012.42 --> 2016.58] can reference in the show notes? That would really, really help. I can just load up one of our internal
394
+ [2016.58 --> 2022.34] dashboards and send it over. Yes, please. That would be great. The other thing is you mentioned mixins.
395
+ [2022.34 --> 2028.26] Mixins in what context? I've terribly overloaded a term there because I just thought it was a cool term.
396
+ [2028.26 --> 2034.82] Like I realize in CSS and in Python, mixins has a particular meaning. It bears no resemblance to
397
+ [2034.82 --> 2040.98] the kind of language level primitive, right? It is just a cool name that we used for packaging up.
398
+ [2040.98 --> 2045.38] We called them monitoring mixins because we use the language called Jsonnet - well, we use a language
399
+ [2045.38 --> 2052.58] called Jsonnet to express a lot of our alerts and dashboards. And Jsonnet is very much about adding together
400
+ [2052.58 --> 2059.14] big structures of data. And it kind of looks a bit like a mixin in that respect. But that being said,
401
+ [2059.14 --> 2065.30] most of the way people use mixins nowadays doesn't use that technique. We just use it as a packaging
402
+ [2065.30 --> 2066.26] format. Okay.
403
+ [2066.26 --> 2071.94] So it's just a name. There's a GitHub repo and a small website. And the nice thing about the tooling
404
+ [2072.50 --> 2078.66] that's been developed and the packaging format is very much we encourage people who publish exporters
405
+ [2078.66 --> 2083.38] or people who build applications that are instrumented with Prometheus metrics to also
406
+ [2083.38 --> 2088.82] distribute a mixin. So Prometheus has a mixin. Etcd has a mixin. The Kubernetes mixins, part of the
407
+ [2088.82 --> 2094.58] Kubernetes project, right? Cortex has a mixin. We just, they live alongside the code. They're version
408
+ [2094.58 --> 2098.74] controlled and maintained in the same way as the code. And suddenly, you know how people talk about
409
+ [2098.74 --> 2102.98] kind of test-driven development. Well, you almost have observability-driven development.
410
+ [2102.98 --> 2109.62] That's interesting. So I know I've heard of mixins in the context of Jsonnet. And I tried them when I was
411
+ [2109.62 --> 2116.50] using the kube-prometheus stack. The one that - I think it was Frederick. Yes, it was Frederick. While he was
412
+ [2116.50 --> 2121.62] still at Red Hat, I know that he's not there anymore. But when he was there, he was pushing for this kube-prometheus
413
+ [2121.62 --> 2128.34] operator. And in the context of the operator, we could get like the whole stack. Working with that,
414
+ [2128.34 --> 2132.74] we used that for changelog - it was really hard because we had like the Jsonnet. It was like,
415
+ [2132.74 --> 2137.38] it was a specific version of Jsonnet. It was just, there was a Go one. And there was,
416
+ [2137.38 --> 2142.90] I think a Python one or a JavaScript one. I can't remember. But I know the Go one was much faster
417
+ [2142.90 --> 2147.06] to regenerate all the JSON that you needed, all the YAML that you needed, like took a long,
418
+ [2147.06 --> 2151.86] long time basically to get it into Kubernetes. So the mixins that you're talking about,
419
+ [2151.86 --> 2156.18] how would you use them? Let's imagine that you're running on Kubernetes. How would you use those mixins?
420
+ [2156.18 --> 2160.42] This is a really interesting point because the mixins are advanced mode. It's like hard mode,
421
+ [2160.42 --> 2164.26] right? Like the mixins are solving a problem that software developers have. It's like,
422
+ [2164.26 --> 2169.86] how do I package and redistribute and version control and keep up to date? Like, it's not really
423
+ [2169.86 --> 2175.06] an end user format. Like I wouldn't expect that to happen, right? So just to address some of the
424
+ [2175.06 --> 2179.62] initial challenges, it was a, there's a C version and a Go version of Jsonnet. And they weren't quite
425
+ [2179.62 --> 2184.74] the same. The Go version didn't have formatting, for instance. The Go version has caught up and is now what most
426
+ [2184.74 --> 2188.98] people use. That's kind of, we've solved that problem. We've also developed a lot more tooling,
427
+ [2188.98 --> 2193.06] right? So there's mixtool and there's Grizzly and there's Tanka, and there's a whole kind of
428
+ [2193.06 --> 2199.94] ecosystem - jsonnet-bundler - of tools to use to manage these. And the way it works particularly well is if
429
+ [2199.94 --> 2206.18] you're in an organization with kind of sophisticated config management, you know, we have a single repo
430
+ [2206.18 --> 2211.94] that has all of the config that describes pretty much our entire deployment of Grafana Cloud across 20
431
+ [2211.94 --> 2215.46] something Kubernetes clusters. Is it public please? Can you add me to it?
432
+ [2215.46 --> 2220.74] No, unfortunately not. But there's lots of examples we use from it. But yeah, we've got this one
433
+ [2220.74 --> 2226.10] deployment, this one repo, and it's that mono repo approach to config management at least where
434
+ [2226.10 --> 2230.98] mixins really fit nicely because you can use jsonnet-bundler to package-manage them. And then the really
435
+ [2230.98 --> 2235.54] cool thing comes in, you probably kind of got 90% of the way there, but then didn't have the last 10%.
436
+ [2235.54 --> 2243.62] We use Jsonnet to also manage all of our Kubernetes objects. So all our pods, stateful sets, config maps,
437
+ [2243.62 --> 2248.18] services, you name it, it's all defined in the same language, in a single language for dashboards,
438
+ [2248.18 --> 2255.62] for alerts, for any files, for config maps, for anything. It makes it really easy for us to deliver
439
+ [2255.62 --> 2262.74] dashboards and alerts encoded as JSON, encoded as YAML inside a config map in the same language that's
440
+ [2262.74 --> 2269.54] then uploaded with a single tool. And the whole process of updating an application and updating its
441
+ [2269.54 --> 2275.38] config and updating its monitoring is a single PR, a single push and a single apply, which is all CD now.
442
+ [2275.38 --> 2280.74] That's where the vision was. That's a bit advanced, right? It's a bit much to ask for most people. And also,
443
+ [2280.74 --> 2285.14] it's a bit opinionated, right? You have to have the complete stack end-to-end bought into the whole thing
444
+ [2285.14 --> 2292.26] to really realize that benefit. And let's face it, like other techniques, right? Customize and
445
+ [2292.26 --> 2298.26] queue are gaining more popularity than JSON it ever did. And so I think the time's passed for that vision
446
+ [2298.26 --> 2302.58] and that way that we're running things. And really, you kind of touched on something really important
447
+ [2302.58 --> 2308.74] here. It was too hard to use. So what we've been doing in Grafana Cloud really for the past year or so,
448
+ [2308.74 --> 2315.22] is trying to make a kind of more opinionated, more integrated, easier to use version of all of that.
449
+ [2315.86 --> 2319.54] You sign up to Grafana Cloud, you deploy the agent, right? And so that's the first bit of
450
+ [2319.54 --> 2323.62] simplification. The Grafana agent embeds, it's all open source, right? It embeds
451
+ [2324.26 --> 2328.66] Prometheus remote write code and scraping code. It embeds Loki's Promptail, it embeds the open
452
+ [2328.66 --> 2334.58] telemetry collector. It also embeds some 10 to 20 different exporters, all in a single binary,
453
+ [2334.58 --> 2338.34] all with a single thing to deploy and a single thing to configure. And it scrapes and gathers
454
+ [2338.34 --> 2343.14] metrics and logs and traces and sends them all to your Grafana Cloud instance. And then within that
455
+ [2343.14 --> 2347.54] instance, we've built a service that's almost like an app store, right? You can select the
456
+ [2347.54 --> 2351.14] integration you want to install. I want to monitor some MySQL, I want to monitor some Kubernetes,
457
+ [2351.14 --> 2354.82] I want to monitor Docker. And it will install the dashboards and the alerts and it will keep them
458
+ [2354.82 --> 2358.82] up to date for you. And it will connect them through to the integration in the agent.
459
+ [2358.82 --> 2363.46] And behind the scenes, this is all mixins, right? This is all Jsonnet. This is all automation we've
460
+ [2363.46 --> 2368.58] built to make this whole thing easy to use and integrated and opinionated. It's much harder to
461
+ [2368.58 --> 2374.42] do, you know, to do that easy to use story in open source because the opinions change, right? And the
462
+ [2374.42 --> 2379.78] integrations change. But in Cloud where it's a much more controlled environment, we can deliver that
463
+ [2379.78 --> 2386.74] easy to use experience. This just means for people who maybe have seen me talk or seen someone else
464
+ [2386.74 --> 2391.94] talk about Prometheus and talk about Grafana and talk about how easy it is to use and how powerful it is
465
+ [2391.94 --> 2396.02] and how awesome it is and how much value they've got out of it. But maybe, you know,
466
+ [2396.02 --> 2401.38] don't really have the time to jump into the intricacies of Jsonnet and learn 50 new tools.
467
+ [2401.38 --> 2403.86] We're just trying to make that accessible to that group of people.
468
+ [2403.86 --> 2419.46] This episode is brought to you by our friends at Cockroach Labs, the makers of CockroachDB,
469
+ [2419.94 --> 2425.94] the most highly evolved database on the planet. With CockroachDB, you can scale fast, survive
470
+ [2425.94 --> 2432.50] anything and thrive everywhere. It's open source, Postgres wire compatible and Kubernetes friendly,
471
+ [2432.50 --> 2436.74] which means you can launch and run it anywhere. For those who need more, you can build and scale
472
+ [2436.74 --> 2441.86] fast with Cockroach Cloud, which is CockroachDB hosted as a service. It's the simplest way to
473
+ [2441.86 --> 2447.38] deploy CockroachDB and is available instantly on AWS and Google Cloud. With Cockroach Cloud,
474
+ [2447.38 --> 2453.14] a team of world-class SREs maintains and manages your database infrastructure so you can focus less
475
+ [2453.14 --> 2458.02] on ops and more on code. Get started for free with a 30-day free trial or try their new forever
476
+ [2458.02 --> 2464.26] free tier that's super generous. Head to CockroachLabs.com to learn more. Again, CockroachLabs.com
477
+ [2464.26 --> 2465.78] slash changelog.
478
+ [2474.74 --> 2481.06] As I was saying, we use jsonnet-bundler, JB. I remember the Prometheus Operator and the
479
+ [2481.06 --> 2487.54] kube-prometheus stack, which was generated out of that. So we did away with all of that. We used to,
480
+ [2487.54 --> 2494.42] obviously, set up our own Grafana, set up Loki, set up Prometheus. Now all we have is a Grafana
481
+ [2494.42 --> 2500.34] agent, which is really nice. By the way, do you know that docs recommend two Grafana agents? One
482
+ [2500.34 --> 2506.02] to scrape the logs, one to get the metrics. So I figured out how to get a single one, and that was
483
+ [2506.02 --> 2513.78] okay because one can do both. But the thing which I still struggle with is how to get the dashboards
484
+ [2513.78 --> 2518.10] working nicely together. I think that's the most important thing. We have PromEx. That's the
485
+ [2518.10 --> 2523.86] library that we use in Elixir and Phoenix to get the metrics out. And it's actually on the Grafana
486
+ [2523.86 --> 2530.26] blog as well. So it was featured. Alex Koutmos is working closely with the Grafana team. He's also
487
+ [2530.26 --> 2535.38] a friend of Changelog's. Very close, a very close friend. We worked together. We even did a couple of
488
+ [2535.38 --> 2541.70] episodes together, even a YouTube stream on how we upgraded to Erlang 24, and we were using Grafana
489
+ [2541.70 --> 2544.50] cloud to see the impact of that for changelog.com. Nice.
490
+ [2544.50 --> 2549.46] It was a Friday evening deploy. Prometheus was there. It was a great one. We had great fun. It was a few
491
+ [2549.46 --> 2557.54] weeks back. So in that world, the dashboards, I still feel they are the strongest
492
+ [2557.54 --> 2563.54] and the best thing that you have, but also the most difficult one to integrate. Because the Grafana
493
+ [2563.54 --> 2568.26] agent doesn't really handle dashboards, right? It just like gets the logs and the metrics out.
494
+ [2568.26 --> 2574.18] So we're using PromEx, but it's really clunky because you're building your dashboards in Grafana
495
+ [2574.18 --> 2580.58] cloud. A lot of the time they don't work because the metrics don't show up, for reasons. And then you
496
+ [2580.58 --> 2586.02] adjust them. Then you have to export them. Then you have to version control them. And then PromEx
497
+ [2586.02 --> 2591.14] has to be configured to upload them to Grafana cloud. So it's just a bit clunky. So I'm wondering,
498
+ [2591.14 --> 2593.86] how could that be done better? Do you have some ideas?
499
+ [2593.86 --> 2596.90] There are some kind of guidelines for
500
+ [2596.90 --> 2601.54] building dashboards in my opinion. First thing, you should always template out the data source,
501
+ [2602.18 --> 2606.58] right? Different Grafana installations will name their data sources, different things. And so a
502
+ [2606.58 --> 2611.30] dashboard imported from one might not necessarily work in another. So I always make sure my data
503
+ [2611.30 --> 2616.58] sources are templated out. Second thing, I always tend to template out the job and the instance labels,
504
+ [2616.58 --> 2620.82] maybe with wildcard selectors. And again, same reason. This means the dashboard can effectively
505
+ [2621.38 --> 2627.86] dynamically discover what jobs you've got with certain metrics. This actually fits a pattern
506
+ [2627.86 --> 2632.98] in Prometheus really nicely where we have this Go build info if you're in Go and Java build info
507
+ [2632.98 --> 2637.70] if you're in Java and so on, where every job exports a metric that tells you the version it was built
508
+ [2637.70 --> 2645.46] with and so on. We call these info level metrics. I tend to add an info metric to every piece of software
509
+ [2645.46 --> 2650.66] I write, right. You know, maybe it's Cortex info, right? And then I'll tell the template selector
510
+ [2650.66 --> 2656.18] for any Cortex dashboard to just look for all the unique jobs and instances that export a Cortex build info.
511
+ [2656.18 --> 2656.74] Mm-hmm.
512
+ [2656.74 --> 2661.86] Right. And this again, this kind of turns a static dashboard that might have encoded to use a
513
+ [2661.86 --> 2665.94] particular set of labels into a very dynamic dashboard, which allows you to select the job
514
+ [2665.94 --> 2670.26] you want to look at and also means that the chances are when you load it, as long as there's some job
515
+ [2670.26 --> 2674.50] exporting some relevant metrics, it will work. So first things first, template your dashboards.
516
+ [2674.50 --> 2675.14] Right.
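To make the info-metric pattern described above concrete, here is a minimal sketch using the Python prometheus_client library; the metric name, labels, and port are made-up examples, not anything from an actual Grafana mixin.

```python
# Minimal sketch of an "info-level" metric with the Python prometheus_client library.
# The metric name, labels and port below are made-up examples.
import time

from prometheus_client import Info, start_http_server

# Info appends the "_info" suffix, so this is exported as myapp_build_info{...} 1
build_info = Info("myapp_build", "Build information for myapp")
build_info.info({"version": "1.2.3", "revision": "abc1234", "branch": "main"})

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for Prometheus or the Grafana agent to scrape
    while True:
        time.sleep(60)
```

A dashboard template variable can then discover every job exporting this metric, for example with a Grafana Prometheus template query along the lines of `label_values(myapp_build_info, job)`, which is what turns a static dashboard into a dynamic one.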
517
+ [2675.14 --> 2680.18] Right. Second thing, I'm a big fan of dashboards as code, right? So I actually don't tend to build
518
+ [2680.18 --> 2685.78] my dashboards in Grafana. I tend to build them in my text editor. And I tend to use Jsonnet,
519
+ [2685.78 --> 2689.70] unfortunately. I tend to use a library called Grafonnet, or there's another one called Grafana
520
+ [2689.70 --> 2693.46] Builder. And if you don't like Jsonnet, there's a good library called grafanalib that helps you
521
+ [2693.46 --> 2698.74] build them in Python. And yeah, I tend to build them there. I tend to version control them from the get-go.
522
+ [2698.74 --> 2704.02] And really I tend to use a much more kind of GitOps style approach. There's a couple of tools you can use to do
523
+ [2704.02 --> 2708.58] this, but the one I've been using more recently is called Grizzly by Malcolm Holmes and it's on the
524
+ [2708.58 --> 2713.38] Grafana GitHub. And you can install that and you can point to a Jsonnet definition of a dashboard
525
+ [2713.38 --> 2719.06] and it will upload it to Grafana. And generally, you know, I do a kind of dev deploy cycle on my
526
+ [2719.06 --> 2723.06] laptop as I'm developing these dashboards, uploading to Grafana, refreshing, seeing the change.
527
+ [2724.02 --> 2728.90] That way, kind of the definition of the dashboard is already in Git, right? And because I'm version
528
+ [2728.90 --> 2734.90] controlling source code and not a big blob of JSON, the code is much more reviewable and I can create
529
+ [2734.90 --> 2738.66] PRs and have someone else review those PRs and it's meaningful to do that.
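As an illustration of the dashboards-as-code workflow described here, this is a minimal, hypothetical sketch using the Python grafanalib library; the panel title, metric names, and the `$datasource` and `$job` variables are assumptions rather than real Grafana Labs dashboards. The Jsonnet equivalent would use Grafonnet or grafana-builder, and the generated JSON can be pushed with a tool like Grizzly.

```python
# changelog.dashboard.py -- a minimal dashboards-as-code sketch using grafanalib.
# Titles and metric names are invented for illustration; render the JSON with
# grafanalib's generate-dashboard CLI and push it with a tool such as Grizzly.
from grafanalib.core import Dashboard, Graph, Row, Target

dashboard = Dashboard(
    title="Changelog app overview (example)",
    rows=[
        Row(panels=[
            Graph(
                title="HTTP request rate",
                dataSource="$datasource",  # templated data source, per the advice above
                targets=[
                    Target(
                        expr='sum by (status) (rate(http_requests_total{job=~"$job"}[5m]))',
                        legendFormat="{{status}}",
                    ),
                ],
            ),
        ]),
    ],
).auto_panel_ids()
```

Because the source of truth is this small Python file rather than a large exported JSON blob, diffs stay small and pull request review stays meaningful, which is the point being made here.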
530
+ [2738.66 --> 2743.70] That sounds exactly like what I would want. I mean, you've described my ideal approach,
531
+ [2744.50 --> 2750.10] but first of all, I didn't know about those tools. Second of all, I'm not aware of any article,
532
+ [2750.74 --> 2755.62] any video, anything like this that runs you through how to do this.
533
+ [2755.62 --> 2760.90] Yeah. So what I would want to do is to go through that and capture it.
534
+ [2760.90 --> 2767.70] I think the reason we don't promote it too widely is because the 80% use case for Grafana is editing
535
+ [2767.70 --> 2773.14] dashboards in Grafana, right? And that's the easy to access, easy to use. It's very visual. It's very
536
+ [2773.14 --> 2780.10] kind of rewarding to do that, right? The 20% use case that I've just described is the serious SRE
537
+ [2780.10 --> 2786.34] DevOps approach. And I think we've tried a bunch of different ways of doing it. We've settled on this
538
+ [2786.34 --> 2791.94] way, but I don't think anyone is satisfied. I don't think we think this is as easy as it can be.
539
+ [2791.94 --> 2798.34] I don't think anyone thinks that this is the final form. And so I'm not sure that anyone's kind of too
540
+ [2798.34 --> 2803.54] eager to promote this as the advanced way of doing it. I referenced the hackathon earlier that we were
541
+ [2803.54 --> 2808.26] doing internally. And I know that we've got some cool stuff coming out that maybe will be the final
542
+ [2808.26 --> 2813.78] form of this. I know that I'm very excited about trying it out. This is a dream and you can say,
543
+ [2813.78 --> 2819.62] no, right? Or like not a dream, but like a crazy plan. What would it look like if we paired for an
544
+ [2819.62 --> 2825.94] hour? I've been doing it for close to a decade. So I think I'm pretty good, or so others say, to have
545
+ [2825.94 --> 2830.42] a go at this. Maybe half an hour will be enough just like to get a hang of things. So, okay.
546
+ [2830.42 --> 2832.66] I'm thinking YouTube stream. I'm thinking...
547
+ [2832.66 --> 2833.22] Yeah, let's do it.
548
+ [2833.22 --> 2834.18] Wow. Okay.
549
+ [2834.18 --> 2837.54] Can we use VS Code Live Share? Because I've always wanted to use that
550
+ [2837.54 --> 2839.06] and I haven't had an opportunity to.
551
+ [2839.06 --> 2843.14] Anything you want. You're the driver. You're just showing me how it's done. And then maybe
552
+ [2843.14 --> 2848.10] we can switch over and I can have a go to see if I understood it correctly in the context of
553
+ [2848.10 --> 2852.90] changelog.com because we are already using Grafana Cloud. The integration is there. We're already using
554
+ [2852.90 --> 2858.42] the Grafana agent. And who knows? Maybe there will be some interesting things to share, but the focus is on
555
+ [2858.42 --> 2864.90] getting this nailed down because it sounds amazing. Why aren't more people doing this? And I don't think
556
+ [2864.90 --> 2870.18] many know about it. Whatever comes after it, I think it's an important step to capture and to
557
+ [2870.18 --> 2875.86] share widely because I don't think people know. I've never heard this before. Jsonnet, JB,
558
+ [2875.86 --> 2879.54] but I was doing it wrong and I didn't even know until today. So thank you, Tom.
559
+ [2879.54 --> 2883.62] Oh yeah. I wouldn't say you're doing it wrong, but it was, yeah, you didn't see the full,
560
+ [2883.62 --> 2886.34] didn't get an opportunity to use the full process.
561
+ [2886.34 --> 2889.30] To do it right. I didn't have the opportunity to do it right. Okay.
562
+ [2889.30 --> 2893.46] I mean, that's one of the big challenges of this approach, right? There's a lot to
563
+ [2893.46 --> 2897.46] learn. There's a lot to consume and you don't really see the benefits until you do it all,
564
+ [2897.46 --> 2902.42] which, from a developer experience perspective, is awful, right? Like there's no kind
565
+ [2902.42 --> 2905.06] of incremental reward that goes with it, which is what we're missing.
566
+ [2905.06 --> 2910.34] We talked about metrics quite a bit, which talked about logs, but we haven't talked about traces.
567
+ [2910.34 --> 2910.90] Yeah.
568
+ [2910.90 --> 2914.26] I think it's a very important element. We ourselves are not using traces.
569
+ [2914.26 --> 2921.62] And I can see the traces being instrumental, critical, essential to understanding why our
570
+ [2921.62 --> 2926.82] requests are slow. If you have a trace, you can understand where the time is being spent
571
+ [2927.62 --> 2931.22] and the slow requests, you can see, well, actually, you know what? It was kube-proxy.
572
+ [2931.22 --> 2935.94] Because I suspect based on the metrics that we have, which by the way, we have quite a few and
573
+ [2935.94 --> 2941.14] everything's going to Grafana Cloud, all the logs, everything. Based on what I see, like what we have,
574
+ [2941.14 --> 2950.90] all things point to kube-proxy. So how would we use traces to understand that? First of all,
575
+ [2950.90 --> 2955.30] how does it work? This is Tempo. I know that's the component. That's the, would you call it a
576
+ [2955.30 --> 2959.86] component? What, what would you call it? I tend to call it either a project or a service,
577
+ [2959.86 --> 2964.66] like depending on the context. Okay. So like the, the Tempo service, how would we use it
578
+ [2964.66 --> 2969.54] for traces, and how would it integrate with or solve the problem that I just described?
579
+ [2969.54 --> 2973.30] So this is a really interesting one, right? Because in the metrics world,
580
+ [2973.30 --> 2978.18] we develop exporters, right? Which gather numeric data from other systems and expose them as metrics.
581
+ [2978.18 --> 2982.66] The barrier to entry for metrics is kind of medium, you know, maybe it's kind of three feet tall.
582
+ [2982.66 --> 2987.14] You know, for logs, everything has logs, right? It's so easy to get logs from everything.
583
+ [2987.14 --> 2991.86] So the barrier to entry for logs is kind of nowhere, like it's on the floor. The barrier to entry for
584
+ [2991.86 --> 2996.34] traces is super high. You need to have systems that are instrumented. You need to correctly
585
+ [2996.34 --> 3002.74] propagate the context, the trace ID, and you need to have a way of kind of distributing this
586
+ [3002.74 --> 3007.86] telemetry data, right? So this is the challenge in the tracing space right now. And this is why I
587
+ [3007.86 --> 3011.62] think it's always the, you know, to your point, right, you haven't adopted tracing yet. It's always
588
+ [3011.62 --> 3016.90] the third thing people adopt. The investment is high. The good news is there's a huge reward for that
589
+ [3016.90 --> 3021.70] investment. And particularly whenever you're looking at any kind of performance challenges,
590
+ [3021.70 --> 3025.62] tracing is invaluable. We've been doing a lot of distributed tracing for a long time in Grafana
591
+ [3025.62 --> 3029.86] Labs. We started with Jaeger and eventually did our own thing with Tempo. And it's been
592
+ [3029.86 --> 3035.38] instrumental in kind of accelerating the query performance of every component. So that's the
593
+ [3035.38 --> 3041.46] TLDR. How do you do it? So there's some good news here. One of them is OpenTelemetry, a very kind of
594
+ [3041.46 --> 3048.10] cross-functional project from many different contributors and vendors that is designed really to make the whole
595
+ [3048.10 --> 3054.26] telemetry journey better and easier and simpler. And the most well-developed bit of open telemetry
596
+ [3054.26 --> 3059.78] and the bit that is most widely adopted is their tracing stack, right? So we've put the open telemetry
597
+ [3059.78 --> 3064.58] collector into the Grafana agent. So you can deploy that and then you've got something you can just fire
598
+ [3064.58 --> 3070.74] traces at in your local environment. You'll set up the Grafana Cloud agent, the Grafana agent to forward
599
+ [3070.74 --> 3075.46] those traces up to Grafana Cloud to Tempo and then Tempo deals with the storage of them, right? And that's
600
+ [3075.46 --> 3080.02] really the component of this. All that leaves is for you to deal with the instrumentation.
601
+ [3080.02 --> 3085.46] Now, the good news is with a lot of high-level languages, a lot of dynamic languages, you can
602
+ [3085.46 --> 3090.90] use auto-instrumentation. So this is part of open telemetry's client libraries that come along. And
603
+ [3090.90 --> 3096.66] for instance, with most Java web frameworks, with most Python frameworks, it's like one line of code,
604
+ [3096.66 --> 3102.34] or maybe it's even no code changes and you can get reasonable traces out of the system. I don't
605
+ [3102.34 --> 3106.58] think a system like that exists for Go. So it's a bit more work with Go, but it's still not that
606
+ [3106.58 --> 3110.10] challenging. I unfortunately don't know enough about the Erlang VM, but I'm going to expect there's
607
+ [3110.10 --> 3115.78] probably a pretty easy way of getting traces. It exists. So like the open telemetry integration
608
+ [3115.78 --> 3121.62] exists in Erlang. It's not that mature, but it's improving. Like every month is getting better.
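For a sense of what the instrumentation side looks like, here is a minimal OpenTelemetry setup sketch in Python (the app discussed here is Elixir; Python is used only for illustration). It assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages, and an OTLP endpoint, such as a Grafana agent or OpenTelemetry collector, listening on the default gRPC port; the service name and span name are made up.

```python
# Minimal OpenTelemetry tracing setup in Python, pointing at a locally running
# collector/agent. Endpoint, service name and span name are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "changelog-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("GET /podcast"):
    ...  # handle the request; child spans created here end up in the same trace
```

For many Python and Java frameworks, separate auto-instrumentation packages (for example the `opentelemetry-instrument` wrapper) can produce similar spans with little or no code change, which is the "one line of code" experience mentioned above.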
609
+ [3122.18 --> 3128.66] And I think it's more around the queries that go all the way to PostgreSQL. So how does the request
610
+ [3128.66 --> 3134.02] map to that? I mean, I know that the database has some impact on that, but right now, the most
611
+ [3134.02 --> 3142.02] important one is between the app pod, the app instance, and the PostgreSQL pod, which they all
612
+ [3142.66 --> 3147.62] exist in the same place. Now, maybe if PostgreSQL is like a managed service, we wouldn't have this
613
+ [3147.62 --> 3153.14] problem. Maybe. But regardless of what the case would be, you'd want to know what is the problem.
614
+ [3153.14 --> 3158.18] And if I change this, does it actually improve it? And by how much? If you have the trace,
615
+ [3158.18 --> 3163.94] it's really easy to understand, well, I should, you know what, not kube-proxy, I should focus maybe
616
+ [3163.94 --> 3168.98] on the load balancer. But I don't know where that request is stuck or like, you know, in that request,
617
+ [3168.98 --> 3172.74] which is the longest portion. So where should I invest my time first?
618
+ [3173.38 --> 3176.82] You've hit on the problem or one of the many problems with distributed tracing. Like
619
+ [3177.86 --> 3182.58] you have to have the entire stack instrumented to really get a lot of value, right? And if you have
620
+ [3182.58 --> 3187.78] holes in the middle or blind spots from a kind of tracing perspective, the values greatly
621
+ [3187.78 --> 3188.34] diminish. Yeah.
622
+ [3188.34 --> 3188.50] Yeah.
623
+ [3188.50 --> 3194.42] You can get tracing information out of load balancers, right? And I've never actually done
624
+ [3194.42 --> 3198.58] it myself though, right? I've always kind of stopped there. I'm hoping that things like open
625
+ [3198.58 --> 3203.62] telemetry, and I know Amazon are heavily investing in open telemetry. So I'm hoping that it will be
626
+ [3203.62 --> 3209.46] possible if it isn't already to get open telemetry spans out of my ELBs, right? I think, you know,
627
+ [3209.46 --> 3214.42] my ALBs and so on. I think that's going to be really important. I'm hoping that things like the W3C
628
+ [3214.42 --> 3221.54] trace context makes this easier. And maybe this even allows things like the CDN Fastly to also
629
+ [3221.54 --> 3227.30] emit a span. That would be kind of cool being able to see a CDN and an ALB and your application.
630
+ [3227.30 --> 3232.90] When it comes to Postgres and MySQL, I don't know. I'd love to see spans coming out of those systems,
631
+ [3232.90 --> 3237.54] but I don't really know the status. I'm not really an expert on this side of things. A common
632
+ [3237.54 --> 3242.58] misconception is that kind of every service emits one and only one span, right? It doesn't have to.
633
+ [3242.58 --> 3246.42] You can emit as many spans as you like. You probably shouldn't emit too many, but you can
634
+ [3246.42 --> 3250.90] do whatever you like. So one of the things we do a lot of is kind of client-side spans.
635
+ [3250.90 --> 3256.18] You know, whenever we do a request to a database in Cortex, in pretty much any of the systems I've
636
+ [3256.18 --> 3262.42] worked on, they'll emit a client-side span. And this effectively gives you some insight into the
637
+ [3262.42 --> 3267.54] latency that external systems are contributing. But it doesn't have to even just be two spans,
638
+ [3267.54 --> 3271.54] right? A server span and a client span. You know, you can put spans in between. You know,
639
+ [3271.54 --> 3276.98] so we will have spans around cache lookups. We will have spans around various kind of
640
+ [3277.54 --> 3282.74] areas inside a single service that parallelize, right? And we'll emit multiple spans. And it
641
+ [3282.74 --> 3286.82] really helps you understand the flow of the request. Don't go crazy with it, but in general,
642
+ [3286.82 --> 3292.34] it's possible. In your situation, because it's a monolith, I would instrument the Elixir server and
643
+ [3292.34 --> 3298.66] client going out to Postgres. And that would probably give you enough information to know if it's Postgres,
644
+ [3298.66 --> 3305.38] to know if it's kube-proxy or the ELB. You want to get a span from something further up the chain,
645
+ [3305.38 --> 3306.74] and then start to look at the differences.
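A hypothetical sketch of that server-span-plus-client-span shape, again in Python purely for illustration; the span names, attribute, query, and the `conn` parameter are placeholders, not the actual application code.

```python
# Sketch of the pattern described above: one SERVER span for handling the request,
# and a nested CLIENT span around the database call, so the trace shows how much of
# the request time is spent waiting on Postgres.
from opentelemetry import trace
from opentelemetry.trace import SpanKind

tracer = trace.get_tracer(__name__)

def handle_request(conn):
    with tracer.start_as_current_span("GET /podcast", kind=SpanKind.SERVER):
        with tracer.start_as_current_span("postgres.query", kind=SpanKind.CLIENT) as span:
            span.set_attribute("db.system", "postgresql")
            rows = conn.execute("SELECT title FROM episodes LIMIT 10")  # illustrative query
        return rows  # rendering/serialization would happen here in a real app
```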
646
+ [3306.74 --> 3311.30] Ingress Nginx. Do Ingress Nginx and Nginx support spans? Do you know?
647
+ [3311.30 --> 3315.86] I don't know off the top of my head. Like, one of the things I've definitely seen engineers go down
648
+ [3315.86 --> 3321.30] this rat hole of trying to get complete traces and spans from everywhere. And there's just a kind of,
649
+ [3321.30 --> 3326.58] there's a, you know, effort reward trade-off to be made. Like, it might take a lot of effort to get
650
+ [3326.58 --> 3331.30] a complete span from every single service. You know, if you're on a mobile app, like doing a
651
+ [3331.30 --> 3335.06] client-side span might tell you everything you need to know, just, you know, emitting it from your
652
+ [3335.06 --> 3336.02] mobile app.
653
+ [3336.02 --> 3340.74] I understand what you're saying. I think on the client side, that is less of an issue because
654
+ [3340.74 --> 3346.18] the span, which is the longest one, happens server-side, where it's like waiting or processing,
655
+ [3346.18 --> 3352.10] whatever the name may be. And that tends to sometimes be really long. So what happens inside
656
+ [3352.10 --> 3359.22] of that span? So we know that it goes to, let's say, Fastly. Great. We can remove that. We can go
657
+ [3359.22 --> 3364.82] directly to the load balancer. Okay. I don't think there's much we can do about the load balancer. So
658
+ [3364.82 --> 3372.18] let's say we ignore that. So our span really starts at possibly the Ingress Nginx. So that's the first
659
+ [3372.18 --> 3377.30] starting point. Excellent. What happens inside Ingress Nginx maybe would be interesting. I mean,
660
+ [3377.30 --> 3382.98] this is Nginx specifically, maybe it would be interesting. But the next hop will be into,
661
+ [3382.98 --> 3389.62] as far as I know, this will be the entry points into Kubernetes. So that will be the service that's
662
+ [3389.62 --> 3394.98] responsible for routing the traffic. I mean, that's actually even before the Ingress Nginx, right?
663
+ [3394.98 --> 3400.98] It's a service. It hits the Nginx pod. And from the Nginx pod, it will need to talk to the other
664
+ [3400.98 --> 3409.94] service, which is the application service. So having these first two, three steps in the span
665
+ [3409.94 --> 3414.66] would be already helpful. But realistically, I think we can only start from the Kubernetes side.
666
+ [3415.14 --> 3423.86] And that's okay. So from Nginx, the next hop would be really the application. So how does that span vary?
667
+ [3423.86 --> 3427.54] And regardless of what happens inside, it doesn't matter. How does that duration change?
668
+ [3427.54 --> 3434.50] From the application, again, it has to hit the database. And if we know the timings that it takes,
669
+ [3434.50 --> 3439.46] that would be enough. So we have literally the three, four hops that we're really interested in.
670
+ [3439.46 --> 3445.54] And then there's kube-proxy. So where does that happen? And how long does that span take?
671
+ [3446.18 --> 3451.78] So it's just like, okay, together, maybe seven steps. And which is the step which is more variable?
672
+ [3451.78 --> 3455.62] That's the way I think about it. Is that right? Does this sound right to you? With distributed tracing,
673
+ [3455.62 --> 3459.94] you've always got to kind of see. The great thing about it is like being able to visualize the actual
674
+ [3459.94 --> 3464.82] flow of the request. So yes, like, I'm agreeing with you. One of the things I will say is,
675
+ [3465.78 --> 3470.74] it's probably not kube-proxy. My understanding in most deployments is that it's not a layer seven thing,
676
+ [3470.74 --> 3475.70] right? It's done at the TCP level, where it doesn't intercept any traffic, right? So it's not worth putting
677
+ [3475.70 --> 3480.58] a, well, it's not even technically possible, I guess, to do a request level span there because
678
+ [3480.58 --> 3482.34] it's very connection oriented. Right.
679
+ [3482.34 --> 3486.10] You know, one of the promises of OpenTelemetry, right, because it's so vendor neutral and because
680
+ [3486.10 --> 3492.82] it's so open as a standard is that we might even be able to get spans into more established open
681
+ [3492.82 --> 3498.10] source projects who don't want to pick favorites. So maybe one day we will be able to get spans into
682
+ [3498.10 --> 3502.82] Postgres and into MySQL. Maybe it really exists. I'll admit to not knowing off the top of my head.
683
+ [3502.82 --> 3508.50] Neither do I, but that's really fascinating. So this is what I'm thinking. First step is,
684
+ [3509.22 --> 3516.02] let's pair up on what it looks like to do Grafana dashboards, Tom style. I'll call it Tom style. I
685
+ [3516.02 --> 3522.02] know it isn't, but Grizzly style or whatever. The point being is the way you developed them. Big fan
686
+ [3522.02 --> 3527.62] of GitHub, big fan of version controlling it. We're not using Argo CD yet, but I would love to put that
687
+ [3527.62 --> 3532.58] in the mix. How does that play with the tools that you use? How does it integrate with Grafana Cloud?
688
+ [3532.58 --> 3537.30] How can we control those dashboards in a way that is nicer than what we have today?
689
+ [3538.02 --> 3543.94] And then this specific problem, once we have that iteration set up really nicely and those feedback
690
+ [3543.94 --> 3548.02] loops set up really nicely so we can experiment, which goes back to what you were saying, being
691
+ [3548.02 --> 3553.30] able to ask interesting questions, being able to figure things out, right? Like explore, which
692
+ [3553.94 --> 3558.58] I'm a big fan of, right? Like figure out, like we don't know what the problem is, so let's figure out.
693
+ [3558.58 --> 3563.70] So how can we very quickly iterate on solving that specific or like finding that answer?
694
+ [3564.34 --> 3571.38] And then I think those spans, tempo and integrating with that, super valuable, long, long term.
695
+ [3571.38 --> 3576.34] I'm expecting things to change along the way as the ecosystem matures, more libraries are getting
696
+ [3576.34 --> 3582.58] instrumented, open telemetry becomes more mature. I think that's a great vision and a great
697
+ [3582.58 --> 3586.66] direction towards where the industry is going. I'm very excited about that.
698
+ [3587.54 --> 3593.70] As a listener, if I had to remember one thing from this conversation, what should that be, do you think?
699
+ [3593.70 --> 3599.70] I go all the way back to the early comments about observability and about the big tent philosophy
700
+ [3599.70 --> 3605.62] and about them not being one size fits all tooling. I know as a vendor here, like, you know, I have a
701
+ [3605.62 --> 3610.34] preference for Prometheus and Loki and tempo, but honestly, like that's just a preference. That's just an
702
+ [3610.34 --> 3617.54] opinion. Like an equally valid opinion is to use graphite and Jaeger and elastic, right? And they're very
703
+ [3617.54 --> 3623.14] powerful systems. And it's our kind of mission at Grafana Labs to allow you to have the same
704
+ [3623.14 --> 3629.22] experience, the same level of integration and ease of use, no matter what your choice of tooling is.
705
+ [3629.22 --> 3635.54] I love that. So if we were to pick one title for this discussion, what do you think that should be?
706
+ [3635.54 --> 3638.50] Observability and big tent, yeah. Big tent philosophy.
707
+ [3638.50 --> 3643.06] Big tent philosophy. I like that. I like that big tent philosophy.
708
+ [3643.06 --> 3646.50] I'm not sure where the term comes from, to be brutally honest. I should probably Google it.
709
+ [3646.50 --> 3650.82] It's like, you know, I know how a lot of companies have internal mantras, right? You know,
710
+ [3650.82 --> 3656.26] Google's mission was to organize the world's information, right? We are, you know, the internal
711
+ [3656.26 --> 3659.94] mantra in Grafana Labs is this big tent philosophy. We apply it everywhere to everything we do.
712
+ [3659.94 --> 3663.14] Who came with the idea of the big tent? Do you know?
713
+ [3663.14 --> 3669.70] I think, I don't know where the term came from, but the idea was very early on in Grafana when
714
+ [3669.70 --> 3675.30] Torkel added support for multiple data sources, right? And very early on, Grafana started life
715
+ [3675.30 --> 3680.66] visualizing graphite data. But very early on, support for other systems was added, right?
716
+ [3680.66 --> 3686.98] And it's really that vision early on to bring together data for multiple systems in Grafana
717
+ [3686.98 --> 3692.82] that seeded this idea. So the big tent, the way I understand it, is bringing all these,
718
+ [3693.62 --> 3697.46] I want to say vendors, data sources. It's more than just data sources, right?
719
+ [3697.46 --> 3701.14] More than just data sources, because it's data from anywhere and combining it in a single place,
720
+ [3701.14 --> 3706.42] but building experiences that span multiple systems, integrating them in ways that didn't
721
+ [3706.42 --> 3711.86] exist before. But it is not just a concept that applies to Grafana and the visualization, right?
722
+ [3711.86 --> 3716.02] We apply it on the backend with supporting different query languages within the same
723
+ [3717.86 --> 3724.82] time series database. You know, we support it in Tempo, being able to send traces formatted for
724
+ [3724.82 --> 3729.38] Jaeger or formatted for Zipkin. You know, and it's kind of intrinsic in a lot of open telemetry as well,
725
+ [3729.38 --> 3734.98] being very vendor neutral to a fault. Tom, I didn't think this was possible,
726
+ [3734.98 --> 3738.02] but it happened. I have more questions at the end than at the beginning.
727
+ [3738.02 --> 3743.46] I'm sorry about that. And I'm more excited to continue talking with you at the end than I was
728
+ [3743.46 --> 3748.90] at the beginning. Again, that's not possible. I'm really looking forward to trying things which
729
+ [3748.90 --> 3752.98] I've just said, and I'm really looking forward to next time. So thank you for today.
730
+ [3752.98 --> 3754.26] Thank you very much, Gerhard.
731
+ [3754.26 --> 3757.86] Thank you.
732
+ [3757.86 --> 3759.86] That's it for this episode of Ship It.
733
+ [3759.86 --> 3765.30] Thank you for tuning in. We have a bunch of podcasts for developers at Changelog that you
734
+ [3765.30 --> 3771.46] should check out. Subscribe to the master feed at changelog.com forward slash master to get
735
+ [3771.46 --> 3779.06] everything we ship. I want to personally invite you to join your fellow changeloggers at changelog.com
736
+ [3779.06 --> 3784.66] forward slash community. It's free to join and stay. Leaving, on the other hand, will cost you some
737
+ [3784.66 --> 3791.22] happiness credits. Come hang with us on Slack. There are no imposters. Everyone is welcome. Huge
738
+ [3791.22 --> 3798.26] thanks again to our partners Fastly, LaunchDarkly and Linode. Also, thanks to Breakmaster Cylinder
739
+ [3798.26 --> 3811.86] for making all our awesome beats. That's it for this week. See you next week.
740
+ [3829.06 --> 3833.94] I'll see you next week.
741
+ [3833.94 --> 3836.36] ...
Grafana’s Big Tent idea_transcript.txt ADDED
@@ -0,0 +1,407 @@
1
+ **Gerhard Lazu:** The last time that we spoke, Tom, was at KubeCon 2019 North America. That was actually my first KubeCon, in San Diego, and it was an amazing one. I loved it. This was actually Changelog \#375, and again, it was one of my favorites. That was almost two years ago. I know that a lot of things have changed. First of all, Grafana was at version 6 back then. Now it's at version 8, which is a massive improvement from version 7, which was a massive improvement from version 6. What other things changed in the last (almost) two years since we spoke?
2
+
3
+ **Tom Wilkie:** Oh wow, yeah. I mean, two years... How do we cover two years in five minutes? I think working backwards, we've launched Tempo, the tracing system from Grafana Labs, which is kind of cool... A slightly different take on distributed tracing, focusing on very efficient storage of the traces itself, and very scalable.
4
+
5
+ We've done Loki 2.0... Our log aggregation system is over two years old now, and with Loki 2.0 came a much more sophisticated query language. That's really cool, because now you can start to use Loki in anger and really kind of extract metrics and really dig into your logs with it. That was a really exciting design process for the language as well, because we always wanted it to be really heavily inspired by Prometheus, but it's logs in the end; it's different to time series.
6
+
7
+ \[04:10\] We actually collaborated with Frederick from the Prometheus team, and he really influenced the design. I remember one of the calls... We came up with one of the things that I think makes LogQL really cool, which is you've got the pipeline operator for filtering logs. So you use pipelines to filter your logs, and we kind of stuck with that for everything in the logs space. And then the minute you start working with metrics, you start using brackets, and it looks like PromQL, like Prometheus query language. And it just means you look at a query and it's really obvious that that part of the query deals with logs, and that part of the query deals with metrics.
8
+
9
+ Working backwards more, exemplars in Prometheus and in Grafana, so you can link from metrics to traces... You know, you put little dots on the graphs, and the dots indicate a trace, and you can click on it, and that whole kind of experience works.
10
+
11
+ And you bring up KubeCon 2019. I think that was the year Frederick and I gave a keynote address on the future of observability. And in that keynote we predicted that linking metrics and logs and traces and correlating and building experiences that combine them would be the future. Now, of course, like a bit tongue-in-cheek, because I have the great opportunity and I'm very lucky to be able to influence what we do at Grafana Labs... We've kind of spent the last two years making that keynote happen, and making it possible to combine those metrics and logs and traces in a single development experience, in a single, on-call, kind of instant response.
12
+
13
+ I can go on... There's so many things that have changed. We've grown hugely at Grafana Labs, where we're now over 400 people. I joined when we were about 25-26 people, 3.5 years ago... So we launched GEM (Grafana Enterprise Metrics), which is our kind of self-managed enterprise version of Cortex, the scalable version of Prometheus, the other CNCF project.
14
+
15
+ Yeah, there's so many... And I'm still only talking about kind of the second half of last year... And I guess, when you ask that question, everyone always responds with "pandemic" as well. I kind of glossed over that, but... We had a global pandemic.
16
+
17
+ **Gerhard Lazu:** Yeah.
18
+
19
+ **Tom Wilkie:** I think what was really interesting - obviously, it has a huge impact, but Grafana Labs was set up from day zero to be remote-first... So I think we've been super-lucky that the impact has been less than it has been on other organizations. I could go in twenty more of those, but I'll stop there.
20
+
21
+ **Gerhard Lazu:** Yeah, I remember The Future of Observability keynote that you gave... That was a really good one, an inspirational one, and I could see it. I could see it as the vision that you shared. And I remember thinking "Wow, if they pull it off, this is going to be amazing." And guess what - you did. And even more so.
22
+
23
+ **Tom Wilkie:** I can't take all the credit. I did the keynote with Frederick.
24
+
25
+ **Gerhard Lazu:** No, I know. When I say "you", I mean Grafana Labs, the whole org that you're part of, the whole team that you're part of. But you were there, you had this vision, you shared it... I'm sure everybody contributed to it, and then everybody made it happen. And I really love that journey, seeing how things have been happening with Loki. I remember when Loki version one came out, and I thought "Wow, this makes so much sense." I was so keen to start using it. And we did. Even for Changelog. We used Grafana for a long time. Prometheus... Then we went to Loki, and that was great. And then we thought "Hm... If only we could delegate this problem to someone else." And guess what - Grafana Cloud came along, the hosted/managed service, you had some very generous tiers... Once that changed, everything changed. So all of a sudden we no longer had to run our own Grafana and Prometheus. Not that it was difficult, but it's much easier to just run the Grafana Agent - that's all you need - send everything to Grafana Cloud, and it just works.
26
+
27
+ \[07:54\] And with the last changes of the alerts - I think that was the weak point of Grafana for a long, long time. And I saw that as well. So there were all these things just falling into place naturally, and being able to know what's coming and seeing it happening every six months, there's like more, and more, and more. It's like, we know what to expect, you're delivering... "Please carry on", that's what I'm thinking.
28
+
29
+ **Tom Wilkie:** Thank you very much, yeah. You know, I miss so much out of what's happened because unified alerting is a huge step in the Grafana story. I'm really pleased as the way the company came together. We used to have two alerting systems - we had the Grafana alerting system and the Prometheus alerting system. And they were worlds apart. On one hand, the Grafana alerting system is probably the easiest one that exists out there; it's very accessible, very easy to get started with... And on the other hand, the Prometheus system is probably one of the most sophisticated and powerful ones.
30
+
31
+ So I think it was really exciting how the team could combine the power of the Prometheus system, with multi-dimensional alerts, with alert managers routing, grouping and deduping and silencing... And bundle all these features into Grafana in a way that makes them easy to use and gives you that level of user experience that people have come to expect. And best of all, we haven't duplicated any features. We're just using alert manager under the hood. We're using the same API as Prometheus under the hood. So it's true to our open source roots as well, and that's -- the team did a fantastic job with unified alerting.
32
+
33
+ I think the thing you said about cloud, the generous free tier, for instance - we launched that in January, I think...
34
+
35
+ **Gerhard Lazu:** That's right.
36
+
37
+ **Tom Wilkie:** We've always had a kind of free tier; we've always allowed you to have a free Grafana instance, for instance. The work that goes into actually being able to offer a free tier - there's so much going on behind the scenes, just at a very architectural level.
38
+
39
+ The point I'd always make here is that you need the marginal cost of a new Prometheus instance, or of a new Loki instance, or of a new Tempo instance - you need it to be effectively zero. You can't offer a free tier unless the cost of the thing you're offering is as close to zero as possible.
40
+
41
+ So this means behind the scenes we can't be spinning up a new Prometheus pod or a new Loki pod, or a new Grafana pod, or a new Tempo pod for every customer that signs up. That would get too expensive for us to offer. We're not that big a company yet. So fundamentally, the architecture of all of these systems has to be multi-tenant, and we've built -- this is where Cortex comes in. We've built this horizontally-scalable, multi-tenant version of Prometheus, which means provisioning any new instance in that multi-tenant cluster is basically free. It doesn't really cost us -- I mean, once you start sending metrics there are some costs incurred, but because it's multi-tenanted, we can start to take advantage of statistical multiplexing techniques, and really drive down the cost of offering that service... Which allows us to make the free tier so generous.
42
+
43
+ And that architecture has been replicated in Loki... Well, not replicated; it uses the same code, it uses the same module system, the same ring, the same architecture and the same techniques in Loki and in tempo. And that consistency across the offerings just also carries over to the kind of operational and cognitive burden of running this... Because it's the same. Because you scale it in the same way, and you do instant response the same way. So yeah, it's incredibly exciting to finally feel like you're in the last mile of delivering on a vision that's been in progress for five or six years.
44
+
45
+ **Gerhard Lazu:** Everything you said makes a lot of sense to me, but I know that many people will be confused, because you are a VP of product. How on Earth does a VP of product know so many things about code and how things actually work? And I know that you're one of the Cortex co-authors. You've started Cortex... I don't know who the other author is.
46
+
47
+ **Tom Wilkie:** It was Julius, actually. The chap who was one of the original founders of the Prometheus project.
48
+
49
+ **Gerhard Lazu:** Julius Volz?
50
+
51
+ **Tom Wilkie:** Julius Volz.
52
+
53
+ **Gerhard Lazu:** Right, okay. So you and Julius - you started Cortex, which went on to grow, and I think it's a very important component of Grafana Cloud as an engine, an inspiration for Loki, which I think you also had something to do with, right?
54
+
55
+ **Tom Wilkie:** Yeah.
56
+
57
+ **Gerhard Lazu:** \[12:08\] ...when you started the codebase. So how does that work? How can you be VP of product and code Go at a very advanced level? How does it work?
58
+
59
+ **Tom Wilkie:** Titles in the abstract are pretty meaningless, right? So - yes, my title is VP of product, and I do have a lot of product management responsibilities in the company... But my background is a software engineer. I've been a software engineer now for 15-16 years, I've always worked on open source codebases... Straight out of university I was kind of tangentially involved in the Xen hypervisor project. So I worked a little bit on the control tools there.
60
+
61
+ I started a company that got involved in the Cassandra distributed database, and then worked on Prometheus and Cortex. I've just always been a software engineer. I took a brief stint doing some engineering management at Google, some site reliability engineering, where I learned a lot about the whole monitoring side of things. But yeah, at the end of the day I've always been a software engineer. I've always been passionate about this kind of thing.
62
+
63
+ I don't get to do as much software engineering now as it perhaps seems... I have a large team of software engineers who do that and really should take a lot more of the credit than perhaps I do... But you know, I did a few PRs yesterday; that was mostly on some kind of continuous deployment for some internal SLO dashboards... You know, I still try and write a bit of code.
64
+
65
+ We had a hackathon recently internally, where everyone in the company took a week to code on whatever their imagination had been noodling over for the past few months... And I took part. That was pretty cool. I managed to get a couple of days of solid coding in. I'm not gonna tell you what the project was though, because that might become a future product, who knows...?
66
+
67
+ **Gerhard Lazu:** Interesting. I was just going to ask that, if any of those projects are public, but I'm sure the good ones will be, right?
68
+
69
+ **Tom Wilkie:** No, some of them are. Bjorn and Dieter and Ganesh were working on -- one of their hackathon projects was high-definition histograms in Prometheus... And Ganesh has already tweeted about that, and will be putting out more information. The code is out there in public.
70
+
71
+ **Gerhard Lazu:** I've seen that.
72
+
73
+ **Tom Wilkie:** There's a few of them that are public, and a lot of them are gonna form future projects, and potentially even future products. I can give you a bit of a hint what the project I was working on was. Not a lot of people know, at Grafana Labs - actually, its first time-series database that it built for Grafana Cloud is called Metrictank. Metrictank is a Graphite-oriented, still written in Go, still using a lot of the same techniques from modern time-series databases, like the Gorilla encoding and so on... But mainly focused on building that kind of scalable, multi-tenant cloud version of Graphite. And that's what kind of bootstrapped Grafana Cloud before I joined the company.
74
+
75
+ And then I joined and brought Cortex in with me, and since then, of course, the architecture has now kind of moved towards a Cortex-style architecture. The Metrictank team within Grafana Labs, for the past year or so, have actually been working on putting a Graphite query engine on top of Cortex. And we've actually -- I think the launch of that... You know, it'll be a seamless launch; customers shouldn't notice they're being moved off of Metric Tank and onto Graphite v5. That's actually happening very soon, and that's kind of -- to give you a bit of a hint on the direction we're going, now Grafana Enterprise Metrics and Grafana Cloud is a single time-series database that you can query through multiple different query languages.
76
+
77
+ **Gerhard Lazu:** That's fascinating. And now you reminded me the link between Acunu Analytics, the company that you were a part of at some point, and the startup that I was working for at the time, which was GoSquared, which was real-time visitor analytics. At GoSquared we were using MongoDB heavily, and we were starting to look into Cassandra. There was a Cassandra conference, and I thought you were presenting the analytics side of things... And at the time, I was heavily invested in Graphite, Ganglia was there as well...
78
+
79
+ **Tom Wilkie:** \[15:58\] Yeah.
80
+
81
+ **Gerhard Lazu:** ...and I thought "Wow, this Graphite--" And scaling - those were fun days, challenging days. And I looked at Acunu and I thought "Wow, this is interesting. They're using Cassandra for the metrics and it works really well..." I remember even the demo that you gave -- I forget the conference name; this was 2012, 2013...
82
+
83
+ **Tom Wilkie:** Yeah, I don't remember back then.
84
+
85
+ **Gerhard Lazu:** ...a long time ago. Something like that, yes. So Graphite was a great system, but it didn't really scale. It was very problematic. And then Grafana came along, but Grafana on top of Prometheus. So Prometheus had something to do with it. But Prometheus in its incipient phase was a single process, a single instance. How do you scale that? Well, it's not as easy. And Cortex, as far as I know, scales the way anyone would expect. You can shard those metrics, you can replicate them, we have different back-ends for them... That was really, really nice.
86
+
87
+ So I can see history in a way repeating itself with the Prometheus and Graphite, and now I can see the link, where it's actually part of Cortex, or it will be part of Cortex. That's really fascinating.
88
+
89
+ **Tom Wilkie:** Well, it's interesting you mention that, because one of the things Acunu did, one of its contributions to the Cassandra project was a technique called virtual nodes, which is where in the earlier versions of Cassandra each node basically owned a single range in its distributed hash ring...
90
+
91
+ **Gerhard Lazu:** I remember that.
92
+
93
+ **Tom Wilkie:** The technique that Acunu added, and it's been in Cassandra for ages now, was the ability for a node to own multiple ranges. And the whole principle there being once you can own multiple ranges - like hundreds - you then just pick them at random and you achieve a very good statistical load balancing. What's maybe particularly interesting - it's exactly the same techniques in Cortex, in Loki, in Tempo... And that's the ring I was referring to earlier; it's basically just an almost identical copy, just in Go, of the Cassandra hash ring.
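A toy sketch of the virtual-nodes idea Tom describes, in Python purely for illustration (this is not the actual Cassandra or Cortex/Loki ring code): each node claims many random tokens on a hash ring, and a key belongs to the node owning the first token clockwise from the key's hash, so with hundreds of tokens per node the keys spread out statistically evenly.

```python
# Toy consistent-hash ring with "virtual nodes": each node owns many random tokens,
# and a key is assigned to the node with the first token at or after the key's hash
# (wrapping around the ring). Names and token counts are arbitrary examples.
import bisect
import hashlib
import random

def _hash(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (1 << 64)

class Ring:
    def __init__(self, nodes, tokens_per_node=128):
        self.tokens = []  # sorted list of (token, node)
        for node in nodes:
            for _ in range(tokens_per_node):
                self.tokens.append((random.getrandbits(64), node))
        self.tokens.sort()

    def owner(self, key: str) -> str:
        i = bisect.bisect(self.tokens, (_hash(key), ""))
        return self.tokens[i % len(self.tokens)][1]  # wrap around the ring

ring = Ring(["ingester-1", "ingester-2", "ingester-3"])
print(ring.owner("some-series-or-row-key"))
```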
94
+
95
+ **Gerhard Lazu:** This makes me think of the old GoSquared team, because I remember Cassandra and how they were so excited about this... And this was mentioned like "Wow, this is amazing. Like, MongoDB? I think rather Cassandra." I remember that. And it wasn't even like version one at the time. I know that Netflix were big on it as well, and Adrian Cockcroft had a great talk about it; in that context AWS Cloud came in... So many threads connecting in my head right now. Wow... Okay.
96
+
97
+ So let's take a step back from all these -- I won't say rabbit holes, but reminiscing specific things, which are a thing of the past, and let's come back into the present with a question which I know very many people are... I'm not sure whether struggling with, but they are, you know -- there are two sides to them. What is observability? Some say that it is not the three pillars, which is metrics, logs and traces. Some say that's not what observability is. What do you think? What is observability to you, Tom?
98
+
99
+ **Tom Wilkie:** I mean, it's definitely a bit of an industry buzzword right now. The three pillars definition is not that useful of a definition. It doesn't really describe what you're trying to do, or what the problem you're trying to solve. It more describes maybe how you're solving some other problem. So whilst I don't necessarily think it's wrong... Like, in a lot of places and a lot of situations observability does revolve around metrics and logs and traces. It's not an answer to the question "What is observability?"
100
+
101
+ I've always really liked the definition of "Observability is the name for the movement that is helping engineers understand the behavior of their applications and their infrastructure. It's about any tool, any source of data, any technique that helps you understand how a large and complicated distributed system is behaving, and how you should analyze that. That's really my preference. I don't necessarily think I speak for many people though when I say that.
102
+
103
+ **Gerhard Lazu:** I've been thinking about this for a couple of years... I had a couple of interesting discussions. Even the episode before this, that's a really interesting one; if this is the first one that you're listening to, check that out, see how the two compare for you... But I also agree that being curious about how things behavior - I think that's like the first requirement for observability. Are you curious, do you care? And if you care - great. So what are we going to do to understand your production, or your system? It doesn't have to be production, but it typically is, because that's where the most interesting things happen... So how do you do that? How do you take all those metrics, logs and traces - or events, whatever you call them; it doesn't really matter - to understand how the system behaves?
104
+
105
+ **Tom Wilkie:** \[20:27\] It's an interesting kind of way of phrasing it, because what I think we really internalize at Grafana Labs is kind of avoiding a one-size-fits-all solution. So I know there are some incredibly powerful solutions out there that are incredibly flexible, but at the end of the day we internally call it this kind of big tent philosophy, where we try and embrace multiple different solutions and multiple different combinations of solutions, and really kind of focus on helping users get the best out of a wide variety of techniques... Because really, you go into any sufficiently large organization - it doesn't even have to be thousands of people, even just hundreds of people - and there's going to be one team over there that uses one monitoring solution, and a team over there that uses a different logging solution, and they're all gonna be stuck in their little silos, and they're all gonna have their own tools to use to analyze their data... And really, what we're trying to do at Grafana is bring them all together into a single place, and give them all the same experience.
106
+
107
+ The way I've always thought about it is when you get paged in the middle of the night, I don't want a system to tell me necessarily what's wrong, because the reality is if a system could tell me what's wrong, it'll probably be able to fix it for me, and I probably should have thought of it ahead of time, and it probably should never have paged me. I only really ever wanna get paged for things that I wasn't expecting, and therefore I wanna engage that kind of creative part of my brain, and I wanna come up with hypotheses as to why it's broken. And then I want tools that help me test those hypotheses and develop new hypotheses.
108
+
109
+ So really, I'm not looking for a tool that claims to automate root cause analysis, or tell me exactly what's broken... Because if it can do that, it probably shouldn't have broken in that particular way.
110
+
111
+ I'm looking for a tool that helps me test theories that I've got. "Oh, is it broken because of this? Oh, I can correlate some metrics and some logs, and I can see if that's the case." Is it broken because there's a tiny little service running on a computer onto someone's desk that's gone down? Oh, I can go and look at a distributed trace and it'll tell me if that's the case. I want a tool that helps me access data and test hypotheses. And the nice thing about that as a guiding principle is it doesn't say "Well, the best way of doing that is with logs." It doesn't say "The best of doing that is with events." And it doesn't say "The best way of doing it is with metrics." It says "The best way of doing it is situational, and depends on the problem, and it depends on the tools you've got available.
112
+
113
+ **Gerhard Lazu:** That's great.
114
+
115
+ **Break**: \[22:52\]
116
+
117
+ **Gerhard Lazu:** I really liked your last answer, and I think now is a great time to start looking at the Grafana ecosystem, the Grafana Labs, Cloud... Just because Grafana means many things. How would you solve specific problems with the tools that you have available in Grafana? So let's take a specific example... Let's imagine that every now and then my website - some of the requests are slow. What would I do to understand why certain requests are slow?
118
+
119
+ Let's imagine this is a monolithic application, Changelog.com. I'm winking right now... It's a Phoenix app... So what would I do?
120
+
121
+ **Tom Wilkie:** Actually, I don't know what Phoenix is.
122
+
123
+ **Gerhard Lazu:** It's a framework similar to Ruby on Rails, but it's based in Elixir, which - the syntax is similar to Ruby, but it's really running on the Erlang VM.
124
+
125
+ **Tom Wilkie:** Wow.
126
+
127
+ **Gerhard Lazu:** So it's like Ruby on Rails.
128
+
129
+ **Tom Wilkie:** Is that a particularly large user base? It seems very -- I've not heard of that before. Cool.
130
+
131
+ **Gerhard Lazu:** Right. So not necessarily... I mean, depending on what you mean by large, but it scales really well, because it's the Erlang VM.
132
+
133
+ **Tom Wilkie:** Because it's Erlang, yeah.
134
+
135
+ **Gerhard Lazu:** Everything is message passing, you can have clusters natively, it forms a cluster, you start sending messages... I think one of the more popular apps that uses Erlang is WhatsApp, that everybody knows, everybody uses... And RabbitMQ is another messaging queue that also uses the same Erlang VM... And I think the last one is Riak, the database -- I think it still exists. It was by Basho.
136
+
137
+ **Tom Wilkie:** By Basho.
138
+
139
+ **Gerhard Lazu:** And I remember it was like in the same quadrant, right? Acunu Analytics was there...
140
+
141
+ **Tom Wilkie:** Manu was there, I think he was their managing director for the EU team, and he was at Acunu a long time ago, yeah.
142
+
143
+ **Gerhard Lazu:** There you go, so it's a small world, isn't it?
144
+
145
+ **Tom Wilkie:** I think he's now at one of the cryptocurrency companies, but yeah. Unrelated...
146
+
147
+ **Gerhard Lazu:** So coming back to this Phoenix app - the reason I mentioned that it's a monolithic app, it's important because it's not microservices. You don't have HTTP calls, or gRPC's, there's no such thing. It's a single app, it's a monolithic app, it talks to a database, it has an Ingress NGINX in front, there's a load balancer, and then in front of that you have a CDN.
148
+
149
+ So the request comes -- and this is very specific, and maybe this will help... The request goes through a CDN, Fastly, it hits a load balancer, which is a managed one, like your ELB, whatever the equivalent of that...
150
+
151
+ **Tom Wilkie:** Yeah.
152
+
153
+ **Gerhard Lazu:** Then it goes to Ingress NGINX, and then from Ingress NGINX it gets proxied to the right service pod... You know, I don't have to start decomposing this...
154
+
155
+ **Tom Wilkie:** Yeah.
156
+
157
+ **Gerhard Lazu:** And eventually, it hits the database and then it comes back in again. At any one point it could be cached. Sometimes requests are slow... Why? How would we find out with a tool that exists in the Grafana ecosystem world?
158
+
159
+ **Tom Wilkie:** No, it's a great question. So you already know that requests are slow, so that's kind of interesting. I'm gonna guess, for the sake of this discussion, that you've been told by your users that your requests are slow.
160
+
161
+ **Gerhard Lazu:** Right.
162
+
163
+ **Tom Wilkie:** So I would actually say -- first things first, let's kind of confirm that... We wanna instrument the system, we wanna get as many useful metrics as we can out of it. You mentioned an ELB there, for instance. We'd put the CloudWatch exporter on there and get the ELB metrics out into Prometheus. Now, you can do that with the open source exporter. We're also working on a service in Grafana Cloud where effectively we run and manage that exporter for you, just to reduce the number of things you need to run. This will give you access to some rudimentary metrics, but generally, I don't find CloudWatch metrics to be super-useful...
164
+
165
+ **Gerhard Lazu:** I'm sorry, that was a bad example. So I gave an analogy -- it's actually a Linode NodeBalancer. I'm pretty sure you don't integrate with that...
166
+
167
+ **Tom Wilkie:** Okay.
168
+
169
+ **Gerhard Lazu:** But it's like a managed HAProxy.
170
+
171
+ **Tom Wilkie:** I wouldn't underestimate the Prometheus ecosystem. There's probably already an exporter out there for Linode metrics... And if there isn't, there will be by the time we finish this recording, I imagine.
172
+
173
+ **Gerhard Lazu:** I hope so.
174
+
175
+ **Tom Wilkie:** Yeah. So I'd get metrics on the load balancer, because it's always good to start at the very edge.
176
+
177
+ **Gerhard Lazu:** The CDN is first. What about the CDN?
178
+
179
+ **Tom Wilkie:** Yeah, I don't know enough about Fastly, and I'm afraid to really comment... But I'm sure there's some way of getting logs or metrics on that.
180
+
181
+ **Gerhard Lazu:** \[27:55\] Okay. So we've hit something which I wasn't expecting to hit, but let's just go with it. I looked at integrating Fastly logs with the Grafana Cloud. To do that, it only supports HTTPS, because that's what Loki exposes... But we have to validate the HTTPS endpoint that we're going to send logs to. The problem is, how do we validate that we own Grafana Cloud/Loki? We can't do that. So what I'm saying is there's not a native integration between Fastly and Grafana Cloud, and I would really like that. Actually, that's something which we discussed in the previous episode. No, two episodes ago - episode ten.
182
+
183
+ So that's the first part - how do we get from Fastly, sending logs to Grafana Cloud? It's not supported. What Fastly is telling us - you will need to have some sort of a proxy that you can authenticate, and then forward those logs to Grafana Cloud, to Loki specifically.
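+ For reference, a very rough sketch of the kind of authenticating proxy Gerhard is describing could look like the following Python/Flask service, which accepts Fastly's HTTPS log POSTs and forwards them to Loki's push API. The URL, credentials, labels, and the assumption that Fastly is configured to POST newline-delimited log lines are all placeholders, not anything confirmed in the episode:
+
+ ```python
+ # Rough sketch only: an authenticating proxy between Fastly's HTTPS log streaming
+ # and Grafana Cloud Loki's push API. URL, credentials and labels are placeholders.
+ import time
+
+ import requests
+ from flask import Flask, request
+
+ LOKI_PUSH_URL = "https://logs-prod-example.grafana.net/loki/api/v1/push"  # placeholder
+ LOKI_AUTH = ("123456", "example-api-key")  # Grafana Cloud instance ID + API key (placeholders)
+
+ app = Flask(__name__)
+
+ @app.route("/fastly-logs", methods=["POST"])
+ def fastly_logs():
+     # Assumes Fastly is configured to POST newline-delimited log lines to this endpoint.
+     lines = request.get_data(as_text=True).splitlines()
+     now_ns = str(time.time_ns())
+     payload = {
+         "streams": [{
+             "stream": {"job": "fastly", "service": "changelog"},  # Loki labels
+             "values": [[now_ns, line] for line in lines if line],
+         }]
+     }
+     resp = requests.post(LOKI_PUSH_URL, json=payload, auth=LOKI_AUTH, timeout=5)
+     return "", resp.status_code
+
+ if __name__ == "__main__":
+     app.run(port=8080)
+ ```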
184
+
185
+ It's okay... Not great. I would like just to send those metrics directly -- sorry, I keep saying metrics. I mean logs... Send the logs to Grafana Cloud. So that would be the first step. Great.
186
+
187
+ So let's say we understand the part between the CDN and the load balancer. Let's say that we understand that path, and we have some logs to tell us something. What do we do with those logs?
188
+
189
+ **Tom Wilkie:** So logs in and of themselves are seldom useful. So Loki, in LogQL that I referenced earlier, would be able to turn those into some usable metrics. You'd be able to turn them into request rates, error rates, and latencies, if the log contains latency. And you do that all with Loki. You can even, with the more recent versions of Grafana and Loki, you can build dashboards out of those. And some of the cool stuff is like behind the scenes there's a lot of caching going on, so that those dashboard refreshes don't overwhelm the Loki.
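+ As a rough illustration of what Tom is describing, these are the kinds of LogQL queries that turn log lines into request rates, error rates and latencies, shown here being run against Loki's HTTP API from Python. The label names, the JSON log format and the Loki URL are assumptions made for the example:
+
+ ```python
+ # Rough examples of LogQL metric queries over NGINX-style JSON access logs,
+ # run against Loki's HTTP query API. Labels, log format and URL are assumptions.
+ import requests
+
+ LOKI_URL = "http://localhost:3100"  # placeholder; Grafana Cloud Loki also needs basic auth
+
+ QUERIES = {
+     "request_rate": 'sum by (status) (rate({job="nginx"} | json [5m]))',
+     "error_rate": 'sum(rate({job="nginx"} | json | status >= 500 [5m]))',
+     "p99_latency": 'quantile_over_time(0.99, {job="nginx"} | json | unwrap request_time [5m])',
+ }
+
+ for name, logql in QUERIES.items():
+     resp = requests.get(f"{LOKI_URL}/loki/api/v1/query", params={"query": logql})
+     print(name, resp.json()["data"]["result"])
+ ```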
190
+
191
+ And I always say, with metrics - it'll tell you when it happened, it'll tell you how much it happened... Maybe if you've got the granularity, it'll tell you where, which service, or which region it happened in. But it won't actually tell you what happened. It will just tell you that something was slow.
192
+
193
+ So at that point, we start digging in. And there's a couple of techniques we can use. Firstly, I would instrument everything in the stack. We talked about getting metrics on the CDN, we talked about getting metrics on the load balancer... Your Ingress NGINX is running on Kubernetes, so it's trivial to deploy Promtail as a daemon set and get logs on every Kubernetes pod into Loki... So you've got the NGINX logs, which, again, Loki can extract metrics from really straightforwardly. Ward has a fantastic set of dashboards and examples of how to do that already.
194
+
195
+ Then you've got your application, the Elixir application. Now, I don't know enough about that, but I'm going to assume there's a Prometheus client library out there, so I would instrument that... And I would follow -- whenever I'm instrumenting my own application, I tend to follow a very simple method. If you've heard of Brendan Gregg's USE Method, then kind of somewhat tongue-in-cheek I coined this phrase called The RED Method, which is request rate, error rate, and request duration. RED. Everything comes in threes, and it's really easy to remember.
196
+
197
+ So I would just try and export a Prometheus histogram from the application with request rate, with error rate, and with duration. And the histogram will capture all three.
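+ To make the RED Method concrete, here is a minimal sketch using the Python prometheus_client library (the app discussed in this episode is Elixir with PromEx, so this is purely illustrative; the metric name, labels and port are made up):
+
+ ```python
+ # Minimal RED Method sketch with the Python prometheus_client library.
+ # Metric name, labels and port are illustrative only.
+ import time
+
+ from prometheus_client import Histogram, start_http_server
+
+ REQUEST_DURATION = Histogram(
+     "http_request_duration_seconds",   # Duration - the "D" in RED
+     "Time spent handling HTTP requests",
+     ["method", "route", "status"],     # the status label gives you the error rate
+ )
+
+ def handle_request(method, route, handler):
+     start = time.time()
+     status = "500"
+     try:
+         result = handler()
+         status = "200"
+         return result
+     finally:
+         REQUEST_DURATION.labels(method, route, status).observe(time.time() - start)
+
+ # One histogram captures all three RED signals:
+ #   Rate:     rate(http_request_duration_seconds_count[5m])
+ #   Errors:   rate(http_request_duration_seconds_count{status=~"5.."}[5m])
+ #   Duration: histogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m]))
+
+ if __name__ == "__main__":
+     start_http_server(8000)  # exposes /metrics for Prometheus or the Grafana Agent to scrape
+ ```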
198
+
199
+ Finally, you mentioned the database... Let's just, for argument's sake, assume it's MySQL. They don't tend to actually export very good metrics. There is an exporter for it in Prometheus, and we actually baked that into the Grafana Agent, just to simplify it and make it easier and have less stuff to deploy. So I would wire those up and get whatever metrics I can, but I'd also gather the logs, because the database logs tend to be a little bit more interesting.
200
+
201
+ So finally, this hasn't really caught on very much, but you see it in a lot of dashboards that my team and I have built - I tend to always kind of traverse the system from top to bottom. I always have request rates on the left, in panels on the left, and durations like latency graphs on the right. Just through a quick glance on the dashboard, you can typically see where the latency is being introduced.
202
+
203
+ **Gerhard Lazu:** Do you have a good dashboard that exemplifies this? Because what you say makes a lot of sense... Is there a good dashboard that we can use as a starting point?
204
+
205
+ **Tom Wilkie:** \[32:00\] The Cortex ones are the ones that I've probably spent the most amount of time on. Again, a bit of work we did with the Prometheus community was this standard called mixins which is a packaging format for Grafana dashboards and Prometheus alerts. So we've built -- there's 40 or 50 mixins now, from a lot of popular systems, but one of them is Cortex. And it's just a versioned set of dashboards and alerts that are very flexible, very easy to extend, which is kind of key, and very easy to keep up to date with upstream.
206
+
207
+ Actually, the most popular mixin would be the Kubernetes mixin. I would wager that virtually every Kubernetes cluster in the world is running a set of dashboards from the Kubernetes mixin... Which is kind of cool, because I helped write a lot of those, in the very early days at least. It is now a whole community that maintains and has taken them far beyond anything I could ever imagine.
208
+
209
+ So dashboards - you would have a row per service and then you'd just do error rate, and request rate, and latency. And this will help you at a very quick glance. When you get used to looking at dashboards in this format - and every service kind of looks the same, is in the same format - that consistency really helps reduce that cognitive load. You get to kind of pinpoint very quickly where that latency is being introduced.
210
+
211
+ So a very simple technique; it's not universally applicable, but it does help you know "Well, this is coming in my application, or this is coming in my load balancer, or this is coming in my database."
212
+
213
+ **Gerhard Lazu:** Is there a screenshot of such a dashboard that we can reference in the show notes? That would really, really help.
214
+
215
+ **Tom Wilkie:** I can just load up one of our internal dashboards and send it over.
216
+
217
+ **Gerhard Lazu:** Yes, please. That would be great. The other thing is you mentioned mixins. Mixins in what context?
218
+
219
+ **Tom Wilkie:** I've terribly overloaded the term there, because I just thought it was a cool term. I realize in CSS and in Python mixins have a particular meaning... It bears no resemblance to the kind of language-level primitive. It is just a cool name that we used for packaging up.
220
+
221
+ We call them monitoring mixins because we used a language called Jsonnet to express a lot of our alerts and dashboards. And Jsonnet is very much about adding together big structures of data, and it kind of looks a bit like a mixin in that respect. But that being said, most of the way people use mixins nowadays doesn't use that technique. We just use it as a packaging format.
222
+
223
+ **Gerhard Lazu:** Okay.
224
+
225
+ **Tom Wilkie:** So it's just a name. There's a GitHub repo and a small website, and the nice thing about the tooling that's been developed and the packaging format is very much -- we encourage people who publish exporters, or people who build applications that are instrumented with Prometheus metrics to also distribute a mixin. So Prometheus has a mixin, Etcd has a mixin, the Kubernetes mixin is part of the Kubernetes project, right? Cortex has a mixin... And they live alongside the code, they're version-controlled and maintained in the same way as the code... And suddenly, you know how people talk about test-driven development. Well, you almost have observability-driven development.
226
+
227
+ **Gerhard Lazu:** That's interesting. So I know I've heard of mixins in the context of Jsonnet, and I tried them when I was using the Kube Prometheus Stack. The one that -- I think it was Frederick... Yes, it was Frederick while he was still at Red Hat. I know that he's not there anymore, but when he was there, he was pushing for this Kube Prometheus operator...
228
+
229
+ **Tom Wilkie:** That's right.
230
+
231
+ **Gerhard Lazu:** And in the context of the operator, we could get the whole stack. Working with that -- we used that for Changelog. It was really hard, because we had Jsonnet, it was a specific version of Jsonnet... There was a Go one, and there was (I think) a Python one, or a JavaScript one... I can't remember. But I know the Go one was much faster to regenerate all the JSON that you needed, all the YAML that you needed, it took a long, long time, basically, to get it into Kubernetes...
232
+
233
+ So the mixins that you're talking about, how would you use them? Let's imagine that you're running on Kubernetes. How would you use those mixins?
234
+
235
+ **Tom Wilkie:** \[35:52\] This is a really interesting point, because the mixins are Advanced mode. It's like Hard mode. The mixins are solving a problem that software developers have. It's like, how do I package and redistribute and version-control and keep up to date? It's not really an end user format. I wouldn't expect that to happen.
236
+
237
+ So just to address some of the initial challenges. There's a C version and a Go version of Jsonnet, and they weren't quite the same. The Go version didn't have formatting, for instance. The Go version has caught up, and is now what most people use. That's kind of -- we solved that problem.
238
+
239
+ We've also developed a lot more tooling, right? So there's mixtool and there's Grizzly, and there's Tanka, and there's a whole kind of ecosystem (Jsonnet Bundler) of tools to use to manage these. And where it works particularly well is if you're in an organization with sophisticated config management. We have a single repo that has all of the config that describes pretty much our entire deployment of Grafana Cloud across 20-something Kubernetes clusters.
240
+
241
+ **Gerhard Lazu:** Is it public, please? Can you add me to it? \[laughs\]
242
+
243
+ **Tom Wilkie:** Unfortunately not, but there's lots of examples we use from it. But yeah, we've got this one repo, and it's that monorepo approach to config management at least where mixins really fit nicely, because you can use Jsonnet bundler to package-manage them. And then the really cool thing comes in - you probably kind of got 90% the way there, but then didn't have the last 10%... We use Jsonnet to also manage all of our Kubernetes jobs. So all our pods, stateful sets, config maps, services, you name it. All defined in the same language, in a single language, for dashboards, for alerts, for any files, for config maps, for anything. It makes it really easy for us to deliver dashboards and alerts encoded as JSON, encoded as YAML, inside a config map, in the same language, that is then uploaded with a single tool, and the whole process of updating an application and updating its config and updating its monitoring is a single PR, a single push and a single apply. That's all in CD now.
244
+
245
+ That's where the vision was. That's a bit advanced. It's a bit much to ask for most people. And also, it's a bit opinionated. You have to have the complete stack, end to end, bought into the whole thing, to really realize that benefit. And let's face it, other techniques - Kustomize and CUE - are gaining more popularity than Jsonnet ever did... So I think the time has passed for that vision and that way that we're running things.
246
+
247
+ You kind of touched on something really important here... It was too hard to use. So what we've been doing in Grafana Cloud really for the past year or so is trying to make a kind of more opinionated, more integrated, easier to use version of all of that. You sign up to Grafana Cloud, you deploy the agent, so that's the first bit of simplification, the Grafana Agent embeds - it's all open source - Prometheus remote write code, and scraping code. It embeds Loki's Promtail, it embeds the OpenTelemetry collector... It also embeds some 10-20 different exporters, all in a single binary, all in a single thing to deploy and a single thing to configure... And it scrapes and gathers metrics and logs and traces and sends them all to your Grafana Cloud instance.
248
+
249
+ And then within that instance, we've built a service that -- it's almost like an app store; you can select the integration you want to install... "Oh, I wanna monitor some MySQL. I wanna monitor some Kubernetes. I wanna monitor Docker." And it will install the dashboards and the alerts and it will keep them up to date for you, and it will connect them through to the integration and the agent.
250
+
251
+ Behind the scenes, this is all mixins. This is all Jsonnet, this is all automation we've built to make this whole thing easy to use and integrated and opinionated. It's much harder to do that easy-to-use story in open source, because the opinions change, and the integrations change. But in cloud, where it's a much more controlled environment, we can deliver that easy-to-use experience. This just means that people who maybe have seen me talk, or seen someone else talk about Prometheus and talk about Grafana and talk about how easy it is to use and how powerful it is and how awesome it is and how much value they've got out of it, but maybe don't really have the time to dive into the intricacies of Jsonnet and learn 50 new tools, we're just trying to make that accessible to that group of people.
252
+
253
+ **Break**: \[39:59\]
254
+
255
+ **Gerhard Lazu:** As I was saying, we used Jsonnet Bundler (JB). I remember the Kube Prometheus operator and the Kube Prometheus stack which was generated out of that... So we did away with all of that. We've obviously set up our own Grafana, set up Loki, set up Prometheus... Now all we have is a Grafana Agent, which is really nice. By the way, do you know that the docs recommend two Grafana Agents; one to scrape the logs, one to get the metrics? So I figured out how to get a single one, and that was okay, because one can do both... But the thing which I still struggle with is how to get the dashboards working nicely together. I think that's the most important thing. We have PromEx - that's the library that we use in Elixir and Phoenix to get the metrics out... And it's actually on the Grafana Blog as well, so it was featured...
256
+
257
+ **Tom Wilkie:** Great.
258
+
259
+ **Gerhard Lazu:** Alex Koutmos was working closely with the Grafana team. He's also a friend of Changelog's, a very close friend. We work together; we even did a couple of episodes together... Even a YouTube stream on how we upgraded to Erlang 24 and we were using Grafana Cloud to see the impact of that for Changelog.com.
260
+
261
+ **Tom Wilkie:** Nice.
262
+
263
+ **Gerhard Lazu:** It was a Friday evening deploy. PromEx was there... It was a great one; we had great fun. It was a few weeks back. So in that world, the dashboards - I still feel they are the strongest thing and the best thing that you have, but also the most difficult one to integrate... Because the Grafana Agent doesn't really handle dashboards, right? It just gets the logs and the metrics out. So we are using PromEx, but it's really clunky, because you're building your dashboards in Grafana Cloud; a lot of the time they don't work, because the metrics don't show up (reasons), and then you adjust them, then you have to export them, then you have to version control them, and then PromEx has to be configured to upload them to Grafana Cloud. So it's just a bit clunky.
264
+
265
+ **Tom Wilkie:** Yeah.
266
+
267
+ **Gerhard Lazu:** So I'm wondering, how could that be done better? Do you have some ideas?
268
+
269
+ **Tom Wilkie:** There's some kind of guidelines for building dashboards, in my opinion. First thing - you should always template out the data source. Different Grafana installations will name their data sources different things, and so a dashboard imported from one might not necessarily work in another. So I always make sure my data sources are templated out.
270
+
271
+ The second thing - I always tend to template out the job and the instance labels, maybe with wildcard selectors. And again, same reason - this means the dashboard can effectively dynamically discover what jobs you've got with certain metrics. This actually fits a pattern in Prometheus really nicely, where we have this Go buildinfo if you're in Go, and Java buildinfo if you're in Java, and so on... Where every job exports a metric that tells you the version it was built with, and so on. We call these info-level metrics. I tend to add an info metric to every piece of software I write. Maybe it's Cortex info. And then I'll tell the template selector for any Cortex dashboard to just look for all the unique jobs and instances that export a Cortex buildinfo.
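+ A small sketch of that info-level metric pattern, again using the Python prometheus_client; the metric name and label values are invented for illustration:
+
+ ```python
+ # Info-level metric sketch with the Python prometheus_client; names are made up.
+ from prometheus_client import Info
+
+ BUILD_INFO = Info("myapp_build", "Build information for this service")
+ BUILD_INFO.info({"version": "1.2.3", "revision": "abc1234"})
+
+ # This exports a constant series such as:
+ #   myapp_build_info{version="1.2.3",revision="abc1234"} 1
+ # A Grafana template variable can then discover which jobs run this software with
+ # a query like: label_values(myapp_build_info, job)
+ ```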
272
+
273
+ \[44:11\] And again, this kind of turns a static dashboard that might have been hardcoded to use a particular set of labels into a very dynamic dashboard, which allows you to select the job you wanna look at, and it also means that the chances are when you load it, as long as there's some job exporting some relevant metrics, it'll work. So first things first - template your dashboards.
274
+
275
+ **Gerhard Lazu:** Right.
276
+
277
+ **Tom Wilkie:** Second thing - I'm a big fan of dashboards as code. So I actually don't tend to build my dashboards in Grafana. I tend to build them in my text editor, and I tend to use Jsonnet, unfortunately. I tend to use a library called Grafonnet, or there's another one called Grafonnet Builder... And if you don't like Jsonnet, there's a good library called Grafanalib that helps you build them in Python... And yeah, I tend to build them there, I tend to version-control them from the get-go, and really, I tend to use a much more kind of GitOps style approach. There's a couple of tools you can use to do this, but the one I've been using more recently is called Grizzly, by Malcolm Holmes, and it's on the Grafana GitHub. And you can install that and you can point it at a Jsonnet definition of a dashboard and it will upload it to Grafana. Generally, I do a kind of dev deploy cycle on my laptop as I'm developing these dashboards, uploading to Grafana, refreshing, seeing the change... That way, the definition of the dashboard is already in Git. And because I'm version-controlling source code and not a big blob of JSON, the code is much more reviewable, and I can create PRs and have someone else review those PRs, and it's meaningful to do that.
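+ As a hedged example of that dashboards-as-code workflow, here is roughly what a single-panel dashboard looks like in Grafanalib, the Python library Tom mentions; the data source reference, metric name and query are placeholders rather than anything from the episode:
+
+ ```python
+ # Dashboards-as-code sketch with grafanalib (the Python option Tom mentions).
+ # Data source reference, metric name and query are placeholders.
+ from grafanalib.core import (
+     OPS_FORMAT, Dashboard, Graph, Row, Target, single_y_axis,
+ )
+
+ dashboard = Dashboard(
+     title="Changelog - RED overview",
+     rows=[
+         Row(panels=[
+             Graph(
+                 title="Request rate",
+                 dataSource="${datasource}",  # templated data source, per the advice above
+                 targets=[Target(
+                     expr='sum(rate(http_request_duration_seconds_count{job="$job"}[5m]))',
+                     legendFormat="req/s",
+                     refId="A",
+                 )],
+                 yAxes=single_y_axis(format=OPS_FORMAT),
+             ),
+         ]),
+     ],
+ ).auto_panel_ids()
+
+ # Render to JSON with grafanalib's generate-dashboard tool, keep the Python source
+ # in Git so changes are reviewable PRs, and have CI upload the result to Grafana.
+ ```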
278
+
279
+ **Gerhard Lazu:** That sounds exactly what I would want. You've described my ideal approach. But first of all, I didn't know about those tools. Second of all, I'm not aware of any article, any video, anything like this that runs you through how to do this. So what I would want to do is to go through that and capture it.
280
+
281
+ **Tom Wilkie:** I think the reason we don't promote it too widely is because the 80% use case for Grafana is editing dashboards in Grafana. And that's easy to access, easy to use, it's very visual, it's very rewarding to do that. The 20% use case that I've just described is the serious SRE DevOps approach. And I think we've tried a bunch of different ways of doing it. We've settled on this way, but I don't think anyone is satisfied. I don't think this is as easy as it can be. I don't think anyone thinks that this is the final form. So I'm not sure that anyone's kind of too eager to promote this as the advanced way of doing it.
282
+
283
+ I referenced that hackathon earlier that we were doing internally, and I know that we've got some cool stuff coming out that maybe will be the final form of this.
284
+
285
+ **Gerhard Lazu:** I know that I'm very excited about trying it out. This is a dream, and you can say no, right? Or like - not dream, but like a crazy plan. What would it look like if we paired, for an hour -- I've been doing it for close to a decade, so I think I'm pretty good (or so others say) to have a go at this. Maybe half an hour will be enough...
286
+
287
+ **Tom Wilkie:** No, I'd love to.
288
+
289
+ **Gerhard Lazu:** ...just to get a hang of things. So - okay, I'm thinking YouTube stream, I'm thinking--
290
+
291
+ **Tom Wilkie:** Yeah, let's do it.
292
+
293
+ **Gerhard Lazu:** Wow, okay.
294
+
295
+ **Tom Wilkie:** Can we use VS Code sharing? Because I've always wanted to use that, and I haven't had an opportunity to.
296
+
297
+ **Gerhard Lazu:** Anything you want. You're the driver. You're just showing me how it's done, and then maybe we can switch over and I can have a go to see if I understood it correctly in the context of Changelog.com... Because we are already using Grafana Cloud; the integration is there. We're already using Grafana Agent... And who knows, maybe there'll be some interesting things to share. But the focus is on getting this nailed down, because it sounds amazing. Why aren't more people doing this? And I don't think many know about it. Whatever comes after it, I think it's an important step to capture and to share widely.
298
+
299
+ **Tom Wilkie:** Yeah, I agree.
300
+
301
+ **Gerhard Lazu:** Because I don't think people know -- I've never heard this before. Jsonnet, JB... But I was doing it wrong, and I didn't even know until today... So thank you, Tom.
302
+
303
+ **Tom Wilkie:** I wouldn't say you were doing it wrong, but you didn't see the full -- you didn't get an opportunity to use the full process.
304
+
305
+ **Gerhard Lazu:** \[48:00\] ...to do it right. I didn't have the opportunity to do it right. Okay.
306
+
307
+ **Tom Wilkie:** I mean, that's one of the big challenges of this approach, is there's a lot to learn, there's a lot to consume, and you don't really see the benefits until you do it all... Which is, from a developer experience perspective, awful. There's no kind of incremental reward that goes with it, which is what we're missing.
308
+
309
+ **Gerhard Lazu:** We talked about metrics quite a bit, we talked about logs, but we haven't talked about traces. I think it's a very important element. We ourselves are not using traces, and I can see the traces being instrumental, critical, essential to understanding why our requests are slow. If you have a trace, you can understand where the time is being spent, and in the slow request you can see "Actually, you know what - it was Kube-proxy." Because I suspect, based on the metrics that we have, which - by the way, we have quite a few, and everything is going to Grafana Cloud, all the logs, everything. Based on what I see, what we have, all things point to Kube-proxy.
310
+
311
+ So how would we use traces to understand that? First of all, how does it work? This is Tempo, I know that's the component -- would you call it a component? What would you call it?
312
+
313
+ **Tom Wilkie:** I tend to call it either a project or a service, depending on the context.
314
+
315
+ **Gerhard Lazu:** Okay, so the Tempo service. How do we use it for traces, and how would it integrate with - or solve - the problem that I just described?
316
+
317
+ **Tom Wilkie:** This is a really interesting one, because in the metrics world we develop exporters, which gather numeric data from other systems and expose them as metrics. The barrier to entry for metrics is kind of medium. Maybe it's kind of three feet tall. For logs - everything has logs. It's so easy to get logs from everything, so the barrier to entry for logs is kind of nowhere; it's on the floor.
318
+
319
+ The barrier to entry for traces is super-high. You need to have systems that instrument it, you need to correctly propagate the context, the trace ID, and you need to have a way of kind of distributing this telemetry data. So this is the challenge in the tracing space right now, and this is why I think it's always the -- to your point, you haven't adopted tracing yet. It's always the third thing people adopt. The investment is high.
320
+
321
+ The good news is there's a huge reward for that investment, and particularly whenever you're looking at any kind of performance challenges, tracing is invaluable. We've been doing a lot of distributed tracing for a long time in Grafana Labs. We've started with Jaeger and eventually did our own thing with Tempo, and it's been instrumental in kind of accelerating the query performance of every component. So that's the TL;DR.
322
+
323
+ How do you do it? So there's some good news here. One of them is OpenTelemetry, a very cross-functional project, with many different contributors and vendors, that is designed really to make the whole telemetry journey better and easier and simpler. And the most well-developed bit of OpenTelemetry and the bit that is the most widely adopted is their tracing stack. So we've put the OpenTelemetry collector into the Grafana Agent, so you can deploy that and then you've got something that you can just fire traces at in your local environment.
324
+
325
+ You set up the Grafana Agent to forward those traces up to Grafana Cloud, to Tempo, and then Tempo deals with the storage of them. And that's really what the component is. All that leaves is for you to deal with the instrumentation.
326
+
327
+ Now, the good news is with a lot of the high-level languages, a lot of dynamic languages you can use auto-instrumentation. So this is part of OpenTelemetry's client libraries that come along, and for instance with most Java web frameworks, with most Python frameworks, it's like one line of code, or maybe it's even no code changes, and you can get reasonable traces out of the system. I don't think a system like that exists for Go, so it's a bit more work with Go, but it's still not that challenging. Unfortunately, I don't know enough about the Erlang VM, but I'm gonna expect there's probably a pretty easy way of getting traces...
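+ For a sense of what that auto-instrumentation looks like, here is a sketch for a Python Flask app, exporting traces over OTLP to a locally running Grafana Agent; the endpoint, service name and app itself are assumptions, not something from the episode:
+
+ ```python
+ # OpenTelemetry auto-instrumentation sketch for a Python Flask app, exporting
+ # traces over OTLP/gRPC to a locally running Grafana Agent (endpoint and service
+ # name are assumptions).
+ from flask import Flask
+ from opentelemetry import trace
+ from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
+ from opentelemetry.instrumentation.flask import FlaskInstrumentor
+ from opentelemetry.sdk.resources import Resource
+ from opentelemetry.sdk.trace import TracerProvider
+ from opentelemetry.sdk.trace.export import BatchSpanProcessor
+
+ provider = TracerProvider(resource=Resource.create({"service.name": "demo-app"}))
+ provider.add_span_processor(
+     BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
+ )
+ trace.set_tracer_provider(provider)
+
+ app = Flask(__name__)
+ FlaskInstrumentor().instrument_app(app)  # the "one line" that traces every request
+
+ @app.route("/")
+ def index():
+     return "hello"
+
+ # Alternatively, the opentelemetry-instrument wrapper can auto-instrument many
+ # Python apps with no code changes at all.
+ ```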
328
+
329
+ **Gerhard Lazu:** \[51:47\] It exists. The OpenTelemetry integration exists in Erlang. It's not that mature, but it's improving. Every month it's getting better. And I think it's more around the queries that go all the way to PostgreSQL, so how does the request map to that. I know that the database has some impact on that, but right now the most important one is between the app pod, the app instance, and the PostgreSQL pod, which - they all exist in the same place.
330
+
331
+ Now, maybe if PostgreSQL was like a managed service, we wouldn't have this problem. Maybe. But regardless of what the case would be, you'd want to know what is the problem, and if I change this, does it actually improve it? And by how much? If you have the trace, it's really easy to understand "Well, it's not Kube-proxy; I should focus maybe on the load balancer." But I don't know where that request is stuck, or in that request which is the longest portion, so where should I invest my time first?
332
+
333
+ **Tom Wilkie:** You've hit on one of the many problems with distributed tracing. You have to have the entire stack instrumented to get a lot of value. And if you have holes in the middle, or blind spots from a kind of tracing perspective, then the value is greatly diminished.
334
+
335
+ **Gerhard Lazu:** Yeah.
336
+
337
+ **Tom Wilkie:** You can get tracing information out of load balancers, and I've never actually done it myself though. I've always kind of stopped there. I'm hoping that things like OpenTelemetry -- and I know Amazon are heavily investing in OpenTelemetry, so I'm hoping that it'll be possible (if it isn't already) to get OpenTelemetry spans out of my ELBS... my ALBS and so on. I think that's gonna be really important.
338
+
339
+ I'm hoping that things like the W3C Trace Context makes this easier, and maybe this even allows things like the CDN, Fastly, to also emit a span. That would be kind of cool, being able to see a CDN and an ALB and your application.
340
+
341
+ When it comes to Postgres and MySQL, I don't know. I'd love to see spans coming out of those systems, but I don't really know the status, and I'm not really an expert on this side of things.
342
+
343
+ A common misconception is that every service emits one and only one span. It doesn't have to; you can emit as many spans as you like. You probably shouldn't emit too many, but you can do whatever you like. So one of the things we do a lot of is client-side spans. Whenever we do a request to a database in Cortex, in pretty much any of the systems I've worked on, they'll emit a client-side span. And this effectively gives you some insight into the latency that external systems are contributing. But it doesn't have to even just be two spans, a server span and a client span. You can put spans in between. So we will have spans around cache lookups, we will have spans around various kinds of areas inside a single service that parallelize, and will emit multiple spans, and it really helps you understand the flow of the requests. Don't go crazy with it, but in general it's possible.
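+ A minimal sketch of such a client-side span, using the OpenTelemetry Python API; the span name, attributes and the conn/fetch_episode helpers are made up for illustration:
+
+ ```python
+ # A manual "client-side" child span around a Postgres query, via the OpenTelemetry
+ # Python API. The span name, attributes and the conn/fetch_episode helpers are
+ # hypothetical.
+ from opentelemetry import trace
+
+ tracer = trace.get_tracer(__name__)
+
+ def fetch_episode(conn, episode_id):
+     # The server-side span for the incoming HTTP request is assumed to be active;
+     # this child span measures only the time spent waiting on the database.
+     with tracer.start_as_current_span("postgres.query") as span:
+         span.set_attribute("db.system", "postgresql")
+         span.set_attribute("db.statement", "SELECT * FROM episodes WHERE id = %s")
+         return conn.execute("SELECT * FROM episodes WHERE id = %s", (episode_id,))
+ ```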
344
+
345
+ In your situation, because it's a monolith, I would instrument the Elixir server and client going out to Postgres, and that would probably give you enough information to know if it's Postgres, to know if it's Kube-proxy or the ELB. You wanna get a span from something further up the chain, and then start to look at the differences.
346
+
347
+ **Gerhard Lazu:** Ingress NGINX? Does Ingress NGINX and NGINX support spans, do you know?
348
+
349
+ **Tom Wilkie:** I don't know off the top of my head.
350
+
351
+ **Gerhard Lazu:** Okay.
352
+
353
+ **Tom Wilkie:** One of the things I've definitely seen is engineers go down this rat hole of trying to get complete traces and spans from everywhere, and there's just kind of a -- there's an effort/reward trade-off to be made. It might take a lot of effort to get a complete span from every single service. If you're on a mobile app, doing a client-side span might tell you everything you need to know. Just emitting it from your own mobile app.
354
+
355
+ **Gerhard Lazu:** I understand what you're saying. I think on the client-side that is less of an issue because this span, which is the longest one, happens server-side, where it's waiting - or processing, whatever the name may be - and that tends to sometimes be really long... So what happens inside of that span?
356
+
357
+ We know that it goes to (let's say) Fastly. Great. We can remove that, and we can go directly to the load balancer... I don't think there's much we can do about the load balancer, so let's say we ignore that... So our span really starts at possibly the Ingress NGINX. So that's the first start point.
358
+
359
+ **Tom Wilkie:** \[56:05\] Mm-hm.
360
+
361
+ **Gerhard Lazu:** Excellent. What happens inside Ingress NGINX maybe would be interesting. I mean, this is NGINX specifically. Maybe it would be interesting. But the next hop will be into - as far as I know, this will be the entrypoint into Kubernetes. So that will be the service that is responsible for routing the traffic. I mean, that's actually even before the Ingress NGINX, right? It's the service, it hits the NGINX pod, and from the NGINX pod it will need to talk to the other service, which is the application service.
362
+
363
+ So having these first 2-3 steps in the span would be already helpful, but realistically, I think we can only start from the Kubernetes side, and that's okay. So from NGINX, the next hop would be really the application. So how does that span vary, and regardless what happens inside, it doesn't matter. How does that duration change? From the application, again, it has to hit the database. And if we know the timings that it takes, that would be enough. So we have literally the 3-4 hops which we're really interested in, and then there's the Kube-proxy. So where does that happen, and how long does that span take?
364
+
365
+ So it's just like, okay, together, maybe seven steps, and which is the step which is more variable. That's the way I think about it. Is that right? Does this sound right to you?
366
+
367
+ **Tom Wilkie:** With distributed tracing you've always gotta kind of see -- the great thing about it is being able to visualize the actual flow of the requests. So yes, I'm agreeing with you.
368
+
369
+ One of the things I will say is it's probably not Kube-proxy. My understanding in most deployments is that it's not a layer seven thing; it's done at the TCP level, where it doesn't intercept any traffic, so it's not worth putting a -- or it's not even technically possible, I guess, to do a request-level span there, because it's very connection-oriented.
370
+
371
+ **Gerhard Lazu:** Right.
372
+
373
+ **Tom Wilkie:** You know, one of the promises of OpenTelemetry, because it's so vendor-neutral and because it's so open as a standard, is that we might even be able to get spans into more established open source projects who don't wanna pick favorites. Maybe one day we will be able to get spans into Postgres and into MySQL. Maybe it already exists. I'll admit to not knowing off the top of my head.
374
+
375
+ **Gerhard Lazu:** Neither do I, but that's really fascinating. So this is what I'm thinking... First step is let's pair up on what it looks like to do Grafana dashboards Tom-style. I'll call it Tom-style; I know it isn't, but... Grizzly-style, or whatever. The point being is the way you develop them. Big fan of GitOps, big fan of version controlling it... We're not using Argo CD yet, but I would love to put that in the mix. How does that play with the tools that you use? How does it integrate with Grafana Cloud? How can we control those dashboards in a way that is nicer than what we have today?
376
+
377
+ And then, this specific problem - once we have that iteration set up really nicely, and those feedback loops that operate nicely, so we can experiment, which goes back to what you were saying, being able to ask interesting questions, being able to figure things out, like explore, which I'm a big fan of... Figure out, like -- we don't know what the problem is, so let's figure it out. So how can we very quickly iterate on solving that specific problem, or finding that answer?
378
+
379
+ \[59:16\] And then, I think those spans, Tempo and integrating with that - super-valuable, long, long-term. I expect things to change along the way as the ecosystem matures. More libraries are getting instrumented, OpenTelemetry becomes more mature... I think that's a great vision and a great direction towards where the industry is going. I'm very excited about that.
380
+
381
+ As a listener, if I had to remember one thing from this conversation, what should that be, do you think?
382
+
383
+ **Tom Wilkie:** I'd go all the way back to the early comments about observability and about the big tent philosophy, and about there not being one-size-fits-all tooling. I know as a vendor here I have a preference for Prometheus and Loki and Tempo, but honestly, that's just a preference; that's just an opinion. An equally valid opinion is to use Graphite and Jaeger and Elastic. They're very powerful systems. It's our kind of mission at Grafana Labs to allow you to have the same experience and the same level of integration and ease of use no matter what your choice of tooling is.
384
+
385
+ **Gerhard Lazu:** I love that. So if we were to pick one title for this discussion, what do you think that should be?
386
+
387
+ **Tom Wilkie:** Observability and big tent philosophy.
388
+
389
+ **Gerhard Lazu:** Big tent philosophy. I like that. I like that big tent philosophy.
390
+
391
+ **Tom Wilkie:** I'm not sure where the term comes from, to be brutally honest. I should probably google it. I know how a lot of companies have internal mantras. Google's mission was to organize the world's information. The internal mantra in Grafana Labs is this big tent philosophy. We apply it everywhere, to everything we do.
392
+
393
+ **Gerhard Lazu:** Who came with the idea of the big tent, do you know?
394
+
395
+ **Tom Wilkie:** I don't know where the term came from, but the idea was very early on in Grafana, when Torkel added support for multiple data sources. And very early on -- Grafana started life visualizing Graphite data. But very early on, support for other systems was added. And it's really that vision early on to bring together data from multiple systems in Grafana that seeded this idea.
396
+
397
+ **Gerhard Lazu:** So the big tent, the way I understand it, is bringing all these (I wanna say) vendors' data sources? It's more than just data sources, right?
398
+
399
+ **Tom Wilkie:** More than just data sources, because it's data from anywhere, and combining it in a single place, but building experiences that span multiple systems, integrating them in ways that didn't exist before... But it's not just a concept that applies to Grafana and the visualization. We apply it on the backend, with supporting different query languages within the same time-series database. We support it in Tempo, being able to send traces formatted for Jaeger, or formatted for Zipkin. And it's kind of intrinsic in a lot of OpenTelemetry as well, being very vendor-neutral, to a fault.
400
+
401
+ **Gerhard Lazu:** Tom, I didn't think this was possible, but it happened... I have more questions at the end than at the beginning...
402
+
403
+ **Tom Wilkie:** I'm sorry about that...
404
+
405
+ **Gerhard Lazu:** And I'm more excited to continue talking with you at the end than I was at the beginning; again, I thought that's not possible... I'm really looking forward to trying things which I've just said, and I'm really looking forward to the next time, so thank you for today.
406
+
407
+ **Tom Wilkie:** Thank you very much, Gerhard.
Honeycomb's secret to high-performing teams_transcript.txt ADDED
@@ -0,0 +1,381 @@
1
+ **Gerhard Lazu:** So in 2020, 16th of June, I reached out, I've sent you a direct message, and it read like this: "Hi. I can't make headspace for our conversation at the moment. Will ping when I'm done with current work in progress and have loaded necessary context to make it worthwhile. Liking your recent tweets, by the way. Looking forward to where you're taking the mindshare." And I was referring to the observability mindshare. This was 2020.
2
+
3
+ 17 months later, I think that mindshare is getting even more traction, if that was possible. I think it was expected, but I really liked where the whole observability landscape has shifted, and you, Charity, and your team, made a massive contribution to that.
4
+
5
+ **Charity Majors:** Yeah. That's very sweet of you.
6
+
7
+ **Gerhard Lazu:** You gave lots of great talks, lots of great presentations, and I think this will be another one. We'll see how it goes. That's my hope. That's my intention.
8
+
9
+ **Charity Majors:** Cool.
10
+
11
+ **Gerhard Lazu:** So I know that you get asked this question a lot, but I think it's important that we start here. What is observability?
12
+
13
+ **Charity Majors:** Well, it comes from mechanical engineering and control systems theory, and the definition of observability - it's the mathematical dual of controllability, and it means how much can you understand about any internal system state just by looking at it from the outside.
14
+
15
+ So if we extrapolate that to computers, I think a lot of interesting things flow from it, and it's increasingly relevant. It used to be that we had a load balancer, the app, and the database. And you could pretty much predict most of the ways the system was gonna fail; it repeated itself over and over. Nowadays, people have got microservices, they might have hundreds of services, all these different storage systems, and the systems tend to fail in a way that is different every time. So observability is about instrumenting your code in such a way that you could ask any question, understand any internal system state with no prior knowledge, without ever having seen it before... Can you understand what's happening on your very complex system?
16
+
17
+ **Gerhard Lazu:** So instrumenting your code - that is a really important one. Would you say that you would need to instrument your code every time you need to observe an aspect of it?
18
+
19
+ **Charity Majors:** The point of it is that you shouldn't have to add new code to observe it. That's part of the point. If you've got enough context, you should be able to slice and dice and ask new questions without shipping custom code. Because adding custom code implies that you knew in advance what you needed to look for, right?
20
+
21
+ **Gerhard Lazu:** So the code, in a way, it needs to expose some information about how it runs...
22
+
23
+ **Charity Majors:** You want to gather any information you happen to know about the parameters that were passed in, or the runtime environment, the language internals, the container, the systems environment, as well as -- you wanna wrap automatically and store any HTTP calls, you wanna store the amount of time it took, what the contents were etc. Any database calls - you wanna store the raw query, the normalized query, the amount of time it took, the return value... You wanna store anything that might help you find this request at some later date. Any user ID, any shopping cart ID... High cardinality dimensions like IDs are incredibly identifying and incredibly useful. The point is you don't know it's gonna be useful in the future, so you should just throw in anything you think might be useful, and some day it will be.
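+ A hedged sketch of that kind of wide, high-cardinality event, using Honeycomb's libhoney Python SDK; the write key, dataset, field names and the checkout business logic are placeholders, and in practice a Beeline or OpenTelemetry integration would wrap HTTP and database calls automatically:
+
+ ```python
+ # Wide, high-cardinality event sketch with Honeycomb's libhoney Python SDK.
+ # Write key, dataset, field names and process_checkout() are placeholders.
+ import time
+
+ import libhoney
+
+ libhoney.init(writekey="YOUR_WRITE_KEY", dataset="production-app")
+
+ def process_checkout(cart):
+     ...  # hypothetical business logic
+
+ def handle_checkout(request, user, cart):
+     start = time.time()
+     ev = libhoney.new_event()
+     ev.add({
+         "name": "checkout",
+         "user.id": user.id,              # high-cardinality IDs are encouraged
+         "cart.id": cart.id,
+         "cart.item_count": len(cart.items),
+         "request.path": request.path,
+         "build.id": "abc1234",           # which deploy served this request
+     })
+     try:
+         result = process_checkout(cart)
+         ev.add_field("status_code", 200)
+         return result
+     except Exception as exc:
+         ev.add_field("status_code", 500)
+         ev.add_field("error", str(exc))
+         raise
+     finally:
+         ev.add_field("duration_ms", (time.time() - start) * 1000)
+         ev.send()
+ ```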
24
+
25
+ **Gerhard Lazu:** I find it really interesting how you keep mentioning things which make business sense; they are typically related to the problem that your application or your code is trying to solve. What you're not saying is CPU, memory, disk. That's very interesting. Why is that?
26
+
27
+ **Charity Majors:** I feel like we're seeing a bit of a divergence. I think that monitoring tools, things that are metrics-based, are the right tool for the job when it comes to understanding your infrastructure. They're reflective of the service - "Is this service healthy?" But that's a very different question from "Is my code working? Is this user happy? Is this kind of request executed from end to end?" That's the observability tool.
28
+
29
+ Now, I do think that the observability, from the perspective of your code - I think there are a couple of metrics that are probably useful to software engineers. You do wanna know if you just shipped a change and your memory usage tripled. You do wanna know if you just shipped a change and your CPU suddenly saturated. But there's only like three or four of those that are really useful most of the time. The rest of those metrics tend to be everything under /proc or all of the IPv6 counters, and statistics, and stuff... And that should not be in the purview of software engineers who are trying to write code and understand it in production.
30
+
31
+ **Gerhard Lazu:** So the way I hear it, it's almost like the end user experience, what makes them happy, what makes them sad.
32
+
33
+ **Charity Majors:** \[08:10\] It's a radical perspective shift from the perspective of the service, to the perspective of the user. Another way to think of this is "Well, we blew up the monolith..." It used to be you had a monolith, and if all else failed you could attach GDB and you could just step through it, right? Well, then we blew up the monolith and suddenly the request is hopping the network all over the place, and now you can't step through it. So part of the way that we focus on instrumenting is gathering up all of that information around the perspective of the request, so that we're almost like passing it along with the request as it hops the network from step to step.
34
+
35
+ **Gerhard Lazu:** So that to me sounds a lot like what the microservice architecture would advocate for. You have lots of microservices, you have--
36
+
37
+ **Charity Majors:** You're pretty much screwed if you don't have something like this and you're running microservices, yeah.
38
+
39
+ **Gerhard Lazu:** Right. So this is very important for microservices. What about serverless?
40
+
41
+ **Charity Majors:** Absolutely. In fact, I will often tell people that the right way to think about instrumenting your code in the future is just imagine you're running serverless. Because you might not have access to all of the underlying infrastructure. All you have access to is what can you tell through the lens of the instrumentation that you're embedding in your code. It turns out you can tell a lot, and that's what's actually important.
42
+
43
+ **Gerhard Lazu:** Interesting. So if you do have a monolith, what do you do? Can you still use the observability that you mention about?
44
+
45
+ **Charity Majors:** Absolutely. It's never not easier to have observability tools... I feel like though when you're asking someone to radically change the way they do things, or adopt a new tool, what you're offering them needs to be an order of magnitude better than what they've got. For some monoliths, it is. For some, it's not. For a lot of monoliths, they can get along just fine with some Datadog graphs, and dashboards, some monitoring checks... Because almost all the complexity is bound up inside the application logic, and they're familiar with that.
46
+
47
+ So you should never embrace change for the sake of change, or complexity for the sake of complexity. If what you have is working for you, more power to you. The problem is that for so many of us it's almost like falling off a cliff. It's very discontinuous; when the old solutions stop working for you, they really stop working, and it's pretty abrupt and pretty brutal.
48
+
49
+ **Gerhard Lazu:** Right, that makes sense. Going back a little bit to the users - I think that is very important, because all of a sudden, being able to see or visualize in a way the journey that the user takes through your app, and what that entails through your app, I think that is very powerful. And being able to understand what is not working for that user specifically is important...
50
+
51
+ **Charity Majors:** Yes.
52
+
53
+ **Gerhard Lazu:** But also, extrapolating that to all your users.
54
+
55
+ **Charity Majors:** Yes. If it's broken for this user, who else is it broken for? Absolutely.
56
+
57
+ **Gerhard Lazu:** And I like that perspective, because that can work equally well for development teams. So we often think that our end users are the only ones that benefit from the code. But a lot of the time, the development teams spend a lot more time with the code, wrestling it, fixing it, debugging it, whatever needs to happen. So how does the observability that you think about help those types of users?
58
+
59
+ **Charity Majors:** Well, the best engineers I've ever worked with are the ones who will have one window up with their IDE and another window up with their observability tool, and they're just constantly -- they're entering a conversation with their code as it's live in production.
60
+
61
+ So observability isn't a magical fairy solution in and of itself. There are other important components here that work in synchrony. I think that CI/CD, having a really healthy CI/CD pipeline is a really important part of this, because when you're writing code, you have all that context in your brain, it's fresh; you know what you're trying to do, you know what trade-offs you make, you know what you didn't try, or what you tried and what failed... And that stays in your brain for minutes, hours... Not that much longer after you've switched contexts and picked up a different project. Then it's gone and it's never coming back.
62
+
63
+ \[12:12\] And so having a CI/CD pipeline where once you merge your changes to main it automatically picks it up, runs tests and deploys within 15 minutes (that's a good upper bound), and very importantly, it deploys only your changes. If it's small, it's compact, it's a few minutes, then you can ship one engineer's changes at a time, which gives you a really powerful sense of ownership. When you know your changes are going live within 15 minutes, you're highly incentivized to go look at it through the lens of the instrumentation you just wrote.
64
+
65
+ When you're merging your changes and you're pretty sure that at some point in the next 12 to 72 hours your changes and anywhere from 0 to 15 other people's changes are going to be shipped, nobody's gonna look at it. So you've severed that tight, virtuous feedback loop of ownership.
66
+
67
+ I'd also like to point -- you know, Facebook did some great research earlier this year that showed from the moment when you're writing code and you write a bug, the amount of cost and time and pain etc. goes up exponentially when it comes to fixing that bug, the longer it gets. You've written it, you can backspace and it's the easiest it's ever going to be. The longer it gets, the more expensive it gets, the more painful it gets, the harder it gets. Once it's been a month or two, it probably won't even be you that's finding and fixing the bug. It'll be some other poor fool who has no context.
68
+
69
+ So observability is what allows you to take your microscope out and compare at the level of the pull request, what is different about the request? I have this build ID, with these changes, with this instrumentation... And once you can see it, it's so easy to fix. Fixing bugs is not hard; finding the bugs is hard, right?
70
+
71
+ **Gerhard Lazu:** Yes. It's always that one character change, or the one line change. And the hardest bugs - that's exactly what it is, where you just reorder a line, and guess what - it starts working again. Nobody knows why. Don't touch it. That's what typically happens.
72
+
73
+ **Charity Majors:** The hard part is finding where in the system is the bug that you need to fix. And knowing that there's a bug in the first place. And these are the things that observability is so well-positioned to do for you, because it speaks the language of endpoints, and variables, and not the language of low-level systems stuff.
74
+
75
+ **Gerhard Lazu:** I think the term "observability" is overloaded, overused...
76
+
77
+ **Charity Majors:** Well, it is now...
78
+
79
+ **Gerhard Lazu:** It is now, right? Not when you started, right?
80
+
81
+ **Charity Majors:** I planted my flag on it! That was my word.
82
+
83
+ **Gerhard Lazu:** You had quite a bit of time to think about it, and I really like the alternatives that you came up with, which - I think they all mean observability. One that really stood out to me is being curious in production. What happens in your production? How do yo know what is going on? And obviously, production is a metaphor for a system that really matters... Because you maybe work on a software that gets shipped to other users that get to use it in their production, and it's not your production, it's their production; so you're removed from it. But still, understanding how that software behaves in production, someone else's, is also important. So how can you be curious in production? What does that even look like? And I think what you've just described captures it well. Reducing that time between introducing the bug and seeing how the system behaves at scale - because that's what production typically means; lots and lots of requests, lots of weird paths being taken through your codebase... The heisenbugs, right? One in a million. They only happen in production.
84
+
85
+ **Charity Majors:** Right.
86
+
87
+ **Gerhard Lazu:** \[15:58\] And I think property testing and fuzzing help with it somewhat, but not at scale. You can't generate production scale. It's impossible.
88
+
89
+ **Charity Majors:** You've just basically gotta accept that all the interesting bugs are only gonna happen in production. There's no such thing as a staging environment that matches production. It doesn't exist.
90
+
91
+ **Gerhard Lazu:** I love that. I love that. And I think that's why you should push directly to production... On a Friday. The day doesn't matter really. It's just a day. What if it's Saturday? Does it matter?
92
+
93
+ **Charity Majors:** Well, if you're using feature flags, then it shouldn't matter... Right? If you're using a feature flag... Like, decoupling deploys from releases is one of the most powerful things you can do for reliability. I love the phrase that the Intercom folks came up with, which is that shipping is the heartbeat of your company. If you're a software co, shipping should be as regular, as minor, as uneventful, as boring, as tedious, as pedestrian as a heartbeat, right? Because that's how you deliver value to users; it shouldn't be something that you have to get all worked up about. It should just work, it should happen many, many, many times a day. Predictably, etc.
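To make the deploy/release decoupling concrete, here is a minimal Elixir sketch of the feature-flag guard pattern. Every name in it - the `FeatureFlags` module, the flag name, the pricing functions - is a hypothetical stand-in (a real app would likely use a flag library or service so flags can flip at runtime); the point is that both code paths ship to production, and the flag decides what users actually see.

```elixir
# Minimal feature-flag sketch - all names here are hypothetical.
defmodule FeatureFlags do
  # Toy flag store: flags are switched on via application config, e.g.
  #   config :my_app, :feature_flags, new_pricing_engine: true
  # A real setup would back this with a flag service so flags can flip
  # at runtime, without a deploy.
  def enabled?(flag) do
    :my_app
    |> Application.get_env(:feature_flags, [])
    |> Keyword.get(flag, false)
  end
end

defmodule Checkout do
  # Both code paths are deployed; the flag decides which one is "released".
  def price(cart) do
    if FeatureFlags.enabled?(:new_pricing_engine) do
      new_price(cart)
    else
      old_price(cart)
    end
  end

  defp old_price(cart), do: cart |> Enum.map(& &1.price) |> Enum.sum()

  # The new behaviour ships dark and is turned on when we are ready.
  defp new_price(cart), do: old_price(cart) * 0.9
end
```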
94
+
95
+ **Gerhard Lazu:** Any day, every day, it doesn't really matter. As long as you're shipping...
96
+
97
+ **Charity Majors:** Right? A lot of people get worked up about the phrase "testing in production", but in fact we all do it. The only question is "Do you admit you do it?" and "Do you try to build guardrails so that you do it safely or not?" Because I agree, testing in production, if you don't have tests, if you don't whatever - it's a terrible idea. But that's not what we're talking about. We're talking about doing it well. Because it's the only way you can test these things.
98
+
99
+ **Gerhard Lazu:** That's right.
100
+
101
+ **Break**: \[17:35\]
102
+
103
+ **Gerhard Lazu:** So I think we're both agreeing that shipping into production is very important. Anything before that - you can do it, sure. Why? Ask yourself. If you have--
104
+
105
+ **Charity Majors:** Code that isn't in production is dead code. It doesn't matter, it doesn't exist.
106
+
107
+ **Gerhard Lazu:** Right. So that's the first one. The second one is you want that time to be as short as possible. I think anything under 15 minutes is good, but what I'm wondering is why 15 minutes.
108
+
109
+ **Charity Majors:** Totally arbitrary. \[laughs\]
110
+
111
+ **Gerhard Lazu:** Right.
112
+
113
+ **Charity Majors:** \[20:01\] Just the longer it gets, the more pathologies start to creep in; you're entering this sort of death spiral of it takes longer, so you need bigger diffs, so code review takes longer, you start to ship multiple changes from multiple people at a time, so you decouple-- you know, it's just badness. And these numbers I've also pulled out of my ass, but they also seem to be true. If you ship within 15 minutes, that takes you X number of engineers to build, maintain this codebase. If it takes you in the order of an hour or more, you need twice as many engineers. And if it takes you in the order of a day, you need twice as many again. And if it takes you a week, twice as many again. And I'm definitely not exaggerating it; if anything, I am being too conservative and underestimating it.
114
+
115
+ **Gerhard Lazu:** Right.
116
+
117
+ **Charity Majors:** And that time is not being spent on -- it's the worst parts of engineering; it's the waiting on each other, and the trying to find the bugs someone else wrote... Engineering can be such a wonderful, beautiful, creative, fantastic profession, but only if you're in a high-performing team that can spend most of its time solving new and interesting, hard problems that push the business forward every day. There's nothing magical about it, it's just that I think that, honestly, 15 minutes is achievable for anyone who just invests the engineering effort. It's not rocket science, it's just engineering. You just have to set a level and hold yourself to it. Instrument your build pipelines, you can see where all the time is going to...
118
+
119
+ I think Intercom - again, they're some of my favorite people, but they ship in 15 or less, and they have a Ruby on Rails monolith. \[laughs\] If you're using Golang or something, you have no excuse. Anyone can get it done in 15 minutes. 5 minutes or less - that's trickier. I don't think that's achievable for everyone and every stack. But 15 minutes or less - I think that's actually achievable for almost everyone.
120
+
121
+ **Gerhard Lazu:** I think this feels to me very related to testing. You don't want your tests to take more than some number of minutes... And if they do - well, how many changes can you push to your repository if you have to wait an hour, two hours to know whether you haven't broken anything?
122
+
123
+ **Charity Majors:** Yeah.
124
+
125
+ **Gerhard Lazu:** I think this is very similar, but maybe more important, because this also includes the tests. You would obviously want your tests to run within these 15 minutes, and then have your code deployed. So I think there is some sense there. But also, I think even shipping may be an overloaded term. Like, what does it mean to ship? Well, obviously, you're getting code out there in production. But what if you think about shipping as learning? How long would you want to wait to learn something, or actually to get an answer to one of your questions that you have? "I wonder..." "What if...?" and you try something. And if you have to wait an hour for an answer - well, I think you'll get frustrated very quickly.
126
+
127
+ **Charity Majors:** If you have to ship a one-line change, how long would it take you to get that out? For a lot of companies, they haven't prioritized it, and it literally takes them hours to do a one-line change, and that to me is just unspeakable. The thing is, even companies who take an hour to make the change, almost all of them have shortcuts... And that's terrible. You really want the shortest, fastest, quickest, easiest path to be the default.
128
+
129
+ **Gerhard Lazu:** Let's take our example. Changelog.com is a monolithic application. It's a Phoenix-based app; think of it like Ruby on Rails. It's using a PostgreSQL database, it has NGINX in front - this is Ingress NGINX, it's running on Kubernetes, and there's a load balancer in front, and there's a CDN in front as well. So if we wanted to make our setup more observable, the way you think about observability as we've discussed so far, what should our first three steps be?
130
+
131
+ **Charity Majors:** \[24:04\] What language are you using?
132
+
133
+ **Gerhard Lazu:** What language? It's Elixir.
134
+
135
+ **Charity Majors:** Yeah, you should install the Honeycomb OpenTelemetry instrumentation into your application, and then that'll give you out of the box -- it automatically wraps HTTP and database calls and all this stuff... And then you might want to, at some point, go in and add some amount of tracing.
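For an Elixir/Phoenix app like the one Gerhard describes, that setup looks roughly like the following. This is a minimal sketch assuming the OpenTelemetry Erlang/Elixir packages and the OTLP exporter pointed at Honeycomb; the package versions, config keys and the `HONEYCOMB_API_KEY` variable are illustrative, so check the current Honeycomb and OpenTelemetry docs rather than treating this as the exact recipe.

```elixir
# mix.exs - pull in the OpenTelemetry packages (versions are illustrative)
defp deps do
  [
    {:opentelemetry, "~> 1.0"},
    {:opentelemetry_exporter, "~> 1.0"},
    {:opentelemetry_phoenix, "~> 1.0"}, # wraps HTTP requests automatically
    {:opentelemetry_ecto, "~> 1.0"}     # wraps database calls automatically
  ]
end

# application.ex - attach the automatic instrumentation at boot
def start(_type, _args) do
  OpentelemetryPhoenix.setup()
  OpentelemetryEcto.setup([:my_app, :repo]) # your repo's telemetry prefix
  # ... rest of the supervision tree ...
end

# config/runtime.exs - export spans to Honeycomb over OTLP
config :opentelemetry_exporter,
  otlp_endpoint: "https://api.honeycomb.io:443",
  otlp_headers: [{"x-honeycomb-team", System.fetch_env!("HONEYCOMB_API_KEY")}]
```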
136
+
137
+ **Gerhard Lazu:** So the tracing is like the custom stuff, where we care about specific calls being made, how long they take, stuff like that.
138
+
139
+ **Charity Majors:** Yes. Which is optional, but it's really handy when you're trying to figure out where your time is going, or concurrency problems, or stuff like that.
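A hand-added span for that kind of "custom stuff" might look like this. The module, function and attribute names are made up for illustration; `OpenTelemetry.Tracer.with_span` is the real macro from the opentelemetry_api package, though the exact attribute-setting API can differ slightly between versions.

```elixir
defmodule MyApp.Search do
  require OpenTelemetry.Tracer, as: Tracer

  def search(query) do
    # Wrap the call we care about in its own span so it shows up in traces,
    # carrying the details we will want to query on later.
    Tracer.with_span "search.query" do
      Tracer.set_attributes(%{"search.query" => query})
      results = run_query(query)
      Tracer.set_attributes(%{"search.result_count" => length(results)})
      results
    end
  end

  # Stand-in for the real (and occasionally slow) work.
  defp run_query(_query), do: []
end
```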
140
+
141
+ **Gerhard Lazu:** Okay. So that was just like one step - install it, and that's it. I like that. That sounds really good. That's super-simple. Very interesting. And how would we visualize the data? So our app starts emitting those events... What happens next?
142
+
143
+ **Charity Majors:** You go to Honeycomb.io, and your landing page will be familiar to you if you've used APM tools before. It will have errors, and latency, and request rate etc. But you can start playing around with it. If you're trying to diagnose the problem, or if you're -- one of my favorite things about Honeycomb that we've done is BubbleUp, which is this cool thing where if you see a graph and there's a spike or something and you're like "Ahh, this is bad. I wanna know more about this", you can just draw a little bubble around that spike, and then we will pre-compute for all the dimensions, outside and inside of the bubble, and diff them, sort them, and we'll tell you exactly what is different about the thing that you said you cared about, whether that's one thing or five things.
144
+
145
+ So you might go "Ah, I care about this", and we'll go "Oh, these errors could be maybe the export endpoint, all from this region of Amazon, all for this particular user ID, all for this particular language pack", and it's really clear, you just immediately see "Ah, this is what's different about the thing that I care about."
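The core idea - compare the events inside the selection against the events outside it, dimension by dimension, and rank what differs most - can be sketched in a few lines. This is only an illustration of that idea, not Honeycomb's implementation, and it assumes events are flat maps of dimension to value.

```elixir
defmodule BubbleUpSketch do
  # events    - flat maps like %{"endpoint" => "/export", "region" => "us-east-1"}
  # selected? - predicate that is true for events inside the drawn "bubble"
  def suspicious_dimensions(events, selected?) do
    {inside, outside} = Enum.split_with(events, selected?)

    inside
    |> Enum.flat_map(&Map.to_list/1)
    |> Enum.uniq()
    |> Enum.map(fn {dim, value} ->
      {dim, value, frequency(inside, dim, value) - frequency(outside, dim, value)}
    end)
    |> Enum.sort_by(fn {_dim, _value, diff} -> diff end, :desc)
  end

  # Fraction of events in which this dimension has this value.
  defp frequency([], _dim, _value), do: 0.0

  defp frequency(events, dim, value) do
    Enum.count(events, &(&1[dim] == value)) / length(events)
  end
end
```

Dimensions that score close to 1.0 show up in almost every event inside the bubble and almost none outside it - those are the suspects a BubbleUp-style view would surface first.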
146
+
147
+ **Gerhard Lazu:** So let's say that we have certain requests which sometimes are really, really slow. Could Honeycomb help us identify why they're slow?
148
+
149
+ **Charity Majors:** Hell yeah.
150
+
151
+ **Gerhard Lazu:** Okay, okay... I'll try that. And if it doesn't work, who should I talk to?
152
+
153
+ **Charity Majors:** \[laughs\]
154
+
155
+ **Gerhard Lazu:** You?
156
+
157
+ **Charity Majors:** There's a great little Intercom pop-up in the app that will take you to our support team, and they're wonderful.
158
+
159
+ **Gerhard Lazu:** Amazing. That's exactly what I intend to do next.
160
+
161
+ **Charity Majors:** Excellent.
162
+
163
+ **Gerhard Lazu:** Okay. And behind the scenes, where are all those events going?
164
+
165
+ **Charity Majors:** Well, they've come to the Honeycomb API, which is a pretty thin little shim that does some rate limiting etc. and then drops them into Kafka. And the Kafka queue is consumed by a pair of retrievers; that's our custom -- you know, I've spent my entire career telling people "Never write a database!" and I'd like to be very clear that we have not written a database, we've written a storage engine... It's completely different.
166
+
167
+ **Gerhard Lazu:** Okay... What's the difference?
168
+
169
+ **Charity Majors:** Not much. \[laughs\]
170
+
171
+ **Gerhard Lazu:** Right, okay...
172
+
173
+ **Charity Majors:** It's a \[unintelligible 00:27:04.12\] store so it gets consumed by a pair of retriever nodes, and pretty swiftly it also gets aged out to S3. Then when you're issuing requests via the Honeycomb UI, the queries are actually run by Lambda jobs, which will then fan out to a full table scan. So we merge the data and return it to you in the browser.
174
+
175
+ **Gerhard Lazu:** That's interesting. So I hear S3, I hear Lambda... The API - you're not using API Gateway or anything like that from Amazon?
176
+
177
+ **Charity Majors:** API Gateway?
178
+
179
+ **Gerhard Lazu:** Whatever the name of the service is; they have a service which basically provides API functionality for your Lambdas, so you can hook up Lambdas to APIs...
180
+
181
+ **Charity Majors:** No.
182
+
183
+ **Gerhard Lazu:** No? You're not using that. And do you know why? I'm curious, genuinely.
184
+
185
+ **Charity Majors:** I'm not sure, actually.
186
+
187
+ **Gerhard Lazu:** And then why Kafka? I have to ask that. For other reasons.
188
+
189
+ **Charity Majors:** \[27:54\] Since we are writing our own storage engine, it gives us like 18 hours' worth of backup. You know, if we need to replay some events, or if anything happened... It's also how we bootstrap and bring up new nodes...
190
+
191
+ **Gerhard Lazu:** Why not Kinesis?
192
+
193
+ **Charity Majors:** At the time, I was the one who made that decision; this was like five years ago... And there were some constraints Kinesis had that I think had to do with the events and some of the data types that we needed, that it just wouldn't support; it wasn't flexible enough.
194
+
195
+ **Gerhard Lazu:** Okay, so it's a Kinesis limitation that was there in the past, and it doesn't matter whether it's there now... Obviously, you have Kafka, it's running well, I'm assuming...
196
+
197
+ **Charity Majors:** Honestly, ideologically, while I do believe in outsourcing, making it someone else's problem whenever possible - given that Kafka is basically functioning as part of our database, which is a very integral part of Honeycomb, it is one of the things that I think is better for us to have in-house expertise on and run ourselves.
198
+
199
+ **Gerhard Lazu:** Okay. So that answers my next question, which is if it's a managed service or if it's something that you install, you manage, you update...
200
+
201
+ **Charity Majors:** Yeah, we install and manage and update.
202
+
203
+ **Gerhard Lazu:** How is that experience, I'm wondering?
204
+
205
+ **Charity Majors:** Kafka?
206
+
207
+ **Gerhard Lazu:** Yeah. Some people say that managing Kafka, like installing Kafka clusters used to be difficult. With Zookeeper I think that is going away these days... I don't know.
208
+
209
+ **Charity Majors:** You know, not many startups have an ops co-founder... \[laughs\] And we were fortunate enough to have me. So that stuff is not that hard if it's your bread and butter.
210
+
211
+ **Gerhard Lazu:** Right, okay. And then it's S3 behind the scenes, it's Lambda... So I'm assuming that when the Honeycomb UI is displaying those charts from all those events, you're actually consuming those events from S3, is that right? To draw the --
212
+
213
+ **Charity Majors:** For the first few hours... It depends. It's dynamic based on your write throughput etc. But it gets written out to SSDs first, and then it gets aged from there into S3. So yeah, it's reading from some combination of the local SSDs and S3.
214
+
215
+ It was interesting - when we moved from using SSDs for everything to age things out to S3, we really thought there would be a severe performance hit... It turns out no. The performance characteristics are different, but -- and speed is incredibly important to us, because we really want people to be in the zone, just try this and this, add this question and tweak it, and tweak it... So for our 95th percentile we target one second for those queries.
216
+
217
+ **Gerhard Lazu:** Right. Okay.
218
+
219
+ **Charity Majors:** Quite fast.
220
+
221
+ **Gerhard Lazu:** Yeah, that sounds fast. I mean, we were mentioning 15 minutes before, and now you're telling me one second... So yes, it is very fast, if you have that reference point... Okay. And I'm assuming that Honeycomb uses Honeycomb to understand Honeycomb. Is that right?
222
+
223
+ **Charity Majors:** Definitely. \[laughs\] We have office dogs... Honeycomb was actually first named Bloodhound, and then we shortened it to hound.sh, and then we got a cease and desist from Hound CI... So now we're named Honeycomb. But Retriever is the name of our database, and Poodle is the name of our frontend, and dog names for everything. We have a dogfood cluster; that is how we monitor everything that Honeycomb does with Honeycomb. And then we have a kibble cluster that monitors the dogfood cluster.
224
+
225
+ **Gerhard Lazu:** And what monitors the kibble cluster?
226
+
227
+ **Charity Majors:** Nothing. \[laughter\]
228
+
229
+ **Gerhard Lazu:** You do, right? "Are you working alright?" "Yes" is the answer. Okay.
230
+
231
+ **Charity Majors:** \[laughs\] Yeah.
232
+
233
+ **Gerhard Lazu:** Right. So does the dogfood cluster run a different version of Honeycomb?
234
+
235
+ **Charity Majors:** Well, it's interesting you bring this up. We are deployed from cron, like every ten minutes. And it first deploys to kibble, and then it waits some amount of time, and if everything is okay, then it promotes to dogfood, then it waits and then it eventually promotes to production. And all that happens automatically. So it runs a different version for some amount of time, until it catches up.
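The pattern being described - deploy to the first environment, let it soak, check health, then promote - could be sketched like this. The stage names match the ones mentioned, but the code is a hypothetical illustration of the pattern, not Honeycomb's actual cron job; `healthy?/1` stands in for real SLO and error-rate checks.

```elixir
# Hypothetical illustration only - not Honeycomb's actual tooling.
defmodule PromotionSketch do
  @stages ["kibble", "dogfood", "production"]
  @soak_time :timer.hours(1) # roughly the soak time mentioned between stages

  def run(build) do
    Enum.reduce_while(@stages, :ok, fn stage, :ok ->
      deploy(build, stage)
      Process.sleep(@soak_time)

      if healthy?(stage) do
        {:cont, :ok}                            # promote to the next stage
      else
        {:halt, {:error, {:unhealthy, stage}}}  # stop the rollout here
      end
    end)
  end

  defp deploy(build, stage), do: IO.puts("deploying #{build} to #{stage}")

  # Stand-in for real checks: error rates, latency SLOs, and so on.
  defp healthy?(_stage), do: true
end
```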
236
+
237
+ **Gerhard Lazu:** So how long does it take for that to make it to production?
238
+
239
+ **Charity Majors:** About an hour, three tops.
240
+
241
+ **Gerhard Lazu:** So that means you don't deploy to production first. You go to kibble first, and then dogfood, and then production.
242
+
243
+ **Charity Majors:** We consider that production.
244
+
245
+ **Gerhard Lazu:** Right, okay. That's what will become production once everything is okay on kibble.
246
+
247
+ **Charity Majors:** Yeah.
248
+
249
+ **Gerhard Lazu:** \[32:07\] Okay. And how long do you keep things on kibble before promoting to dogfood?
250
+
251
+ **Charity Majors:** It's about an hour. It's about an hour from kibble to dogfood, and an hour from dogfood to production.
252
+
253
+ **Gerhard Lazu:** Did you find that helpful? Did you find it helpful, having kibble and then dogfood before production?
254
+
255
+ **Charity Majors:** Oh, yeah. Absolutely.
256
+
257
+ **Gerhard Lazu:** So everybody makes mistakes, even the best ops people in the world. Is that what you're telling me?
258
+
259
+ **Charity Majors:** Absolutely. \[laughs\]
260
+
261
+ **Gerhard Lazu:** Good. I think that makes a lot of sense. People think some ops people are demi-gods...
262
+
263
+ **Charity Majors:** No...! No.
264
+
265
+ **Gerhard Lazu:** Everybody makes mistakes, but we fix them so quickly you don't even know. And we don't let them -- I think this is propagated everywhere. We trust the system, and the system has all these gates built-in.
266
+
267
+ **Charity Majors:** We never trust \[unintelligible 00:32:51.09\]
268
+
269
+ **Gerhard Lazu:** Right. Or engineers.
270
+
271
+ **Charity Majors:** Or engineers. No. Why would we do a thing like that? \[laughs\]
272
+
273
+ **Gerhard Lazu:** So I think this brings us to the software development being a socio-technical problem. People are fallible, they will make mistakes, and a lot of the time it is about those mistakes, which - think of them like learning opportunities. And if you think of them like that, then you optimize for learning; 15 minutes is important. You have those guardrails in place, so that things like failures don't cascade... I think that's a better word for it. So you know, you have circuit-breakers, and all those fancy things. All it means is errors don't run havoc in your setup.
274
+
275
+ **Charity Majors:** Yeah.
276
+
277
+ **Gerhard Lazu:** And what else would you say about this? Because I know it's a term which is very dear to you.
278
+
279
+ **Charity Majors:** Yeah. Well, I think that people have this image of like "Oh, you hire a Google engineer and suddenly your team will get better", or something. No, I think that it's pretty clear that any engineer who joins a team, within three to six months or so, will be shipping and performing at the level that that team performs, whether that's up or down. The power of the group, the environment that you're in is far more powerful than your own personal knowledge of data structures and algorithms. And we have this weird magical belief in the power of individuals, but we should spend way more time just paying attention to the environment in which we all write and build and ship our code, because the way that people are doing it now is the hard way. You shouldn't have to be a great engineer to write code and get it out quickly. We should build systems that make it easy for engineers to get their code out quickly. I just think we act like great engineers make great teams, when it's exactly the opposite, in fact. It is great teams that make great engineers.
280
+
281
+ **Break**: \[34:56\]
282
+
283
+ **Gerhard Lazu:** I think we're touching on something very, very important. You keep mentioning systems, you keep mentioning teams... Now, a system means teams. It doesn't mean a technical system. It means how everything works. And a system can even mean a company. They're never closed systems, by the way; there are always all sorts of forces, and it changes all the time. Sometimes very fast, or some people think like that; others think it's very slow. But it is a system, all of it. And I'm wondering, what does a high-performing team look like in such a system, or a high-performing system. What does it look like, from your perspective?
284
+
285
+ **Charity Majors:** A high-performing team is one that gets to spend most of their time and energy and focus solving new, hard problems that move the business forward, not trudging in salt mines of engineering, just trying to find bugs and reproduce bugs, and firefight.
286
+
287
+ A high-performing team is one that ships often. It doesn't find it remarkable to ship. A high-performing team is one that can take a lot of stuff for granted, because there's a real socio-technical structure around them: the CI/CD is well tended to, there are internal/external SLOs, people's time is taken seriously and respected as the incredibly valuable resource that it is, not frittered away and wasted.
288
+
289
+ **Gerhard Lazu:** So if a team would like to become high-performing, but let's say they're fighting their CI/CD pipeline, what would you recommend they do?
290
+
291
+ **Charity Majors:** Well, a team or an individual? I feel like you can only really make decisions as an individual. And while I do believe in pitching in and trying to make the system better, there are also a lot of places where there are too many entrenched forces that are against change... And I really think that people should be more willing to leave their jobs and go find a high-performing team to join. Go find that high-performing team that will make you a great engineer. You only get one job, you only have one career. Your career is the most powerful, multi-million-dollar appreciating asset you have. You have an obligation to yourself to curate it for the long haul.
292
+
293
+ Join great teams where you don't have to fight to make change, to make progress, where you can learn a lot from other great people. I've seen too many amazing engineers stick it out year after year at jobs that didn't appreciate them, where they weren't allowed to make the changes that they knew needed to be made. There are other places that will welcome your creativity and will care about your sleep schedule... And if you don't feel respected, you probably aren't. Go somewhere else. It is a buyer's market. This is probably the one role that is easiest to find a new job in the entire world, and it won't last forever, so take advantage of it.
294
+
295
+ **Gerhard Lazu:** \[40:17\] So you make a high-performing team by joining a high-performing team, and then you become--
296
+
297
+ **Charity Majors:** It's the easiest way.
298
+
299
+ **Gerhard Lazu:** That's the easiest way, okay. And what about the hard way? What about someone that says "No, I have decided that I want to make my team high-performing, and I will stick with them for as long as it takes"?
300
+
301
+ **Charity Majors:** Do you have support? Do you have the support of your higher-ups? Do you have the support of your team? Because you can't do it on your own. This is a team effort, so you need to look -- and here's where I feel like engineering managers have a lot to answer for the state of things today... Because engineering managers are the ones who are in the position where they should be able to translate between the business and the engineers. They should be able to not just take orders about what to spend engineering time on, but push back. Make the case that just doing product, product, product, feature, feature, feature, feature, feature - it's a really short-sighted approach. It's not good for the team, it's not good for the engineers, it's not good for anyone, even though it looks like you're super-hella-busy. Like, push back. Make the case. Learn to translate from engineering words into dollars and cents.
302
+
303
+ I feel like there's a lot of passivity on the part of a lot of engineering leadership, when -- who's gonna do it if not you?
304
+
305
+ **Gerhard Lazu:** What would you say about product people that -- I don't wanna use the word "boss", but let's say tell engineers what to do, and sometimes the engineers think "You know what - this doesn't feel right"? What would you recommend in that situation?
306
+
307
+ **Charity Majors:** I don't think that that's a healthy situation. Product people should never be telling engineers what to do. It should be a triad. You've got product, design, engineering. You are all equals. All your voices matter. You're experts in your own domain. The idea of a product person telling an engineer what to do in terms of engineering labor is ludicrous. You wouldn't tell them how to run their user surveys... So - like so much, this comes down to respect, and it comes down to a healthy culture, and you should push back gently, push back more firmly... And in the end, if you aren't listened to, leave.
308
+
309
+ **Gerhard Lazu:** What does a healthy product engineering relationship look like?
310
+
311
+ **Charity Majors:** It looks like a triad, it looks like a partnership. Nobody's trying to make anyone do anything; you're all aligned on wanting to move the business forward and wanting to do a good job. I'm not saying it's easy... But unhealthy power dynamics should be pretty easy to sniff out, and that's never okay.
312
+
313
+ **Gerhard Lazu:** We've been discussing with Ian Miell in a previous episode about the power of money, and especially money flows... And he makes a really good case where he says "You should really follow the money flows, because they will dictate what is important and what should happen."
314
+
315
+ **Charity Majors:** Yes.
316
+
317
+ **Gerhard Lazu:** How, in your opinion, does the money flow or money come into play when it comes to product and engineering? Because there must be a relationship. What does a healthy relationship look like?
318
+
319
+ **Charity Majors:** Well, the naive, the simplistic answer that we often see is just to focus on "Features, features, features", because there's a straight line from feature to money. Or there should be. It's a more elliptical line from tech debt to money, or from observability to money, or from--
320
+
321
+ **Gerhard Lazu:** Happiness.
322
+
323
+ **Charity Majors:** Happiness, right. There are a lot of things that are more elliptical, but they're no less real. It's just a question of short-term investment versus long-term investment, and you can't just play the short-term game all day, all week, all month, all year, or you'll lose people, you'll lose happiness... It shows just in people's weary faces.
324
+
325
+ **Gerhard Lazu:** \[44:14\] So how would you measure what is important on a team? Money is not it, right? That's a short-term goal which has many negatives associated with it. It's important, of course, but it shouldn't be the sole driver.
326
+
327
+ **Charity Majors:** No. It depends, to some extent... Here's one thing. I think every manager should be -- so I do think every engineer who builds a 24/7 highly-available service should be on call for their work. I also think that getting woken up two or three times a year for your service is reasonable. I think more than that veers close to abusive. And I think it's an engineering manager's job to track this, to make sure that it doesn't get out of hand, to take assertive, active measures when it starts to get really noisy, to carve out time for it. Because sleep - sleep is an important thing, which leads to retention of engineers, which leads to job satisfaction, and all other intangibles... But that's one pretty solid thing that I can put my finger on. People's ability to spend their time focusing and not being interrupt driven, not being woken up, not being firefighting all the time.
328
+
329
+ Every engineering team has two constituents. There's your customers and there's your engineers. Neither one is more important than the other.
330
+
331
+ **Gerhard Lazu:** That is really powerful. So how do you measure the happiness, or -- I think "measure" is maybe the wrong word. How do you determine how happy and healthy your engineers at Honeycomb are?
332
+
333
+ **Charity Majors:** Well, you can start by asking them, and by doing anonymous surveys now and then. Good engineering managers have their finger on the pulse of their teams, and they should be sensitive to things. Is a team getting burned out? Are the demands unreasonable? Does the team need different composite -- do we need some more senior folks to be doing mentorship? Do we need more challenging -- the care and feeding of engineering teams should be the job of a good engineering manager, and they should be able to tell you quite a lot right there.
334
+
335
+ You can also look at top-level metrics like attrition... But honestly, I'm a big fan of just asking people and building up a trust relationship, so that people know they aren't gonna be punished for saying something.
336
+
337
+ **Gerhard Lazu:** Would you ask them regularly? Would you let them come to you? What works best?
338
+
339
+ **Charity Majors:** Both. All. All of the above. And also, I like asking engineers about each other too, like "How is so-and-so doing? Do you feel like so-and-so is getting stressed or burned out?" Because a team of people tends to care deeply for each other, and they're often a lot more sensitive to each other's burnout etc. than they would be for their own. So you can ask them about each other, too.
340
+
341
+ **Gerhard Lazu:** \[47:03\] I really like the way you think about the human element. I really like the way you see us, the engineers, as people, at the end of the day. They're not machines; they have to talk to machines, but it doesn't make them one.
342
+
343
+ **Charity Majors:** Engineers are not fungible. You asked about the socio-technical systems, and like -- there's a thought experiment that I use sometimes... Imagine the New York Times; you've got a socio-technical system, it's comprised of people, the tools, the systems etc. If you took all the people away and replaced them with equally-powerful engineers, equally-experienced etc. and sent all of your New York Times engineers off to the Bahamas... How long would it take them to figure out how to fix even a small problem? So much of the system lives in your head, right?
344
+
345
+ **Gerhard Lazu:** Context, yes.
346
+
347
+ **Charity Majors:** Starting with how you really log in. It would take a really long time. The majority of the system lives in the heads of the people who work on it, so you can't take them for granted, you can't just replace them.
348
+
349
+ **Gerhard Lazu:** Not cogs. Not machines. They're not pets, they're not cattle... They're people.
350
+
351
+ **Charity Majors:** None of the above.
352
+
353
+ **Gerhard Lazu:** So as a listener, if I had to remember one thing from this conversation, what do you think that should be?
354
+
355
+ **Charity Majors:** If you're frustrated about the performance of your engineering team, take a long, hard look at your CI/CD pipeline. 15 minutes or bust. And observability, of course. Go use the Honeycomb free tier.
356
+
357
+ **Gerhard Lazu:** That's a good one. What I would say is be curious in production, because that's where all the interesting stuff happens.
358
+
359
+ **Charity Majors:** I like that a lot.
360
+
361
+ **Gerhard Lazu:** I would use another word instead of "stuff", but you know what I mean.
362
+
363
+ **Charity Majors:** \[laughs\]
364
+
365
+ **Gerhard Lazu:** Lastly, I would like to talk about a book which - there's an early release, raw and unedited, that you can get for free. Observability Engineering. I think the tagline is even better: "Achieving production excellence." I think that's super-powerful.
366
+
367
+ So we'll add a link to the show notes. You can go and download it for free, by the way. I think you will need to share your address with the happy and friendly people from Honeycomb... But otherwise, I've been reading it, skimming it shall I say; I haven't read all of it, but the index looks really good. The first question is "What is observability?" So we couldn't cover it all, but if you wanna really know what observability is, you can go and check it out for free. I highly recommend that.
368
+
369
+ **Charity Majors:** Thank you.
370
+
371
+ **Gerhard Lazu:** The book, I know, will be available in its final version - and by the way, it's published by O'Reilly - in January 2022.
372
+
373
+ **Charity Majors:** Sounds great.
374
+
375
+ **Gerhard Lazu:** So what I'm wondering is when that happens, even if it's not January, would you like us to talk again, Charity?
376
+
377
+ **Charity Majors:** That'd be great.
378
+
379
+ **Gerhard Lazu:** I'm looking forward to that. Thank you.
380
+
381
+ **Charity Majors:** Awesome. Thank you.
Introducing Ship It!_transcript.txt ADDED
@@ -0,0 +1,253 @@
1
+ **Jerod Santo:** So we're here to introduce Ship It. I'm here, my name is Jerod. Adam, you're here...
2
+
3
+ **Adam Stacoviak:** What's up?!
4
+
5
+ **Jerod Santo:** Your name is Adam...
6
+
7
+ **Adam Stacoviak:** My name is Adam.
8
+
9
+ **Jerod Santo:** And Gerhard's here. And Gerhard, you have to be here, because this is your show.
10
+
11
+ **Gerhard Lazu:** Hey, everyone. And hey, Jerod and Adam. I've been looking forward to this for such a long time; I wanna say years. That's a bit of an exaggeration, but months is definitely accurate. I'm Gerhard Lazu, everybody, and I'm so thrilled to be here. It's been a long time coming.
12
+
13
+ **Adam Stacoviak:** It has been. I think you're accurate to say years... The anticipation has been years, but I think the practical steps forward to produce this took more like months. Anytime you launch a new show, there's a lot of fun and excitement, and it's like "What's the show gonna be? Who's gonna listen to this show? Who will be on the show? Where will this take us?" And that's, I think, what I love most about this business, because we've launched several shows over the years; some launches better than others, but nonetheless, the shows get produced, they're awesome, great communities around them, great people involved... But it's just such an amount of energy to launch something, so... I guess it makes sense to call the show Ship It, right?
14
+
15
+ **Gerhard Lazu:** It really does. It really does, because I think it's the embodiment of what we do every day, maybe in different forms. Some of us write code, some of us write documentation, or tweets, or books, or whatever it may be. Or even videos. In our case, right now, it's podcasts. And you have to ship whatever you create; how else do you get it out there? It's essential. And some people would say that shipping, or acting, is the first step. You have to do it. And that's it. And then you figure out "Does it work? What do people think? How can I improve it?" And on and on it goes. It's this essential first step, shipping it.
16
+
17
+ **Jerod Santo:** So we've been shipping something like Ship It for the last few years, because you've been helping us ship Changelog.com into production for 4-5 years now... Give a little bit of a back-story on our relationship, how we came to do this annual infrastructure show where you're helping us ship Changelog to production, and how that has sort of evolved and turned into what is now a much more frequent podcast of its own life.
18
+
19
+ **Adam Stacoviak:** Yeah.
20
+
21
+ **Jerod Santo:** Give us the back-story.
22
+
23
+ **Gerhard Lazu:** Yeah, I love that question, I love that beginning, because I see so many similarities... First of all, Jerod sent me an email and said "Hey, Gerhard, we have this app that we want to ship/deploy. Can you help us?" And I said "Hm... Jerod, shipping it is just like such a tiny part of it. There's like all these 20 questions that you have to answer first, before I can even start considering what it would take to ship it..." Because you have tests, you have dependencies, you have where, how, how often, how do you code, what is your local setup, what is the availability that you care about? What about latency?" Maybe not all of these questions were there, but many were, and many I already forgot. And that's how it all began. It began with an email. I mean, how old school are we? Not a tweet, not anything else; just an email.
24
+
25
+ **Adam Stacoviak:** Yeah.
26
+
27
+ **Gerhard Lazu:** And that was a good conversation, so we started --
28
+
29
+ **Jerod Santo:** It was a long list of questions, you're definitely not exaggerating. I remember receiving the questions back and thinking "Do I really wanna answer all of these? \[laughs\] Or should I just figure it out myself."
30
+
31
+ **Gerhard Lazu:** \[03:49\] No, it is a test, because you're starting on a journey. Are you ready? Whereabouts are you? And if you're serious about it, you will need to think about those things. And it's better to think about these things upfront and be honest. "I care about this, I don't care about that", so we know where we stand. And if it's a match, let's go for it. And if not, then - well, you can git push and maybe let Heroku handle it... And we know it's so much more complicated than that. In those days that was the craze, but we tried to do something else. And I think this is the seed of Ship It. We try to improve publicly, share publicly, and always recognize the mistakes, but also share the wins.
32
+
33
+ And then I think in year two or three we thought "Are we working with our partners as well as we could?" And then we started embracing our Linode relationship, our Fastly relationship... Who else was there, Adam? I think you were in the thick of it...
34
+
35
+ **Adam Stacoviak:** Rollbar was there...
36
+
37
+ **Gerhard Lazu:** Rollbar was there, that's right. I can't remember any other names... Anyway, there were a few names like this; GitHub maybe? No, CircleCI.
38
+
39
+ **Adam Stacoviak:** CircleCI, yeah.
40
+
41
+ **Gerhard Lazu:** Yeah. And then we said "Well, how about we start using these partners that we promote, just to see how well they actually work for real? How well do they work for us? And if we had to depend on them, how would that look like?" And I think that led to a lot of things such as feedback. So we started giving them feedback, "Hey, do you know that this thing is not working the way it should? And by the way, this is our suggestion." And many things -- you know, people are busy. Busy shipping. Maybe not paying as much attention to certain feedback. But others were very receptive, and many things improved because of that.
42
+
43
+ I still have very fond memories when we started using Linode Kubernetes Engine, the beta, when it opened up... And it just like opened a whole new chapter with Linode. And it was great. That was a great conversation, and many good conversations. So from there, we are here. We have shipped those improvements, we blogged about them, we talked about them, so how about we do it more often? People are asking about it, people like it, so why not?
44
+
45
+ **Adam Stacoviak:** That's the beauty of it, too... That's why I say you aren't over-exaggerating by saying years, because I think I almost longed for that yearly/annual show we did, because that was a lot of fun, to talk about what we're doing and to explore the different options with for example Kubernetes, or when we were using Ansible and Docker and discovering how that can work for us... And just different improvements and how that changed our infrastructure, how that changed our dev experience, whether it's onboarding because our app is open source, how they can spin up the environment and make changes or improve the application, or maybe step in on a feature... Or just fix a bug. All these different things. And then how that might improve uptime, or availability of the application, or if things go down, or things like that...
46
+
47
+ All these things have been fun to explore ourselves, so why not just produce a show around that? And one idea was "Let's just talk about ourselves", but that wasn't enough; talking about how other teams ship their applications... Like, how does GitHub ship GitHub, for example? How does Kubernetes ship Kubernetes? How do they do that? It's all these questions like that that we can then dive deeper into, and I'm sure it's just a multi-layered, super-layered onion there to just unravel and get lost in, but that's the fun part of it.
48
+
49
+ So this show here, today, the very first episode on the feed, is the invitation to anyone out there, shippers, to join us on that adventure, to invest, to get involved, to listen, to encourage, to share, people we should talk to... All those fun things. Hop into Slack and be a part of the community, and just have fun for the next (I don't know) hundred, two hundred, four hundred, thousand episodes... We never know where we're gonna go with this, but I know it's gonna be a lot of fun.
50
+
51
+ **Gerhard Lazu:** Yeah, I don't think you can see a video. There's no video feed with this show, but we will have a screenshot which shows my mic, I have Ship It, and I have four zeroes. So I'm thinking we'll have 10,000 shows. That's my estimate.
52
+
53
+ **Jerod Santo:** \[08:07\] There you go.
54
+
55
+ **Gerhard Lazu:** Once per week, you work out how many years of discussions like these we have ahead of us. I'm really excited about that. And to Adam's point, I really like the people element. The Cloud Native Computing Foundation (CNCF) was amazing. The Linux Foundation, some of the conversations we've been having... KubeCons... It's about the people, it's about the community. That is something that many of us forget, because we're just like down in the code, shipping stuff. Well, guess what - that is not even 1%. It's all the conversations that you're not having, or maybe having, all the ideas that you're getting, all the interactions that you're allowing to happen, because guess what - you've taken your headphones off, you've put them down, you've looked around, and you went in and you had the conversation. That's more difficult, and I think people appreciated more how valuable it was. Don't get me wrong, I love working from my house, but how amazing is it to be at a conference in person, to have those hallway track conversations. I think we all miss them.
56
+
57
+ So while the past years were -- I wouldn't say one-directional, but we were mostly meeting less frequently, and mostly sharing stuff, but I don't think we were discussing as much as we could, and I don't think we were looking around, inviting others to join us and tell us their story. And this show, I hope, will change all that.
58
+
59
+ **Jerod Santo:** Well, I hate to break it to you, Gerhard, but you've made a classic blunder... The old Y2K mistake. When you hit episode 10,001, your whole system is gonna break down, my friend.
60
+
61
+ **Gerhard Lazu:** That's okay, I'm sure we will have something in place by then. \[laughter\] We'll improve it sufficiently that that won't matter.
62
+
63
+ **Jerod Santo:** We'll have to hire some very expensive consultants to come in and help us fix this numbering system.
64
+
65
+ **Gerhard Lazu:** Oh, wow... There's actually an episode that we recorded about a similar joke; I think it's episode three or four... It's one of the first that will ship, so listen to that, about consultants and simple solutions to complex problems. It's a great one.
66
+
67
+ **Adam Stacoviak:** Yeah. For context, Gerhard has four zeroes on his microphone, so he's anticipating this is episode 0,000 in anticipation of 10,000 or more.
68
+
69
+ **Gerhard Lazu:** How many weeks is that, by the way?
70
+
71
+ **Jerod Santo:** 9,999...
72
+
73
+ **Gerhard Lazu:** 10,000... That's 192 years. I think we're good... \[laughter\]
74
+
75
+ **Adam Stacoviak:** Yeah. Well, that's assuming the cadence... We could increase the cadence a bit. But yeah.
76
+
77
+ **Jerod Santo:** Yeah, what if we go daily?
78
+
79
+ **Gerhard Lazu:** Woooah... Okay, now you're totally crazy, Jerod... \[laughter\] It's too early for that.
80
+
81
+ **Adam Stacoviak:** Hang on now, hang on now...
82
+
83
+ **Jerod Santo:** Alright. Slow down. Well, you've put some work into this. We have some episodes that have been recorded, so this is your introduction episode. Everyone here is welcome. If you find this in your feed, welcome to Ship It. We are happy to have you along with us. There are some episodes also in the feed just getting started... Maybe Gerhard give an idea of what to expect, maybe even just highlight a couple of the shows you've recorded, and then what Ship It is gonna feel like as you move forward, you think... With the disclaimer that it's experimental, we're having fun; we don't know exactly where a podcast ends up, but this is where we're starting.
84
+
85
+ **Gerhard Lazu:** So the way I'm thinking about this... In the beginning I'm thinking of hitting some of the bigger topics... For example the topic of observability. We know that tends to be very contentious. The other one is Kubernetes. Why is it Kubernetes and why not a PaaS? I think that's a great question, and there's so many valid answers... And I think people need to be aware of all these options; I think that's very important. Because there's no one solution that fits all.
86
+
87
+ I'm very passionate about continuous delivery, and one of the first episodes, one of the early episodes will be about that, about the concept of continuous delivery, with one of the people that actually made that term popular. I don't wanna spoil it; it's one of the two. And it's coming, I think, in episode five or six. We'll see, depending on what else we have going on.
88
+
89
+ \[12:07\] I'm very passionate also about the whole agile thing, how we work, how we communicate... And I can mention this because it's already recorded, already in the pipeline... There's an episode coming with Ben Ford from Commando Development. He actually is a former Royal Marine Commando. This is like your Navy SEAL in the U.S. He learned so many things in those five years as a commando, and then he went to become a software engineer, and he refined some of those learnings... And the one thing that really stood out is that OODA loop, which by the way, the one that you know is wrong; the one that I knew was wrong. Mission command... Sorry, excuse me. Mission command -- what was the other one? Oh, situational awareness.
90
+
91
+ So all these three things, if you think about them - for example, your CI/CD system. We use a CI/CD system because it helps us approach coding and shipping in a certain way - continuous integration, continuous delivery. So what are the equivalents for the relationships from a business perspective, the interactions from a business perspective, and at an org perspective? And these are some principles that may be applied.
92
+
93
+ You see, it goes so much more than just coding, because you don't do that in a vacuum. So what are the other things that need to happen, and what are the interactions that need to happen, the healthy ones - and the unhealthy ones, because they're important to talk about those as well - so that you feel good about your work. And yes, it is about you, because there are so many perspectives here, and I think if we do build this community of people that share their stories and share their improvements, I think we'll all be better for it.
94
+
95
+ I think the CNCF is a great example of how to do it. I don't think we're trying to even compete with CNCF; we're trying to be inspired and create our own version of that magic that they were able to do. So I'm very excited about those things.
96
+
97
+ **Jerod Santo:** It sounds good to me... What about you, Adam?
98
+
99
+ **Adam Stacoviak:** I think a podcast is a great medium, obviously, but I think a podcast where it's a place where if you care about not just git push or shipping an application to production, but all the things in between there; if you're looking for something to fill that vacuum, then that's what we aspire for this show to be, is that this is a place you can call home if you care about topics around delivering quality software to users in the world, and that's what this show will be. We'll cover all sorts of different facets. It's not just simply about the code, or in particular just about the tech. It might be about the people, and the interactions, it might be more about different communities...
100
+
101
+ But a lot of different horizontal and vertical ways to move around the landscape, but just the idea of putting something out there, a piece of technology out there - from the team, to the software itself, to keeping it stable, keeping it up, keeping it reliable... All those different things are so deeply available for us to talk about, and that's my hope, is that this show is a place where if you're looking for that kind of conversation, you can come here weekly and rely upon that to be there for you. And we have worked so hard over the years to improve our relationships, our audio quality, all the different things involved in producing a podcast; quality transcripts, a fast, reliable website, obviously... And different things involved with that. Fastly, making sure that our mp3's around the globe are available wherever you're at, super-fast... And so from just an infrastructure standpoint on podcasts, we are desiring to produce a high-quality podcast, and that also begins with Gerhard, because Gerhard is a world-class SRE who's helped us for many years and we've just kind of been keeping him to ourselves, and now we're releasing Gerhard to the world to show everybody his magic, and that's what I hope this show really embodies - his lens, his look on the world, and then involving everybody else in the game of shipping.
102
+
103
+ **Jerod Santo:** \[16:13\] Yeah. Gerhard, take a couple of minutes and just tell everybody who the heck is this guy... Because we know you very well, our regular listeners, but many people are gonna be coming to Ship It - they may not even know what Changelog Media is, or Changelog.com, and they're like "Who's Gerhard Lazu, and why should I listen to him do a podcast about shipping stuff?" So give us just a little bit of your history.
104
+
105
+ **Gerhard Lazu:** Okay. I'm going to share something which I haven't shared before... I was born in Romania. That's Eastern Europe. And as I was growing up, my mother - she was a professional broadcaster. In the Western part of Romania she was part of the national TV and radio; it was a national thing, the equivalent of the BBC... And as I was growing up, I used to spend a lot of time around the broadcasting station, as you do. Bring your kid to work.
106
+
107
+ I loved those buttons, I loved those monitors. And in a few years, I started recording things for fun. And she liked my voice. She said "Hey, do you wanna help me with my show?" This was 25 years ago, just to give you an idea. And it worked really well. She loved it. She was like "I wish you didn't have school. I wish we could do this." And I loved it, too. It was great.
108
+
109
+ Fast-forward maybe five years, and I was getting into tech. I started learning HTML from a book. I didn't have a computer. So I was writing HTML... This was HTML 4.1. It was like the bleeding edge, and CSS (whatever it was at the time) 0.9 maybe. I can't remember. And I was writing it in a notebook. And I was like "When I get my computer, I will transcribe this and it will be amazing." That was my beginning.
110
+
111
+ **Jerod Santo:** Nice.
112
+
113
+ **Gerhard Lazu:** Fast-forward five years and I was dabbling with PHP, I was looking at Zend... If you remember the Zend framework, that was the hot rave. I don't think Facebook existed at the time... PHP was not that popular. I think Perl -- there was this big debate whether "Is it gonna be PHP or Perl?" jQuery didn't exist. So a lot of Ajax was handwritten, and it was like this bleeding-edge thing... That was interesting.
114
+
115
+ So I went from a frontend developer, if you wish, to a web hosting provider, because everybody needed to host a website; they didn't know how, so what is this thing Apache? That's how it started.
116
+
117
+ Before I knew it, I dropped Apache. I was looking at NGINX, because that was the hotness at the time... And I found out about this thing Ruby on Rails. What is this Ruby? Are you telling me I can write this app ten times quicker? Okay, it's a Hello app, but so what?
118
+
119
+ I had very big, thick books of PHP and MySQL at the time. I dropped them, and I said "No, Ruby is the thing." And I think I stuck with Ruby for maybe about ten years, give or take... And that love for infrastructure was always there. So even though I was like a frontend developer/backend developer/full-stack developer, Puppet, Chef, at the time (Ansible didn't exist), they really caught my attention. "CFEngine? What is this CFEngine thing?"
120
+
121
+ If you're paying attention, I was always very curious. I was always like "What is this next thing? What is this next thing? What is AJAX?" And that curiosity and learning on the job served me really, really well. So I always had this passion for infrastructure, always had this passion for assembling things, and one of the tools that I wrote... Bash -- oh, don't get me started. Self-proclaimed king of Bash? That's me. \[laughter\] FizzBuzz TDD in Bash? That's me. I even have a repo, [check it out](https://github.com/gerhard/bash); there's so much stuff in the commits.
122
+
123
+ \[20:03\] Oh, git. How do you think I'm @Gerhard on GitHub? I knew about GitHub before people knew about Git. That's how it started - always curious, always discovering. That's how I got Gerhard on GitHub. I'm very sad I didn't pay attention to Twitter. I thought it was just gonna be a fad at the time, so I took my time. I didn't make the same mistake with Instagram. So if you're listening to this and you know the person who's @gerhard on Twitter, please introduce us, okay? Years from now, whenever it's gonna happen; I like playing the long game.
124
+
125
+ Coming back to Changelog, I wrote this tool in Bash for deploying Ruby websites. Not just Ruby websites - I was working at a tech startup; this was 2012, and we were using Capistrano and Chef and Puppet and a bunch of things... It was just a mess to deploy things, and I thought "No, this is madness. It can't be this complicated. Can I just build a really simple thing that SSH-es into service and deploys things?" And that was Deliver. Ship It, Deliver - it was right there. And there was a fork called eDeliver, which deployed Erlang apps into prod. And Jerod picked up on that, and he saw my name, and he was like "Hm, I think I know him." I think it was the Ruby background, right Jerod? The Rails background?
126
+
127
+ **Jerod Santo:** So I think you had written something for Changelog years before, back when we were using a GitHub-based writing flow, where you could write into a repo and we give you feedback right there on a pull request, and then we would publish it from there. And you had written something about something, I thought it was about Ansible maybe, but who knows what it was about. We can go all the way back and we could find it in our CMS.
128
+
129
+ So I had interacted with you very briefly via that, because you had written a piece for us, and then when I saw your name again, I said "I know that guy. He wrote something for us. And here he is again." The synapses fired and I thought "I bet he's better at this than I am, so I'm gonna just go ahead and email him."
130
+
131
+ **Gerhard Lazu:** That was a great start and a great conversation. I still have fond memories of that. And things happened, right? I think we were a natural fit. And what I really enjoyed is that we were always honest about what we were trying to achieve.
132
+
133
+ **Jerod Santo:** Yeah.
134
+
135
+ **Gerhard Lazu:** I think we had that doggedness about "We will get this to work. It can't be that complicated. Come on now. I don't have to use Chef..." and there was like the Chef server or whatever it was... Can I do this easier? And Ansible was that easier thing at the time. I think that's where we started. It was very early days.
136
+
137
+ I knew Linode, I knew DigitalOcean... At that point I will have been with all the hosting providers, because it was a thing which I enjoyed. I just wanted to see who has the best service out there at the time, at the best price, and how can I distribute these apps across the world so that if one fails, not everything will fail. AWS was not invented at the time. It wasn't a thing. This precedes all of that.
138
+
139
+ **Jerod Santo:** Yes.
140
+
141
+ **Gerhard Lazu:** So we had fun, and we kept improving things, and in parallel -- now, you have to realize, this thing for me was happening for fun. In my free time. We began this in my free time. And that was like a job, right? Because I really enjoy coding, I really enjoy deploying, I really enjoy interacting with people around those subjects. And Changelog grew to become so much more. And I would say that Changelog now is -- I almost identify myself with it. I know that's saying a lot, and maybe it's like -- I won't say it's an over-promise, but maybe like I'm bragging, it may sound like that... But I do feel like part of Changelog in my heart, because I've been with it for so many years and I've seen it improve, and I've been there through many changes, and I really enjoyed it. And guess what - I changed jobs, but I'm still at Changelog.
142
+
143
+ **Adam Stacoviak:** Nice.
144
+
145
+ **Gerhard Lazu:** That hasn't changed.
146
+
147
+ **Jerod Santo:** That's right.
148
+
149
+ **Gerhard Lazu:** \[23:53\] And I hope that's never gonna change. That's my hope and that's my wish. So while this thing was happening with Changelog, I was going through infrastructure stuff, I was going to Erlang, I was going to Go, I spent quite a bit of time with Go, I was an XP, I was a consultant for consultants... I went through many, many things. Think about all the big companies - I either worked with someone from those companies, I was interviewed... Well, wanted to be interviewed many times, but the big company never appealed to me. It was just like, it was just too big. Change is too difficult; things are moving so slowly... And things are just about to change now, I think, but that's a different story for another day.
150
+
151
+ The point is that if you wanna know more about me, guess what - there's Gerhard.io. Check it out. All my talks, all my videos, all my history is there, if you care about Gerhard the person.
152
+
153
+ And I think the last thing which I'm going to share is that I never went to university, because I wanted to learn at my own pace the things that really interested me, and that worked so well that I just didn't have time. There were more important things. And I became, I think, pretty successful - and this is ignoring Changelog, which I think is a big personal success. I'm very fond of it. And we are today here, 25 years later, where I have a studio around me, an amazing microphone, just about ready to start recording again... And maybe bringing two threads together - my love for tech, and my history in recording and broadcasting. The beginning of something great, I hope.
154
+
155
+ **Adam Stacoviak:** I think so, for sure. I love that you identify back to Changelog; that really makes me happy, that you have fond memories too, because obviously, I've been here for a long time... But the fun part for me has been the people involved with us; over the years it hasn't been static, it's been very dynamic. Jerod has come on, and you've come around... And I went back to our Git repo - the draft repo - and found that "The why and how of Ansible and Docker" was published on February 21st, 2014.
156
+
157
+ **Jerod Santo:** There it is.
158
+
159
+ **Adam Stacoviak:** And that was the last update to the repository too, so that meant that whatever was published is the final version of it; there's been no other changes. So that's really interesting to me... I didn't know that's how we began, Jerod. I knew that you knew of some software he had written out there to deploy an application, but I just forgot about those connected dots that he had written something, and that's how it sparked your reminder of him and whatnot. That was just crazy.
160
+
161
+ And then two years later we deployed the CMS - I guess that's what we call it, a CMS; internally we describe it as a CMS, but it's a Phoenix/Elixir application. Your roots, Jerod, are in Ruby, so are mine, so even that's a tangent we could take potentially... But writing in Elixir. But you were very aware of how to deploy a Ruby application, but not so much how to deploy an Elixir application.
162
+
163
+ **Jerod Santo:** Right.
164
+
165
+ **Adam Stacoviak:** That's part of the journey too with Gerhard, was figuring that out. Shipping it. How to ship it.
166
+
167
+ **Jerod Santo:** Right. And part of the fun of it was Gerhard is very thorough, text-oriented, curious... And I'm somewhat the opposite, in many ways, and so our relationship -- I just wanted to get the thing out there; I was kind of pragmatic, you know... So he had asked me all these questions, "Why? Why? Why?" and I was always like "I don't really know why I'm doing it this way." He was drilling down, and I'd always be like "I don't care how it works, man. Let's go."
168
+
169
+ So we just had that fun kind of back and forth from the very beginning, and learned a lot from each other. I've learned a lot from you over the years. I am gonna continue to learn a lot from you just by listening to Ship It, more so than I ever have, because our interactions have always been sporadic. We'd have like a sprint, and then we'd have a year off, or we'd talk to each other a lot, and then we wouldn't for a while... So I definitely, as well as you, identify you with Changelog, and I'm excited to have you be a staple now. Week in, week out, part of what we're doing, and shipping awesome podcasts to people who are interested in this stuff... Because you really dive into the details, you explain things well, and you keep it fun along the way, so you got me excited for the show, for sure.
170
+
171
+ **Gerhard Lazu:** \[28:20\] Thank you, Jerod. I really appreciate that. And I do know that the fact that we are the way we are, and we're honest about who we are, we make such a great team... And Adam is like this third element which completes us really well. So we're like a trio which works very, very well, I have to say. And I don't think I would be here if that was not the case. I'm diligent, I choose my teams wisely, so I'm really fond that we've found one another and we are able to be here today. And I'm so looking forward to what we'll build next, because this is just the beginning. I'm sure of it.
172
+
173
+ **Adam Stacoviak:** You know, the one thing I think is interesting from a listener perspective is that the thing you should hear is that we're super-committed to this show. I think when you pick up a podcast for the first time, you're like "I'm gonna listen to the first episode" or wherever you begin at; if this is your first episode, you should think "Are they committed to this show? Are they committed to this mission?" And that's what I love that we bring to the table - when we do something, we kind of live by "Do it right." If you're gonna do something, do it right. And that's what I think as a listener you should take to heart, is that we're gonna do our best to deliver the best show, show up every single week; we have a commitment.
174
+
175
+ It just sucks when you listen to a podcast and they fall off, or they fade, or whatever happens. There's nothing wrong with that, it happens, but we are super-committed to this show, super in love with the topic, and we obviously have kindred spirits with Gerhard, so... Great mix.
176
+
177
+ **Gerhard Lazu:** Yeah. And some of the guests which we'll have on the show - we have been talking with them on and off for many years. So this really has been in the making for a long, long time now, and it's finally just coming all together now, but small threads have been ongoing for years.
178
+
179
+ I was just telling Adam before we actually started -- well, we were chatting before you started recording, and we were saying that I had certain conversations in January 2020, which I didn't have time to continue. I didn't have the headspace, and I didn't have the medium to make that an interesting conversation. And now it's finally happening.
180
+
181
+ So the person that knows who I'm talking about - I don't wanna give her name away just yet, because I think she's a very special guest, at least in my mind, and I would like you to discover that... But I'll mention it when we record. Because stuff like that - and that's just one example - has been going for a long time, and I know that many of you are looking to have these types of conversations, to have these types of maybe different perspectives, diverse perspectives, because I think we are a lot more accepting and understanding than you would think. I'm not so sure about humble, I'm still working on that, but... \[laughter\]
182
+
183
+ **Jerod Santo:** Some honesty...
184
+
185
+ **Gerhard Lazu:** ...the point is we do wanna make the best thing there is, and we are trying very hard to get amazing guests that fit the topics, and I was trying to explain what topics we're going to start with before diving into the weeds. We're trying to look for things that apply to everyone, but in a way that's fundamental. It's not that we're trying to cover topics which are generic. We're trying to cover topics which are meaningful and impactful to most of you.
186
+
187
+ **Jerod Santo:** So a couple of touchpoints... Of course, we do like to hear directly from those who are listening to the show. So in terms of episodes you would like to hear on the pod, there's changelog.com/request. Request an episode; there's a dropdown there, you can select Ship It, and that goes right into our admin, so that Gerhard can see all the episode requests. So we do desire those...
188
+
189
+ \[32:00\] We also have the free community, which is changelog.com/community, totally free to sign up. There's a Slack, there's a Ship It channel in our Slack, a good place just to have watercooler conversations about -- it could be about the show, it could be about related topics on infrastructure, or whatever is on your mind, or specific things that are going on in that space.
190
+
191
+ We also are on Twitter, we have @changelog, we have @gerhardlazu if you wanna go directly at Gerhard there, we have @shipitfm... Adam, did we get that one locked in? I think we have @shipitfm.
192
+
193
+ **Adam Stacoviak:** That's true.
194
+
195
+ **Jerod Santo:** Okay, we've got that... So those are all good touchpoints. And you can always just email directly to gerhard@changelog.com and have a conversation. I'm sure you will read all those emails, right Gerhard?
196
+
197
+ **Gerhard Lazu:** I will. All of them. \[laughter\] And reply as soon as I can, promise. If you're not getting a reply, I'm just too busy with other things, but I will pick it up as soon as I possibly can. There's all these ways that people can contact us, there's all these shows to be produced, and all these conferences to go to and events to attend, which I'm very excited about... So yeah.
198
+
199
+ **Jerod Santo:** Lots of stuff. Anything else that's vital we haven't said yet about Ship It? Otherwise they can just hit next and go to the next podcast and hear the actual -- the show-show.
200
+
201
+ **Gerhard Lazu:** If there's a topic that you're passionate about, something that you really wanna get off your chest, we're listening. And if others like it too, we should make a podcast out of it. If you know someone that would benefit from having that discussion. Maybe someone who needs convincing - even though I don't do that; but I can make an exception - I'm really looking forward to those types of conversations, too. Panels, if you have a group of people, if you can think of a group of people that you want to get together, we'd very much like to have those types of conversations, too.
202
+
203
+ And yeah, I think we should do this again, not just like a one-off, but maybe a check-in every six months? I'm keen on those, too. The progression. What have we learned, what is better? Almost like a retrospective for the show.
204
+
205
+ **Jerod Santo:** Right.
206
+
207
+ **Gerhard Lazu:** I think that would be a good idea.
208
+
209
+ **Adam Stacoviak:** I think to give an example of what you're talking about - there is some conversation in our Slack discussing the complexity of Kubernetes, the desire for a PaaS-like experience, or an - do you call it an IaaS? I don't know how you say infrastructure-as-a-service, like PaaS... But there's a discussion on this. I think we should do a show called "K8s vs. PaaS vs. IaaS." That would be cool. That became this contentious back-and-forth in Slack, and that's a great example of something that's on your heart, and an argument. Let's do a show, to not so much end an argument, but at least to provide some contextual, deep conversation from industry professionals that are solving these problems on the daily, to sort of talk about "Does it really take a million dollars to run Kubernetes?" Well, not really, but you should probably have a million-dollar problem; at least that's what Ben Johnson said on that podcast when we talked to him about Litestream. And that's just his opinion. So is that opinion wrong? Maybe, maybe not. Is a platform-as-a-service or infrastructure-as-a-service better? I don't know. It kind of depends on the questions you ask, Gerhard, like you did to Jerod, and how you answered them. It may, or it may not be. But this podcast is gonna be a great place for those kind of conversations.
210
+
211
+ **Gerhard Lazu:** The truth is that everything is very contextual, it changes all the time... There's no best practices really anymore. It's something that we tell ourselves to feel good about what we do. But to be honest, most of us just wing it. So then what makes sense? Listen to your instinct, listen to your experience, listen to maybe what the team is telling you. It's that consistency element. What makes sense. And I think a lot of us - it's difficult to go back to what makes sense to you, in your situation. And by the way, everybody's right, and then everybody's wrong at the same time, so there's that as well.
212
+
213
+ **Jerod Santo:** \[35:59\] Well, that's what I think is cool about this particular show with the internal focus in terms of we do have a production online application that we have been shipping, that we'll continue to be shipping, that we have no problem experimenting on and with, and so we don't have to speak in the generics, we can speak in the particulars; we can have actual code going into it, an actual codebase that goes out to the actual world, that hosts the actual podcast that the podcast is talking about. So it's kind of nice and circular in that way.
214
+
215
+ So it's not just gonna be theoretical and general discussions and debates, it's gonna be actual results-oriented episodes, which is very cool. And maybe -- I don't know anybody else who's doing that.
216
+
217
+ **Adam Stacoviak:** Yeah.
218
+
219
+ **Jerod Santo:** Alright... If I'm listening right now, I'm thinking "Alright, I'm skipping the rest of this. I'm hitting Next. I'm gonna check out that first episode." So maybe we should just call it.
220
+
221
+ **Gerhard Lazu:** I would like to say something else...
222
+
223
+ **Jerod Santo:** Go ahead.
224
+
225
+ **Gerhard Lazu:** It's okay to edit this out, this is not a problem, so we can stop here, but I have to say this, because I wanna see your faces... So to the point that you've made, we want to upgrade to Erlang 24 in production, on Friday evening. With Alex, this Friday. So we're preparing tomorrow, and contrary to the industry advice, best practices, whatever... I don't know what to call those, but basically don't deploy or don't ship on a Friday evening - that's exactly what we're going to do. Because we're confident in what we have. We really own it, front, back and center. We're not afraid of doing that, because if something is wrong, we will learn. We have recorded a show with Alex Koutmos, I think that's episode three, where we talk about PromEx and a bunch of things... And we would like to livestream on Friday for one hour the thing that we're going to do. We're just going to update Changelog.com live in production, observe it with Grafana Cloud, with PromEx... What difference does the latest version of Erlang - which by the way, shipped a few weeks ago; maybe it's a month now - make to Changelog.com's performance?
226
+
227
+ And the fact that we can do this live with our own show, with our own infrastructure, with our own podcasts - I mean, we're literally putting our money where our mouth is... Is that how you say it?
228
+
229
+ **Adam Stacoviak:** Yes, your money where your mouth is.
230
+
231
+ **Gerhard Lazu:** English is my third language, so excuse my English. Okay. So that's what we're going to do. Now, who else does that? I don't know. But I'd like you to be part of it; bring your popcorn, by the way, or drink, or gin and tonic, whatever you're having, it's okay. Join us and see what happens.
232
+
233
+ **Jerod Santo:** Well, we will put a link in the show notes to that. It'll be Friday--
234
+
235
+ **Gerhard Lazu:** So it'll be on the 28th.
236
+
237
+ **Jerod Santo:** Friday the 28th. We'll put a link in the show notes to the YouTube event. It'll be a YouTube stream, and we'll schedule it so that it's in there. So if you want to be a part of that...
238
+
239
+ Unfortunately, I have bowling league that night, so -- I just made that up; I don't have a bowling league... But I'm afraid of what's gonna happen, so I probably won't --
240
+
241
+ **Gerhard Lazu:** You won't be paged, don't worry.
242
+
243
+ **Jerod Santo:** Don't page me. \[laughter\]
244
+
245
+ **Gerhard Lazu:** We're all adults, it's okay.
246
+
247
+ **Jerod Santo:** Page Adam. Page Adam.
248
+
249
+ **Gerhard Lazu:** \[laughs\]
250
+
251
+ **Jerod Santo:** Fun! Yeah, there you go... I mean, who else does crazy things like this? And we're just getting started, so... Stay tuned for that.
252
+
253
+ **Adam Stacoviak:** Alright. Let's ship this. Click Next, listen to more episodes. We appreciate you listening.
Is Kubernetes a platform_transcript.txt ADDED
@@ -0,0 +1,309 @@
1
+ **Gerhard Lazu:** It's been several years since we worked together - 2016, 2017 - and I think it's been too long since you and me played the game of table tennis. How's your game? \[laughter\]
2
+
3
+ **Tammer Saleh:** I was so bad at table tennis.
4
+
5
+ **Gerhard Lazu:** That's not true. That's not true. I've seen improvement. I've seen those years in which you really improved. And the last games that we've had were really good. So I enjoyed them.
6
+
7
+ **Tammer Saleh:** It was a lot of fun. I don't know if you know this, it was never official, but it always kind of seemed like your seniority at Pivotal would directly correlate with how good you were at table tennis. \[laughs\]
8
+
9
+ **Gerhard Lazu:** Yes. I knew that, but I never mentioned it to anyone. I think it was like a little thing, yes.
10
+
11
+ **Tammer Saleh:** I'm pretty sure most of my engineers let me win, just to make me feel better.
12
+
13
+ **Gerhard Lazu:** I'm sorry, not me; no, we had some great games. So did you play much in the last three, four years?
14
+
15
+ **Tammer Saleh:** Not at all. I mean, it was entirely a Pivotal thing. It was built into the Pivotal culture. You're pair programming and you need a quick 15-minute break, where you get up and you jump around, and there's table tennis tables right there, and you're playing doubles, so you're a pair, you find another pair that also needs a break... I mean, everything about it was just built around Pivotal.
16
+
17
+ **Gerhard Lazu:** \[04:21\] Yeah. I really miss that. From the whole office culture, which seems to be slowly disappearing when it comes to remote work, and this is like the new norm, and we're in it for the long drive, shall I say... I really miss that table tennis, that social aspect. I mean, pairing is great. You can do it remotely. But what you can't do remotely is play table tennis.
18
+
19
+ **Tammer Saleh:** It's true. I mean, I've always been very passionately 100% remote. Our company has always been 100% remote, even before the apocalypse. And that made the apocalypse a little bit easier for us to weather as a company. But I do miss that camaraderie of going out to lunch together, that camaraderie of playing a game of table tennis together. And obviously, there's a tax to being remote when it comes to communication, right? Communication is just more fluid when you're sitting right there. At the same time, there's always benefits, one side or the other. And I think the benefits of being able to find amazing talent who's uninterested in moving to some central location, and the benefit of everyone in the company being on equal footing. You know, companies that do remote where there's a mothership and small offices - the small offices always feel like their growth is going to be stunted. And it is, because they're not close to leadership and close to where the decisions are made.
20
+
21
+ And even more important - and I think this is more about American culture and what's been happening to American culture over the past, I don't know, 20, 30, 40 years... As people congregate more into the cities, we are getting a very strong cultural divide. It's probably happening in other places too, but for us, it's incredibly strong between the cities and the countryside, right? And I feel like the more fully remote various companies move towards, the better it's going to be for society, because you get people from different backgrounds all working together, and you start to flatten out the cities. I think cities are not a great thing from a cultural point of view, right? They're a huge strain on infrastructure, and it would just be much better if we could just flatten them a bit and have the small towns grow a bit bigger in the countryside. And I think fully remote allows that.
22
+
23
+ **Gerhard Lazu:** Yeah. I can see that. And I do have to say, having left a big city not that long ago -- I mean, I'm still around it. I'm still around London, but I'm not living in London anymore. And I do appreciate the advantages to that; but I can also see some of the trade-offs. So there's always some trade-offs.
24
+
25
+ **Tammer Saleh:** We miss the really good dinners.
26
+
27
+ **Gerhard Lazu:** Yeah. And the table tennis.
28
+
29
+ **Tammer Saleh:** And the table tennis, yeah.
30
+
31
+ **Gerhard Lazu:** Okay. Now, one other topic that I know that you're really passionate about besides dinners and table tennis is Kubernetes.
32
+
33
+ **Tammer Saleh:** It's true. It's true.
34
+
35
+ **Gerhard Lazu:** Same here. Same here. Big fans. So I know that you're seeing so many things around Kubernetes - so many social interactions, so many teams interacting with Kubernetes. And I see companies these days, they no longer say, "Oh, Kubernetes is interesting. Maybe I should try it out." They need Kubernetes. And that's a very interesting mind shift, which happened I think in the last maybe year, two years. So a company, when they start with Kubernetes, what problems do you see them having?
36
+
37
+ **Tammer Saleh:** Yeah, that's a great question. And just to put a little bit of context in it - so at SuperOrbital, we have two lines of business. One of the lines of business - the biggest one - is our engineering services. We help companies out with very difficult Kubernetes-related problems. We have a very small team, very senior, seasoned engineers, with a lot of judgment. And when one of our clients has a very unusual and challenging problem with Kubernetes, like going on-premise via Kubernetes, or doing some very deep security stuff with Kubernetes - that's when they bring us on board for short term engagements, whatever, we help out.
38
+
39
+ \[08:16\] We also have a smaller part of our business, which is producing workshops and training. And the reason that I bring this up is because when we are doing our workshops, that's when we engage more with companies who are just starting to embrace Kubernetes, right? So we don't help those customers on the engineering front as often, but more likely, we get to train them and show them how complex Kubernetes is.
40
+
41
+ That's the key problem with Kubernetes... I mean, everybody who's used it, knows it, but the complexity is huge. I mean, there's something like 80 different resource types that the Kubernetes API understands, the last time I looked. And each one of those can have dozens or hundreds of attributes that you have to, to some degree, understand. And especially as you're doing production workloads in Kubernetes, the defaults are not always in your favor, right? So things like affinity rules and stuff, which - this stuff is improving, but... Affinity rules, security, all that stuff is things that are kind of left as an exercise to the reader with Kubernetes. So the complexity is just enormous. And new releases, they used to be quarterly, and now, literally, they slowed it down, because quarterly was too fast. Now it's three times a year, new releases. Sure, it's a minor number, but we all know that in the Kubernetes world the minors are basically majors, right? So 1.23 is around the corner right now. By the time this is published, it'll probably be out.
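+
+ *For a rough illustration of what "left as an exercise to the reader" means here, this is a minimal, hypothetical Deployment fragment - the names and values are ours, not from the conversation - showing the kind of anti-affinity, security context and resource settings that production workloads usually need, none of which Kubernetes defaults for you:*
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: example-api              # illustrative name
+ spec:
+   replicas: 3
+   selector:
+     matchLabels:
+       app: example-api
+   template:
+     metadata:
+       labels:
+         app: example-api
+     spec:
+       affinity:
+         podAntiAffinity:         # spread replicas across nodes; not set by default
+           preferredDuringSchedulingIgnoredDuringExecution:
+             - weight: 100
+               podAffinityTerm:
+                 labelSelector:
+                   matchLabels:
+                     app: example-api
+                 topologyKey: kubernetes.io/hostname
+       containers:
+         - name: api
+           image: registry.example.com/example-api:1.0.0
+           securityContext:       # also opt-in
+             runAsNonRoot: true
+             allowPrivilegeEscalation: false
+           resources:             # requests and limits are opt-in too
+             requests:
+               cpu: 100m
+               memory: 128Mi
+             limits:
+               memory: 256Mi
+ ```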
42
+
43
+ The interesting thing to me is that the original authors of Kubernetes, they never envisioned that Kubernetes would be used directly by application developers. That's fascinating to me, right? There's some tweet by Joe Beda where he said that they always viewed YAML as an implementation detail, as like the assembly language, or whatever, the API that you would talk to Kubernetes via. And there would always be something on top of it that would smooth over the rough edges and take care of a lot of that complexity, and make all those decisions for the developers, for the engineers. Yet, here we are, right? We are all wrangling YAML in order to use Kubernetes.
44
+
45
+ So absolutely, when we train our customers in Kubernetes, our most popular workshop is this core Kubernetes workshop, where it's like you just want to get your application developers up to speed on how to use Kubernetes. The complexity is just astounding, and you need all of your engineers to understand it if they're going to carry the pager. Especially a smaller company, where your application engineers need to be able to debug issues with their applications in the cluster when things go sideways, they need far more knowledge than you would expect.
46
+
47
+ **Gerhard Lazu:** So when companies come to you saying that, "Hey, Tammer (and your awesome SuperOrbital team) we need help. We really need help", what do they need help with? Is it training? Is it running stuff? What does that look like?
48
+
49
+ **Tammer Saleh:** Because of the nature of who we hire and how we're positioned, we don't help with maintenance on clusters. We don't help with on-call or upgrading clusters and that kind of stuff. It just doesn't make sense to engage with us for that kind of thing. But customers definitely come to us for training, and they come to us, like I said, for the harder Kubernetes problems.
50
+
51
+ **Gerhard Lazu:** Can you give us a few examples, like some hard Kubernetes problems that companies struggle with, or teams struggle with?
52
+
53
+ **Tammer Saleh:** \[11:48\] Yeah. We have a couple of clients who are attacking on-premise installations for their product. They have a product that they run, but they want to deliver it to other companies, on-premise, in the other companies, AWS accounts, or even bare metal, or whatever. And the interesting thing about Kubernetes is that it is becoming that ubiquitous platform. It is becoming that assumption that you can make, that if I'm going to go on-premise, I want to target Kubernetes, because that's going to hit 80% of my potential customers. That's easily becoming the case. And going on-premise is very difficult, even with a substrate like Kubernetes to lean on, because often you get zero telemetry, right? You get no metrics, no logs, no hands on the keyboard, you can't kubectl exec into something and fix it. Usually, with these engagements, or usually for our clients, their customers are highly regulated, highly secure companies, that have very strong security postures.
54
+
55
+ And so what our clients need is not only to believe that what they are going to be deploying into their customers' Kubernetes environments are well-engineered and using all of the best practices from Kubernetes' point of view, but often, they also need a lot of custom code developed in order to do health checks. For one customer, we actually built a dashboard that their customers can go to and see the health of their application, but also the health of the underlying cluster, basically, so that their customers can self-select into, "Should I file a ticket? Or is it actually a problem with our own cluster, and we need to go to our own operations team?" That kind of thing is fundamentally important.
56
+
57
+ When we were at Cloud Foundry, we have so much experience with the headaches of trying to ship on-premise that we just naturally -- that's why we ended up with all these customers doing it, because we just had that experience already.
58
+
59
+ Another fun example is we had a crypto client who wanted to integrate AWS Nitro secure Enclaves with EKS. And the Nitro Enclave thing is a really interesting technology where you can run verified code in a highly secure hardware-based environment that has to be built into the chips on the actual machines that AWS gives you. And even AWS engineers cannot access the memory for that code, but using it is a huge pain. I mean, using it is incredibly difficult. And the code that runs inside this secure Enclave cannot do things like network, or anything. You can only communicate with it through this weird vsock that happens at the kernel level. And so integrating that with EKS turned out to be very challenging, and so they brought us on board to help out with that. And as it turns out, we were, I think, maybe still the only people who have done that integration, the only people who have tied EKS and Nitro together so that you could launch a secure Enclave from a pod, and communicate with it directly from that pod. And we know that because we actually had to work with the AWS engineering team to get it done. And it was a lot of fun, and we blogged about it, and the engineers loved that work. It's part of the reason why we can attract such senior talent, is because we get to work on the more interesting projects like that.
60
+
61
+ **Gerhard Lazu:** Right. You've hit on so many things, and I'm going to ask one thing, which is very close to my heart. So in Cloud Foundry, we used BOSH to manage Cloud Foundry. Is there such a thing in Kubernetes? When you deploy Kubernetes on bare metal, what would you say - what should users or teams use for that management of Kubernetes on bare metal or on-prem?
62
+
63
+ **Tammer Saleh:** There's a variety of tools for deploying Kubernetes to bare metal installations. And that's not really the hard part with Kubernetes. In the cloud, there's managed Kubernetes, and that solves all your problems. That's not where the complexity problem with Kubernetes is. In fact, getting a Kubernetes cluster up and running is fairly easy. On bare metal, you have some issues with the networking, but there are projects that solve that. You've got kube-router, and you've got MetalLB, and you've got others that solve that problem for you.
64
+
65
+ It's interesting that you brought up BOSH and Cloud Foundry. For those who don't know, the way that Cloud Foundry was designed was that we had two different products. We had BOSH, which was sort of a competitor to Terraform and Ansible and Salt. I don't know this for sure, but I think it came right out of Google's Borg. It's like a rewrite of Borg, basically. And it's very difficult to use. But once you use it, once you learn it, Stockholm Syndrome kicks in and you start to love it. There are huge BOSH fanatics, right? And BOSH was the tool that the operator used to deploy Cloud Foundry; very difficult to use, but very powerful. And Cloud Foundry was the interface that the operator then could present to the application developers, which was basically a blatant rip-off of Heroku, which was a great model. Twelve-factor buildpacks, all that stuff made it really easy for application developers. But here's the interesting thing - I refer to that as the Great Wall DevOps model, where Cloud Foundry allowed the operator to serve the application developer well by giving the operator this beautiful wall that both sides really appreciated. The operator appreciated how easy it was to manage Cloud Foundry through BOSH, and the application developer appreciated how powerful it was for them to manage their application through Cloud Foundry.
66
+
67
+ Kubernetes is entirely different from that, right? Kubernetes is what I call the kumbaya DevOps model where everybody has to know everything, right? Kubernetes doesn't have the concept of an operator, versus an application developer. At best, it gives you some tools where you can kind of build that using RBAC and stuff, but that's really difficult to do. And nobody knows quite where the line is supposed to be. Yeah, so everybody does it differently, you know?
68
+
69
+ **Gerhard Lazu:** Yeah. Okay. So they do have YAML in common... \[laughter\] That's still around. That's like silipaid but maybe not for long. Who knows? We'll see. So what I'm taking away from this is that Kubernetes is everywhere, and teams, they need Kubernetes because it's the easiest way to get something out there, it's ubiquitous, it's everywhere, and it handles the complexity really well. So you're right, the 80 resource types, plus all the custom ones that you can install and, typically, you get via CRDs, and they get even more complicated. It's a great way of modeling some really complex software, whether it's microservices, whether it's stateful services, and that's like, hmm... Not fully, but it's getting there for sure. I think there was a maturity level that had to happen at the data services side as well, just to understand that operating model.
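+
+ *Since CRDs only come up in passing here, a minimal hypothetical CustomResourceDefinition may help - this is the mechanism by which projects add their own resource types on top of the built-in ones (the group and fields below are made up for illustration):*
+
+ ```yaml
+ apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ metadata:
+   name: backups.example.com      # must be <plural>.<group>
+ spec:
+   group: example.com
+   scope: Namespaced
+   names:
+     kind: Backup
+     plural: backups
+     singular: backup
+   versions:
+     - name: v1
+       served: true
+       storage: true
+       schema:
+         openAPIV3Schema:
+           type: object
+           properties:
+             spec:
+               type: object
+               properties:
+                 schedule:
+                   type: string
+                 retentionDays:
+                   type: integer
+ ```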
70
+
71
+ **Tammer Saleh:** Yeah. It's not just ubiquitous, it's just becoming the standard, right? It's expected that if you're going to, as you said, model out your application infrastructure, then you're going to do it in YAML, using Kubernetes objects, so that you can deploy it anywhere.
72
+
73
+ **Gerhard Lazu:** And there are some really great projects in this Kubernetes ecosystem and in the bigger cloud-native ecosystem, which work well together. But it's the intricacy of finding the right combination of objects or the products that make sense to you, and that's where the complexity lies. So the kumbaya - anything goes and everything goes. And by the way, there are teams for which a certain combination makes sense, which would never work for other teams. And that's what gives it the beauty, and also the complexity.
74
+
75
+ **Tammer Saleh:** It's building blocks, right? The entire community is all about building blocks. And if you have a large enough team that you can dedicate a couple of people to choosing the right building blocks and wiring them all together and producing this really great experience for your engineers, then that's great.
76
+
77
+ **Gerhard Lazu:** Do you think that teams would do better without Kubernetes?
78
+
79
+ **Tammer Saleh:** Yeah, yeah, yeah. I mean, again, it depends on the size of the team. But I'm going to just ball-park that 30%-ish of people who come to us saying, "We're looking to embrace Kubernetes. We're going to move to Kubernetes, and we'd like your training or your help on the engineering side to get it done, and to get it done right", about 30% of the time when people come to us asking for that, we try really hard to convince them not to; because if you're a small startup, then unless you're doing something really complicated, then it's just too much for you, right? I mean, you're not focused on your own innovation; instead, you're focused on managing Kubernetes.
80
+
81
+ \[20:29\] So here's a story... When I was, I don't know -- through most of my life, I've been a Linux user, until around 2006, I think it was. And I used to run Linux on all kinds of hardware I ran. I was one of those geeks in college that had a small network of Sun and different servers, and things like that. And for the longest time, I ran Linux on my laptop as my daily driver. And around 2006, I realized that I was spending 20% of my time trying to figure out how to close my ThinkPad without the kernel panicking, right?
82
+
83
+ **Gerhard Lazu:** Oh, yes...
84
+
85
+ **Tammer Saleh:** Seriously, it was about an hour day, every day, you know?
86
+
87
+ **Gerhard Lazu:** Yeah. Doesn't want to sleep. Linux doesn't sleep, does it?
88
+
89
+ **Tammer Saleh:** Yeah. It's always working for you, you know? And I just flipped the table, I bought a Mac and I never looked back. To me, the analogy is that Kubernetes is that Linux on the laptop experience. There's always going to be problems, because you're always integrating two dozen different technologies to get a full Kubernetes system running. And it's fine if you have administrators there to focus on that task. But if you're a 10-person startup, that's not where you need to be. You should be on Heroku, or Fly.io or-- what's the other, Nitros, or Google Cloud Run, Fargate... Any of those are better choices than Kubernetes. The litmus that we give these people when they come to us is, "Stay on these fully managed platforms for as long as you can." And every time an engineer says, "We should really use Kubernetes for this, that or the other", you say "No, we should stay within the confines of a twelve-factor app", as much as you can. You change your product definition so that you can stay within that confine, whatever you can do, until you really believe that you need to provision raw EC2.
90
+
91
+ When an engineer says, "Look, this is an important feature. The only way we can get this feature done is if you give the keys to AWS, because I need to provision some instances. We're going to configure those instances. We're going to run systemd on them. We're going to tie in all the logging and all the metrics into some sort of centralized system. We're going to have alerting and everything set up", and all of that, that's when you say, "No, no, no, no, no, no. We're never going to provision raw instances", because Kubernetes is the future for all things cloud-level; all things that would be infrastructure as a service. Instead, you should be using Kubernetes. That's the inflection point.
92
+
93
+ **Break:** \[22:52\]
94
+
95
+ **Gerhard Lazu:** I think that you've heard this question many times before, but I still have to ask it. Do you think that Kubernetes would have been this popular and successful was it not for Docker?
96
+
97
+ **Tammer Saleh:** Yeah. Yeah, that's a great question. I mean, obviously, who knows? But from my point of view, I don't think Kubernetes would have gotten off the ground at all if it wasn't for Docker as a standard, right? We all know that Docker, as a company - they had an opportunity and they just couldn't quite execute on it. So whatever. That is what it is. But the thing that Docker gave to the technology community is that standard of what it means to be a container. And we all know that there were containers before Docker, right? I mean, LXC, there was Solaris Zones, FreeBSD jails, sort of, right? And things like Solaris Zones, arguably, were better. If I remember correctly, they ran separate kernels per container, right? But it was that standardization of how you create a container and what a container-- how you create a container image, and what a container image actually is, that allowed tools like Kubernetes to flourish. So absolutely not. I don't think K8s would've been a thing without Docker at all.
98
+
99
+ I mean, I understand that Kubernetes inside Google was Borg and Omega, right? So obviously, it existed before Docker existed inside Google, but that's a completely different thing. In order to get community adoption, in order for this open source thing to flourish... If Kubernetes had been built as an open source product and had its own idea of what a container is, and had this thing of "You have to run these commands to generate an image, and then we run this...", I just don't think it would've gotten adoption at all. It wasn't just the standardization of Docker, too. It was also, frankly, the -- I don't want to use the term hype, because Docker is a very powerful and important technology; but there was a wave where people were just really excited about Docker, and anything that embraced Docker got an immediate uplift because of that. And I think Kubernetes benefited from that.
100
+
101
+ **Gerhard Lazu:** Yeah. I remember that age and period really well, when you had to run containers. It didn't matter how, didn't matter where, you just had to run containers. And Kubernetes wasn't a thing back then.
102
+
103
+ **Tammer Saleh:** So few people even knew what containers were, right?
104
+
105
+ **Gerhard Lazu:** Exactly. They were like, "What? Containers what? Why would you want containers?" And I remember FreeBSD jails as well. I'm yet to start a FreeBSD jail successfully. I started that project ten years ago when I got my first FreeBSD server, and I never got, to this day, to get a jail up and running because of how complicated it was. And I started like, "Ah, there's so many configuration options." And with Docker, you just run a command, and you have it. That was brilliant. So as an idea, as a concept, it was really, really good. And then things got complicated, and what happened, happened. But you're right, we are here today where Docker is no longer part of Kubernetes. It used to be. And that created quite the confusion.
106
+
107
+ **Tammer Saleh:** People say that, they're like, "Oh, Kubernetes dropped Docker, and it's no longer--" But that's my point, is that we shouldn't be thinking about the word Docker, we should be thinking about the standard that Docker created. So Kubernetes is still using Docker as a standard just as much as it did before, right? It's still an integral part of what it means to be Kubernetes.
108
+
109
+ **Gerhard Lazu:** I think it's the container run time, but the clarification came afterwards, like "No, we're not dropping Docker support, because Docker means so many things." It became an ecosystem. And even now, the default container registry is the Docker Hub, right? So if you don't specify -- and that's also Docker. It's part of Docker. But also, the container run time, the containerd, runC, and a couple of others, but I think these are the two popular ones. So that's what they meant by removing Docker as a dependency of Kubernetes. And I'm wondering if you have to be good at Docker to do Kubernetes. Do you need any experience with Docker? Do you need to run Docker locally to get Kubernetes? I know that you can get Kubernetes in Docker, which confuses a lot of people, and I never recommend it, but--
110
+
111
+ **Tammer Saleh:** \[28:24\] Turtles all the way down, and turtles in a circle even.
112
+
113
+ **Gerhard Lazu:** Yeah.
114
+
115
+ **Tammer Saleh:** We actually get that question a lot, especially when we're talking to people about our workshops, because-- I guess the answer is sort of. You sort of need to be good with Docker in order to be good with Kubernetes. And what I mean by that is - our core Kubernetes workshop actually doesn't use Docker at all. You never run a Docker command throughout that entire workshop. And even when we go under the hood, as you said, nowadays, you don't even see Docker on the nodes, because it's all containerd, right? You need to understand the concept of what containers are, as in sort of tiny VMs that can share some stuff.
116
+
117
+ We talk about the Linux namespaces that are being used in Kubernetes when we talk about the different things you can share amongst containers, but you don't have to be great at crafting a Docker file, for example. And crafting a Docker file is an art. It is hard to create an efficient, really good Docker file, and to understand all the security implications and everything. And to some degree, I think that shows how Docker did the tech community a service by giving us the standard, but did us a disservice by making that standard so low-level. I mean, as an application developer, you need to understand not only apt-get install, but also the apt cache, and the difference between Alpine Linux and Ubuntu... All this stuff is kind of crazy.
118
+
119
+ So most successful teams that I've seen, instead, centralize at least the skill of crafting Docker files, if not just using a single centralized Docker file across all of your applications. That's a thing you can do, right? So most teams I've seen have centralized that knowledge of how you create efficient Docker files, and all that... And then application developers just need to understand -- maybe locally, they need to understand docker compose up and maybe a few Docker command line things. And they need to understand, maybe, how to push Docker images, but frankly, often, that's just taken care of by the CI/CD system, too. So no, I think you can make a lot of use of Kubernetes without having a deep understanding of Docker.
120
+
121
+ **Gerhard Lazu:** For me, Kubernetes makes a lot more sense, having started with Docker and having spent a couple of years in that ecosystem before Kubernetes was a thing. And that's very easy to ignore and forget, because my beginning was not Kubernetes; but many people, this is where they start, and they miss the whole Docker thing. I mean, they may have been running it locally, but not to the point that they understand it, not to the point that they've been using it for a couple of years and really understand what's happening under the hood.
122
+
123
+ So I think some Docker concepts - and as you've mentioned, it's not just a run time; there's so many other aspects of Docker - are really helpful to get started with Kubernetes. What other things do you think are helpful when you get started with Kubernetes?
124
+
125
+ **Tammer Saleh:** In terms of knowledge, I think it's almost more important to have a deeper understanding of Linux networking, and just networking in general. From our experience, understanding how a ClusterIP service works, for example, and all the iptables stuff that happens there, understanding how load balancers work, understanding why node ports are a terrible idea, or understanding how Ingress works at layer seven, right? All of that is conceptually harder for our students, from what we've seen, conceptually harder for people who are new to Kubernetes, because they just never had to deal with that kind of networking knowledge.
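+
+ *As a sketch of the two concepts mentioned above - a ClusterIP Service (a stable virtual IP implemented with iptables/IPVS rules on each node) and an Ingress routing HTTP at layer seven - here is a minimal, hypothetical pair of manifests; the names and the NGINX ingress class are assumptions, not something from the conversation:*
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+   name: example-api
+ spec:
+   type: ClusterIP                # the default Service type
+   selector:
+     app: example-api
+   ports:
+     - port: 80
+       targetPort: 8080
+ ---
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+   name: example-api
+ spec:
+   ingressClassName: nginx        # assumes an NGINX ingress controller is installed
+   rules:
+     - host: api.example.com
+       http:
+         paths:
+           - path: /
+             pathType: Prefix
+             backend:
+               service:
+                 name: example-api
+                 port:
+                   number: 80
+ ```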
126
+
127
+ I think another thing that's important for a team who's getting started with-- well, first of all, let's talk about how you should adopt Kubernetes. First of all, even though I kind of pooh-poohed the value of the Kubernetes managed services like EKS, AKS, and GKE, you absolutely should use them. I mean, yes, you can deploy your own cluster, but why? Just go with one of the managed solutions. Frankly, they're cheaper, especially GKE, right?
128
+
129
+ \[32:08\] And if you have a choice, just to-- if you have your druthers about which Cloud to be on, GKE is by far the best experience, and Azure is by far the worst experience, not just in terms of Kubernetes, but just across the board, right? And AWS is what it is. So if you're on AWS, you're probably forced to be on AWS, and whatever; you're on EKS. And then once you've got that, as I mentioned before, there's so much other stuff that has to be configured and deployed on top of that, and our best advice is just to keep it as simple as you can.
130
+
131
+ Most of our customers have already spent so many innovation points when they are adopting Kubernetes. We kind of feel it's our mission, our job to help guide them towards more conservative solutions, and fewer moving parts... Because it's so tempting, once you've got Kubernetes, like "Ah, I guess I need Istio because Istio does all these cool things." It does. And if you need those things, that's great, jump on board. But holy crap is Istio complicated, and it's dangerous. I mean, if you misconfigure Istio, you can really do damage to your production traffic. And avoid any tooling that you don't have an immediate pain point for. When you look at the CNCF landscape, it can often look like you're in a toy store. You see all these wonderful, cool gadgets and you just want to grab them all up into your basket, but you need to show a lot of restraint, because every one of those that you add is something else you have to manage and understand.
132
+
133
+ **Gerhard Lazu:** Oh yes. Yes. Most people forget about that, and install it, and that's it. Well, how are you going to upgrade it? And some components don't upgrade as well as others. And then that just opens a whole new world of problems, a whole new set of problems, like "Do you upgrade in place, or do you stand up another Kubernetes cluster?" And if a cluster gets too big, well, should you split it in multiple clusters? And before you know it, you're solving problems that you didn't even know existed before you chose Istio. So maybe don't?
134
+
135
+ **Tammer Saleh:** Right. Exactly. Exactly. You're like, "Where am I?" \[laughs\]
136
+
137
+ **Gerhard Lazu:** Exactly. "I thought I understood networking." No, you don't.
138
+
139
+ **Tammer Saleh:** Yeah. When you get to understand networking, and you see how Istio actually works, you're like, "Oh, my gosh..." And there are some components that are kind of table stakes for a new cluster. cert-manager is a great example of just-- okay, everybody should have cert-manager running in their cluster. But there's so many other things that are cool and interesting, but probably not something you need.
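+
+ *A minimal sketch of what "table stakes" looks like for cert-manager - a hypothetical ClusterIssuer for Let's Encrypt with an HTTP-01 solver; the email address and ingress class are placeholders:*
+
+ ```yaml
+ apiVersion: cert-manager.io/v1
+ kind: ClusterIssuer
+ metadata:
+   name: letsencrypt-prod
+ spec:
+   acme:
+     server: https://acme-v02.api.letsencrypt.org/directory
+     email: ops@example.com                 # placeholder address
+     privateKeySecretRef:
+       name: letsencrypt-prod-account-key   # Secret created by cert-manager
+     solvers:
+       - http01:
+           ingress:
+             class: nginx                   # assumes an NGINX ingress controller
+ ```
+
+ *With an issuer like this in place, an Ingress only needs a `cert-manager.io/cluster-issuer` annotation and a `tls` section to get certificates issued and renewed automatically.*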
140
+
141
+ Another example is Helm. Helm as a tool is amazing for installing third-party packages, something that somebody else has to maintain, right? You need Postgres? Then sure, use the official Postgres Helm Chart is the best way to do it, by far. Well, Postgres may be a bad example, because there's also operators that do an even better job, right? But what I see teams immediately doing because they just didn't know any better, they just assumed that this is how you use Kubernetes, is they start building Helm Charts for their internal applications; small teams doing this. And Helm, although it's great for package distribution and consuming third-party software, in order to author a Helm Chart, you are using a Turing-complete templating language in order to generate white space sensitive data structures.
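+
+ *To make the "Turing-complete templating language generating whitespace-sensitive data structures" point concrete, here is a small, hypothetical fragment of a Helm chart template; the chart name, helpers and values are invented for illustration:*
+
+ ```yaml
+ # templates/deployment.yaml in a hypothetical chart
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+   name: {{ include "mychart.fullname" . }}
+   labels:
+     {{- include "mychart.labels" . | nindent 4 }}
+ spec:
+   replicas: {{ .Values.replicaCount | default 1 }}
+   template:
+     spec:
+       containers:
+         - name: {{ .Chart.Name }}
+           image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+           {{- with .Values.resources }}
+           resources:
+             {{- toYaml . | nindent 12 }}
+           {{- end }}
+ ```
+
+ *Every `nindent` count has to match the surrounding YAML indentation exactly, which is the authoring overhead being described.*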
142
+
143
+ **Gerhard Lazu:** How crazy is that? Oh, my goodness...
144
+
145
+ **Tammer Saleh:** It's just crazy. It's crazy, right?
146
+
147
+ **Gerhard Lazu:** I'm glad it's not just me that thinks exactly the same way. I'm glad it's not just me. So I'm not the crazy one. Okay, good. So yeah, I have confirmation I'm not crazy. \[laughs\] Okay.
148
+
149
+ **Tammer Saleh:** I don't know about that, but in this one aspect, you're not crazy.
150
+
151
+ **Gerhard Lazu:** Damn it. Almost. Almost. Almost. \[laughter\]
152
+
153
+ **Tammer Saleh:** And the sad thing about it is they just don't know any better. They've got very simple applications, they're a small team, and they end up spending a lot of time building these Helm Charts to distribute them, and stuff. You don't need that. Kustomize, for example, is a great tool for managing your YAML when it's being deployed to multiple environments, because you can make very small changes. Kustomize is much easier to understand, much easier to maintain. If you're really small, you don't even need a tool like that. You could just apply the YAML and just call it a day.
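+
+ *A minimal sketch of the Kustomize approach being described - a hypothetical production overlay that reuses the base manifests and changes only the replica count and image tag (paths and names are illustrative):*
+
+ ```yaml
+ # overlays/production/kustomization.yaml
+ apiVersion: kustomize.config.k8s.io/v1beta1
+ kind: Kustomization
+ resources:
+   - ../../base                   # the shared, environment-agnostic manifests
+ replicas:
+   - name: example-api            # Deployment name in the base
+     count: 5
+ images:
+   - name: registry.example.com/example-api
+     newTag: "1.2.3"
+ ```
+
+ *Applied with `kubectl apply -k overlays/production`, so each environment is just a small diff on top of the same base.*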
154
+
155
+ \[36:14\] I think when a team chooses Kubernetes, what it should focus on is automation; building out their own internal automation system, not just for managing the cluster, using Terraform, which is by far the best tool for that kind of stuff, but also for managing the resources inside the cluster. A CI/CD pipeline, maybe using GitOps at the end, or whatever... Those are the fundamentals that your team should focus on, because once you have that, all the other changes become simpler. And frankly, that automation is half of the value prop of Kubernetes, because the Kubernetes API is so good. It's so easy to automate stuff through Kubernetes. And if you're not investing in that automation, you're wasting that value. And then, obviously -- I mean, I run a company, so I should say that... If you're just choosing Kubernetes, you should be looking for training. And I love our workshops, obviously, but there are others, right? But you do need to invest in your engineers' knowledge, because they are going to have to debug it when it goes sideways, and you don't want them floundering and using Stack Overflow in the middle of an outage.
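+
+ *The conversation doesn't name a specific GitOps tool, but as one possible sketch of "a CI/CD pipeline, maybe using GitOps at the end", here is a hypothetical Argo CD Application that keeps a cluster in sync with a Git repository of rendered manifests; the repository, paths and names are made up:*
+
+ ```yaml
+ apiVersion: argoproj.io/v1alpha1
+ kind: Application
+ metadata:
+   name: example-api
+   namespace: argocd
+ spec:
+   project: default
+   source:
+     repoURL: https://github.com/example-org/deploys.git   # illustrative repo
+     targetRevision: main
+     path: apps/example-api/overlays/production
+   destination:
+     server: https://kubernetes.default.svc
+     namespace: example-api
+   syncPolicy:
+     automated:
+       prune: true                # remove resources deleted from Git
+       selfHeal: true             # revert manual drift in the cluster
+ ```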
156
+
157
+ We offer engineering services, usually not for people who are just now adopting Kubernetes, unless you've got a very interesting application you're moving over, but you should be finding experts, either hiring Kubernetes experts, or finding a partner that you can integrate with your team, that will give you those subject matter experts for Kubernetes, because you're going to save a lot more time and money in the long run if you do that early on.
158
+
159
+ **Break:** \[37:53\]
160
+
161
+ **Gerhard Lazu:** You've touched on a really important point, namely the investment in automation. So if you use Kubernetes - that's great, especially if you need it; but you will have to invest in automation. And I think there's a set of principles which are really important that you have once you enter this world of cloud-native Kubernetes. Because otherwise, making choices will be really difficult. Automation is really important once you are in the world of Kubernetes, in the world of cloud-native.
162
+
163
+ **Tammer Saleh:** Absolutely.
164
+
165
+ **Gerhard Lazu:** What other things are important?
166
+
167
+ **Tammer Saleh:** \[39:55\] Well, I mean, if you're going to move into that world, again, as we said before, the complication is just massive. I mean, there's so much that you're pinning together, that you're tying together. I think that it's important if you're going to do that, that you invest in education in your engineers, so that they can understand this complexity. And depending on the size of the company that you are, depending on the size of your engineering team, many companies invest in what we're calling internal platforms. And you can just view that as an extension of the automation. It's almost a spectrum of how sophisticated these internal platforms get, and kind of what model they use. All the way from - on the lowest level side is just the platform team providing maybe a centralized Docker file, maybe a centralized Helm Chart... That's one of the few times we've seen Helm used internally in a good way... And a centralized CI/CD system, so that the application developers can plug their app into the Helm Chart using that Docker file, and it gets automatically deployed to all the various environments, and such.
168
+
169
+ Then on the other side of the spectrum is implementing a full Heroku, where the developers are insulated 100% from the details of Kubernetes, and they're given a really nice interface. We have never seen that done successfully, just to be clear. I've never seen that work where the developers did not still have to understand the intricacies of Kubernetes; because at some point, they've got to break glass in case of emergency.
170
+
171
+ **Gerhard Lazu:** Yeah, because you have to run it, right? You've built it, but you have to run it. And guess what? It's running on Kubernetes. So if you don't know how to debug it, or even understand what is happening, good luck to you.
172
+
173
+ **Tammer Saleh:** And if your platform team is so good that they have actually built a full interface on top of Kubernetes that takes care of all the details and the application developer only needs to interact with that interface, that platform they built, I've got news for you - you're probably in the wrong industry. You should spin that off and clear house, right?
174
+
175
+ **Gerhard Lazu:** Oh, you gave me an idea, because even though we use Kubernetes to run all of Changelog, the developers - they don't know that. They still just git push; and all the automation takes care of the rest. So we were using Docker Swarm before, and we were using Docker before. The experience, as far as developers are concerned - it has never changed. It has always been git push. Isn't that the Heroku experience, git push and it runs?
176
+
177
+ **Tammer Saleh:** That is. That is. But what happens when there's a fire? How do the developers debug when--
178
+
179
+ **Gerhard Lazu:** They don't.
180
+
181
+ **Tammer Saleh:** Okay. \[laughter\]
182
+
183
+ **Gerhard Lazu:** They don't. So around that, we have a set of services, for example, Grafana Cloud, where we send all the logs, all the metrics. So if there is a problem, that's one of the first places where you would look. A new addition was integrating with Honeycomb. And Honeycomb gets the Fastly logs as well, which is the CDN; because it's not just Kubernetes, it's also what's in front of it, and then what's behind it as well. There's all these components.
184
+
185
+ So having these different ways of understanding what is happening in your runtime, whether it's Kubernetes or something else, is important regardless what the run time is; for example, getting exceptions. That's a really old thing, which we used to do when we used to SCP our Ruby code, or FTP it, right? We still used to get exceptions. I forget what the name of that tool was. Do you remember what we used back in the day?
186
+
187
+ **Tammer Saleh:** There was a number of them. In fact, I actually wrote one of them...
188
+
189
+ **Gerhard Lazu:** Exactly. That's why I'm asking you. \[laughter\]
190
+
191
+ **Tammer Saleh:** I wrote Hoptoad, which later became Airbrake, and competed against Get Exceptional, and hilariously, both Airbrake and Get Exceptional were purchased by the same person, and now they're actually running under the same umbrella, which is kind of funny... But yeah.
192
+
193
+ **Gerhard Lazu:** Right.
194
+
195
+ **Tammer Saleh:** Yeah. You need all these things. You need all these interfaces into understanding what your application is doing. I'm really excited, by the way. This is a bit of a tangent, but I'm really excited by all the stuff that's going on with eBPF, especially with things like, I think it's New Relic's Pixie. So yeah, New Relic's Pixie is really exciting because of the deep insight it can give in a language-agnostic way. It's one of those things that you could see as a building block so that the developer does not need access to kubectl exec, for example.
196
+
197
+ **Gerhard Lazu:** \[44:18\] Exactly. That's it. That's, I think, what a successful ops side of running Kubernetes looks like - where you don't have to get there. As a developer - take blue-green, for example - if you do that properly, and if you have all the redundancies in place, even when something goes down, the end user doesn't see that. And it doesn't matter that it runs Kubernetes. And when it comes to debugging it - well, if you're a small team, and let's say the problem is in Heroku, what happens? Do you debug Heroku? No. No way. You don't get the keys to Heroku to debug the stack, right? It just gets scheduled somewhere else, and that's how that gets solved. So what I'm saying is having that visibility into how things run is really important. And if that's your experience and your interface, that's great. I think that's one of the principles that are really important, regardless of what the runtime is. And if it's Kubernetes, so be it.
198
+
199
+ **Tammer Saleh:** If you're going to be using something like Kubernetes, you need to invest doubly strongly in observability and in all of those metrics. But I'd argue that you need that just as much, if not more, if you are not using Kubernetes. If you're trying to do raw AWS, for example, it's even harder to build all that observability infrastructure in place. But absolutely, if you're just moving into the cloud world - a cloud-y world that's focused on automation - you need that observability, not only for your own ability to debug, but eventually, you're going to feed that observability back into your automation, right? You're going to do automated blue-green rollouts, where you want the automation, over the course of maybe a day, to look for errors, to look for reduced metrics, and to roll it back.
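One concrete way to wire metrics back into a rollout like that is a progressive-delivery controller such as Argo Rollouts; the sketch below gates promotion of a blue-green deployment on an analysis step. The workload, image, Service names, and the `error-rate-check` template are all hypothetical, and the exact fields should be checked against the Rollouts documentation for the version you run:

```yaml
# Hypothetical blue-green Rollout that only promotes the new version once a
# metrics-based analysis (defined in a separate AnalysisTemplate) passes.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0.0   # placeholder image
  strategy:
    blueGreen:
      activeService: web-active        # Service receiving live traffic (must exist)
      previewService: web-preview      # Service pointing at the new version (must exist)
      autoPromotionEnabled: false      # only promote after the analysis succeeds
      prePromotionAnalysis:
        templates:
          - templateName: error-rate-check   # hypothetical AnalysisTemplate
```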
200
+
201
+ **Gerhard Lazu:** Yeah, that's right. I know that I really like ops and infrastructure, that side of things, but our Kubernetes setup is simple on purpose, and some things could be better. It can always be improved. We have it public. Anyone can check it out to see how we run and how we set up and which components we pick. cert-manager is part of it, ExternalDNS, ingress-nginx - all the stock stuff.
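For context on how those stock components fit together, a single Ingress object can drive all three: ingress-nginx routes the traffic, cert-manager issues the TLS certificate named by the annotation, and ExternalDNS creates the DNS record for the host rule. The hostname, issuer, and service names below are placeholders, not Changelog's actual configuration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes this ClusterIssuer exists
spec:
  ingressClassName: nginx
  tls:
    - hosts: [app.example.com]
      secretName: app-tls         # cert-manager stores the issued certificate here
  rules:
    - host: app.example.com       # ExternalDNS publishes this record to your DNS provider
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```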
202
+
203
+ **Tammer Saleh:** Yes. ExternalDNS, also absolutely necessary.
204
+
205
+ **Gerhard Lazu:** Yes. It's part of it. And the Kubernetes is managed, so we don't deploy on bare metal servers, even though that's become simpler over the years since we've embarked on this journey. And there's other options which we will also be exploring.
206
+
207
+ So whether you do Kubernetes or something else, there will be certain operational concerns which will be difficult. And there's a level of maturity that you need to have on the team to navigate them. And I think that's worth reiterating. And in certain cases, like Istio - I'm sure it makes some things better, but networking? I don't know. I think networking gets more complicated with Istio. And if you're okay with the trade-off, maybe it's a good one to make, but I wouldn't. We haven't chosen Istio, so there we go.
208
+
209
+ **Tammer Saleh:** I agree with you 100%.
210
+
211
+ **Gerhard Lazu:** Talking about Kubernetes and how we run it, do you recommend a big cluster, or do you recommend smaller clusters?
212
+
213
+ **Tammer Saleh:** Oh, yeah. So when Kubernetes first came out-- I mean, first of all, the short answer is many small clusters. The long answer is when Kubernetes first came out, CIOs looked at it and said, "Oh, this is great. We're probably using 20% of our CPU and memory across all of our VMs, across our entire fleet", just because of natural inefficiencies between teams, right? You need a new app out, you throw a couple of VMs out there, you call it a day. And the CIO's job, part of it, is to reduce infrastructure costs, right? And so the CIOs looked around and they said, "Oh, this is great. We can bin-pack the \*\*\*\* out of this", right? "We can take all that stuff and just shove it into one big, massive cluster, and save so much money." And I think that drove a lot of initial Kubernetes adoption. I mean, obviously, there was a lot of grassroots adoption of Kubernetes, but there was a lot of adoption coming out of the IT organizations in larger companies because of that driving factor.
214
+
215
+ \[48:14\] Now, when the operators started using Kubernetes, they saw what I think of as the real benefits. I don't think the benefit of Kubernetes is about orchestrating containers. I think it's about that beautiful idempotent, declarative and ubiquitous API. And especially when you start extending that into external services, external resources that you're managing, like using, for example, Crossplane to provision AWS resources through kubectl - it's a fantastic experience, right?
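As a small illustration of that kubectl-driven experience: with Crossplane and an AWS provider installed, an external resource such as an S3 bucket becomes just another object you apply and let a controller reconcile. The exact API group and version depend on which provider package is installed, and the bucket name and ProviderConfig below are assumptions:

```yaml
# Hypothetical Crossplane-managed S3 bucket: `kubectl apply` it, and the AWS
# provider creates - and keeps reconciling - the real bucket.
apiVersion: s3.aws.upbound.io/v1beta1   # group/version varies by provider package
kind: Bucket
metadata:
  name: example-artifacts
spec:
  forProvider:
    region: us-east-1
  providerConfigRef:
    name: default                       # assumes a ProviderConfig named "default"
```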
216
+
217
+ **Gerhard Lazu:** Yes.
218
+
219
+ **Tammer Saleh:** And the operators looked at it and said, "This whole Kubernetes thing is pretty cool." However, blast radius is a thing, right? And so if you got everything in one big cluster-- and especially those poor operators who went through the 1.8 through 1.11 upgrade path got burned so many times on trying to upgrade these clusters in place, and they started developing these complicated blue-green cluster upgrade strategies where they'd deploy an entirely new cluster... And that's necessary and great, but now we've figured out that, well, you should just be running many small clusters. And there's two different ways you could do it. You run a cluster per bounded context for your microservices. In other words, you could have a cluster just for your shopping cart stuff, the next cluster just for your front end stuff, and a cluster for your back end, and all of that. But a better way of doing it is to run all these clusters as homogenous workloads, where they're all running identical workloads.
220
+
221
+ In fact, one of our clients is doing that, and they're referring to it as fleets internally. So what they do is actually really smart. They run a cluster in AWS per availability zone, and that does a couple of things. It's a natural dividing point for the different clusters, and it means that they also keep all of their traffic inside each AZ because all the services in cluster A are always talking to other services in cluster A. They don't try and do cross-cluster traffic. And that saves them a good amount of money, because they have a lot of networking that's happening in AWS. But also, it means that when they're upgrading these clusters, they can just upgrade one, and if it goes sideways, who cares? Burn it down, rebuild it, and you're fine. You've only lost - what, 20%, 25% of your capacity? And you just keep moving.
222
+
223
+ Now, of course, the big elephant here is state. You can't do that with databases. And so the best solution that we always propose to our customers is, "Look, if you're going to run stateful workloads in Kubernetes--" which, by the way, that's a lot of innovation points; you really need a team to manage that if you're going to do that. That's a dangerous thing to do as a small company. But if you're going to run stateful workloads in Kubernetes, at least shove them into a smaller cluster that you know you have to treat as a pet. You've taken all of your other clusters, your stateless ones, and you've made them into cattle, which is beautiful; then you constrain all your stateful workloads into one, or just use RDS. Just externalize your databases entirely, right?
224
+
225
+ **Gerhard Lazu:** It's a tough problem. And yeah, unless you've been solving that problem for some years, it's really difficult to appreciate. And even the operators - I'm glad that you mentioned it earlier, for PostgreSQL. Do you know how we run PostgreSQL?
226
+
227
+ **Tammer Saleh:** How do you?
228
+
229
+ **Gerhard Lazu:** We run it as a stateful set. No Helm, no operator, nothing like that. And since we did that, it's been more stable. It has not failed since we went to a stateful set, a simple stateful set, PostgreSQL container-- sorry, PostgreSQL image.
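A bare-bones version of that approach - one replica, the stock image, no operator - looks roughly like the sketch below. This is not Changelog's actual manifest (theirs is public in their infrastructure repository); the secret name, image tag, and host path are assumptions, and credentials and tuning are deliberately left simple:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1                       # single instance, no replication
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:14        # stock PostgreSQL image
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres    # assumes this Secret exists
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          hostPath:                 # host-local disk, as discussed later in the episode
            path: /var/lib/postgres-data
            type: DirectoryOrCreate
```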
230
+
231
+ **Tammer Saleh:** And what were you doing before that? Were you doing RDS, or were you doing--?
232
+
233
+ **Gerhard Lazu:** We tried running the Crunchy Data PostgreSQL operator, and it failed because of replication. Actually, we even covered this in an episode at length, but the point was the primary stopped replicating to the replica. So the write-ahead log filled up on the primary. It crashed. The secondary could not be promoted - the replica could not be promoted to primary, because it was too far behind - and then we didn't have a database.
234
+
235
+ **Tammer Saleh:** \[laughs\] Ouch.
236
+
237
+ **Gerhard Lazu:** \[52:10\] And we couldn't reboot the main one, because the PVC filled up, and we couldn't resize the PVC either. And we thought, "Nah, let's just ditch Crunchy Data." We actually went to the Zalando one, the other PostgreSQL operator, and the same thing happened. So obviously, there was an issue at that point with networking, and that broke PostgreSQL replication, which resulted in a less stable database.
238
+
239
+ **Tammer Saleh:** Yeah. But I mean, come on, that's not because of those operators. You would have the same problem running a stateful set. I think you probably changed other things at the same time as moving to a stateful set, or maybe changed the way you use it, or something like that.
240
+
241
+ **Gerhard Lazu:** We don't replicate. Single instance...
242
+
243
+ **Tammer Saleh:** Oh, okay. There you go. Yeah.
244
+
245
+ **Gerhard Lazu:** We back everything up. Every hour, we do a full backup. And we can restore from backup within two, three minutes. So a blank node can pull the backup down from S3 and boot up in three minutes. We'll have less downtime, and it's a very simple procedure. Now, would I choose a managed--
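That hourly-backup-plus-quick-restore pattern can be as simple as a CronJob that streams a pg_dump straight to object storage. This is only a sketch of the idea, not Changelog's real job: the image (assumed to contain both pg_dump and the AWS CLI), database name, secret, and bucket are all placeholders:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 * * * *"               # every hour, on the hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: example/pg-backup:14    # hypothetical image with pg_dump + aws CLI
              envFrom:
                - secretRef:
                    name: postgres-backup-credentials   # PGPASSWORD, AWS keys, etc.
              command: ["/bin/sh", "-c"]
              args:
                - >-
                  pg_dump -h postgres -U postgres -Fc app_db |
                  aws s3 cp - s3://example-backups/postgres/$(date +%Y%m%d%H).dump
```

Restoring onto a blank node is then just the reverse: pull the latest dump from S3 and run pg_restore before starting the server.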
246
+
247
+ **Tammer Saleh:** Right. You've got a potential data loss issue of up to an hour, right? Half an hour median data loss if you lose the PV, right?
248
+
249
+ **Gerhard Lazu:** Exactly. Yes.
250
+
251
+ **Tammer Saleh:** But that's a trade-off that you're willing to make. That's fine. That works great.
252
+
253
+ **Gerhard Lazu:** Exactly. And if I was to choose any PostgreSQL type of service, I would just go for a managed one, like CockroachDB, something like that. I mean, that's what I'm thinking, because it's a really hard problem to solve. I've been trying to solve this for a couple of years. I don't think I have, in a different context, because it's really difficult.
254
+
255
+ **Tammer Saleh:** I've got to tell you, I love the solution you just talked about, because too many companies-- and I've heard other people say this, so it's not like this is some insight that I have, but I agree with it 100%. Too many companies look around and they see all these really interesting, production-grade, hardened technologies coming out of Google and Facebook and other companies like that, and they think, "Oh, okay. Well, if we're going to play in the Cloud, we've got to have that", right? You don't. And if you try and build your system to be at that level, it's going to drag you down with the weight of it, right?
256
+
257
+ **Gerhard Lazu:** Oh, yes.
258
+
259
+ **Tammer Saleh:** And you looked at it and you said, "Nah, worst case scenario, we lose a PV. We can handle half an hour's worth of data loss", right? It's not that big of a deal. Then you can go with a single instance of Postgres without replication and you are fine, and your life is so much better, right? So I love that you had the self-awareness as an organization to make that choice.
260
+
261
+ **Gerhard Lazu:** Yeah. We don't use PVs, but that's another story. \[laughter\]
262
+
263
+ **Tammer Saleh:** Do you use a host disk for that? Or what do you do?
264
+
265
+ **Gerhard Lazu:** Oh, yes. It's like 10 times faster. We never lose that.
266
+
267
+ **Tammer Saleh:** You don't care. So it does mean that when you're rolling hosts under your cluster, you need to probably call downtime, right? You need to stop fast--
268
+
269
+ **Gerhard Lazu:** We have a single host. \[laughter\] It's so good. It never went down. We have a much better integration with the CDN. And what that means is that even when the origin is down, we serve stale content - unless you do POSTs, or PATCHes, or anything like that. GETs - it works. And parts of the website may be down for most users, but you get your MP3s. We'll serve that content. We'll get the pages, and--
270
+
271
+ **Tammer Saleh:** Basically, what you're telling me is, "Boy, life is easy when you are a read-heavy workload, I'll tell you what."
272
+
273
+ **Gerhard Lazu:** Yeah, it is. It definitely is. And if we were to, for example -- if we had to have the database up, I really do think that going to a managed service, regardless who manages that, it's a much better proposal.
274
+
275
+ **Tammer Saleh:** Oh, for sure.
276
+
277
+ **Gerhard Lazu:** All the backups, all the replication, all that stuff - it's managed. You don't have to do that. And you're just consuming the PostgreSQL interface. That's it. So that sounds like a much better proposal. Like a CDN - would you run your own CDN? Maybe. I mean, if you're big enough, you'll have to.
278
+
279
+ **Tammer Saleh:** \[55:58\] If you're that scale, sure, right? And another thing about running databases inside Kubernetes is that you could think of it as almost addicting, because once you make the decision that, "Well, we're not going to use an external database provider. Instead, we're going to just run them as stateful sets inside Kubernetes. We believe in the Zalando operator", for example, right? Well, you're going to find that your developers are naturally just going to be provisioning databases. And that's going to result in multiple stateful sets, not big schemas in a large existing Postgres. It's just naturally going to proliferate. And that's the headache that you're going to feel, is that suddenly-- we have a client who's got hundreds of Postgreses. And I'm not going to name the client, obviously, but I will say they're running them wrong, and they know it, right? It's technical debt that we're helping them dig out of; but it's a huge pain, a huge cost for them.
280
+
281
+ **Gerhard Lazu:** Once you get to a certain scale, you're right; you have to take a certain approach. But when you're not there, don't take that approach. Take the simpler one. And what this approach means for us is that we can innovate elsewhere, and we can fight other battles. There will still be battles to fight, even if you don't do this one. It doesn't mean that you're less capable or less curious. It just means you've picked your battles in a way that suits you.
282
+
283
+ **Tammer Saleh:** And one of these days, as a company, you'll get big enough that you need those more interesting, innovative challenges. And there'll be companies like ours to help you out when that happens, but please don't just assume you need that prematurely. There's a similar thing with writing code. I'll tell you, iterating on a codebase -- because I've spent half my career as an application developer, as well as in operations... Iterating on a codebase before it's actually launched and in production is so much faster, right?
284
+
285
+ **Gerhard Lazu:** Oh, yes.
286
+
287
+ **Tammer Saleh:** You can make all kinds of schema changes. Who cares...?
288
+
289
+ **Gerhard Lazu:** Never ship. That's what you're saying. \[laughter\]
290
+
291
+ **Tammer Saleh:** Yeah. Basically, never ship, and you'll be the fastest startup.
292
+
293
+ **Gerhard Lazu:** So the opposite of this show. Don't ship it. \[laughter\]
294
+
295
+ **Tammer Saleh:** But I mean, it's the same thing. You launch when you need to launch, but you understand the fact that as soon as you launch, you're going to slow down by at least a factor of two, maybe three, right? And you increase the complexity of your operations stance, your Kubernetes usage when you need to. And you understand -- I mean, even embracing Kubernetes, you do it when you need to. And you understand that that much complexity is going to slow you down.
296
+
297
+ **Gerhard Lazu:** Yeah. That's a good one. That is a good one. So I think it's time to wrap up. We can have so much fun. I didn't even realize. I think we just have to do this more often, that's the only conclusion again, you know? As we prepare to wrap up, what do you think the most important takeaway is for our listeners from this conversation?
298
+
299
+ **Tammer Saleh:** Well, I mean, I didn't think it was going to be this when we first started talking, but I think the most important takeaway is don't use Kubernetes, unless you need to. Delay the adoption of Kubernetes. It's going to be on your roadmap. It's going to happen as you grow. But just like anything else, don't try and tackle that problem early. Use one of the existing managed platforms, not managed Kubernetes installations... Although, when you do adopt Kubernetes, do that; but just delay it for as long as you can. And even then, understand that you're spending innovation points, so use it in as simple of a way as you can, because you need to pay down that innovation debt, right? Focus on the automation, and focus on the education for your people, because you will underestimate how complicated Kubernetes is. You will be surprised when you start using it and start seeing all of the different ways that you can configure it, and all the best practices that are not codified in it.
300
+
301
+ **Gerhard Lazu:** Well, thank you, Tammer, for sharing so much valuable information.
302
+
303
+ **Tammer Saleh:** I had so much fun. This was great. Thank you.
304
+
305
+ **Gerhard Lazu:** Yeah. I had so much fun, too. Thank you. I'm looking forward to the next one. I really am.
306
+
307
+ **Tammer Saleh:** Absolutely.
308
+
309
+ **Gerhard Lazu:** Thank you.
Is Kubernetes a platform?_transcript.txt ADDED
@@ -0,0 +1,1173 @@
1
+ [0.22 --> 5.62] You are listening to ShipIt, a podcast about operations, infrastructure, and the people
2
+ [5.62 --> 7.00] that are Kubernetes-ing.
3
+ [8.04 --> 8.56] Kubernetes-ing.
4
+ [9.28 --> 10.78] K-T-ing, you know what I mean.
5
+ [11.44 --> 16.62] I'm your host, Gerhard Lazu, and in this episode, I'm joined by Tammer Saleh, founder
6
+ [16.62 --> 20.40] of Super Orbital and former VP of Engineering at Pivotal.
7
+ [20.86 --> 25.28] Many years ago, we both used to work in the same London office on Cloud Foundry, and nowadays
8
+ [25.28 --> 26.44] we are into Kubernetes.
9
+ [26.44 --> 31.32] We start with table tennis, remote work, and then we spend the rest of the time talking
10
+ [31.32 --> 33.98] about the challenges that teams have with Kubernetes.
11
+ [34.60 --> 39.26] Tammer and his Super Orbital team are deeply experienced in this topic, and they help teams
12
+ [39.26 --> 44.78] at companies like Bloomberg, Shopify, and federal US agencies tackle hard Kubernetes and DevOps
13
+ [44.78 --> 47.22] problems through engineering and training.
14
+ [47.66 --> 50.40] So why do companies need Kubernetes in the first place?
15
+ [50.80 --> 52.80] Which are the right reasons for choosing it?
16
+ [53.14 --> 54.60] Is Kubernetes even a platform?
17
+ [54.60 --> 59.54] My favorite, I'm doing Kubernetes wrong, but it works better than when I was doing
18
+ [59.54 --> 59.98] it right.
19
+ [60.24 --> 61.16] So what's up with that?
20
+ [61.58 --> 63.22] This last one was a lot of fun.
21
+ [63.54 --> 66.76] And, at your request, we left the entire minute of laughter in.
22
+ [67.08 --> 70.66] Big thanks to our partners Fastly, LaunchDarkly, and Linode.
23
+ [71.02 --> 72.82] Thank you for the great bandwidth Fastly.
24
+ [73.26 --> 75.34] You can learn more at Fastly.com.
25
+ [75.86 --> 80.10] Ship new features with confidence by getting your feature flags powered by LaunchDarkly.com.
26
+ [80.10 --> 83.70] And thank you, Linode, for keeping our Kubernetes fast and simple.
27
+ [84.20 --> 88.28] Run your setup as we do via Linode.com forward slash changelog.
28
+ [88.28 --> 98.70] This episode is brought to you by Honeycomb.
29
+ [99.16 --> 103.76] Honeycomb is built on the belief that there's a more efficient way to understand exactly what
30
+ [103.76 --> 105.84] is happening in production right now.
31
+ [105.84 --> 109.96] When production is running slow, it's hard to know exactly where problems originate.
32
+ [110.26 --> 114.18] Is it your application code, your users, or the underlying systems?
33
+ [114.18 --> 119.02] Teams who don't use Honeycomb scroll through endless dashboards guessing at what they mean.
34
+ [119.26 --> 123.56] They deal with alert floods, guessing which ones matter, and go from tool to tool to tool,
35
+ [123.86 --> 125.72] guessing at how the puzzle pieces all fit together.
36
+ [126.04 --> 129.70] It's this context switching and tool sprawl that are slowly killing your teams and your
37
+ [129.70 --> 130.16] business.
38
+ [130.56 --> 135.06] With Honeycomb, you get a fast, unified, and clear understanding of the one thing driving
39
+ [135.06 --> 136.44] your business, production.
40
+ [136.88 --> 140.94] Honeycomb quickly shows you the correct source of issues, discover hidden problems, even in
41
+ [140.94 --> 145.24] the most complex stacks, understand why your app feels slow to only some users.
42
+ [145.66 --> 148.12] With Honeycomb, you guess less and no more.
43
+ [148.56 --> 153.04] Join the swarm and try Honeycomb free today at honeycomb.io slash changelog.
44
+ [153.04 --> 156.02] Again, honeycomb.io slash changelog.
45
+ [156.02 --> 172.60] We are going to ship in three, two, one.
46
+ [186.02 --> 190.50] It's been several years since we worked together, 2016, 2017.
47
+ [191.44 --> 196.08] And I think it's been too long since you and me played the game of table tennis.
48
+ [196.54 --> 197.14] How's your game?
49
+ [197.24 --> 201.20] I was so bad at table tennis.
50
+ [202.16 --> 203.06] That's not true.
51
+ [203.46 --> 204.02] That's not true.
52
+ [204.16 --> 205.18] I've seen the improvement.
53
+ [206.00 --> 208.26] I've seen those years in which you really improved.
54
+ [208.66 --> 211.02] And the last games that we've had were really good.
55
+ [211.12 --> 212.06] So I enjoyed them.
56
+ [212.20 --> 212.92] It was a lot of fun.
57
+ [212.92 --> 217.04] I don't know if you know this, it was never official, but it always kind of seemed like
58
+ [217.04 --> 221.86] your seniority at Pivotal would directly correlate with how good you were at table tennis.
59
+ [222.84 --> 223.24] Yes.
60
+ [224.52 --> 226.90] I knew that, but I never mentioned it to anyone.
61
+ [227.04 --> 228.60] I think it was like a little thing.
62
+ [228.74 --> 229.00] Yes.
63
+ [229.68 --> 233.50] I'm pretty sure most of my engineers let me win just to make me feel better.
64
+ [233.86 --> 234.58] I'm sorry.
65
+ [235.02 --> 235.80] Not me.
66
+ [236.98 --> 238.54] No, we had some great games.
67
+ [238.68 --> 241.18] So did you play much in the last three, four years?
68
+ [241.40 --> 242.18] Not at all.
69
+ [242.30 --> 242.82] Not at all.
70
+ [242.94 --> 244.52] I mean, it was entirely a Pivotal thing.
71
+ [244.62 --> 246.60] It was like part of, built into the Pivotal culture.
72
+ [246.78 --> 250.72] You know, you're pair programming and you need a quick 15 minute break where you get
73
+ [250.72 --> 255.80] up and you jump around and there's table tennis tables right there and you're playing doubles.
74
+ [255.94 --> 256.76] So you're a pair.
75
+ [256.98 --> 258.72] You find another pair that also needs a break.
76
+ [258.78 --> 261.34] I mean, everything about it was just built around Pivotal.
77
+ [261.60 --> 261.80] Yeah.
78
+ [261.90 --> 262.70] I really miss that.
79
+ [262.70 --> 267.24] Like from the whole office culture, which seems to be slowly disappearing when it comes
80
+ [267.24 --> 268.26] to remote work.
81
+ [268.26 --> 273.66] And, you know, this is like the new norm and we're in it for the long drive, shall I say.
82
+ [273.66 --> 278.98] I really miss that table tennis, that social aspect, that, I mean, pairing is great.
83
+ [279.06 --> 279.80] You can do it remotely.
84
+ [279.80 --> 282.36] But what you can't do remotely is play table tennis.
85
+ [282.36 --> 283.10] It's true.
86
+ [283.20 --> 287.44] I mean, I've always been very passionately 100% remote.
87
+ [287.54 --> 291.12] Our company has always been 100% remote, even before the apocalypse.
88
+ [291.50 --> 295.04] And that made the apocalypse a little bit easier for us to weather as a company.
89
+ [295.04 --> 299.92] But I do miss that camaraderie of going out to lunch together, that camaraderie of playing
90
+ [299.92 --> 301.48] a game of table tennis together.
91
+ [302.00 --> 307.20] And obviously there's a tax to being remote when it comes to communication, right?
92
+ [307.38 --> 310.54] Communication is just more fluid when you're sitting right there.
93
+ [310.96 --> 313.82] At the same time, there's always benefits one side or the other.
94
+ [314.26 --> 321.46] And I think the benefits of being able to find amazing talent who's uninterested in moving
95
+ [321.46 --> 327.20] to some central location and the benefit of everyone in the company being on equal footing.
96
+ [327.66 --> 331.92] You know, the companies that do remote where there's a mothership and small offices, the
97
+ [331.92 --> 335.76] small offices always feel like their growth is going to be stunted.
98
+ [335.76 --> 340.36] And it is because they're not close to leadership and close to where the decisions are made.
99
+ [340.74 --> 346.54] And even more important, and this is, I think this is more of a, about American culture and
100
+ [346.54 --> 351.44] what's been happening to American culture over the past, I don't know, 20, 30, 40 years.
101
+ [352.10 --> 357.72] As people congregate more into the cities, we are getting a very strong cultural divide.
102
+ [357.80 --> 361.20] It's probably happening in other places too, but for us, it's incredibly strong between
103
+ [361.20 --> 363.90] the cities and the countryside, right?
104
+ [364.62 --> 371.94] And I feel like the more fully remote various companies move towards, the better it's going
105
+ [371.94 --> 377.68] to be for society because you get people from different backgrounds all working together
106
+ [377.68 --> 379.56] and you start to flatten out the cities.
107
+ [379.56 --> 383.98] I think cities are not a great thing from a cultural point of view, right?
108
+ [384.36 --> 388.16] They're a huge strain on infrastructure and it would just be much better if we could just
109
+ [388.16 --> 392.40] flatten them a bit and have the small towns grow a bit bigger in the countrysides.
110
+ [392.56 --> 394.22] And I think fully remote allows that.
111
+ [394.52 --> 395.44] Yeah, I can see that.
112
+ [395.52 --> 399.24] And I do have to say, having left a big city not that long ago, I mean, I'm still around
113
+ [399.24 --> 399.38] it.
114
+ [399.42 --> 402.26] I'm still around London, but I'm not living in London anymore.
115
+ [402.26 --> 408.00] And I do appreciate the advantages to that, but I can also see some of the trade-offs.
116
+ [408.16 --> 409.98] So there's always some trade-offs.
117
+ [410.26 --> 411.46] We miss the really good dinners.
118
+ [412.14 --> 412.54] Yeah.
119
+ [412.94 --> 413.90] And the table tennis.
120
+ [414.30 --> 415.40] And the table tennis, yeah.
121
+ [415.86 --> 416.14] Okay.
122
+ [416.58 --> 421.38] Now, one other topic that I know that you're really passionate about besides dinners and
123
+ [421.38 --> 422.64] table tennis is Kubernetes.
124
+ [423.06 --> 423.78] It's true.
125
+ [423.86 --> 424.32] It's true.
126
+ [424.80 --> 425.40] Same here.
127
+ [425.50 --> 425.88] Same here.
128
+ [425.94 --> 426.44] Big fans.
129
+ [426.44 --> 433.36] So I know that you're seeing so many things around Kubernetes, so many social interactions,
130
+ [433.92 --> 436.86] so many teams interacting with Kubernetes.
131
+ [437.58 --> 437.70] Yeah.
132
+ [437.90 --> 444.70] And I see companies these days, they no longer say, oh, Kubernetes is interesting.
133
+ [444.94 --> 445.88] Maybe I should try it out.
134
+ [445.98 --> 447.68] They need Kubernetes.
135
+ [448.26 --> 453.42] And that's a very interesting mind shift which happened, I think, in the last maybe year,
136
+ [453.50 --> 453.94] two years.
137
+ [453.94 --> 460.16] So a company, when they start with Kubernetes, what problems do you see them having?
138
+ [460.44 --> 461.36] Yeah, that's a great question.
139
+ [461.70 --> 463.26] And just to put a little bit of context in it.
140
+ [463.54 --> 466.48] So at Super Orbital, we have kind of two lines of business.
141
+ [466.72 --> 470.36] One of the lines of business is, the biggest one is our engineering services.
142
+ [470.52 --> 474.10] We help companies out with very difficult Kubernetes-related problems.
143
+ [474.30 --> 480.70] We have a very small team of very senior, seasoned engineers with a lot of judgment.
144
+ [480.70 --> 487.38] And when one of our clients has a very unusual and challenging problem with Kubernetes, like
145
+ [487.38 --> 492.76] going on-premise via Kubernetes or doing some very deep security stuff with Kubernetes.
146
+ [492.92 --> 495.40] That's when they bring us on board for short-term engagements, whatever.
147
+ [495.54 --> 495.96] We help out.
148
+ [496.26 --> 501.74] We also have a smaller part of our business, which is producing workshops and training.
149
+ [501.74 --> 506.78] And the reason that I bring this up is because when we are doing our workshops, that's when
150
+ [506.78 --> 512.38] we engage more with companies who are just starting to embrace Kubernetes, right?
151
+ [512.46 --> 520.68] So we don't help those customers on the engineering front as often, but more likely, we get to train
152
+ [520.68 --> 525.18] them and show them how complex Kubernetes is.
153
+ [525.46 --> 527.42] That's the key problem with Kubernetes.
154
+ [527.42 --> 533.32] I mean, everybody who's used it knows it, but the complexity is huge.
155
+ [533.62 --> 541.52] I mean, there's something like 80 different resource types that the Kubernetes API understands
156
+ [541.52 --> 542.28] the last time I looked.
157
+ [542.50 --> 549.94] And each one of those can have dozens or hundreds of attributes that you have to, to some degree,
158
+ [550.06 --> 550.66] understand.
159
+ [550.66 --> 557.64] And especially as you're doing production workloads in Kubernetes, the defaults are not always
160
+ [557.64 --> 559.06] in your favor, right?
161
+ [559.14 --> 564.22] So things like affinity rules and stuff, which this stuff is improving, but affinity rules,
162
+ [564.64 --> 569.86] security, all that stuff is things that are kind of left as an exercise to the reader with
163
+ [569.86 --> 570.20] Kubernetes.
164
+ [570.58 --> 572.54] And so the complexity is just enormous.
165
+ [573.04 --> 577.86] And new releases, they used to happen quarterly and now literally slowed it down because quarterly
166
+ [577.86 --> 578.56] was too fast.
167
+ [578.56 --> 581.96] So now it's every three, three times a year, you know, new releases.
168
+ [582.16 --> 587.18] Sure, it's a minor number, but we all know that in Kubernetes world, like the minors are
169
+ [587.18 --> 588.38] basically majors, right?
170
+ [588.44 --> 591.24] So, you know, 1.23 is around the corner right now.
171
+ [591.72 --> 593.28] By the time this is published, it'll probably be out.
172
+ [593.68 --> 600.52] And the interesting thing to me is that the original authors of Kubernetes, they never envisioned
173
+ [600.52 --> 605.58] that Kubernetes would be used directly by application developers.
174
+ [605.78 --> 607.14] That's fascinating to me, right?
175
+ [607.14 --> 613.44] There's some tweet by Joe Beda where he said that they always viewed YAML as an implementation
176
+ [613.44 --> 614.04] detail.
177
+ [614.18 --> 618.28] It's like the assembly language or whatever, the API that you would talk to Kubernetes via,
178
+ [618.38 --> 622.72] and there would always be something on top of it that would smooth over the rough edges
179
+ [622.72 --> 626.92] and take care of a lot of that complexity and make all those decisions for the developers,
180
+ [627.12 --> 627.84] for the engineers.
181
+ [628.20 --> 629.70] But yeah, here we are, right?
182
+ [629.70 --> 634.50] We are all wrangling YAML in order to use Kubernetes.
183
+ [634.76 --> 641.28] So absolutely, when we train our customers in Kubernetes, our most popular workshop is
184
+ [641.28 --> 645.52] this core Kubernetes workshop where it's like you just want to get your application developers
185
+ [645.52 --> 647.50] up to speed on how to use Kubernetes.
186
+ [647.90 --> 650.30] The complexity is just astounding.
187
+ [650.30 --> 655.76] And you need all of your engineers to understand it if they're going to carry the pager, especially
188
+ [655.76 --> 662.02] a smaller company where your application engineers need to be able to debug issues with their
189
+ [662.02 --> 663.12] applications in the cluster.
190
+ [663.44 --> 667.64] When things go sideways, they need far more knowledge than you would expect.
191
+ [667.64 --> 673.70] So when companies come to you saying that, hey, Tamer and your awesome super orbital team,
192
+ [673.88 --> 674.56] we need help.
193
+ [674.80 --> 675.84] We really need help.
194
+ [676.10 --> 677.52] What do they need help with?
195
+ [677.64 --> 678.20] Is it training?
196
+ [678.48 --> 680.20] Is it running stuff?
197
+ [680.44 --> 681.30] What does that look like?
198
+ [681.50 --> 685.68] We don't do, because of the nature of who we hire and how we're positioned, we don't like
199
+ [685.68 --> 687.34] help with maintenance on clusters.
200
+ [687.34 --> 691.82] We don't help with on-call or upgrading clusters and that kind of stuff, which it just doesn't
201
+ [691.82 --> 693.82] make sense to engage with us for that kind of thing.
202
+ [693.82 --> 700.24] But customers definitely come to us for training and they come to us, like I said, for the harder
203
+ [700.24 --> 702.04] Kubernetes problems.
204
+ [702.82 --> 706.56] Can you give us a few examples, like some hard Kubernetes problems that companies struggle
205
+ [706.56 --> 708.04] with or teams struggle with?
206
+ [708.18 --> 714.64] Yeah, we have a couple of clients who are attacking on-premise installations for their
207
+ [714.64 --> 714.92] product.
208
+ [715.00 --> 720.28] They have a product that they run, but they want to deliver it to other companies on-premise
209
+ [720.28 --> 725.08] in the other companies, AWS accounts or even bare metal or whatever.
210
+ [725.86 --> 732.52] And the interesting thing about Kubernetes is that it is becoming that ubiquitous platform.
211
+ [732.96 --> 738.30] It is becoming that assumption that you can make that if I'm going to go on-premise, I want
212
+ [738.30 --> 743.20] to target Kubernetes because that's going to hit the 80% of my potential customers.
213
+ [743.20 --> 744.52] That's easily becoming the case.
214
+ [744.52 --> 750.80] And going on-premise is very difficult, even with a substrate like Kubernetes to lean on,
215
+ [750.88 --> 753.90] because often you get zero telemetry, right?
216
+ [754.08 --> 758.24] You get no metrics, no logs, no hands on the keyboard.
217
+ [758.38 --> 761.06] You can't kubectl exec into something and fix it.
218
+ [761.26 --> 767.06] Usually with these engagements, it's with, or usually for our clients, their customers are
219
+ [767.06 --> 773.08] highly regulated, highly secure companies that have very strong security postures.
220
+ [773.08 --> 779.20] And so what our clients need is not only to believe that what they are going to be deploying
221
+ [779.20 --> 785.86] into their customers' Kubernetes environments are well-engineered and using all of the best
222
+ [785.86 --> 790.16] practices from Kubernetes' point of view, but often they also need a lot of custom code
223
+ [790.16 --> 792.94] developed in order to do health checks.
224
+ [793.32 --> 800.38] For one customer, we actually built a dashboard that their customers can go to and see the health
225
+ [800.38 --> 804.60] of their application, but also the health of the underlying cluster, basically so that
226
+ [804.60 --> 809.46] their customers can self-select into, should I file a ticket or is it actually a problem
227
+ [809.46 --> 812.16] with our own cluster and we need to go to our own operations team?
228
+ [812.34 --> 814.30] That kind of thing is fundamentally important.
229
+ [814.72 --> 820.00] And when we were at Cloud Foundry, we have so much experience with the headaches of trying
230
+ [820.00 --> 825.62] to ship on-premise that we just naturally, that's why we ended up with all these customers
231
+ [825.62 --> 828.04] doing it, because we just had that experience already.
232
+ [828.04 --> 834.54] Another fun example is we had a crypto client who wanted to integrate AWS Nitro Secure Enclaves
233
+ [834.54 --> 835.68] with EKS.
234
+ [836.46 --> 842.86] And the Nitro Enclave thing is a really interesting technology where you can run verified code
235
+ [842.86 --> 849.52] in a highly secure hardware-based environment that has to be built into the chips on the actual
236
+ [849.52 --> 851.18] machines that AWS gives you.
237
+ [851.18 --> 855.92] And even AWS engineers cannot access the memory for that code.
238
+ [856.04 --> 857.78] But using it is a huge pain.
239
+ [857.94 --> 859.94] I mean, using it is incredibly difficult.
240
+ [860.60 --> 865.86] And the code that runs inside this secure enclave cannot do things like network or anything.
241
+ [866.14 --> 870.12] You can only communicate with it through this weird VSOC that happens at the kernel level.
242
+ [870.12 --> 873.54] And so integrating that with EKS turned out to be very challenging.
243
+ [873.92 --> 875.66] And so they brought us on board to help out with that.
244
+ [876.22 --> 881.68] And as it turns out, we were, I think, maybe still the only people who have done that integration,
245
+ [881.86 --> 886.56] the only people who have tied EKS and Nitro together so that you could launch a secure
246
+ [886.56 --> 890.70] enclave from a pod and communicate with it directly from that pod.
247
+ [890.70 --> 895.20] And we know that because we actually had to work with the AWS engineering team to get it done.
248
+ [895.74 --> 896.52] And it was a lot of fun.
249
+ [896.68 --> 898.02] And we got, you know, we blogged about it.
250
+ [898.28 --> 900.40] And the engineer loved that work.
251
+ [900.54 --> 904.28] It's part of the reason why we can attract such senior talent is because we get to work
252
+ [904.28 --> 905.86] on the more interesting projects like that.
253
+ [906.14 --> 906.26] Right.
254
+ [906.42 --> 907.38] You've made so many things.
255
+ [907.72 --> 910.50] And I'm going to ask one thing, which is very close to my heart.
256
+ [910.66 --> 915.54] So in Cloud Foundry, we used to use BOSH to manage Cloud Foundry.
257
+ [915.74 --> 915.96] Yeah.
258
+ [915.96 --> 920.34] Is there such a thing in Kubernetes where when you deploy Kubernetes on bare metal,
259
+ [920.70 --> 921.62] what would you say?
260
+ [921.76 --> 927.06] What, like, what should users or teams use for that management of Kubernetes on bare metal
261
+ [927.06 --> 927.78] or on-prem?
262
+ [928.02 --> 931.88] There's a variety of tools for deploying Kubernetes to bare metal installations.
263
+ [932.28 --> 935.86] And that's not really the hard part with Kubernetes.
264
+ [936.16 --> 939.06] In the cloud, there's managed Kubernetes and that solves all your problems.
265
+ [939.18 --> 942.04] But that's really, that's not the problem with Kubernetes in complexity.
266
+ [942.04 --> 946.12] In fact, getting a Kubernetes cluster up and running is fairly easy.
267
+ [946.76 --> 950.40] On bare metal, you have some issues with the networking, but there's projects to solve that
268
+ [950.40 --> 955.54] you've got Kube router and you've got Metal LB and you've got others that solve that problem
269
+ [955.54 --> 955.94] for you.
270
+ [956.20 --> 958.74] It's interesting that you brought up Bosch and Cloud Foundry.
271
+ [958.82 --> 963.52] And for those who don't know, the way that Cloud Foundry was designed was that we had two
272
+ [963.52 --> 964.24] different products.
273
+ [964.60 --> 972.22] We had BOSH, which was sort of a competitor to Terraform and Ansible and Salt.
274
+ [972.54 --> 978.08] I think, I don't know this for sure, but I think it came right out of Google's Borg.
275
+ [978.08 --> 980.48] It's like a rewrite of Borg, basically.
276
+ [980.66 --> 983.00] And it's very difficult to use.
277
+ [983.12 --> 988.28] But once you use it, like once you learn it, Stockholm syndrome kicks in and you start to
278
+ [988.28 --> 988.50] love it.
279
+ [988.56 --> 991.20] There's huge Bosch fanatics, right?
280
+ [991.34 --> 994.58] And Bosch was the tool that the operator used to deploy Cloud Foundry.
281
+ [994.68 --> 996.74] Very difficult to use, but very powerful.
282
+ [997.16 --> 1003.18] And Cloud Foundry was the interface that the operator then could present to the application
283
+ [1003.18 --> 1007.80] developers, which was basically a blatant ripoff of Heroku, which was a great model.
284
+ [1008.10 --> 1013.24] 12 factor build packs, all that stuff made it real easy for application developers.
285
+ [1013.70 --> 1014.58] But here's the interesting thing.
286
+ [1014.82 --> 1021.12] I refer to that as the great wall DevOps model, where Cloud Foundry allowed the operator to
287
+ [1021.12 --> 1028.58] serve the application developer well by giving the operator this beautiful wall that both sides
288
+ [1028.58 --> 1029.50] really appreciated.
289
+ [1029.50 --> 1033.48] The operator appreciated how easy it was to manage Cloud Foundry through BOSH and the
290
+ [1033.48 --> 1038.04] application developer appreciated how powerful it was for them to manage their application
291
+ [1038.04 --> 1039.34] through Cloud Foundry.
292
+ [1040.22 --> 1042.00] Kubernetes is entirely different from that, right?
293
+ [1042.08 --> 1047.52] Kubernetes is what I call the kumbaya DevOps model, where everybody has to know everything,
294
+ [1047.74 --> 1047.96] right?
295
+ [1048.32 --> 1052.06] Kubernetes doesn't have the concept of an operator versus an application developer.
296
+ [1052.06 --> 1058.68] At best, it gives you some tools where you can kind of build that using RBACs and stuff,
297
+ [1058.74 --> 1060.76] but that's really difficult to do.
298
+ [1061.22 --> 1064.12] And nobody knows quite where the line is supposed to be.
299
+ [1064.60 --> 1066.96] And so, yeah, so everybody does it differently, you know?
300
+ [1067.48 --> 1067.64] Yeah.
301
+ [1068.18 --> 1068.48] Okay.
302
+ [1068.98 --> 1071.32] So they do have YAML in common.
303
+ [1074.32 --> 1075.30] That's still around.
304
+ [1076.52 --> 1078.86] That's like still a paid, but maybe not for long.
305
+ [1078.94 --> 1079.30] Who knows?
306
+ [1079.38 --> 1079.72] We'll see.
307
+ [1079.72 --> 1080.12] Okay.
308
+ [1080.12 --> 1086.62] So what I'm taking away from this is that Kubernetes is everywhere and Teams, they need
309
+ [1086.62 --> 1090.08] Kubernetes because it's the easiest way to get something out there.
310
+ [1090.16 --> 1090.76] It's ubiquitous.
311
+ [1090.90 --> 1091.46] It's everywhere.
312
+ [1091.92 --> 1092.06] Yeah.
313
+ [1092.12 --> 1093.96] And it handles the complexity really well.
314
+ [1094.14 --> 1094.98] So you're right.
315
+ [1095.08 --> 1099.02] The 80 resource types plus all the custom ones that you can install.
316
+ [1099.12 --> 1101.06] And typically you get via CRDs.
317
+ [1101.28 --> 1101.46] Yeah.
318
+ [1101.58 --> 1103.52] You get even more and they get even more complicated.
319
+ [1103.52 --> 1109.56] It's a great way of modeling some really complex software, whether it's microservices, whether
320
+ [1109.56 --> 1111.30] it's stateful services.
321
+ [1111.30 --> 1115.50] And that's like, hmm, not fully, but it's getting there for sure.
322
+ [1115.90 --> 1120.12] I think there was like a maturity level that had to happen at the data services side as well.
323
+ [1120.12 --> 1122.54] Just understand that operating model.
324
+ [1122.54 --> 1123.88] It's not just ubiquitous.
325
+ [1124.02 --> 1125.92] It's just becoming the standard, right?
326
+ [1126.00 --> 1132.76] It's expected that if you're going to, as you said, model out your infrastructure, your
327
+ [1132.76 --> 1138.02] application infrastructure, then you're going to do it in YAML using Kubernetes objects, right?
328
+ [1138.02 --> 1139.26] So that you can deploy it anywhere.
329
+ [1139.48 --> 1143.48] And there are some really great projects in this Kubernetes ecosystem and in the bigger cloud
330
+ [1143.48 --> 1145.60] native ecosystem, which work well together.
331
+ [1145.60 --> 1152.06] But it's intricacy of finding the right combination of the objects or like the products that make
332
+ [1152.06 --> 1152.76] sense to you.
333
+ [1152.92 --> 1154.72] And that's where the complexity lies in.
334
+ [1154.86 --> 1158.84] So the kumbaya, anything goes and everything goes.
335
+ [1158.96 --> 1163.12] And by the way, there are teams for which a certain combination makes sense, which would
336
+ [1163.12 --> 1164.32] never work for other teams.
337
+ [1164.44 --> 1165.90] And that's what gives it the beauty.
338
+ [1166.02 --> 1166.76] Also the complexity.
339
+ [1167.28 --> 1168.54] It's building blocks, right?
340
+ [1168.62 --> 1170.90] The entire community is all about building blocks.
341
+ [1170.90 --> 1176.40] And if you have a large enough team that you can dedicate a couple of people to choosing
342
+ [1176.40 --> 1181.16] the right building blocks and wiring them all together and producing this really great
343
+ [1181.16 --> 1183.78] experience for your engineers, then that's great.
344
+ [1184.02 --> 1186.30] Do you think that teams would be better without Kubernetes?
345
+ [1186.70 --> 1186.96] Yeah.
346
+ [1187.24 --> 1193.26] I mean, again, it depends on the size of the team, but I'm going to just ballpark that 30%
347
+ [1193.26 --> 1198.34] ish of people who come to us saying, we're looking to embrace Kubernetes.
348
+ [1198.42 --> 1199.24] We're going to move to Kubernetes.
349
+ [1199.24 --> 1204.68] And we'd like your training or your help on the engineering side to get it done and to
350
+ [1204.68 --> 1205.24] get it done right.
351
+ [1205.50 --> 1211.00] About 30% of the time when people come to us asking for that, we try really hard to convince
352
+ [1211.00 --> 1211.74] them not to.
353
+ [1212.16 --> 1219.02] Because if you're a small startup, then unless you're doing something really complicated,
354
+ [1219.24 --> 1222.28] then it's just too much for you, right?
355
+ [1222.36 --> 1225.40] I mean, you're not focused on your own innovation.
356
+ [1225.40 --> 1229.14] Instead, you're focused on managing Kubernetes.
357
+ [1229.58 --> 1230.40] So here's the story.
358
+ [1230.62 --> 1235.46] When I was, I don't know, through most of my life, I've been a Linux user until around
359
+ [1235.46 --> 1237.90] 2006, I think it was.
360
+ [1238.26 --> 1241.44] And I used to run Linux on all kinds of hardware.
361
+ [1241.78 --> 1246.04] I ran, I was one of those geeks in college that had a small network of, you know, like
362
+ [1246.04 --> 1248.52] Sun and different servers and things like that.
363
+ [1248.52 --> 1252.60] And for the longest time, I ran Linux on my laptop as my daily driver.
364
+ [1253.16 --> 1259.42] And around 2006, I realized that I was spending 20% of my time trying to figure out how to
365
+ [1259.42 --> 1263.24] close my ThinkPad without the kernel panicking, right?
366
+ [1265.00 --> 1268.28] It's like about an hour a day, every day, you know?
367
+ [1268.86 --> 1269.68] Doesn't want to sleep.
368
+ [1270.00 --> 1270.72] Linux doesn't sleep.
369
+ [1271.16 --> 1272.20] Yeah, it's just it.
370
+ [1272.28 --> 1273.84] Yeah, it's always working for you, you know?
371
+ [1274.28 --> 1277.74] And I just flipped the table, I bought a Mac, and I never looked back, right?
372
+ [1278.02 --> 1282.10] To me, the analogy is that Kubernetes is that Linux on the laptop experience, right?
373
+ [1282.16 --> 1286.78] There's always going to be problems, because you're always integrating two dozen different
374
+ [1286.78 --> 1290.32] technologies to get a full Kubernetes system running.
375
+ [1290.58 --> 1294.52] And it's fine if you have administrators there to focus on that task.
376
+ [1294.52 --> 1298.50] But if you're, you know, a 10 person startup, that's not where you need to be.
377
+ [1298.50 --> 1304.26] You should be on like Heroku or Fly.io or what's the other one?
378
+ [1304.30 --> 1307.68] Nitrous or Google Cloud Run, Fargate, like any of those, right?
379
+ [1307.86 --> 1308.04] Yeah.
380
+ [1308.20 --> 1310.10] Are better choices than Kubernetes.
381
+ [1310.38 --> 1316.70] The litmus that we give these people when they come to us is stay on these fully managed
382
+ [1316.70 --> 1318.56] platforms for as long as you can.
383
+ [1318.66 --> 1322.50] And every time an engineer says, we should really use Kubernetes for this, that, or the
384
+ [1322.50 --> 1326.98] other, you say, no, we should stay within the confines of a 12-factor app, like as much
385
+ [1326.98 --> 1327.94] as you can, right?
386
+ [1327.94 --> 1332.46] You change your product definition so that you can stay within that confine, whatever
387
+ [1332.46 --> 1338.46] you can do, until you really believe that you need to provision raw EC2.
388
+ [1338.94 --> 1343.32] When an engineer says, look, this is an important feature, the only way we can get this feature
389
+ [1343.32 --> 1347.10] done is if you give me the keys to AWS, because I need to provision some instances, we're going
390
+ [1347.10 --> 1350.36] to configure those instances, we're going to run systemd on them, we're going to tie in
391
+ [1350.36 --> 1353.42] all the logging and all the metrics into some sort of centralized system, we're going
392
+ [1353.42 --> 1355.90] to have alerting and everything set up and all of that.
393
+ [1355.90 --> 1358.32] That's when you say, no, no, no, no, no, no.
394
+ [1358.40 --> 1365.00] We're never going to provision raw instances because Kubernetes is the future for all things
395
+ [1365.00 --> 1369.22] cloud level, all things that would be infrastructure as a service.
396
+ [1369.34 --> 1370.82] Instead, you should be using Kubernetes.
397
+ [1371.12 --> 1372.34] That's the inflection point.
398
+ [1372.34 --> 1387.74] This episode is brought to you by our friends at Incident.io.
399
+ [1388.14 --> 1392.42] Every software team on the planet has to manage incidents and a very large percentage of those
400
+ [1392.42 --> 1394.34] teams are using Slack to communicate.
401
+ [1394.52 --> 1395.50] That includes us.
402
+ [1395.50 --> 1400.76] With Incident.io, you can create, manage, and resolve incidents directly inside Slack.
403
+ [1401.04 --> 1401.96] Here's how it works.
404
+ [1402.22 --> 1404.30] Head to Incident.io and sign up for free.
405
+ [1404.52 --> 1405.94] Then add it to your Slack.
406
+ [1406.10 --> 1409.96] From there, you have a brand new incidents channel where all incidents get announced.
407
+ [1410.34 --> 1412.90] Use the slash incident command to create and manage incidents.
408
+ [1413.32 --> 1418.64] This command lets you share updates, assign roles, set important links, and more, all without
409
+ [1418.64 --> 1419.92] ever leaving the incident channel.
410
+ [1419.92 --> 1425.84] Each incident gets their own Slack channel plus a high-res dashboard at Incident.io with
411
+ [1425.84 --> 1427.76] the entire timeline from report to resolution.
412
+ [1428.30 --> 1431.50] Get everyone on the same page from the moment they join the incident and help stakeholders
413
+ [1431.50 --> 1432.36] stay in the loop.
414
+ [1432.72 --> 1436.86] Add Incident.io to your Slack today and prove to yourself and your team that they have everything
415
+ [1436.86 --> 1438.46] you need to streamline your incident management.
416
+ [1438.94 --> 1441.36] Learn more and sign up for free at Incident.io.
417
+ [1441.66 --> 1442.72] No credit card required.
418
+ [1443.22 --> 1444.62] Again, Incident.io.
419
+ [1449.92 --> 1464.08] I think that you've heard this question many times before, and I still have to ask it.
420
+ [1464.32 --> 1467.48] Do you think that Kubernetes would have been as popular and successful,
421
+ [1467.82 --> 1468.70] were it not for Docker?
422
+ [1469.20 --> 1469.52] Yeah.
423
+ [1469.70 --> 1470.56] Yeah, that's a great question.
424
+ [1470.90 --> 1472.18] I mean, obviously, who knows?
425
+ [1472.18 --> 1478.22] But from my point of view, I don't think Kubernetes would have gotten off the ground at all if
426
+ [1478.22 --> 1481.94] it wasn't for Docker as a standard, right?
427
+ [1482.14 --> 1484.06] We all know that Docker is a company.
428
+ [1484.48 --> 1487.66] They had an opportunity and they just couldn't quite execute on it.
429
+ [1487.84 --> 1489.02] So whatever.
430
+ [1489.20 --> 1490.08] That is what it is.
431
+ [1490.48 --> 1498.06] But the thing that Docker gave to the technology community is that standard of what it means
432
+ [1498.06 --> 1499.06] to be a container.
433
+ [1499.06 --> 1504.50] And we all know that there were containers before Docker, right?
434
+ [1504.60 --> 1506.08] I mean, LXC, LXD.
435
+ [1506.34 --> 1510.34] There was Solaris Zones, FreeBSD Jails, sort of, right?
436
+ [1510.60 --> 1513.92] And things like Solaris Zones arguably were better, if I remember correctly.
437
+ [1514.06 --> 1516.56] They ran separate kernels per container, right?
438
+ [1516.78 --> 1524.00] But it was that standardization of how you create a container, and how
439
+ [1524.00 --> 1526.62] you create a container image, and what a container image actually is.
440
+ [1526.62 --> 1531.02] And that allowed tools like Kubernetes to flourish.
441
+ [1531.48 --> 1532.50] So absolutely not.
442
+ [1532.66 --> 1537.42] I don't think K8s would have been a thing without Docker at all.
443
+ [1537.88 --> 1543.24] Which, I mean, I understand that Kubernetes inside Google was Borg and Omega, right?
444
+ [1543.40 --> 1548.88] So obviously, it existed before Docker existed inside Google.
445
+ [1548.88 --> 1550.14] But that's a completely different thing.
446
+ [1550.14 --> 1555.30] In order to get community adoption, in order for this open source thing to flourish, if
447
+ [1555.30 --> 1559.84] Kubernetes had been built as an open source product and had its own idea of what a container
448
+ [1559.84 --> 1563.94] is and had this thing of you have to run these commands to generate an image and then we run
449
+ [1563.94 --> 1566.20] it, I just don't think it would have gotten adoption at all.
450
+ [1566.76 --> 1569.08] It wasn't just the standardization of Docker, too.
451
+ [1569.18 --> 1573.74] It was also, frankly, I don't want to use the term hype because Docker is a very powerful
452
+ [1573.74 --> 1575.14] and important technology.
453
+ [1575.36 --> 1576.76] But there was a wave, right?
454
+ [1576.92 --> 1581.28] Where people were just really excited about Docker and anything that embraced Docker got
455
+ [1581.28 --> 1583.18] an immediate uplift because of that.
456
+ [1583.26 --> 1585.58] And I think Kubernetes, you know, benefited from that.
457
+ [1586.06 --> 1586.14] Yeah.
458
+ [1586.48 --> 1591.80] I remember that age and period really well when you had to, like, run containers.
459
+ [1591.98 --> 1593.24] Didn't matter how, didn't matter where.
460
+ [1593.28 --> 1594.46] You just had to run containers.
461
+ [1594.88 --> 1596.56] And Kubernetes wasn't a thing back then.
462
+ [1596.56 --> 1598.86] So few people even knew what containers were, right?
463
+ [1599.02 --> 1599.28] Exactly.
464
+ [1599.42 --> 1599.84] They're like, what?
465
+ [1599.96 --> 1600.46] Containers what?
466
+ [1600.56 --> 1601.70] Like, why would you want containers?
467
+ [1601.70 --> 1601.86] containers.
468
+ [1602.40 --> 1605.10] And I remember FreeBSD jails as well.
469
+ [1605.22 --> 1608.18] I'm yet to start a FreeBSD jail successfully.
470
+ [1608.54 --> 1613.46] I've started that project when, like, 10 years ago when I got, like, my first FreeBSD
471
+ [1613.46 --> 1613.80] server.
472
+ [1614.24 --> 1618.62] And to this day, I never got the jail up and running because of how complicated it
473
+ [1618.62 --> 1618.88] was.
474
+ [1618.98 --> 1619.24] Yes.
475
+ [1619.34 --> 1621.68] And I started, like, ah, there's, like, so many configuration options.
476
+ [1621.68 --> 1624.50] And Docker made it run a command and you have it.
477
+ [1624.80 --> 1625.52] That was brilliant.
478
+ [1626.16 --> 1628.68] So as an idea, as a concept was really, really good.
479
+ [1628.68 --> 1633.02] And things then, they got complicated and, you know, it happened what happened.
480
+ [1633.20 --> 1634.20] But you're right.
481
+ [1634.26 --> 1637.96] We are here today where Docker is no longer part of Kubernetes.
482
+ [1638.34 --> 1638.90] It used to be.
483
+ [1639.00 --> 1640.94] And that created quite the confusion.
484
+ [1641.44 --> 1644.78] People say that, that, like, oh, Kubernetes dropped Docker and it's no longer.
485
+ [1644.92 --> 1648.16] But that's my point, is that we shouldn't be thinking about the word Docker.
486
+ [1648.26 --> 1650.56] We should be thinking about the standard that Docker created.
487
+ [1650.56 --> 1656.16] So Kubernetes is still using Docker as a standard just as much as it did before, right?
488
+ [1656.42 --> 1656.64] Yeah.
489
+ [1656.68 --> 1658.98] It's still an integral part of what it means to be Kubernetes.
490
+ [1659.26 --> 1660.62] I think it's the container runtime.
491
+ [1661.10 --> 1663.38] That's, you know, that clarification came afterwards.
492
+ [1663.38 --> 1667.00] Like, no, we're not dropping Docker support because Docker means so many things.
493
+ [1667.04 --> 1667.94] It became an ecosystem.
494
+ [1668.28 --> 1672.80] And even now, the default container registry is the Docker hub, right?
495
+ [1672.80 --> 1674.94] So if you don't specify, and that's also Docker.
496
+ [1674.94 --> 1680.74] It's part of Docker, but also the container runtime: containerd, runc, and a couple
497
+ [1680.74 --> 1681.18] of others.
498
+ [1681.36 --> 1682.74] But I think these are the two popular ones.
499
+ [1683.18 --> 1687.32] So that's what they meant by removing Docker as a dependency of Kubernetes.
500
+ [1687.90 --> 1691.62] And I'm wondering if you have to be good at Docker to do Kubernetes.
501
+ [1691.86 --> 1693.96] Like, do you need any experience with Docker?
502
+ [1694.16 --> 1696.40] Do you need to run Docker locally to get Kubernetes?
503
+ [1696.92 --> 1700.70] I know that you can get Kubernetes in Docker, which confuses a lot of people.
504
+ [1700.70 --> 1704.96] But I'd never recommend it, but, you know.
505
+ [1705.22 --> 1707.52] Turtles all the way down and turtles in a circle even.
506
+ [1707.68 --> 1707.78] Yeah.
507
+ [1707.98 --> 1712.14] We actually get that question a lot, especially when we're talking to people about our workshops,
508
+ [1712.34 --> 1714.32] because I guess the answer is sort of.
509
+ [1714.44 --> 1718.82] You sort of need to be good with Docker in order to be good with Kubernetes.
510
+ [1719.10 --> 1724.40] And what I mean by that is our core Kubernetes workshop actually doesn't use Docker at all.
511
+ [1724.50 --> 1727.08] You never run a Docker command throughout that entire workshop.
512
+ [1727.08 --> 1731.66] And even when we go under the hood, as you said, nowadays, you don't even see Docker on
513
+ [1731.66 --> 1733.92] the nodes because it's all containerd, right?
514
+ [1734.16 --> 1734.28] Yep.
515
+ [1734.44 --> 1741.26] You need to understand the concept of what containers are, as in sort of tiny VMs that
516
+ [1741.26 --> 1742.36] can share some stuff.
517
+ [1742.48 --> 1746.90] Like, we talk about the Linux namespaces that are being used in Kubernetes, right?
518
+ [1746.92 --> 1749.28] When we talk about the different things you can share amongst containers.
519
+ [1749.58 --> 1753.16] But you don't have to be great at crafting a Dockerfile, for example.
520
+ [1753.22 --> 1754.92] And crafting a Dockerfile is an art.
521
+ [1754.92 --> 1760.20] It is hard to create an efficient, really good Dockerfile and to understand all the security
522
+ [1760.20 --> 1761.16] implications and everything.
523
+ [1761.76 --> 1766.66] And to some degree, I think that shows how Docker did the tech community a service by
524
+ [1766.66 --> 1770.90] giving us the standard, but did us a disservice by making that standard so low level.
525
+ [1771.04 --> 1775.72] I mean, as an application developer, you need to understand not only apt-get install, but
526
+ [1775.72 --> 1779.78] also the apt-cache and the difference between Alpine Linux and Ubuntu.
527
+ [1780.24 --> 1781.42] All this stuff is kind of crazy.
528
+ [1781.42 --> 1788.84] So most successful teams that I've seen instead centralize at least the skill of crafting Dockerfiles,
529
+ [1788.94 --> 1794.38] if not just using a single centralized Dockerfile across all of your applications.
530
+ [1794.38 --> 1796.24] That's like a thing you can do, right?
531
+ [1796.46 --> 1803.44] So most teams I've seen have centralized that knowledge of how you create efficient Dockerfiles
532
+ [1803.44 --> 1803.94] and all that.
533
+ [1803.94 --> 1808.52] And then application developers just need to understand, maybe locally, they need to understand,
534
+ [1808.62 --> 1813.84] you know, Docker Compose up and maybe a few Docker command line things.
535
+ [1813.88 --> 1817.14] And they need to understand maybe how to push Docker images.
536
+ [1817.14 --> 1820.34] But frankly, often that's just taken care of by the CI/CD system too.
537
+ [1820.76 --> 1826.06] So no, I think you can make a lot of use of Kubernetes without having a deep understanding
538
+ [1826.06 --> 1826.42] of Docker.
539
+ [1826.42 --> 1831.56] For me, Kubernetes makes a lot more sense having started with Docker and having spent a couple
540
+ [1831.56 --> 1834.46] of years in that ecosystem before Kubernetes was a thing.
541
+ [1835.14 --> 1839.66] So, and that's very easy to ignore and forget because my beginning was not Kubernetes.
542
+ [1840.08 --> 1843.36] But many people, this is where they start and they missed the whole Docker thing.
543
+ [1843.44 --> 1847.10] I mean, they may have been running it locally, but not to the point that they understand it,
544
+ [1847.18 --> 1850.52] not to the point that they've been using it for a couple of years and really understand
545
+ [1850.52 --> 1851.56] what's happening under the hood.
546
+ [1851.56 --> 1856.62] So I think some Docker concepts, and as I mentioned, and as you've mentioned, it's not
547
+ [1856.62 --> 1857.18] just the runtime.
548
+ [1857.48 --> 1861.70] There's so many other aspects of Docker are really helpful to get started with Kubernetes.
549
+ [1862.34 --> 1866.02] What other things do you think are helpful when you get started with Kubernetes?
550
+ [1866.62 --> 1872.04] In terms of knowledge, I think it's almost more important to have a deeper understanding
551
+ [1872.04 --> 1875.24] of Linux networking and just networking in general.
552
+ [1875.40 --> 1881.12] From our experience, understanding how a cluster IP service works, for example, and all the IP
553
+ [1881.12 --> 1886.04] tables stuff that happens there, understanding how load balancers work, understanding why
554
+ [1886.04 --> 1890.76] node ports are a terrible idea, or understanding how ingresses work at layer seven, right?
555
+ [1891.32 --> 1897.50] All of that is conceptually harder for our students from what we've seen and conceptually harder
556
+ [1897.50 --> 1902.32] for people who are new to Kubernetes because they just never had to deal with that kind
557
+ [1902.32 --> 1903.44] of networking knowledge.
558
+ [1903.76 --> 1908.58] I think another thing that's important for a team who's getting started with, well, first
559
+ [1908.58 --> 1910.26] of all, let's talk about how you should adopt Kubernetes.
560
+ [1910.26 --> 1915.92] First of all, even though I kind of pooh-poohed the value of the Kubernetes managed services
561
+ [1915.92 --> 1920.56] like EKS, AKS, and GKE, you absolutely should use them.
562
+ [1920.68 --> 1923.20] I mean, yes, you can deploy your own cluster, but why?
563
+ [1923.56 --> 1925.88] Like, just go with one of the managed solutions.
564
+ [1926.10 --> 1928.44] Frankly, they're cheaper, especially GKE, right?
565
+ [1928.54 --> 1933.50] And if you have a choice just to, you know, if you have your druthers about which cloud to
566
+ [1933.50 --> 1940.64] be on, GKE is by far the best experience, and Azure is by far the worst experience, not
567
+ [1940.64 --> 1943.40] just in terms of Kubernetes, but just across the board, right?
568
+ [1943.86 --> 1945.10] And AWS is what it is.
569
+ [1945.18 --> 1948.00] So if you're on AWS, you're probably forced to be on AWS and whatever.
570
+ [1948.20 --> 1948.94] You're on EKS.
571
+ [1949.16 --> 1952.76] And then once you've got that, as I mentioned before, there's so much other stuff that has
572
+ [1952.76 --> 1955.10] to be configured and deployed on top of that.
573
+ [1955.28 --> 1957.72] And our best advice is just to keep it as simple as you can.
574
+ [1957.72 --> 1963.54] Most of our customers have already spent so many innovation points when they are adopting
575
+ [1963.54 --> 1964.00] Kubernetes.
576
+ [1964.40 --> 1970.18] We kind of feel it's our mission, our job to help guide them towards more conservative
577
+ [1970.18 --> 1976.30] solutions and fewer moving parts, because it's so tempting once you've got Kubernetes,
578
+ [1976.42 --> 1979.30] like, oh, I guess I need Istio because Istio does all these cool things.
579
+ [1979.38 --> 1979.82] It does.
580
+ [1980.08 --> 1982.16] And if you need those things, that's great.
581
+ [1982.46 --> 1983.38] Jump on board.
582
+ [1983.56 --> 1985.96] But holy crap, is Istio complicated?
583
+ [1985.96 --> 1987.50] And it's dangerous.
584
+ [1987.72 --> 1991.08] I mean, like, if you misconfigure Istio, like, you can really do damage to your production
585
+ [1991.08 --> 1991.44] traffic.
586
+ [1991.94 --> 1996.86] And, you know, avoid any tooling that you don't have an immediate pain point for.
587
+ [1997.14 --> 2001.88] When you look at the CNCF landscape, it can often look like you're in a toy store, you
588
+ [2001.88 --> 2002.02] know?
589
+ [2002.12 --> 2005.42] You see all these wonderful, cool gadgets, and you just want to grab them all up into
590
+ [2005.42 --> 2005.86] your basket.
591
+ [2006.04 --> 2010.78] But you need to show a lot of restraint, because every one of those that you add is something
592
+ [2010.78 --> 2012.40] else you have to manage and understand.
593
+ [2012.90 --> 2013.26] Oh, yes.
594
+ [2013.60 --> 2013.88] Yes.
595
+ [2014.08 --> 2016.80] Most people forget about that, like, install it, and that's it.
596
+ [2016.80 --> 2018.20] Well, how are you going to upgrade it?
597
+ [2018.20 --> 2018.32] Right.
598
+ [2018.52 --> 2021.26] And some components don't upgrade as well as others.
599
+ [2021.64 --> 2021.80] Yep.
600
+ [2021.98 --> 2026.02] And then that just opens, like, a whole new world of problems, like, a whole new set of
601
+ [2026.02 --> 2026.36] problems.
602
+ [2026.88 --> 2031.00] Like, do you upgrade in place, or do you stand up another Kubernetes cluster?
603
+ [2031.34 --> 2034.74] And if a cluster gets too big, well, should you split in multiple clusters?
604
+ [2034.82 --> 2038.36] And before you know it, you're, like, you're solving problems that you didn't even know
605
+ [2038.36 --> 2040.00] existed before you chose Istio.
606
+ [2040.00 --> 2041.02] So maybe don't.
607
+ [2041.36 --> 2041.38] Right.
608
+ [2041.56 --> 2042.00] Exactly.
609
+ [2042.40 --> 2042.84] Exactly.
610
+ [2043.28 --> 2044.44] You're, like, where am I?
611
+ [2045.40 --> 2045.84] Exactly.
612
+ [2046.60 --> 2048.02] I thought I understood networking.
613
+ [2048.26 --> 2048.80] No, you don't.
614
+ [2049.52 --> 2049.68] Right.
615
+ [2049.80 --> 2050.00] Yeah.
616
+ [2050.08 --> 2052.98] When you understand networking, then you see how Istio actually works.
617
+ [2053.08 --> 2054.20] You're, like, oh, my gosh.
618
+ [2054.74 --> 2058.12] And there are some components that are kind of table stakes for a new cluster.
619
+ [2058.34 --> 2062.32] Like, cert manager is a great example of just, okay, everybody should have cert manager
620
+ [2062.32 --> 2063.12] running in their cluster.
621
+ [2063.12 --> 2068.70] But there's so many other things that are cool and interesting, but probably not something
622
+ [2068.70 --> 2068.96] you need.
623
+ [2069.20 --> 2070.48] Another example is Helm.
624
+ [2070.76 --> 2077.36] Helm, as a tool, is amazing for installing third-party packages, something that somebody
625
+ [2077.36 --> 2078.74] else has to maintain, right?
626
+ [2078.76 --> 2079.70] You need Postgres?
627
+ [2079.88 --> 2082.36] Then, sure, use the official Postgres Helm chart.
628
+ [2082.42 --> 2084.56] That's the best way to do it, by far.
629
+ [2085.06 --> 2088.42] Well, Postgres may be a bad example, because there's also operators that do an even better
630
+ [2088.42 --> 2089.00] job, right?
631
+ [2089.00 --> 2095.72] But what I see teams immediately doing, because they just didn't know any better, they just
632
+ [2095.72 --> 2099.70] assume that this is how you use Kubernetes, is they start building Helm charts for their
633
+ [2099.70 --> 2100.68] internal applications.
634
+ [2100.94 --> 2102.66] Small teams doing this.
635
+ [2103.08 --> 2110.80] And Helm, although it's great for package distribution and consuming third-party software, in order
636
+ [2110.80 --> 2118.24] to author a Helm chart, you are using a Turing-complete templating language in order to generate
637
+ [2118.24 --> 2120.98] whitespace-sensitive data structures.
638
+ [2121.38 --> 2122.38] How crazy is that?
639
+ [2122.74 --> 2123.08] Oh, my goodness.
640
+ [2123.08 --> 2123.68] It's just crazy.
641
+ [2123.78 --> 2124.54] It's crazy, right?
642
+ [2124.84 --> 2126.00] I'm glad it's not just me.
643
+ [2126.10 --> 2127.46] That thing's exactly the same way.
644
+ [2127.58 --> 2128.68] I'm glad it's not just me.
645
+ [2128.74 --> 2129.58] So I'm not the crazy one.
646
+ [2129.64 --> 2130.06] Okay, good.
647
+ [2130.86 --> 2133.44] Okay, so I have confirmation that I'm not crazy.
648
+ [2134.90 --> 2135.26] Okay.
649
+ [2135.50 --> 2138.68] I don't know about that, but just one aspect, you're not crazy.
650
+ [2139.08 --> 2139.52] Damn it.
651
+ [2139.68 --> 2140.00] Almost.
652
+ [2140.64 --> 2141.00] Almost.
653
+ [2143.00 --> 2143.36] Almost.
654
+ [2143.74 --> 2145.96] And the sad thing about it is they just don't know any better.
655
+ [2145.96 --> 2147.14] They've got very simple applications.
656
+ [2147.14 --> 2151.84] They're a small team, and they end up spending a lot of time building these Helm charts to
657
+ [2151.84 --> 2154.10] make them, you know, to distribute them and stuff.
658
+ [2154.16 --> 2154.84] You don't need that.
659
+ [2155.06 --> 2162.98] Like, Kustomize, for example, is a great tool for managing your YAML when it's being deployed
660
+ [2162.98 --> 2163.90] to multiple environments.
661
+ [2163.96 --> 2165.76] Because you can make very small changes.
662
+ [2165.98 --> 2168.94] Kustomize is much easier to understand, much easier to maintain.
663
+ [2169.36 --> 2171.64] If you're really small, you don't even need a tool like that.
664
+ [2171.68 --> 2174.62] You could just apply the YAML and just call it a day, you know?
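(A minimal sketch of the idea behind Kustomize-style overlays: one shared base plus a tiny per-environment patch instead of a templating language. It is written as plain Python dicts so it stays self-contained; the manifest fields, image, and environment names are illustrative, not a real project's values.)

```python
# Sketch of the Kustomize idea: a shared base manifest plus small per-environment
# overlays. Plain Python for illustration only; Kustomize itself works on YAML files.
import copy

BASE_DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "app"},
    "spec": {
        "replicas": 1,
        "template": {
            "spec": {"containers": [{"name": "app", "image": "registry.example.com/app:latest"}]}
        },
    },
}

# Each overlay states only what differs from the base.
OVERLAYS = {
    "staging": {"spec": {"replicas": 1}},
    "production": {"spec": {"replicas": 4}},
}

def apply_overlay(base: dict, patch: dict) -> dict:
    """Recursively merge a small patch on top of the base manifest."""
    merged = copy.deepcopy(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_overlay(merged[key], value)
        else:
            merged[key] = value
    return merged

if __name__ == "__main__":
    for env, patch in OVERLAYS.items():
        manifest = apply_overlay(BASE_DEPLOYMENT, patch)
        print(f"{env}: {manifest['spec']['replicas']} replica(s)")
```

Kustomize expresses the same base-plus-overlay split declaratively, with a kustomization.yaml per environment, which is why small teams tend to find it easier to maintain than authoring Helm charts for their own apps.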
665
+ [2174.62 --> 2181.22] I think when a team chooses Kubernetes, where it should focus on is automation.
666
+ [2181.58 --> 2188.08] Building out their own internal automation system, not just for managing the cluster using
667
+ [2188.08 --> 2193.60] like Terraform, which is by far the best tool for that kind of stuff, but also for managing
668
+ [2193.60 --> 2195.46] the resources inside the cluster.
669
+ [2195.46 --> 2200.34] You know, a CI/CD pipeline, maybe using like GitOps at the end or whatever.
670
+ [2200.62 --> 2203.34] That's the fundamentals that your team should focus on.
671
+ [2203.38 --> 2206.56] Because once you have that, all the other changes become simpler.
672
+ [2206.68 --> 2212.66] And frankly, that automation is the half of the value prop of Kubernetes because the Kubernetes
673
+ [2212.66 --> 2214.20] API is so good.
674
+ [2214.68 --> 2217.16] It's so easy to automate stuff through Kubernetes.
675
+ [2217.16 --> 2222.44] And if you're not investing in that automation, you're wasting that value.
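(To make "the API is easy to automate against" concrete, here is a minimal sketch using the official Kubernetes Python client; it assumes `pip install kubernetes`, a working kubeconfig, and placeholder namespace, deployment, container, and image names.)

```python
# Minimal sketch of automating a rollout through the Kubernetes API.
# Assumes `pip install kubernetes` and a kubeconfig pointing at your cluster.
# The namespace, deployment, container, and image below are placeholders.
from kubernetes import client, config

def set_image(namespace: str, deployment: str, container: str, image: str) -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running in a pod
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {"spec": {"containers": [{"name": container, "image": image}]}}
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)
    print(f"rolled {namespace}/{deployment} to {image}")

if __name__ == "__main__":
    set_image("default", "app", "app", "registry.example.com/app:2021-10-01")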
676
+ [2222.88 --> 2227.12] And then obviously, I mean, I run a company, so I should say that like, if you're just choosing
677
+ [2227.12 --> 2230.20] Kubernetes, you should be looking for training.
678
+ [2230.46 --> 2233.88] And I love our workshops, obviously, but there's others, right?
679
+ [2233.94 --> 2239.12] But you do need to invest in your engineer's knowledge because they are going to have to
680
+ [2239.12 --> 2240.40] debug it when it goes sideways.
681
+ [2240.76 --> 2245.24] And you don't want them floundering and using Stack Overflow in the middle of an outage.
682
+ [2245.24 --> 2250.64] If you can find, we offer engineering services, usually not for people who are just now adopting
683
+ [2250.64 --> 2254.42] Kubernetes, unless you've got a very interesting application you're moving over.
684
+ [2254.70 --> 2261.02] But you should be finding experts, either hiring Kubernetes experts or finding a partner that
685
+ [2261.02 --> 2265.74] you can integrate with your team that will give you those subject matter experts for Kubernetes,
686
+ [2265.92 --> 2271.60] because you're going to save a lot more time and money in the long run if you do that early on.
687
+ [2275.24 --> 2287.82] What's up, shippers?
688
+ [2287.94 --> 2290.48] This episode is brought to you by Sentry.
689
+ [2290.72 --> 2294.70] You already know working code means happy customers, and that's exactly why teams choose
690
+ [2294.70 --> 2295.08] Sentry.
691
+ [2295.30 --> 2299.42] From error tracking to performance monitoring, Sentry helps teams see what actually matters,
692
+ [2299.74 --> 2303.88] resolve problems quicker, and learn continuously about their applications from the front end to
693
+ [2303.88 --> 2304.56] the back end.
694
+ [2304.56 --> 2310.44] Over a million developers and 70,000 organizations already ship better software faster with Sentry.
695
+ [2310.78 --> 2311.42] And guess what?
696
+ [2311.58 --> 2312.30] You can too.
697
+ [2312.70 --> 2315.50] Ship it listeners new to Sentry get the team plan for free for three months.
698
+ [2315.86 --> 2317.96] Use the code SHIPIT when you sign up.
699
+ [2318.20 --> 2320.78] Head to sentry.io and use the code SHIPIT.
700
+ [2321.18 --> 2323.10] And by our friends at Equinix Metal.
701
+ [2323.48 --> 2327.82] If you want the choice and control of hardware with low overhead and the developer experience of
702
+ [2327.82 --> 2329.88] the cloud, check out Equinix Metal.
703
+ [2330.24 --> 2334.36] Deploy in minutes across 18 global locations from Silicon Valley to Sydney,
704
+ [2334.78 --> 2339.74] visit metal.equinix.com slash just add metal and receive $100 in credit to play with.
705
+ [2340.06 --> 2343.66] Again, metal.equinix.com slash just add metal.
706
+ [2343.66 --> 2366.26] You've touched on a really important point, namely the investment in automation.
707
+ [2366.90 --> 2371.14] So if you use Kubernetes, that's great, especially if you need it.
708
+ [2371.14 --> 2373.48] But you will have to invest in automation.
709
+ [2374.02 --> 2378.94] And I think there's a set of principles which are really important that you have once you
710
+ [2378.94 --> 2385.22] enter this world of cloud native, Kubernetes, because otherwise making choices will be really
711
+ [2385.22 --> 2385.66] difficult.
712
+ [2386.22 --> 2391.44] Automation is really important once you are in the world of Kubernetes, in the world of cloud
713
+ [2391.44 --> 2391.84] native.
714
+ [2392.00 --> 2392.30] Absolutely.
715
+ [2392.30 --> 2394.10] What other things are important?
716
+ [2394.76 --> 2399.08] Well, I mean, if you're going to move into that world, again, as we said before, the complication
717
+ [2399.08 --> 2400.12] is just massive.
718
+ [2400.12 --> 2403.98] I mean, there's so much that you're pinning together, that you're tying together.
719
+ [2404.28 --> 2410.16] I think that it's important if you're going to do that, that you invest in education in
720
+ [2410.16 --> 2413.90] your engineers so that they can understand this complexity.
721
+ [2414.88 --> 2420.04] And depending on the size of the company that you are, depending on the size of your engineering
722
+ [2420.04 --> 2426.42] team, many companies invest in what we're calling internal platforms.
723
+ [2426.42 --> 2430.38] And you can just view that as an extension of the automation.
724
+ [2430.38 --> 2436.32] It's almost a spectrum of how sophisticated these internal platforms get and kind of what
725
+ [2436.32 --> 2437.88] model they use.
726
+ [2438.04 --> 2446.42] All the way from on the lowest level side is just the platform team providing maybe centralized
727
+ [2446.42 --> 2449.46] Docker file, maybe a centralized Helm chart.
728
+ [2449.64 --> 2452.64] That's one of the few times we've seen Helm used internally in a good way.
729
+ [2452.64 --> 2458.68] And a centralized CI CD system so that the application developers can plug their app into the Helm chart
730
+ [2458.68 --> 2460.20] using that Docker file.
731
+ [2460.84 --> 2466.04] And it gets automatically deployed to all the various environments and such.
732
+ [2466.58 --> 2470.52] Then on the other side of the spectrum is implementing a full Heroku, right?
733
+ [2470.52 --> 2477.50] Where the developers are insulated 100% from the details of Kubernetes and they're given a really
734
+ [2477.50 --> 2478.34] nice interface.
735
+ [2478.68 --> 2481.30] We have never seen that done successfully, just to be clear.
736
+ [2481.48 --> 2488.48] Like I've never seen that work where the developers did not still have to understand the intricacies
737
+ [2488.48 --> 2492.24] of Kubernetes because at some point they got to break glass in case of emergency.
738
+ [2492.78 --> 2492.88] Yeah.
739
+ [2493.14 --> 2494.24] Because you have to run it, right?
740
+ [2494.38 --> 2496.06] You've built it, but you have to run it.
741
+ [2496.22 --> 2496.80] And guess what?
742
+ [2496.82 --> 2497.66] It's running on Kubernetes.
743
+ [2497.66 --> 2502.84] So if you don't know how to debug it or even understand what is happening, good luck to you.
744
+ [2503.06 --> 2508.46] And if your platform team is so good that they have actually built a full interface on top of
745
+ [2508.46 --> 2513.20] Kubernetes that takes care of all the details and the application developer only needs to interact
746
+ [2513.20 --> 2515.30] with that interface, that platform they built.
747
+ [2515.68 --> 2516.44] I've got news for you.
748
+ [2516.46 --> 2517.68] You're probably in the wrong industry.
749
+ [2517.88 --> 2521.30] Like you should spin that off and clear house, right?
750
+ [2521.58 --> 2526.20] Oh, you gave me an idea because even though we use Kubernetes to run all of changelog,
751
+ [2526.20 --> 2528.00] the developers, they don't know that.
752
+ [2528.20 --> 2531.98] They still just git push and all the automation takes care of the rest.
753
+ [2532.30 --> 2536.54] So we were using Docker Swarm before and we were using Docker before.
754
+ [2536.70 --> 2540.20] The experience, as far as developers are concerned, it has never changed.
755
+ [2540.42 --> 2542.22] It has always been git push.
756
+ [2542.54 --> 2544.22] Like, isn't that the Heroku experience?
757
+ [2544.34 --> 2545.18] Git push and it runs.
758
+ [2546.18 --> 2546.50] That is.
759
+ [2546.60 --> 2546.88] That is.
760
+ [2547.26 --> 2548.58] But what happens when there's a fire?
761
+ [2548.76 --> 2550.20] How do the developers debug when...
762
+ [2550.72 --> 2551.34] They don't.
763
+ [2551.38 --> 2551.64] Okay.
764
+ [2551.64 --> 2552.78] They don't.
765
+ [2553.72 --> 2556.48] So around that, we have a set of services.
766
+ [2556.70 --> 2561.66] Like, for example, Grafana Cloud, where we send all the logs, all the metrics.
767
+ [2562.14 --> 2565.54] So if there is a problem, that's one of the first places where you would look.
768
+ [2565.72 --> 2567.92] The new addition was integrating with Honeycomb.
769
+ [2568.08 --> 2568.30] Nice.
770
+ [2568.40 --> 2572.86] And Honeycomb gets the Fastly logs as well, which is the CDN.
771
+ [2572.90 --> 2575.54] Because it's not just Kubernetes, it's also what's in front of it.
772
+ [2575.68 --> 2577.02] And then what's behind it as well.
773
+ [2577.14 --> 2578.24] There's like all these components.
774
+ [2578.24 --> 2585.08] So having these different ways of understanding what is happening in your runtime, whether
775
+ [2585.08 --> 2588.76] it's Kubernetes or something else, is important regardless what the runtime is.
776
+ [2589.04 --> 2590.46] For example, getting exceptions.
777
+ [2590.82 --> 2596.96] That's like a really old thing, which we used to do when we used to SCP our Ruby code over,
778
+ [2596.96 --> 2598.22] or FTP it, right?
779
+ [2598.48 --> 2600.36] We still used to get like exceptions.
780
+ [2600.64 --> 2603.04] I forget like what the name of that tool was.
781
+ [2603.14 --> 2605.38] Do you remember what we used back in the day?
782
+ [2605.48 --> 2606.36] There was a number of them.
783
+ [2606.36 --> 2607.94] In fact, I actually wrote one of them.
784
+ [2608.24 --> 2608.64] Exactly.
785
+ [2608.88 --> 2609.84] That's why I'm asking you.
786
+ [2610.24 --> 2616.26] I wrote Hoptoad, which later became Airbrake and competed against Get Exceptional.
787
+ [2616.44 --> 2621.80] And hilariously, both Airbrake and Get Exceptional were purchased by the same person.
788
+ [2621.92 --> 2624.44] And now they're actually running under the same umbrella, which is kind of funny.
789
+ [2625.06 --> 2625.34] Right.
790
+ [2626.08 --> 2627.46] Yeah, you need all these things.
791
+ [2627.56 --> 2630.86] You need all these interfaces into understanding what your application is doing.
792
+ [2631.12 --> 2632.50] I'm really excited, by the way.
793
+ [2632.58 --> 2635.74] This is a bit of a tangent, but I'm really excited by all the stuff that's going on with
794
+ [2635.74 --> 2640.54] eBPF, especially with things like, I think it's New Relic's Pixie.
795
+ [2640.94 --> 2648.78] So yeah, New Relic's Pixie is really exciting because of the deep insight it can give in a
796
+ [2648.78 --> 2650.08] language agnostic way.
797
+ [2650.30 --> 2654.90] It's one of those things that you could see as a building block so that the developer does
798
+ [2654.90 --> 2658.34] not need access to kubectl exec, for example.
799
+ [2658.34 --> 2658.78] Exactly.
800
+ [2659.98 --> 2660.50] That's it.
801
+ [2660.60 --> 2665.68] That's, I think, what a successful ops side of running Kubernetes looks like, where you
802
+ [2665.68 --> 2666.80] don't have to get there.
803
+ [2667.06 --> 2671.66] As a developer, for example, Blue Green, if you do that properly, and if you have all
804
+ [2671.66 --> 2676.02] the redundancies in place, even when something goes down, the end user doesn't see that.
805
+ [2676.32 --> 2677.86] And it doesn't matter that it runs Kubernetes.
806
+ [2678.32 --> 2683.02] And when it comes to debugging it, well, if you're a small team, and let's say the problem
807
+ [2683.02 --> 2684.32] is in Heroku, what happens?
808
+ [2684.56 --> 2685.46] Do you debug Heroku?
809
+ [2685.72 --> 2685.98] No.
810
+ [2685.98 --> 2687.06] No way.
811
+ [2687.48 --> 2690.70] You don't get the keys to Heroku to debug the stack, right?
812
+ [2691.26 --> 2692.82] It just gets scheduled somewhere else.
813
+ [2693.02 --> 2694.06] And that's how that gets solved.
814
+ [2694.70 --> 2699.34] So what I'm saying is having that visibility into how things run is really important.
815
+ [2699.92 --> 2703.96] And if that's your experience and your interface, that's great.
816
+ [2704.06 --> 2708.32] I think that's one of the principles that are really important, regardless what the runtime
817
+ [2708.32 --> 2708.74] is.
818
+ [2708.92 --> 2710.54] And if it's Kubernetes, so be it.
819
+ [2710.74 --> 2714.94] If you're going to be using something like Kubernetes, you need to invest doubly strongly
820
+ [2714.94 --> 2718.46] in observability and in all of that metrics.
821
+ [2718.46 --> 2725.44] But I'd argue that you need that just as much, if not more, if you're not using Kubernetes.
822
+ [2725.44 --> 2732.30] If you're trying to do raw AWS, for example, it's even harder to build all that observability
823
+ [2732.30 --> 2733.48] infrastructure in place.
824
+ [2733.48 --> 2738.74] But it's absolutely, if you're just moving into the cloud world and moving into this whole
825
+ [2738.74 --> 2745.50] type of world where automation and where it's a cloudy world that's focused on automation,
826
+ [2745.50 --> 2751.06] you need that observability, not only for your own ability to debug, but eventually you're
827
+ [2751.06 --> 2754.00] going to feed that observability back into your automation, right?
828
+ [2754.06 --> 2759.48] You're going to do automated blue-green rollouts where you want the automation to, over the course
829
+ [2759.48 --> 2763.88] of maybe a day, to look for errors, look for reduced metrics, and to roll it back.
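(A minimal sketch of that kind of metric-driven rollback check, assuming a Prometheus-compatible query endpoint and kubectl on the PATH; the URL, query, threshold, and deployment name are placeholders rather than anyone's real setup. In practice a progressive-delivery tool would run a loop like this for you over the whole rollout window.)

```python
# Sketch of an automated "watch the metrics, roll back if they regress" step.
# Assumes a Prometheus-compatible query endpoint and kubectl on the PATH.
# The URL, query, threshold, and deployment name are placeholders.
import subprocess
import requests

PROMETHEUS_URL = "http://prometheus.example.com/api/v1/query"
ERROR_RATIO_QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[5m]))'
    " / sum(rate(http_requests_total[5m]))"
)
ERROR_BUDGET = 0.01  # roll back if more than 1% of requests are errors

def current_error_ratio() -> float:
    resp = requests.get(PROMETHEUS_URL, params={"query": ERROR_RATIO_QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    ratio = current_error_ratio()
    if ratio > ERROR_BUDGET:
        print(f"error ratio {ratio:.2%} is over budget, rolling back")
        subprocess.run(["kubectl", "rollout", "undo", "deployment/app"], check=True)
    else:
        print(f"error ratio {ratio:.2%} is within budget, keeping the new version")
```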
830
+ [2764.24 --> 2764.80] Yeah, that's right.
831
+ [2765.04 --> 2771.14] And I know that I run ops and infrastructure and that side of things, but our Kubernetes
832
+ [2771.14 --> 2773.70] setup, it's simple on purpose.
833
+ [2774.06 --> 2775.50] And some things could be better.
834
+ [2775.60 --> 2776.80] It can always be improved.
835
+ [2776.92 --> 2777.48] We have it public.
836
+ [2777.76 --> 2781.94] Anyone can check it out to see how we run and how we set up and which components we pick.
837
+ [2782.48 --> 2783.66] CertManager is part of it.
838
+ [2783.76 --> 2785.42] External DNS, ingress-nginx.
839
+ [2785.48 --> 2785.66] Yes.
840
+ [2785.78 --> 2786.28] All the stock stuff.
841
+ [2786.28 --> 2788.18] External DNS, also absolutely necessary.
842
+ [2788.18 --> 2788.88] It's part of it.
843
+ [2789.02 --> 2792.92] And the Kubernetes is managed, so we don't deploy on bare metal servers, even though
844
+ [2792.92 --> 2796.98] that's become simpler over the years since we embarked on this journey.
845
+ [2797.56 --> 2800.62] And there's other options which we will also be exploring.
846
+ [2801.28 --> 2806.96] So whether you do Kubernetes or something else, there will be certain operational concerns which
847
+ [2806.96 --> 2807.68] will be difficult.
848
+ [2808.06 --> 2812.84] And there's a level of maturity that you need to have on the team to navigate them.
849
+ [2812.84 --> 2815.68] And I think that's what is important to almost like reiterate.
850
+ [2815.68 --> 2815.82] Great.
851
+ [2816.34 --> 2820.58] And in certain cases, like Istio, I'm sure some things it makes better.
852
+ [2820.68 --> 2822.10] But networking, I don't know.
853
+ [2822.16 --> 2824.22] I think networking gets more complicated with Istio.
854
+ [2824.52 --> 2827.64] And if you're okay with a trade-off, maybe it's a good one to make.
855
+ [2827.88 --> 2829.10] But I wouldn't.
856
+ [2829.24 --> 2830.24] We haven't chosen Istio.
857
+ [2830.40 --> 2831.10] So there you go.
858
+ [2831.24 --> 2831.74] I agree with you.
859
+ [2831.80 --> 2832.24] 100%.
860
+ [2832.24 --> 2838.42] Talking about Kubernetes and how we run it, do you recommend a big cluster or do you recommend
861
+ [2838.42 --> 2839.48] smaller clusters?
862
+ [2840.06 --> 2840.40] Oh, yeah.
863
+ [2840.82 --> 2846.52] So when Kubernetes first came out, I mean, first of all, short answer is many small clusters.
864
+ [2846.76 --> 2852.70] The long answer is when Kubernetes first came out, CIOs looked at it and said, oh, this
865
+ [2852.70 --> 2853.14] is great.
866
+ [2853.14 --> 2858.50] We can, you know, we're probably using 20% of our CPU and memory across all of our VMs
867
+ [2858.50 --> 2863.00] across our entire fleet, just because of natural inefficiencies between teams, right?
868
+ [2863.18 --> 2866.82] You need a new app out, you throw a couple of VMs out there, you call it a day.
869
+ [2867.26 --> 2871.88] And the CIOs job, part of it, is to reduce infrastructure costs, right?
870
+ [2872.18 --> 2874.72] And so the CIOs looked around, they said, oh, this is great.
871
+ [2874.78 --> 2876.58] We can bin pack the f*** out of this, right?
872
+ [2876.60 --> 2881.24] We can take all that stuff and just shove it into one massive cluster, save so much money.
873
+ [2881.24 --> 2884.08] And I think that drove a lot of initial Kubernetes adoption.
874
+ [2884.22 --> 2887.48] I mean, obviously, there was a lot of grassroots adoption of Kubernetes, but there was also
875
+ [2887.48 --> 2891.78] a lot of, there was a lot of adoption coming out of the IT organizations in larger companies
876
+ [2891.78 --> 2894.20] because of that driving factor.
877
+ [2894.60 --> 2900.70] Now, when the operators started using Kubernetes, they saw what I think of as the real benefits.
878
+ [2900.82 --> 2904.20] I don't think the benefit of Kubernetes is about orchestrating containers.
879
+ [2904.42 --> 2909.36] I think it's about that beautiful, idempotent, declarative, and ubiquitous API.
880
+ [2909.36 --> 2915.54] And especially when you start extending that into external services, external resources
881
+ [2915.54 --> 2922.62] that you're managing, like using, for example, Crossplane to provision AWS resources through
882
+ [2922.62 --> 2923.18] kubectl.
883
+ [2923.30 --> 2924.74] It's a fantastic experience, right?
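(A hedged sketch of what "provisioning AWS resources through the Kubernetes API" can look like with Crossplane, using the Python client's CustomObjectsApi. It assumes Crossplane plus an AWS provider are already installed in the cluster; the group/version/kind shown match the community provider-aws Bucket and may differ for other providers, and the bucket name and region are placeholders.)

```python
# Sketch of declaring an AWS resource through the Kubernetes API, Crossplane-style.
# Assumes Crossplane and an AWS provider are installed; the group/version/kind below
# match the community provider-aws Bucket and may differ with other providers.
# The bucket name and region are placeholders.
from kubernetes import client, config

BUCKET = {
    "apiVersion": "s3.aws.crossplane.io/v1beta1",
    "kind": "Bucket",
    "metadata": {"name": "example-artifacts-bucket"},
    "spec": {
        "forProvider": {"locationConstraint": "us-east-1"},
        "providerConfigRef": {"name": "default"},
    },
}

if __name__ == "__main__":
    config.load_kube_config()
    api = client.CustomObjectsApi()
    # Crossplane managed resources are cluster-scoped.
    api.create_cluster_custom_object(
        group="s3.aws.crossplane.io",
        version="v1beta1",
        plural="buckets",
        body=BUCKET,
    )
    print("Bucket declared; Crossplane reconciles it into a real S3 bucket")
```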
884
+ [2924.92 --> 2925.54] Yes.
885
+ [2925.66 --> 2929.24] And the operators looked at it and said, this whole Kubernetes thing is pretty cool.
886
+ [2929.50 --> 2932.42] However, Blast Radius is a thing, right?
887
+ [2932.42 --> 2937.14] And so if you've got everything in one big cluster, and especially those poor operators
888
+ [2937.14 --> 2946.90] who went through the 1.8 through 1.11 upgrade path got burned so many times on trying to upgrade
889
+ [2946.90 --> 2947.90] these clusters in place.
890
+ [2947.90 --> 2951.54] And they started developing these complicated blue-green cluster upgrade strategies where
891
+ [2951.54 --> 2953.22] they deploy an entirely new cluster.
892
+ [2953.42 --> 2955.50] And that's necessary and great.
893
+ [2955.76 --> 2960.52] But now we've figured out that, well, you should just be running many small clusters.
894
+ [2960.52 --> 2961.70] And there's two different ways you could do it.
895
+ [2961.70 --> 2966.06] You run a cluster per kind of bounded context for your microservices.
896
+ [2966.22 --> 2970.66] In other words, you could have a cluster just for your shopping cart stuff and a cluster
897
+ [2970.66 --> 2975.56] just for your front-end stuff and a cluster for your back-end and all that.
898
+ [2975.90 --> 2980.92] But a better way of doing it is to run all these clusters as homogenous workloads, where they
899
+ [2980.92 --> 2982.68] are all running identical workloads.
900
+ [2983.24 --> 2987.44] In fact, one of our clients is doing that, and they're referring to it as fleets internally.
901
+ [2987.44 --> 2990.36] So what they do is actually really smart.
902
+ [2990.62 --> 2995.06] They run a cluster in AWS per availability zone.
903
+ [2995.36 --> 2996.46] And that does a couple of things.
904
+ [2996.74 --> 2999.26] It's a natural dividing point for the different clusters.
905
+ [2999.78 --> 3004.88] And it means that they also keep all of their traffic inside each AZ because all the services
906
+ [3004.88 --> 3007.86] in cluster A are always talking to other services in cluster A.
907
+ [3007.92 --> 3009.72] They don't try and do cross-cluster traffic.
908
+ [3010.18 --> 3013.58] And that saves them a good amount of money because they have a lot of networking that's happening
909
+ [3013.58 --> 3014.10] in AWS.
910
+ [3014.60 --> 3019.60] But also, it means that when they're upgrading these clusters, they can just upgrade one.
911
+ [3019.86 --> 3021.20] And if it goes sideways, who cares?
912
+ [3021.40 --> 3023.68] Burn it down, rebuild it, and you're fine.
913
+ [3023.94 --> 3027.00] You've only lost, what, 20%, 25% of your capacity?
914
+ [3027.18 --> 3028.46] And you just keep moving.
915
+ [3028.98 --> 3031.72] Now, of course, the big elephant here is state.
916
+ [3032.10 --> 3034.00] You can't do that with databases.
917
+ [3034.38 --> 3039.00] And so the best solution that we always propose to our customers is, look, if you're going to
918
+ [3039.00 --> 3043.06] run stateful workloads in Kubernetes, which, by the way, that's a lot of innovation points.
919
+ [3043.48 --> 3046.14] You really need a team to manage that if you're going to do that.
920
+ [3046.20 --> 3048.66] That's a dangerous thing to do as a small company.
921
+ [3048.98 --> 3053.52] But if you're going to run stateful workloads in Kubernetes, at least shove them into a smaller
922
+ [3053.52 --> 3055.48] cluster that you know you have to treat as a pet.
923
+ [3055.94 --> 3058.90] You've taken all of your other clusters, your stateless ones, and you've made them into
924
+ [3058.90 --> 3060.26] cattle, which is beautiful.
925
+ [3060.80 --> 3062.86] Then you constrain all your stateful workloads into one.
926
+ [3062.96 --> 3064.52] Or just use RDS.
927
+ [3065.06 --> 3067.74] Just externalize your databases entirely.
928
+ [3067.74 --> 3068.42] Right?
929
+ [3068.66 --> 3069.46] It's a tough problem.
930
+ [3069.64 --> 3073.46] And yeah, unless you've been solving that problem for some years, it's really difficult
931
+ [3073.46 --> 3074.18] to appreciate.
932
+ [3074.64 --> 3078.16] And even the operators, I'm glad that you mentioned it earlier for PostgreSQL.
933
+ [3078.46 --> 3079.78] Do you know how we run PostgreSQL?
934
+ [3079.96 --> 3080.32] How do you?
935
+ [3080.80 --> 3082.78] We run it as a stateful set.
936
+ [3083.04 --> 3085.34] No help, no operator, nothing like that.
937
+ [3085.62 --> 3088.40] And since we did that, it's been more stable.
938
+ [3088.58 --> 3092.12] It has not failed since we went to a stateful set.
939
+ [3092.24 --> 3095.68] Simple stateful set, PostgreSQL container, sorry, PostgreSQL image.
940
+ [3095.68 --> 3097.36] And what were you doing before that?
941
+ [3097.36 --> 3099.36] Were you doing RDS or were you doing?
942
+ [3099.66 --> 3106.96] We tried running the Crunchy Data PostgreSQL operator, and it failed because of replication.
943
+ [3107.48 --> 3110.36] Actually, we even covered this in like an episode at length.
944
+ [3110.46 --> 3114.74] But the point was the primary stopped replicating to the replica.
945
+ [3114.96 --> 3115.10] Yeah.
946
+ [3115.10 --> 3117.08] So the write-ahead log filled up on the primary.
947
+ [3117.50 --> 3119.10] The second it crashed.
948
+ [3119.24 --> 3121.54] The secondary could not be promoted.
949
+ [3121.62 --> 3125.84] The replica could not be promoted to primary because it was too far behind.
950
+ [3125.98 --> 3127.68] And then we didn't have a database.
951
+ [3129.62 --> 3129.98] Ouch.
952
+ [3129.98 --> 3133.46] We couldn't reboot the main one because the PVC filled up.
953
+ [3133.72 --> 3135.18] We couldn't resize the PVC either.
954
+ [3135.52 --> 3137.66] And we thought, nah, let's just ditch Crunchy Data.
955
+ [3137.86 --> 3142.20] We actually went to the Zalando one, the other PostgreSQL operator, and the same thing happened.
956
+ [3142.86 --> 3146.28] So obviously the networking, there was an issue at that point with networking.
957
+ [3146.78 --> 3153.02] And that broke replication, PostgreSQL replication, which resulted in a less stable database.
958
+ [3153.02 --> 3155.56] Yeah, but I mean, come on, that's not because of those operators.
959
+ [3156.16 --> 3158.70] You would have the same problem running a stateful set.
960
+ [3158.80 --> 3162.54] I think you probably changed other things at the same time as moving to a stateful set,
961
+ [3162.58 --> 3164.72] or maybe changed the way you use it or something like that.
962
+ [3164.74 --> 3165.56] We don't replicate.
963
+ [3165.92 --> 3166.66] Like it's single instance.
964
+ [3166.88 --> 3167.06] Oh, okay.
965
+ [3167.16 --> 3167.62] Well, there you go.
966
+ [3167.82 --> 3168.62] We back everything up.
967
+ [3168.86 --> 3170.00] We back every hour.
968
+ [3170.28 --> 3171.42] We do like a full backup.
969
+ [3171.72 --> 3171.86] Yeah.
970
+ [3171.88 --> 3175.38] And we can restore from backup within two, three minutes.
971
+ [3175.84 --> 3182.58] So a blank node can pull the backup down from S3 and boot up in three minutes.
972
+ [3182.58 --> 3183.90] We'll have less downtime.
973
+ [3184.08 --> 3185.42] And it's a very simple procedure.
974
+ [3185.80 --> 3187.24] Now, would I choose a managed?
975
+ [3187.32 --> 3187.44] Right.
976
+ [3187.48 --> 3191.46] You've got a potential data loss issue of like up to an hour, right?
977
+ [3191.54 --> 3194.50] Half an hour median data loss if you lose the PV, right?
978
+ [3194.72 --> 3195.06] Exactly.
979
+ [3195.14 --> 3195.34] Yes.
980
+ [3195.64 --> 3197.34] But that's a trade-off that you're willing to make.
981
+ [3197.38 --> 3197.76] That's fine.
982
+ [3197.84 --> 3198.46] That works great.
983
+ [3198.62 --> 3198.90] Exactly.
984
+ [3198.90 --> 3204.66] And if I was to choose any PostgreSQL service, type of service, I would just go for a managed
985
+ [3204.66 --> 3207.10] one, like CockroachDB, something like that.
986
+ [3207.22 --> 3211.06] I mean, that's what I'm thinking because it's a really hard problem to solve.
987
+ [3211.54 --> 3214.30] I've been trying to solve this for like a couple of years.
988
+ [3214.58 --> 3218.08] I don't think I have in like a different context because it's really difficult.
989
+ [3218.34 --> 3224.52] I got to tell you that I love the solution you just talked about because too many companies,
990
+ [3224.72 --> 3228.60] and I've heard other people say this, not like this is some insight that I have, but
991
+ [3228.60 --> 3230.24] I agree with it 100%.
992
+ [3230.24 --> 3234.50] Too many companies look around and they see all this really interesting and production
993
+ [3234.50 --> 3238.18] grade hardened technologies coming out of Google and Facebook and other companies
994
+ [3238.18 --> 3238.66] like that.
995
+ [3238.66 --> 3241.84] And they think, oh, okay, well, if we're going to play in the cloud, we got to have
996
+ [3241.84 --> 3242.62] that, right?
997
+ [3242.86 --> 3243.66] You don't.
998
+ [3243.96 --> 3250.36] And if you try and build your system to be at that level, it's going to drag you down
999
+ [3250.36 --> 3251.56] with the weight of it, right?
1000
+ [3251.90 --> 3255.88] And you looked at it and you said, yeah, we can, you know, worst case scenario, we lose
1001
+ [3255.88 --> 3256.24] a PV.
1002
+ [3256.42 --> 3259.90] We can handle half an hour's worth of data loss, right?
1003
+ [3260.38 --> 3261.92] It's not that big of a deal.
1004
+ [3261.92 --> 3267.40] Then you can go with a single instance of Postgres without replication and you are fine and your
1005
+ [3267.40 --> 3269.06] life is so much better, right?
1006
+ [3269.20 --> 3274.42] So I love that you had the self-awareness as a, you know, organization to make that choice.
1007
+ [3274.74 --> 3274.84] Yeah.
1008
+ [3275.04 --> 3276.18] We don't use PVs.
1009
+ [3276.34 --> 3277.90] But I don't have time for that story.
1010
+ [3279.50 --> 3281.86] Do you use the host disk for that or what do you do?
1011
+ [3282.06 --> 3282.48] Oh, yes.
1012
+ [3282.70 --> 3284.24] It's like 10 times faster.
1013
+ [3284.96 --> 3285.18] Yeah.
1014
+ [3285.48 --> 3286.66] Like we never lose that.
1015
+ [3286.92 --> 3287.32] You don't care.
1016
+ [3287.32 --> 3290.82] So it doesn't mean that like when you're rolling hosts under your cluster, you need
1017
+ [3290.82 --> 3292.16] to probably call downtime, right?
1018
+ [3292.18 --> 3292.84] You need to stop traffic.
1019
+ [3292.84 --> 3293.58] We have a single host.
1020
+ [3297.26 --> 3298.40] It's so good.
1021
+ [3298.62 --> 3299.46] It never went down.
1022
+ [3302.96 --> 3304.92] We have a much better integration with the CDN.
1023
+ [3305.02 --> 3309.42] And what that means is that even when the origin is down, we serve stale content.
1024
+ [3309.82 --> 3315.58] And unless you do POSTs or PATCHes or anything like that, GETs work.
1025
+ [3315.58 --> 3320.58] And parts of the website may be down for most users, but you get your MP3s.
1026
+ [3320.88 --> 3321.98] We'll serve that content.
1027
+ [3322.24 --> 3322.94] We'll get the pages.
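(One common way to opt into "serve stale when the origin is down" from the origin side is the stale-while-revalidate / stale-if-error Cache-Control extensions from RFC 5861, which CDNs such as Fastly can honor. A minimal sketch follows, with illustrative TTLs rather than Changelog's real values.)

```python
# Sketch of opting into "serve stale if the origin is down" from the origin side,
# via the Cache-Control extensions from RFC 5861 that CDNs such as Fastly can honor.
# The TTL values are illustrative only.
def cache_headers(max_age: int = 60,
                  stale_while_revalidate: int = 60,
                  stale_if_error: int = 86400) -> dict:
    cdn_value = (
        f"max-age={max_age}, "
        f"stale-while-revalidate={stale_while_revalidate}, "
        f"stale-if-error={stale_if_error}"
    )
    # Surrogate-Control targets the CDN; browsers only see the plain Cache-Control.
    return {"Cache-Control": f"public, max-age={max_age}", "Surrogate-Control": cdn_value}

if __name__ == "__main__":
    for name, value in cache_headers().items():
        print(f"{name}: {value}")
```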
1028
+ [3323.84 --> 3328.60] And basically what you're telling me is, boy, life is easy when you're a read-heavy workload.
1029
+ [3328.72 --> 3329.26] I'll tell you what.
1030
+ [3329.82 --> 3331.06] Yeah, it is.
1031
+ [3331.54 --> 3332.58] It definitely is.
1032
+ [3332.66 --> 3336.78] And if we were to, for example, if we had to have the database up, I really do think
1033
+ [3336.78 --> 3341.22] that going to a managed service, regardless who manages it, who manages that, it's a much
1034
+ [3341.22 --> 3341.98] better proposal.
1035
+ [3342.24 --> 3342.68] Oh, for sure.
1036
+ [3342.68 --> 3346.38] All the backups, like all the replication, all that stuff, it's managed.
1037
+ [3346.66 --> 3347.72] You don't have to do that.
1038
+ [3347.74 --> 3350.24] And you're just consuming the PostgreSQL interface.
1039
+ [3350.38 --> 3350.76] That's it.
1040
+ [3351.10 --> 3353.16] So that sounds like a much better proposal.
1041
+ [3353.22 --> 3353.80] Like a CDN.
1042
+ [3353.88 --> 3354.96] Would you run your own CDN?
1043
+ [3355.14 --> 3355.54] Maybe.
1044
+ [3356.00 --> 3357.84] I mean, if you're big enough, you'll have to.
1045
+ [3358.24 --> 3359.62] If you're that scale, sure.
1046
+ [3359.92 --> 3360.08] Right.
1047
+ [3360.30 --> 3366.52] And another thing about running databases inside Kubernetes is that you could think of
1048
+ [3366.52 --> 3367.90] it as almost addicting.
1049
+ [3367.90 --> 3372.16] Because once you make the decision that, well, we're not going to use an external database
1050
+ [3372.16 --> 3372.64] provider.
1051
+ [3372.64 --> 3375.34] Instead, we're going to just run them as stateful sets inside Kubernetes.
1052
+ [3375.54 --> 3378.22] And we believe in the Zalando operator, for example.
1053
+ [3378.36 --> 3378.50] Right.
1054
+ [3378.56 --> 3381.70] Well, you're going to find that your developers are naturally just going to be provisioning
1055
+ [3381.70 --> 3382.26] databases.
1056
+ [3382.26 --> 3389.20] And that's going to result in multiple stateful sets, not schemas in a large existing Postgres.
1057
+ [3389.38 --> 3390.94] It's just naturally going to proliferate.
1058
+ [3391.60 --> 3396.16] And that's the headache that you're going to feel, is that suddenly we have a client who's
1059
+ [3396.16 --> 3399.00] got hundreds of Postgres's.
1060
+ [3399.20 --> 3401.02] And I'm not going to name the client, obviously.
1061
+ [3401.26 --> 3403.30] But I will say they're running them wrong.
1062
+ [3403.40 --> 3404.10] And they know it.
1063
+ [3404.20 --> 3404.52] Right.
1064
+ [3404.52 --> 3408.42] It's technical debt that we're helping them dig out of.
1065
+ [3408.58 --> 3413.00] But it's a huge pain, huge cost for them.
1066
+ [3413.34 --> 3414.62] Once you get to a certain scale, you're right.
1067
+ [3414.76 --> 3417.06] You have to take a certain approach.
1068
+ [3417.50 --> 3419.92] But when you're not there, don't take that approach.
1069
+ [3420.12 --> 3420.96] Take the simpler one.
1070
+ [3421.32 --> 3421.42] Right.
1071
+ [3421.48 --> 3425.42] And what this approach means for us is that we can innovate elsewhere.
1072
+ [3425.48 --> 3425.70] Yes.
1073
+ [3425.76 --> 3427.52] And we can fight other battles.
1074
+ [3427.86 --> 3431.40] There will still be battles to fight, even if you don't do this one.
1075
+ [3431.40 --> 3435.00] It doesn't mean that you're less capable or less curious.
1076
+ [3435.18 --> 3438.22] It just means you've picked your battles in a way that suits you.
1077
+ [3438.54 --> 3443.80] And one of these days, as a company, you'll get big enough where you need that more interesting,
1078
+ [3444.00 --> 3444.92] innovative challenges.
1079
+ [3445.40 --> 3448.44] And there will be companies like ours to help you out when that happens.
1080
+ [3448.74 --> 3451.66] But please don't just assume you need that prematurely.
1081
+ [3451.80 --> 3453.60] There's a similar thing with writing code.
1082
+ [3454.00 --> 3459.96] I tell you, iterating on a code base, because I've spent half my career as an application developer
1083
+ [3459.96 --> 3461.12] as well as operations.
1084
+ [3461.48 --> 3466.26] Iterating on a code base before it's actually launched and in production is so much faster,
1085
+ [3466.36 --> 3466.58] right?
1086
+ [3466.86 --> 3469.20] You can make all kinds of schema changes.
1087
+ [3469.44 --> 3470.12] Like, who cares?
1088
+ [3470.66 --> 3471.16] Never ship.
1089
+ [3471.24 --> 3471.94] That's what you're saying.
1090
+ [3472.02 --> 3472.24] Yeah.
1091
+ [3472.34 --> 3474.80] Basically, never ship and you'll be the fastest startup.
1092
+ [3475.20 --> 3476.44] So the opposite of the show.
1093
+ [3477.72 --> 3478.58] Don't ship it.
1094
+ [3480.48 --> 3482.20] But I mean, it's the same thing.
1095
+ [3482.46 --> 3486.36] You launch when you need to launch, but you understand the fact that as soon as you launch,
1096
+ [3486.36 --> 3489.46] you're going to slow down by at least a factor of two, maybe three, right?
1097
+ [3489.90 --> 3496.44] And you increase the complexity of your operations stance, your Kubernetes usage when you need
1098
+ [3496.44 --> 3496.66] to.
1099
+ [3496.76 --> 3500.46] And you understand, I mean, even embracing Kubernetes, you do it when you need to.
1100
+ [3500.54 --> 3503.46] And you understand that that much complexity is going to slow you down.
1101
+ [3504.08 --> 3505.10] Yeah, that's a good one.
1102
+ [3505.20 --> 3505.86] That is a good one.
1103
+ [3505.96 --> 3507.84] So I think it's time to wrap up.
1104
+ [3507.88 --> 3509.14] We can have so much fun.
1105
+ [3509.20 --> 3509.90] I didn't realize.
1106
+ [3510.52 --> 3511.94] I think we just have to do this more often.
1107
+ [3512.04 --> 3513.30] That's the only conclusion again.
1108
+ [3513.30 --> 3519.08] As we are prepared to wrap up, what do you think the most important takeaway is for our
1109
+ [3519.08 --> 3520.42] listeners from this conversation?
1110
+ [3521.92 --> 3525.60] Well, I mean, I didn't think it was going to be this when we first started talking, but
1111
+ [3525.60 --> 3529.10] I think the most important takeaway is don't use Kubernetes unless you need to.
1112
+ [3529.24 --> 3531.00] Like delay the adoption of Kubernetes.
1113
+ [3531.24 --> 3532.74] It's going to be on your roadmap.
1114
+ [3533.24 --> 3535.22] It's going to happen as you grow.
1115
+ [3535.60 --> 3539.80] But just like anything else, don't try and tackle that problem early.
1116
+ [3539.80 --> 3545.62] Use one of the existing managed platforms, not managed Kubernetes installations.
1117
+ [3545.90 --> 3548.00] Although when you do adopt Kubernetes, do that.
1118
+ [3548.56 --> 3549.96] But just delay it for as long as you can.
1119
+ [3550.04 --> 3553.00] And then even then understand that you're spending innovation points.
1120
+ [3553.14 --> 3559.32] So use it in as simple of a way as you can, because you need to pay down that innovation
1121
+ [3559.32 --> 3560.38] debt, right?
1122
+ [3560.52 --> 3566.78] Focus on the automation and focus on the education for your people, because you will underestimate
1123
+ [3566.78 --> 3568.58] how complicated Kubernetes is.
1124
+ [3568.58 --> 3573.64] You will be surprised when you start using it and start seeing all of the different ways
1125
+ [3573.64 --> 3577.34] that you can configure it and all the best practices that are not codified in it.
1126
+ [3577.76 --> 3581.50] Well, thank you, Tamar, for sharing so much valuable information.
1127
+ [3581.98 --> 3583.06] And I had so much fun.
1128
+ [3583.16 --> 3583.70] This was great.
1129
+ [3583.80 --> 3584.18] Thank you.
1130
+ [3584.38 --> 3585.54] Yeah, I had so much fun too.
1131
+ [3585.90 --> 3586.42] Thank you.
1132
+ [3586.56 --> 3587.80] I'm looking forward to the next one.
1133
+ [3587.96 --> 3588.44] I really am.
1134
+ [3588.52 --> 3588.86] Absolutely.
1135
+ [3589.12 --> 3589.40] Thank you.
1141
+ [3593.28 --> 3595.14] Thank you for tuning in to another episode of Ship It.
1142
+ [3595.20 --> 3597.72] This is just one of our podcasts for developers.
1143
+ [3598.02 --> 3601.40] Go to changelog.com forward slash master for the rest.
1144
+ [3601.78 --> 3606.12] You can join our community at changelog.com forward slash community.
1145
+ [3606.48 --> 3607.98] There are no imposters in our Slack.
1146
+ [3608.20 --> 3609.46] Everyone is welcome.
1147
+ [3609.92 --> 3613.34] Huge thanks to our partners Fastly, LaunchDarkly, and Linode.
1148
+ [3613.34 --> 3616.72] Thank you, Breakmaster Cylinder, for all our awesome beats.
1149
+ [3617.02 --> 3617.90] That's it for this week.
1150
+ [3618.08 --> 3618.72] See you next week.
It's crazy and impossible_transcript.txt ADDED
@@ -0,0 +1,213 @@
1
+ **Molly Wood:** Hello, I'm Molly Wood from CNET. I have a question actually about market share, which is sort of what we're getting at. There has been a suggestion that because of pricing and design, Apple tends to appeal to kind of a smaller elite, rather than that sort of mass customer base... So I guess once and for all, is it your goal to overtake the PC in market share? \[laughter\]
2
+
3
+ **Steve Jobs:** I’ll tell you what our goal is. Our goal is to make the best personal computers in the world, and to make products we are proud to sell and would recommend to our family and friends. We want to do that at the lowest prices we can. But I have to tell you, there's some stuff in our industry that we wouldn't be proud to ship, that we wouldn't be proud to recommend to our family and friends... And we can't do it. We just can't ship junk.
4
+
5
+ **Gerhard Lazu:** Today, we have a very special episode of Ship It, where I get to share my favorite learnings from Steve Jobs. If it wasn't for his determination to build a better personal computer, I would have most likely continued with a career in physics. I know what you're thinking, it's crazy and impossible to interview Steve Jobs. But on his 10th Memorial anniversary, I was determined to combine the things that he said with my passion for computers, automation and infrastructure. I share some of his beliefs, and just as they've served me well over the years, I think that they will serve you well, too. This show is for all the crazy ones that think they can change the world. Live your life and ship your best stuff, because there's nothing like the present.
6
+
7
+ **Break:** \[01:40\]
8
+
9
+ **Gerhard Lazu:** Hi, and welcome. I want to start by thanking you for making time for this crazy idea. I appreciate you joining us on this special day, which marks 10 years since you had to shift your focus to let's say something completely different. So how do you want to do this?
10
+
11
+ **Steve Jobs:** What I want to do is just chat. And so we get to spend 45 minutes or so together, and I want to talk about whatever you want to talk about. I have opinions on most things, so I figured if you just want to start asking some questions, we'll go to some good places.
12
+
13
+ **Gerhard Lazu:** Let me start with a story that didn't make sense to me back then, but now it explains everything. As a high school student, physics was my passion. When the opportunity presented itself to travel to the US for the conference marking 100 years of Max Planck, I could not sleep for days. This was going to be my first trip to a physics conference, on a different continent, to a place where all great things appeared to be happening.
14
+
15
+ In that moment, I was convinced that I was destined to become a physicist... But life had other plans. There I was, at the University of Puget Sound in Tacoma, being blown away by these talks on quantum mechanics and Schrödinger's cat, when, as chance would have it, I stumbled across the brand new iMac G3 in some building whose name I no longer remember. What I do remember, as if it happened yesterday, was this Bondi Blue iMac, which was perfect from every angle. And the one thing that captivated my imagination like nothing else was the font. That font is the reason why I fell in love with that perfect machine. And in an instant, my heart knew what I was going to do with the rest of my life.
16
+
17
+ \[04:24\] To this day, for me, the font rendering on Mac devices is the perfect combination of imagination, order and passion. It's not just technology and science; I know that there is something more to it. Can you share with us the story behind that font?
18
+
19
+ **Steve Jobs:** I dropped out of Reed College after the first six months, but then stayed around as a drop-in for another 18 months or so before I really quit. So why did I drop out?
20
+
21
+ It started before I was born. My biological mother was a young, unwed graduate student, and she decided to put me up for adoption. She felt very strongly that I should be adopted by college graduates. So everything was all set for me to be adopted at birth by a lawyer and his wife... Except that when I popped out, they decided at the last minute that they really wanted a girl. So my parents, who were on a waiting list, got a call in the middle of the night, asking “We've got an unexpected baby boy. Do you want him?” They said, “Of course.”
22
+
23
+ My biological mother found out later that my mother had never graduated from college, and that my father had never graduated from high school. She refused to sign the final adoption papers. She only relented a few months later, when my parents promised that I would go to college. This was the start in my life.
24
+
25
+ And 17 years later, I did go to college. But I naively chose a college that was almost as expensive as Stanford, and all of my working-class parents savings were being spent on my college tuition. After six months, I couldn't see the value in it. I had no idea what I wanted to do with my life, and no idea how college was going to help me figure it out. And here I was, spending all the money my parents had saved their entire life. So I decided to drop out and trust that it would all work out okay. It was pretty scary at the time, but looking back, it was one of the best decisions I ever made. The minute I dropped out, I could stop taking the required classes that didn't interest me, and begin dropping in on the ones that looked far more interesting. I loved it. And much of what I stumbled into by following my curiosity and intuition turned out to be priceless later on.
26
+
27
+ Reed College at that time offered perhaps the best calligraphy instruction in the country. Throughout the campus, every poster, every label on every drawer was beautifully hand-calligraphed. Because I had dropped out and didn't have to take the normal classes, I decided to take a calligraphy class to learn how to do this. I learned about serif and sans-serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can't capture, and I found it fascinating. None of this had even a hope of any practical application in my life.
28
+
29
+ But 10 years later, when we were designing the first Macintosh computer, it all came back to me. And we designed it all into the Mac. It was the first computer with beautiful typography. If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally-spaced fonts. And since Windows just copied the Mac, it's likely that no personal computer would have them. \[audience cheering\]
30
+
31
+ \[08:02\] If I had never dropped out, I would have never dropped in on that calligraphy class, and personal computers might not have the wonderful typography that they do. Of course, it was impossible to connect the dots looking forward when I was in college. But it was very, very clear looking backwards ten years later. Again, you can't connect the dots looking forward, you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something - your gut, destiny, life, karma, whatever. Because believing that the dots will connect down the road will give you the confidence to follow your heart, even when it leads you off the well-worn path, and that will make all the difference.
32
+
33
+ **Gerhard Lazu:** Thank you, Steve. That was beautiful. It's somewhat ironic, because I dropped out of university after my first six months, because it didn't make sense to me. My heart kept telling me computers, Macs specifically, and programming, and learning on the job, and my parents kept telling me MBA. I had no idea about your experience with college until many years later. That just proves what you've just said, "You can only connect the dots backwards, and trusting your heart that it will all work out."
34
+
35
+ I would like us to go all the way to the beginning now and talk about this magical box that changed everything, not just for me, but also for everyone else that is listening to us today. How did the fascination with personal computers start for you?
36
+
37
+ **Steve Jobs:** The question is, is really what is a personal computer? And why is it different than all the other computers that have existed throughout history? Probably the best way to explain that is through an analogy when you look at the invention of the first electric motor in the late 1800s. It was only possible to build a very large one, and it was very, very expensive, and therefore it could only be cost-justified for the most expensive or large applications. And the electric motor really took its next step in proliferation when somebody hooked a long shaft to it and ran it down the center of a factory, through series of belts and pulleys brought that power down to maybe 15 or 20 individual workstations, thereby cost-justifying sharing that horsepower among some medium-size applications.
38
+
39
+ But the electric motor really achieved a true proliferation in the society with the invention of the fractional horsepower electric motor. And at that point, the horsepower could be brought directly to where it was needed, on a personal scale, cost-justified for a small number of things. And we see the same thing, the same evolution if we examine the history of computing. The first computer, ENIAC, in 1946, was designed primarily for weather and ballistic calculations, very large tasks. And the next revolution in computing was in the 1960s, with the invention of what's called time-sharing. In essence, sharing one of these very large computers with maybe 40 or 50 terminals scattered through a company, and thereby cost-justifying it for medium size applications.
40
+
41
+ What we think the personal computer industry is about is the invention of the fractional horsepower computer, something that can be cost-justified on the personal level; something that weighs 12 pounds, that you can throw out the window if you don't like, and it's really changing the way that people interact with computers. There's a one-on-one relationship that develops between one person and one computer.
42
+
43
+ **Gerhard Lazu:** Wow, this is just as fascinating now, 40 years later, as it was the first time that you have talked about it in 1981. And this story makes me wonder - how do you see the relationship between computers and us, the people?
44
+
45
+ **Steve Jobs:** There was an article in Scientific American in the early '70s, which compared the efficiency of locomotion for various species of things on the planet. In other words, they measured how much energy it took for a bird to get from point A to point B compared with the energy it took a fish to get the same distance, and a goat, and a person, and all sorts of other things... And they ranked them, and it turns out the Condor won. The Condor was the most efficient, and man came in with a rather unimpressive showing about a third of the way down the list, somewhat disappointing.
46
+
47
+ \[12:03\] But someone there had the insight to test the efficiency of man riding a bicycle, and man riding a bicycle was twice as good as the Condor, all the way off the end of the list. And what it really illustrated was man's ability as a toolmaker to fashion a tool that can amplify an inherent ability that he has. And that's exactly what we think we're doing; we think we're basically fashioning a 21st Century bicycle here, which can amplify an inherent intellectual ability that man has, and really take care of a lot of drudgery to free people to do much more creative work. And what we're finding is it is enriching people's lives, it is freeing people to do things that we think people do best.
48
+
49
+ People are freed to think about the conceptual issues involved and the creative issues involved, and use the computer actually to plow through the drudgery. And we're actually changing job descriptions based on allowing people to do more creative work, rather than more work-work.
50
+
51
+ **Gerhard Lazu:** This made me think of something else... I know that many people still think of shipping code as an engineering activity, a byproduct of smart people brainstorming and coming up with technical solutions to business problems, then working hard on delivering all those amazing features as fast as possible. But my heart and intuition tell me something else. I think that it's actually talking to users, and then working with them on a solution. I’m imagining the back and forth conversations over changes deployed into production or, as popularized by GitHub, a PR deployed to staging environment. What do you think?
52
+
53
+ **Steve Jobs:** One of the hardest things when you're trying to effect change is how does that fit into a cohesive, larger vision that's going to allow you to sell $8 billion, $10 billion of product a year?
54
+
55
+ And one of the things I've always found is, you've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it. And I've made this mistake probably more than anybody else in this room, and I've got the scar tissue to prove it, and I know that it's the case. And as we have tried to come up with a strategy and a vision for Apple, it started with what incredible benefits can we give to the customer? Where can we take the customer? Not starting with, "Let's sit down with the engineers and figure out what awesome technology we have, and then, how are we going to market that?"
56
+
57
+ **Gerhard Lazu:** I also believe that it all starts with a vision, the Why as captured by Simon Sinek. But how do you discover that? What does it even look like?
58
+
59
+ **Steve Jobs:** I think every good product that I've ever seen in this industry and pretty much anywhere is because a group of people cared deeply about making something wonderful that they and their friends wanted. They want to use it themselves, and that's how the Apple I came about, that's how the Apple II came about, that's how the Macintosh came about... That's how almost everything I know that's good has come about. It didn't come about because people were trembling in the corner, worried about some big company stomping on them; because if the big company made the product that was right, then most of these things wouldn't have happened. If Woz and I could have went out and plunked down 2,000 bucks and bought an Apple II, why would we have built one? We weren't trying to start a company, we were trying to get a computer.
60
+
61
+ **Gerhard Lazu:** Okay. So people that cared deeply about making something wonderful, that everyone they know, including themselves, want to use. So this is a great idea. And if it was that simple, everyone would do it. I'm pretty sure that there is more to it. In my experience, one of the things that people with great ideas and passion struggle with is focus. What are your thoughts on that?
62
+
63
+ **Steve Jobs:** \[16:07\] Apple suffered for several years from lousy engineering management, I have to say it. And there were people that were going off in 18 different directions, doing arguably interesting things in each one of them. Good engineers. Lousy management. And what happened was you look at the farm that's been created with all these different animals going in different directions, and it doesn't add up. The total is less than the sum of the parts. And so we had to decide what are the fundamental directions we're going in, and what makes sense and what doesn't. And there were a bunch of things that didn't. And microcosmically, they might have made sense. Macrocosmically, they made no sense. And the hardest thing is - when you think about focusing, right? You think, "Well, focusing is saying yes." No, focusing is about saying no.
64
+
65
+ **Gerhard Lazu:** I understand what you mean. I also find myself saying yes to too many things. Adam keeps repeating to trick myself and go slow, and Justin in Episode 16 told us about the importance of going smooth. And by that, he meant slow. Because slow is smooth, and smooth is fast. I know that Jocko would hard-agree.
66
+
67
+ Okay, so you've mentioned management and engineering, and I too have seen far too many disconnects in my career to know what bad looks like. So what does good engineering management look like to you?
68
+
69
+ **Steve Jobs:** I've never believed in the theory that if we're on the same management team and a decision has to be made, and I decide in a way that you don't like, and I say, “Come on, buy into the decision. Buy into it. Like we're all on the same team, you don't agree, but buy into it. Let's go make it happen.” Because what happens is, sooner or later you're paying somebody to do what they think is right. But then you're trying to get them to do what they think isn't right. And sooner or later, it outs and you end up having that conflict.
70
+
71
+ So I've always felt the best way is to get everybody in a room and talk it through, until you agree. Now, that's not everybody in the company, but that's everybody involved in that decision that needs to execute... Because we're paying people to tell us what to do. I don't view that we pay people to do things. That's easy, to find people to do things. What's harder is to find people that tell you what should be done; that's what we look for. So we pay people a lot of money, and we expect them to tell us what to do.
72
+
73
+ And so when that's your attitude, you shouldn't run off and do things if people don't all feel good about them. And the key to making that work is to realize there's not that many things that any one team really has to decide. And we might have 25 really important things we have to decide on in a year. Not a lot.
74
+
75
+ **Gerhard Lazu:** I think that what you have described is a good leader, and I think that good ones are few and far between. Even fewer in the last 10 years. It's mostly company politics, personal agendas, and, at the bottom of my list, regarding people as resources which serve the primary function of contributing to the status and importance of managers or product people. But you know what? I have to admit that sometimes developers can make matters worse; they get so consumed by complexity and shipping any and every solution that they lose track of what is really important. How do you see this?
76
+
77
+ **Steve Jobs:** You’re developers, you know that... It's all about managing complexity. It's like scaffolding, right? You erect some scaffolding, and if you keep going up and up and up, eventually, the scaffolding collapses of its own weight. That's what building software is. It's how much scaffolding can you erect before the whole thing collapses of its own weight. It doesn't matter how many people you have working on it, it doesn't matter if you're Microsoft with 300-400 people, 500 people on the team; it will collapse under its own weight.
78
+
79
+ \[20:06\] You've read The Mythical Man-Month, right? The basic premise of this is “A software development project gets to a certain size where if you add one more person, the amount of energy to communicate with that person is actually greater than their net contribution to the project, so it slows down.” So you have local maximum, and then it comes down. We all know that about software - it's about managing complexity. These tools allow you to not have to worry about 90% of the stuff you've worried about, so that you can erect your five storeys of scaffolding, but starting at storey number 23 instead of starting at storey number six. And you can get a lot higher.
80
+
81
+ **Gerhard Lazu:** That's funny, because that's exactly how I think about Dagger, Kubernetes and Fly, as well as all the other great tooling in my toolbox. I guide myself by the following principles - do I enjoy using this? Does it make sense? Does it make my work simpler? If the answer is no to any of these, I know that I need to look elsewhere. Now I'm thinking about the teams that are so focused on shipping features so that they can make money, which in my opinion, is a bad reason to do it... But they start forgetting about end-user delight; you know, the thing which they should actually care about the most. In all this madness, the thing which keeps coming up consistently is measuring developer productivity by lines of code written, or how fast PRs get merged, or issues get closed. I always thought those were bad metrics, because they don't capture quality or positive impact. What do you think?
82
+
83
+ **Steve Jobs:** The way you get programmer productivity is not by increasing the lines of code per programmer per day. That doesn't work. The way you get programmer productivity is by eliminating lines of code you have to write. The line of code that's the fastest to write, that never breaks, that doesn't need maintenance is the line you never had to write. So the goal here is to eliminate 80% of the code that you have to write for your app; that's the goal. And so along the way, if we can provide whizzy this and whizzy that, and visual this and visual that - well, that's fine. But the high order bit is to eliminate 80% of the code.
84
+
85
+ **Gerhard Lazu:** I've heard many people talk about less code and no code... And the example that I keep going back to is kelseyhightower/nocode repository on GitHub, which I think is a great example of this. Perhaps taken to the extreme to prove the point, but nevertheless, worth checking out.
86
+
87
+ Before we dive deeper into code and hardware, are there any other important learnings about people that you want to share?
88
+
89
+ **Steve Jobs:** The greatest people are self-managing. They don't need to be managed. Once they know what to do, they'll go figure out how to do it, and they don't need to be managed at all. What they need is a common vision, and that's what leadership is - having a vision, and being able to articulate that so that people around you can understand it, and getting a consensus on a common vision. We wanted people that were insanely great at what they did, but were not necessarily those seasoned professionals, but who had at the tips of their fingers and in their passion the latest understanding of where technology was and what we could do with that technology, and wanted to bring that to lots of people.
90
+
91
+ So the neatest thing that happens is when you get a core group of ten great people, that it becomes self-policing as to who they let into that group. So I consider the most important job of someone like myself is recruiting.
92
+
93
+ **Gerhard Lazu:** This resonates with me, as I had a similar experience not that long ago. And knowing you, I know that we are still missing your high-order bit perspective on people.
94
+
95
+ **Steve Jobs:** \[23:46\] I now take a longer-term view on people. In other words, when I see something not being done right, my first reaction isn't to go fix it. It's to say, "We're building a team here, and we're going to do great stuff for the next decade, not just the next year." And so what do I need to do to help, so that the person that's screwing up learns? ...versus, "How do I fix the problem?" And that's painful sometimes, and I still have that first instinct to go fix the problem... But that's taking a longer-term view on people, is probably the biggest thing that's changed. And then I don't know, that's maybe the part that's biological.
96
+
97
+ **Gerhard Lazu:** I also think that being able to make mistakes and learn from them in a safe environment, where everyone comes together and promotes sharing so that others can learn too, can be one of the most wonderful and rewarding aspects of our profession. Unfortunately, people tend to be afraid of consequences, and it's not always clear that the best thing that they can do is to ask for help. And experience doesn't matter because we all make mistakes, and knowing how to manage your foibles is just as important as knowing what your strengths are, or what makes you tick.
98
+
99
+ **Steve Jobs:** You know, I've actually always found something to be very true, which is most people don't get those experiences because they never ask. I've never found anybody that didn't want to help me if I asked them for help. I always call them up -- I called up... This will date me, but I called up Bill Hewlett when I was 12 years old. And he lived in Palo Alto; his number was still in the phonebook. And he answered the phone himself, he said, "Yes." I said, "Hi, I'm Steve Jobs, I'm 12 years old. I'm a student in high school, and I want to build a frequency counter... And I was wondering if you had any spare parts I could have." And he laughed... And he gave me the spare parts to build this frequency counter, and he gave me a job that summer in Hewlett Packard, working on the assembly line, putting nuts and bolts together on frequency counters. He got me a job in the place that built them, and I was in heaven.
100
+
101
+ And I've never found anyone who said, “No” or hung up the phone when I called. I just asked. And when people ask me, I try to be as responsive, and to pay that gratitude back. Most people never pick up the phone and call, most people never ask... And that's what separates, sometimes, the people that do things from the people that just dream about them. You’ve got to act, and you've got to be willing to fail; you've got to be willing to crash and burn... You know, with people on the phone, with starting a company, with whatever. If you're afraid of failing, you won't get very far.
102
+
103
+ **Gerhard Lazu:** That is another great story, thank you for sharing it with us. So once you have all these great people that really love what they do, self-manage and make mistakes in order to learn, how do you organize as a company?
104
+
105
+ **Steve Jobs:** One of the keys to Apple is Apple's an incredibly collaborative company. Do you know how many committees we have at Apple? Zero. We have no committees. We are organized like a startup; one person is in charge of iPhone OS software, one person is in charge of Mac hardware, one person is in charge of iPhone hardware engineering, another person is in charge of worldwide marketing, another person is in charge of operations. We're organized like a startup. We're the biggest startup on the planet. And we all meet for three hours once a week and we talk about everything we're doing, the whole business. And there's tremendous teamwork at the top of the company, which filters down to tremendous teamwork throughout the company. And teamwork is dependent on trusting the other folks to come through with their part without watching them all the time, but trusting that they're going to come through with their parts. And that's what we do really well. We're great at figuring out how to divide things up into these great teams that we have, and all work on the same thing, touch bases frequently, and bring it all together into a product. We do that really well.
106
+
107
+ \[28:09\] And so what I do all day is meet with teams of people, and work on ideas, and solve problems, to make new products, to make new marketing programs, whatever it is. If you want to hire great people and have them stay working for you, you have to let them make a lot of decisions and you have to be run by ideas, not hierarchy. The best ideas have to win, otherwise good people don't stay.
108
+
109
+ **Gerhard Lazu:** I think that there are too many good ideas out there, but not enough people that stick with them for a few years to see them through. In other words, they never get to benefit from the compound interest of learning from a cluster of mistakes.
110
+
111
+ **Steve Jobs:** I think that without owning something over an extended period of time, like a few years, where one has a chance to take responsibility for one's recommendations, where one has to see one's recommendations through all action stages, and accumulate scar tissue for the mistakes and pick oneself up off the ground and dust oneself off, one learns a fraction of what one can. Your coming in and making recommendations and not owning the results, not owning the implementation I think is a fraction of the value and a fraction of the opportunity to learn and get better. And so you do get a broad cut at companies but it's very thin; it's like a picture of a banana... You might get a very accurate picture, but it's only two-dimensional. And without the experience of actually doing it, you never get three-dimensional. So you might have a lot of pictures on your walls, you can show it off to your friends, you can say like, "I've worked in bananas, I've worked in peaches, I've worked in grapes..." but you never really tasted.
112
+
113
+ **Gerhard Lazu:** I have to admit that at the beginning of my career my strategy was to stay with the company for 12 months and then move. I wanted to learn and grow quickly by working with as many people as I could and explore as much as possible... And I do have to say that it worked well. But after about a decade of doing this, I realized that it was time to go deep in specific problem areas, the ones that I enjoyed the most, and that worked even better. So it is important to know when it is time to go broad and when it is time to go deep. And if you pay attention, you will know.
114
+
115
+ Okay, I would like us to go back now into the code and hardware topics. Do you think of Apple as a software or a hardware company?
116
+
117
+ **Steve Jobs:** We think we should be a software company and a hardware company. The charter of our hardware division is to make the best hardware; it might not be the cheapest, it might not be this, might not be that, but we think all in all we can make the best stuff. So the more innovative the product is, the more revolutionary it is and not just an incremental improvement, the more you're stuck, because the existing channel is only fulfilling demand.
118
+
119
+ So how does one bring innovation to the marketplace?
120
+
121
+ We believe the only way we know how to do it right now is with the direct sales force, out there in front of customers, showing them the products in the environment of their own problems, and discussing how those problems can be made with these solutions.
122
+
123
+ A software-only company could never afford to field the direct sales force. With average selling prices of $500 a software package, you could never afford professionals in the field. With an average selling price of $5,000, you can. And that's why I don't think we're going to see any more system software companies succeed. I don't think it's possible to fund the effort to educate the market about a revolutionary product with ASPs that low. And if it's not a revolutionary product, I don't think the company can succeed.
124
+
125
+ \[32:10\] So our strategy has been that we've got to be a hardware company in order to make our software business succeed, and we think we can do really well at both of them. I know that's a long answer, but it's a complex problem, too.
126
+
127
+ **Gerhard Lazu:** With that in mind, what would you say is Apple's greatest strength, hardware or software?
128
+
129
+ **Steve Jobs:** A lot of times both in people and in organizations, your greatest strength can be your greatest weakness, or your greatest weakness can be your greatest strength. Apple has been highlighted as having an incredibly great weakness of being totally vertically integrated. Well, it doesn't make its own semiconductors, but it makes the hardware, it makes the software, it controls the user experience... Many people are constantly calling for Apple to get out of the hardware business because of that weakness that they perceive. I don't agree with that. I perceive it as a potential weakness if not managed right. I also perceive it as Apple's greatest strength, if managed right.
130
+
131
+ The fact that Apple controls the product design from end-to-end, hardware, software, gives Apple an incredibly unique opportunity. It's the only company in the industry that does that... An incredibly unique opportunity to tackle some of these really gnarly, complex problems that could have enormous potential advantage in the market if we could solve them... And I think solve them literally a half a decade to a decade sooner than the 93 headed monster out there in the Wintel space. Now they have their advantages too, don't get me wrong... But I think one of our great advantages is that we can really have the vision that spans all the disciplines, we control all the disciplines to actually implement a vision much faster if we can get ourselves all going in a few directions.
132
+
133
+ **Gerhard Lazu:** Yes, that matches my experience. Because while the font is the thing that hooked me for life, it was the iMac G3 which attracted me to that computer in the first place. And if hardware is your greatest strength, that leaves software as your competitive advantage, right? Because I think that you need to get both right to win big, as you have been winning.
134
+
135
+ **Steve Jobs:** A lot of times you don't know what your competitive advantage is when you launch a new product. Some really big companies came to us and said, “You don't understand what you've got. The same software that allows Lotus to create their apps 5-10 times faster, is letting us build our in-house mission-critical apps 5-10 times faster, and this is the biggest problem we've had. This is a huge problem for every big company and almost all medium-sized companies and you have the solution in your hands and you dummies don't even know it."
136
+
137
+ And it took them about three months before we finally heard it, and in last summer, we changed our whole sales and marketing strategy around to focus on that, and it's taken off like a rocket. We grew about 4x last year and we'll probably grow about 2x this year, and our customer list is now very strong and growing like crazy. We just got back from spending a few days in DC and in New York, and we're talking to customers we only dreamed of talking to a year ago.
138
+
139
+ **Gerhard Lazu:** I have to ask, why not hardware as a competitive advantage?
140
+
141
+ **Steve Jobs:** The greatest thing is hardware is really -- hardware churns every 18 months; it's pretty impossible to get a sustainable competitive advantage from hardware. If you're lucky, you can make something 1.5 or 2 times as good as your competitor, which probably isn't enough to be quite a competitive advantage, and it only lasts for six months. But software seems to take a lot longer for people to catch up with. I watched Microsoft take eight or nine years to catch up with the Mac, and it's arguable whether they've even caught up.
142
+
143
+ **Gerhard Lazu:** \[36:02\] Okay, I get it. Even though you do currently have maybe a two times advantage with your new ARM-based architecture, I don't expect it to last long enough to make a meaningful difference. And meanwhile, for 20 years, no one has caught up with your font. So yes, it all makes sense.
144
+
145
+ Speaking about hardware, one thing that I started thinking about more and more in recent months is the possibility of doing all my coding on remote hosts. The GitHub Codespaces conversation from Changelog episode 459 is the latest nudge in that direction. And while the experience is not what I imagined - I feel at home in Vim, K9s and Tig - I really see the potential, especially with the recent shift to remote work.
146
+
147
+ **Steve Jobs:** Much of the great leverage of using computers these days is using them not just for computationally intensive tasks, but using them as a window into communication-intensive tasks. And never have I seen something more powerful than this computation combined with this network technology that we now have. I just want to focus on something that's very close to my heart, which is living in a high-speed networked world to get your job done every day. Now, how many of you manage your own storage on your computers? How many of you backup your computers, as an example? How many of you have had a crash in the last three years, four years? \[laughter\] Right. Okay. Let me describe the world I live in. We had high-speed networking connected to our now obsolete NeXT hardware, running NeXTSTEP at the time, and because we were using NFS, we were able to take all of our personal data - our home directories we call them - off of our local machines and put them on a server. And the software made that completely transparent. And because the server had a lot of RAM on it, in some cases it was actually faster to get stuff from the server than it was to get stuff off your local hard disk, because in some cases it would be cached in the RAM of the server if it was in popular use. But what was really remarkable was that the organization could hire a professional person to backup that server every night, and could afford to spend a little bit more on that server, so maybe it had redundant disk drives, redundant power supplies.
148
+
149
+ And you know, in the last seven years, do you know how many times I have lost any personal data? Zero. Do you know how many times I’ve backed up my computer? Zero. I have computers at Apple, at NeXT, at Pixar and at home. I walk up to any of them and log in as myself, it goes over the network, finds my home directory on the server, and I've got my stuff wherever I am. Wherever I am. And none of that is on a local hard disk.
150
+
151
+ **Gerhard Lazu:** I can see the similarity in our thinking, and this makes me wonder about that higher-order bit. Because I think the experiences we have just shared are lower order.
152
+
153
+ **Steve Jobs:** \[39:32\] I believe that you can use the concept of technology windows opening and then eventually closing. What I mean by that is enough technology, usually from fairly diverse places, comes together and makes something that's a quantum leap forward possible. And it doesn't come out of nowhere; if you poke around the labs and you hang around, you can kind of get a feel for some of those things. And usually, they're not quite possible, but all of a sudden you start to sense things coming together and the planet’s lining up to where this is now possible, or barely possible... And a window opens up. And it usually takes around - in my experience anyway - five years to create a commercial product that takes advantage of that technical window opening up. Sometimes you start before the window’s quite open and you can't get through it, and you push it up and you push it up... Sometimes it just takes a lot of work. It took that long with the Apple II, it took that long with the Mac, it took a Lisa along the way, $100 million... It takes a while, it's a lot expensive to push those windows open.
154
+
155
+ And in our case, our first product failed. We came out with this cube, and we sold 10,000 of them. Why? Because we weren't quite there yet, and we made some mistakes along the way, and we had to course-correct. You know, Macintosh was a course correction off the Lisa. With Apple II and III we did it in reverse... \[laughter\] But it takes around five years or some number of years like that to realize that window opening, and then it seems to take about another five years to really exploit it in the marketplace. These things are hard, it's not—they don't last because it's convenient or even because it's economic. They last because this is hard stuff to do.
156
+
157
+ And so when we are pushing that window open -- I think with our current generation of products, we finally got the window open. After six years, it's open, we've got an extremely elegant implementation, and we've got five years of work to do to exploit it in the marketplace. And we’ll peak in five years. Five years, we'll all sit around and say," Okay, it's time to get started on the next thing. It's time to get going on the next thing." Maybe four years from now. But we've got a lot of work ahead of us just to move this thing out and educate the market, and continue to refine it based on market feedback. So everything I know about technology windows that are open or just about open is in NeXTSTEP, where we're working on it in the labs. And these things generally don't come along independently; kind of clumps of them come together, has been my experience.
158
+
159
+ **Gerhard Lazu:** Even if you know about these technology windows, I suspect that none of this works without a well-oiled shipping machine, or supply chain, as some call it, and essentially going from idea, to prototype, to a final product as fast as possible. How did you manage to solve this really hard problem?
160
+
161
+ **Steve Jobs:** One of the key things that manufacturing can contribute to competitive advantage is time to market. Why is that? Because the way most things work is you design your product here, and after you're done, you throw it over the wall and you design your manufacturing process here, right? Sorting out a bunch of things that maybe weren't done right here, fixing them, changing them, and then completing the process design. What you want to do is do this and ship it right here, while your competitors are still here. And that's what we've been able to do in many cases.
162
+
163
+ What we do is we suck data out of our CAD systems and engineering, we zing them around over the local network, and in our own computers we compute all of the robot placement programs with fully optimized paths, we compute all the vision system programs, we check it against the bill of materials in the MIS system, and we download it to the robots and we're ready to build a board, lot size of one, in between two production CPU boards, on the line, full surface mount with all of our automation technology.
164
+
165
+ Now the key is that manufacturing did that so well for engineering that we haven't built a prototype in engineering for two years. We haven't built a wire wrap or any other kind of prototype in engineering for two years; everything has been built in the factory. Now, what does that mean? What that means is manufacturing gets involved from day one. Because the fact that the engineering guys call the manufacturing guys and go, “Hey, we want to build a prototype. We're going to need these special parts in the thing. Take a look at this, tell us what you think. We'd like to do it tomorrow. Let us know if that's okay", blah, blah, blah. They get involved from day one.
166
+
167
+ \[44:15\] Secondly, a lot of times when you build prototypes, it's not quite the same technologies you're going to use in production. And so all the accumulated knowledge you get from building your prototypes you throw away when you change technology to go into production, and you start over in that accumulation process. Because we don't change technology, we don't throw anything away, we don't waste time. And it's led to one of the healthiest relationships between an engineering and manufacturing group I've ever seen in my life. They're all working off the same databases, they're all working on the same processes... They're all working in a very disciplined process environment, to where when any processes are changed, they all get together and review the proposals, and all buy into it. It's not that hard.
168
+
169
+ The key to it all though was we didn't go out and hire a bunch of manufacturing people; we went out and hired engineers.
170
+
171
+ **Gerhard Lazu:** Whenever I hear anyone mention manufacturing and factories, my mind goes straight to “The Goal” and “It’s Not Luck”, both amazing books written by Goldratt. And one of the key takeaways from those books for me was the inventory and work-in-progress, or Git branches and PRs, as we know them in today's software industry. A low work-in-progress is essential to keeping the shipping machinery running at its ideal capacity.
172
+
173
+ **Steve Jobs:** One of the things you learn when you start building factories is that warehouses are really bad, right? Warehouses are bad, because you tend to put things in them. And inventory is really bad; inventory is really bad because if it's defective, you don't find out about it for a while, and you don't close the quality feedback loop with a vendor, and correct the problem till they've made a zillion of them. What you want to do is find a problem, the first one that comes in the door, and stop them from making more until you fix the problem. So warehouses also cost money, because you put all this stuff in them, and the stuff - you have to go borrow money from the bank or use money that could be used in a more productive purpose. So warehouses are bad, and you want to go to JIT; I'm sure you've studied this all, and studied examples.
174
+
175
+ I was walking through the Mac factory one day, and the two biggest pieces of automation we put in were a giant small parts storage and retrieval system; there were these totes that ran around. And the second one was this giant burn-in system at the end; a few tens of millions of dollars worth of equipment. And I realized, unfortunately too late, that both of them are warehouses. They're just high-tech warehouses. So when we looked at NeXT, we said, "No warehouses of any kind. We have a true JIT factory. Stuff comes in and is delivered right to the point of use on the factory floor. There is no warehouse. Deliveries are made daily, sometimes more frequently than that. There is no outgoing warehouse. Everything is visible."
176
+
177
+ And the reason that we were able to do a lot of what we've done is because when we were learning about manufacturing Mac, we hired a Stanford Business School professor at the time named Steven Wheelwright... And he did a neat thing, he drew on the board a chart. The first time I met him, he said, “You can view all companies from a manufacturing perspective this way. You can say -- there's five stages. Stage one is companies that view manufacturing as a necessary evil. They wish they didn't have to do it, but damn it, they do. And all the way up through stage five, which is companies that view manufacturing as an opportunity for competitive advantage.” We can get better time to market and get new products out faster. We can get lower costs. We can get higher quality.
178
+
179
+ \[47:58\] And in general — you can sort of put the American flag here, and put the Japanese flag in here... \[laughter\] And that's changing, however. That's changing. And it's changing because people like you are going into manufacturing. Big companies are starting to realize that we were great at this one time, and then we took it for granted. And people are starting to pay good salaries now and get good people. So we want to be one of these and we try very hard.
180
+
181
+ By the way, just going back to software for a minute... I often apply this scale to computer companies and how they look at software. See, I think most computer companies are stage one - they wish software had never been invented. I think there's only three companies here, and that's us, Apple and Microsoft in stage five. We start everything with the software and work back. But anyway, going back to manufacturing... We started looking at the factory as a software problem. And the first people we hired in the factory were some software engineers; we convinced them to move from R&D into manufacturing, which was not easy. We had to give them bonuses, we had to cajole them, we had to promise them they could come back if they hated it... And they went over there, and we said “This is really just a software problem with interesting I/O devices called robots, that's all it is.” And so we started building the software first.
182
+
183
+ And our first robots that we got, we spec'd them out, and we bought them completely turnkey, with the robot arms on them and all the electronics, and the software to control them. And we spec'd it out, but we didn't write it. And they worked okay. Some of them are still in use, but they weren't great. And being software folks - we weren't real happy. They weren't elegant. We couldn't do what we wanted with the robots. We couldn't tie in a quality information system to them, and all this other stuff we wanted.
184
+
185
+ So the second generation, we spec'd out the hardware and had somebody build the hardware for us, but we wrote all the software on our own computers. We’re object-oriented, so we started writing robot objects, quality objects, all sorts of objects to control this factory. And we found out our computer was great for it. And so our whole factory now runs on this object-oriented factory and quality system. The last generation, our latest generation of robots, which we've deployed this year, we actually built the hardware.
186
+
187
+ I've been to Japan a lot of times, maybe 30-40 times, and I loved to tour factories over there... And they always amaze me, because they built everything themselves, they weren't afraid of anything. They needed a robot—they'd try to buy one, but if they couldn't, they'd actually engineer it and build it. And you'd think this was really expensive, but we found out it's pretty cheap. It's actually cheaper than buying them. And so we've actually now designed and spec'd our own robots; we don't mill the metal or anything we get them all made we put them all together, and we do the software top to bottom, and we have now some extraordinarily advanced robots in the factory. And our computers are built start to finish on the key components, completely untouched by human hands.
188
+
189
+ **Gerhard Lazu:** Wow, this is fascinating. And I want to make sure that I understood it... So automation is key; only by understanding and owning the entire stack, can you make a meaningful difference, and using robots or automation for tasks which don't challenge the human imagination or empathy is the way to go. This is priceless, Steve, thank you.
190
+
191
+ Now that we are preparing to wrap up, what would you like listeners to take away from this conversation?
192
+
193
+ **Steve Jobs:** People say, you have to have a lot of passion for what you're doing, and it's totally true. And the reason is because it's so hard that if you don't, any rational person would give up. It's really hard, and you have to do it over a sustained period of time. So if you don't love it, if you're not having fun doing it, you don't really love it, you're going to give up. And that's what happens to most people, actually. If you really look at the ones that ended up being successful “in the eyes of society” and the ones that didn't, oftentimes the ones that are successful loved what they did, so they could persevere when it got really tough. And the ones that didn't love it, quit, because they're sane, right? Who would want to put up with this stuff if you don't love it?! So it's a lot of hard work, and it's a lot of worrying constantly. And if you don't love it, you're going to fail. So you’ve got to love it, you’ve got to have passion, and I think that's the high-order bit.
194
+
195
+ \[52:27\] The second thing is you've got to be a really good talent scout, because no matter how smart you are, you need a team of great people. And you've got to figure out how to size people up fairly quickly, make decisions without knowing people too well, and hire them, and see how you do, and refine your intuition, and be able to help build an organization that can eventually just build itself, because you need great people around you.
196
+
197
+ **Gerhard Lazu:** Thank you, Steve, for sharing so much with us, and for caring enough about personal computers. This changed my life for the better. I followed your advice, and asked for help with the ending. What Jony Ive said ten years ago at your memorial is just as meaningful today. Here it goes.
198
+
199
+ **Jony Ive:** Steve used to say to me, and he used to say this a lot... “Hey, Jony, here is a dopey idea...” And sometimes they were, really dopey. Sometimes they were truly dreadful. But sometimes they took the air from the room, and they left us both completely silent. Bold, crazy, magnificent ideas, or quiet, simple ones, which in their subtlety, their detail, they were utterly profound.
200
+
201
+ And just as Steve loved ideas and loved making stuff, he treated the process of creativity with a rare and a wonderful reverence. You see, I think he, better than anyone, understood that while ideas ultimately can be so powerful, they begin as fragile, barely-formed thoughts, so easily missed, so easily compromised, so easily just squished. I loved the way that he listened so intently. I loved his perception, his remarkable sensitivity and his surgically precise opinion. I really believe there was a beauty in how singular, how keen his insight was, even though sometimes it could sting.
202
+
203
+ He used to joke that “the lunatics had taken over the asylum”, as we shared a giddy excitement, spending months and months working on a part of a product that nobody would ever see, or not with their eyes. But we did it because we really believed that it was right, because we cared. He believed that there was a gravity, almost a sense of civic responsibility to care way beyond any sort of functional imperative.
204
+
205
+ \[55:29\] Now, while the work hopefully appeared inevitable, appeared simple and easy, it really cost. But you know what? It cost him most. He cared the most, he worried the most deeply. He constantly questioned, "Is this good enough? Is this right?" And despite all his successes, all his achievements, he never assumed that we would get there in the end. When the ideas didn't come, and when the prototypes failed, it was with great intent, with faith, he decided to believe we would eventually make something great.
206
+
207
+ I loved his enthusiasm, his simple delight... Often, I think, mixed with some relief. But we got there. We got there in the end, and it was good. You can see his smile, can't you? The celebration of making something great for everybody, enjoying the defeat of cynicism, the rejection of reason, the rejection of being told 100 times “You can't do that.” So his, I think, was a victory for beauty, for purity. And as he would say, “For giving a damn.”
208
+
209
+ He was my closest and my most loyal friend. We worked together for nearly 15 years, and he still laughed at the way I said, “Aluminium.” Thank you, Steve. Thank you for your remarkable vision, which has united and inspired this extraordinary group of people. For all that we have learned from you and for all that we will continue to learn from each other, thank you, Steve.
210
+
211
+ **Gerhard Lazu:** And that's it for this special episode of Ship It, where we remembered Steve Jobs and all the learnings and special moments that he shared with us throughout his life.
212
+
213
+ I want to thank Andrew for showing me what it means to be a great friend, and a loyal Apple employee. I still have fond memories from when we used to pair at Level 39, and it's been a joy seeing you grow over the years into a great manager at Apple. Everything worked out great.
Kaizen! Are we holding it wrong_transcript.txt ADDED
@@ -0,0 +1,809 @@
1
+ **Gerhard Lazu:** So we're back for the third Kaizen. I can't believe it's been 30 episodes, and I'm not the only one. Adam can't believe either that it has been 30 episodes of Ship It.
2
+
3
+ **Adam Stacoviak:** Yeah... It really is insane, honestly... I mean, this show was just an idea recently. I think anybody who makes things come to life from nothing is always flabbergasted by the creation, I suppose, once you sort of get into it... But podcasting is a little bit different, because it really is a journey. It's a journey pre-production, and it's a journey post-production. Now we're obviously post-production, 30 episodes in, and I think it's just kind of crazy, looking back and thinking this was just an idea... And then in particular to podcasts, the impact to us and to the audience. That's why I love it. That's why I love the game.
4
+
5
+ **Gerhard Lazu:** Yeah. I mean, we shipped it, right? It took us a while; it took us five months to ship the first three episodes... And then it was like a roll. What blows my mind is that my mind is on episode 40. And most people don't realize this. The next five episodes are pretty much locked in. The guests, the topics, the flow... And even the five ones after that are nebulous, nothing locked in for real, but it's coming... So for me, it's even more mind-blowing, because I'm already like in February. I'm thinking February right now.
6
+
7
+ **Jerod Santo:** Yeah, you just live in the future. I think you might be the most prepared and scheduled out podcaster in the entire universe, Gerhard.
8
+
9
+ **Gerhard Lazu:** \[laughs\] Okay... I want to think that's a compliment...
10
+
11
+ **Jerod Santo:** I'm happy that I got us scheduled out through December, but you're -- no, it is.
12
+
13
+ **Gerhard Lazu:** Thank you.
14
+
15
+ **Adam Stacoviak:** That's a compliment.
16
+
17
+ **Gerhard Lazu:** I don't want to leave myself open to unique encounters and like...
18
+
19
+ **Jerod Santo:** \[04:15\] Yeah, that's a challenge. Serendipity is taken out when you're scheduled out.
20
+
21
+ **Gerhard Lazu:** That is a great word. I haven't heard it in a while. I thought I was the only one using it. Okay...
22
+
23
+ **Jerod Santo:** Happy to surprise and delight.
24
+
25
+ **Gerhard Lazu:** Right. Well, thank you very much in which case, Jerod. I appreciate that. Thank you. And what I'm really excited about is -- I don't think many people realize this, but there's like a theme to this; there are like multiple themes. A couple of episodes, they kind of cluster together, and there's a build-up... And a lot of the episodes that we had -- like the last 10-15 ones, they're leading to something. They're building to something. And that will be the Christmas episode, episode 33, which I'm very excited about. We'll come back to that a bit later, but... One of the things which is on my mind is incident 2. Our last episode, 20, our last Kaizen, episode 20, was all about incidents. We called it "Five incidents later."
26
+
27
+ **Adam Stacoviak:** Yeah.
28
+
29
+ **Gerhard Lazu:** And there was something which I wanted to understand, which I didn't at the time... which was: why was an unhealthy pod put back into service? Do you remember that?
30
+
31
+ **Jerod Santo:** I do remember that. We didn't have answers.
32
+
33
+ **Gerhard Lazu:** Yes. So my answer is we're using the "latest" tag. What that means is that if something is unhealthy, and it has to go back to the previous one, it will use the "latest" tag. But "latest" has moved on. So it doesn't keep the old SHA, the one that was working; it says "always the latest." So if you were to go back, then you always go back to the latest. And by the way, the latest already moved, so that's like the broken version.
34
+
35
+ **Jerod Santo:** Oh, you're pointing back to the same version, which is broken.
36
+
37
+ **Gerhard Lazu:** Exactly. Exactly.
38
+
39
+ **Jerod Santo:** Why are we doing that?
40
+
41
+ **Gerhard Lazu:** Um, some corners have been cut... \[laughter\]
42
+
43
+ **Adam Stacoviak:** The honesty, I love it.
44
+
45
+ **Gerhard Lazu:** ...and that worked well for quite some time. So I have to say that even though those corners have been cut, there was like a trade-off to be made. It was like a conscious trade-off... And it only failed once. So that trade-off has bit us once.
46
+
47
+ **Jerod Santo:** Right.
48
+
49
+ **Gerhard Lazu:** But I think it is high time that we revisit the whole GitOps approach. The GitOps approach that we have, but not really have, to how we run our infrastructure. So while we do version all the manifests, and everything is in the repo, and we apply them, some manifests reference "latest", and "latest" can move. So we cannot -- basically, right now we don't capture everything we run at the SHA that we run. So Ingress NGINX, external DNS - we have versions for those, but for our app, we have "latest".
50
+
51
+ The thinking goes we always want to be running "latest". When do you not want to run "latest"? Apparently, when "latest" is broken.
52
+
53
+ **Jerod Santo:** \[laughs\] Exactly. The one time when you definitely do not.
54
+
55
+ **Gerhard Lazu:** That's when you don't want to run "latest". \[laughs\] But that's something that -- yeah, we will be investing in. I will be spending a bit of time on that, among many other things. But that explains incident 2, which I didn't have an explanation for ten episodes ago.
56
+
57
+ **Jerod Santo:** Yeah. How did you learn of this?
58
+
59
+ **Gerhard Lazu:** Um, I looked at the manifest, and I tried to understand what happens. So I went through the steps of what would happen, or of what happens in Kubernetes when -- like, the new one gets put in service, it fails, the old one crashes, and when it gets restored, it gets restored with "latest". So that's what happens.
60
+
61
+ **Jerod Santo:** \[07:56\] So my developer brain sees something like this, and I think infinite loop. Is that going on here, or does it just fail? Because if it runs "latest", "latest" is broken, it runs "latest", "latest" is broken... Does it just keep doing that over and over again?
62
+
63
+ **Gerhard Lazu:** Yeah. So in our case, what happened was that the version that was running - that crashed. Because it's just meant to restore it, right? It crashes - not a problem, it will come back.
64
+
65
+ **Jerod Santo:** Right.
66
+
67
+ **Gerhard Lazu:** But when it comes back, it doesn't know which version it should come back with, because it has "latest", and it resolves that when it boots. And "latest" has moved along, which is where the problem comes from. So we need to capture the version of the app that we want to run. Not the app, it's the app container image. Currently, because we use "latest", that always changes. So yeah...
68
+
69
+ **Adam Stacoviak:** That's a challenge.
70
+
71
+ **Jerod Santo:** It's always nice to get answers to mysteries...
72
+
73
+ **Gerhard Lazu:** Yes. I love a good mystery, especially when I have an answer for it...
74
+
75
+ **Jerod Santo:** Exactly.
76
+
77
+ **Gerhard Lazu:** Otherwise it drives me crazy. I hate it. Like, "Oh, \*\*\*\*! What's the answer?!"
78
+
79
+ **Jerod Santo:** It's like that show, Unsolved Mysteries, which I always avoided, because... Come on, give us the solution already. Have you guys ever watched that one? It's probably dead now, but back in the day they would show these mysteries and they're like, people who are actively being sought by FBI, or whatever... And there's no solution. At the end they're like, "If you know where this person is, please let us know."
80
+
81
+ **Gerhard Lazu:** Unsolved cases.
82
+
83
+ **Jerod Santo:** And I'm always like, "I want the solution!"
84
+
85
+ **Adam Stacoviak:** Yeah. It's those shows that don't have endings essentially that get me. It's like, "I can't watch that..." It drives me crazy.
86
+
87
+ **Jerod Santo:** Yeah.
88
+
89
+ **Adam Stacoviak:** Okay... So what are we doing to solve this then? If "latest" can't be used, how do we uncut that corner?
90
+
91
+ **Gerhard Lazu:** So right now we have Keel.sh, which basically watches the Docker image updates, and when there is an update, it will just basically update the deployment. But what we have in the deployment, it's also "latest". So we need to use GitOps properly. What that means is commit in the manifest the version of the app that should be running, and that should automatically be applied, which is where Argo CD comes in, or something like that. I'm thinking Argo CD; maybe there will be something else.
92
+
93
+ So basically, the infrastructure gets continuously reconciled with what is versioned in the repo, and what we version in the repo is the app updates. So when a new image is built, there will be a new push to the repo, a new commit to the repo, which has the exact version of the app that should be running, and there'll be a reconciler which will make sure that that is true. And that's currently what we don't have.
94
+
95
+ So, finish GitOps... We're 90%, maybe 95% there. Because we version the manifests, but we don't update them when the app updates. And we don't apply them when the app updates. So that's what's missing.
94
+
95
+ So finish GitOps... We're 90%, maybe 95% there. Because we version the manifests, but we don't update them when the app updates. And we don't apply them when the app updates. So that's what's missing.
96
+
97
+ **Adam Stacoviak:** Is there like one place to learn exactly what the requirements are for GitOps to comply, I suppose? You could search on Google what is GitOps, and there's a lot of pages that describe what is GitOps.
98
+
99
+ **Gerhard Lazu:** I think GitOps.org is a good resource. That's the one that I would recommend for learning what GitOps is. And in a few episodes we'll have Alexis from WeaveWorks, where we'll be talking all about GitOps.
100
+
101
+ **Adam Stacoviak:** So GitOps.org doesn't resolve to anything for me...
102
+
103
+ **Gerhard Lazu:** GitOps.tech. That's the one.
104
+
105
+ **Adam Stacoviak:** So this is what you would consider the canonical resource for learning about GitOps at least... It's gonna link out to WeaveWorks, it's gonna link out to a PDF, an ePUB book... So I guess this is a book, too?
106
+
107
+ **Gerhard Lazu:** So the last time when I've seen it -- I'm seeing this has a few updates. I wasn't aware of the book, so that must be something new...
108
+
109
+ **Adam Stacoviak:** It does say "We've just released our short book on GitOps."
110
+
111
+ **Gerhard Lazu:** \[11:57\] There you go. So that's the new element which I wasn't aware of. If you scroll down, you see push-based deployments, pull-based deployments, which is what we have, by the way... We have a pull-based deployment model. And WeaveWorks were the ones that coined the term of GitOps, and this is the canonical resource, for me at least, when it comes to GitOps.
112
+
113
+ **Adam Stacoviak:** Okay. So they have this graph down there... Or, sorry, this -- what do you call this thing? Infographic, I guess... A graphic to look at, essentially outlining what --
114
+
115
+ **Jerod Santo:** Is there information on the graphic?
116
+
117
+ **Adam Stacoviak:** Say again?
118
+
119
+ **Jerod Santo:** Does the graphic have information on it?
120
+
121
+ **Gerhard Lazu:** Yes.
122
+
123
+ **Adam Stacoviak:** It does have information on it.
124
+
125
+ **Jerod Santo:** Oh, that's a classic infograph then.
126
+
127
+ **Adam Stacoviak:** That's right. It's really just a graphic of what the flow is, from application repository all the way to deployment, what should happen in there. So are you seeing that we're somewhat adhering to this push-based deployment graph here, this idea?
128
+
129
+ **Gerhard Lazu:** Yes. The difference is that in the pull-based deployment there's an operator that observes the image registry, and then updates the environment repository. The environment repository is basically the one which stores the config for everything that's running in an environment. So basically, those would be our YAML manifests. Currently, that doesn't happen.
130
+
131
+ **Adam Stacoviak:** And the reason why this flow is prescribed is to prevent things like calling on "latest" when "latest" is broken.
132
+
133
+ **Gerhard Lazu:** Yes. Or "latest" changes. Because you don't know what you're running, so you're trying to capture your production as much as you can. Not just as much as you can - fully, down to the SHA. Not even to the version, because when you tag an image with a version, like v1.0, you can update the tag to point to a different SHA. So you want to point to a specific SHA, which will not change. It's like a Git SHA, but it's the equivalent in container images, which is what we would want.
134
+
135
+ **Adam Stacoviak:** Which is important for recovery from a disaster. So in this case, a disaster happened, the application failed, you needed to reboot, you rebooted, but you called upon latest, and latest wasn't right... So if you would have had continuity in place, the operator would have told the environment repository which SHA to point to, essentially, so that when you reboot, you don't pull from a broken "latest".
136
+
137
+ **Gerhard Lazu:** Yeah. So a couple of things had to go wrong in our case with incident 2. The version that was running - that one came down as well. So the version that was running came down, the pod had to be rebooted, and when the pod was restarted, because it was pointing to the latest, it pulled the broken version. So that happened as well, on top of latest being broken.
138
+
139
+ So it needs to be like a sequence of events for this to happen, which is what happened in our case, and that's why those are rare. So as I mentioned, in the year since I had this set up, it only happened once. It was enough for us to have an incident. It wasn't a major one, it was just a minor one, because production was up, everything was cached, we served from the CDN... We ARE serving from the CDN everything, except the authenticated users, except the dynamic requests. So not like the gets. This was like a post, a patch, and we have quite a few of those. I didn't actually realize how many of those we have... Because whenever we visit a link, like news and press, that's the most popular one we keep hitting; we keep doing a lot of posts. So there's that.
140
+
141
+ But anyways, it was like up for anyone that was casually browsing it; people could listen to podcasts. Only a few URLs that were not in the CDN were not available.
142
+
143
+ **Adam Stacoviak:** That's a good -- to your point, Jerod, the unsolved mysteries... If you listen to Kaizen 20, we solved some more mysteries for you. So if you left that conversation thinking "Gerhard, what actually happened behind the scenes?" Well, we've kind of recapped some of that, so... The mystery is solved for those unsolved mysteries of Kaizen 20. You're welcome.
144
+
145
+ **Gerhard Lazu:** \[16:12\] But I do have very exciting news... So not only we solved that mystery, we did something even better. And I think we discussed about this also in episode 20, about a tighter Honeycomb integration. So one of the things which we did since - we integrated Honeycomb with Fastly, with our CDN, so we can see a lot more details about how the CDN behaves. Which are the cache hits, which are the misses... I don't mean "misses" like the missus; I mean like M-I-S-S-E-S. There's no U there.
146
+
147
+ **Jerod Santo:** Solved clarification...
148
+
149
+ **Gerhard Lazu:** Yeah... \[laughs\] And we can just drill down, observe a lot of stuff... That's amazing. The level of visibility which we have right now - we can answer so many questions, including the pull requests which we had open. I'm going to fire it up now, because I forgot the exact number. There were some new pull requests since.
150
+
151
+ This is issue (not pull request) 383. "Why do some mp3 requests take 60 seconds or more, while others complete quicker?" So we have an answer to this question, as well as full visibility into how the CDN behaves, the app behaves, the Ingress NGINX, how it behaves and how they interact among one another... And some of the details which we get are fascinating. I can finally be properly curious in prod, and I didn't know what it meant until I did this integration, and some of the level of detail is just amazing.
152
+
153
+ We can for example see the top URLs, the top episodes by browser, by user agent, by data center, by country, by city... It's just so much insight. And this is just like the content stuff. Then it comes to the CDN. As I mentioned, the cache status; how many hits versus how many misses. We can slice and dice by audio requests. And rather than building dashboards, we can do something even more amazing, which is literally start with a query, and keep asking questions, and keep getting answers, until we understand what's happening.
154
+
155
+ **Adam Stacoviak:** So this is the first time we've been able to have observability to this level on our CDN. So to recap, we leverage it quite well, because all requests go through Fastly first, prior to hitting our application. So it would make sense that if you make that choice and lean that heavily, trust that much on your CDN - in this case we do; we trust Fastly, they do an amazing job for us, for many years now... But now we actually have observability into various specifics of how it operates for us, where we never had before.
156
+
157
+ **Gerhard Lazu:** Correct.
158
+
159
+ **Adam Stacoviak:** And this is thanks to the details and visibility that Honeycomb gives us.
160
+
161
+ **Gerhard Lazu:** Correct. Yeah. That was one of the big improvements since episode 20. And we can see the slowest requests, and we understand that the XML ones, like the sitemap or the feeds, are the slow ones; they take 5 seconds sometimes to load. The website is fairly fast; the only time when it gets slow is when we serve static assets from the website. So in the Phoenix app, when there's a cache miss in the CDN, it has to go to the app - actually, Ingress NGINX... Ingress NGINX has to go to the app, and the app has to serve a PNG, or a JPEG. It's usually a PNG. That's the one that took quite a bit of time. So I was looking at it -- was it earlier? Yes. Let me find it, it's right here. That was an interesting one. It was icon-small... No, it wasn't that one. Time elapsed. This was it. It was actually a GIF. News item, 1.4 minutes to serve it. That's how long it took to serve that news item GIF, all the way to Hong Kong. So someone from Hong Kong was accessing it...
162
+
163
+ **Adam Stacoviak:** \[20:37\] They were waiting that long, huh?
164
+
165
+ **Gerhard Lazu:** They were waiting that long because they had to go all the way to our data center in New York.
166
+
167
+ **Adam Stacoviak:** It’s probably a big GIF, too.
168
+
169
+ **Jerod Santo:** Yeah, they always are. I mean, GIFs are just large files, unfortunately.
170
+
171
+ **Adam Stacoviak:** Yeah, they tend to be megs. At least a meg, sometimes ten. Maybe 50, but...
172
+
173
+ **Gerhard Lazu:** Let's see... How big is it? We have that as well, that information. Body size... 18 kilobytes? No, it can't be.
174
+
175
+ **Jerod Santo:** No. Megabytes.
176
+
177
+ **Gerhard Lazu:** Like 18 million... Let's see.
178
+
179
+ **Adam Stacoviak:** Should we ask Siri to do some math for us again?
180
+
181
+ **Gerhard Lazu:** Yes, Siri. 18 million bytes.
182
+
183
+ **Jerod Santo:** We should ask Honeycomb to do that math for us.
184
+
185
+ **Gerhard Lazu:** Right. So that's the one thing which we need to set. I was setting some derived queries... But let's see. But not for this specific thing. 17 kilobytes -- 17 megabytes.
186
+
187
+ **Adam Stacoviak:** Yeah.
188
+
189
+ **Gerhard Lazu:** We have a 17-megabyte GIF. And serving it to Hong Kong, that’s how long it takes.
190
+
191
+ **Adam Stacoviak:** It’s pretty heavy, yeah.
192
+
193
+ **Jerod Santo:** Yeah. Sometimes we do lazy-load those, so you're not actually waiting, end user experience-wise. You can read what the news item is, and then as long as it takes a minute and a half to read, by then the image is loaded; it's still too long, but...
194
+
195
+ **Adam Stacoviak:** Yeah. Well, I don't think anybody's optimizing for reading -- unless your image, or something like that. Maybe you're optimizing for those things to be super-fast; large GIFs like that, for example.
196
+
197
+ **Jerod Santo:** Well, if we had it on a CDN in Hong Kong, it would be much faster.
198
+
199
+ **Adam Stacoviak:** Okay. That's the question I was thinking of asking... Like, okay, the observability lets us know this event happened, right? The event being this GIF was served from New York to Hong Kong at this speed, it's this size, etc.
200
+
201
+ The other question is it was a miss - so why was it a miss? These are questions we'll begin to answer ourselves as we dig into this. Okay, why was it a miss? Okay, now we know, and we’ve figured... What was the answer to that? Why was it a miss? Why was it a cache miss?
202
+
203
+ **Jerod Santo:** First, Hong Kong visitor of the day... Or it's expired, or who knows...
204
+
205
+ **Gerhard Lazu:** Yeah. I mean, those are kept in cache right on Fastly, and they can't cache like the entire Internet. Even for us, they can’t cache all of our content.
206
+
207
+ **Jerod Santo:** They can probably cache all of our content at all of their pops, and barely ever notice, don't you think?
208
+
209
+ **Gerhard Lazu:** They could, but I think the reason why they're not is because they have to shed some of the extra content that is not accessed within, I don't know, x hours, days, whatever. So they don't guarantee that everything will be in the CDN all the time, even though our headers asked for it to be in CDN for a few weeks, I believe or something. I'm not sure exactly this one... We can check to see how long it should be kept in the CDN for, this specific request, but as far as I remember, it’s just meant to be a few weeks at most. So if that wasn't accessed in a few weeks, then it may expire when it's requested again, which will be a miss.
210
+
211
+ **Jerod Santo:** Right.
212
+
213
+ **Adam Stacoviak:** Why don't they just make people pay for that desire then? I guess if you're a larger site, with much more assets than we have, then maybe that becomes more and more expensive... But it's in our affordance right now to ask them to do that.
214
+
215
+ **Gerhard Lazu:** Yeah, that’s a great, great idea.
216
+
217
+ **Adam Stacoviak:** \[24:02\] So why wouldn't they offer it as a service, like "Hey, just cache the whole thing indefinitely, and I’ll pay for it."
218
+
219
+ **Gerhard Lazu:** I would love us to be able to do that. All our stuff should be cached all over the world. I agree.
220
+
221
+ **Adam Stacoviak:** What’s our assets on stuff like that? What would be the weight, in terabytes?
222
+
223
+ **Gerhard Lazu:** No...
224
+
225
+ **Jerod Santo:** No.
226
+
227
+ **Adam Stacoviak:** Or in Gigs?
228
+
229
+ **Gerhard Lazu:** 100-150 Gigs? Not that much.
230
+
231
+ **Adam Stacoviak:** That's pretty reasonable. I mean, I can go buy a 14-terabyte hard drive for under 400 bucks.
232
+
233
+ **Gerhard Lazu:** Yeah, but you need to multiply that times how many copies you want, how many pops you want.
234
+
235
+ **Adam Stacoviak:** That’s true.
236
+
237
+ **Gerhard Lazu:** But still, you’re right, it's not a lot of data. I wish it was cached, and I wish we had an e-tag implementation, so that if the content doesn’t change, it won't expire from the cache. I mean, we have it configured, we have cache shielding, or pop shielding, which means that there should be at least one pop where this is always kept in cache. So if another one doesn't have it, it should get it from that pop, rather than come to us.
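From the app side, "the headers asked for it to be in the CDN" boils down to a couple of response headers. Here is a rough Elixir sketch of the idea only - the module name, values and header choices are illustrative, not the app's actual plug:

```elixir
defmodule ChangelogWeb.Plug.StaticCacheHeaders do
  @moduledoc "Illustrative only: ask browsers and the CDN to hold on to static files."
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    conn
    # what browsers may keep
    |> put_resp_header("cache-control", "public, max-age=86400")
    # what Fastly may keep; Surrogate-Control is read by the CDN and stripped
    # before the response reaches the client
    |> put_resp_header("surrogate-control", "max-age=31536000")
  end
end
```

The ETag piece Gerhard mentions would sit on top of this: if the CDN revalidates with If-None-Match and the content hasn't changed, the origin can answer 304 instead of re-sending the file.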
238
+
239
+ **Adam Stacoviak:** Right. And their network’s probably faster than ours.
240
+
241
+ **Gerhard Lazu:** Of course, yes.
242
+
243
+ **Adam Stacoviak:** Right. It should be at least, by design.
244
+
245
+ **Gerhard Lazu:** It's optimized, right? I mean, they should -- they have all the optimization, they have the best routing between their pops, which is how it should be. So you're right. But this, we've never had before, and this is the exciting thing. Now we know why our 99th percentile -- why we have such a bad tail latency. Because sometimes this stuff happens. We didn't have this visibility before, and that's the exciting stuff.
246
+
247
+ **Jerod Santo:** When does the law of diminishing returns come in?
248
+
249
+ **Adam Stacoviak:** The now known from the unknowns
250
+
251
+ **Gerhard Lazu:** I didn't hear any of you. \[laughs\] Do you want to try again?
252
+
253
+ **Jerod Santo:** When does the law of diminishing returns come in? Because you know, slow clients are slow. We can't make every request fast. Where do we know, "Now, we're just basically toiling away at something that's not worth our time anymore", versus "This is actually a valuable optimization"?
254
+
255
+ **Gerhard Lazu:** I'm really glad you brought this up, because we have -- this is something which we weren't able to see before... We have Apple Watches consuming MP3 files. And they are slow, so they take many, many minutes. Our longest consumer was something like 40 minutes. Imagine someone being connected to our website and consuming MP3s for 40 minutes. It was an Apple Watch, and there were a couple of others like that.
256
+
257
+ So when it comes to content that is not in the cache, I don't think we should spend much more time on that, except if we're talking about using an object store versus local store, but that's like a separate conversation. However, we should absolutely try to serve as much as we can from the CDN, especially when it comes to the static content. GIFs, PNGs, MP3s - all that stuff should be served directly from the CDN, which is exactly what Adam was suggesting.
258
+
259
+ **Adam Stacoviak:** I mean, it'd be different if we had an unreasonable ask from them; if it was like, terabytes and terabytes of data - that's unreasonable. But if it's like, sub-200 gigs, that's not unreasonable to ask a CDN, to in perpetuity hold that until it's expired.
260
+
261
+ **Gerhard Lazu:** What are you thinking, Jerod?
262
+
263
+ **Jerod Santo:** Well, this is what I've been saying for years. That's what I had been thinking. \[laughter\]
264
+
265
+ **Gerhard Lazu:** Okay, you're being facetious now, right? Facetious...
266
+
267
+ **Jerod Santo:** No. Facetious... No, I'm not. I've been saying it for years - can't they just cache our stuff forever, and just keep it and never request it again until we tell them that it's fresh? I understand that, okay, if we're going to do what Adam proposes, you're kind of becoming a snowflake, right? Like, "Hey, Fastly. Please treat us differently." But isn't there just a way that they can scale to all their customers, to let you say, "Don't ever request this again, please"?
268
+
269
+ **Gerhard Lazu:** I would love to have that conversation with someone from Fastly. I've been trying for years.
270
+
271
+ **Jerod Santo:** That’s what I’ve been saying for years. I don't want them to keep asking me for new --
272
+
273
+ **Adam Stacoviak:** Well, I don't want them to treat us differently either.
274
+
275
+ **Jerod Santo:** ...my ShipIt-28.mp3 hasn't changed, it's not going to change. It's never going to change. It’s never going to change.
276
+
277
+ **Adam Stacoviak:** \[28:07\] Right. We know it's never going to change. So, just keep them.
278
+
279
+ **Gerhard Lazu:** I will not name any names, the people that I reached out that I knew within Fastly, but if a listener knows someone within Fastly that wants to have this conversation, I would love to do that improvement... Because Honeycomb - this new integration showed us how much can improve within the CDN. And we are reaching diminishing returns within the app, within our own infrastructure, where the biggest wins right now are in the CDN.
280
+
281
+ **Adam Stacoviak:** Right.
282
+
283
+ **Jerod Santo:** For me, imposter syndrome sets in when I think "Surely, we're holding it wrong." You know, like the Steve Jobs response to the antenna on the iPhone 4 is "You're holding it wrong."
284
+
285
+ **Gerhard Lazu:** Yes.
286
+
287
+ **Jerod Santo:** I feel like we're just not using Fastly right...
288
+
289
+ **Adam Stacoviak:** All these years.
290
+
291
+ **Jerod Santo:** I mean, I understand how to set HTTP headers, and we use e-tags, and we set cache control, we've tweaked some stuff, but I just feel like we're not using it right for some reason, and that's why part of me is just wondering... That's where I like the toiling away, like "Well, how many times can we tweak the way that we tell Fastly to do things?" But I don't know. I just thought this is how CDNs work, is like "Hold on to it till it's fresh, please." That seems like a button you click in a click op somewhere, but I don't know.
292
+
293
+ **Gerhard Lazu:** Yeah. So I'm surprised when content that should be cached for -- now that I think of it, some of it is even cached like for a whole year. The stuff that we know is not going to change. And that content is being requested, even though it was requested before, and it's requested again, and it hasn't passed a year. So what's going on Fastly? I can’t answer that.
294
+
295
+ **Jerod Santo:** Right. I mean, our old episodes, the long tail of listens on our shows is bewilderingly awesome. Like, you go back to a show and you're like, "Wow, 33 people listened to this today", and it's four years old. Every day, our MP3s are being requested, pretty much all of them, plus or minus some outliers. So they shouldn't be expiring, unless you set the expiration to an hour, or 30 minutes, or six hours. But if we're setting it to a long time, I do not understand why we have so many cache misses.
296
+
297
+ **Adam Stacoviak:** Especially, I mean, given -- it'd be different if our content was highly volatile in terms of change. We're a media company, the things we create are long-term artifacts, so just by nature of the business we're bringing, like the character type we are, the persona, so to speak even, we know that the reason we're using the CDN is to be globally fast. And the data we're giving them to be globally fast doesn't change, if ever. So we want to be globally fast forever, and pay for that. And we put Fastly in front of everything to enable that, so that even if our app is down, we're still serving cache pages, and the same thing for our actual files, like MP3s and GIFs and things like that. Just by the nature of us being a media company or a media entity, the things we have tended to never change.
298
+
299
+ I think we've changed like an episode, just to go back and update... We call it a remastering, and we were doing that for a bit. We were remastering some of these shows Jerod was talking about, that had high degrees of listens, that are several years old. So rather than having that listener go back and listen to an old show and still be sort of like unimpressed by the audio quality in comparison to now, we went back and remastered those.
300
+
301
+ **Jerod Santo:** But we can also programmatically purge endpoints from Fastly by way of our system. It'd be easier to code that up. I just don't -- I've never done it, because I feel like it keeps purging anyways. And every once in a while, I'll hop in there and just purge one manually. Especially if it’s released...
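For reference, purging a single URL from Fastly can be as simple as sending an HTTP request with the PURGE method against that URL - the path below is made up for illustration, and authenticated purging would also send a Fastly-Key header:

```shell
curl -X PURGE https://changelog.com/uploads/shipit/28/ship-it-28.mp3
```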
302
+
303
+ **Adam Stacoviak:** I'm with you, Jerod. I feel like we're holding it wrong. I do. I feel -- I don't know why we're holding it wrong. It seems like the logical way a CDN should work is the way we think it does work... Yet we are holding it seemingly wrong.
304
+
305
+ \[32:12\] So yeah, listeners, if you're out there, if you know somebody at Fastly who knows more than we do... We have connections in there, but we've hit certain dead-ends on that front... But we'd love to have some help... Like, Fastly, come on this show. Come on YouTube with Gerhard and triage how we use our CDN and help us de-antennagate ourselves and hold it right. \[laughter\] You know what I mean? Let’s not CDN-gate ourselves here.
306
+
307
+ **Gerhard Lazu:** Over the years, we've had some epic support threads with Fastly. Epic. Some of them have not been solved.
308
+
309
+ **Adam Stacoviak:** Unsolved mysteries.
310
+
311
+ **Gerhard Lazu:** Many unsolved mysteries when it comes to Fastly.
312
+
313
+ **Adam Stacoviak:** Just hold it right, please.
314
+
315
+ **Gerhard Lazu:** I'm looking... So I think we're holding it right, but I think there's stuff happening within Fastly which we don't fully understand.
316
+
317
+ **Adam Stacoviak:** Right. And maybe that's just how it works. It doesn't make sense why it is that way. So if it works that way and that's how it does work, that seems odd, given the reason you'd use a CDN.
318
+
319
+ **Gerhard Lazu:** I think we can Kaizen Fastly. I think that's what you're getting to.
320
+
321
+ **Jerod Santo:** Yeah.
322
+
323
+ **Gerhard Lazu:** Because in the last 24 hours, we had 3,000 misses on MP3 files. This is in the last 24 hours.
324
+
325
+ **Adam Stacoviak:** That's incredible.
326
+
327
+ **Jerod Santo:** It doesn't make sense.
328
+
329
+ **Gerhard Lazu:** It doesn't make sense. Exactly.
330
+
331
+ **Adam Stacoviak:** The whole reason we engaged with Fastly originally, before we got to what we could do application-wise, was to deliver our MP3s globally, fast, forever. So to have 1,000 misses in the last 24 hours is egregious.
332
+
333
+ **Gerhard Lazu:** 3,000.
334
+
335
+ **Adam Stacoviak:** Especially--
336
+
337
+ **Gerhard Lazu:** That's crazy. I agree with you.
338
+
339
+ **Adam Stacoviak:** Triple that. 3x that. Because if we can have one pop -- so let's just say it’s a size requirement. Too much data, forever... Okay, sure. We have to purge somewhere. Fine. Then have one pop be the canonical. That one is forever. And then you can miss somewhere else and pull from your own pop fast, not from us.
340
+
341
+ **Jerod Santo:** Well, we shield through LaGuardia, so we should have that. LaGuardia should have it, if Hong Kong doesn't.
+
+ **Gerhard Lazu:** Exactly, yeah.
342
+
343
+ **Jerod Santo:** So I'm not super-clear if that still shows up as a miss, if Hong Kong misses but grabs it from LaGuardia, and it doesn't grab it from us. Gerhard, you know the difference? But—
344
+
345
+ **Gerhard Lazu:** Yeah... So I'm not sure, but that's something worth digging into. This is exactly—
346
+
347
+ **Jerod Santo:** Yeah. Let’s solve this mystery.
348
+
349
+ **Gerhard Lazu:** Exactly. How does this stuff work within Fastly? This is the first time we could have a really good conversation about this, because of this integration.
350
+
351
+ **Adam Stacoviak:** We have data now. We have wisdom. Before, we had assumption. Now we have like, "Look, here's Honeycomb."
352
+
353
+ **Gerhard Lazu:** Facts. Hard facts.
354
+
355
+ **Adam Stacoviak:** "This is where it goes. This is how it works." Yeah.
356
+
357
+ **Gerhard Lazu:** It’s amazing.
358
+
359
+ **Adam Stacoviak:** Even asking for support makes it so much harder, when you have no visibility into what's going on. Now we do, so we are armed with more data to support ourselves differently in our argument back like why things are not working the way they should be, or how we think it should be.
360
+
361
+ **Gerhard Lazu:** Yeah.
362
+
363
+ **Break:** \[35:19\]
364
+
365
+ **Adam Stacoviak:** So Jerod and I got some brand new computers recently, brand new M1 Macs, and like any new Mac, you take your sweet time setting it up... And in my case, Jerod, you may concur with your case, I'm doing it all manually. I'm not scripting anything this time, I want to take my time... Because the M1 Mac is so different, even Homebrew has a couple -- it has one slight variance in how you set it up with what you add to your, in my case, and I think yours too, Jerod, the .zshrc file. So there's a couple particulars to deal with, and I haven't gotten to the point yet to set up the app. Actually, I have, but I haven't done it yet. So my thought’s like if I'm setting up changelog.com for a dev environment on my new Mac - how? What's the way? The readme isn't super clear, there's a Docker path I'm not sure is still working... So what do we do? How do you do it? Have you set it up, Jerod? Where are you at?
366
+
367
+ **Jerod Santo:** I have not set it up yet, because I haven't needed to. I still have my old laptop right here, that I can use. I would not use Docker, because I didn't use Docker last time.
368
+
369
+ **Adam Stacoviak:** Okay.
370
+
371
+ **Jerod Santo:** I would set it all up individually. But maybe I'd even just procrastinate until we're on Codespaces. What do you think, Gerhard?
372
+
373
+ **Gerhard Lazu:** That's exactly what I'm thinking.
374
+
375
+ **Adam Stacoviak:** It’s even better.
376
+
377
+ **Jerod Santo:** \[laughs\]
378
+
379
+ **Gerhard Lazu:** That’s exactly what I’m thinking. The reason why--
380
+
381
+ **Jerod Santo:** I don’t even want to set it up if I don't have to.
382
+
383
+ **Gerhard Lazu:** Exactly. I uninstalled Docker about six months ago, or four months ago, something like that, and it's not coming back on my machine, or any other machine, like my local machine... However, I'm running Docker on Linux, on a Linux server in Linode, which is my development machine.
384
+
385
+ **Adam Stacoviak:** Is that right?
386
+
387
+ **Gerhard Lazu:** That's right. So what we want is GitHub Codespaces, where we can run our own infrastructure. So rather than using the Azure VMs, which is what runs GitHub Codespaces, we want to be running our own, whether it's Linode, or - and this is where the big one comes in - Equinix Metal.
388
+
389
+ **Jerod Santo:** I don’t think they’ll go there, will they? GitHub.
390
+
391
+ **Gerhard Lazu:** Well, no, they won't, but like, can they allow people to use, like -- you know, as you can run your own GitHub runners with the GitHub Actions... So you should be able to run your own hardware, wherever it is, with GitHub Codespaces. I think it's a natural next step. Because whatever needs -- because you pay for the hardware. That's where the cost for the GitHub Codespaces is... And that's fine if you want the simplicity. But if you want to run, for example, on the ARM servers, or fast Intel servers with dedicated CPUs, dedicated NVMes, 20-gigabit networks, why wouldn't you go to Equinix Metal? So that's what I'm thinking... Because in that world, everything is amazing.
392
+
393
+ **Adam Stacoviak:** \[40:18\] So I guess then—
394
+
395
+ **Gerhard Lazu:** Or it will be when I’m finished with it.
396
+
397
+ **Jerod Santo:** It’s all rainbows and—
398
+
399
+ **Adam Stacoviak:** Isn't the thing with GitHub Codespaces that is their -- like, their thing is their infrastructure, so their VMs, their hardware, and it's optimized... Obviously, it's probably Azure-backed, considering their parent company, etc. But isn't that what they sell? Are they selling the agnostic route to dev environment to the cloud? They’re selling—
400
+
401
+ **Gerhard Lazu:** Not currently—
402
+
403
+ **Adam Stacoviak:** ...Codespaces, which is hosted by them, right?
404
+
405
+ **Jerod Santo:** Right. It seems like it's natural for us to want that, but it doesn't seem natural for GitHub to want to offer that. So maybe it's like a Cloudspaces alternative which is genericized, is the answer.
406
+
407
+ **Gerhard Lazu:** So there’s GitPod, I’m aware of that.
408
+
409
+ **Jerod Santo:** Yes, right.
410
+
411
+ **Gerhard Lazu:** There's Tilde.dev as well. There's a couple like that... But what I really want to do, having listened to the GitHub Codespaces episode on the Changelog (I forget the number), I tweeted Cory, like, "Hey, we should talk." He said, "Yeah, sure. Email me", and I didn't have time to follow up on that email. But I really want to do that, because I see the potential of GitHub Codespaces, but I would use it slightly differently now. We're always up for partners, aren't we, Adam?
412
+
413
+ **Adam Stacoviak:** Yeah.
414
+
415
+ **Gerhard Lazu:** So if GitHub wants to sponsor Changelog with the GitHub Codespaces, we'll be more than happy to use it, and help it improve. But my first go-to would be what I know, right? Like, bare-metal servers somewhere, or Linodes, or wherever, spin them up... And that's where Crossplane comes in. There's like a couple of things happening in the background that will start coming together. There's an Equinix Metal episode with Zac coming. Number 29, I think. Actually, it came out... By the time we're listening to it, it came out, the episode with Zac.
416
+
417
+ So there's like a couple of things coming together, which make me really excited, and which I think setting anything locally for development - it is a time sink, and should have environments which are pre-built for development in an automated way, and you just click a button and you have it. And when you're finished with it, you take it down, and you don't have to worry about it. You don't have to worry about upgrading PostgreSQL, or are you running the right version of Erlang, or should you install Docker, or put up with Docker desktop updates, which have been getting really annoying in recent months, which is one of the reasons why I uninstalled it.
418
+
419
+ **Adam Stacoviak:** My main issue has always been -- I manage Homebrew, I upgrade some things in there. I don't want to specifically upgrade particular things, so I say ‘upgrade all’ essentially, or just ‘brew upgrade’ after update, and next thing you know, Postgres is updated to the latest and my Postgres is broken... And that was always the culprit. And then a couple times, it was Erlang, and that kind of thing.
420
+
421
+ Because my local hackery things that aren't really connected to a dev environment shouldn't overlap with my actual dev environment for the application. I'm kind of in that weird space where it's like my truck - I have a gas-guzzling Ford F-150. I love the new EV F-150, the Lightning coming out. I want to buy a new truck sometime soon, because I’m due, it's like seven years old... But I don't want to buy a gas vehicle. I want to buy an electric vehicle.
422
+
423
+ \[43:49\] So I don't want to spin up my own dev environment. I want to use Codespaces, or some prescribed dev space that I don't have to worry about, that's always just fresh... Because I’m me, my identity is me, you know my trustworthiness, or the application should, or our config should, so I can then get access to a certain database maybe a drive-by contributor shouldn't get access to... That kind of thing. And even drive-by contributions - those are harder to do probably. Maybe through dot dev it's somewhat easy if it's a typo or something like that. But if it's a contribution, I think it's much easier for us.
424
+
425
+ **Gerhard Lazu:** So I'm thinking of the GitHub Codespaces experience, but maybe not necessarily running on Azure as it is today. But I'm not suggesting that we should all set up some bare-metal servers. No way. It's an approach that our contributors should be able to use as well. And you're right, identity should be baked in. But that's like the long-term. Short-term. I think you want the short-term. The short-term answer is use your old machine. \[laughs\]
426
+
427
+ **Adam Stacoviak:** I would say the short-term answer would be "Can we get set up on Codespaces in their current blessed way?", and hope for a future where they have a more infinitely configurable version that's for the ways you want to use it. So I'd say let's re-engage with Cory and GitHub on that front. I know they’re willing, we've talked to them recently, so we know they're willing. That gate has not closed. They want us to be on Codespaces and leverage it that way.
428
+
429
+ **Gerhard Lazu:** Amazing.
430
+
431
+ **Adam Stacoviak:** So I'd say let's use it the way they want us to use it currently, get going that way, and then whenever it needs to scale in different ways, then it can. Or you can use GitPod to do it your own way with Equinix Metal, because that's what GitPod does, right? GitPod lets you be anywhere; they're agnostic. Whereas Codespaces is simply GitHub, simply Azure infrastructure.
432
+
433
+ **Gerhard Lazu:** I'm happy if the Changelog org would have this capability, if GitHub Codespaces was part of the Changelog org, and we could use it out of the box. I think that would be amazing, right? So if we can contribute to that, and we can make sure that anyone wanting to contribute to the Changelog app could get that working very well with Codespaces, which currently isn't the case... That, you're right; that is a good short-term solution. So I think you just gave me a Christmas gift, Adam.
434
+
435
+ **Adam Stacoviak:** I'm going to hold on to that. I’m not going to set it up locally. I’m going to wait -- I’m going to wait for my Christmas gift, which is Codespaces wrapped in a bow.
436
+
437
+ **Jerod Santo:** The challenge with this path being short-term is that Gerhard is the most organized podcaster in the universe, and he's scheduled it into March and April. \[laughter\] So that doesn’t sound very short-term to me.
438
+
439
+ **Gerhard Lazu:** I’ll need to make room. I'll need to -- I don't know, someone cancel an interview, maybe... \[laughter\]
440
+
441
+ **Adam Stacoviak:** No, here's what can happen... Honestly, behind the scenes, what happens is you may plan that way, but you have got to plan for a buffer; even if you have it planned out, there's always a -- Jerod and I have done this, too. We've had it planned up several weeks to a month, and something happens, and we're like, "We’ve got to go change the order."
442
+
443
+ **Gerhard Lazu:** Yeah.
444
+
445
+ **Adam Stacoviak:** And so because you get to run the show, you can make those calls.
446
+
447
+ **Gerhard Lazu:** Yes.
448
+
449
+ **Adam Stacoviak:** Just because you're setting that motion. Now, if you've made a promise or whatever, reach back out to them and say, "Hey, I'm sorry. We've got a timely episode coming out. I need to bump you back one week." They're probably not going be upset. And if they are, give them a free T-shirt, or whatever it takes to make them sweet
450
+
451
+ **Gerhard Lazu:** How do you do that? I don't know how to give them a free T-shirt.
452
+
453
+ **Adam Stacoviak:** You tell me or Jerod.
454
+
455
+ **Jerod Santo:** We'll talk offline. We'll talk offline.
456
+
457
+ **Gerhard Lazu:** Alright. Okay.
458
+
459
+ **Adam Stacoviak:** It's too easy. And we'll make it happen. It’s too easy.
460
+
461
+ **Gerhard Lazu:** Okay. It's amazing, what a free T-shirt will do... \[laughter\]
462
+
463
+ **Adam Stacoviak:** Yes. We love our listeners, and we love our guests just as much, if not more... So if ever we have to apologize, we’ll do it with very sweet kindness.
464
+
465
+ **Gerhard Lazu:** Alright. GitHub Codespaces in December, here I come.
466
+
467
+ **Jerod Santo:** There you go. Let's make it happen.
468
+
469
+ **Gerhard Lazu:** Let’s make it happen.
470
+
471
+ **Jerod Santo:** Christmas is coming early. Or right on time. So I think the actual short-term solution is brew install Elixir, brew install Postgres, clone the repo...
472
+
473
+ **Gerhard Lazu:** I don't think that's going to work.
474
+
475
+ **Jerod Santo:** Why not?
476
+
477
+ **Gerhard Lazu:** I guess the versions have changed. I never even tried—I think by default PostgreSQL will be version 13, or maybe even 14 if it's out yet. I don't know whether things will work with that.
478
+
479
+ **Jerod Santo:** Oh, it does. I’m running it.
480
+
481
+ **Gerhard Lazu:** Are you? Okay.
482
+
483
+ **Adam Stacoviak:** And the readme is a little off, too.
484
+
485
+ **Gerhard Lazu:** The readme is off. Yes.
486
+
487
+ **Adam Stacoviak:** ...in terms of what it prescribes. It just said dependencies are Elixir and Erlang; it doesn't say which Postgres, and everything else.
488
+
489
+ **Jerod Santo:** \[48:16\] Just wait for the transcript to come out, of this episode, and then follow that. I'm telling you, brew install Elixir, brew install Postgres, clone the repo...
490
+
491
+ **Gerhard Lazu:** Okay. So first step--
492
+
493
+ **Jerod Santo:** `mix deps.get`
494
+
495
+ **Gerhard Lazu:** ...Gerhard gets a new MacBook M1 for Christmas. \[laughter\]
496
+
497
+ **Jerod Santo:** I already got one, Gerhard. I can do this work.
498
+
499
+ **Gerhard Lazu:** Alright. Just post it to me. \[laughter\]
500
+
501
+ **Jerod Santo:** Well, unfortunately, with the ship dates on these new MacBooks, I also don't think that's a short-term solution either.
502
+
503
+ **Gerhard Lazu:** I know. 4-6 weeks. I've seen that. Yes, I know what you mean.
504
+
505
+ **Adam Stacoviak:** You had to order it like a month ago to get it on time for Christmas.
506
+
507
+ **Gerhard Lazu:** Yes, I know.
508
+
509
+ **Jerod Santo:** Alright, so the short-term solution is keep your old machine around, and use that till we have a medium-term solution.
510
+
511
+ **Gerhard Lazu:** Exactly. Yes.
512
+
513
+ **Adam Stacoviak:** Which I do. It's right next to me. It's no problem to use it. But, like anybody, I want to get set up on this new machine and never look back to the old, and just format the drive and roll on.
514
+
515
+ **Break:** \[49:15\]
516
+
517
+ **Jerod Santo:** Last Kaizen we talked about moving our uploads to the cloud, specifically S3 as the cloud. I wanted to give a quick update on progress there. I wanted to have it done by the time we recorded this - in fact, Gerhard, you and I met (was it last week?) to discuss a game plan for getting us from where we are to 100% cut over. We did not quite get there, and that's because I had a yak shave instead. So I thought I would take you guys on a little journey.
518
+
519
+ **Gerhard Lazu:** I did a few as well, so it's okay. Your yak shave held yak shaves. It's all good.
520
+
521
+ **Jerod Santo:** \[laughs\] So... You know, I only have so much time to work on the platform, and I have to use that time wisely, and sometimes it's GitHub issues-based development when things come in, because then you know it’s a user or a listener or a reader’s need, or something that they hit up against. So I end up deprioritizing things that I want to do; probably not always the wisest... But it happened again. I had my Waffle branch - Waffle is the new replacement for Arc. Arc is the upload library that we had used previously, that went unmaintained, got taken over by the community, and is now called Waffle... And so we've cut over to that, I have my branch... It's like, I said it was -- what did I tell you? How many percentage points did I have done when I told you the other day, Gerhard?
522
+
523
+ **Gerhard Lazu:** I think it was like 90%.
524
+
525
+ **Jerod Santo:** 90?
526
+
527
+ **Gerhard Lazu:** Yes, 90% is what I remember.
528
+
529
+ **Jerod Santo:** Yeah. So probably I'm at like 94% now... And then here comes an issue, issue number 393 hit our GitHub issues, which we'll link up... Newsletter links proxy encodes special URLs with HTML instead of percent based. This is a tiny little bug that was just interesting to me.
530
+
531
+ \[51:59\] So what happened is, in our Changelog weekly newsletter, which goes out every Sunday morning, it includes all the shows from that week, every episode we put out, as well as all the news and the links and the repos and the commentary for the week, we linked to Chris Manson's post called It's All Gravy. And his website is Chris.manson.ie, probably because he loves Internet Explorer, and then /it's-all-gravy... Only it's is a contraction, right? So, it's, it apostrophe s. And the son of a gun left the apostrophe in there. Now, I'm giving him a hard time, because I know Chris, he's a JS Party listener, hangs out in the chat... And he left that apostrophe in the URL. First of all, isn't that just like, blasphemous right there, having an apostrophe in your clean URLs, people?
532
+
533
+ But what happened with that apostrophe is the way that we encode that creates the HTML encoding instead of percent-based, which you'd expect in the URL, which caused people that clicked on that link in our newsletter to go to a web page, which was a 404, because it was incorrect.
534
+
535
+ Now, certain browsers actually manage it okay, and the apostrophe looks fine in the address bar and everything, which I thought was kind of interesting. And so I thought, "Here's a bug I should chase down, while not working on these uploads to the cloud branch that I'm supposed to be working on..." And so I started to figure out - okay, mystery time... What is going on here?
536
+
537
+ So I dive into our codebase and I find the line of code in question, and everything looks legit to me, and then I realize, okay, I'm calling this Phoenix... So we are an Elixir/Phoenix application, for those who haven't been following along the whole time... And at a certain point, we call into Phoenix. And Phoenix has an HTML library that generates HTML, and there's a function called link... So if you're familiar with -- every web framework has like a link function; linkto was Rails’ invention, which everybody's pretty much copied. Phoenix’s link works very similarly. So all we're doing is calling that and just passing it the URL, which has the apostrophe in it.
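The gist of the bug, reconstructed as a sketch (this is not the app's actual call site): HTML attribute escaping and URL percent-encoding treat the apostrophe differently.

```elixir
url = "https://chris.manson.ie/it's-all-gravy"

# link/2 HTML-escapes the href attribute, so the apostrophe becomes an entity.
# Browsers decode the entity, which is why the address bar can still look fine.
Phoenix.HTML.Link.link("It's All Gravy", to: url) |> Phoenix.HTML.safe_to_string()
# => roughly: <a href="https://chris.manson.ie/it&#39;s-all-gravy">It&#39;s All Gravy</a>

# A URL wants the apostrophe percent-encoded instead; one minimal fix is to
# encode it before handing the URL over:
String.replace(url, "'", "%27")
# => "https://chris.manson.ie/it%27s-all-gravy"
```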
538
+
539
+ So I started digging a little deeper, and I started thinking, it's like, "Whatever is happening is outside of my domain, right? It's a dependency that's doing it." So I don't know, Gerhard... What do you do in this circumstance? You've got a dependency that's not doing something totally right - what's your first move? I guess you're more of an ops guy, so maybe your developer chops are rusty, but what's your instinct?
540
+
541
+ **Gerhard Lazu:** No, not really. Not really.
542
+
543
+ **Jerod Santo:** Okay, good.
544
+
545
+ **Gerhard Lazu:** So I would look at an issue to see if there is an issue in the repo for the DEP. I would try and find the code, see what happened around it. Like, I would call a blame, see if that is different... And if I can't find anything, I would just open an issue on that repo, explain my problem, link to my code, and ask the developers, "Hey, how would you solve this? What do you think? Is it legit? Am I holding it wrong?"
546
+
547
+ **Jerod Santo:** Yeah, exactly.
548
+
549
+ **Gerhard Lazu:** "What’s the problem here?"
550
+
551
+ **Jerod Santo:** Yeah. So the interesting thing about this one is I'm not really savvy with these character encodings, and I'm not sure why it's doing the HTML encoding versus the URL encoding, but my first question is, like, is this even a bug? Or is this just like the way it would work if you pass it an apostrophe?
552
+
553
+ And when I start to have these questions - you laid out a very clear path to potential victory, but I'm lazier than you, so my first thing is like, "Am I running the latest version?" That's just what I ask myself. Like, maybe this was fixed between my version and now. So my first step is, "Well, let's just upgrade stuff." And I start to -- even if it's like a procrasticoding thing, I'm like, "I'm going to go check out my deps tree and see how old everything is." A bunch of stuff was out of date, so this begins the yak shave. So instead of fixing that, I'm like "Here's what I'm going to do - I'm going to update all of our deps."
554
+
555
+ **Gerhard Lazu:** \[56:11\] Update everything. Oh, my goodness me. Okay... What can possibly go wrong...? \[laughs\]
556
+
557
+ **Jerod Santo:** Exactly. So we're on Phoenix 1.5, and 1.6 was out. Most Elixir packages do a pretty good job of following semantic versioning. So I knew this was a minor upgrade, so there are some breaking changes -- or no, a major upgrade is what has breaking changes. There shouldn't have been any API changes, right? Yeah... So this one kind of bit me. So there were API changes. \[laughs\] So I thought I could just safely upgrade. And I did all the auto upgrades... So inside of Elixir's mix tool, if you have patch version upgrades, it'll just auto do those for you. They're green, you can just upgrade those, because they're assuming semantic versioning.
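As a rough sketch of the semver expectation being described - generic Python, nothing from the Changelog codebase - only a major version bump is supposed to carry breaking API changes, which is why a 1.5 to 1.6 jump "should" have been safe:

```python
def bump_type(old: str, new: str) -> str:
    """Classify an upgrade as 'major', 'minor' or 'patch' under semver."""
    old_parts = [int(x) for x in old.split(".")]
    new_parts = [int(x) for x in new.split(".")]
    for level, (a, b) in zip(("major", "minor", "patch"), zip(old_parts, new_parts)):
        if a != b:
            return level
    return "patch"

def breaking_changes_expected(old: str, new: str) -> bool:
    # Semver's promise: breaking changes only arrive with a major bump.
    return bump_type(old, new) == "major"

assert bump_type("1.5.9", "1.6.0") == "minor"
assert breaking_changes_expected("1.5.9", "1.6.0") is False  # ...and yet it broke
```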
558
+
559
+ So I did all those, ran the tests, everything was fine. Then I went to upgrade Phoenix, which was a minor version upgrade from 1.5 to 1.6. Got that done. While it was kind of doing its thing, I was like, "Well, I'm going to go read the changelog and see what's going on." And I did notice that they made a breaking change, which I guess isn't semver, so that should have gone to 2.0. They don't want to go 2.0, because it's too major, or whatever... But I did notice it, and I'm like, "Man, this is something that I need to look at." So I did the upgrade to Phoenix 1.6, had some failing tests... I was like, "Alright, good. My tests are testing things, and they changed the API..." And so I'm going to have to handle that, but it's like two changes.
560
+
561
+ So what did they change? Well, the way Phoenix works is it passes the request data from your controllers down into your views to be used in the template; there's this bag of data called assigns, and in the assigns there's a bunch of -- it's literally a map, or a struct, or a dictionary, or a hash, depending on what your language of choice is, right? So it's keys and values, and there were two keys that no longer exist - view module and view template. What do these keys hold in them? Well, they hold in them the information of what's the currently active or being used module that's handling this request, and which template is going to be used to render.
562
+
563
+ So I did find those. There were two places I was using those, and I changed them. And there was a new way of doing it. Fine. And I upgrade, and all my tests pass, and so what do I do? I ship it, baby. I send it out there, and it's all good. And then I start to realize, via Twitter, that our Twitter embed’s broken. It's just showing like the default news and podcast for developers thing, and like a stock share image. We actually have player embeds, where you can click play right there on Twitter and start playing the episode. So that Phoenix upgrade, even though I thought I'd covered all my bases, broke all the metadata on all of our pages, across the entire site.
564
+
565
+ **Gerhard Lazu:** Wow...
566
+
567
+ **Jerod Santo:** ...which led to Twitter embeds breaking, all third-party integrations that are based on the meta elements in your HTML - busted. That led to me refactoring our entire meta module, because that data is gone, and the entire thing in that module is like, "Which view am I, and which template am I? Okay, here's my meta information." So I refactored that entire meta module; it took me a few hours... I'm not even happy with the way it works now. I liked it better before. And I fixed it... And the yak was shoven, or shaven. What's past tense for shave?
568
+
569
+ **Gerhard Lazu:** Shaven.
570
+
571
+ **Jerod Santo:** I shaved it. I shaved that sucker. But I did not get our cloud uploads done... So that's my excuse, and I'm sticking with it.
572
+
573
+ **Gerhard Lazu:** Well, first of all, you were very determined to shave this yak... \[laughter\]
574
+
575
+ **Jerod Santo:** Yes, I was.
576
+
577
+ **Gerhard Lazu:** And I'm glad that you didn't give up, until it was all done.
578
+
579
+ **Jerod Santo:** Success, baby!
580
+
581
+ **Adam Stacoviak:** Yes. Well, the question is, did the upgrade even fix the original URL issue?
582
+
583
+ **Jerod Santo:** \[01:00:15.25\] No... It’s not a bug. It’s a feature. \[laughter\]
584
+
585
+ **Adam Stacoviak:** That's the best!
586
+
587
+ **Gerhard Lazu:** By the way, the number is 394. I checked. It’s not 393. 394.
588
+
589
+ **Jerod Santo:** Oh, I’m sorry.
590
+
591
+ **Gerhard Lazu:** That's okay. That's okay. Second of all, this reminds me of exactly what happened. You said that you had to shave a yak, and we had to get together, where I upgraded -- I've set up the new version of our Kubernetes deployment... And it's amazing how I was shaving a similar yak.
592
+
593
+ You know how you do an upgrade of Kubernetes, like from 1.20 to 1.21, and then you think, "Hmm, maybe I should upgrade Ingress NGINX." Or even better, "I should replace it with Traefik." Why? Because then we don't need a cert manager. Excellent. So Traefik can take care of all of that. Great.
594
+
595
+ What about external DNS? Let's do that as well. What about Honeycomb agent? Let’s do that as well. What about Grafana agent? Oh, crap. They broke something... \[laughter\] So maybe try and figure out what the config is. And before you know it, like two days, like three days, whatever, you say like, "No, no, this is just too much. I just have to keep some of the older versions, because it's just too hard, and I'm biting too big of a chunk", which is exactly what you've done, right? And before we know it, the yak is like a herd. \[laughter\]
596
+
597
+ **Jerod Santo:** Yes. Somewhere in there I completely lost the thread, you know?
598
+
599
+ **Adam Stacoviak:** Yeah... It feels necessary as you keep biting more off though, right? As you go deeper into the yak shave. I mean, I guess this is an onion analogy more than a shave, I guess... Every new hair you shave away -- I don't know how to describe it... Like, you just have to go further, you know what I mean? It feels like it's perpetual, and you just need to keep going. And then it's one part personal determination, and then knowing you as a list checker-offer, you've got to get through this thing, whatever it is. So it's like, perseverance though...
600
+
601
+ **Gerhard Lazu:** I'm wondering, how much actual work happens like this? Really valuable work, like upgrades, fixes, refactorings... Because you start somewhere, and rather than doing the bare minimum, you say "Well, I'm going to do a little bit more, and a little bit more..." and before you know it, you're like a week in, and everything is amazing, but you’ve wasted the week on something which wasn't even on the board.
602
+
603
+ **Jerod Santo:** Right. It was not even on my agenda.
604
+
605
+ **Adam Stacoviak:** I wonder as well, because that's the state of flow, right? You can get through that yak shave, probably, because of a state of flow. Was this a sustained session, Jerod, or was it multiple sessions?
606
+
607
+ **Jerod Santo:** This was all one session. This basically took my afternoon, which I would have otherwise spent finishing that cloud uploads thing.
608
+
609
+ **Adam Stacoviak:** Right. Did you plan to spend the amount of time that you spent? So did you consume the time you desired to spend, or did you consume more?
610
+
611
+ **Jerod Santo:** Way more. I did not want to rewrite that meta module at all.
612
+
613
+ **Adam Stacoviak:** Right. This is my point then. So you want to do it in one session, you were in a state of flow, despite your aim, so to speak, being off... You shaved the yak, you didn't do what you intended to. However, you probably did as much work as you could have done in eight hours, or whatever number - some sort of multiple beyond that - because you're in such a momentum mode going on. That's my assumption at least, because you were in a state of flow. So to your point, Gerhard, I wonder as well - because when you get that kind of momentum, sometimes you just have to run with it.
614
+
615
+ Speaking of new, we've got some gifts coming up. It's going to be the holiday season, Christmas... You’ve got some Christmas gift for us, Gerhard?
616
+
617
+ **Gerhard Lazu:** I do, actually. I have four, five... We'll see how many. But a couple. More than a couple.
618
+
619
+ **Adam Stacoviak:** \[01:04:03.00\] Okay.
620
+
621
+ **Gerhard Lazu:** What I'm thinking is, I was mentioning --
622
+
623
+ **Adam Stacoviak:** Two. More than one, right? Two.
624
+
625
+ **Jerod Santo:** More than two.
626
+
627
+ **Gerhard Lazu:** More than three.
628
+
629
+ **Adam Stacoviak:** More than two, okay. A couple.
630
+
631
+ **Gerhard Lazu:** More than a few. Several. Several gifts. So I was mentioning at the beginning of the show that a lot of the episodes, when I spend time talking to the people that come on the show, there's always a background story to it. Usually, like a past story we share, we have a common past, but also I see a common future.
632
+
633
+ What that means is when we covered Crossplane, I was mentioning even during the episode that I want to make Crossplane part of our infrastructure, part of our setup. So what that looks like is managing our Kubernetes, managing our infrastructure with Crossplane. So how do we do that? What does that look like? What is the simplest thing that we can do to improve our Kubernetes deployments, so that when you want two, three, four, it's really simple to do that? What about using Upbound Cloud for that, rather than running our own Crossplane? So that is one of the gifts - how do we use Crossplane to manage our infrastructure, our new infrastructure, the 2022 one, and going forward, what are the benefits of doing that?
634
+
635
+ So we're bringing them on board, with our story, with our Changelog story, with our setup story that's been evolving... And the mix is what makes it amazing, because we have the opportunity to try all these different tools out, show our approach, whether it's right or wrong, it doesn't matter. The point is, it's good enough for us, and there's always something to learn. We create great content, we promote the good stuff, the stuff that we believe in, that we use, and most importantly, we help it improve. We get feedback to those projects, to those products, and as a result, they improve.
636
+
637
+ Honeycomb is another one. We'll have specific Honeycomb integrations. Dagger - I want to mention that as well. And that happened like over the last couple of weeks... Preparing episode 33, where a few gifts will be mentioned. Parca, I want to mention that as well. That actually happened today. In my lunch break, we were recording that segment, which will be part of episode 33, and that's the Parca one.
638
+
639
+ **Adam Stacoviak:** Yeah. I like seeing Solomon Hykes in our pull requests/comments back and forth on the Dagger stuff you're working on. I was paying attention to just that commentary. And so just one... You know, I think it's super cool that -- you know, we've been a podcast... Ship It is part of the network, but the network itself has been around for more than 12 years now.
640
+
641
+ We talked to Solomon like way back early days of Docker even, when he did that first talk to announce Docker, essentially... And now to be at a place to have the right kind of infrastructure for this... What was just once a Tumblr blog, happily on WordPress at one point as well, and worked just fine. Maybe we had a ton of misses there. Not misses, but actual misses; but we didn't have any caching, so we were good to go. And now to see this feature, Dagger, these gifts, and Solomon Hykes, who is one of the creators of Docker - those catching up in the comments of our pull requests... It's cool. I love that. I was loving seeing that. It's just -- the whole circle of life kind of thing. You know, like you had said even with Ship It, the pre-story, and then the future story. Like, I love all that serendipity, Gerhard, really, coming together.
642
+
643
+ **Gerhard Lazu:** It is a journey. It's really is. And many journeys coming together.
644
+
645
+ **Adam Stacoviak:** Yeah.
646
+
647
+ **Gerhard Lazu:** And the little contributions that we can make to those projects, they're definitely helping us. We couldn't run the infrastructure the way we do without all the great tooling that's out there. And I wish we had more time to try it all, and to give all the feedback that we can.
648
+
649
+ \[01:08:12.00\] I think whenever people pitch the idea or request an episode, like "We would like to have this conversation", I'm thinking, "Am I excited about this? Is this something which I would use?" If the answer is no, it doesn't mean that tool is wrong. It means I'm not into it. I wouldn't use it. It’s a no from that perspective. So I love trying out the things that we have on the show, all the people - just go beyond that, go beyond that conversation and see what happens. Literally, see what happens. I love that stuff.
650
+
651
+ **Adam Stacoviak:** I like bringing that feedback to them too, in particular Honeycomb. I love just -- or even with Dagger, and Crossplane. I think we can give that kind of feedback differently than, say, a customer would, or a drive-by user who's just on the free tier, for example, of whatever it might be. We're going to give a different layer. Because one, to Fastly’s credit even, like - if you’re a listener who works at Fastly, we're not bashing you. We love Fastly. We're just unhappy with current things or certain things, and we want to improve them. That doesn't mean we're negative Fastly. We're quite pro Fastly. And I think that through the podcast and the content that comes from it, and just our willingness to try and be curious, but then put that on air on a podcast and flesh it out, for the sake of ourselves, as well as the listeners, who are like, "How are they solving these problems? How is Jerod shaving this yak? How is Gerhard shaving that yak?" He has no packets lost. Great. Okay, cool. Two ISPs later. All that fun stuff. That, to me - that's a journey. That's a narrative. That's a story. And I think that we can give that feedback to Crossplane, to Honeycomb, and even sharing how we have that observability into our CDN which we never had before - that is super cool. That may not be something that Charity and the team at Honeycomb thought about. Sure, you can observe anything really, but have they considered, like, should you observe your CDN? Well, I think now that we have this tool in our hand, the answer is emphatically yes, especially when it's your front layer.
652
+
653
+ **Gerhard Lazu:** Yeah, and it's all those ands which are really exciting for me... So Crossplane AND Dagger. Honeycomb AND Grafana Cloud. Most people don't think like that. They think, "Competitors."
654
+
655
+ **Adam Stacoviak:** Either/or.
656
+
657
+ **Gerhard Lazu:** No, no. It’s an AND proposition, because they all have their strengths and their weaknesses. And if you don't know what the trade-offs are, well that means that you don't know them well enough. Because there's no such tool which is just perfection. There's no such thing. It doesn't exist. So stop looking for it, and try and understand which trade-offs you're making.
658
+
659
+ So Honeycomb is helping us in specific ways. Grafana Cloud is helping us in other ways, and we'll have people on the show to talk about those things, and to talk about the improvements. If you want to know what's coming up in episode 33, you can go to our changelog.com, the repo on GitHub, github.com/thechangelog. There's a couple of pull requests opened, and the pull requests have Ship It Christmas gifts. It's an Echoes initiative, Echoes HQ; they were on the show. Arnaud was on the show. So we're using Echoes for that purpose, and it's all coming together, like one big, happy family—
660
+
661
+ **Adam Stacoviak:** And they’re red.
662
+
663
+ **Gerhard Lazu:** And they’re red, yes, for Christmas. Exactly.
664
+
665
+ **Adam Stacoviak:** That’s right. Red and white actually, because the text is white, and the --
666
+
667
+ **Gerhard Lazu:** Yes. It’s not coincidental. So there's many things coming together, and Dagger is improving, because it reflects some of the feedback that we're giving. Honeycomb as well. Crossplane as well. Every single person I get to talk to, they're taking notes of what they can improve. Fredrik - it was amazing to do that with him, to give him ideas... Because end users, the ones that are paying for it, for that product, they maybe are not as patient or not as knowledgeable, or they’re more entitled, or rushed, or...
668
+
669
+ **Adam Stacoviak:** Precisely. Willing.
670
+
671
+ **Gerhard Lazu:** \[01:12:21.13\] Exactly. But we’re not. We genuinely want to help. We genuinely want to promote this stuff - what works, what doesn't work, and let's make it better. So, Kaizen.
672
+
673
+ **Adam Stacoviak:** Yeah. I love that. And I guess, to some degree, on that note, there's an order of things. So we talked about this show, in the initial part of the show, just the beginnings, how there were early innings... It was just an idea at one point. And as part of bringing that idea to life, one, Gerhard, we had to have a deeper conversation with you, and understand your desire. Clearly, you've realized a lot of that desire for us in your execution of Ship It, even so far as to plan well ahead.
674
+
675
+ But all that's possible because, one, our willingness, but then two, capable and willing partners behind the scenes. And in no particular order, I'm going to thank some people who were on the charge this year, involved next year as well... Planet Scale, Fly, Equinix Metal, Render, Linode, Raygun, Sentry, Honeycomb, Grafana Labs, Teleport, LaunchDarkly, Incident, FireHydrant, Cockroach Labs... And I'm sure at least a couple more that I may have forgotten and didn't get in the list. If so, I apologize, but... Great partners make it possible to do this kind of fun stuff, and I am so thankful for them. I'm so thankful for you. I'm so thankful for our listeners. What would this show be if it didn't have listeners, right?
676
+
677
+ So you listening right now, we really appreciate you taking your time to either subscribe, or listen to a segment, or listen to a full-length show, even if you're not a subscriber. Thank you for giving us a little bit of your time, and hopefully a bit of your future trust as you listen to this show further. We hope to one day have a beautiful vanity URL to give this, but until then, it's changelog.com/shipit. All the links to subscribe are there. You can subscribe via email, you can come hang in Slack... Hey, there is a community, it is free, so you can hang your hat, call this place home. Everyone's welcome, no matter where you're at in your hacker journey. We welcome you to be here. There's no imposters here. You can go to changelog.com/community to join and hang with us.
678
+
679
+ I love it, man. I'm loving the momentum and the direction we're going. I think enough pats on the back, but I'm just so thankful for this team here, the listeners, our partners... Really, I am. We’re just so blessed - really, we are - to be doing this show. It's so much fun.
680
+
681
+ **Gerhard Lazu:** Thank you, Adam. That was beautiful. Thank you very much. That’s reached a very special place. Thank you.
682
+
683
+ **Adam Stacoviak:** Cool. So 2022, here we come. We've got a few more shows left, but this is the last Kaizen episode. We'll come back here in 2022 with Kaizen... 40?
684
+
685
+ **Gerhard Lazu:** Kaizen 40, that’s the one.
686
+
687
+ **Adam Stacoviak:** Kaizen 40. And hopefully, we'll have our Kaizen T-shirt in the merch store... So stay tuned to that. One more gift, potentially, a New Year’s gift, merch.changelog.com/. Until then, we’re out.
688
+
689
+ **Outro:** \[01:15:37.01\]
690
+
691
+ **Jerod Santo:** Hey y'all, Jerod here. So during the tail end of our recording, right after I told my yak shave story, Gerhard pretty much broke the show. Turns out he's been deep on a yak shave of his own regarding his home network setup and some nagging internet connection issues. I guess my yak shave story triggered Gerhard to consider the ridiculous lengths he's gone to, and - well, hilarity ensues.
692
+
693
+ Gerhard laughs uncontrollably, which makes me laugh uncontrollably. Adam keeps it together and desperately attempts to get us back on track, but not going to happen. It was so broken that we cut it from the episode, but it was also so funny that we figured we'd throw it in at the end, for those of you with a few extra minutes to spare and the curiosity to hear what it sounds like when the show goes off the rails.
694
+
695
+ Alright, here it is.
696
+
697
+ **Gerhard Lazu:** I’m sorry. I’m just \[laughs\] I’m just trying to hold something in.
698
+
699
+ **Adam Stacoviak:** Something is making Gerhard laugh really big.
700
+
701
+ **Gerhard Lazu:** It’s just too good. \[laughter\]
702
+
703
+ **Adam Stacoviak:** Oh, he's got a hidden thought that he can't get out, because it’s making him laugh too much.
704
+
705
+ **Gerhard Lazu:** I just remembered... \[laughs\]
706
+
707
+ **Jerod Santo:** What? \[laughs\]
708
+
709
+ **Gerhard Lazu:** \[laughs\]
710
+
711
+ **Adam Stacoviak:** I can't even look at this face. I'm sorry.
712
+
713
+ **Gerhard Lazu:** It’s just too good—
714
+
715
+ **Adam Stacoviak:** I can’t look at him. I have to look away.
716
+
717
+ **Gerhard Lazu:** \[laughs\] Okay. Alright.
718
+
719
+ **Jerod Santo:** What did you remember?
720
+
721
+ **Adam Stacoviak:** If you’re listening to this, try hard to look away.
722
+
723
+ **Gerhard Lazu:** \[exhales\]
724
+
725
+ **Jerod Santo:** Okay, got it.
726
+
727
+ **Adam Stacoviak:** He's taking off his glasses and everything.
728
+
729
+ **Gerhard Lazu:** It took me three weeks... \[laughs\]
730
+
731
+ **Jerod Santo:** Three weeks? \[laughs\] Oh my God, man...
732
+
733
+ **Gerhard Lazu:** It’s just too good to-- \[laughter\]
734
+
735
+ **Adam Stacoviak:** That’s true determination, because you not only did it -- you didn't do it in one session, you did it in multiples, and you kept going.
736
+
737
+ **Gerhard Lazu:** Multiple weeks.
738
+
739
+ **Jerod Santo:** \[laughs\] Multiple weeks...
740
+
741
+ **Gerhard Lazu:** \[laughs\] Three routers later... \[laughing out loud\]
742
+
743
+ **Jerod Santo:** \[laughing out loud\]
744
+
745
+ **Gerhard Lazu:** Two internet connections later... \[Laughter\] And all my packets aren't getting lost anymore. \[laughter\]
746
+
747
+ **Jerod Santo:** Oh, man...! \[laughs\]
748
+
749
+ **Adam Stacoviak:** That is an extreme yak shave, Gerhard.
750
+
751
+ **Jerod Santo:** That is.
752
+
753
+ **Gerhard Lazu:** \[laughs\] I’m sorry.
754
+
755
+ **Adam Stacoviak:** Extreme tales of yak shaving. That’s the next show.
756
+
757
+ **Gerhard Lazu:** That is the next show. Actually, there's like an episode with new ISPs -- I have two ISPs now. Both fiber connections
758
+
759
+ **Jerod Santo:** Two ISPs now... \[laughing out loud\]
760
+
761
+ **Gerhard Lazu:** Yeah, like two fiber connections coming into the house. Three routers
762
+
763
+ **Adam Stacoviak:** The funny part about this is like -- you have to think about that beyond just being two ISPs, that's two separate people coming to your house to install fiber...
764
+
765
+ **Gerhard Lazu:** Yes.
766
+
767
+ **Adam Stacoviak:** Because that's two separate fiber lines. That's like true dedication. \[laughter\] That's new holes into your house.
768
+
769
+ **Gerhard Lazu:** \[laughs\] Yes, exactly. Two holes in my wall. You’re right. I have two holes.
770
+
771
+ **Adam Stacoviak:** That's one more plug in your -- whatever. Maybe you even have a UPS for this even, I'm sure...
772
+
773
+ **Gerhard Lazu:** Not yet. \[laughter\]
774
+
775
+ **Jerod Santo:** Not yet. \[laughing out loud\] He just added that to his list of things to do.
776
+
777
+ **Adam Stacoviak:** That's some serious dedication.
778
+
779
+ **Jerod Santo:** Don't give him anything else to do, Adam.
780
+
781
+ **Adam Stacoviak:** I'm just thinking like - the logistics of doing that. That's being on the phone to order it, that's deciding to pay for it. That's one more line item on the budget, so to speak. That's somebody coming in your house, new hole, new fiber, new equipment. At least you're getting to use that WAN failover though, on the UniFi system...
782
+
783
+ **Gerhard Lazu:** I do actually, yeah. I do. Not load balancing yet, but I'm working towards it.
784
+
785
+ **Adam Stacoviak:** I’m sure you’ll be -- yeah.
786
+
787
+ **Gerhard Lazu:** \[laughs\]
788
+
789
+ **Jerod Santo:** Alright, we’ve got to reel this in. What's the summary here, Gerhard? What's the takeaway from this?
790
+
791
+ **Gerhard Lazu:** The summary is that now I have two WAN connections...
792
+
793
+ **Jerod Santo:** \[laughing out loud\] You've already said that part. What's the takeaway?
794
+
795
+ **Adam Stacoviak:** What's the takeaway here?
796
+
797
+ **Gerhard Lazu:** You need two of each. \[laughing out loud\] Except your life. You only want one of those.
798
+
799
+ **Jerod Santo:** There you go. So I think we should do GitPod and Codespaces. \[Laughter\]
800
+
801
+ **Gerhard Lazu:** \[laughs\] Of course.
802
+
803
+ **Adam Stacoviak:** Yes, because you never know.
804
+
805
+ **Jerod Santo:** Kubernetes AND Fly.io, AND Render. That’s how we roll.
806
+
807
+ **Adam Stacoviak:** Well, I can agree with the N+1. I mean that is smart. I mean, you can never have enough. That was actually coined best in the movie Contact. Anybody remember that? Why build one when you can build two?
808
+
809
+ **Gerhard Lazu:** I think I've had enough fun... \[laughter\]
Kaizen! Are we holding it wrong?_transcript.txt ADDED
The diff for this file is too large to render. See raw diff
 
Kaizen! Five incidents later_transcript.txt ADDED
@@ -0,0 +1,523 @@
1
+ **Gerhard Lazu:** This is the second Kaizen, second series of improvements. 2.5 months later, 10 episodes later, here we are. I listened to the last episode yesterday (our first Kaizen episode, episode 10), so I am picking up the discussion exactly where we left off... How about that?
2
+
3
+ **Adam Stacoviak:** Oh, boy...
4
+
5
+ **Jerod Santo:** Wow.
6
+
7
+ **Adam Stacoviak:** I'm gonna read the transcript.
8
+
9
+ **Gerhard Lazu:** As a listener, if you want this to make a bit more sense, then read the transcript, if you want, as Adam is doing; or maybe listen to that, I think it'll make a lot of sense. So a big portion of episode 10 we discussed incidents, about having issues, and how do we share the learning within the team, how do we capture what happened, what the problem was, how do we have follow-ups for us to improve on. And remember when I was saying that it was either Adam or Jerod that deleted the DNSimple token...
10
+
11
+ **Adam Stacoviak:** Oh, yes...
12
+
13
+ **Jerod Santo:** Yes.
14
+
15
+ **Adam Stacoviak:** I recall that. We were trying to track that down, yeah.
16
+
17
+ **Gerhard Lazu:** It was actually me.
18
+
19
+ **Jerod Santo:** I had a feeling... \[laughs\]
20
+
21
+ **Gerhard Lazu:** Yeah, it was definitely me.
22
+
23
+ **Adam Stacoviak:** Well, I know it wasn't me... \[laughter\]
24
+
25
+ **Gerhard Lazu:** I did it.
26
+
27
+ **Jerod Santo:** Why? How? Tell us.
28
+
29
+ **Gerhard Lazu:** So when I've set up the new DNSimple token for cert manager and for external DNS, even though I created a new one, I was still using the old value. So there was a new token, but the value which I was using in the new infrastructure was the value from the previous token, from the old token.
30
+
31
+ **Jerod Santo:** It was the old one.
32
+
33
+ **Gerhard Lazu:** So when I deleted the old token, because that's what it was in DNSimple, that just stopped working. So things broke.
34
+
35
+ **Jerod Santo:** \[04:12\] I see.
36
+
37
+ **Gerhard Lazu:** But it gets a bit more interesting than that. Apparently, there's two sets of tokens. There's one token which you get by user settings and the user access token, and those are the ones that we were looking at the last time, trying to figure out why is this token missing, where is the token. But there's also account automation tokens, which is another submenu in a different page. Those were the tokens that I couldn't find the last time; I confirmed that it was definitely not there, so I worked all that out.
38
+
39
+ The more important thing was "How do we track this? How do we capture this so that in the future when this happens we know where to look?" We looked at a few incident management platforms, we discussed both FireHydrant and Incident.io; we even did a write-up. I say "we", it's a royal we. We did a write-up to compare the two, just like an internal one, to see why maybe choose one versus the other.
40
+
41
+ When we discussed in episode ten, FireHydrant was actually fronted by Fastly... Because the issue which we were trying to mitigate against was Fastly, it would have been a bad decision to choose a system that has Fastly in front when there's a Fastly issue, because you can't get to the system. So Incident doesn't use Fastly in front, so that was one of the reasons. There was a couple more... But using Incident, we created four instances so far. The first one was this. That's incident one - TLS certificates are failing to renew. What do we remember about that incident, Jerod? Do you remember much about it, do you remember about the experience, about looking at it? What do you remember about it?
42
+
43
+ **Jerod Santo:** Is this the one that comes into Slack and starts a new channel in Slack, and then you can update it there?
44
+
45
+ **Gerhard Lazu:** Yes.
46
+
47
+ **Jerod Santo:** Okay. So I remember that much... I don't remember anything else about it, honestly.
48
+
49
+ **Gerhard Lazu:** Okay.
50
+
51
+ **Jerod Santo:** How about you? You might have way more context than I do.
52
+
53
+ **Adam Stacoviak:** I remember the emails about it, in addition to the incident in Slack, of course... There were some emails about TLS expiring, and issues, and stuff like that.
54
+
55
+ **Gerhard Lazu:** Yeah, we had those. Yes, they were useful, but not useful from the perspective of "What is the problem? What goes into debugging the problem, and so on and so forth?" So there is the Incidents Slack channel, which by the way, is public to everyone... So if anyone wants to go in our Slack, open up Incidents, you can see what incidents we had, including the first one. There's a link. In our case, it's app.incident.io/incidents/1. That loads up the first incident. And you just log in with Slack, so that's a nice integration.
56
+
57
+ The reason why I'm asking this is because having run it and having captured this information, how useful is that? Just glancing at it - does it look useful? Is it something that you see going back to, referring to? That's what I'm wondering.
58
+
59
+ **Jerod Santo:** So you just heard how bad my memory is...
60
+
61
+ **Gerhard Lazu:** Mm-hm. That's why you write things down.
62
+
63
+ **Jerod Santo:** Right. So in the case of somebody saying "Haven't we had this problem before?" Or "Distant memory of TLS errors...", and I would say "Yeah, we have, but I don't remember anything about it."
64
+
65
+ **Adam Stacoviak:** Yes.
66
+
67
+ **Jerod Santo:** When was that? We're talking about July, we're now in September, so a couple months... It's just come and gone. Now, I think you've fixed it, Gerhard, so you probably remember it better than I do... I just kind of watched it happen.
68
+
69
+ So the ability to go back to it, which I now have scrolled back to it; it was July 12th. It just happened to be my birthday when this happened...
70
+
71
+ **Gerhard Lazu:** Happy birthday! \[laughs\]
72
+
73
+ **Jerod Santo:** Oh, thank you. Awfully belated there...
74
+
75
+ **Gerhard Lazu:** I'm pretty sure I said Happy Birthday back then, but now that you mentioned it...
76
+
77
+ **Jerod Santo:** Now that you know how bad my memory is, you can just retcon that for me... I'm seeing some details here about it, and if I could click through somehow to Incident.io from the Slack incident, then I'm sure there'll be even more information. But in this particular channel, or maybe -- oh, here we go. I've gotta click through to the -- each incident gets its own channel; so there's the Incidents channel, and then the incidents get their own channel... Which I can come and go, and I could read all of the details here, I think. Yeah. So now I'm looking at this, it's kind of loading in, screenshots etc. So it's great for just outsourcing your memory, I think.
78
+
79
+ **Gerhard Lazu:** \[08:14\] Yeah.
80
+
81
+ **Jerod Santo:** And that's about what I would use it for.
82
+
83
+ **Adam Stacoviak:** Yeah. I like the fact that there's a grand channel, an Incidents channel where you can go and see all the incidents. And I like the fact that it's public. So if you're listening to this, you're in Slack already, then just hop in that channel and you can just pay attention to what we're doing... Just for fun, or to ask question, or just be aware. So I think distributing the knowledge not just to insiders, but to externals who wanna participate or just pay attention, you can. That's cool.
84
+
85
+ **Gerhard Lazu:** That was one of the things which were top of my mind back then. Not only that, but I tried it, and now we're improving our understanding of this new thing that is in our Stack. What do we like about it, what don't we like about it? Stuff like that. How well does it work? Or at least we know where to pay attention when these things happen, when there are incidents... Because a lot of the time something goes wrong and you just don't know; I'm like "What's the problem?"
86
+
87
+ So I mentioned that we had four incidents since. But the second one - that was an interesting one... This is something that Jerod was trying to investigate, and actually, he even fixed it.
88
+
89
+ **Jerod Santo:** I did.
90
+
91
+ **Gerhard Lazu:** That was the PR\#378. Do you wanna tell us a little bit about that, the story behind it?
92
+
93
+ **Jerod Santo:** Yeah.
94
+
95
+ **Adam Stacoviak:** Testing Jerod's memory again. Here we go.
96
+
97
+ **Jerod Santo:** I'm now opening the incident to my memory... It was like a failure for the application to boot in production, or something... Oh, it's coming back to me. So this leads into Oban a little bit, doesn't it?
98
+
99
+ **Gerhard Lazu:** Yes, yes. Parker Selbert. Thank you, Parker; that was a great improvement, by the way. We really appreciate it. And I even captured this in a comment, in PR\#378, how much we appreciate you contributing this, and testing how reliable our system is. And in some ways, it failed in an unexpected way... But for me, the most important thing was to test how we use the incident management platform, how we capture these issues, how we work amongst ourselves, and how we have this memory written down of what happened and why it happened, and what we may want to do about it. That was for me the highlight. But what were you going to ask, Adam?
100
+
101
+ **Adam Stacoviak:** Well, I'm just looking down the Incidents channel now, and 1) I like the fact that we're describing it as memory, which I think is interesting... You know, some would say that incidents tend to be big things, not small things; I guess they're just incidents - the label can make them seem more grand than they need to be, basically. These seem to be like blips along the SRE radar, essentially. "Is this app working? Is there an error?" It's kind of interesting how this works out. But the question really is about the helpful summary that comes along with the incident. So is there a precursor ceremony prior to this coming into Slack, or is this automated? Because that summary seems written by you, and that would mean that there's like a pre-incident step for you, where you then declare an incident into Slack. How does that work?
102
+
103
+ **Gerhard Lazu:** Yes, that's correct. So /inc, that's the shortcut. And I forget what is the command, but if I don't type a command and I just press enter, it asks me what I want to do. "Do you want to declare a new incident?" "Yes, I do." I think that's like one of the first options. I fill in the details, like the title, a quick summary, Start, and that just creates a new channel. Then from there I have like a bunch of options; it has check-ins...
104
+
105
+ I don't wanna spoil it too much, because the episode that soon follows we will be talking with the Incident.io team about Incident and about our experience. So we'll have a whole episode about this - how it works.
106
+
107
+ **Adam Stacoviak:** What episode number is that gonna be, do you know?
108
+
109
+ **Gerhard Lazu:** 21.
110
+
111
+ **Adam Stacoviak:** 21. So it's literally the next episode. So if you're in the future and there's an episode number 21, just pause and go listen to that if you want to and come back. We'll just earmark it.
112
+
113
+ **Gerhard Lazu:** I didn't want to be too certain... Remember, 100% uptime?
114
+
115
+ **Jerod Santo:** Haah...!
116
+
117
+ **Adam Stacoviak:** It's near! It's on deck. I've been using the word "on deck" lately.
118
+
119
+ **Gerhard Lazu:** Right.
120
+
121
+ **Adam Stacoviak:** So in Slack you have /inc, which is short for /incident.
122
+
123
+ **Gerhard Lazu:** That's right.
124
+
125
+ **Adam Stacoviak:** You can do either inc or incident fully, the full name... And it creates/manages an incident.
126
+
127
+ **Gerhard Lazu:** Yes.
128
+
129
+ **Adam Stacoviak:** \[12:01\] So when something happens, you create an incident here, you summarize it, it asks you some questions, there's some interactive process that Incident integrates with Slack to let us use the Slack channel Incidents, the Incidents channel, as our pointer to all incidents that happen now or in the future.
130
+
131
+ **Gerhard Lazu:** Correct.
132
+
133
+ **Jerod Santo:** So let's go back to this Incident 2, and let me tell the story, because it unravels a little bit from this Oban situation which you just thanked Parker... So let me tell that story, because our listener hasn't been in on it.
134
+
135
+ The last episode we were talking about Changelog.com and the open source codebase that runs it, and Parker was listening to the episode - Parker Selbert. And he is the author of Oban, which is a background job processing library for Elixir, which is a dependency of our codebase, but we weren't fully utilizing that.
136
+
137
+ So to tell that story a little bit, when I first wrote all the background processing stuff in Elixir, it was just happening by backgrounding things with native Elixir functions and features. And that served us very well for many things, such as sending emails, and processing statistics, and anything you wanna do in the background. I didn't need a background job library, which I thought was really awesome; I guess that lasted us four or five years without having to have a background job library... However, late last year we had Alex Koutmos working on some features, and one feature is the ability to edit your comments on the site and not have the original comment be emailed directly to the recipients; the notifications get delayed, basically...
138
+
139
+ So if I write a comment -- I think I have three or five minutes to edit that right away, just for typos, and... You know how you always know it's a typo right after you hit Submit, right? Even with the Preview. We have a Preview button, you can preview the markdown, you can look at it... And then you hit Send and then it's like, "Oh, gosh..."
140
+
141
+ **Gerhard Lazu:** Undo emails? The feature which I use the most. Undo. Undo.
142
+
143
+ **Jerod Santo:** Yeah, exactly. This is basically the Undo Email feature in our commenting system... And to do that, we have to say "Okay, delay the email notifications for this comment for five minutes, so that person has a chance to edit it." We don't wanna send out the original if they're gonna change it. So for that, Alex added Oban, which can do exactly that. Oban is a background job processing library that's persistent - it uses Postgres as its persistence layer.
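The actual implementation is an Oban job persisted in Postgres, but the idea is simple enough to sketch in a few lines of hypothetical Python: schedule the notification for after the edit window, and read the comment at delivery time, so whatever edits were made during the grace period are what actually goes out.

```python
import sched
import time

EDIT_WINDOW_SECONDS = 5 * 60
scheduler = sched.scheduler(time.time, time.sleep)
comments = {}  # comment_id -> latest text (stand-in for the database)

def notify_subscribers(comment_id):
    # Runs only after the edit window; reads the *current* comment text.
    print(f"emailing subscribers: {comments[comment_id]}")

def create_comment(comment_id, text):
    comments[comment_id] = text
    # In changelog.com this is a persistent, scheduled Oban job instead.
    scheduler.enter(EDIT_WINDOW_SECONDS, 1, notify_subscribers, (comment_id,))

def edit_comment(comment_id, text):
    comments[comment_id] = text  # the pending notification picks this up
```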
144
+
145
+ So Alex added that and said "Hey, there's a bunch of other stuff that we can cut over to Oban if you want me to", and I was like "Nah, I'll take care of that." And then I never did. \[laughter\] Thankfully, I didn't...
146
+
147
+ **Adam Stacoviak:** Because things happen...
148
+
149
+ **Jerod Santo:** ...because Parker was listening to that episode, and out of nowhere, he opens this amazing pull request, the one that Gerhard just thanked him for...
150
+
151
+ **Gerhard Lazu:** 378.
152
+
153
+ **Jerod Santo:** 378, where he basically goes through our entire website in best-practiced fashion Oban usage, removing some dependencies like Quantum, which was a cron scheduling thing, which Oban can do cron scheduling... All this cool stuff. So thank you for that. It was an amazing PR, and it was cool to see not just Oban being used more broadly, but also the guy who wrote it, so you know it's the right way to use it, versus me trying to use it with my limited knowledge... So that was really cool to see. And his reasoning was actually that, he was like "Here's a nice open source codebase that's using Oban. I want to be using it in the best way possible, so when people see it, they see best practices." So that's why he said he did it.
154
+
155
+ Now, the roll-out of that caused this incident, and it's all coming back to me... He had a typo in the production config, which of course is only in prod... So none of our test environments runs it, in dev you're not gonna see it, in tests you're not gonna see it... And so I did all my due diligence except for in prod, and then we did our due diligence in prod when we deployed it, and everything broke. Now, this is pointing out this insufficiency in our deployment process, right Gerhard? Because this should never have gone live. Is that right?
156
+
157
+ **Gerhard Lazu:** \[15:52\] Yes. So first of all, I'd like to thank Charity Majors for coining and popularizing the term -- I'm not sure about coining, but definitely popularizing the term "test in prod." Like, until you test in prod, you're not really testing; you're pretending to be testing. I'm being facetious now, because it's not quite like that, but the listeners that know this slogan, where it came from, from Charity Majors, know what I mean.
158
+
159
+ To come back to Jerod's point - yes. This release should have never gone out, in the sense that when the new version came out, because it failed to boot, it should not have been put behind a service, because it was never ready. It would never boot. For some reason, it was, and even to this day, I didn't spend enough time on this to understand why that happened... Because the system - in this case Kubernetes - should not have rolled out the new pod; it should not have put it behind the service, because it was never healthy. It never booted long enough, it never started. Why it happened, I don't know.
160
+
161
+ **Jerod Santo:** Yeah, because -- isn't this the way Kubernetes works? It's like, blue-green deploys, or something. It never went green, it should have stayed blue.
162
+
163
+ **Gerhard Lazu:** Exactly. Yes, it's a bit more basic than that, in that if a pod is not healthy and ready, it will not be put in the service... Because it's not healthy. It has to pass its health checks before it can be marked as ready. And it was never ready. These are the readiness probes. The readiness probes, which basically run HTTP requests, I think -- yeah, they are HTTP requests to port 4000; there was nothing bound to port 4000, because it wouldn't boot for long enough. I mean, it actually wouldn't even boot; it would crash, because the config was wrong, so the app could never boot. And because it could never boot, how did the readiness probes pass? They never passed. And if they didn't pass, why was the pod put into service? That should have not happened. But it did.
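For readers who haven't met readiness probes, this is roughly what an HTTP readiness probe boils down to, sketched in Python - the path and timeout here are made up, but port 4000 is the one mentioned above. If nothing is listening because the app crashes on boot, the check never passes, so the pod should never be put behind the Service:

```python
import urllib.request

def passes_readiness_probe(host="127.0.0.1", port=4000, path="/"):
    try:
        with urllib.request.urlopen(f"http://{host}:{port}{path}", timeout=1) as resp:
            return 200 <= resp.status < 400  # Kubernetes counts 2xx/3xx as success
    except OSError:
        # Connection refused, timeouts and HTTP errors all land here --
        # an app that never binds port 4000 is never marked Ready.
        return False
```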
164
+
165
+ **Adam Stacoviak:** So what happened as a result of that then? So the pod that was unhealthy was put into service... And what was the actual incident?
166
+
167
+ **Gerhard Lazu:** So the incident was that the origin was returning 503 responses. What that means is that the CDN, Fastly - it proxies these requests, it forwards these requests to LKE, Linode, where our app is running. And the origin in this case being LKE, our app running in LKE, was returning 503. This is Ingress NGINX. Ingress NGINX serving 503, the backend is not available, so the CDN was basically forwarding these requests.
168
+
169
+ Now, this actually affected only a subset of users. The CDN will serve stale content for all GET requests. Obviously, not the dynamic ones, not like POST, PATCH, stuff like that. But GET, HEAD - all of those, they will serve stale content. If you're logged in, because you're an author and you have like a token and a cookie, obviously, none of those requests will be cached. So the website will be down for those who were logged in. So Adam, Jerod, myself, when we logged into the app, we would see that it's down. But anyone listening to our feed, or podcasts, listening to episodes, they don't even see this. Browsing the website - they wouldn't see this, especially if they're not logged in. So that part behaved as it should, that was good, but obviously we detected it - our alerting detected it - and we could see straight away that it was down.
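A hypothetical sketch of that behaviour - not Fastly's actual VCL - is below: cacheable GET/HEAD requests with no session can be answered with stale content while the origin is erroring, whereas logged-in traffic is passed straight through, which is why only logged-in users saw the outage.

```python
def respond(method, logged_in, origin_status, stale_copy_available):
    cacheable = method in ("GET", "HEAD") and not logged_in
    if cacheable and origin_status >= 500 and stale_copy_available:
        return "stale content from the CDN cache"
    if origin_status >= 500:
        return "503 passed through from the origin"  # what logged-in users saw
    return "fresh response from the origin"
```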
170
+
171
+ **Jerod Santo:** Yeah, exactly. So it's kind of like a degraded performance is what it becomes, because there's certain endpoints, certain pages, whether you're logged in or logged out, that don't work... And I think it was actually a redirect that we were used to having there was failing because of a 503 when it finally hits the app, and so for certain people - I think it was for signed in people only, which is like, you want your signed in users to have their best experience, but they actually get the worst... It was just down for them. So that's what happened, and of course, fixing that was paramount. But according to the world at large, we were still up.
172
+
173
+ **Break:** \[19:30\]
174
+
175
+ **Gerhard Lazu:** One of the other things that we improved since episode 10 was more redirects at the edge, specifically in Fastly. So now we have www-to-root-domain - to the apex - redirects in Fastly, and things happen very quickly, rather than going all the way to our app. HTTP to HTTPS redirects, which also happen in Fastly... And I think there are a couple more changes around the health check frequency, because we were getting just way too many health checks. I think we were getting close to a thousand every minute, from all the Fastly POPs...
176
+
177
+ **Jerod Santo:** Oh, wow.
178
+
179
+ **Gerhard Lazu:** ...and we've reduced that to about 300, maybe even lower; I forgot exactly how much it was. Actually, I can look it up. Let me click on this to see exactly -- oh, yes! See? Writing it down. \[laughter\] So our Ingress and our app - they were servicing 44 requests per second from all the Fastly POPs, which means 2.6k requests per minute. That was quite a lot of entries in our logs...
180
+
181
+ **Jerod Santo:** Wow.
182
+
183
+ **Gerhard Lazu:** And when we went down, we went to - let me expand the screenshot... We went to about 196. 196 per minute. So we had about 3 requests per second; more than 10x improvement. So we were placing way too much load, way too much strain on our origin. But the thing which I wanted to focus on is some of the improvements, some of the redirects, which we did in Fastly, and that was one of the improvements that Jerod wanted to make.
184
+
185
+ **Jerod Santo:** Mm-hm.
186
+
187
+ **Gerhard Lazu:** So can you tell us a bit more about that, Jerod? Why did you want to make them and how did that work? Because there was also a problem that Adam spotted. That was a good one.
188
+
189
+ **Jerod Santo:** So first, the why - we don't wanna serve anything over HTTP, because HTTPS everywhere. Let's not worry about it. Everything, always, every time. That's why you do it with Fastly, right? They just go ahead and take care of it in every case.
190
+
191
+ And then www - well, we just don't like it. Right, Adam? We just like the cleanliness of the Apex domain. I kind of despise the www in our address bar. Some of it is just like personal taste, but really what the problem is is both. That's the problem, more than anything. If we were gonna pick www and redirect that way - totally fine, technically, and SEO-wise, and all of that kind of stuff. But if you're gonna pick the other direction, which is the direction that Adam and I just like to go, and just go Apex domain - same principle applies; it has to be all the time.
192
+
193
+ So we had these issues where it's easier to go from Apex to www than the other way around. And it always had to do with non-standard DNS records; I never know the details, but you're not supposed to CNAME an Apex domain, so they create these other kinds of records that are not part of the DNS spec... Y'all know what I'm talking about.
194
+
195
+ **Gerhard Lazu:** Yeah, exactly.
196
+
197
+ **Jerod Santo:** So there's reasons why www on a technical basis is just easier to accomplish. Well, we didn't wanna do that; we don't like Easy mode, we like Hard mode... So nope. Get rid of them. We don't need them, we don't want them.
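Expressed as a hypothetical sketch rather than the real Fastly config, the two edge rules amount to this: normalize www to the apex and http to https in a single 301, before anything ever reaches the app.

```python
def edge_redirect(scheme, host, path):
    """Return a (status, location) redirect, or None to pass the request through."""
    target_host = "changelog.com" if host == "www.changelog.com" else host
    if scheme != "https" or target_host != host:
        return 301, f"https://{target_host}{path}"
    return None  # already canonical; carry on to cache/origin
```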
198
+
199
+ So that was happening at Fastly, but it wasn't happening universally... So Gerhard, you had kind of turned it on, turned it off over the course of time, because weird things would happen. One of those weird things Adam spotted, which is Safari would redirect sometimes, and then fail to redirect other times. And it would only happen in Safari. So only Adam and myself, every once in a while, would catch up on it... But in Curl everything is fine, in Google everything is fine, in Brave everything is fine... But Safari would fail to redirect.
200
+
201
+ \[23:59\] And the reason for that was that we had basically a bad conditional in our Fastly config, which would match every request, and add a location header to every request, even non-redirect requests... So you'd get like a 200 okay to Changelog.com route, and it would have a location header in there, which - most browsers are like "Well, I've got a 200. I don't need to look for a location header", so it ignores it. Well, Safari would not ignore it; they'd pick up on it anyways, so it caused some issues, with the redirects working -- redirecting where it's not supposed to. All sorts of weird things.
202
+
203
+ It took me a long time to figure that out, because you're not looking at the headers when you're looking at the response codes, and you look at the header -- you're a lot like a browser, right? When I see a 301 or a 302, then I look at the location header. But otherwise, I didn't realize "No, the location header is being added to every single request by Fastly", so I had to go in there and rewrite that condition to basically have two checks. One is "is this request going to be redirected?" -- well, that was the one I added; make sure it's a redirected request before adding the location header.
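The real fix lives in the Fastly VCL, but the corrected condition boils down to something like this hypothetical sketch: only attach a Location header when the response really is a redirect, and never let one leak onto a 200.

```python
def set_response_headers(status, headers, redirect_target=None):
    is_redirect = status in (301, 302)
    if is_redirect and redirect_target:
        headers["Location"] = redirect_target  # the check that was missing
    else:
        headers.pop("Location", None)          # never ship Location on a 200 OK
    return headers
```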
204
+
205
+ Lots of detail there, lots of little Fastly changes... Talking about testing in production - like, "Well, I'm gonna roll this one out real quick and see if that works", and scripting up requests to hit all the endpoints that I wanna make sure they had the right responses... But got that fixed, and now we're 100%, every single time. www gone, and HTTPS all the things.
206
+
207
+ **Gerhard Lazu:** That's right. I remember experimenting with this in production, and the last time when I've done this - I think i was like a year ago - I introduced at least an hour's worth of downtime... And it wasn't like constant downtime, which I think is more manageable; it was flaky downtime. It'd be down for five minutes, then up for two minutes, then down again for ten minutes... It was a terrible experience for users.
208
+
209
+ So this time around I used another domain, which I just had sitting... Because each and every one of us has at least ten domains that we bought, but don't use...
210
+
211
+ **Jerod Santo:** Right.
212
+
213
+ **Gerhard Lazu:** So I had one of those, and I tried setting a new Fastly service and configuring a few things, but I missed this... This one thing which was setting the location header, but the status code was wrong, I missed. So you're very welcome for that surprise... \[laughs\]
214
+
215
+ **Jerod Santo:** Right. \[laughs\]
216
+
217
+ **Gerhard Lazu:** It was like, "How sharp is Jerod? Can he figure this one out?" No, I didn't think that.
218
+
219
+ **Jerod Santo:** Well, what's funny - there's like a bias when you're going into somebody else's work, where with myself I always know my incompetence... But when I'm editing your work, I'm assuming every change is fully competent, with full knowledge... You know, I just give you way more respect than I give myself, so it's hard to find those flaws, because I'm like, "Surely, Gerhard knew what he was doing when he set this, so I must not know what I'm doing", because it looks wrong... So it takes me longer to actually be like, "No, actually he just made a mistake."
220
+
221
+ **Gerhard Lazu:** I really appreciate that, by the way, the respect part; thank you very much, Jerod. \[laughs\] That means a lot. But I do make mistakes, actually a lot... So a lot of the time I fix them so quickly that people don't even know I've made them. But trust me--
222
+
223
+ **Jerod Santo:** That's the key right there.
224
+
225
+ **Gerhard Lazu:** ...mistakes - there's so many I make. All the time, every single day. Hundreds and hundreds of them. Because it's essential to learning. Experimenting. At least that's how I see it.
226
+
227
+ **Jerod Santo:** So when you were describing your DNSimple findings earlier in the conversation, and now we have you guilty again, it reminds me of this amazing quote by Filipe Fortes. I'm not sure if he originated this or if he just has the tweet... But he says "Debugging is like being the detective in a crime movie where you are also the murderer."
228
+
229
+ **Gerhard Lazu:** That's exactly what it feels like. "I murdered the infrastructure. It's my fault, and I have to fix it. I messed it up. \[laughs\] Why, Kubernetes, why?!"
230
+
231
+ **Adam Stacoviak:** There's three people here... That closes the loop too, your mention of the extra domains hanging around and testing them, because I was like "What is that weird domain in Fastly?"
232
+
233
+ **Gerhard Lazu:** Yeah. Do you remember which one it was? Do you still remember it? It's a very special domain. That's my future.
234
+
235
+ **Adam Stacoviak:** Well, I didn't wanna call it out and dox you in case it was private, or something like that...
236
+
237
+ **Gerhard Lazu:** No, it's okay... It's my surname. But the TLD means a lot to me...
238
+
239
+ **Adam Stacoviak:** Gotcha.
240
+
241
+ **Gerhard Lazu:** Do you wanna check it out real quick?
242
+
243
+ **Adam Stacoviak:** Well, I was like, "What is this domain doing there?" It was interesting.
244
+
245
+ **Gerhard Lazu:** Yeah. It is a special one, I have to say. It is the future.
246
+
247
+ **Adam Stacoviak:** I went to it too, and I don't recall it being memorable in terms of going to it... I think it actually just said like -- yeah, "Pong." That's right.
248
+
249
+ **Jerod Santo:** It just replies with Pong when you go there?
250
+
251
+ **Adam Stacoviak:** Yeah.
252
+
253
+ **Gerhard Lazu:** \[28:10\] Yeah. Ping Pong, yeah. That's like an Erlang joke. Or not like an Erlang joke, but whenever you ping nodes, they respond with pong. And they respond Pang if they can't ping other nodes... So yeah, it's very relevant to the Changelog infrastructure and our app... But anyways.
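For the curious, that behaviour is one function call away in Elixir - a tiny illustration, with made-up node names:

```elixir
# Start two named nodes, e.g. `iex --sname a` and `iex --sname b`, then from `a`:
Node.ping(:"b@localhost")        #=> :pong - the node is reachable
Node.ping(:"missing@localhost")  #=> :pang - the node cannot be reached
```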
254
+
255
+ **Adam Stacoviak:** I checked it out, I was like "What is this domain?" I'm like, "That's weird. Okay... Surely, Gerhard must know what he's doing..." \[laughs\]
256
+
257
+ **Jerod Santo:** I knew exactly what it was. As soon as I saw Lazu, I was like, "Oh, he's got a test domain out there he's been trying to futz with." Because it is kind of -- I mean, when you're basically editing Varnish cache configs via a web interface... And they have some nice tools for diffing, and they'll do a static analysis and make sure that that thing is gonna actually boot, or whatever... But it's hard to replicate a production environment in a way that you can just futz around with a config, especially when you're putting conditions, and rewrite rules, and adding headers, and removing headers... And just doing that on your live production site causes all sorts of little downtimes, right Gerhard?
258
+
259
+ **Gerhard Lazu:** That's right.
260
+
261
+ **Jerod Santo:** So that was a good move.
262
+
263
+ **Gerhard Lazu:** That's exactly like -- okay, so the domain is lazu.ch. And ch is really special to me, but we'll talk about it another time.
264
+
265
+ **Adam Stacoviak:** Like today, or a different day?
266
+
267
+ **Gerhard Lazu:** It's up to you. You ask me the question, okay?
268
+
269
+ **Adam Stacoviak:** What .ch do you mean? Let's resolve this now. What is it?
270
+
271
+ **Gerhard Lazu:** Okay. So ch is a TLD for Switzerland. Switzerland is a really special place for me. It's the one place where I feel like home; it doesn't matter when I go, whether it's summer, whether it's winter... Every single opportunity I have to go there, I go there. DevOps Days Zurich happens today, and I think yesterday, and I was really bummed that I couldn't make it... Maybe next year. It happens once a year. It's a really special place, and it is a future home, for sure. It's also a present home, but it's more like a spiritual home rather than an actual physical home... But it's in the future. A few years, a couple of years... Who knows? But it's definitely there. We have to go as a family to Switzerland at least once a year. It's perfect. And this year was amazing.
272
+
273
+ **Jerod Santo:** Yeah, I saw some of your recent Insta posts, or your wife's posts as you guys were vacationing there, and I was like, "Their vacations look amazing. They're really good at photography and great locations", you know?
274
+
275
+ **Gerhard Lazu:** Yeah.
276
+
277
+ **Adam Stacoviak:** And there was nobody else around. It was just you. You had the mountains to yourselves. You essentially owned them.
278
+
279
+ **Gerhard Lazu:** Yeah. It's not many people out there, because it's so big and wild...
280
+
281
+ **Jerod Santo:** Pretty cool.
282
+
283
+ **Gerhard Lazu:** And yes, you have to reach these spots, but you get some nice and quiet places. You can be walking for hours and not see another soul... It's great. I love it there.
284
+
285
+ **Adam Stacoviak:** Yeah.
286
+
287
+ **Gerhard Lazu:** Anyways, coming back to the issue at hand - this ClickOps, Dan Mangum, in episode 15...
288
+
289
+ **Jerod Santo:** Yes, ClickOps.
290
+
291
+ **Gerhard Lazu:** He mentions it. That's a great one.
292
+
293
+ **Jerod Santo:** I love that.
294
+
295
+ **Gerhard Lazu:** Yeah. We were meant to write - or at least attempt to start - a Fastly provider for Crossplane. Didn't have time; too many things happened. But we must, must version control and GitOps our Fastly config. ClickOpsing is just - no. It's not gonna work.
296
+
297
+ **Jerod Santo:** Plus, they have their own little version of version control in there... So I'm in there, reading your comments and seeing what you changed, and just like you would, I would love to have that with all of our existing tooling and not have to go into their web interface, and blah-blah-blah.
298
+
299
+ **Gerhard Lazu:** Yeah, I mean, that's one thing. And I think this ClickOps nature of Fastly makes it easy to make certain mistakes, and it makes it harder to experiment... So experimenting is a little bit harder in how we create the services; I have to combine things... You can't just put everything together. I had to install binaries on a server which I had to set up as an origin, because it was that difficult to just combine everything together. I wish it was simpler. Much, much simpler. So yeah, that's one of the areas of improvement. There's too many, so we have to focus, I suppose. But that was a good story, I thought.
300
+
301
+ **Adam Stacoviak:** So just to call it out for the listeners - if you're listening to this and you're curious what the deeper story is, episode 15, ClickOps, Dan Mangum...
302
+
303
+ **Gerhard Lazu:** \[31:57\] That was a great one, so thank you, Dan. We will make it happen, for sure. Upbound Cloud, it's in the future as well... But anyways, let's leave that for another time, because this is one thing which I find myself doing - I get excited about so many things, and I wanna do everything, and it's physically impossible... So I have to focus a bit more. So let's do just that - let's focus a bit more on the incidents that we had, on the things that we've fixed, because that's what this episode is about.
304
+
305
+ Cool. So there were two more incidents which happened shortly after our second one, the origin, the PR\#378, and they all had to do with Linode networking issues and Linode LKE unavailability. I'll link it in the show notes, the specific incidents, which we called them out... But as a result, August 3, August 26th, incident 3 and incident 4... By the way, if you go to the Incidents Slack channel, you'll find them. You can click, you can go through them; they're publicly available on our Slack.
306
+
307
+ There was a couple of things there... The one with LKE was interesting, because our database, the backups and the restores when there's networking issues - they are not as reliable, which then prevents things from rebooting properly... So that was like an interesting one. I'm sure we'll come back to this, especially as we start looking closer to CockroachDB and a Fly PostgreSQL. That's very, very recent, so we'll leave that maybe for another time... But it's there.
308
+
309
+ The other one was around, again, CDN Fastly. The website was available 100%. 100% uptime on the Changelog.com via CDN. But our origin, our backend, the LKE one, in these periods, between the two episodes, between episode 10, which was July 15th, and this one, we had 99.69 uptime, which means we were down for four hours. So three nines - we can't even do three nines in our origin. We can do 100% on Fastly when Fastly is not down; that's great... But that was an interesting one to see and contrast and compare.
310
+
311
+ **Jerod Santo:** Yeah.
312
+
313
+ **Adam Stacoviak:** So this was a U.S. East issue. This is a networking issue in multiple data centers? When you say U.S. East, that's multiple, right?
314
+
315
+ **Gerhard Lazu:** I'm not sure how many data centers they have, but...
316
+
317
+ **Jerod Santo:** It's one region though, right?
318
+
319
+ **Gerhard Lazu:** It's a region, yes. So they don't have U.S. East 1, U.S. East 2, or...
320
+
321
+ **Adam Stacoviak:** I see.
322
+
323
+ **Gerhard Lazu:** ...you know, as other providers have. So I think it's a single data center, but anyways, it was affecting us, and it was like a region-wide networking issue. That was August 3rd, and August 26th there was like LKE issue. It was unavailable for about an hour, and things were failing in weird and wonderful ways... But again, if you're not logged in or if you're not doing any dynamic requests, any post/patch to the Changelog app, everything's fine, everything's up. There's a bit more latency, but that's it. That's what users see.
324
+
325
+ **Adam Stacoviak:** So Linode has a global infrastructure page where they show off the regions... I think it's one, marketing, but then two, obviously, informational... But if what they mean by U.S. East comprises Toronto, Newark, which is in New Jersey, and Atlanta, which is in Georgia - Toronto is in Canada - so that's multi-country, East Coast, Northern Hemisphere, North America... Well, I guess U.S. East might be actually Newark and Atlanta then, and Toronto is on its own. Maybe it's Canada East, maybe it's CDN East or something, potentially... I don't know.
326
+
327
+ **Gerhard Lazu:** For us it's Newark, in New Jersey.
328
+
329
+ **Adam Stacoviak:** So what do you do then? When you have this kind of issue, how do you remedy? How do you plan for network downtime? One, you've got database backups that could go wrong, the reliability, as you mentioned, or even rebooting despite network issues... How do you SRE around this kind of issue?
330
+
331
+ **Gerhard Lazu:** So we touched up on this in episode ten, where we talked about multi-cloud... But I think for us it's even simpler than that. I started looking to Fly, like seriously looking at Fly recently, and they have a concept of running multiple instances of your app in different regions, very easily. And it works, but the problem is that for the Changelog app we have two - well, actually there's one very important dependency, and that's the -- you know what I'm gonna say, Jerod, right?
332
+
333
+ **Jerod Santo:** No.
334
+
335
+ **Gerhard Lazu:** The upload media volume.
336
+
337
+ **Jerod Santo:** Oh, yes.
338
+
339
+ **Gerhard Lazu:** \[35:56\] Until we have that, we can only run a single instance, because the volume is just local; it can only exist in a single region. Things get very complicated if we use multi regions. I know there are solutions, but the trade-offs - I wouldn't want to make them. It'd be much easier --
340
+
341
+ **Adam Stacoviak:** And that's a Linode thing, right? That's not a -- that block storage, essentially. That's what that is, is just local storage.
342
+
343
+ **Gerhard Lazu:** Exactly. Block storage, local storage... Yeah.
344
+
345
+ **Adam Stacoviak:** Okay. I wasn't sure if it was their block storage service, and it's local to that --
346
+
347
+ **Gerhard Lazu:** Yeah, it doesn't matter how it's implemented, or who you're using, whether it's Fly, whether it's Linode, whether it's GCP... In this case, a disk can only be attached to a single instance, and it's like a Kubernetes limitation as well, depending on the CSI driver... The point being, until we store our media, our files on an S3-compatible API (it doesn't matter which one it is), we are limited to running single instance because of this volume thing. So if we had that, if we stopped depending on local storage or block storage and we used this, then we could have multiple Changelog instances. And if a region went down, that's okay; we have two, three more running. So that sounds interesting.
348
+
349
+ **Jerod Santo:** So I've made the first steps in that direction, I just haven't made steps two through N at this point. And the first step was to identify the replacement library for Arc, which is the file uploads library we are using... Which does have S3 support, but has fallen into -- I don't wanna call it "disrepair", because it's working just fine. Let's just call it "unmaintained mode." I just don't wanna change and make progress on a library that's unmaintained, and I don't wanna maintain it either, at this phase...
350
+
351
+ There are some folks who've picked up the mantle and run with it, and it's actually a community fork called Waffle, which is being maintained by the community. I couldn't remember it, I had to find it in my bookmark history or my search history, because Waffle does not come to mind when you think of Elixir file uploads. Waffle... I don't know. It should be like XUpload. That's how Elixirists name things. UploadX. So it took me a while to find it, and then I started to actually dive in and find out what they've been doing since then, the process it takes to swap it over... So I'm on that phase, of switching over to cloud uploads. So more to come. We're not there, but it hasn't been zero action by me on that.
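For anyone wondering what that swap involves, pointing Waffle at an S3-compatible store is mostly configuration - a minimal sketch, assuming the waffle and ex_aws_s3 packages, with a made-up bucket name, endpoint and env var names (not the actual Changelog config):

```elixir
# config/prod.exs (sketch) - Waffle stores uploads in an S3-compatible bucket
import Config

config :waffle,
  storage: Waffle.Storage.S3,
  bucket: "changelog-uploads"             # hypothetical bucket name

config :ex_aws, :s3,
  scheme: "https://",
  host: "us-east-1.linodeobjects.com",    # any S3-compatible endpoint works
  region: "us-east-1"

config :ex_aws,
  access_key_id: System.get_env("UPLOADS_ACCESS_KEY_ID"),
  secret_access_key: System.get_env("UPLOADS_SECRET_ACCESS_KEY")
```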
352
+
353
+ **Adam Stacoviak:** That would include user-related stuff to like avatars, uploaded images to news items... Anything uploaded essentially into this - that would be no longer local; it would be in the S3-compatible, everything.
354
+
355
+ **Jerod Santo:** That's right.
356
+
357
+ **Adam Stacoviak:** Yeah.
358
+
359
+ **Break:** \[38:30\]
360
+
361
+ **Gerhard Lazu:** There is one more thing which I wanna talk about before we tackle the next steps, which I'm a big fan of - what happens between episode 20 and episode 30. The thing that I wanna talk about before we cover next steps is the errors in Sentry that we've been seeing. So between July 15th and September, basically, between the two episodes, we had 3.2k errors.
362
+
363
+ Sentry makes it really easy to see exactly what's been happening in the app. 2.3k of them are the undefined function error, crypto HMAC arity 3. And this is actually linked to the Erlang 24 upgrade that we did with Alex Koutmos three months ago. Alex, it's your fault... No, it's not. It's actually mine. \[laughter\]
364
+
365
+ One of the unintended side effects of that upgrade was that one of the libraries that we use - and Jerod knows more about this - is no longer working. So tell us a bit more about that, Jerod.
366
+
367
+ **Jerod Santo:** Well, I can tell you the rabbit hole I've gone down trying to fix it, which is that basically our Twitter Auth has been offline ever since then... Which so far only Mat Ryer seems to know about, so maybe he's the only one who uses Twitter Auth on a regular basis... Because he's always like, "I can't sign into the website." And I'm like, "Dude, just put your email address in there and we'll send you a magic link." And he's like, "Oh, you can do that?" Anyways... He really likes Twitter Auth. We offer GitHub Auth and Twitter Auth. Well, we did offer Twitter Auth for a while, until we upgraded to Erlang 24... And this crypto HMAC error happens deep inside of the ueberauth\_twitter library that we use. And it's a difficult situation, because it basically -- I don't know if "segfault" is the right word, but it just crashes the interpreter altogether. This is not a nice stack trace, and everything. It's deep down in there, but what it's saying, this crypto HMAC error, is not exactly the problem, I don't think... But it's very difficult for me to debug, because I have to debug it from inside of ueberauth\_twitter, the package. And that package doesn't handle the situation gracefully at all; it handed up the stack to me in order to debug... And I can't actually repro in dev.
368
+
369
+ So that's as far as I've gotten... I've found out what the problem is. I think it's when it's passing in an empty session cookie, for some reason, and it's trying to HMAC an empty string, if I recall correctly... It's hairy down in there, but actually, it's just navigating the debugging which has made me not be able to fix it... So what I do is every couple of weeks I go check their repo and see if they've cut a new release, and then I upgrade, and I'm like, "Please, have it fixed..." and then it still doesn't work. Because I don't even know what issue to open at this point. It's such a small use of our website, and I'm pretty sure most of those things hitting that error are just robots hitting that route; they're not people.
370
+
371
+ So I haven't fixed it, I haven't opened an issue yet... I hope that somebody just upgrades the thing and it goes away. Maybe -- is Erlang 25 out yet? I don't know... What changed, Gerhard? What's going on in there? Because I can't figure it out quite yet.
372
+
373
+ **Gerhard Lazu:** So a function call that this library is making no longer exists. Crypto HMAC with an arity of 3, which takes three arguments - it's undefined in Erlang 24. It must have been removed. So we can go to Erlang 23, it wouldn't take much, really... But 24 - it came out in July; it's a much better one. So many improvements.
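Concretely, the removed call and its OTP replacement look like this - a small sketch of the fix the library needs, not a patch to ueberauth_twitter itself:

```elixir
key = "secret"
data = "payload"

# Pre-OTP 24 call (removed; raises UndefinedFunctionError on Erlang 24):
#   :crypto.hmac(:sha, key, data)

# Replacement available since OTP 22.1, works on Erlang 24:
mac = :crypto.mac(:hmac, :sha, key, data)
IO.puts(Base.encode16(mac, case: :lower))
```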
374
+
375
+ \[44:18\] Other than this, we haven't seen any issues. So it's a good upgrade to make. We are on the latest major of Erlang. Erlang 25 is coming out next year. They ship once a year, in the summer, June/July, sometimes May... But it's usually June. So I see two things. Either someone from the library fixes it - just like Parker Selbert from Oban helped us improve things... That was a great contribution, and that actually would be a nice reward for these episodes that we make, where we talk about these things...
376
+
377
+ **Jerod Santo:** Yeah, absolutely.
378
+
379
+ **Gerhard Lazu:** I would quite like that. And if that doesn't happen - not a problem. Maybe we just disable Twitter Auth. If there's not that many people using it... Sorry, Mat. I don't know what we do about that... \[laughs\] But if there's not many people using it, why don't we just remove the feature, rather than -- like, the majority of our errors are this. And we may have some other bigger errors, but we don't see them, because there's a lot of noise. So it just goes to good hygiene, good housekeeping... We either remove the feature or we fix it.
380
+
381
+ **Jerod Santo:** Right.
382
+
383
+ **Gerhard Lazu:** And either is acceptable. I don't mind which one it is, as long as the number of errors is going down, as long as we're improving this. What do you think?
384
+
385
+ **Jerod Santo:** Yeah, I'm pro fixing it, but I'm also -- it didn't cross my mind just to disable it in the meantime. I think that's probably the move - you disable it till you get it fixed; and we definitely wanna get it fixed. There's no reason not to...
386
+
387
+ And the ueberauth\_twitter is maintained; I don't see anybody complaining about this. I feel like we're getting in a weird state that nobody else does, where -- I think that arity of 3 is the issue; it's passing in an empty string when it shouldn't be... Anyways, I feel like I could probably get to the bottom of it and find out that I'm actually the murderer somehow...
388
+
389
+ **Gerhard Lazu:** Maybe even just sharing the stack trace as we have it, and see what the developers of that library think or have to say. Maybe it's an easy fix, I don't know.
390
+
391
+ **Jerod Santo:** Yeah...
392
+
393
+ **Gerhard Lazu:** Maybe they don't even know this is happening, because we're the only ones having this problem, which is hard to believe... But maybe people are so stumped by it that they say "You know what - I don't even know how to report this issue."
394
+
395
+ **Jerod Santo:** Yeah... Part of it is my open source citizenship; I feel obligated to spend eight hours on it before I open an issue, because I know that I'm gonna take someone else's time... So I'm hesitant to open it. Although, on the other side, you're gonna save other people time if they have the same issue. But then I figure no one else has opened it, so maybe it is just me. I don't know, there's too many fields going on there.
396
+
397
+ **Gerhard Lazu:** Exactly. Well, that's why we do this, right? We think about improvement; how to improve things, even though it may be difficult... But it's that spirit of improvement, of contributing to the open source... Because otherwise, where would we be without it? I don't wanna think about that.
398
+
399
+ The other source of errors, which -- by the way, these are only three days ago... Ecto.Query.CastError. We had 700 in the last three days. And that seems a bit more meaningful.
400
+
401
+ **Jerod Santo:** Yeah. So this is a specific route. Your first point is well taken, because I haven't seen this one, whereas the other one, I knew about it quite well, because every time I look at Sentry, it's the top of the list. I haven't actually seen this one till right now, and it looks like it's a single endpoint. It's unsubscribing from the Founders Talk podcast, that's the route, and you're doing it with no email -- like, basically with a user ID that doesn't exist. So this is definitely a robot; that's why there's been so many of them. It's the same exact IP...
402
+
403
+ **Adam Stacoviak:** This show is really bad at this point then, because we have problems with the show...
404
+
405
+ **Jerod Santo:** Yeah. Everyone's trying to unsubscribe, we just won't let them. Like, when they hit Unsubscribe, it just errors out; they're like, "Dang it! I'll try again next time they email me." \[laughter\] No, these are all within hours of each other... So this is the same IP, same exact user agent, hitting the same exact route. It's definitely not a person. But I can fix this one pretty easily by basically just making that query a little smarter, and not erring. It'll just send them a 404, or something like that.
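A minimal sketch of that kind of guard - module, schema and function names are made up here, not the actual Changelog code - is to validate the id before it ever reaches Ecto, and return a 404 otherwise:

```elixir
defmodule ChangelogWeb.SubscriptionController do
  use ChangelogWeb, :controller            # assumed controller setup

  alias Changelog.{Repo, Subscription}     # hypothetical schema/context names

  def unsubscribe(conn, %{"id" => raw_id}) do
    with {id, ""} <- Integer.parse(raw_id),
         %Subscription{} = sub <- Repo.get(Subscription, id) do
      Repo.delete!(sub)
      send_resp(conn, 200, "Unsubscribed")
    else
      # non-numeric id (the bot traffic) or unknown id -> plain 404, no crash
      _ -> send_resp(conn, 404, "Not found")
    end
  end
end
```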
406
+
407
+ **Adam Stacoviak:** Send them through an infinite loop if they're a robot. Sent them to a -- crash their machine.
408
+
409
+ **Jerod Santo:** \[48:00\] Okay.
410
+
411
+ **Adam Stacoviak:** Instead of 404-ing it. That's too obvious.
412
+
413
+ **Jerod Santo:** Whenever anybody tries to unsubscribe from Founders Talk we just crash their machine?
414
+
415
+ **Adam Stacoviak:** Well, if they have this bot-like behavior, yeah.
416
+
417
+ **Jerod Santo:** So now I need a throttling library... This is too much work for a troll.
418
+
419
+ **Adam Stacoviak:** \[laughs\]
420
+
421
+ **Gerhard Lazu:** No, because I think we can see details. In Sentry, if you click the links, by the way -- and I can't add it in the show notes, but we can see the IP address which it's coming from; we can see the Chrome version, and we can see that it's using Windows 10. So if you're trying to unsubscribe and you're a listener of this, we are looking into it. \[laughs\] We know it's a problem. It's been happening for the last four days. Please hang on tight, we're fixing it.
422
+
423
+ **Adam Stacoviak:** I would recommend emailing editors@changelog.com and just saying "Please unsubscribe me manually."
424
+
425
+ **Gerhard Lazu:** Yeah, that's a good one.
426
+
427
+ **Jerod Santo:** This could be like a really MacGyver-style listener, who's like "I wanna unsubscribe. I realize they have a bug. I'm just gonna write a script that hits it every couple hours until it works, and then I'll be unsubscribed." Maybe that's what's going on here.
428
+
429
+ **Gerhard Lazu:** Apparently there's like a single user, and it's been happening 695 times in the last four days. Someone is really persistent.
430
+
431
+ **Adam Stacoviak:** What did we do to you? Gosh...
432
+
433
+ **Gerhard Lazu:** But one thing which I've wanted to say is that Getsentry made it easy. We get those weekly emails, we can see whether the errors are going up or down... They have some nice things. I don't go there that often, but they have a performance feature which we don't even use; that's an interesting one. Also dashboards - they have like a new dashboards feature. But just looking at the issues, it's very easy to see which are the top ones which have been happening the most in the last 30 days, or within whatever time span. It's easy to see where you should focus first as far as application errors go. That was nice.
434
+
435
+ **Jerod Santo:** Yeah. What they don't do - and this is probably a config; I just haven't got it set up right - that Rollbar would do is the first time a new issue comes in, I would still get an email every time. And now with Sentry I get the weekly email, and I just don't get the "Hey, new error detected" email, which I figure is a standard feature that I just don't have set up... And that's why I probably didn't even notice this one, because I don't go there and check Sentry unless I'm experiencing -- or I get the weekly email and say "Dang. A lot of errors this week", and I go check it again. But I just don't have that set up right maybe... Are you guys getting emails on new errors?
436
+
437
+ **Adam Stacoviak:** I'm only getting the weekly updates...
438
+
439
+ **Gerhard Lazu:** No, only the weekly ones. So one thing that I can see on this issue - it's on the right-hand side, just underneath the number of events - is ownership rules. Import GitHub or GitLab code owners files to automatically assign issues to the right people. Maybe that would help? I don't know.
440
+
441
+ **Jerod Santo:** I don't know. I'll look into it more. Not interesting for this conversation, but just something that we have to figure out. I mean, we would be the owner -- just set yourself the owner of all new issues, maybe. And then maybe you'll get emailed. I don't know.
442
+
443
+ **Adam Stacoviak:** What is odd too, at least in this last report, was that Monday through Friday was low errors. It was Saturday and Sunday that was the error dates... Which is the exact opposite of -- at least WebTraffic. I'm not sure about ListenTraffic, if they happen a lot more on weekends... But I would suspect that week days are probably higher than weekends, in most cases, for us.
444
+
445
+ **Gerhard Lazu:** Yeah. That is an interesting one.
446
+
447
+ **Jerod Santo:** Well, the weekends, when you finally listen to that Founders Talk episode and you're like "I've gotta get off of this train..." \[laughter\] Unsubscribe, 695 times. I guess for the listeners' sake, Founders Talk is Adam's show. He does it all by himself, so we're picking on him at this point.
448
+
449
+ **Gerhard Lazu:** Yes, we are. Sorry. That was really bad.
450
+
451
+ **Jerod Santo:** \[laughs\]
452
+
453
+ **Gerhard Lazu:** I apologize. I apologize.
454
+
455
+ **Jerod Santo:** It's a great podcast; you should totally subscribe, and I'm not saying that facetiously.
456
+
457
+ **Gerhard Lazu:** You should, yes.
458
+
459
+ **Jerod Santo:** Okay.
460
+
461
+ **Gerhard Lazu:** So what happens next? Next steps between episode 20 and episode 30. What is the first thing that you would like to see happen, Jerod? Let's go around. Or Adam, if you have one already queued up...
462
+
463
+ **Adam Stacoviak:** \[51:50\] Well, I know we've been talking about exploring more... I'm all about 1% improvements; I would say let's make progress on that front, not so much finish it... But let's explore what it might be to consider something like Fly, considering their new hire recently, and their focus on Elixir; we're an Elixir stack, so it makes sense to explore the advantages of different platforms and how they work... And you know, 1) get around the networking issue that we had there, and see what it could mean to be multi-cloud... You mentioned Upbound and the ability to have a control plane that goes across different clouds, and whatnot; so maybe that makes sense to continue to explore. Or share what you've currently explored so far.
464
+
465
+ **Gerhard Lazu:** Yeah, that's a good one. So the person that Adam is talking about is Chris McCord, creator of the Phoenix framework. It's exactly what changelog.com the app is using... And he joined --
466
+
467
+ **Adam Stacoviak:** It was about two weeks ago, I think.
468
+
469
+ **Gerhard Lazu:** So a few weeks ago he joined the Fly team; I think that's great. There's a big commitment from Fly to Phoenix to Elixir to this ecosystem, which makes us very excited, because our app is using exactly the same stack, so that's great.
470
+
471
+ **Adam Stacoviak:** And I'll plug too, since we're dogging Founders Talk - I'll plug episode 80, with one of the co-founders and the CEO, Kurt Mackey, whom I think is a super-awesome dude. I think he's super-smart. He has great intentions, he's a developer at heart... He is a developer, obviously, but he's been iterating -- the title of the show is "Iterating to globally distributed apps and databases for a long time." He was the person who was behind MongoHQ, the naming issues... Long story short, turned that into Compose, which was eventually acquired by IBM... Exited that positively, continued to explore, and founded Fly.io. I'm telling a micro-version of the story. Y Combinator twice... Super-smart fella, so - a lot of respect for what they're doing, and I think their grounds there that they're tilling, I suppose; I'm trying to use farmers terms, or something like that... The grounds they're dealing with over there are grounds worth exploring.
472
+
473
+ **Gerhard Lazu:** In Ship It episode 18, in the show notes, there's a link to "Firecracker VMs on Metal, Oh, My!" This is Kurt Mackey's talk in March, earlier this year...
474
+
475
+ **Adam Stacoviak:** At Proximity, yeah.
476
+
477
+ **Gerhard Lazu:** The Proximity one. That was a really good talk. I've really, really enjoyed it... So if you wanna check it out -- that got me really excited about Fly and the infrastructure which they run... And I'm sure Kurt will be joining Ship It very soon.
478
+
479
+ **Adam Stacoviak:** Yeah. I think we barely scratched the surface of the ideas he has, so I think he's due a conversation with you at a deeper level on the tech side.
480
+
481
+ **Gerhard Lazu:** What about you, Jerod? Would you like to go next, your top thing?
482
+
483
+ **Jerod Santo:** Well, it's time to get our uploads over to the cloud, but that one's on me... On you, Gerhard - I wanna see you test out that Honeycomb integration sometime here real soon, because I did enjoy what Charity had to say on your episode with her, and I think it sounds like a good place to hop in and try out Honeycomb and report back your findings to us and the gang.
484
+
485
+ **Gerhard Lazu:** Yeah, that's actually a good one. So this is one of the problems that I've been having since episode ten. I've had so many great conversations on Ship It, and I want to try so many things, and I do a little bit of this and a little bit of that...
486
+
487
+ **Jerod Santo:** Right.
488
+
489
+ **Gerhard Lazu:** ...but nothing long enough to land it. And that's something which I would like to be doing more of.
490
+
491
+ **Jerod Santo:** Focus.
492
+
493
+ **Gerhard Lazu:** Yeah, exactly. But there's so many exciting things, like - I wanna try Fly, and I wanna try Honeycomb, and I do, and I set it up, and CockroachDB, and I've set things up... But I haven't taken it all the way. So that's something which I would like to get better at.
494
+
495
+ So my top of the list - I really like what you've mentioned, Jerod and Adam, and I think this basically is more towards Adam... Is experimenting more, for sure; there is debugging the Kubernetes issue that we've been having since -- actually, since we've enabled Grafana Cloud and we've had more visibility into Ingress NGINX...
496
+
497
+ What we see - and this is, by the way, in a Rawkode livestream which is coming up, and it will be out, by the way, by the time you listen to this; I can add it to the show notes... It's that the tail latencies -- this is Ingress NGINX, our tail latencies to the app are really high.
498
+
499
+ \[55:54\] So our 90th percentile - this is Ingress NGINX to Phoenix, to PostgreSQL, the request coming back to NGINX, the maximum 90th percentile is 286 milliseconds. It's fairly high, but it's okay, not that high. The 95th one is 841 milliseconds, so almost a second. So some requests can take almost a second to come back, and that's fairly slow. But the 99th percentile - this is the long tail that I've been talking about - can be as high as a minute. So which requests are taking more than a minute to service?
500
+
501
+ **Adam Stacoviak:** Oh, my gosh... That's a long time, a minute... I mean, a full second is a long time. A minute is 60 times that.
502
+
503
+ **Gerhard Lazu:** Exactly.
504
+
505
+ **Adam Stacoviak:** I'm just doing some math for you who don't know how time works.
506
+
507
+ **Gerhard Lazu:** Thank you. Thank you, Adam. So that's one thing which I would like to look into more, because one thing which -- I mean, I had many great conversations in these ten episodes, but the one which really resonated with me was the one with Justin Searls about reliable software, trusting your software, optimizing for smoothness, not speed. That was episode 16. So I would like to make Changelog the app, the setup, more reliable, more robust. I just want it to work as good as it can, for as many people as it can, even when things go wrong behind the scenes. End users shouldn't need to know about that, or see that. And if it does happen, let's just be honest about it, let's just have incidents, talk about it openly, and figure out how to do it better, how to improve.
508
+
509
+ So in my mind, how do we make it more reliable? How do we fail less? How are more available? I think we have made some great improvements, but I don't think we're there yet. I don't think we'll ever be there, but at least we'll be improving. That's where my mind is at.
510
+
511
+ **Adam Stacoviak:** Well, Kaizen, right? Come back to Kaizen. Speaking of Kaizen, behind the scenes we have a T-shirt design which is simply the Japanese characters that make up the word Kaizen, which I feel like is an adopted -- we raised this flag, Gerhard, so I feel like this is an adopted company-wide mantra, for me at least, ever since you brought it up. We're now on Kaizen 2, essentially. This is episode 20 of Ship It, but surely, the second version of Kaizen for us. So if you're this far, at this end of the show, and you thought "These guys navel-gaze big time on the show", it's on purpose. It's on purpose. \[laughter\] This is about Ship It the show, this is about our infrastructure, this is about Changelog Media and how we progress the business; we're a podcast company primarily. How we think about infrastructure, how we conjure that into content that entertains, but also informs... And you know, this embodies this idea of continuous improvement. Kaizen.
512
+
513
+ So we do have a shirt coming out soon, it's the Japanese characters that represent Kaizen. It's a super-cool shirt, it's on a super-soft T-shirt... You're gonna love it, and I'm excited to wear that to represent this idea of continuous improvement and embracing that.
514
+
515
+ **Gerhard Lazu:** And on that high note, episode 20 is a wrap. Thank you, gents. It was a pleasure. Looking forward to the next one.
516
+
517
+ **Adam Stacoviak:** Me too.
518
+
519
+ **Jerod Santo:** Kaizen.
520
+
521
+ **Adam Stacoviak:** Kaizen.
522
+
523
+ **Gerhard Lazu:** Kaizen.
Kaizen! The day half the internet went down_transcript.txt ADDED
@@ -0,0 +1,565 @@
1
+ **Gerhard Lazu:** So I really wanted to talk to you about this topic of Kaizen. Kaizen, for those hearing it for the first time, is the concept - the art - of self-improvement, specifically. And that is really powerful, because it's the best way that you have to improve yourself, and to always think about "How can I do this better?" It all starts with "How can I do this better?" So with that in mind, what I wanted us to do every tenth episode was to reflect on what we can improve for the application, for our setup, but also the show, because - isn't that the best way of improving? I think it is...
2
+
3
+ **Adam Stacoviak:** Kaizen. I love it.
4
+
5
+ **Jerod Santo:** Always Be Improving. ABI.
6
+
7
+ **Adam Stacoviak:** ABI, yeah. Always Be Something. ABS.
8
+
9
+ **Gerhard Lazu:** I'm pretty sure that means something else for others, ABS, but yes...
10
+
11
+ **Adam Stacoviak:** Automatic Breaking System, that's what it refers to for me...
12
+
13
+ **Gerhard Lazu:** \[03:59\] The reason why I care so much about this is that having been part of Pivotal, this company which isn't anymore - it was acquired by VMware last year or two years ago - is that one of the core principles was to always be improving. "Be kind" was there as well. But always be improving was something that was embodied in the retrospectives that we used to have every single week at the end of the week. And this was good, because what worked well, what didn't work so well? Anything that people wanna discuss? And that made sure that everybody was in sync with problems, but also the wins. I think that's important.
14
+
15
+ So having done it for 5, 6, 7 years, it's so deep-ingrained in me I cannot not do it. It's part of me. And I do it continuously. And I think the infrastructure setup that we've been rolling for some number of years has been an embodiment of that. Every year it has been improving. It was rooted in this principle.
16
+
17
+ Now, one thing that we did in the past differently is that we improved, or at least we shared those improvements once per year. It was a yearly thing. And one of the ideas for the show was to do it more often, to improve more often. So we can improve in smaller steps, but also figure things out a lot, lot quicker, what works and what doesn't work, rather than once a year.
18
+
19
+ **Adam Stacoviak:** It works out at about every two-ish months essentially we get a response, a blip, a feedback loop, whereas before it was like once, or more recently twice in a year; if it's Kaizen every ten shows, then we get around four or five(ish) per year if you're shooting for 52 as a year.
20
+
21
+ **Gerhard Lazu:** So I think in end of April, beginning of May we switched on the 2021 setup, and we had a show, we had an intro, we did a couple of things, episodes -- do you still remember which episode that was on Changelog, Adam?
22
+
23
+ **Adam Stacoviak:** No, but I have the internet, and I will look it up... So give me a moment while I look it up.
24
+
25
+ **Gerhard Lazu:** That is a good one. That was meant to be part of Ship It, but then some timelines got moved around, and that went on Changelog... And then the Ship It - we did the intro to the show. So that's how it happened.
26
+
27
+ **Adam Stacoviak:** It was an interesting maneuver, a last-minute maneuver from us too, which - I'm not sure it matters to the listeners, but I think it was kind of... We had a plan, and then at the last minute we changed the first ten years of running down the field, so to speak. That was episode 441 on the Changelog's feed, so changelog.com/441 will get you there, "Inside 2021's infrastructure for Changelog.com", which is a throwback to the year prior, "Inside 2020's infrastructure for Changelog.com." So we've been doing that every year now for the past couple of years.
28
+
29
+ **Gerhard Lazu:** I think that change made a lot of sense, and that change just led to a couple of other things... And now we're finally in the point to talk about the next improvement, so you don't have to wait another year; not only that, we're doing things slightly differently. We're going to share the things that we're thinking about improving, maybe why we're thinking about improving them, so that maybe you have better ideas, maybe you know about things that we don't, that you would like us to try out, maybe part of the same thing...
30
+
31
+ So - Fastly. I would like to mention that, because Fastly, our partner - amazing CDN - had an outage a couple of weeks back.
32
+
33
+ **Adam Stacoviak:** Unexpected, of course.
34
+
35
+ **Jerod Santo:** Right after you said 100% uptime.
36
+
37
+ **Gerhard Lazu:** Exactly.
38
+
39
+ **Adam Stacoviak:** Exactly. \[laughter\] No, it was like a week after, wasn't it? That show shipped, and the very next week - Fastly outage. And it was a global outage, too.
40
+
41
+ **Gerhard Lazu:** It was global. Half the internet broke. It was the biggest Fastly outage that I am aware of. So what that made me realize is that Fastly is great when it works. And when it doesn't, it doesn't affect just us, it affects everybody. BBC was down, and that's a big one, BBC being down. Emojis were down, on the whole internet. That was unexpected...
42
+
43
+ **Jerod Santo:** Wait, wait, wait. Tell me more. How were emojis down for the whole internet? It doesn't make sense.
44
+
45
+ **Gerhard Lazu:** \[08:01\] Well, apparently, the assets that were served by AWS had something to do with it. I don't know exactly in which capacity, but AWS was serving certain emoji assets, and Fastly was part of that... And emojis stopped working for Slack, so I think in the Slack setup somewhere -- I mean, everybody uses Slack to communicate these days, because everybody's at home these days, or most of us are at home these days. So you couldn't use emojis in Slack anymore. They stopped working.
46
+
47
+ **Jerod Santo:** That makes more sense than "The emojis just stopped working globally", across the entire world of devices... But yeah, inside Slack.
48
+
49
+ **Gerhard Lazu:** Sensational. It's news, it has to be sensational. \[laughter\]
50
+
51
+ **Jerod Santo:** Well, most importantly, we were down, so most importantly to us... So the BBC being down - tragic; terrible for lots of people. But for us specifically, we were down, and that was the worst part about it, wasn't it?
52
+
53
+ **Gerhard Lazu:** For us, yes. As for all the listeners. \[laughs\] Right. And interestingly, during this time, our origin, the backend to Fastly was up. It didn't have an issue. So this month I got a report, we were down for 21 minutes because of that. So 99.96% uptime.
54
+
55
+ **Jerod Santo:** So you had a cutover though; you turned off Fastly basically, right?
56
+
57
+ **Gerhard Lazu:** Yes. I jumped in, switched Fastly... Basically re-routed all the traffic; so DNS updates, and Changelog.com would start resolving directly to the Linode host, talking to the Linode load balancer, node balancer, and Fastly was basically taken out of the picture. But because of how DNS is cached, it took a couple more minutes to propagate... But that was it. And the CDN hostname as well - re-routed it...
58
+
59
+ I was basically chilling, it was like a day off... It was a great one. I was in the garden, just chilling. \[unintelligible 00:09:51.07\] As you do, exactly... And then the phone started going off like crazy. That was really like "What?!" I got SMS messages, because we have multiple systems... When something is down, you really wanna know about it, so I got texts, I got Pingdom alerts... Oh, and what I didn't get was Telegram notifications, because guess who else was down? Grafana Cloud.
60
+
61
+ **Jerod Santo:** Grafana, yeah. You didn't let me guess. I was gonna guess it.
62
+
63
+ **Adam Stacoviak:** I thought you were saying you had a day off because of all the down \[unintelligible 00:10:22.20\]
64
+
65
+ **Jerod Santo:** Grafana? Was it Grafana?
66
+
67
+ **Gerhard Lazu:** Yes, Grafana. Sorry, Adam, what were you saying?
68
+
69
+ **Adam Stacoviak:** I was saying I thought you said you were taking the day off because you had nothing to do because the internet was down, essentially. That's what I thought you were saying.
70
+
71
+ **Jerod Santo:** Oh, no.
72
+
73
+ **Adam Stacoviak:** I misunderstood.
74
+
75
+ **Jerod Santo:** \[unintelligible 00:10:35.29\]
76
+
77
+ **Gerhard Lazu:** I was just chilling. It was a gorgeous day, sunny, and it was a day off; I was sunbathing. I won't go into more details with that. \[laughs\]
78
+
79
+ **Jerod Santo:** Well, let me say two things. First of all, thanks for springing into action and bringing this back up; 21 minutes, nothing wrong with that, compared to the BBC, those suckers, they were down for much longer... But the bummer side, let me tell you the bummer side, which - I haven't told you this before, but what you did is you cut Fastly out and you put Linode directly in, right? And so all of our traffic was served from Linode during that time. Well, it just so happened to be timed exactly with when we shipped our episode of the Changelog with Ryan Dahl; and because we do all of our analytics through our Fastly logs, and we served all of that traffic directly from Linode, we have no idea how popular that episode is. In fact, in our admin it looks like it's not a very good episode of the Changelog, but I'm quite sure it was pretty popular... So I was bummed, I was like "Oh, no...! We missed out on the stats for the show", which was one of our bigger shows of the year... But I'd rather have that happen and let people listen to it than have it be down and nobody gets to listen to it. So that was a bummer, but pick your poison, I guess, or the better of two evils.
80
+
81
+ **Adam Stacoviak:** Yeah.
82
+
83
+ **Gerhard Lazu:** \[11:56\] I remember that, actually. I remember that, because I remember looking at the stats, and the stats were down, and I was thinking "I wanna talk to Jerod about this." So if there's one lesson to learn from this, we need to double up. So everything that we do, we need to do two of that thing. Say like monitoring - we have two monitoring systems. Because sometimes Grafana Cloud has an issue, and we want to still know -- and when I say Grafana Cloud, I mean the black box, all the exporters... There was a recent one as well when they pushed updates, sometimes things are offline for a few minutes... And it makes you think that a website is offline, but it's not. Or when it is offline, you don't get anything. So we used Pingdom as a backup, and that helps. So stats - I think it's great to have stats from Fastly, but I don't think we can rely only on those stats. I think we need more.
84
+
85
+ **Jerod Santo:** Well, it's one of those ROI kind of conversations, and I think this is a good conversation for Ship It, like "What's worth doing?" and the fact is that in our five years of being on Fastly, this is the first incident they've had... And if it didn't happen to be right when we released a popular episode of the Changelog -- if it was just like a Saturday and we missed some downloads, I wouldn't care all that much. At the end of the day, I know that show is popular, so it's not really changing my life... I just know it was popular because people reacted that way, versus looking at the download stats.
86
+
87
+ So the question becomes "What does it take to get that redundancy? What does that redundancy cost, and what does it gain?" And in the case of stats, I'm not sure what side of the teeter-totter we actually end up on, because the way it works now is Fastly streams the logs of all of the requests to the mp3 files over to S3, and then we take those logs, which are formatted in a specific way, parse them, and then bring them locally into our database, and it's reproducible in that way off of S3. So we can just suck down the same logs from S3 whenever we want, re-parse them, recalculate...
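As a rough illustration of that reproducibility - the bucket name, prefix and parser module are assumptions, not the real pipeline - re-processing a day of Fastly logs from S3 with ExAws can look something like this:

```elixir
# Re-pull a day of CDN logs from S3 and re-parse them (sketch)
bucket = "changelog-fastly-logs"   # hypothetical bucket

bucket
|> ExAws.S3.list_objects(prefix: "2021/06/08/")
|> ExAws.stream!()
|> Enum.each(fn %{key: key} ->
  {:ok, %{body: body}} =
    bucket
    |> ExAws.S3.get_object(key)
    |> ExAws.request()

  body
  |> String.split("\n", trim: true)
  |> Enum.each(&Stats.LogParser.parse_line/1)   # hypothetical parser module
end)
```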
88
+
89
+ But what would it take to get Linodes doing the same thing, or changing the way we do our stats, so that we're either redundant or do it differently? I don't know the answer to that off the top of my head.
90
+
91
+ **Adam Stacoviak:** In the case of something like Grafana though, I would put that back on them. We shouldn't have two Grafanas. I think this is probably the case for multi-cloud - wouldn't it make sense then to be let's say on GCP, Azure, or essentially multi-cloud? And maybe that's an issue with cloud at large. The cloud has to be multi-cloud, so that if part of their cloud goes down, then there's still some sort of redundancy in them. I would rather them do that kind of stuff than us have to have essentially two Grafanas, or Linode and Fastly, and deal with that. And maybe that's the unique scenario where it's like we do have to deal with that N+ whatever... But I would say on a service level, push that onto the service to be smarter about the way they roll out their own cloud, and their potential downtime, what that means for internet at large.
92
+
93
+ **Gerhard Lazu:** Now, obviously, as you would expect, I think about this differently.
94
+
95
+ **Jerod Santo:** \[laughs\] Please tell us.
96
+
97
+ **Adam Stacoviak:** Please tell.
98
+
99
+ **Gerhard Lazu:** The way I think about this is that we are in a unique position to try out all these providers. We have the know-how, and really, our integrations are fairly simple... So I know that it wouldn't take that much more to integrate Cloudflare. So how about we use Cloudflare AND Fastly? ...the two biggest CDN providers, at the same time. What if, for example, we decouple assets from local storage? We store them in an S3 object store. For the database we maybe use CockroachDB, a hosted one, so the database is global, and then we run Changelog with one instance on Linode, one instance on Render, one instance on Fly, and then we use different types of services, not just Kubernetes that we try to apply everything to... Because we try it all out, and at the same time, we are fully redundant.
100
+
101
+ \[16:06\] Now, the pipeline that orchestrates all of that will be interesting... But this is not something that's gonna happen even like in a year. It's slowly, gradually... It's maybe a direction that we choose to go towards... And maybe we realize "You know what? Actually, in practice, Cloudflare and Fastly - it's just too complicated." Because only once you start implementing you realize just how difficult it is.
102
+
103
+ **Adam Stacoviak:** Yeah, that's that cost that Jerod was talking about - how much does the redundancy cost, and how much does it gain you?
104
+
105
+ **Gerhard Lazu:** So from a CDN perspective we just basically have multiple DNS entries; you point both Fastly and Cloudflare to the same origin - or origins in this case... \[unintelligible 00:16:44.19\] The configuration is maybe slightly different, but we don't have too many rules in Fastly. How do they map to Cloudflare? I don't know. But again, there's not that much stuff. I think the biggest problem is around stats. We keep hitting that.
106
+
107
+ **Jerod Santo:** Yes. And I looked at Cloudflare - it was probably two years ago now - with regards to serving our mp3's, and where I ran into problems was their visibility into the logs and getting that information out paled in comparison to what Fastly provides. So we would lose a lot of fidelity in those logs, like with regard to IP addresses... Fastly will actually resolve with their own MaxMind database or whatever their GeoIP database is; they will give you the state and the country of the request, stuff that we wouldn't have to do... And Cloudflare - at least at the time (a couple years ago) just didn't provide any sort of that visibility, so it was like, I would lose a lot of what I have in my stats using Cloudflare. And if I was gonna go multi-CDN, which is kind of like multi-cloud, I would have to go lowest common denominator with my analytics in order to do that... So it really didn't seem worth it at the time. But maybe it's different now.
108
+
109
+ **Adam Stacoviak:** Yeah, if they've improved their logs, then it's back on the table, let's say.
110
+
111
+ **Jerod Santo:** Yeah. So that's maybe the long-term direction. What's some stuff that is more immediate, that you have on the hitlist? Things that we should be doing with the platform.
112
+
113
+ **Adam Stacoviak:** I think multi-CDN makes sense to me, just for those reasons. If you've got one that goes down, then you've got another resolver.
114
+
115
+ **Jerod Santo:** But once in five years... How often is Fastly down?
116
+
117
+ **Gerhard Lazu:** Okay, I'm thinking about this from the perspective of the experience and sharing these things.
118
+
119
+ **Jerod Santo:** Right.
120
+
121
+ **Gerhard Lazu:** A few years back, we were missing this. But we don't know what they have or don't have this year, or maybe what we're missing. Maybe they don't even know what we would like for them to have. And listeners of this show, they can think "You know what - this show is really interesting, because they are using multi-cloud, and these are all the struggles that they have, so maybe we can learn from them and not do some of these mistakes ourselves."
122
+
123
+ **Jerod Santo:** Right.
124
+
125
+ **Gerhard Lazu:** So in a way, we're just producing good content. That is very relevant to us, when we say "You know what - we are informed, and we have made an informed decision to not use Cloudflare because of these reasons... Which may or may not apply to you, by the way."
126
+
127
+ **Jerod Santo:** Right. It's like there's a brand new hammer, and we grab hold of it... And everyone gathers around, we put our hand out and we strike it right on our thumb. And then everybody knows "That hammer really hurts when you strike it on your thumb. I'm glad those guys did it. I've learned something. I don't have to do that myself."
128
+
129
+ **Gerhard Lazu:** \[laughs\] I think that's a very interesting perspective, but I don't see it that way. It's an amazing analogy, but I'm not sure that it applies here... But it's great fun, that's for sure.
130
+
131
+ **Adam Stacoviak:** Yeah.
132
+
133
+ **Jerod Santo:** Okay, good.
134
+
135
+ **Break:** \[19:48\]
136
+
137
+ **Gerhard Lazu:** So you were asking, Jerod, what is next on our hit list... One of the things I learned from the Fastly incident is that we don't have anything to manage incidents. When something is down, how do we let users know what is going on? How do we learn from it in a way that we can capture and then share amongst ourselves, and then also with others?
138
+
139
+ A document is great, a Slack channel just to write some messages in is great, but it feels very ad-hoc. So one of the things that I would really like is a way to manage these types of incidents. And guess what - there's another incident that we have right now.
140
+
141
+ **Jerod Santo:** Right now?
142
+
143
+ **Gerhard Lazu:** Right now, right now.
144
+
145
+ **Jerod Santo:** Like the website's down right now?
146
+
147
+ **Gerhard Lazu:** No. The incident - this is a small incident...
148
+
149
+ **Jerod Santo:** Okay, good...
150
+
151
+ **Gerhard Lazu:** No, the website is 100% up.
152
+
153
+ **Jerod Santo:** 100% uptime, thank you.
154
+
155
+ **Gerhard Lazu:** Yeah. So Fastly, it's your responsibility to keep it up, right? That's what it boils down to. It's someone else's problem. It's Fastly's problem.
156
+
157
+ **Jerod Santo:** That's right. Pass the buck.
158
+
159
+ **Gerhard Lazu:** Right. So right now, one of the DNSimple tokens that we used to renew certificates has been deleted. So it's either Adam or Jerod, because I haven't.
160
+
161
+ **Adam Stacoviak:** It wasn't me...
162
+
163
+ **Gerhard Lazu:** Anyways, I'm not pointing any fingers...
164
+
165
+ **Adam Stacoviak:** I don't touch DNS.
166
+
167
+ **Gerhard Lazu:** So in the account --
168
+
169
+ **Jerod Santo:** It's looking like maybe it was me, but I haven't touched anything, so I don't know what's going on. It could be worse than we think.
170
+
171
+ **Adam Stacoviak:** It could be a bit flip.
172
+
173
+ **Gerhard Lazu:** So we had two DNS tokens, one was for the old setup and one was for the new setup. The one for the old setup I have deleted, because we just didn't need it... And then we had three DNS tokens left. One of them disappeared. It's no longer there. And that was the one that was used by cert-manager to renew certificates. So the certificates are now failing to renew. We passed the 30-day threshold, and we have I think another 25 days to renew the certificate. But because the token is not there, the certificate will never be renewed, and then eventually, the certificate will stop being valid. This is the same wildcard certificate we use in Fastly. So a lot of stuff is going to break for many people.
174
+
175
+ Now, I found out about this by just looking through K9s at what is happening with the different jobs. There are jobs which are failing, that are meant to renew things. It's not the best setup, so the first thing which I've done is set up an alert in Grafana Cloud: when the certificate expires in less than two weeks - or actually three weeks; whatever, some number of seconds, because that's how they count them - I get an alert. So it should automatically renew within 30 days. If within 25 days it hasn't been renewed, I get an alert. So I have 25 days to fix it, roughly.
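
For reference, a minimal standalone sketch of the kind of check that alert encodes - it only inspects the certificate currently served by changelog.com, and the 25-day threshold is the one mentioned above; the actual alert lives in Grafana Cloud and is driven by cert-manager's metrics.

```python
# A standalone sketch of the "certificate has not renewed in time" check.
# The real alert is configured in Grafana Cloud against cert-manager metrics;
# this version simply inspects the certificate served by changelog.com.
import datetime
import socket
import ssl

HOST = "changelog.com"
THRESHOLD_DAYS = 25  # cert-manager renews ~30 days out; under 25 means renewal is stuck


def days_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after - datetime.datetime.utcnow()).total_seconds() / 86400


if __name__ == "__main__":
    remaining = days_until_expiry(HOST)
    if remaining < THRESHOLD_DAYS:
        print(f"ALERT: certificate expires in {remaining:.1f} days and has not been renewed")
    else:
        print(f"OK: {remaining:.1f} days of validity left")
```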
176
+
177
+ So what I would like to do is first of all capture this problem in a way that we can refer back to it, and also fix it in a way that we also can refer back to it, like how did we fix it; what went into it, what was added, so that this doesn't happen again. And adding that alert was one of the actions that I took even before we created an incident. So that's one of the top things on my list. How does that sound to you both?
178
+
179
+ **Adam Stacoviak:** Was it called an access token?
180
+
181
+ **Gerhard Lazu:** Yes.
182
+
183
+ **Adam Stacoviak:** \[23:56\] So on June 19th -- they have an activity log. This is actually kind of important for -- I think this is super-important for services that have multiple people doing things that are important, that could break things, essentially... Have an activity log of things that happened - deletions, additions... And DNSimple does have that, except for to have more than 30 days of activity, you have to upgrade to a pro plan that costs $300/year. It's kind of pricey, in my opinion.
184
+
185
+ **Jerod Santo:** So we don't know what happened.
186
+
187
+ **Adam Stacoviak:** Well, we do, for the past 30 days. So on June 19th, because I'm the only user, it says "Adam deleted it." So I guess I "deleted" it, but it was not me.
188
+
189
+ **Jerod Santo:** Hah, so it was you.
190
+
191
+ **Gerhard Lazu:** No, that was actually me... But the token which I deleted was the one for the old infrastructure. There were two tokens.
192
+
193
+ **Adam Stacoviak:** I see, okay. So this happened -- do you know when, roughly? Can you assume at least?
194
+
195
+ **Gerhard Lazu:** June 19th sounds right. But a single token was deleted and we had two.
196
+
197
+ **Adam Stacoviak:** Yeah, okay. So it shows a single token being deleted June 19th, at an abnormal time for me to do any deletions. I think Jerod as well.
198
+
199
+ **Gerhard Lazu:** Yeah, that was me.
200
+
201
+ **Adam Stacoviak:** If this is Central timezone, because that's where I'm at \[unintelligible 00:25:04.28\] it's 7:16 in the morning. I'm definitely not deleting things at that time besides Z's in my brain. I don't get up that early. That's how we know.
202
+
203
+ **Jerod Santo:** Maybe you accidentally deleted two. It was a two for one deal that morning.
204
+
205
+ **Adam Stacoviak:** It doesn't show on the activity log though, so that's the good thing.
206
+
207
+ **Gerhard Lazu:** Right.
208
+
209
+ **Adam Stacoviak:** I would maybe push back on DNSimple support and they can dig into it. And then 1) get a true git-blame on this, and then 2) see if it was maybe just an error on the platform side.
210
+
211
+ **Jerod Santo:** Yeah, I don't think I've done anything with tokens aside from maybe one of our GitHub access tokens was expiring, or they've made a new one and I think I rotated one token... But nothing to do with DNS. Not in the last month, or six months.
212
+
213
+ **Adam Stacoviak:** It'd be cool if certain things like this required consensus. You can delete it if Jerod also deletes it.
214
+
215
+ **Jerod Santo:** It's like the nuclear codes. You've gotta have two hands on the button.
216
+
217
+ **Adam Stacoviak:** Yeah. You don't have to do it at the same time. You can do it async by saying "Okay, Gerhard, at his 7 in the morning timeframe (because he's in London) deleted it." You get an email, Jerod, saying "Gerhard deleted this. Do you wanna also have consensus on this deletion?" and you have to go and also delete it too, where it's like two people coming together to agree on the deletion of an access token...
218
+
219
+ **Jerod Santo:** It seems awfully draconian for a DNS access token. That's why I think the nuclear codes makes sense... You know, like, you're about to send a nuclear bomb, you've gotta have consent. But I think an access log is good enough.
220
+
221
+ **Gerhard Lazu:** I think it would help in the DNSimple log to see which token has been deleted, like the name of the token...
222
+
223
+ **Adam Stacoviak:** It doesn't say that. It's not very thorough. It just says "Access token delete."
224
+
225
+ **Gerhard Lazu:** Yeah. That would have helped.
226
+
227
+ **Adam Stacoviak:** That's the event name. So some of the items in DNS have text associated with them, but this does not. It doesn't showcase the token, the first six, or anything like that. It's just simply the event name, in this case. Everything else is pretty thorough.
228
+
229
+ **Jerod Santo:** Well, I think we're rat-holing this particular incident, but the bigger picture thing in addition to this "We've gotta figure out what happened here and fix it" is how do we handle incidents in a better way? So I think this is a place where I would love to have listeners let us know how you handle incidents, what are some good options... I know Gerhard you've been looking at a few platforms and solutions, certainly there's open source things... There's lots of ways that you can go about this. You could use existing tools, you could set up kind of a notice for this particular thing... But that's not what you're talking about. Like, how do we track and manage incidents in like a historical, communicable way?
230
+
231
+ **Gerhard Lazu:** Exactly.
232
+
233
+ **Jerod Santo:** \[27:51\] I don't know. We don't know the best way to do this, or a good way... So what's a good way for listeners, if they have a great incident solution, or maybe they have one that they use at work but they hate it, like "Avoid this one"? Is it Slack, is it email, is it tweets? What's the best way for listeners to -- feed back comments on the episode page perhaps on the website?
234
+
235
+ **Gerhard Lazu:** Yeah, that is an excellent point. Yeah, so however you wanna communicate - via Slack, or even via Twitter, we are everywhere these days. Everywhere that works and it's still available...
236
+
237
+ **Jerod Santo:** \[laughs\] Everywhere where you can get an emoji rendered, we're there...
238
+
239
+ **Gerhard Lazu:** Exactly. There are a couple of things here. For example, one thing which this reminded me of is that we do not declare these - and this is a bit of a chicken and egg situation - we should absolutely manage the tokens on DNSimple's side with something like, for example, Kubernetes (why not?), which continuously declares those. Now, obviously you still need a token that creates tokens. But if you have that -- we should have a token that it needs \[unintelligible 00:28:55.19\]
240
+
241
+ I think that's a bit interesting, because then what do you do from the perspective of security? It can't give itself access to everything, and then delete all the DNS records. That's not good. So some thought needs to go there... But the idea being that even like with Fastly, for example, when we integrate, we still have manual things, manual integrations; we don't declare the configuration. That's something which I would like us to do more of... And maybe also have some checks that would -- I mean, if you don't have DNS or something isn't right, like in this case you don't have access to DNS, that's a problem, and you would like to know about it as soon as possible. So the token being deleted on the 19th, and the failure only happening almost two weeks later, end of June - that is not great, because it's so far removed from the moment that you've done something, which... Maybe it was me. Maybe I deleted the wrong token by mistake. But I remember there were two... Who knows. Maybe I've seen two tokens when there was just one. It's possible. And then when that happened, it makes sense that two weeks later this thing starts failing... But because it took so long for this failure to start happening, it was really difficult to reconcile the two and to link the two together.
242
+
243
+ **Jerod Santo:** Yeah, so where did those checks live in the system? Where would they live? Not in Grafana, I wouldn't think...
244
+
245
+ **Gerhard Lazu:** I don't know. I think it depends. In Kubernetes you declare the state of your system. Not just the state of your system, but the state of the systems that the system integrates with. So you can have providers... I know that Crossplane has that concept of providers, so it integrates with AWS, GCP... I don't think that it has a DNSimple provider, but we should have something that periodically makes sure that everything is the way it should be, and Kubernetes has those reconciling loops. It's central to how it operates. So to me, that sounds like a good place.
246
+
247
+ From a monitoring perspective, you can check that things are the way you expect them to be, but that is more reactive - when there's a problem, you need to work backwards from it, "Where is the problem?" Whereas if you try to continuously create things - if it doesn't exist, it will be recreated; if it exists, there's nothing to do - that's more proactive, so I quite like that model.
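
As a rough illustration of that reconcile-loop idea - everything below is hypothetical; the token names and both helper functions are stand-ins for whatever provider API a real controller would call, not actual DNSimple endpoints.

```python
# A minimal sketch of a reconcile loop: declare the desired state, read the actual
# state, and recreate anything that is missing. Token names and both helpers are
# illustrative stand-ins, not real DNSimple API calls.
import time

DESIRED_TOKENS = {"cert-manager-dns01", "changelog-deploys"}  # hypothetical names


def fetch_actual_tokens() -> set:
    # A real controller would query the provider's API here.
    return {"changelog-deploys"}


def create_token(name: str) -> None:
    # A real controller would call the provider's API to recreate the credential.
    print(f"recreating missing token: {name}")


def reconcile() -> None:
    for name in sorted(DESIRED_TOKENS - fetch_actual_tokens()):
        create_token(name)


if __name__ == "__main__":
    for _ in range(3):   # Kubernetes controllers run this loop continuously
        reconcile()
        time.sleep(1)    # a real loop would use a longer interval or watch events
```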
248
+
249
+ **Adam Stacoviak:** What does incident management give a team though? Because I think this came about whenever you said "Well, hey--", Fastly was down, we didn't expect it to be down; a majority, if not all the responsibility tends to fall on your shoulders for resuming uptime, which is incident management, like a disruption in a service that requires an emergency response; you're our first and only responder. I suppose Jerod and I can step in in most cases, but you really hold the majority of the knowledge... Does incident management give you the ability to share that load with other people, that may not have to know everything you do, but can step in? What does incident management, I guess, break down to be? Is it simply monitoring and awareness, is it action-taking? Are there multiple facets of incident management?
250
+
251
+ **Gerhard Lazu:** \[31:57\] It has a couple of elements... But the element that I'm thinking about based on your initial question was having the concept of a runbook. I know I have a problem - great, I'm going to communicate my problem. So what do I do? And you codify those steps in something which is called a runbook. For example, if Jerod had to roll the DNS, what would he do? How would he approach it? It doesn't have to be me, but the problem, as you very well spotted, is that I am the one who has the most context in this area, and it would take Jerod longer to do the same steps. In the Makefiles - plural - we have how-to's. So how to rotate a credential? This is a step by step process, like seven steps or four steps, however many it is now, how to basically rotate a specific credential. So we need something similar to that, but codified in a way that first of all - there's an incident; these people need to know about it, maybe including our listeners... Like, "Hey, we are down. We know we're down, we're working on it. We'll be back shortly." And then one of us, whoever is around - because maybe one of us is on holiday. And if I'm on holiday, what do you do? What are the steps that you follow to restore things? And as automated as things are, there's still elements... I mean, ROI. Not everything is automated, because it's not worth automating everything, or it's impossible.
252
+
253
+ **Jerod Santo:** Right.
254
+
255
+ **Gerhard Lazu:** So what are the steps that Jerod or even you can follow to restore things? Or anyone, for that matter, that has access to things. Anyone trusted.
256
+
257
+ **Adam Stacoviak:** Yeah.
258
+
259
+ **Gerhard Lazu:** And if it's that simple, then maybe we can automate that. Some things aren't worth automating, because if you run it once every five years - well, why automate it? The ROI just doesn't make sense.
260
+
261
+ **Adam Stacoviak:** Yeah. It seems like it's pretty complex to define for a small team. Maybe easier for larger teams, but more challenging for smaller teams.
262
+
263
+ **Gerhard Lazu:** But I know that there are incident management platforms out there... Can we name names? I have to... So one of them is FireHydrant. The other one is Incident.io. I looked at both, and I know that FireHydrant for a fact has the concept of runbooks. So we could codify these steps in a runbook. I don't know about Incident.io, but if they don't have one, or if they don't have this feature, I think they should, because it makes a lot of sense. If we had this feature, we wouldn't need to basically find a way to do this or work around the system. The system exists and facilitates these types of approaches, which make sense across the industry, not just for us. So even though we're a small team, we still need to communicate these sorts of things somehow, and in a way that makes sense. And if we use a tool --
264
+
265
+ **Adam Stacoviak:** What's an example of a runbook then? Let's say for our case, the Fastly outage, which is a once-in-five -- they're not gonna do that in the next five years. I'm knocking on wood over here... It would be smart of them--
266
+
267
+ **Gerhard Lazu:** Remember my certainty? 100% uptime?
268
+
269
+ **Adam Stacoviak:** Next week Fastly goes down.
270
+
271
+ **Gerhard Lazu:** Exactly. Don't jinx it. \[laughs\]
272
+
273
+ **Adam Stacoviak:** Well, given their responsibility and size, they're probably gonna be less likely to do that again anytime soon is kind of what I mean by that. But even that -- would you codify in a runbook a Fastly outage?
274
+
275
+ **Gerhard Lazu:** I think I would--
276
+
277
+ **Adam Stacoviak:** Now you might, because you have this hindsight of recent events... But prior to this you probably wouldn't. So what's a more common runbook for a team like us?
278
+
279
+ **Gerhard Lazu:** I think I would codify the incidents that happened. For example, if we had an incident management platform, when the Fastly incident happened, I would have used whatever the platform or whatever this tool offered me to manage that incident. And as an outcome of managing the incident, we would have had this runbook. So I wouldn't preemptively add this.
280
+
281
+ **Adam Stacoviak:** I see. So it's retrospectively.
282
+
283
+ **Gerhard Lazu:** Exactly.
284
+
285
+ **Adam Stacoviak:** An incident happens, it doesn't happen again.
286
+
287
+ **Gerhard Lazu:** Well, it may...
288
+
289
+ **Adam Stacoviak:** Gotcha.
290
+
291
+ **Gerhard Lazu:** \[35:52\] Yeah, "This is what I've done to fix it." And anyone can follow those steps. And maybe if something for example happens a couple of times, then we create a runbook. But at least Jerod can see "Oh, this happened six months ago. This is what Gerhard did. Maybe I should do the same." I don't know. For example, in the case of this DNS token, what are the steps which I'm going to take to fix it? So capturing those steps somewhere... And it's a simple form; literally, as I do it, "I do this, and I do that." And that is stored somewhere and can be retrieved at a later date.
292
+
293
+ **Adam Stacoviak:** I guess then the question is when the incident happens again, how does somebody know where to go look for these runbooks? I suppose if you're using one of these services it gets pretty easy, because it's like "Hey, go to this service", and there's a runbooks dashboard, so to speak.
294
+
295
+ **Gerhard Lazu:** Yeah. I think it's just specific to the service.
296
+
297
+ **Adam Stacoviak:** Yeah. And you go there and you're like "Oh man, there's never been a runbook for this. I'm screwed. Call Gerhard" or "Call so-and-so", you know?
298
+
299
+ **Gerhard Lazu:** Yeah, I suppose... But I think if you operate a platform long enough or a system long enough, you see many, many things, and it just progresses to the point that -- let's imagine that we did have multi-cloud. Let's imagine that Linode was completely down, and the app was running elsewhere. We wouldn't be down. Okay, we would still be in a degraded state, but things would still be working. If we had multi-CDN, Fastly is down - well, Cloudflare is up. It rarely happens that both are down at the same time. So then it's degraded, but it still works, so it's not completely down.
300
+
301
+ In this case, for example, we didn't have this, but right now if the backend goes away, if everything disappears, we can recreate everything within half an hour. Now, how would you do that? It's simple for me, but if I had to do it maybe once and codify it, which is actually what I have in mind for the 2022 setup, I will approach it as if we've lost 2021 and I have to recreate it. So what are the steps that I will perform to recreate it? And I'll go through them, I'll capture them...
302
+
303
+ **Adam Stacoviak:** Yeah, because 2021 is kind of a stand-in; you're codifying the current golden standard.
304
+
305
+ **Gerhard Lazu:** The steps that I would take, yes, to set up a new one.
306
+
307
+ **Adam Stacoviak:** Yeah, exactly, to get to zero, where you're at right now; this is ground zero.
308
+
309
+ **Gerhard Lazu:** Yeah. In 2021, what I set up was fairly easy to stand up, because I changed these things inside the setup, so that for example right now the first step which it does - it downloads from backup everything it doesn't have. So if you're standing this up on a fresh setup, it obviously has no assets, no database, so the first thing which it does - it will pull down the backup. From the backup, it will pull everything down. And that's how we test our backups.
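
A rough sketch of that boot order - the data directory and the restore step below are made up for illustration, but the idea is that a fresh environment restores from backup before it starts serving, which is exactly what exercises the backups.

```python
# Illustrative only: on a fresh setup, restore from backup before starting the app.
# The path and the "restore" command are placeholders, not the real setup.
import pathlib
import subprocess

DATA_DIR = pathlib.Path("/var/lib/changelog/uploads")  # hypothetical location


def restore_from_backup() -> None:
    # Stand-in for whatever actually pulls the backup down (object storage sync, pg_restore, ...).
    subprocess.run(["echo", "restoring latest backup into", str(DATA_DIR)], check=True)


def boot() -> None:
    if not DATA_DIR.exists() or not any(DATA_DIR.iterdir()):
        restore_from_backup()   # fresh environment: pull down everything we don't have
    print("starting the app")   # only once the data is in place


if __name__ == "__main__":
    boot()
```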
310
+
311
+ **Adam Stacoviak:** Which is smart, because the point of backup is restoration, not storage...
312
+
313
+ **Gerhard Lazu:** Exactly, exactly. We test it at least once a year now.
314
+
315
+ **Adam Stacoviak:** You know, what's important to mention here is that this may not be what every team should do. In many cases, this is exploration on our part. This is not so much what every team out there should do in terms of redundancy. We're doing it in pursuit of 1) knowledge, and 2) content to share. So we may go forge new ground on the listeners' behalf, and hey, that's why you're listening to this show... And if you're not subscribed, you should subscribe. But we're doing this not so much because 1) our service is so important that it must be up at all times, it's because the pursuit of uptime is kind of fun, and we're doing it as content and knowledge. So that's I think kind of cool, not so much that everyone should eke out every ounce of possible uptime.
316
+
317
+ **Gerhard Lazu:** Definitely.
318
+
319
+ **Adam Stacoviak:** In some cases it's probably not wise, because you have product to focus on, or other things. Maybe you have a dedicated team of SRE's, and in that case, their sole job is literally uptime, and that totally makes sense... But for us, we're a small team, so maybe our seemingly unwavering focus on uptime is not because we're so important, but because it's fun for content and knowledge to share.
320
+
321
+ **Gerhard Lazu:** And it makes us think about things in a different way. So if you try something out, why are you trying something out? Well, we have a certain problem to address, and it may be a fun one, but we will learn. So it's this curiosity, this building curiosity. How does Incident.io work? How does FireHydrant work? What is different? What about Render? What about Fly? They look all cool... Let's try it out. What would it mean to run Changelog on these different platforms?
322
+
323
+ \[40:08\] Some are hard, some are dead simple, and sometimes you may even be surprised and say "You know what - I would not have guessed that this platform is so much better, so why are we complicating things using this other thing?" But you don't know until you try it. And you can't be trying these things all the time, so you need those innovators that are out there. And if for example we have something stable that we depend on, something that serves us well, we can try new things out in a way that doesn't disrupt us completely. And I think we have a very good setup to do those things.
324
+
325
+ **Adam Stacoviak:** This reminds me of Sesame Street. Have either of you watched Sesame Street?
326
+
327
+ **Gerhard Lazu:** Not that I remember...
328
+
329
+ **Jerod Santo:** No.
330
+
331
+ **Adam Stacoviak:** Of course. Everybody knows Sesame Street. But my son is a year and a half old, so he watches Sesame Street... But something that Hailee Steinfeld sings on the show is "I wonder... What if... Let's try..." That's kind of what we're doing here, "I wonder how this would work out if we did this. What if we did that? Let's try."
332
+
333
+ **Gerhard Lazu:** I think that's how all great ideas start. The majority of the ideas may fail, but how are you going to find the truly remarkable ideas that work well in practice? Because on paper everything is amazing, everything is new, everything is shiny. How well does it work in practice? And that's where we come in. Because if it works for a simple app that we have, which serves a lot of traffic, it will most probably work for you, too. Because I think the majority of listeners -- I don't think they are the Googles, or the Amazons. Maybe you work for those companies, but let's be honest, it's everybody part of that company that contributes to some massive systems, that very few have.
334
+
335
+ **Adam Stacoviak:** It's all about gleaning, really. We're doing some of this stuff, and the entire solution, the way we do it may not be pertinent to the listener in every single case, but it's about gleaning what makes sense for your case. The classic "It depends" comes into play.
336
+
337
+ **Gerhard Lazu:** Oh, yes...
338
+
339
+ **Adam Stacoviak:** This makes sense to do, in some cases. Does it work for me? It depends. Maybe. Maybe not.
340
+
341
+ **Break**: \[42:02\]
342
+
343
+ **Gerhard Lazu:** So I would like us to talk about the specifics, three areas of improvement for the changelog.com setup. Not for the whole year 2022, but just like over the next couple of months. Top of my list is incident management, so have some sort of incident management... But that seems like an on-the-side sort of thing, and we've already discussed that at some length.
344
+
345
+ The next thing is I would like to integrate Fastly logging. This is the origin, the backend logging with Grafana Cloud. The reason why I think we need to have that is to understand how our origin, in this case Linode (LKE) where changelog.com runs - how does the origin behave from a Fastly perspective, from a CDN perspective. Because that's something that we have no visibility in.
346
+
347
+ \[43:57\] So what I mean by that is like when a request hits Fastly, that request has to be proxied to a node balancer running in Linode, and that has to be proxied to Ingress NGINX running in Kubernetes, and that has to be proxied to eventually our instance of Changelog. How does that work? How does that interaction work? How many requests do we get? How many fail? When are they slow? Stuff like that. So have some SLO's uptime as well, but also how many requests fail, and what is the 99th percentile for every single request? That's what I would like to have.
348
+
349
+ **Jerod Santo:** How hard is that to set up?
350
+
351
+ **Gerhard Lazu:** Not too hard. The only problematic area is that Fastly doesn't support sending logs directly to Grafana Cloud. So I looked into this a couple of months ago, and the problem is around authenticating the HTTPS origin where the logs will be sent... Because it needs to push logs, HTTP requests. So how do we verify that we own the HTTPS origin, which is Grafana Cloud? Well, we don't. So we don't want to DDOS any random HTTPS endpoint, because that's what we would do if we were to set this up.
352
+
353
+ So we need to set up - and again, this is the kind of support you get with Fastly; what they recommend is that you set up a proxy... Imagine you have NGINX: it receives those requests, which are the Fastly logs (it will be HTTPS), and then it proxies them to Grafana Cloud. So that would work.
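
To make the shape of that workaround concrete, here is a toy version of such a proxy - it accepts log POSTs and forwards them to Grafana Cloud Loki's push API. The endpoint, credentials and labels are placeholders, and the real thing would terminate TLS behind Ingress NGINX rather than run a single-threaded test server.

```python
# A toy log-forwarding proxy: accept HTTP POSTs (e.g. from Fastly log streaming)
# and push each line to Grafana Cloud Loki. Credentials, labels and the Loki URL
# are placeholders for whatever the real account uses.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

LOKI_URL = "https://logs-prod-us-central1.grafana.net/loki/api/v1/push"  # example endpoint
LOKI_USER = "123456"       # placeholder Grafana Cloud Loki user ID
LOKI_API_KEY = "REDACTED"  # placeholder API key


class LogForwarder(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        lines = [line for line in body.decode("utf-8", "replace").splitlines() if line]
        payload = {
            "streams": [{
                "stream": {"source": "fastly", "service": "changelog"},
                "values": [[str(time.time_ns()), line] for line in lines],
            }]
        }
        resp = requests.post(LOKI_URL, json=payload, auth=(LOKI_USER, LOKI_API_KEY), timeout=10)
        self.send_response(resp.status_code)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LogForwarder).serve_forever()
```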
354
+
355
+ **Jerod Santo:** Where would we put our proxy?
356
+
357
+ **Gerhard Lazu:** Well, we would use the Ingress NGINX on Kubernetes, the one that serves all the traffic, all the Changelog traffic.
358
+
359
+ **Jerod Santo:** Well, couldn't we DDOS ourselves then?
360
+
361
+ **Gerhard Lazu:** We could, if Fastly sends a large amount of logs... Yes, we could. Now, would we set another--
362
+
363
+ **Jerod Santo:** This is not a DDOS if it's ourselves. It's just a regular DOS.
364
+
365
+ **Gerhard Lazu:** Right. \[laughs\]
366
+
367
+ **Jerod Santo:** It's not gonna be distributed, it's just us. \[laughs\]
368
+
369
+ **Gerhard Lazu:** Well, it will come from all Fastly endpoints, I imagine...
370
+
371
+ **Jerod Santo:** That's true, it could come from lots of different Fastly points of presence...
372
+
373
+ **Gerhard Lazu:** Yeah. We could run it elsewhere, I suppose, but I like things being self-contained. I like things being declared in a single place. So to me, it makes most sense to use the same setup. I mean, it is in a way a Fastly limitation, and actually specifically Fastly and Grafana Cloud, the lack of integration that we have to work around...
374
+
375
+ **Jerod Santo:** Right.
376
+
377
+ **Gerhard Lazu:** But speaking of that, I know that Honeycomb supports Fastly logging directly... And one of the examples that Honeycomb has is the RubyGems.org traffic, which is also proxied by Fastly. So in their "Try Honeycomb out" you can play with the dataset which is the RubyGems.org traffic. So I know that that integration works out of the box. And that's why maybe that would be an easier place to start...
378
+
379
+ **Jerod Santo:** Just a place to start, yeah.
380
+
381
+ **Gerhard Lazu:** Yeah. But then we're using Grafana Cloud for everything else, so...
382
+
383
+ **Jerod Santo:** Right.
384
+
385
+ **Gerhard Lazu:** ...that's an interesting moment... Like, do we start moving stuff across to Honeycomb, or do we have two systems?
386
+
387
+ **Jerod Santo:** Right. That's like a little break in the dam, like a little water just starts to pour out... And it's not a big deal right now, on Grafana Cloud, right? Like, "Well, I've got just this little thing over here on Honeycomb..."
388
+
389
+ **Gerhard Lazu:** 99%, yeah.
390
+
391
+ **Jerod Santo:** It turns out pretty nice over there... And then it starts to crack a little bit, and more water starts to... And all of a sudden it just bursts, and Grafana loses a customer. That stuff happens. So we could also parallelize this, and we could simultaneously try to get Fastly and Grafana sitting in a tree, k-i-s-s-i-n-g... But their integrations -- because that would be great, right?
392
+
393
+ **Gerhard Lazu:** Yeah, that would be great. That is actually a request from us.
394
+
395
+ **Jerod Santo:** And that would probably be in the benefit of both Fastly and Grafana. That would be in both entities -- to their benefit. So maybe it's already in the works, who knows.
396
+
397
+ **Adam Stacoviak:** I would guess that it is.
398
+
399
+ **Gerhard Lazu:** Well, I would like to know, because then we could be not doing a bunch of work...
400
+
401
+ **Jerod Santo:** Then we could procrastinate till it's there...
402
+
403
+ **Adam Stacoviak:** Exactly, yeah.
404
+
405
+ **Gerhard Lazu:** But it's stuff like this, right?
406
+
407
+ **Adam Stacoviak:** \[47:54\] Well, let's put an email feeler out. We've got some people we can talk to to know for sure... And then if it is in the works, and it's maybe on the back-burner, we can put some fire under the burner, because we need it, too.
408
+
409
+ **Gerhard Lazu:** Well, then we've hit another interesting thing, in that I really wanna try Honeycomb out. I've signed up, and I wanna start sending some events their way and just start using Honeycomb to see what insights we can derive from the things that we do.
410
+
411
+ One of the things that I really wanna track with Honeycomb - and I wasn't expecting to discuss this, but it seems to be related, so why not... I wanna visualize how long it takes from git push to deploy. Because there are many things that happen in that pipeline, and from the past episodes, this is really important. This is something that teams are either happy or unhappy about. The quicker you can see your code out in production, the happier you will be. Does it work? Well, you wanna get it out there quickly. Right now it can take anywhere between 10 and 17-18 minutes. Even 20. Because it depends on so many parts. Circle CI, sometimes the jobs are queued. The backups that run - well, sometimes they can run 10 seconds more. The caches that we hit in certain parts, like images being pulled, whatever - they can be slower. Or they can be cold, and then have to be warmed up.
412
+
413
+ So we don't really know, first of all -- I mean, in my head I know what they are, all the steps, but you and Jerod don't know. What does the git push have to go through before it goes out into prod, and what are all the things that may go wrong? And then which is the area or which is the step which takes the longest amount of time and also is the most variable? Because that's how we focus on reducing this time to prod. And Honeycomb - they're championing this, left, right and center.
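
A first pass at that can be as small as one Honeycomb event per deploy - in the sketch below the dataset name, the API key and the step durations are all placeholders that a CI job would fill in from its own timings; a fuller setup would emit real traces instead.

```python
# A sketch of instrumenting "git push to production" time: send one event per deploy
# to Honeycomb's events API, with the duration of each pipeline step as a field.
# Dataset name, API key and the numbers are placeholders.
import os

import requests

HONEYCOMB_API_KEY = os.environ.get("HONEYCOMB_API_KEY", "REDACTED")
DATASET = "changelog-deploys"  # hypothetical dataset name

event = {
    "git_sha": "abc1234",         # a CI job would fill these in with real values
    "ci_queue_seconds": 240,
    "test_seconds": 180,
    "image_build_seconds": 300,
    "deploy_seconds": 120,
    "push_to_prod_seconds": 840,  # the number we actually want to watch shrink
}

resp = requests.post(
    f"https://api.honeycomb.io/1/events/{DATASET}",
    headers={"X-Honeycomb-Team": HONEYCOMB_API_KEY},
    json=event,
    timeout=10,
)
resp.raise_for_status()
```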
414
+
415
+ Charity Majors - I don't know which episode, but she will be on the show very soon. 15 minutes or bust. That's what it means. Your code is either in production in 15 minutes, or you're bust.
416
+
417
+ **Jerod Santo:** There was an unpopular opinion shared on Go Time. I can't remember who shared it, but he said if it's longer than 10 minutes, you're bust.
418
+
419
+ **Gerhard Lazu:** There you go.
420
+
421
+ **Jerod Santo:** So that 15 minutes is gonna be moving, I think...
422
+
423
+ **Gerhard Lazu:** It will be moving, exactly.
424
+
425
+ **Jerod Santo:** As the industry pushes forward, it's gonna keep going lower and lower, right?
426
+
427
+ **Gerhard Lazu:** Exactly.
428
+
429
+ **Adam Stacoviak:** Well, what is it that -- does every git push, which is from local to presumably GitHub in our case (it could be another code host), is there a way to scrutinize, like "Oh, this is just \[unintelligible 00:50:24.19\] and CSS changing to make that deployment faster"? You know, like it was not involving images or a bunch of other stuff... Like, why does a deployment of let's say -- let's just say it's a typo change in HTML, and a dark style to the page, for some reason. Whatever. If it's just simply CSS, or an EEx file change in our case, could that be faster? Is there a way to have a smarter pipeline? These are literally just an HTML and CSS update. Of course, you're gonna wanna minimize or minify that CSS that Sass produces, in our case etc. But 15 minutes is way long for something like that.
430
+
431
+ **Gerhard Lazu:** You're right. So the steps that we go through - they're always the same. We could make the pipeline smarter, in that for example if the code doesn't change, you don't need to run the tests. The tests themselves, they don't take long to run. But to run the tests, you need to get the dependencies. And we don't distinguish -- like, if the CSS changed, you know what, you don't need to get dependencies. So we don't distinguish between the type of push that it was, because then you start putting smarts -- I mean, you have to declare that somehow; you have to define that logic somewhere. And then maybe that logic becomes, first of all, difficult to declare, brittle to change... What happens if you add another path? What happens if, for example, you've changed a Node.js dependency which right now we use, and then we remove Node.js and we compile assets differently? And then by the way, now you need to watch that, because the paths, the CSS that gets generated actually depends on some Elixir dependencies, I don't know. I think esbuild, we were looking at that? Or thinking...?
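
The kind of path-based rule being described could be sketched like this - the repository paths are assumptions, and it is exactly the sort of logic that gets brittle as the asset pipeline changes.

```python
# The kind of logic you would have to declare somewhere to skip pipeline steps
# when only assets changed. The path prefixes are assumptions about the repo layout.
import subprocess

ASSET_ONLY_PREFIXES = ("assets/", "lib/changelog_web/templates/")  # assumed layout


def changed_files(base: str = "origin/main") -> list:
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        check=True, capture_output=True, text=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def can_skip_tests() -> bool:
    files = changed_files()
    return bool(files) and all(f.startswith(ASSET_ONLY_PREFIXES) for f in files)


if __name__ == "__main__":
    print("skip tests and dependency install" if can_skip_tests() else "run the full pipeline")
```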
432
+
433
+ **Jerod Santo:** \[52:09\] You effectively introduce a big cache invalidation problem.
434
+
435
+ **Gerhard Lazu:** Yes.
436
+
437
+ **Jerod Santo:** It's what you do. Cache invalidation is one of the hard things in computer science. So it's slow, but it's simple. It's like, "We'll just rebuild it every time." It's like, why does React re-render the entire DOM every time? Well, it doesn't anymore, because that was too slow, so it does all this diffing, and stuff. But there's millions and millions of dollars in engineering spent in figuring out how React is going to smartly re-render the DOM, right? It's the same thing. It's like, there's so many little what-if's once you start only doing -- and this is why Gatsby spent years on their feature, partial builds. Because building \[unintelligible 00:52:53.19\] which is a static site generator - building a 10,000 page static site with Gatsby was slow; I just made up that number, 10,000, but... You know, 100,000, whatever the number is - it was slow, and so it's like "Well, couldn't we just only build the parts that changed?" Like what Adam just said. It's like "Yeah, we could." But then they go and spend two years building that feature, and VC money, and everything else to get that done. So it's like a fractal of complexity.
438
+
439
+ **Gerhard Lazu:** Yeah.
440
+
441
+ **Jerod Santo:** What I'm saying - there's small things you can do. You can get the 80% thing, and it works mostly; it doesn't squeeze out every performance, but it's a big -- so there's probably some low-hanging fruit we could do... But it's surprisingly complicated to do that kind of stuff.
442
+
443
+ **Gerhard Lazu:** And the first step really is trying to understand, these 15 minutes, first of all, how much they vary... Because as I said, sometimes they can take 20 minutes. Why does it vary by that much? Maybe, for example, it's test jobs being queued up in CircleCI. A lot of the time that happens, and they are queued up for maybe five minutes. So maybe that is the biggest portion of those 20 minutes or 15 minutes, and that's what we should optimize first.
444
+
445
+ **Jerod Santo:** Yeah, that's why I said there's probably some low-hanging fruit, and we could probably do a little bit of recon and knock that down quite a bit.
446
+
447
+ **Gerhard Lazu:** And that's exactly why I'm thinking, use Honeycomb just to try and visualize those steps, what they are, how they work, and stuff like that...
448
+
449
+ **Jerod Santo:** \[unintelligible 00:54:16.02\]
450
+
451
+ **Gerhard Lazu:** Exactly.
452
+
453
+ **Jerod Santo:** Good idea.
454
+
455
+ **Gerhard Lazu:** Second thing is -- and I think this can either be a managed PostgreSQL database, so that either CockroachDB or anyone that manages PostgreSQL, like one of our partners, one of our sponsors, I would like us to offload that problem... And we just get the metrics out of it, understand how well it behaves, what can we optimize, and stuff like that, in our queries. But otherwise, I don't think we should continue hosting PostgreSQL. I mean, we have a single instance, it's simple, really simple; it backups... It's not different than SQLite, for example, the way we use it right now... But it works. We didn't have any problems since we switched from a clustered PostgreSQL to a single-node PostgreSQL. Not one. We used to have countless problems before when we had the cluster. So it's hard is what I'm saying; what we have now works, but what if we remove the problem altogether?
456
+
457
+ **Adam Stacoviak:** I remember slacking "How can our Postgres be out of memory?" It's like \[unintelligible 00:55:16.10\]
458
+
459
+ **Gerhard Lazu:** Yeah, the replication got stuck, and it was broken, it just wouldn't resume, and the disk would fill up crazy. Crazy, crazy.
460
+
461
+ **Adam Stacoviak:** And that's the reason you would wanna use a \[unintelligible 00:55:31.02\] because they handle a lot of that stuff for you.
462
+
463
+ **Gerhard Lazu:** Exactly. And if it can be distributed, that means that we can run multiple instances of our app, were it not for the next point, which is an S3 object store for all the media assets, instead of local disk. Right now when we restore from backups, that's actually what takes the most time, because we have like 90 gigs at this point... So restoring that will take some number of minutes. I think moving to an S3 object store and a managed PostgreSQL, which we don't have -- we can have multiple instances of Changelog. We can run them in multiple clouds... I mean, it opens up so much possibility if we did that.
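
The mechanical part of that move is small - below is a sketch of pushing one media file to an S3-compatible bucket, with the bucket, key and endpoint as placeholders (Linode Object Storage speaks the same API); the harder part is the upload and post-processing pipeline discussed next.

```python
# A sketch of pushing one media asset to an S3-compatible object store.
# Bucket, key and endpoint are placeholders; the real work is in the
# upload/post-processing pipeline around calls like this.
import boto3

s3 = boto3.client("s3", endpoint_url="https://us-east-1.linodeobjects.com")  # placeholder endpoint

with open("ship-it-21.mp3", "rb") as f:
    s3.upload_fileobj(
        f,
        Bucket="changelog-assets",             # placeholder bucket name
        Key="uploads/shipit/21/ship-it-21.mp3",
        ExtraArgs={"ContentType": "audio/mpeg", "ACL": "public-read"},
    )
```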
464
+
465
+ **Jerod Santo:** \[56:09\] Putting all of our assets into S3 would be like "Welcome to the 2000's, guys."
466
+
467
+ **Gerhard Lazu:** I would be, right? \[laughter\] This is exactly right, yeah.
468
+
469
+ **Jerod Santo:** "You've now left the '90s..." Maybe I should explain why we're using local storage. Some of it is actually just technical debt. This was a decision I made when building the platform back in 2015, around how we handle uploads. Not image uploads, but mp3 uploads... Which is one of the major things that we upload and process. And these mp3's are anywhere from 30 to 100 megabytes. And once we have them, we also want to do post-processing, like post-upload processing on the mp3's. Because we go about rewriting ID3 tags, and doing fancy stuff based on the information in the CMS, not a pre-upload thing. So it's nice for putting out a lot of podcasts, because if Gerhard names the episode and then uploads the file to the episode, the mp3 itself is encoded with the episode's information without having to duplicate yourself.
470
+
471
+ So because of that reason, and because I was new to Elixir and I didn't know exactly the best way to do it in the cloud, I just said "Let's keep it simple. We're just gonna upload the files to the local disk." We had a big VPS with a big disk on it, and were like "Don't complicate things." So that's what we did.
472
+
473
+ **Adam Stacoviak:** \[unintelligible 00:57:34.07\]
474
+
475
+ **Jerod Santo:** And I knew full well -- I mean, even back then I had done client work where I would put their assets on S3. It was just because this mp3 thing and the ID3, we ran FFmpeg against it, and like "How do you do that in the cloud?" etc. So that was the initial decision-making, and we've been kind of bumping up against that ever since. Now, the technical debt part is that our assets uploader library in Elixir that I use is pretty much unmaintained at this point. It's a library called Arc, and in fact the last release cut was version 0.11, in October of 2018. So it hasn't changed, and it's a bit long in the tooth. Is that a saying, long in the tooth? I think it is. And I know it works pretty well, I've used it very successfully, so it served us very well, but there's technical debt there...
476
+
477
+ So as part of this "Well, let's put our assets on S3" thing, I'm like "Let's replace Arc when we're doing this, because I don't want to retro-fit Arc." It does support S3 uploads, but the way it goes about shelling out for the post-processing stuff, it's kind of wonky, and I don't totally trust it... So I would want to replace it as part of this move, and I haven't found that replacement. Or do I write one? etc.
478
+
479
+ So it's kind of like that, where it's slightly a bigger job than reconfiguring Arc just to push to S3, and doing one upload and being done with it. But it's definitely time... It's past time, so I'm with you. I think we'll do it.
480
+
481
+ **Gerhard Lazu:** Yeah, I think that makes a lot of sense. This just basically highlights the importance of discussion these improvements constantly. So stuff that keeps coming up, not once, but two years in a row - it's the stuff that really needs to change. Unless you do this constantly, you don't realize exactly what the top item is. Because some things just change. It stops being important. But the persistent items are the ones that I think will improve your quality of software, your quality of system, service, whatever you have running, and it's important to keep coming back to these things... Like, "Is it still important?" "It is." "Okay, so let's do it." "Well, you know what - let's just wait another cycle." And then eventually you just have to do it. So I think this is one of those cases, and we have time to think about this, and what else will it unlock? If we can do this, then we can do that. And "Is it worth it?" Maybe it is.
482
+
483
+ I think in this case, this S3 and the database, which is not managed, have the potential of unlocking so many things for us. Simplifying everything...
484
+
485
+ **Jerod Santo:** \[01:00:11.09\] Well, the app becomes effectively stateless, right?
486
+
487
+ **Gerhard Lazu:** It does. How amazing is that...?
488
+
489
+ **Jerod Santo:** And then you're basically in the cloud world, where you can just do whatever you want, and life is good.
490
+
491
+ **Gerhard Lazu:** That's exactly it.
492
+
493
+ **Jerod Santo:** And then face all new problems you didn't know existed. \[laughs\]
494
+
495
+ **Gerhard Lazu:** True.
496
+
497
+ **Adam Stacoviak:** Does this Arc thing also impact the chaptering stuff we've talked about in the past, Jerod? Wasn't that also part of it?
498
+
499
+ **Jerod Santo:** There is an angle into that... So for the listeners, the chaptering -- so the mp3 spec... Actually, it's the ID3 version 2 spec, which is part of the way mp3's work - it's all about the headers - supports chaptering. ID3 v1 does not. ID3 v1 is very simple. It's like a fixed frame kind of a thing... And ID3v2 is complicated more so, but it has a lot more features, one of which was chaptering, which - chapters are totally cool. You know, Ship It has roughly three segments - well, we could throw a chapter into the mp3 for each segment, and if you wanna skip to segment three \[unintelligible 01:01:02.16\] you could. We would love to build that into our platform, because then we can also represent those chapters on the web page. So you can have timestamps, and click around... Lots of cool stuff.
500
+
501
+ Unfortunately, there's not an ID3v2 Elixir library, and the way that we do our ID3 tags right now, by way of Arc, is with FFmpeg. So we shell out to FFmpeg, and we tell FFmpeg what to do to the mp3 file, and it does all the ID3 magic, and then we take it from there.
502
+
503
+ So the idea was - well, if we could not depend on FFmpeg, first of all, that simplifies our deploys, because we don't have a dependency that's like a Linux binary \[unintelligible 01:01:39.05\] But we'd be able to also do chaptering, so we'd get some features, as well as simplify the setup. And that is only partially to do with Arc. Really, that has to do with the lack of that ID3v2 library in Elixir. That functionality does not exist in native Elixir. If it did, I could plug that into Arc's pipeline and get that done currently. If FFmpeg supported the feature, we wouldn't need it anyway, we would just do it in an FFmpeg, but it does not seem like something that they're interested in... Because mp3 chaptering is not a new whiz-bang feature. It's been around for a decade, maybe more.
504
+
505
+ So the fact that it doesn't exist in FFmpeg, which - if you've ever seen, it's one of the most feature-full tools in the world. I mean, FFmpeg is an amazing piece of software, that does so many things... But it doesn't support mp3 chaptering. So it's kind of a slightly related, but different initiative that I've also never executed on.
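
For what it's worth, outside of Elixir this is well-trodden ground - Python's mutagen library, for example, can write the ID3v2 chapter frames (CHAP plus a CTOC table of contents) directly. A sketch, with made-up chapter titles and times:

```python
# A sketch of writing ID3v2 chapters (CHAP + CTOC frames) with the mutagen library,
# the piece that's missing as a native Elixir library. File name, titles and
# millisecond offsets are made up.
from mutagen.id3 import CHAP, CTOC, CTOCFlags, ID3, TIT2

chapters = [  # (id, title, start_ms, end_ms) - illustrative values
    ("chp0", "Intro", 0, 90_000),
    ("chp1", "Incident management", 90_000, 1_500_000),
    ("chp2", "2022 improvements", 1_500_000, 3_900_000),
]

tags = ID3("ship-it-21.mp3")  # assumes the file already has an ID3 header
for chap_id, title, start, end in chapters:
    tags.add(CHAP(element_id=chap_id, start_time=start, end_time=end,
                  sub_frames=[TIT2(text=[title])]))
tags.add(CTOC(element_id="toc", flags=CTOCFlags.TOP_LEVEL | CTOCFlags.ORDERED,
              child_element_ids=[c[0] for c in chapters],
              sub_frames=[TIT2(text=["Chapters"])]))
tags.save()
```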
506
+
507
+ **Adam Stacoviak:** I'm just wondering if we had to bite the Arc tail off, or whatever that might seem like, to also get a win, along with that... And the win we've wanted for years essentially was being able to bake in some sort of chaptering maker into the CMS backend, so that we can display those on page, as you said, or in clients that support it... Because that's a big win for listeners.
508
+
509
+ **Jerod Santo:** Totally.
510
+
511
+ **Adam Stacoviak:** And for obvious reasons, that Jerod has mentioned, that's why we haven't done it. It's not because we don't want to, it's because we haven't technically been able to. So if this made us bite that off, then it could provide some team motivation... Like, we get this feature too, and we get this stateless capability for the application. It just provides so much extra.
512
+
513
+ **Jerod Santo:** Yeah. And one way I thought that we could tackle that, which doesn't work with our current setup, is we could -- I mean, we render the mp3's or we mix down the mp3's locally on our machines, then we upload them to the site... We could pre-process the chapters locally. We could add the chapters locally to the mp3, then upload that file... And if we could just write something that reads ID3v2 - it doesn't have to write it - we could pull that out of the mp3 and display it on the website, and that would be like a pretty good compromise. However, when we do upload the file, when you pass it to FFmpeg and tell it to do the title, and the authors, and the dates and all that - well, it just completely boils away your local ID3 tags, so it overwrites it.
514
+
515
+ **Gerhard Lazu:** \[01:04:06.29\] As I was listening to you talking about this, one of the things that it reminded me of is the segments in YouTube video files, which sometimes I really like, because I can skip to specific topics really easily... So rather than having fixed beginning, middle and end, you can have topic by topic and you can skip to the specific parts. I would love to see that in Changelog audio files.
516
+
517
+ **Jerod Santo:** That's the feature right there. You use it however you wanna use it. The obvious way is like "Well, there's three segments. I'll put three chapters in." But if you were in charge of doing your own episode details and you could put the chapters in the way you would want to - yeah, you could make it really nice, just like that.
518
+
519
+ **Gerhard Lazu:** I would love that.
520
+
521
+ **Jerod Santo:** And for clients that support it, it is a spectacular feature. Now, a lot of the popular podcast apps don't care. Spotify is not gonna use it. Apple Podcasts historically has not used it. So they basically don't exist. But the indie devs tend to put those kind of features in... Like the Overcasts, the Castros, the -- I'm not sure Pocket Casts is indie anymore... But those people who really care about the user experience of their podcast clients - they support chaptering. And for the ones that do, it's a really nice feature.
522
+
523
+ **Gerhard Lazu:** Yeah, I love that. The other thing that I would really like is when I write blog posts, I could just drag and drop files as I do in GitHub, and just get them automatically uploaded to S3... Because right now, I have to manually upload them...
524
+
525
+ **Jerod Santo:** You and me both.
526
+
527
+ **Gerhard Lazu:** ...and then referencing them is so clunky.
528
+
529
+ **Jerod Santo:** I would love that feature.
530
+
531
+ **Adam Stacoviak:** You're exposing our ad-hocness. Come on now. \[laughter\] We literally open up Transmit, or whatever you use to manage S3 buckets...
532
+
533
+ **Jerod Santo:** And we upload them.
534
+
535
+ **Adam Stacoviak:** ...and we drag and drop them, and then we Copy URL... But first you have to make it readable by the world - don't forget that part - and then put the link into your blog post.
536
+
537
+ **Jerod Santo:** No, you can globally configure that on the bucket, so that all files are readable...
538
+
539
+ **Gerhard Lazu:** Yes, we do have that.
540
+
541
+ **Adam Stacoviak:** Really? I didn't know about that.
542
+
543
+ **Jerod Santo:** But it still sucks... \[laughs\]
544
+
545
+ **Adam Stacoviak:** It does suck.
546
+
547
+ **Gerhard Lazu:** But one thing which I do for these episodes, for the Ship It ones - I take a screenshot... By the way, I took very good screenshots of all three of us... And I put them in the show notes.
548
+
549
+ **Jerod Santo:** I saw that. You're the first one to do that... So again, you're pushing the envelope of Changelog podcasts, and you're probably pushing us towards features that I would normally just completely put off over and over again.
550
+
551
+ **Gerhard Lazu:** See what happens when people come together and talk about what could improve?
552
+
553
+ **Adam Stacoviak:** Yeah.
554
+
555
+ **Jerod Santo:** Well said.
556
+
557
+ **Gerhard Lazu:** So what I propose now is that we go and improve those things, and come back in ten episodes. How does that sound?
558
+
559
+ **Jerod Santo:** Sounds good.
560
+
561
+ **Adam Stacoviak:** Kaizen!
562
+
563
+ **Jerod Santo:** Kaizen.
564
+
565
+ **Gerhard Lazu:** Kaizen.
Learning from incidents_transcript.txt ADDED
@@ -0,0 +1,247 @@
1
+ **Gerhard Lazu:** So Gergely Orosz - and I may have gotten his name wrong; I'll try at it again... Gergely Orosz - he tweeted in April about this new team that's forming around the problem that they have been passionate about for some time now; so it was like a natural team that just got together. I was intrigued, as I usually am, that there was something there. I signed up, and shortly after I received the nicest emails from Stephen. And it read like this... It was short, to the point, friendly... Really nice. "Hey, Gerhard. Thanks for signing up. I'm a long-time fan of Go Time, Changelog. It was nice to see your name pop up. Just wondering what capacity you're interested in Incident.io. Let me know. Thanks. Stephen."
2
+
3
+ That was great. That was April; a few months have passed, a few more emails have been exchanged, a demo was had, which was really good; thank you very much for that. And Ship It launched - that happened as well in all this time... And I always wanted, at the back of my mind, to have you part of Ship It and part of the Changelog setup. And that happened. Episodes 10 and 20 have more details on how that happened and why it happened. And now, in episode 21, it's finally happening. It's a special moment where Chris and Stephen are joining us in person. Welcome.
4
+
5
+ **Stephen Whitworth:** Thanks for having us.
6
+
7
+ **Chris Evans:** Hey. Good to be here.
8
+
9
+ **Gerhard Lazu:** So I'll go straight to the point... Why Incident.io is important to others - why is it important to others? How does it help others?
10
+
11
+ **Chris Evans:** \[04:01\] So what we're building at Incident.io is the very best way for whole organizations to get involved in incident response. And I guess the context for why we think that's important is -- the world has massively moved on in the last few years; probably more than that. But essentially, organizations where they used to see things like technology \[unintelligible 00:04:20.10\] that's no longer the case. These days, technology is deeply intertwined into organizations. Customers have high expectations of companies too, so they want every single service to be online all the time; downtime is just sort of like not acceptable. Along with that, customers have choice as well. So where in the past they might go "Well, my service is a bit rubbish, but I'm sort of stuck with it", they can just leave. And it's also \[unintelligible 00:04:49.05\] where everybody is in the same space. So where ten years ago everyone would be sat in an office, and something goes wrong and everyone sort of like piles into a room - that's not the case anymore. People have moved into Slack.
12
+
13
+ So when you look at all those sorts of things rolled up together, the demand when things do go wrong are really high on people dealing with incidents... Whether that's engineers who have to fix things, or whether it's customer support people who have to get information to customers incredibly quickly and sort of have fingers on the pulse there... And fundamentally, it doesn't feel like to us the tooling in this space has really kept pace with how people are operating. And so typically, what that means is people do one of two things. So they either will go "We have a bunch of tools that sort of help us in this area, and then we'll write down on paper how we pool them together, and we manage to sort of marshal something into something that looks like a good incident response."
14
+
15
+ Or at the other end of the spectrum you might get some leading companies who then try and write their own tooling to encapsulate that process a little bit. And fundamentally, we feel that shouldn't be the case. People shouldn't be building this sort of thing themselves; you wouldn't go out if you're starting your company today and say "I'm gonna build a paging piece of software, because I need someone to be able to call me when an alert fires." So we think there's sort of a parallel here with incident response, and that's really where I think the motivation for Incident.io came from. Essentially, there's a problem to be solved, there's a problem that pretty much every company has, and they're solving it poorly, and we think we can do a much better job.
16
+
17
+ **Gerhard Lazu:** I really like your tagline, which is right on the homepage... "Playing the leading role, not all the roles." That is a very interesting one. Can you expand a little bit? And we can compare what I understand and what you've meant by it.
18
+
19
+ **Stephen Whitworth:** Yeah, absolutely. So when stuff goes wrong in technology organizations, and it goes wrong fairly frequently - you get paged by PagerDuty or Opsgenie, and then you sort of get dropped into this white space where you need to define a process. And what often happens there is that, I guess, you're floundering. There's a lot of stuff to do. You might need to go and tell the executive that's responsible for the area, but you also might need to SSH into the machine and reboot something, or simultaneously trying to investigate the logs and see how bad it is. And in reality, these are probably a few different roles. But the lack of having a sort of structured, automated way to pull apart your incident usually means that chaos ensues and you kind of take all of these roles on yourself. And what we're trying to do is say, "No, you get to encode your process in the way that you'd like to respond to incidents into the tool", and as a result, we can give those different responsibilities to different people... And including taking a lot of the process management onto our tool, so no other human has to do it, so you can really focus on the problem and not the process of actually working through the workflow. You get to focus on logs, or communication, or whatever it is a human is best at doing, as opposed to trying to follow a workflow under high stress, which - we just find that never really works that well.
20
+
21
+ **Gerhard Lazu:** \[07:58\] Yeah. I think that's really powerful, and I'm wondering, from that perspective, what does the ideal incident workflow look like to you? Because a lot of these principles and a lot of these flows that you're capturing are based on a lot of experience that you share, the founders. So you've seen many of these... But what does the ideal incident flow look like to you?
22
+
23
+ **Chris Evans:** I think that's a really pertinent question, and I think the answer is somewhat "It depends." Our view is that there's a set of core defaults that we think every company should follow. So we want to kind of encapsulate those in the product. But equally, every single company is different, so there's things that different companies need to be able to imprint into the process, to say "For us, it's really important when this thing happens, that we engage this team and pull them in." And those sorts of workflows and automations are different wherever you go.
24
+
25
+ But if we look at the core of what good incident response looks like, it looks like keeping context all in one place, it looks like having very clear roles to be able to define who should be doing what, it looks like having a structured way to be able to coordinate your response... So everyone should know exactly who's picking up what actions and when, so you're not tripping over each other... And it looks like really good communication as well. So that's like communication internally, within those people that are dealing with the thing that's broken, it looks like good communication to other folks within your organization... So the exec that's at home, that needs to stay in the loop, so that if he/she is called upon, in the heat of the moment they have the right information at their fingertips... But then also communication out to your customers. They're often the last to know.
26
+
27
+ We see this a lot, where you jump on Twitter and you're having an issue with something, and you sort of tweet whoever that is, and they come back and go "No, everything is fine" and their status page says the same, and 30 minutes later finally the information will come out. And all those kinds of things are just painful. So yeah, I think good response is built on all of those foundations, with the ability to tweak the bits that are most important to you.
28
+
29
+ **Gerhard Lazu:** I really like that answer. The reason why I like it is because you mentioned the guiding principles which are essential to good incident management... Less the flow, because it depends, and I know people don't like hearing that, that it really depends. So as long as we agree on the principles, we know how to shape them to our context; that is really powerful. But I think you were going to say something, Stephen.
30
+
31
+ **Stephen Whitworth:** Yeah. We think about this a lot internally, and we like to think about this as sort of a scale from JIRA on one end, as a relatively unopinionated piece of software that you can stitch together into an incredibly powerful thing, \[unintelligible 00:10:46.17\] know how to do it... And a tool called Linear, which is the issue tracking tool of our choice, which is opinionated, fast... If it doesn't work for you, it's not going to flex to the way that you want to work. But if it does, it's amazing. And we tried to place Incident.io consciously towards the Linear end of the spectrum at the moment, which is we think there's a few, like Chris mentioned, a few core principles to doing incident response really well... And we're unlikely to flex on those. We're unlikely to say "Incidents shouldn't have leads, or they shouldn't have actions", or any of these sorts of things... But we realize that there are, like you say, things above the principles that change, such as policies, regulators that need to be contacted in certain situations... And we're trying to build the core of the product as a very principled, opinionated piece of software, with the right kind of extension points that you can hook in.
32
+
33
+ Think of it much like a program that you'd build. You'd build your core abstractions, and then when you want to have end consumers, you give them a much smaller, more focused API surface that they can really just go and interact with the product in the right way.
34
+
35
+ **Gerhard Lazu:** \[11:56\] I really like that. And to come back to connect, to playing the leading role, not all the roles - what it meant to me is that you have experience in how these things happen, and you have years of experience dealing with incidents at the banks... And that's important. When it's about people's money, when there is an incident, it hurts; people can't pay with cards, and it's really important that that actually works. And if there is a problem - and there will be problems - how quickly can you solve it? What can you learn from it? So playing the leading role in incident management is really important in today's world, which is very complex. The systems are only getting more complex, so how does our experience keep up with the complexity? How do our learnings keep up with the complexity, and how do we share them? It comes back to these principles... How do you teach someone to incident-manage? It's hard, because it depends; and yet, there is a way, and there is a way to instill these core principles and say "This is what's important." But what does it mean to you, for example?
36
+
37
+ One of the things I really liked, and I liked many things, but this is one that really stands out... It's when there is an incident, you can choose every 30 minutes to be notified to give an update. It's such a simple thing, but so important to keep people in the loop constantly, and you yourself to be reminded by the tool "Hey, it's time to update." And you may skip it, you don't have to do it, but it's a good thing to have. So it's stuff like that, which was really powerful.
38
+
39
+ So I think that we get it, I think that I get it. When I say "we", Changelog.com... Does that qualify, our logo, for your homepage? What do you think?
40
+
41
+ **Chris Evans:** A hundred percent.
42
+
43
+ **Stephen Whitworth:** I think we can make that happen.
44
+
45
+ **Chris Evans:** Yeah. \[laughs\]
46
+
47
+ **Gerhard Lazu:** Thank you. \[laughs\] Good. I like that.
48
+
49
+ **Chris Evans:** Is this whole podcast recording just an elaborate way to get your logo on our homepage? Is that what it is?
50
+
51
+ **Gerhard Lazu:** That's exactly what it is. That's the only reason why we're doing this, Chris. You got it!
52
+
53
+ **Chris Evans:** It's done now, we can wrap it up. Cheers, Gerhard.
54
+
55
+ **Break:** \[13:52\]
56
+
57
+ **Gerhard Lazu:** Can you describe for us the context in which Incident.io started? The idea, the team... How did it all begin?
58
+
59
+ **Chris Evans:** If we wind the clock back a few years now in fact, actually, I'd just joined Monzo. I was running their platform team at the time. And as part of that, I was just \[unintelligible 00:15:15.07\] with picking up responsibility over the on-call function at Monzo. So this is like the engineers that get called when something goes wrong, when the bank's not working.
60
+
61
+ And when I picked it up, basically there were a bunch of relatively unhappy engineers who every time they got paged were jumping into this one shared Slack channel, trying to navigate a pretty complex application. Banks are very, very complicated... And as a result of all those things, they were really struggling to get more engineers onto their on-call rotation.
62
+
63
+ \[15:46\] So I ended up building the most basic solution to try and make that process a little bit easier. The things that we were trying to solve were to allow an engineer who has been paged into an incident to sort of take a little bit off their plate by creating a Slack channel automatically and pinging someone in customer support and saying "Hey, the engineers are dealing with it. If you need to communicate with them, use this thread." And it was sort of built around a Lambda function, super-simple, very primitive. But it worked really, really well. It sort of just took a tiny bit of effort off of people's plate, and it sort of did wonders towards people wanting to jump into on-call, people being able to jump into channels and see the entire context of the incident, from end to end.
64
+
65
+ And then from that point onwards, it just became something that Monzo just continued to build on. And so over the time that I was there, it then became more of an application, and it sort of grew and grew, and then we eventually started speaking about it publicly, and then it led to us open sourcing it.
66
+
67
+ So I think all of that sort of culminated in Monzo having this tool that the entire organization started using, not just engineers. It was -- people in customer operations were declaring incidents through this tooling, and people in money, when stuff went wrong there, were doing it... So yeah, it just became something that we were like, "This is great." And I think that was the sort of early seeds, that was this sort of better way to deal with incidents... And I guess fast-forward a little bit, Stephen, Pete and myself, sort of all technical leaders at Monzo, in a lot of incidents... And I guess it kind of felt like there was space for someone to build something and actually share that with the world. And as I said, Monzo had open sourced what they called Monzo Response, and it was sort of good, and it worked well for Monzo, but when you look at what the software did, it's similar to what a lot of other companies have done in that space when they've had to build something -- they'd built something that's just about good enough and just about fits the needs... And it has rough edges, because it's sort of no one's job to build that tooling and own that tooling.
68
+
69
+ So yeah, that was really what led to us coming up with the idea, it became something we worked on evenings and weekends... Monzo were great at supporting us doing that as well... And yeah, it sort of developed from there and just snowballed into this product that we have today.
70
+
71
+ **Stephen Whitworth:** Yeah, and I think the background for it starts, for me, at least when I co-founded a company back in 2015 called Ravelin... So we built credit card fraud detection software. We would be in the synchronous payment flow for apps like Deliveroo and Just Eat. So whenever there was an incident, it was automatically relatively high-impacting. And I remember, as someone that was on-call during that time, thinking about just the lack of automation that I, as an on-caller, had to deal with - creating channels, and telling the right folks, and going into customer channels and letting them know... And I feel at that time this thing sort of registered in my brain as "Is there any way to put in a credit card and have this problem solved for me?" And at that time there certainly wasn't, and in 2021 it's still debatable whether that has been solved... So that was another part of the genesis.
72
+
73
+ **Gerhard Lazu:** Yeah. The one thing which I've seen in Incident specifically, and that attracted me to it, is the simplicity, which to me speaks of the iterations that had to happen for the idea to get to the point it did. So seeing Incident in its first phases - I'm not sure that it's opened up yet; like, you can sign up, register and request access, but you can't just put a credit card in yet, and then start using it... I don't know where that is. The point being that using it as a beta user, it felt way more advanced than I would expect a beta product to be. What that meant is the experience - that's what I'm always thinking about: what is the flow of this product? And it felt very polished. It felt simple. It felt like "Okay, things are missing." I mean, you haven't even launched it properly... But it felt ready. MVP -- it's like more than an MVP. And I liked that.
74
+
75
+ So what you've just told me explains why... It explains that you have been solving this problem in different capacities, in a different context, and now you're bringing it to the masses. It gets really simple. And based on what I've seen - again, I don't wanna spoil it for others, but I really liked it. It was great. Simple, to the point... Lots of opportunity, and I think that's what you want with a new product - where it can go, not necessarily having all the bells and whistles... Because actually that's what in my opinion makes products bad: when they do too many things. You don't want that. So focus on the simplicity. And that story that you've just shared explains it really nicely, so thank you for that.
76
+
77
+ **Chris Evans:** \[20:26\] Yeah. That simplicity isn't an accident either. That's very much an active product choice on our part. So something that we want to be true always is that you can install Incident.io into your Slack workspace and you can basically get going and start creating incidents with very little onboarding. And at the core of it, what that means is you need to know one slash command or one message shortcut to create an incident, and then at that point it's just like Slack, but a little bit better.
78
+
79
+ So you're just in a channel; it's not like a new product experience everyone's got to learn. You're in a channel, and what we try to encourage is a learning by doing type of approach to using the product. So rather than someone having to figure out everything all in one go, you'll see someone create an action inside of a Slack channel and be like, "Hah. That's really cool." And we'll give people pointers and nudges as to how they can do that. And this osmosis approach is very deliberate and sort of leads to this kind of organic growth and adoption across organizations. And again, that's come through experience, of that being a way that it worked really well at Monzo. Nobody told someone in customer service that they should start declaring incidents in \[unintelligible 00:21:30.09\] But it sort of happened, because people saw the process, they were pulled into Incident when they were there as a bystander or an additional piece of support, and they were like "This is great. This is the right way to solve this problem."
80
+
81
+ I think you sort of start there, you start with like a Slack with benefits, and the product then layers things on top of that. So when you get to the point where you go "Do you know what? Our organization has grown, we have some complexity we need to navigate in incidents, so if I set a sev1 type incident, I want to create a ticket in a JIRA thing, so that someone who depends on that as a process - we can do that for you. We can automate that." But you don't need it from day one, and you can sort of layer up and build up this approach to get a very powerful product eventually, but with none of that sort of steep onboarding curve.
82
+
83
+ **Stephen Whitworth:** I think fundamentally we have an advantage, because we are building a product that we wanted to use when things were going wrong... I've seen a lot of people's startups where they're kind of searching for a problem and a pain point, and I think that that is a decent way to find it, but I think we're just at an advantage from a product perspective of knowing that we have 12-18 months' worth of stuff that we know we haven't done yet, but we know that we really, really would want when stuff was going wrong. So as a result, I guess that gives us a bit of a benefit when we're trying to build things, because we're not having to search out and find the pain points.
84
+
85
+ Obviously, our customers are telling us what doesn't exist and what they want us to add, but I think we have a decent nose for what's painful as well.
86
+
87
+ **Gerhard Lazu:** I'm really glad that you've mentioned that, Stephen... And now what I'm wondering is how does Incident.io use Incident.io? What does that look like?
88
+
89
+ **Stephen Whitworth:** That's like Inception, isn't it? Turtles all the way down.
90
+
91
+ **Gerhard Lazu:** Yeah.
92
+
93
+ **Stephen Whitworth:** So we use it in a few different ways. Incidents is a kind of fuzzy concept to people. For some people, an incident is like the building is burning down, for example. It's a terrible thing, and it happens once every six months.
94
+
95
+ **Gerhard Lazu:** Hopefully not... That's terrible. A building is burning down every six months... No, thank you. \[laughs\] What kind of a building is that? That's what I wanna know...
96
+
97
+ **Stephen Whitworth:** That was a stupid thing to say...
98
+
99
+ **Gerhard Lazu:** That was too funny... \[laughs\] Go on.
100
+
101
+ **Stephen Whitworth:** \[23:42\] So fundamentally, we have a different view. An incident is really just some kind of interruption that takes you away from what you were currently working on, because it demands a level of urgency for you to respond. So that might look like a particularly severe bug, it might look like a small outage, but it also might look like a really complicated deployment that you're just about to do. As a result, we use it for a bunch of different use cases. So I'd say the first is more traditional service outages, 500s on the API sort of thing. We're in the kind of unique position that if we're having issues with our own product, that may inhibit our ability to use it, but most of the time everything just works totally fine on that. We also use it to figure out particularly complicated bugs, where we're seeing errors in Sentry... And we're not quite sure why, but we're trying to lay out this sort of -- think of it, I guess, like a notebook. A way of thinking about and reasoning about the problem.
102
+
103
+ So we have a functionality in Incident.io where if you pin something on Slack, or emoji-react with a specific emoji, that will get added to your incident timeline. So what you're doing is you're sort of diving into things, and when you find a particular point that is high signal or very useful for understanding what's going wrong, we pin that as well. And that means that we have this record of what we're trying to dig into... Which isn't necessarily just an incident, but is a really, really useful way to use that increased collaboration, better communication side of the product.
104
+
105
+ So a few different ways... I think, Chris, you will also give a nice answer, that doesn't include a building burning down every six months... \[laughter\]
106
+
107
+ **Chris Evans:** Yeah, I think you've hit the nail on the head. I think using Incident.io incidents for low, low severity things has many benefits. It has the benefit of you just leaving a really, really good trail, so someone else can come along and first of all see what you've done, see whether you've reached a solution, and understand your thought process and learn a lot...
108
+
109
+ There was an engineer at Monzo who used to do this repeatedly, where he would dive into some of the gnarliest bugs, that would scare most people away... And it was just a fascinating read, being able to go "I have this channel. I can look at that, I can look at a timeline", and you can sort of scan through and catch up on those things... It also acts as like a really nice, structured way to hand over work. So if you are picking up some lower-severity bug or issue in production, but you have to go somewhere, you can be like "Cool. I've left all the context in this channel. Pick it up and run with it kind of thing." So I think all of those things kind of lean towards that... It's just useful and helpful. There's very few downsides. I think that's the main thing - there is such a low cost to starting an incident. You're talking one slash command, and you've then got everything at your fingertips. And lo and behold, if the worst does happen and you're investigating and you go "Oh, this is really bad", you are suddenly now already in the place where you need to deal with your incident, with a heap of context that people can then pick up and run with... And surrounded by all of the support and tooling that we've got in place there. So if you need to escalate to engineers, they're a button away. If you need to communicate with your customers via your status page, the same sort of thing.
110
+
111
+ And that's the approach we use at Incident.io. As Stephen says, we are just using it for any kind of structured, but interrupt-driven approach to dealing with things.
112
+
113
+ **Gerhard Lazu:** How many incidents have you had in your instance of Incident.io, do you know? Or do you wanna check?
114
+
115
+ **Chris Evans:** I can tell you...
116
+
117
+ **Gerhard Lazu:** I can tell you that the Changelog.com Incident.io is at number four. So the next one would be the fifth one... In a few months. That's been really, really good. And the thing which I would like to add is that the mentality shift which happened when it comes to viewing incidents is something positive, something to learn from... Like, literally, learning from failure. I loved that shift. Because it's not a bad thing when it happens. I mean, okay, it is from some perspectives... But not from the perspective of the people that have to handle it. It's something positive, something to share, it's something to learn, it's something to solve. It's intriguing. I know this may sound controversial, but I'm actually looking forward to the next incident... And that's a very weird thing to say, but it's true, because I know what to expect. The flow is fairly easy. I know that value has been produced, in that it will be captured and others can reference what happened, why it happens, and so forth. So the whole negative side of something going wrong is being mitigated by this nice, simple tool. And I like it.
118
+
119
+ **Chris Evans:** \[28:18\] Nice. Well, to answer your question from earlier, 91 incidents is what we have declared.
120
+
121
+ **Gerhard Lazu:** That's a good one.
122
+
123
+ **Chris Evans:** Yeah.
124
+
125
+ **Gerhard Lazu:** How many sev1's?
126
+
127
+ **Chris Evans:** We had eight major severity incidents for us.
128
+
129
+ **Gerhard Lazu:** Over how many months?
130
+
131
+ **Chris Evans:** A year and a bit. No, maybe a year, something like that.
132
+
133
+ **Gerhard Lazu:** A year, okay. So one sev1 every month and a half. Okay, that's interesting. So did this have to do with your production setup, with anything like that? Or what is a sev1 incident, I suppose is what I'm asking.
134
+
135
+ **Chris Evans:** It's a good question. So we have sort of like guideline text within the product which sort of helps to sort of steer you to set the right value... \[unintelligible 00:28:55.21\] We've actually had none that we've marked as like critical, the top-top severity. These are major, which is what we'd consider sort of seriously impacting, in some form...
136
+
137
+ And to give you a sense of what some of these are like - Slack having an outage; four of these are "Slack is returning 500s", or whatever, and we're at the mercy of them, building on top of their platform... But we'd still consider it an incident, because we own that relationship with our customers, and it's something that we'd wanna proactively reach out and let them know what's going on... But yeah, in roughly -- very handwavy terms, the way we would rate incidents would be... Critical would be the entire app is completely down; you can't access the dashboard, you can't access anything through Slack, and it's there for some prolonged period of time. That's like the worst possible case of incident. Major would be some key component, some key feature or product flow within the product not working, and something we need to urgently, urgently all swarm on... And then Minor, which is our only other severity at the moment, is sort of everything else. So that is the big, big bucket of everything from "This is a super-minor, non-impacting bug that I wanna deal with in the open", through to something sort of causing a minor problem for one customer, sort of thing.
138
+
139
+ **Stephen Whitworth:** I want to come back and touch on what you were saying earlier, Gerhard, which was around how your behavior with respect to incidents has changed from using our tool... That's the goal. We are selling technology at one level, but with our most successful customers what we're actually achieving is this sort of organizational change and acceptance of "Incidents aren't as scary as we thought they were. They are a way for us to assemble a team of people together, and for us to approach that with this sort of shared mental model of how we're thinking about this problem." And as a result -- I think Loom is a really good example here. We started off in that platform team, being adopted by a lovely person called \[unintelligible 00:30:51.22\], who I'll shout out here... And now, a few months later, we have been used by 80% of the organization. And that is really a reflection of the fact that it's not just about the engineering team anymore, it's about customer support, it's about sales, it's about executives... Incidents are fundamentally social, and you need to build a product that acknowledges that and leans into it, and that is really where we're trying to head. We're not trying to build the best tool for SREs. SREs are important, they need tools, but we think that essentially the rest of the organization has been left out of these tools for too long, and we really want to build stuff that brings the rest of them in. So I'm very excited to hear about your approach and your experience with us.
140
+
141
+ **Break:** \[31:38\]
142
+
143
+ **Gerhard Lazu:** I'm wondering, what does the Incident.io production setup look like? You know what ours looks like, we're very public about it... But what does your production setup look like?
144
+
145
+ **Stephen Whitworth:** It's intentionally very simple. We run a Go app, which is just a single binary, on Heroku; so that runs all of our own infrastructure. We use Postgres as a backing store, GitHub stores all of our code, we run tests and deploy using CircleCI... I'm trying to think -- and a little bit of BigQuery and Stackdriver tracing and monitoring as well. So intentionally, trying to maintain as few moving parts as possible, and get very rich cloud providers to do that for us wherever we can.
146
+
147
+ **Chris Evans:** Yeah. I think it's \[unintelligible 00:34:50.29\] I've come from a world where I was responsible for everything, from the lowest level moving parts in your storage system, through to deployment tooling, and all these kinds of things, and it's genuinely a wonderful experience being in a "serverless" environment, where we haven't got a single server that we have to run and manage, which is lovely... Essentially, we get to focus all of our time on writing the code, which time and time again we've used to ship features incredibly quickly... So - it's not uncommon for someone to raise a feature request, certainly in the early days, when we were in this very fast shipping and iterating kind of mode... Raise it in the morning, at 10 AM, and by lunchtime their feature is in production. We're still doing that today; we're also working on some more longer-term, strategic things along the way.
148
+
149
+ **Gerhard Lazu:** That sounds amazing. That sounds like the dreamplace to be in when it comes to iterating, when it comes to shipping features out there, seeing how they work... So you mentioned a single Go binary - is that right? So no microservices, a monolithic Go binary... Is that what it is?
150
+
151
+ **Stephen Whitworth:** \[35:57\] Yeah. So it's broken down by services internally. So a service would be responsible for maintaining actions, or listing and updating custom fields against an incident. So we sort of factored everything out internally, but the fact that everything is just in one binary makes testing, deployment, communication a whole lot easier. And this isn't to say in the future this might not change, but there's just something very refreshing about running a Go app on Heroku, connecting to Postgres, and just really not having to worry about a huge amount else.
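To make that shape concrete, here is a minimal sketch of a modular monolith in Go - the service names, routes and handlers are illustrative assumptions, not Incident.io's actual code; the point is that several internal "services" are just types wired into one HTTP server and shipped as a single binary:

```go
package main

import (
	"log"
	"net/http"
	"os"
)

// ActionsService and IncidentsService stand in for the internal
// "services" described above; in a real codebase each would live in
// its own package and talk to Postgres.
type ActionsService struct{}

func (s *ActionsService) List(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte(`{"actions": []}`))
}

type IncidentsService struct{}

func (s *IncidentsService) List(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte(`{"incidents": []}`))
}

func main() {
	mux := http.NewServeMux()

	// Every internal service is mounted on the same router, so the
	// whole app still builds, tests and deploys as one binary.
	mux.HandleFunc("/api/actions", (&ActionsService{}).List)
	mux.HandleFunc("/api/incidents", (&IncidentsService{}).List)

	// Heroku tells the process which port to bind via $PORT.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, mux))
}
```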
152
+
153
+ **Chris Evans:** Worth highlighting as well that there are multiple replicas of that single \[unintelligible 00:36:32.27\]
154
+
155
+ **Gerhard Lazu:** Of course, of course. Yeah, that was an important one. What about the assets? Do you bake them into the single Go binary as well? Is that how you deploy assets?
156
+
157
+ **Stephen Whitworth:** This is \[unintelligible 00:36:45.11\]
158
+
159
+ **Gerhard Lazu:** Yeah, for like the website. Like, Incident.io, when it loads up - all the assets, the CSS, the JavaScript, the images... Where do they live?
160
+
161
+ **Stephen Whitworth:** They're served through the Go binary as well. So we have Netlify for our website, and that handles everything there. But everything from the actual application itself, including the frontend and backend, is served all from the same Go binary.
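As a hedged sketch of how a single Go binary can serve both the API and the compiled frontend assets - this uses the standard library's embed support and is not necessarily how Incident.io actually does it:

```go
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

// The compiled frontend (CSS, JavaScript, images) is assumed to live in
// ./dist at build time; go:embed bakes it into the binary, so there is
// nothing extra to deploy alongside the executable.
//
//go:embed dist
var assets embed.FS

func main() {
	mux := http.NewServeMux()

	// API routes live next to the static assets in the same process.
	mux.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	dist, err := fs.Sub(assets, "dist")
	if err != nil {
		log.Fatal(err)
	}
	mux.Handle("/", http.FileServer(http.FS(dist)))

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```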
162
+
163
+ **Gerhard Lazu:** Okay. So the website part is deployed separately. That's like your Netlify deployment. But the API, which is the thing that Slack interacts with - that is your Go binary.
164
+
165
+ **Stephen Whitworth:** Absolutely.
166
+
167
+ **Gerhard Lazu:** Okay. That was a really interesting thing. I couldn't figure out, "How do I get the images, the screenshots that I do for incidents, on the incident page?" And I figured out that if you \[unintelligible 00:37:27.22\] things in Slack, you're actually serving them from Slack, is that right?
168
+
169
+ **Stephen Whitworth:** Not quite. There's some hidden complexity inside of Slack around images and being able to serve those. So there are two types of ways that images will show up within Slack. One of those is like an unfurl; so if you have a public image URL, for example, you post in Slack, that will unfurl in Slack. And if you were to pin that, we could show on the timeline just by sort of using that original source URL.
170
+
171
+ There is a second type of image that will display, which is an upload that you've done. So if I have an image of my laptop and I decide to upload that into my Slack workspace, that goes into Slack. Slack stores it on their servers, rather than unfurling from somewhere external... And it presents it out to you. And the URL that they present it out to you on is an authenticated URL, so you have to manage some of that complexity if you were to serve it through Slack.
172
+
173
+ So what we do actually is we anonymize images, we upload them to Google Cloud Storage, and then when you come to render your timeline, what we will do is we will enrich that \[unintelligible 00:38:27.20\] timeline item with a signed, short-lived URL for that image, to serve it out, basically.
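The signed-URL step can be sketched with the Google Cloud Storage Go client; the bucket name, service account and key handling below are assumptions for illustration, not Incident.io's real configuration:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

// signedImageURL returns a short-lived link to an image that was
// previously copied out of Slack into a private GCS bucket, so the
// incident timeline can render it without making the object public.
func signedImageURL(object string) (string, error) {
	// Illustrative: the signing key of a service account, read from disk.
	key, err := os.ReadFile("service-account-key.pem")
	if err != nil {
		return "", err
	}
	return storage.SignedURL("example-incident-attachments", object, &storage.SignedURLOptions{
		GoogleAccessID: "uploader@example-project.iam.gserviceaccount.com",
		PrivateKey:     key,
		Method:         "GET",
		Expires:        time.Now().Add(15 * time.Minute), // short-lived on purpose
	})
}

func main() {
	url, err := signedImageURL("incidents/42/screenshot.png")
	if err != nil {
		panic(err)
	}
	fmt.Println(url)
}
```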
174
+
175
+ **Gerhard Lazu:** That's interesting.
176
+
177
+ **Stephen Whitworth:** So a little bit of complexity to get that seemingly simple feature working.
178
+
179
+ **Gerhard Lazu:** Because I was wondering, where do you store those images? You have to put them somewhere, if you can't get them from Slack... Which kind of makes sense. You have to store them somewhere, and Google storage seems to be the place where you do that from. Interesting. I like that.
180
+
181
+ So the simplicity - I can see how this keeps coming back. It seems to be a theme, keeping things simple, so that you can iterate faster. I think there's something there... That is obviously an understatement - I'm being a bit ironic, because yes, that's exactly how it works. If you keep things simple, on-purpose, things will be fast, things will be straightforward. That's exactly it. So I like that even in your infrastructure setup, that's how it works.
182
+
183
+ Do you use any feature flags when you have new features? How do you ship new features before they're finished?
184
+
185
+ **Stephen Whitworth:** We do use feature flags. We don't have a particularly sophisticated setup there yet. So we're not using an Optimizely or LaunchDarkly, or whatever the products are that do that. But we do have mechanisms internally to be able to say "This is just for us", so we will quite often test things ourselves in production to be able to do that.
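In the spirit of the simplest mechanism that works, an internal feature-flag check can be as small as the sketch below - the flag and organisation names are made up for illustration:

```go
package flags

// enabledOrgs maps a feature flag to the organisations it is switched
// on for; "incident-io" stands in for "just us" while a feature is
// still being tested in production.
var enabledOrgs = map[string]map[string]bool{
	"workflows-v2": {"incident-io": true},
}

// Enabled reports whether a feature is turned on for a given organisation.
// Unknown flags and organisations simply return false.
func Enabled(feature, orgID string) bool {
	return enabledOrgs[feature][orgID]
}
```

A call site then guards the new code path with `flags.Enabled("workflows-v2", org.ID)`, and rolling a feature out to a design-partner customer is a one-line change.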
186
+
187
+ \[39:43\] I expect, as we grow, we will start growing the maturity around that, so that we can start building things for specific customers, and toggling it just for them, to help us build it in the open and get their feedback as we go. We've had a few companies actually that have been essentially design partners on features, and it's just incredibly useful to have someone with a real-world use case and a real need for a thing, and sort of building it with them and shaping it, rather than the -- I mean, clearly, no one's gonna be doing a sort of waterfall "Give me all your requirements and I'll build you your thing." But even just in a world where you're building it week on week, and you have to send them updates, that's clearly a lot less good than "Here's this thing that's about 30% done, and it doesn't do all of these other things, but you can play with that 30% in your live environment and it will work and you can give us feedback from that."
188
+
189
+ We're also very open about what we're working on... We have a public product roadmap which people can visit on Notion. We have a Slack community full of wonderful people that we also tell what we're going to build next... And coming back to the infrastructure side of things, this is all very intentional, because as an early-stage company, we are essentially trying to search for the product that solves the problems that our customers have. We can only do that, or we can do that most effectively if we can build things really quickly and see if what we think is true is actually true. And we can only do that if people can see what's coming up next as well, so that they can help us prioritize and say "Actually, I'd really love to be able to automate things in my incidents, rather than have an API so I can automate them myself." And being able to do that sort of prioritization, both with a customer directly, but also with all of our customers, and be able to ship that stuff really quickly is really useful, and again, is just why we build stuff in as simple a way as we can get away with, essentially.
190
+
191
+ **Gerhard Lazu:** Now that you've mentioned this, Stephen, it reminded me of a feature that I was looking for and I couldn't find, that's runbooks. And I was wondering, where do you sit with runbooks? Do you see them part of Incident.io? How do you think about them?
192
+
193
+ **Stephen Whitworth:** Yeah, it's a great question. Fundamentally, we're trying to build the sort of rails that you will run your incident process on. So the automation. And runbooks are a great way of saying, "Hey, this is a particular type of incident, and in this case you want to go do A, B and C." What we've found up until this point is that I guess from a product perspective we're not sure where this should live. In previous companies, these have lived in GitHub repositories, in other places like Confluence... Some products offer executable runbooks, so you can actually just go in and SSH into a node, and in the document you actually have a live shell... And it's really just a -- we haven't figured out the right approach for it yet, which is why we haven't built it. We're going to get to it in a few months' time.
194
+
195
+ The first thing that we're going to build in order to make that more powerful is workflows. Workflows is a way to -- think of it a bit like Zapier or IFTTT for incidents. So if you can say -- in a particular case, let's say in a platform incident, I want to go and page the on-call engineer, I want to send an email to this particular address, and I want to go and create five of these actions in Incident.io. That kind of looks a bit like a runbook, and we're not sure - is a runbook a set of actions? Is it a document? We're not totally sure yet... But what we are sure about is you're going to want different runbooks based on different things, and we need to give you that layer of being able to say "This incident is different to that incident, and in this case, do something different." And then once we have that, we can essentially build better runbooks off the top of that. Sorry, that was a bit complicated...
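To make the idea concrete, a workflow could be modelled as a trigger condition plus a list of automated steps - the types, field names and step kinds below are purely illustrative, not Incident.io's API:

```go
package workflows

// Condition describes when a workflow fires, e.g. "incident type is
// platform" or "severity is major".
type Condition struct {
	Field  string // e.g. "incident_type", "severity"
	Equals string
}

// Step is one automated action: paging someone, sending an email,
// creating follow-up actions, and so on.
type Step struct {
	Kind   string // e.g. "page", "email", "create_action"
	Params map[string]string
}

// Workflow glues the two together, Zapier/IFTTT style.
type Workflow struct {
	Name       string
	Conditions []Condition
	Steps      []Step
}

// Example: when a platform incident is declared, page the on-call
// engineer and notify a shared address.
var PlatformIncident = Workflow{
	Name:       "platform-incident",
	Conditions: []Condition{{Field: "incident_type", Equals: "platform"}},
	Steps: []Step{
		{Kind: "page", Params: map[string]string{"schedule": "platform-on-call"}},
		{Kind: "email", Params: map[string]string{"to": "incidents@example.com"}},
	},
}
```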
196
+
197
+ **Gerhard Lazu:** \[43:41\] No, that was very good. That was very good. I'm thinking in my head how does this link to my experience and what specifically I'm missing in Incident.io from the Changelog incidents which I ran. One of them - and actually, even more - I've caught myself wanting to write down, like "This is one of the steps that you will need to take. And by the way, this step links to this other step." And before you know it, you have like a series of steps that you may want to follow. And some may be optional, because I don't know, the same thing will happen next time... But I know that this is where I need to look, and this is important, and this maybe is relevant, but I don't know, because it was relevant now... So a way to capture almost like the steps that were followed to solve an incident, to understand an incident, whatever the case may be... And what we have even today - we have some make targets... You can laugh. That's funny. Like, why would you have make targets for --
198
+
199
+ **Stephen Whitworth:** \[unintelligible 00:44:35.20\]
200
+
201
+ **Gerhard Lazu:** For following processes, like a series of steps, right? So we do like "make how to rotate secrets". And then it gives you a series of steps, you press Yes to the next one, next one, next one, and then eventually, you have rotated the secret. For example, how to upgrade Elixir. You run the make target, and it shows you step by step what you need to do; and there's this file, and there's that file, and a couple of files. Now, could they be automated? Yes. Should they be automated? We don't know, because it depends how often we use them. So it's almost like there is a lot of knowledge that can be captured in these incidents, and by seeing which incidents keep coming up -- and again, an incident is not something bad. It's something that needs to improve; so there's that positive mindset. Like, credentials have leaked; I need to rotate them. It's an incident. So what are the steps I need to follow to rotate credentials? That's one way that I'm looking at it. So that is my perspective, and that's how I'm approaching this.
202
+
203
+ **Stephen Whitworth:** I think that's very legitimate. What we're trying to build at Incident.io is essentially a structured store of information that takes data from Slack, from Zoom, from escalations through PagerDuty, from errors in Sentry, sort of pulls it down into a set of rows and columns, whereas previously it was scattered throughout all of these tools... And then once we have this structured data that says "Okay, Chris was in this incident, and then Stephen was paged in, and it affected this particular product", that is now queryable, structured information that you can go and do interesting things with, like recommendations. Does this look similar to something else that has happened? There's lots and lots of stuff there. We haven't really dipped our toe into it yet, but above us sits a whole layer of monitoring; the Datadogs, the Grafanas of the world. We're not currently ingesting any of that information, like deployments, or any monitoring information, but you can imagine that our set of structured information becomes a lot richer when we integrate back upwards into those tools... But also, we don't want to be this silo of information that hides it away in our SaaS tool. We would also like to build APIs, integrations, exports to BigQuery... Just ways of getting your data back out into your own tools, such that you can really just start off of this structured set of information and build what you want off the end of it.
204
+
205
+ So yeah, I think there's a lot of stuff here. We've barely just scratched the surface of what will be useful once we've got all this stuff in Postgres, essentially.
206
+
207
+ **Gerhard Lazu:** This is really attractive to me from the perspective of - building something simple requires you to understand the problem really well. That takes time. Building something complex - it's fairly easy. "Sling some stuff. Does it work? Well, it works for some; it's okay. Let's just move on. More features, more features." And before we know it, no one actually wants to use the product, because it's too complicated. So I've seen so many products fail in that way. So the attractive part is this relentless focus on simplicity, keeping it simple, understanding it well. What makes sense. Like, "Okay, Gerhard told us this. What are other customers telling us? What makes sense for them, and what is this common thread which delivers on 80% of what people are asking for? The remaining 20% is too complicated, maybe not worth doing. But let's focus on the 80%, which is the majority." So I like that approach. That makes a lot of sense to me.
208
+
209
+ **Stephen Whitworth:** Have you heard of the Mark Twain quote...? It says "I didn't have time to write you a short letter, so I wrote you a long one instead." That is extremely applicable to product development.
210
+
211
+ **Gerhard Lazu:** \[48:11\] Definitely.
212
+
213
+ **Stephen Whitworth:** It takes time to build something simple.
214
+
215
+ **Gerhard Lazu:** Speaking of letters, I'm thinking about your blog posts. Some of them are really, really good. I can tell you which my favorite one is, but I'm wondering which is your favorite, Chris. It doesn't have to be yours, by the way. It can be Stephen's, or Pete's...
216
+
217
+ **Chris Evans:** Oh, it's gonna be mine, obviously. What are you talking about...? \[laughter\]
218
+
219
+ **Gerhard Lazu:** Sure.
220
+
221
+ **Chris Evans:** I'll tell you one I actually really enjoyed both researching and writing. That was the one that was around learning from incidents in Formula 1. It's sort of less an opinion piece on how people should do incidents or anything else, but more a spotlight on an incident that I think was run impeccably well. This was one where a minor Formula 1 crash happened when a driver was making his way from the pits around to the grid before the race had started, and caused some damage... And they then fixed the car, from the starting grid, faster than they'd ever done it before, and with none of the garage things they needed around them. It was just incredible... This was all captured on video as well.
222
+
223
+ I think when you look at that, there's just so much that essentially anyone who's dealing with incidents can learn from that. And I think that's a really important thread, actually, for us at Incident.io. We're breaking new ground in many ways, but incidents have been around for -- stuff has gone wrong for a very long time, so there's a lot of interesting learnings that we can take from other industries, whether it's Formula 1, or incident command on fire response type thing... There's so much to learn, and I think that blog post - yeah, I really enjoyed writing it. But the video, if you haven't seen it, go and take a look. It's a really fascinating watch.
224
+
225
+ **Gerhard Lazu:** Okay. I was gonna say that was my favorite, but now I can't, because it's your favorite... And I like cars. I like Formula 1, but especially cars, and it really resonated with me. I'm a visual person, I like videos, so I liked it... But there's another one which is a second close. But before I reveal mine, Stephen, which one is yours?
226
+
227
+ **Stephen Whitworth:** Rather predictably, I'm gonna pick one of my own blog posts, following Chris' trend... So I wrote a blog post called "Incidents are for everyone." Fundamentally, this is, I guess, a calling card; like, the thing that we are building Incident.io with the belief of... It's that current tooling is very, very focused on engineers. So think sort of the PagerDuties and the Opsgenies of the world - these are engineering tools; they present JSON to people. They are very good, but they are not particularly comprehensible to someone working in, say, customer support, or in the sales team, or in the executive team. This is not a slight on their intelligence or anything like that, but it's just not a product that they're used to. And fundamentally, we think that incidents just do involve way more teams than engineering.
228
+
229
+ \[51:00\] So if you think about an incident at a bank, for example. If payments are failing, that might be because some Kubernetes pod is having issues. But actually, that's a \[unintelligible 00:51:09.17\] reportable incident. It needs to have an incident manager there. Executives need to be there to make a call. Customer support has lots of people that are waiting to chat with them. And all of these people need to be involved and present in the incident as well. And really, that is why we're building Incident.io. We're trying to build a tool that caters to the needs of these folks that have been, I guess, too long left out of incidents. We want to build something that feels native to them, and that allows them to get \[unintelligible 00:51:37.26\]
230
+
231
+ **Gerhard Lazu:** This keeps coming up, and I cannot help not notice it, and not even mention it in this case... Bringing people together - really powerful. I mean, that's what it sounds like to me. We need to bring these people together, because each and every one of them has something of value... But they're not talking in the right ways. Or if they're talking, there's too much information overload. So how can we simplify, condense and compress those really valuable pieces of information in a way that people can understand, follow, relate to, go back to, learn from... Super, super-important. And I like this - bringing people together.
232
+
233
+ **Stephen Whitworth:** I have to ask you, Gerhard, \[unintelligible 00:52:17.05\] your favorite one.
234
+
235
+ **Gerhard Lazu:** "Why more incidents is not a bad thing." Actually, "is no bad thing." July 1st, 2021. This is something which I mentioned earlier, in that I'm looking forward to incidents... Which is really weird. But Incident.io makes me do that... Which again, is that mindshift which I talked about. So if you are set to learn from failure, which in my mind, that's the title that we'll give this episode, but you let me know if you have a better one in mind... Is if you're learning, if you're continuously learning, if you can make it to be a positive experience, then you'll be looking to do more of that. And this is applicable to almost everything. If it's fun, you wanna have more of it. Don't have too much fun, because that can be a bad thing... \[laughter\] But can you combine being responsible, being adult, sharing information, having fun - and if you can combine all those things, bring people together? What can be better than that? I don't know. I'm yet to discover it. I'm not saying there isn't something better than this... But this sounds like a really good proposition to me. So I quite like that.
236
+
237
+ **Chris Evans:** Yeah. We think this blog post is essentially an acceptance of reality. Stuff breaks all the time, in little and large ways... You can try and ignore that, or you can solve it in Slack DMs, or you can accept it, use it as a signal to inform what you should do next... And like you say, try and have fun whilst you solve it as a team, together.
238
+
239
+ **Gerhard Lazu:** I was going to say which is your key takeaway, but we've had quite a few takeaways in this last part; they're all very valuable. The blog posts are really good. They're not too long, not too short, they're just the right amount. Go check them out... And keep looking forward to incidents. It's not a bad thing.
240
+
241
+ Thank you very much for joining me. This was great fun. I'm looking forward to the next one.
242
+
243
+ **Stephen Whitworth:** Our pleasure.
244
+
245
+ **Chris Evans:** Thanks so much. We really enjoyed it.
246
+
247
+ **Stephen Whitworth:** Thank you.
Let's Ship It!_transcript.txt ADDED
@@ -0,0 +1,21 @@
1
+ [0.64 --> 4.24] We are going to ship in 3, 2, 1.
2
+ [4.24 --> 10.08] I'm Gerhard Lazu, host of Ship It, a show with weekly episodes about getting your best ideas
3
+ [10.08 --> 14.88] into the world and seeing what happens. We talk about code, ops, infrastructure,
4
+ [14.88 --> 18.88] and the people that make it happen, like Charity Majors from Honeycomb.
5
+ [18.88 --> 24.00] We act like great engineers make great teams, and it's exactly the opposite. In fact,
6
+ [24.00 --> 27.04] it is great teams that make great engineers.
7
+ [27.04 --> 30.96] And Dave Farley, one of the founders of Continuous Delivery.
8
+ [30.96 --> 34.24] Start off assuming that we're wrong rather than assuming that we're right.
9
+ [34.24 --> 39.04] Test our ideas, try and falsify our ideas. Those are better ways of doing work,
10
+ [39.04 --> 43.20] and it doesn't really matter what work it is that you're doing. That stuff just works better.
11
+ [43.20 --> 48.88] We even experiment on our own open source podcasting platform so that you can see
12
+ [48.88 --> 54.80] how we implement specific tools and services within changelog.com, what works and what fails.
13
+ [54.80 --> 58.96] It's like there's a brand new hammer, and we grab hold of it, and everyone gathers around.
14
+ [58.96 --> 65.20] We put our hand out, and we strike it right on our thumb. And then everybody knows that hammer
15
+ [65.20 --> 69.60] really hurts. When you strike it on your thumb, I'm glad those guys did it. I've learned something.
16
+ [69.60 --> 70.32] Instead, yeah.
17
+ [70.32 --> 75.12] I think that's a very interesting perspective, but I don't see that way.
18
+ [75.12 --> 75.52] Okay.
19
+ [75.52 --> 78.72] It's an amazing analogy, but I'm not sure that applies here.
20
+ [78.72 --> 83.12] Listen to an episode that seems interesting or helpful, and if you like it, subscribe today.
21
+ [83.12 --> 88.80] We'd love to have you with us.
Money flows rule everything_transcript.txt ADDED
@@ -0,0 +1,271 @@
1
+ **Gerhard Lazu:** So I remember Docker coming out around 2014, so that's seven years now... Seven years ago, Docker fascinated me. I thought it was amazing. Like, "You mean what?! Finally, containers that make sense." LXC has been around for a long time, so I know that Google contributed to the Linux kernel and they were using it for a long, long time, and I also know that Heroku was fairly big and was very successful in that context... But the regular developer that was just coding some software and shipping it - they didn't really use containers, until Docker came about. And I was fascinated by it. I thought that was amazing.
2
+
3
+ In 2014, at the time, I was a lead engineer for a startup called How Are You. And I was into Chef at the time, I was using Jenkins heavily... And I blogged about using Jenkins with Docker for continuous deployment. Again, this was 2014. And in that context, there was a meetup in London... Was it the London Ruby User Group?
4
+
5
+ **Ian Miell:** It definitely wasn't Ruby, because I would not have gone to a Ruby conference at that time.
6
+
7
+ **Gerhard Lazu:** Right. It wasn't a conference, it was more like a meetup.
8
+
9
+ **Ian Miell:** \[04:24\] Yeah, a meetup. I think it was like a DevOps... Maybe it was a DevOps things, or--
10
+
11
+ **Gerhard Lazu:** Yeah... I know ContainerCamp, or whatever the Docker equivalent was at the time... Container Days? Something like that. I can't remember. It was a long time ago. And in that context, I know that we met, and you were like "Hey, Gerhard--" I think you've read something, or we'd be like on the same GitHub issue, or I can't exactly remember how, but we realized that we have quite a few things in common. I was into shell scripting big time at the time, and you loved your shell scripts; even now you love your shell scripts. I do too, let's be honest about it; you can't hide it. And we were both fascinated by Docker at the time. I was more like as an end user, and you were more like in terms of what this means, and what does it represent for the wider sector, I believe, for the tech sector, because you had a more enterprisy experience at the time. Is that right, Ian?
12
+
13
+ **Ian Miell:** It's very different from my memory, Gerhard. \[laughs\]
14
+
15
+ **Gerhard Lazu:** That's exactly what I thought. Like, what I remember is very different from what you -- so what do you remember, Ian?
16
+
17
+ **Ian Miell:** Okay, here's how I remember it... And it just shows how fallible memory is. It's a good reminder. So I remember giving a talk... I was working at a company called OpenBet, and we did backend systems for online gambling companies around the world... So it was like a big monolithic, old-school, three-tier system. And I'd heard about Docker, I'd read something in Wired, and I clicked through, and I got something going in 20 minutes... And I was like "This is amazing." I had not actually heard of LXC before that. So I went to my colleague at work, the resident Egghead genius guy and I said "Do you know anything about this Docker thing?" And he said "No." And I showed him, and he was like "Oh, right. I've been doing that for years, with LXC, and stuff." He had his own containers set up with different setups on his host, and I was like "Do you realize how transformative this would be for engineers, how much time this would save us?"
18
+
19
+ So I did a little skunkworks project, deliberately went skunkworks because previous attempts to go the official route had not worked. I did something before with Erlang... So yeah I went the skunkworks route with a couple of really bright, young engineers, and we got the whole 15-year-old monolith packed into a single container. It was considered not the right way to do things, but we did that. And then we had it built daily. So engineers could just pull the new layer for that day to their machine, whether they were in London or whether they were in the Far East or wherever around the world, Australia... And they would have a working environment that was suitable for development, with all the applications on it. It was like 50, 60 different applications. It would have a complete database with realistic data on.
20
+
21
+ So engineers who'd spend weeks setting up their machines suddenly could just get going from day one. And they had a completely safe environment - they could trash it, they could commit it... Docker Commit was a wonderful thing, and it's a shame it's kind of disappeared from view... But you used to be able to just make changes, commit; make another change, commit. This was like Save Game. The game is to make the feature work.
22
+
23
+ **Gerhard Lazu:** I missed that completely. That's crazy. I missed that feature. I didn't even know it had it. Wow, okay... Okay, go on. This is really interesting. I'm finding new things out from seven years ago, which I completely missed.
24
+
25
+ **Ian Miell:** \[07:57\] You could run your environment and then you could run a cron job that did some ETL process... And then you could go "Okay, Docker Commit. Now let's do Docker Diff. Oh, these files changed. Why did those files change?" It's like another debugging tool. It had so many things like this which were mind-blowing. We tried to use VMs, but the presales guys -- you know, presales guys you'd think would love using VMs, because it's like "Oh, you wanna see this setup? Right, here's the VM for this." They'd given up, because they were like "It's more hassle than it's worth. I've just set up a machine, I have a bunch of scripts etc." I was like "Well, just take those scripts, stick them in a container, and then we've got something that people can work on, and it makes them super-productive."
26
+
27
+ So how we met, to get back to the point, was I started talking about this in public, and I was terrified of talking in public. And I can't remember -- this is where my memory goes hazy, but it may have been the first talk I ever gave, and I was super-nervous, and I had practiced loads at home, and I got up and I said "Look, we don't do CI/CD properly. We're old school etc. But we do this Docker thing and I've found it to be transformative." And anyway, it was a really kind of raw, honest "We're messed up. Here's how I'm trying to fix it" kind of thing. And people really responded well to this. They were laughing along with me, and it was fantastic. It felt so freeing, because suddenly it was like "Oh, I'm not just getting shouted at as like wasting your time, because you all know this already." It was like, "Oh, you all wanna hear this."
28
+
29
+ So those people came up to me afterwards and said that it was great to hear the real stuff... And you came up and said -- I was gonna leave, because it felt like these were all sales talks... And then you'd get up and talk some real engineering. That's wonderful. And we exchanged emails, and that's how I remember us meeting.
30
+
31
+ **Gerhard Lazu:** That does bring back some memories, but I do have to say, maybe that day was so bad that I mostly forgot about it... Because you're right, we did exchange emails, and then nothing happened for a long time. And then we met again. I said, "Hey, Ian." And at the time I think you were working for Barclays when we met the second time... And that was like years apart. And you were in this enterprise IT for a bank; Barclays is a huge bank in the U.K., and I think even in the world... And you were working in the context of that IT department, and you were working with containers, and you were still on Docker, and I think that you were -- either you had only just published Docker in Practice (the first book), or you were very close to publishing it. Do you remember whether you'd already published it at the time?
32
+
33
+ **Ian Miell:** So the chronology was: I had found out, through surveys among the engineers who were using this Docker stuff, that one team had adopted it completely, and we found that we saved (self-reported) four man-days a month per engineer. And I thought it was actually more, because they didn't wanna say "We were wasting our time before, but now we've cut a load of time out", but basically a week of developer time a month was taken out. And I was like "Great. I have this evidence now. I have a story. I'm gonna go to the board and ask for funding." So I said "I wanna talk to the board." I'd been there 14 years, and they said "Yeah, we can fit you in in six months." And at the same time, someone from Barclays was saying "We need a Docker expert. Come work for us." And I said "No. I never wanna work for a bank." And this went on for a while. And I got tricked into a couple of interviews without knowing it, and in the end I went, because the team there was so impressive, and I was scared by them. I was like "I haven't felt this scared for a long time."
34
+
35
+ **Gerhard Lazu:** "So I'm gonna join you, because you scared me."
36
+
37
+ **Ian Miell:** They scared me...
38
+
39
+ **Gerhard Lazu:** Really? Normally the opposite thing happens when you get scared... You run away. Not you. You run towards the danger. \[laughs\]
40
+
41
+ **Ian Miell:** Well, this is actually a philosophical point... I often say to people, "You should feel uncomfortable at work about 30% of the time."
42
+
43
+ **Gerhard Lazu:** 30% afraid. Okay.
44
+
45
+ **Ian Miell:** Yeah. Not more than that... But if you're feeling uncomfortable 3% of the time, you're not moving enough.
46
+
47
+ **Gerhard Lazu:** \[12:18\] When you mean uncomfortable, do you mean like out of your depth, like you don't know enough, or what type of uncomfortable do you mean?
48
+
49
+ **Ian Miell:** Well, it's never good to feel out of your depth, right? So when I say uncomfortable, I mean -- it's kind of related to another piece of advice that I liked many years ago, which was if you have a choice between doing two things at work, and one of them makes you feel slightly uncomfortable, do the slightly uncomfortable thing. Because you're smelling the opportunity for development. Of course, it's terrible to be stressed. We've all been there - up at 3 AM, being shouted at by a customer. It's not fun, and this is not a way to live. But generally, you should be feeling like "I'm pushing myself. I'm at my limit somehow. I'm stretching, slightly." I think it's the feeling of being stretched.
50
+
51
+ **Gerhard Lazu:** I see. I think I would call it challenged. You should feel like you're challenged, you're learning something you maybe don't know, you don't have all the answers and you're figuring things out... Because that's growth. Failing - that's great; keep failing, because those are the things you don't know, and that's how you learn, and that's how you grow. So I think challenged is how I would put it.
52
+
53
+ **Ian Miell:** I think that's a better word, yeah.
54
+
55
+ **Gerhard Lazu:** Okay, okay. So 30%, like a third of your time you should feel challenged, so that you're learning, so that you're not being static, stagnating, and feeling like "Well, what's the point? Nothing's happening. I'm not going anywhere. I'm stuck."
56
+
57
+ **Ian Miell:** Yeah, so in this 14-year company that I worked for, it was a very narrow domain, and everyone there was an expert in that domain... And it was a very challenging environment; people were really into it, and you had to kind of work -- but you were working in this narrow domain. So I went to talk to these people at Barclays and they were part of the infrastructure team. So it wasn't only that they were working for Barclays, which is, as you say, a huge organization, but they were in the belly of the beast. They were trying to produce the stuff that the rest of the business would use. And one of the things when I'm explaining how enterprises are tough, I try to explain that if you're in an engineering team in a big company that has a lot of process and a lot of oversight (regulation), a small team can get away with a lot if they wanna go around the side, because it's like "It's one app. It doesn't really affect the whole business. There are risks. We need to mitigate those risks. But as long as we have looked at it..." "Yeah, you can have that." You can get away with this, you can get away with that. But if you're working in infrastructure, you're delivering stuff to the whole business. The whole business is gonna use it.
58
+
59
+ At the time, at the beginning, we were working on OpenShift version 2, which was pre-Docker. That was rolled out to the whole business, so any team could use it. So the rigor and the demand on security and audit and control was enormous... And it was very little about technology. I mean, the choice was made, the product was chosen and used pretty much out of the box. But everything else around it - the architecture had to be a certain way, and be built a certain way. We had to figure out workarounds for certain problems. These things were really hard, and I'd never experienced them before.
60
+
61
+ So one of the things that was really attractive was that I'd be going from somewhere unregulated -- we had root in production everywhere, and these huge databases with millions and millions going through them every hour -- to a place where everything is really tightly locked down. And I was like "I've never worked in that environment, I've never worked in infrastructure, I've never worked in banking." Worst-case scenario, I'll get six months of banking experience, which is probably valuable somehow somewhere else. So even if I go in and they think I suck, at least I'll take away something from the experience. So I stayed there for 3.5 years...
62
+
63
+ **Gerhard Lazu:** \[16:04\] You went from that initial Docker experience to the Docker in Practice book I was talking about. The second time we met, you were working for Barclays, and I think you had maybe just finished writing the book, or it had just been published... It was around that time. This was the first edition. You wrote a second edition as well since, so...
64
+
65
+ **Ian Miell:** Yes.
66
+
67
+ **Gerhard Lazu:** I don't think you wrote another book, but you did write some courses... Is that right? Or like self-published books, the Hard Way. Because you had Git the Hard Way. Bash the Hard Way. Terraform the Hard Way. Were they self-published books, or courses? What were they?
68
+
69
+ **Ian Miell:** Yeah, the book was published when I was at Barclays, and that was a very long process. And then when the book came out, it sold very well... So they said "We want a second edition." I thought naively that the second edition would be "Add a chapter here, take a chapter out there, revise some stuff, fix some stuff, and we're done. It's not gonna be much work." But actually, they treat it like a whole new book. You don't literally start from scratch, of course, you take the original book, but they go through each one, and you have to work through each part. So it was much more arduous than I thought.
70
+
71
+ The self-published books came about because -- well, I can't actually remember the chronology. I think I did the Git one first. So I was working in Barclays infrastructure, and there were all these super-smart people around me, but they were very infrastructure-focused; they were not from a dev background. And they were doing these projects with Terraform in the cloud; they were actually trying to create accounts per team that were automatically provisioned and had all the controls that were needed, and so on, but were still AWS-native; so it wasn't like a wrapper around AWS. It was a kind of template for building out accounts.
72
+
73
+ Anyway, long story short - they came to me one day and said "Oh, Ian, you know about Git. Can you help us with -- we're trying to find out where this change came from." I was like "Oh, great. Just send me the reference to the repo. I'll download the repo and I'll figure it out." So I downloaded the repo, did `git log --graph`, my usual trick... And I got a page of pipes. I was like, "Okay, what's going on here...?" And I said "How is this structured?" They were like "Well, we have 12 teams working on this product, and each team has about seven people... And each team has its own branch, and they have features off those branches." I was like, "Okay, do you rebase?" They were like "What are you talking about? No, we merge." I was like, "Okay... So this is why I'm seeing a whole bunch of pipes, because there's so many branches, and they're all merged in, therefore there's no single line of change."
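+ 
+ As a rough sketch of what Ian is describing - the branch names here are invented - the "page of pipes" and the rebase alternative look something like this:
+ 
+ ```bash
+ # The "page of pipes": many long-lived team branches, all merged into each other
+ git log --graph --oneline --all
+ 
+ # The merge-based flow the teams were using keeps every branch line visible,
+ # joined together by merge commits
+ git checkout team-a
+ git merge main
+ 
+ # The rebase-based alternative Ian was explaining: replay your branch's commits
+ # on top of main, so the history stays a single line of change
+ git checkout team-a
+ git rebase main
+ git log --graph --oneline
+ ```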
74
+
75
+ So I tried to explain what rebasing is, and of course, I got to the deeper points about Git... And I realized, "Oh, to get there, I need to explain all this other stuff." So I created a course for people at Barclays who were using Git, but wanted to understand it more deeply.
76
+
77
+ I'd read "Learn Ruby the Hard Way", because I was doing Chef at the time, and I'd read a couple of other of the books by Zed Shaw, who kind of popularized the Hard Way method. I wrote to Zed and I said "Is this your trademark/copyright? Would you be pissed off/upset if I used this in my books?" and he said "No, it's fine. I can't trademark it. It's actually an older idea that I took..." So I was like "Cool. I can use it. Well, I've got this course. I could turn it into a book." So I did that. The book was pretty well received.
78
+
79
+ Then I thought "You know what - I don't know Bash as well as I like, and there's so many things about Bash where I'd like to use the Hard Way method so I can understand how it works..." So I wrote one on Bash, and that was really popular. That sold really well.
80
+
81
+ \[19:51\] Then I was learning Terraform and I thought "This is how I learn stuff now." I pick up a technology, I use it, and then I'm like "I really wanna understand it more thoroughly. Writing a book is the way to do that." So that's what happened with Docker - I felt like a complete impostor; they were asking me to write this book because I'd been giving talks and someone had recommended me... And there I was, writing this book, thinking "Surely, someone proper should be writing this book, but not me." But I got away with it, and I thought "Well, let's keep this going."
82
+
83
+ And of course, self-publishing is easier in the sense that you have more control. No one is telling you how to write it, no one is telling you you need more of this, that and the other, and you can kind of follow your vision more closely.
84
+
85
+ **Gerhard Lazu:** If you were to write a book today, would you go via a publisher, or would you self-publish?
86
+
87
+ **Ian Miell:** It would very much depend. I really enjoy the self-publishing, for two reasons. One is pretty selfish - I take 80% of the money. But more seriously, you don't write books to become rich, you write books because they develop you... And it would depend in which way it would develop me.
88
+
89
+ My interests now are moving up the stack towards management and consulting, and so if Wiley came to me and said "Would you like to write a book on money flows in tech, in organizations?" I would be up for that, because it's Wiley, and that would increase my credibility.
90
+
91
+ Something the editor told me when I started writing Docker in Practice - because I said, "No one writes books for the money, right?" He said, "Well, you can make some money, but what you'll find is that it puts you in a different category in the industry." And that was very true. Suddenly, doors are opening to me. Not because I knew more about something than other people I knew, but simply because I'd written a book.
92
+
93
+ I always say to people, if someone tells me now "I've written a book on X", I don't think "Wow, they must know so much about X." I think "Wow, they must be really organized." Because the hard bit of writing a book, once you get past the basic knowledge you need to write it, is having the level of organization to hold down the job and do it.
94
+
95
+ **Gerhard Lazu:** I can see that. It is discipline, you're right. It is the commitment, it is seeing it through shipping it... Because it's very different -- I mean, maybe if you self-publish you can do chapters at a time, and then the book grows... With a publisher it's a little bit different. It's a bit more structured, it's a bit more demanding maybe... And you have to make that stuff work besides your job. I mean, okay, maybe your employer is understanding, in a way, but it doesn't mean that you can stop working and just focus on your book. It doesn't work like that.
96
+
97
+ And still, you managed to write a published book -- well, two, because the second edition is not the same book, as you pointed out... And you also self-published three books. And also, you have the course. We'll add a link in the show notes, but if you're curious and if you want to contribute to Ian's future self-publishing books, check them out. Because I think contributing directly to the authors is a great way of showing appreciation, showing that "Yeah, I like that. It's great content. Thank you for it." So yeah, that really helps.
98
+
99
+ **Break:** \[23:11\]
100
+
101
+ **Gerhard Lazu:** I would like us to start going a bit up the stack, as you mentioned... So we talked Git, we talked Bash, we talked history, how it all began... Very different recollections of that. I remember that I started giving you some feedback on the Bash the Hard Way, but I don't think I finished... Maybe like ten pages, or something like that; I was very busy and I never finished that review properly, so sorry about that... It was not intentional.
102
+
103
+ **Ian Miell:** I don't remember, but yeah, it's fine...
104
+
105
+ **Gerhard Lazu:** Yeah, I still remember that. When you mentioned Bash, I was like "Ahh, I should have given you some feedback five years ago or three years ago (however long it was)." Yeah, I did like part of it, but then I had to stop.
106
+
107
+ So the title of this episode is "Follow the money." And the reason why it is what it is is because you told me that this is something that you have on your mind quite a lot. You've mentioned very briefly money flows, so first of all, I'm curious what money are we talking about? Is it the proverbial money, is it the actual money? Who should follow it and why?
108
+
109
+ **Ian Miell:** Sure, yeah. So I was aware that when we called this "Follow the money", that it would sound like I was saying "Hey, you should get all the money you can as an engineer." And that's very much not what I'm saying... Especially when I mentioned that I worked for banks, it really sounds like I was money-grabbing. So I wanna make that clear from the outset - what I'm talking about with "Follow the money" is that... So as I mentioned, I worked in a very narrow domain, and the economic model of it was really simple; so simple it was invisible to me. So customer wanted thing, customer had platform, customer paid licenses. Customer wanted thing, customer had time and materials, and customer paid for ongoing support and maintenance. That was the model.
110
+
111
+ I didn't think about this too much as a significant thing, but then we tried to convert to being a product company, and failed. There were also reasons we could argue about as to why that failed... But I took that experience, I had my ideas about whether we used the wrong technologies, or we had the wrong culture, or whatever. I took that experience and then I went to Barclays. And I found that there it was far less about the technology and far more about the organizational side. So I became less interested in technology, because there were better and worse technologies for any given situation... But really, what causes success or failure in these organizations is the structure of the organization, and so on.
112
+
113
+ There was a common meme when we talk about DevOps and we talk about software engineering development, that when you're young, you think about tools. "Should I use Go, or should I use Java, or should I use Ruby...?" And people really focus on that. Then the wise old person says "No, no, no. It's not about that. It's about team organization, and agility, and so on." So you get to thinking about these things. And then the slightly older person says "No, it's about culture." And then I'd find the conversation just stops. Like, people just say it's culture, and you go "Okay, it's culture. Now what? How do I change a culture?" And this got me thinking -- you know, I was one of those humanities students. I came to software when I was 25, when I wanted a different kind of career. And as a humanities student, I studied modern history; I obviously read about Marxian ideas, and one of the Marxist ideas was the material base, the kind of structure of material exchange in capitalism, and so on, ultimately determines the superstructure, which is culture. This is kind of where my mind's been going recently, because I've been involved in various projects where we're looking at why things are struggling to happen, and we uncovered that the business thinks of things in a completely different way than the engineers.
114
+
115
+ \[28:04\] The engineers think of a platform as this continuously engineered product, and it needs constant investment, and this investment is repaid over time, and the more you build it and the more you automate, the more you get back on that investment over time.
116
+
117
+ But some organizations are still in this very transactional "I pay for something and then I have a thing", and then that's it. So getting them to think in this kind of continuous investment way and selling something that isn't just "Oh, we had ten engineers working for a month, and therefore it costs X", but rather "We have this thing that is of value to you, and it's of this much value, and we have to invest in it to maintain it." This kind of way of thinking comes ultimately from the way companies run their accounts, is my hypothesis. And in order to really debug an organization and why it's struggling to move to cloud-native, or DevOps, or whatever, you ultimately end up back with accounts.
118
+
119
+ I saw this in banks, to some extent, because we were building a platform that was consumed by the rest of the business, so it was like a product, but there was actually no way for us to be paid internally for that. So we had - I don't know how many teams - many, many teams using this platform, but the ones who were using it more than others weren't necessarily paying more. It was very opaque; we couldn't actually get to the bottom of how the money was moving around. And actually, the whole business was structured around yearly review cycles, yearly accounting cycles. You couldn't say like "Oh, we have a new version of this product and we need to invest in previous cycle." It's like, "Well, that wasn't in the original budget for the year." So you can't be agile when you have to think in yearly cycles. You just think "What are we gonna do this year? We need to get the budget for this year." It doesn't work where every three months the number of people on our platform is doubling, but we had no more budget to support that, so we had to borrow, beg and steal money from other bits.
120
+
121
+ These are all questions that make me think "Oh, you know what's at the bottom of this? It's how money moves around in an organization, and why money moves around in an organization." If you can sort that out, or at least get that aligned with the way you want to work, then suddenly things can move much faster. Organizations that are built from the ground up as microservices typically have a different cost structure. They allocate money to teams, and they're responsible for products.
122
+
123
+ I spoke to my CFO where I work at Container Solutions and he said "Yeah, there's this whole thing called agile accounting." And he was a big fan of it. He turned me on to a couple of books on the subject... It tries to overthrow this yearly cycle in favor of continuous accounting... Because that's familiar to us, right? Continuous accounting, continuous software.
124
+
125
+ So there must be a Conway's Law type thing here, which says that the organizational structure determines the tools and technologies you use, but what determines the organizational structure? Well, one way to look at it is the financial structure. How does money move around? So you get this thing of like -- you're always getting back to money. So this is why I say "Follow the money", because if everything is working fine, you're getting the resources you need to do your job properly, and then it's invisible; you don't care. But when things are going wrong, it's like "Well, I'm debugging..." We did the Five Whys exercise with our customer recently. It was like "Well, why can't you allocate someone full-time to this platform?" And they said "Well, we can't do that because they get pulled away." "Why do they get pulled away?" "Well, because we have other projects." "Why do you have other projects?" And we kind of went around in circles, and then eventually we got to "Oh, because you're a time and materials company, and you think of things in terms of built, finished, done, and not in terms of 'Oh, this needs to be continuously maintained.'" So the senior management are saying "The platform is finished. Why are we still talking about it?" It's like, "No, that's the wrong way to think about it" - but in the accounts it's a thing, and now it's depreciating: "We finished it. It's built."
126
+
127
+ \[32:06\] Once we got back to that, someone else - someone very experienced - pointed it out and called it: accountants are very used to the idea of investment. If you buy a warehouse, they understand that you buy the warehouse and you have to maintain the warehouse. Amazon don't just buy a warehouse and then think it's done. They buy a warehouse, big expense, and then they say "Right, there's gonna be some ongoing expense here, as we need cleaners, we need janitors, we need security etc. Or we need to rebuild a part of it, because the products we store in it have changed."
128
+
129
+ This is all not new to accountants, they understand it... But we don't talk to them. As engineers, we're scared of -- maybe I speak for myself, but it's a bit like when people talk to us, they suddenly get these jargon words, and we're like "We have no idea what you're talking about." It's like for us when we get to accountants. They start talking about tax deductions, and things we're not that interested in necessarily... And it's very technical. But if you get someone who wants to work with you, it can be completely eye-opening.
130
+
131
+ I remember learning about the difference between cap-ex and op-ex, and I was like -- suddenly, it makes sense to me why the cloud has taken off, because it's treated differently from a tax point of view and a spending point of view. Suddenly, cloud makes a lot more sense.
132
+
133
+ So I haven't formed this into a coherent theory yet. There's still some thoughts sort of flying around in my head, but it's something that I'm super interested in exploring more.
134
+
135
+ **Gerhard Lazu:** I find that really fascinating, in that you're right, we rarely think about that. Maybe if you're a self-bootstrap business and you have a product that you're working on, maybe the relationship is very clear between what you're investing your time in and the return on that investment. Are you doing the right thing? And if you are, then the money is going up, and the income is going up, and you have more money to work with, which means that you can spend more time on the things that maybe generate that money, or other more important things... But the whole point is just keep the money flowing, keep the money - not necessarily coming in, because it has to go out as well. You're not hoarding money. You use money to make more money, that's the way it works (or it should work).
136
+
137
+ It's interesting to think of it in terms of money flows, and continuous money flows. I think when you're starting a budget, even your personal budget - you know, like, okay, you're saving money, but don't focus on the pot, focus on what's incoming and what's outgoing. As a result of that, you will build a pot. That's just how it works. Or different types of pots. So that makes sense.
138
+
139
+ As I mentioned, a self-bootstrap business - I think it's very clear. A medium-sized business - maybe it starts getting a bit more opaque. 100 people, 200 people... But once you get to tens of thousands, even hundreds of thousands of people, this is completely opaque. You don't even actually -- and I know that some companies, you can't even know how much money you're making; it's just not allowed beyond a certain level. I'm sure there's very good reasons for it, especially if there's like an IPO company, a publicly-traded company, things are even more complicated.
140
+
141
+ So this is very interesting, but again, I fail to see how this would make -- I mean, smaller company yes. Bigger company - it's just impossible. So would you use some sort of proxies?
142
+
143
+ Let's imagine that you're part of a big company, and let's imagine that you're working on a product which is sold for licenses, as you mentioned; you buy the thing and you use the thing, and then you renew your license every year. Let's imagine that the team that is working on the product cannot know the revenue that those licenses generate, regardless of the reason. What do they do?
144
+
145
+ **Ian Miell:** \[35:53\] Well, yes, this opaqueness is absolutely part of the problem. I think what I was talking about earlier was more like structural stuff. But if we're talking from a point of view of an engineer - when I was working for very large organizations, I had the same question, "How can we measure the value we're generating for the business and then justify more revenue as a result?" And they got really political and complicated, because the best measure we came out with was the number of teams in production, or using the platform. And there were different measures: production, testing, dev etc. And we took those numbers, but we didn't even really know where to take them, because as I say, we had these yearly budget cycles, which were way up the chain, beyond what we could understand. We had no idea if this information was going up to that part.
146
+
147
+ So even that structure wasn't set up for people to think about things in terms of "Well, I'm delivering this value to the business, and therefore I can justify this funding for future growth." It just hadn't been joined up.
148
+
149
+ When you're a company of a certain size, being opaque is fine; most people don't have the time, or energy, or inclination to think about these things. When those companies want to change, they just say to themselves "Oh, we'll just be Agile now. We'll put some posters up, and we'll have some courses, and we'll use a few new words to describe small things, and we're okay. We're Agile now." And my feedback to companies who are trying to do that is always "You have to think deeper than that." And usually, we stop at culture. But I'm starting to think that's not enough, because like I said, okay, our culture is broken. Now what do we do? You can't just magic up a culture. It's very vague.
150
+
151
+ In fact, I wrote a blog piece called "Five things I did to change an IT team's culture." I was given the task of managing an IT team where the director had left... I did five specific things to change the team's culture, from firing someone, to getting my hands dirty on the floor and looking at the pipes of work that were coming in... Because I wanted to say -- you can't just say culture is a problem and then walk away. "Here's my invoice. Culture's a problem." You've gotta actually suggest something. And so at the time, the things I was suggesting were tough things, like firing people, and getting your hands dirty on the floor to see what's happening. Like in The Goal, the book we were talking about before the podcast; one of the key things in that book was that the manager never actually went to the floor until he had to. So he had all these reports about the machines in the factory, and then he went on the floor and he's like "Hang on a sec, that doesn't really match. The reality on the floor is that we have these two machines which are sitting idle. Why are they sitting idle?" "Oh, because you told us we should use the new machines, which are less productive." Like, "No, switch the old machines on. We need to deliver."
152
+
153
+ Going to the floor is a huge thing. That's practical thinking, too. But another practical thing that I'm thinking about now is this whole money flows thing. And it's not an easy sell, because "Why is this guy who knows about Kubernetes suddenly telling me about cap-ex, and op-ex, and money flows? This is not what I hired him for." So you kind of have to be patient. But eventually, you get there with them, and they can start to see that this is potentially a problem. But I think it's a very underexplored area, and I'm super-interested.
154
+
155
+ **Gerhard Lazu:** The more conversations I have with different people, the more I realize that it is this holistic approach that people are not taking. They are comfortable talking about being uncomfortable in their own little narrow area, whether it's coding, whether it's operations, whether it's whatever it may be... And they don't have the energy, don't have the interest, don't have the time to step out of that narrow area and get uncomfortable. But maybe the real opportunity is outside of your comfort area. It's when you start thinking "How does the money flow in the business? And how does my work impact the money flow? How does my work and my approach impact culture? Am I being a bad person?" I wanted to say "Am I being a jerk?", but I don't know whether that's politically correct; I don't know what the politically correct term is, but am I being a difficult person about this? And could I be doing more? And am I doing the right thing? Am I even doing the right thing?
156
+
157
+ \[40:25\] Maybe we look up to our managers and our leaders and expect them to have all the answers... But it's a collective thing. It's a team effort. We have to come together, and different people bring different experiences. And you're right, I think this is a very underexplored area. We don't know the relationship that exists in different companies between money flows and the value that we create and the value that we contribute. Maybe you would like to work more efficiently, but what does it mean? It's not more lines of code, it's not even deleting lines of code. It's not that. It's something else. It's mentoring, being kind? Sure, to some extent yes, but there's more to it. So what is this more?
158
+
159
+ **Ian Miell:** One of the things that I now advise every team that's moving towards CI/CD is not Jenkins versus this or that, or whatever tool you're thinking of... I say "First, measure how expensive it is to make a change on your system." Let's take a CSS change to your site. One hex value change. How long does that take to go through the pipe as it stands? And you get answers of like "Well, it takes three weeks, because QA have to check it, and we have to get the project management, we have to get this done etc." And it's like, okay, so that can take three weeks. So you have a number. You end up with some number. One small change, one trivial change costs the company X.
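+ 
+ As a rough, back-of-the-envelope sketch of that exercise - every stage, headcount, hour and rate below is an invented example - the point is simply to end up with one number you can compare before and after:
+ 
+ ```bash
+ # Put a rough price on pushing one trivial change through today's pipeline.
+ hourly_rate=75   # assumed blended cost per person-hour
+ 
+ total=0
+ # format: stage:people-involved:person-hours-each
+ for stage in "dev-change:1:1" "code-review:2:1" "manual-qa:2:8" "release-meeting:8:1" "deploy:2:3"; do
+   IFS=: read -r name people hours <<< "$stage"
+   cost=$(( people * hours * hourly_rate ))
+   printf '%-16s %6d\n' "$name" "$cost"
+   total=$(( total + cost ))
+ done
+ printf '%-16s %6d\n' "TOTAL" "$total"
+ 
+ # Re-run the same exercise once tests and deploys are automated, and you have a
+ # before/after cost-of-change figure to show the business.
+ ```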
160
+
161
+ So now do we do GitHub, or CI/CD? Arguably, they're the same thing... And we then measure the cost of the change now. So now you've automated your tests, now you've done this. Oh, you may still have a legacy QA process that makes everyone feel happy, but it's short-term, because that stuff is destined to go. We have fewer errors, which means fewer things going back around in the cycle. We have fewer bugs resulting. That's the other thing - how much has the business tracked the cost of maintenance of their products? How much is that related to -- actually, something has just occurred to me, which is that when I worked with that company before, I worked in third line support, and we were well-funded. I mean, we felt super-busy. We were super-busy and stressed. But if we needed more money, we could go ask for it... Because customers paid for support, and they paid well for it, because they cared about it. So they would give us lots of money for support. It was pooled up into one bunch of money, and then we had a team that serviced all these customers, composed of engineers and tech leads together, fixing stuff in real-time.
162
+
163
+ Other companies don't have that money flow. They don't have the same thing; they ignore maintenance, and kind of beg, borrow and steal energy from engineers to fix stuff while keeping them busy with other stuff... But they just pretend it's not there. So if you can get measurements on these things, then you can start to draw a graph and say "Okay, we've spent X amount on the platform." Platforms are very front-loaded; you have to spend a lot of money before you get any kind of return... But the return should be felt over time. This is not a new idea in business, it's a very old idea. But you can end up with a graph which says "Okay, so we've spent a million pounds building the platform", which is done now in the old way of thinking; whereas before it cost 10,000 pounds to make a single change, now it costs 1,000 pounds. So we've cut that by a factor of 10. So we used to do ten releases a year, now we can do a hundred.
164
+
165
+ \[44:01\] So that's the financial metric... What's the non-financial metric? Well, we can do even more. We have fewer bugs, we have easier rollbacks, we don't have big meetings with lots of project managers in them and tech leads discussing the huge amount of changes that are going in once every month. All these things make a very good case for "Give us more funding and we can make it better."
166
+
167
+ You also have to link to sales. Sales is a pipe of money, right? And if sales think of things as transactional - you pay for something and it's just done - that's incorrect. With one organization we ended up discussing whether we should have invisible line items on sales, to say "Well, behind the scenes they're paying for this product, but they don't know it." Or you put it on there and they say "What the hell is that?" and you go "Oh, don't worry. We'll scrub it off for you", but actually you account for it internally. And by doing that, you can say "Okay, we're bringing X amount of money in, therefore we can have our own team, therefore we can have focus, therefore we can hire maybe more people who are more focused on this new way of doing things", and you potentially build up this virtuous circle.
168
+
169
+ But if you don't think about these things when you're starting the process, then it can flounder as you encounter them later. That's a pattern I've seen in all the businesses I've worked with that want to change the way they work. You start by thinking about the tools, and then you end up dealing with stuff that you didn't even think about, like how sales thinks about what they're selling. The salesmen are super-smart at hiding costs and changing invoices to make the customer happy, but it's actually the same amount of money. These things turn out to be super-interesting if you get involved with them, but it's not where you start as an engineer. You start over there -- I'm sure actually CFOs go through the same journey, because a lot of accountants become CEOs. They start thinking of things in terms of nice, clean spreadsheets, and then they go and become a CEO and suddenly they're axing a whole team, because they look at the spreadsheet and it's "I can get rid of this number", but they find there are also secondary effects they hadn't thought about, because they come from that background.
170
+
171
+ We have the same problem - we think of things in terms of technology and process, and we like system design, and thinking of things in terms of that, but we don't wanna think about the money; it's kind of annoying. "I have to go and talk to someone about getting -- just give me the money I need to get the job done. Why is this--" But that's the reality of what we do. It's all connected. It's holistic.
172
+
173
+ **Break:** \[46:37\]
174
+
175
+ **Gerhard Lazu:** So we were talking about money flows, we were talking about the cost of change, which is something that really stuck with me. We were talking about the holistic approach to developing and shipping code. Shipping value. It's not just code, it's value, and I think if you start thinking about things in those terms, the work that you do will be more valuable, because that's the way you approach it. It's not about lines of code, it's not about changes, it's about what you contribute to the world. Maybe the world is a bit too grandiose for some, so let's say our users - whether it's five, five thousand or fifty thousand; it doesn't really matter. Users will benefit from what you do.
176
+
177
+ So with that in mind, what would you say is the biggest mistake, in your experience, that you've seen developers make? Because you've been around the block a couple of times now, you've seen small orgs, big orgs, banks, startups, everything in between... Maybe there are multiple mistakes that you see developers make, or software engineers. Anything that stands out?
178
+
179
+ **Ian Miell:** Yes, I think it's related to what we were talking about - it's the lack of holistic thinking... Which is completely natural. Because if you're a specialist and you're young, you will have focused very tightly on your domain of knowledge, and worked hard to improve that, and we've all been there. I make this mistake, of course. I think every engineer makes this mistake as they're younger, so I'm not saying I'm somehow special. But as you get older, as I get older, I think more holistically about it. So what I find is that engineers think -- let's take an example... I had a lot of engineers surrounding me when I was working at Barclays, because I was named as the owner of Docker, this technology. So each technology in a company like Barclays has an owner, and they're responsible for the lifecycle of that technology within the organization.
180
+
181
+ So people were friending me up and saying "I wanna use Docker." And I would have to explain to them "I can't just install it on your machine. There are these following blocks." And some of them were interested, but most of them were just kind of impatient, like "Can't you just give me what I want?" And what I've found was that those people that thought about it more were more successful, because they would say like "Okay, so you need XYZ to do it. Is there a way I can talk to my manager or talk to their manager or something to kind of get the money moving around, so you can get what you need? Or can I join myself to your team and actually do the work with you to get it done?"
182
+
183
+ So that's kind of a microcosm; that's kind of a single anecdote. But generally, I see engineers just thinking about things in terms of technology. Here's a perfect example - I made that mistake when I tried to do Erlang in the business. I thought "Erlang is a perfect fit for some of our problems. It's a functional programming language, it's message passing, it's actor-based... This will have all sorts of benefits for us", and I just went full speed ahead with that. And then when it came to actually trying to roll it out, we found that engineers used to C syntax really struggled with Erlang, to the point where they were just writing things in misguided ways because they just didn't grok the way things should be built. It's like cloud-native, you have to think in a different way to build it. And that doesn't just happen overnight. People still use their old mental tools to apply them. So this initiative floundered.
184
+
185
+ So this is a mistake I often see, failing to see -- and it's not a criticism of them, of these people... But if they fail to respond to that challenge, that's the mistake, I think, that they make... Because you've gotta learn these rules of business somehow. Or you can just stick in your own domain and work away at it. But usually, you get to the point where you're frustrated by something, or you can see how something can be done better, and you want to kind of go and make that change... And at that point you need to start thinking more widely.
186
+
187
+ **Gerhard Lazu:** \[51:52\] I think it is valuable to stay focused, especially if you're passionate about something and you feel connected to that thing, and you feel like you're making great progress... That is very valuable. Some call it flow... By all means, do that. That is important. But be aware there's more to it than that. And if you stay in that mode for too long, your success will be hampered by the fact that you're staying in that mode. It's almost like the equivalent of taking your headphones off, walking around and talking to people. It's the equivalent of that... But in this case, it's the business. It's maybe the sales department. It's maybe the marketing department. It's all connected. It's accounting. And you can ignore it; not a problem. Again, there's no right or wrong here. What we're saying is that if you're a bit more aware of what's happening around you, you will be more successful. You'll feel more connected to the business.
188
+
189
+ **Ian Miell:** Yeah.
190
+
191
+ **Gerhard Lazu:** So look around. Stop. It's okay to stop. It's okay to take a break. Because guess what - when you get back into flow, you'll be more efficient, you will be more guided; you will be better connected to everything else... Because it is a team sport. It's not just you. Even when you think it's your company, and it's your product, and there's no one else, you have your users. So you have to pay attention to that. And it's all the things that you're not doing, that you should be doing. And that's when a lot of people get stretched, and they get out of depth; it's like a nice little progression, it doesn't happen overnight. That's okay. Be kind to yourself and to others (but it starts with yourself), and open your eyes. The world is a big place, and there are people that want to help, there are people that know what they're talking about, there are people that failed in N ways until they realized there is a better way than just coding and shipping and coding and shipping. There's so much more to it.
192
+
193
+ So you've mentioned Container Solutions a couple of times... I know that there's this very -- well, I won't say very popular, but it's a concept which is increasing in popularity. WTF is X. So my question is, first of all, who is Container Solutions and what is WTF is X?
194
+
195
+ **Ian Miell:** Right. So who is Container Solutions - Container Solutions is a consultancy that helps companies move to cloud-native. That's typically what we do. But we like to think that we have a wider perspective than just that. What we often do with companies is we don't just go in and say "Hey, you should be using Kubernetes" or "You should be using Docker." Actually, sometimes we say "You shouldn't."
196
+
197
+ My first assignment actually with Container Solutions (or CS, as I call it) was to go to work on an assessment - we do these two-week assessments, where we go in and we interview 15-20 people within an organization, the key people, for an hour each. We collate the information we've gathered, we synthesize that, and then we analyze that, and then we produce a report of how they should get to where they want to be, or how we think they should get to where they want to be. And one of these companies - they said "We wanna use containers" and we said "No, no, no. You're thinking about this wrong. Maybe for the new stuff you use containers, but the old stuff stays where it is, because it's low risk, it's generating huge piles of cash for you... So just leave it. Invest in it, maintain it. But if you try and rebuild everything from scratch, you incur a huge risk." So first learn on the side, and maybe learn from those lessons, and draw them back in.
198
+
199
+ The reason I joined Container Solutions was because I had a meeting with the owner, a guy called Jamie Dobson, and he immediately figured out my frustrations with work, and told me "I can help you enjoy work more." And that's exactly what's happened since I joined... Because now I feel like I'm thinking about the real problems. I've spent years working in large organizations, seeing projects flounder, and it was nothing to do with tech, and it was nothing to do with a lack of will, or anything like that. It was simply that the problems weren't being analyzed in the right way.
200
+
201
+ \[56:09\] So what happens is we often get companies coming to us saying "Hey, we want Kubernetes. You can give us Kubernetes, right? How much will it cost to get Kubernetes?" And we kind of say "Yeah, we know Kubernetes. We have engineers, we write operators, we do all this stuff... But we don't start there. We start at a higher level and look at the way you're approaching this, and have you thought about all the things that you need to think about." And companies really like that... It builds a lot of trust... Because we often go in and say things which seem to be not in our interest.
202
+
203
+ We did an assessment recently of one company, and we basically told them that they're actually a shining beacon of how everything should be done, and maybe we can learn from them actually, more than them from us. That was really satisfying, because we got to tell the truth, and not just say like "Hey, we've got this whole model of how you should move everything to Kubernetes, and have a platform team... This is our blueprint for you." We don't have a blueprint, we have an analysis process.
204
+
205
+ So we get to tell the truth... That's a lot of fun. A lot of fun. Obviously, we have our expertise in tech, and that's where we come in, so we have to demonstrate that... But yeah, that's CS.
206
+
207
+ **Gerhard Lazu:** One of the things that I find very valuable on your GitHub, on the Container Solutions GitHub, is the Kubernetes examples that you have, the Terraform examples... I think you had a lot to do with that. The reason why I wanna call these things out is because it is -- like, if you care about the tech, check those repos out. There's a lot of very good examples and very good approaches to something that you would maybe google for. Search for them. There's a lot of good stuff in a single place. So if that's your curiosity, check it out. You will find it super-useful.
208
+
209
+ The other thing which I want to say, in conclusion to what you were saying, is that it really resonated with me: helping people in the way that they need, not necessarily telling them what they want to hear. Telling them the truth, the reality... Like "Hey, you don't have a problem." Or "You do have a problem." Or "The problem that you think you have - it's not the actual problem." So having this honest approach will always win. Even if it's not in your best interest. Because what you care about is the customer's best interest. That is a winning approach in anyone's book, I'm sure. If you care more about the other company/person/whatever than you care about your own interests - that's great. And find the alignment - where do your interests meet with mine? If there's no overlap, that's okay. It's just not a good combination.
210
+
211
+ **Ian Miell:** Yeah, it is very satisfying. It's more satisfying to work in an environment like that. You learn a lot more quickly, because there's no formula. You can't go into a company with a fully -- everyone has prejudices and experiences which fly against reality, but the 15 hours of interview process is a really good way to get beyond that... Because you have to actually start thinking about "How did these people see it? How is that different from my way of seeing it?"
212
+
213
+ You very quickly develop a mental model of how different organizations work, and that's really a very valuable skill to gain. It's something that I've really developed over the last year, because I've worked with companies that are fully microservice-based, and then companies where they don't wanna use cloud because it's too expensive, and they like having their data center. And we actually told them "Keep going. You'll be fine."
214
+
215
+ So the reason we can say that, in that example, in that case, was because we can see that you understand the limitations of what you're doing and how you're doing it, but you've actually accounted for that; you're ready for those challenges. And they did actually use cloud for some things where it made sense, but they were very resistant to it.
216
+
217
+ \[01:00:02.10\] Anyway, you get to see all these different ways of thinking in environments, so you learn to become more flexible in your thinking, and it affects everything... Whereas when I worked in the same place for 14 years, I felt very stuck in a very particular line of thinking and way of thinking, that has been challenged heavily since.
218
+
219
+ **Gerhard Lazu:** WTF is X, that's what I'm thinking.
220
+
221
+ **Ian Miell:** Yes, yes. We didn't get back to that, right. So what happened is, before the pandemic we'd been very active in conferences, and setting up events, and so on... So when the pandemic hit, suddenly it was "What do we do now?" This was a whole area of the business that we had hired people to do. So we flipped and we started doing some online stuff and we learned some lessons, and we did a couple of conferences... And then the marketing team at CS and Jamie came out with this concept of WTF. It was a really great hook, because I think a lot of people in -- I mean, we all know in tech that there are all these things coming at you all the time: GitHub, microservices, Docker, Kubernetes, Swarm... They never stop coming.
222
+
223
+ **Gerhard Lazu:** Yeah.
224
+
225
+ **Ian Miell:** And you go to a conference and people are talking about it and you are kind of smiling along, and sometimes you know what they're talking about, and sometimes you think you know what they're talking about, sometimes you have no idea what they're talking about... And you don't often go "Hang on a sec... Time out. Sit down and explain this to me carefully for 20 minutes." You don't do that. You just kind of go "I think I get it from context. It's fine."
226
+
227
+ Everyone does this in the industry, and it's even harder if you don't have a technical background, or maybe you're a buyer, or a senior leader who is less confident about this stuff. So wouldn't it be great if we had this sort of banner of WTF, so that we could actually say "You can come along to these events and we'll try and explain to you what it is. And if you have questions - great, let's hear them and maybe we'll discuss it more."
228
+
229
+ One of them we did was WTF is GitOps, which I did... There was a small technical demo, but it was really just kind of showing people who wanted that kind of thing really "Where does this come from? What does this mean? If you hear this in a meeting, what should you be thinking? Where can you slot this?"
230
+
231
+ I don't think there's enough of that in software. I've noticed over the years that the really experienced engineers and the really confident engineers are the ones who always say "I don't know what that means. Tell me what that means." And then there's often a very short five to 20-second discussion between the two people about "What does that mean?" "Oh, it's like this." "Oh, so it's like that." "Yeah." "Okay, right. Now I have a clear idea what it is." This is what experienced people do. It's that old paradox of the more experienced you are, the more comfortable you are saying "I don't know what you're talking about." That's something I had to develop when I worked in banking, because you're having acronyms thrown at you all the time, and you have no idea if it's industry-standard, if it's company-standard... You have to kind of go "Okay, I don't know what that means. Please explain to me what that means." You have to kind of get over your having been in a domain where you knew everything.
232
+
233
+ So yeah, I love that side of it. We recently did a conference, WTF is SRE, and we had many thousands of people attending. It was like a super-success, and we were like "Finally, we think we've cracked how to do stuff online now, how to do a conference online now." But it was a long way to get there.
234
+
235
+ And on the WTFs I also do a gossip thing. So the first ten minutes we have general stuff, and then we have a little section where we talk about gossip in the community, and that's kind of fun.
236
+
237
+ \[01:03:41.18\] So it's pretty light, it's not a heavy, serious thing... And it's not heavy on demos or tooling. It's much more about exploring concepts in the industry, and what they mean, and learning from the people who come, and vice-versa. We often get even side discussions going on the chats, on the Zoom, and they're super-interesting. Because you get stuck in a bubble working in consultancy. You're completely immersed in Kubernetes, and operators, and Terraform, and GitOps, and you think everyone knows this stuff... And of course, people don't. They either don't care, or they don't come across it. So it's really important to get your head out of your own space sometimes, and kind of see things from the other point of view, because otherwise you end up as a consultant just talking jargon to yourself.
238
+
239
+ **Gerhard Lazu:** I really like the approach of -- so first of all, I have attended a few of those, even like the WTF is SRE; that was a really good conference. I think the videos are now available on YouTube, so you can go and check them out. There's the Container Solutions website, and everything is linked there. There's also the YouTube channels, you can go and check them out; we will add them to the show notes. But my perspective and my conclusion was that it's where humans meet tech in a human way; no impostors, no high horses, and that is something that Changelog does a lot as well... Or at least we try to. I'm sure everybody fails, let's be honest about it, but we try to call it how it is. "I don't know. What did you mean, Ian? What money? Do you mean I should get a better job, is that what you mean?" No. Obviously, that's not what you mean.
240
+
241
+ And things change all the time, right? We always improve. So those improvements - how do you share them? How do you improve as a whole, as a group? Because the only successful teams, the really successful teams are the ones that improve as a whole. It's not individuals, it's the interactions. Are the interactions better? Are your team members getting better? Is the industry as a whole getting better, kinder, wiser, learning from its past mistakes and not repeating them?
242
+
243
+ **Ian Miell:** That's really funny, because on my first engagement with a customer at CS I got criticized in the first week for not doing enough high-level architecture. I was a lead on this project, I came in halfway through it... And I was like "I don't understand... Why do they think that?" And it turned out that -- because I was spending a lot of time mentoring the junior staff, I didn't know the technologies so well, pairing with them and trying to help move them along... And I explained to them that I thought "Your architecture is fine. You know what you're doing. The problem is not the tech, the problem is that your junior engineers don't understand it. And as you roll this out, they're gonna have to get it. So let's invest in that."
244
+
245
+ And what they actually wanted from me was like a rubber stamp of like "Your architecture is fine", but it came over as this kind of "Oh, we think you should be doing high-level architecture." And it's like, "No, no. I think I know what's better for you, but I should have actually called out that your architecture was fine. I didn't realize you needed that from me." It's an interesting dynamic there.
246
+
247
+ **Gerhard Lazu:** Biases and preconceptions. Oh, my goodness... I'm pretty sure that if we were to have another discussion tomorrow, I think it would be on that, how to manage preconceptions. Because they seem to be at the core of many things that we do. We assume that things are good, or fit, or the right thing.
248
+
249
+ When I was talking to Dave Farley the other day, he was saying "Always assume that you're wrong... Because guess what - you won't be wrong." If you assume that you're wrong, you won't be wrong. If anything, you'll be right. "Oh, actually I was right." But you assume that you're wrong; you won't be negatively surprised. Double-check. Learn. Have that mindset of learning, on being open to a better way... Because there's always a better way. And people that think that they know that their approach is best - they are the ones that need this the most... Because it's not. Don't be sure of anything. Always double-check. And if you have that approach, the sky is the limit. And not even the sky. Elon Musk, right? Mars, whatever.
250
+
251
+ **Ian Miell:** \[01:08:08.29\] That's right, yeah.
252
+
253
+ **Gerhard Lazu:** Another solar system.
254
+
255
+ **Ian Miell:** Yeah.
256
+
257
+ **Gerhard Lazu:** Okay. So this conversation, if anything, it reminded me how important it is to talk to you, and I don't think we did enough in the past... But I want that to change, definitely. Is there anything that you're looking forward to in the next six months or twelve months? Because that is a very nice segue into the next conversation, where we can pick it up from here.
258
+
259
+ **Ian Miell:** Well, yeah... I mean, at work I'm moving more towards doing that high-level analysis work and away from the lower-level technical work. So I'm excited about getting more involved in that and figuring out how to be more efficient and effective at delivering that work, because it obviously is one thing to do the analysis, and the other big thing is how do you present that information in a way that it can be heard well... Because it's not just the logical exercise, you also -- sometimes it's a bit like a therapeutic process in the sense that you can't tell the person everything you think they should do straight away. Sometimes you have to measure it out in spoons, like "First you need this, and then you need that." It sounds kind of patronizing, but if you overwhelm them with "Oh my God, you need to change everything. It's all a disaster", they might just react very badly to that. So you have to kind of balance this honesty and transparency with this kind of how best can you help them change, or help them move towards where they want to go.
260
+
261
+ Sometimes I have done it too fast, tried to move people too fast and introduce ideas that they're not ready for, and you get a lot of pushback... Sometimes I conform too much to the way they wanna think and don't push back enough. It's an art, it's not a science, and this is the stuff I'm really interested in learning more about.
262
+
263
+ **Gerhard Lazu:** I'm really looking forward to having that conversation, Ian, 6-12 months from now, however many months it's going to be... Because I think this progression, as you mentioned, is super-important... So one-offs - what is a one-off these days? What is transactional these days? It's relationships. It's building blocks and journeys. You build on top of conversations, you learn new things, you share those new things, you talk about them... Because as you talk about them, they improve, just by talking, the concepts. You refine them. And you go back in time six months and you realize "Actually, you know what - I was wrong. What I thought was right - that was actually wrong. However, another thing which I completely disregarded turned out to be very important, and that's what I've learned."
264
+
265
+ I think that's really important. I think we are getting better at that. I see conferences that are long-standing. Every six months you have a KubeCon. That's a great example. So many relationships are built and so many ideas get born, and then they just fly. WTF is SRE - that was a great conference. The platform is really good, by the way; I really enjoyed that. And I'm sure there are many other things like that that will emerge as the world around us changes.
266
+
267
+ The world is not the same, and it won't be the same again. 2020 taught us so many things... Those that wanted to learn. Because others are still blind and they still claim everything's back to normal and it will be back to normal... It won't. But that's not even the point. The point is where are we going? Where do we want to go? And shipping it - such a small part. Important, but such a small part.
268
+
269
+ Ian, it was my pleasure. Thank you for your time, and I look forward to speaking with you the next time.
270
+
271
+ **Ian Miell:** Thank you, Gerhard. It was a lot of fun. I really enjoyed it.
OODA for operational excellence_transcript.txt ADDED
@@ -0,0 +1,209 @@
1
+ **Gerhard Lazu:** I think I'll start with a story. The story that I wanna start with, it's the complexity that is starting to creep in not just in the development world, but also the operations world. Everything is moving at a break-neck pace. We have Kubernetes, and that's just the tip of the iceberg, which in itself is super-complicated... And hundreds and hundreds of other projects which keep shipping maybe every week. So how do you deal with that complexity? How do you just never mind about implementing it and shipping it in your infrastructure, but just paying attention to what's going on? It's impossible.
2
+
3
+ So one thing that I try to do is focus on the important stuff, and ignore the rest. But what is the important stuff? How do you know? Not to mention that you can't always just pay attention to what is happening; you still need to get on with your job, you still have requirements... We know that some of them are silly and they don't really make sense, and there's a lot of energy that that will take.
4
+
5
+ So what I found in my search for models or approaches that would help deal with the complexity and the ever-increasing speed was Agile. This was maybe ten years ago... And that worked well for a while. But how do you apply Agile to the world of ops and the world of Kubernetes, when everything is changing almost every day? No, actually it's definitely every day.
6
+
7
+ \[04:14\] So in that search, I came across Ben Ford and Commando Development. He had this really interesting concept of the OODA loop. I had a very simplistic view of the OODA loop, and Ben opened my eyes. I was like, "Whoa, there's so much to it... And are you telling me that I got it wrong?" Yes, I did. Not only that, but there's so much to it that maybe can help us with the complexity, maybe can help us stay focused on what is important.
8
+
9
+ So I have Ben joining me today to talk a little bit about that. That was my story, Ben. What about yours? How is your story with the complexity, and development, and practices...? Tell us a little bit about that.
10
+
11
+ **Ben Ford:** Yes, it's great to be here. I love talking about this stuff, so I'll happily dig in. I had a bit of a non-traditional route into technology. I learned to code aboard an amphibious assault ship when I was serving in the Royal Marines way back in 2003. I was on the way to Iraq, I was bored on the ship, and I was wondering what was gonna be next after my armed forces career... So I picked up a couple of books, one on Python, one on Linux, and taught myself to code, very rudimentary.
12
+
13
+ Fast-forward 5-6 years and I was working as a software developer, back in the day when you used to order a real server from people and have it installed into a data center... And the pace of getting a company up and running then was six months, probably a couple hundred grand in upfront cost... Markedly different from where we are now.
14
+
15
+ I was pretty happy being a developer. I thought my time in the Marines was a fun diversion that I had that maybe makes my CV stand out a little bit, but not very relevant for my day-to-day life. And then that continued for a while, and then I started to run into this complexity as well. So I had to go from being a happy, functional programmer, working in finance, to learning about DevOps, and learning about continuous deployment, and all of these other things.
16
+
17
+ And then that sort of also starts to come a little bit unstuck, because then you've got decision-making and strategy, and all of these things that you can have a really good CI/CD pipeline and a really fast feedback loop on the technical front, but if the business doesn't have a way of holding the complexity within which it's operating, well then you're kind of in trouble, because decisions are wrong, or are not being taken...
18
+
19
+ So I guess at that point, maybe five years ago, I started to come across some authors who were applying concepts from the military to business. And that opened my eyes. I'd already started to see some links with rudimentary -- I started to pick out some rudimentary links between Agile and the way we used to do things in the military, and then these guys started writing about it from their experience. I only served for five years; I had an early career in the military, but never done any of the more command-based things. But these guys were writing about the principles of combat, and I thought "Yeah, you know, there really is something to this." So I basically started pulling on that thread and I haven't stopped since.
20
+
21
+ The OODA Loop is the concept that holds all of this together. It's the concept that explains pretty much everything else, when you dive into it deep enough and you get beyond the superficial explanation. Even Agile has its roots in the OODA loop. Steve Blank, who wrote about lean startups, all of that stuff, 15 years ago - that all comes from the teaching of John Boyd. So OODA really is at the heart of everything. At the same time, it's well-known, but it's -- it's well-known as in it's widely-known, but it's not well-known as in it's well understood. So here I am, to explain what I've found over the last few years of exploration.
22
+
23
+ **Gerhard Lazu:** \[08:10\] I've found that fascinating from the perspective of your course, the Algorithms for Leadership, which - I didn't know at the time that I will have you on the show; I wasn't even sure about when the show will start, or what it will be about... And as I start discovering more about that course and more about some of the concepts that you present... By the way, some of them are extremely compressed; and this is your word. So it took me a couple of replays just to understand what is being discussed. And there is so much to it.
24
+
25
+ People, when they think of OODA, they imagine those four words arranged in a circle, and there's like an arrow that goes from left to right. And most people, that's where they stop. That's it. I'd like us to dig a bit more into that. But before that, I'm wondering if someone is listening to this and they're wondering "Hm... Shall I spend the next hour listening?", why would they care? Why would someone care about this? It is the complexity that's the obvious one, but is there something else to it? I think there is.
26
+
27
+ I think it's interesting to get that high-fidelity that tends to be lost over time, and I think you're doing a very good job of capturing it, not only in spoken form and written form, but also in a presentational form. That's what really attracted me. You present these concepts really well. They're incredibly compressed, because they may sound simple, but there's so much behind them. And I like how everything unpacks. So why do you care about this stuff as much as you do?
28
+
29
+ **Ben Ford:** That's a great question. So my journey in technology took me into functional programming. So I went pretty deep down the rabbit hole of pure functional programming, and I became a Haskell programmer, and built systems with Haskell... Which requires a mindset shift. You choose a different set of very fundamental abstractions when you program in a functional language. So I wanted to find a similar small set of abstractions that compose well, that are cohesive, and that all kind of pull in the same direction. And for me, the OODA loop is... Because it's so fundamental - like with functional programming, you've got very, very fundamental, well-defined, small building blocks. And because they are that shape, you can use them absolutely everywhere. So the way I think of stream programming now is the same way as I think of adding numbers, because that's the same underlying abstraction from the programming sub-culture that I come from.
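To make the "adding numbers and stream programming share one abstraction" point concrete, here is a minimal Python sketch of a fold (reduce). This is not code from the episode; the log lines and function names are invented purely for illustration.

```python
# A minimal sketch: summing numbers and processing a stream share one
# underlying abstraction - a fold (reduce). The log lines are made up.
from functools import reduce

# Adding numbers is a fold: an accumulator function, a sequence, a start value.
total = reduce(lambda acc, n: acc + n, [1, 2, 3, 4], 0)  # -> 10

# Counting log levels over a stream has exactly the same shape,
# just with a different accumulator function and a different start value.
def count_levels(acc, line):
    level = line.split()[0]
    return {**acc, level: acc.get(level, 0) + 1}

stream = ["INFO started", "ERROR timeout", "INFO done"]
levels = reduce(count_levels, stream, {})

print(total)   # 10
print(levels)  # {'INFO': 2, 'ERROR': 1}
```

The accumulator and the starting value change; the shape of the computation does not - which is the kind of small, well-defined, composable building block being described here.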
30
+
31
+ And the way I think about handling complexity, at both the strategic level and the tactical, immediate-feedback-loop level - I now have the same mental models to think about both of those, and to teach... So it's one mental model that you can apply to multiple things, and that's why it's so compressed. It's so compressed because you unpack it and it unpacks in a different way in one context, and a different way in another context, but it's the same thing underneath.
32
+
33
+ **Gerhard Lazu:** I think that's super-powerful. If you think about Linux and UNIX, and how it stood the test of time, those really simple tools that compose in infinite ways... And I think most people think there's just one OODA loop, but what blows their minds is there's OODA loops inside OODA loops; turtles all the way down. Think about that, but replace turtles with OODAs. I think it's amazing how well they compose and how well they apply to not just development, but also operations, and most importantly, which was my entry point, business. That was like a whole new -- like, "Hang on... Do you mean the same problems that apply to ops and dev apply to business as well?" And the answer is yes.
34
+
35
+ \[11:57\] And you go a little bit into that, because there's so much - like, team of teams... I think that was like a super-powerful concept. Red team thinking, and a couple of others... But I think I'm already jumping a bit ahead of myself... Because the one word that kind of unifies them all from my perspective is the operational excellency, or operationally excellent. Which one would you pick?
36
+
37
+ **Ben Ford:** I would call it operational excellence. In fact, I was toying with that very phrase for quite some time to explain what all this stuff is. The OODA loop is observation and orientation, which is information coming in, and then it's decision and action, which is the kind of execution side of things... And those things together, having a good OODA loop and having an understanding of how these OODA loops compose actually is operational excellence. You're operationally fit for your environment and you evolve with your environment because you continually learn from it and adapt to it. And that's the definition of agility. So operational excellence, agility with a small a, and no consultants in sight - take your pick, but it's the same concept.
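As a rough sketch of the two halves described here - observation and orientation bringing information in, decision and action pushing changes out - one pass through the loop might look like this in Python. The function names, the error-rate signal, and the rollback/ship decision are all invented for illustration; only the shape of the loop is the point.

```python
# A rough sketch of the loop, not code from the episode. The four
# placeholder functions below stand in for real sensing and execution;
# what matters is that acting changes the environment, which feeds the
# next pass through observe and orient.

def run_ooda(environment, orientation, steps=3):
    for _ in range(steps):
        observations = observe(environment)              # information coming in
        orientation = orient(orientation, observations)  # update the mental model
        decision = decide(orientation)                   # choose what to do
        environment = act(environment, decision)         # execution side; changes the world
    return orientation

# Placeholder implementations so the sketch runs end to end.
def observe(env):
    return {"error_rate": env["error_rate"]}

def orient(model, obs):
    return {**model, "last_error_rate": obs["error_rate"]}

def decide(model):
    return "rollback" if model["last_error_rate"] > 0.05 else "ship"

def act(env, decision):
    return {**env, "error_rate": 0.01 if decision == "rollback" else env["error_rate"]}

print(run_ooda({"error_rate": 0.12}, {"last_error_rate": 0.0}))
```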
38
+
39
+ **Gerhard Lazu:** I really like that. I really like that because it's a simple concept that composes in specific ways, and the excellency is in how you compose that. And by the way, there's no consultant that can tell you how to do that. It's a discovery process. It always depends, it always changes; guess what - whatever works this year will not work next year. And unless that's in your DNA as a company, as a team, good luck to you.
40
+
41
+ **Ben Ford:** Yeah. And that's where so many businesses come unstuck. I don't know where I heard this phrase - I will always say that it's not mine, but I do really like it, so I say it a lot... A digital transformation is what you need when you're falling asleep at the wheel of evolution. You've forgotten how to build these structures and communication systems - and algorithms for leadership, as I call them - you've forgotten how they work, and they've fallen apart, and you've got poor command and control of your company, or whatever dysfunction you have... Which means that you are no longer adapting to your environment, which means that you drift further and further away from being operationally excellent, which means that you drift closer and closer to company and organizational death, eventually.
42
+
43
+ **Gerhard Lazu:** When I joined your course, typically they happened in the evening for me, being based in the U.K. So I think 7 o'clock plus, after 7 o'clock; 7 to 8, 8:30... And after a long day, you can imagine that maybe sometimes I'm not paying as much attention as I would want to. That's why recordings - they're amazing. So when you mentioned that phrase, I really loved it; that woke me up. I thought, "So hang on... Do you mean that you don't want digital transformation, because it's maybe too late, and you're just trying to rescue something? What about learning about adaptation rather than transformation?"
44
+
45
+ So I took a note, and I thought "When my mind will be rested, I will unpack this." I'm still to go back to that. I think there's so much there.
46
+
47
+ **Ben Ford:** Yeah. So just to dig into that a bit more... The other problem that digital transformation has is that it's implemented top-down. So the leaders wake up one day and they go "Oh no, we've lost the ability to keep up with the tech upstarts." Banks are a classic example. "Oh no, Monzo is a thing, and Starling is a thing, and we're a big bank and our technology is terrible, and it costs us orders of magnitude more to develop technology." So the leaders turn around, and the shiny-suited consultants turn up, who claim to be able to fix their problem with a simple toolkit. And what happens is they say "Right. Great. We'll have some of this digital transformation, please." So they try and introduce it top-down. But the problem is that anything like this has to be implemented bottom-up. It has to be implemented at the point at which your people are in contact with the environment. They are the ones that are shipping code, the ones that are talking to customers.
48
+
49
+ \[16:07\] So the problem with bigger organizations isn't really the digital transformation at all, it's the fact that there is no link between top-down and bottom-up. They're two different worlds, they can't talk to each other, and no amount of millions of pounds spent on consultants is gonna be able to fix that. And that's the kind of inconvenient truth that the Agile industry as a whole - with a few exceptions, but the vast majority of simplistic frameworks and nonsense that are sold to senior leaders that don't really have a clue, that's why the money gets wasted, because of that fundamental disconnect of pushing something down from the top, versus having something emerge from the bottom.
50
+
51
+ **Gerhard Lazu:** Okay. So let's imagine that a big company approached you, saying "Hey Ben, we are in a pickle. We need your help. You really know your stuff... Help us." What would you do?
52
+
53
+ **Ben Ford:** That's a great question. That's a really, really good question. And there's several problems with that. One is that somebody who comes to an organization assuming that they know the context of the organization is gonna be bitterly disappointed. So the first thing is that I would have to tell them "Look, I can't fix your problems for you. I can give you some mental models, I can give you some abstractions and some understanding about the real mechanisms of what's going on... But if you've been having people in shiny suits turning up to your office and saying that they can fix your problem for years - that's not me. I'm not gonna claim I can do that. What I can do is offer different fundamentals, different abstractions, and I can offer some external perspective." Because anybody who's trying to change a system from within a system is necessarily looking in and down. They are part of the system; they care about the progression of that organization. Whereas if you get somebody who comes in as an external perspective, they can bring some detachment from there and they can say "Well, you know, you're treating this like X, Y and Z. But actually, have you considered that it's maybe A, B and C?"
54
+
55
+ So I think that's what I would say... I would hopefully make them understand that their mental models about what is possible in the world of technology are perhaps a little bit out of date, and you need to give your organization some space... Because there'll be people in every organization - every big organization that's failing at IT, you'll have passionate people in your organization who actually are very current, and very aware of best practice, and are very aware of new tools and new opportunities, but they're just drowning in this overwhelm of top-down pressure. So I think I would be almost like a catalyst to help free those people up and to help build the links of communication between top and bottom by having a common set of mental tools.
56
+
57
+ **Gerhard Lazu:** So the way I hear it is you're almost saying that you would be focusing on the operational excellence, which is the combination, first of all - you need to know the principles, and then how do you combine them and how do you make them relevant to your org... And I can't really tell you how to do that, it's something that has to be emergent; we need to discover that. And more than me, you need to discover that. You need to figure out what works for you. And all I can do is advise you when the things are combining in ways that make sense, versus when they don't make sense. And I think what we're touching up on here is a little bit on the entropy as well, that you're trying to deal within a system. It's too chaotic. Nothing makes sense. Things are just breaking down left, right and center... And we're talking about the interactions, the communications. Don't think technology, because technology is a people problem, an interaction problem. You can make your functions run in milliseconds - what good does it do if they can't be sold, or people don't find out about them? It's like an irrelevant thing.
58
+
59
+ \[19:58\] I think that's a really powerful thing, in that you're focusing on the interactions, principles, of course, and the interactions between those principles, and the applicability, I guess, to the specific context which needs to be discovered. And by the way, that changes all the time.
60
+
61
+ **Ben Ford:** Yeah. An analogy I use all the time is martial arts. I'm a bit lapsed at the moment, because of injuries and the pandemic and stuff, but I used to train in Brazilian Jiu-Jitsu, and there was no way that somebody could just give me a book on Brazilian Jiu-Jitsu and say "Alright, go sit in your garage for six months, and I'll see you in six months and you'll be a black belt." You have to actually train in the situation in which you're learning the skills. Even just watching an expert do it is not doing it. So you have to do the thing in order to learn the thing. And you learn by doing, and you do by learning. And learning Brazilian Jiu-Jitsu and rolling with a higher belt or a lower belt is an OODA loop. It's exactly the same mental abstraction of Observation, Orientation, Decision and Action. Actually, there's a different pathway through OODA, which we talked about on the course, which is actually far more relevant to immediate combat-oriented things like martial arts... But for the purposes of this, it's an OODA loop.
62
+
63
+ **Break:** \[21:10\]
64
+
65
+ **Gerhard Lazu:** Now, there's two things which I'd like us to get to. The first thing is the real OODA loop diagram, the one that you haven't seen, and then Ben's OODA loop diagram, which I think is the best one you'll ever see and you have definitely not seen. So even if you've seen John Boyd's original, this is better, up to date, and I got so much value out of it.
66
+
67
+ And the second thing - I would like us to walk a bit down the stack. So we go from business, we go strategy, operations, tactics. I would like us to spend a bit more time on the tactical area, which is where you have a lot of experience, right? The real combat experience - I'd like us to spend a bit more time there.
68
+
69
+ We'll be taking a closer look at the OODA loop now, the components. How does it apply to the high-level, to the business, and how do we traverse the stack all the way from strategic (Is this strategic?), operational and tactical.
70
+
71
+ **Ben Ford:** That's right, yeah.
72
+
73
+ **Gerhard Lazu:** Okay. So I'm thinking about the business, maybe the management, products... What is in the middle?
74
+
75
+ **Ben Ford:** I have the middle layers as kind of like the organizational level. So the strategic -- and you can look at these as timeframes as well. So the strategic timeframe is whatever the business-level decision-making cycle is. The operational level is how you decide to integrate what the business wants with what the people that do the work can provide, so sequencing and making bets and biting off projects, and then the tactical side is the days to weeks of building, shipping, getting feedback/building, shipping, getting feedback. That's how I would split it up.
76
+
77
+ **Gerhard Lazu:** That makes perfect sense. And the ops people, the dev people, the DevOps people, mostly individual contributors - they tend to be at the lower tactical side, the people doing the actual work. Then you have the more strategic ones at the top, which is your senior staff -- is it staff? No, it's not staff. What is at the top, top level? The C-suite? That's too high up. That's the tip.
78
+
79
+ **Ben Ford:** \[24:11\] Yeah, so I guess like your VPs and directors, or the middle tier... I mean, it depends how big the company is, I guess. If it's a startup, then the lead developer is that person in the middle, and maybe they're the marketing, or something like that. But that middle layer is the integration of a grand strategy which has to be necessarily a little bit vague, because you can't really project out six months about specific, concrete things that you're gonna do... So it's turning those strategic goals into concrete, specific things that we wanna achieve, and sequencing the work to achieve them. That's the operational level for me.
80
+
81
+ **Gerhard Lazu:** Okay. And how do the components in the OODA loop map to the high-level, and how do they map, like, OODA loops within OODA loops as you go down through the levels, all the way to the tactical, the day-to-day?
82
+
83
+ **Ben Ford:** So we've got the traditional OODA loop, the full John Boyd diagram. One myth about the OODA loop: Boyd never actually drew that simple circle with O, O, D and A on it. He never drew that. He drew the version which is, I guess, the next level down that you come to, which is observation feeding forward into orientation, which feeds forward into decision, which feeds forward into action, and then a whole bunch of feedback lines that go from action and decision back into observation, along with outside information, unfolding circumstances and unfolding interaction.
84
+
85
+ And then there's these two other funky little lines called implicit guidance and control, which are how your orientation shapes your observation, and how you can sometimes by-pass decisions with direct actions when you're in a domain of familiarity. So how does this map to the different levels that we've just talked about?
86
+
87
+ If you think of a company as an entity, almost like an organism - I'm just gonna call it an organism, because actually organization and organism back in the 15th century were the same word... So let's consider this as kind of like a biological entity that's separate from its environment, so it needs to take energy - and in the case of an economy, that's money - and it needs to turn that into an internal structure that will keep it self-sustaining. And it does that by deciding how to use the resources that it has in such a way as to turn that money into more money by paying people to do work, and selling the results of that work.
88
+
89
+ So the strategic OODA loop of a business is observing the macroeconomic conditions, how the environment is changing, how the tech ecosystem is moving, how customers are changing - all of those things, all of those form the observation. And then how the results of your products or services are landing with the market. The orientation is then how does that change what the business understands about its place within the ecosystem and what it wants to do. Decisions are the kind of big strategic decisions that you get from companies. You know, "We're gonna try and enter this market" or "We're gonna exit this market" or "We want to try a different way of doing this, or a different way of doing that." And then action - well, action is not something that's directly done at that level. Action is something that is -- individuals take action; companies do not take action. So that is when you then start moving down the stack, and those decisions then need to be turned into actions that are atomic and that individuals take, somehow.
90
+
91
+ **Gerhard Lazu:** The way I think of actions, it's almost like the -- well, one of them would be shipping. You're shipping value all the time, which - it's almost like at the end of taking that action, unless whatever value you have built, you're getting it out there so that people can use it - customers, end users can use it - your action is not complete.
92
+
93
+ **Ben Ford:** Yes.
94
+
95
+ **Gerhard Lazu:** \[28:02\] The more you can act, the quicker you can act, the better off you are. But it's not just the action part - it's the loop as a whole that has to be complete, because it's not sufficient to act. But there's something really important which I took away from your course, which was "Don't start with the observation, start with the action." Why is that?
96
+
97
+ **Ben Ford:** So when you dig into -- bearing in mind that Boyd passed away in 1997 and he was the most active during the '80s and '90s, I guess... And you consider how far science and understanding of cognition and complex systems has moved on since then, it still absolutely astonishes me that the OODA loop maps so well onto all of this emergent research.
98
+
99
+ So the reason I say we should start with action is because that's how cognition works. You and I are sitting here, and we are seeing each other because we're looking at a screen, we're seeing the diagrams that I've written, feeling sensations as we sit on chairs and whatnot... And all of those sensations need action in order to work. So we don't see anything properly unless we're moving our eyes slightly, especially if that thing is moving. And those actions are unconscious, but they are almost required to kick off the process of cognition.
100
+
101
+ So if you're sat in an environment and you're sat absolutely still and you're not allowed to touch anything and you're not allowed to look at anything - well, you can't learn anything about that environment. You have to take action in order to kind of set off the ripples that you then start to detect in the rest of the OODA loop. And the diagram that's your favorite of the ones that I've drawn - action sets an expectation. That's where you start to see where this kind of intermingling of all the different parts of the OODA loop, and the fact that it's not just a circle through Observe, Orient, Decide, Act... Because action itself, the process of even taking an action, creates changes in your orientation, which then shape your observation, and then by the time you observe the outcomes of your action, what you observe is different based on your expectation. So we just cannot avoid being entangled with our environment in that way.
102
+
103
+ **Gerhard Lazu:** So to those that are listening to this - just listening to this, something's missing, right? You feel like "Where's this diagram? What are you talking about?" So unless we publish the diagram, unless you look at the show notes, unless someone drew the diagram, you couldn't really imagine what we're talking about. I mean, you could, but it would be an imperfect image of what Ben means or what Ben refers to.
104
+
105
+ Of course, we've shared the diagram, so look at the show notes if you wanna see it... But more importantly, it proves a point, in that if you can't see it, is it real? Does it even exist? Maybe we're talking about an imaginary diagram... Which we're not, by the way. It's real, and it's very good.
106
+
107
+ **Ben Ford:** Yeah. And actually, Boyd's OODA diagram - from my understanding, he was kind of pushed to draw this. Someone said "Look, you need to write it down." So Boyd transmitted most of his learning through an enormous six-hour lecture that he used to go and give to generals. And that encompassed the history of military doctrine, it encompassed a lot of science, he was big into physics and understood Heisenberg's Uncertainty Principle, and Gödel's Incompleteness Theorem, and the Second Law of Thermodynamics - those are the three pillars of his OODA loop. And when he was asked to draw this stuff - bearing in mind this was the mid '90s - the diagram that he came up with is constrained by the medium with which he could transmit it back then. So that's one of the benefits that we have now, of FIGMA and SVG and better mechanisms for drawing things.
108
+
109
+ \[31:56\] As I've mentioned before we started, I'm playing around with some Three.js so I can really properly dig into how these interactions work... So in some ways even the original diagram that he drew was an abstraction. It was an incomplete picture of everything that he was talking about. You read the many other people that have studied Boyd and you get a far more nuanced and complete picture than you ever will just looking at his diagram.
110
+
111
+ **Gerhard Lazu:** I think that's super-powerful, and I'll take it one step further. First of all, you have this static thing, stuck in the 1950's. The best it could be for the time given the constraints. Then we have your diagram, Ben, which really is a work of art; you have to see it, it's amazing. There's so much information there. It's almost like you need to read the background to understand some of the relationships, and the shapes, and the lines... It's just amazing. But there's the visual element as well, which it doesn't have; it's static. What about the motion, the time?
112
+
113
+ Now, let's take that a little bit further. This is me dreaming. Five years from now, ten years from now. It may never happen. What if you could see these loops happening in an org, and the actions being mapped to these things, so you can almost have an appreciation of how the different parts of an org interact? Imagine whatever happens in your org being represented in these diagrams; all the commits, all the code going out, all the bugs coming in, all the money going out and coming in. If you could visualize that, what would that mean for your business? Just imagining that - wow. I would love to be part of that one day. I think it'll be amazing.
114
+
115
+ **Ben Ford:** Yeah. And that's the thing - if you look at the business through this lens of abstraction that we're talking about, you have the opportunity to build something like that, because you can express those abstractions. I've given presentations about how similar the OODA loop is to things like event sourcing. Because observations are events, right? They are concrete things that have happened. We put subjective filters on those things, but they are concrete things that have happened in the environment.
116
+
117
+ Likewise, the left-fold of the events that create state in event-sourcing systems is a form of orientation, right? It's "How do we integrate the observations or events that we've seen in our environment, and how do we integrate them with the state or internal orientation that we have, and turn them into a change in that orientation, a change in that state?" And then the decision and action side of things is the command side of event sourcing.
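A hedged sketch of that mapping, using a made-up order-handling domain: events play the role of observations, the left fold that rebuilds current state is the orientation, and the command side is where decisions and actions live. None of the names below come from the episode.

```python
# Illustrative only: the event-sourcing / OODA correspondence described
# above, with an invented domain. Events = observations; folding events
# into state = orientation; commands = the decide/act side.
from functools import reduce

# Observation: events are concrete facts that have already happened.
events = [
    {"type": "OrderPlaced",  "order_id": 1},
    {"type": "OrderShipped", "order_id": 1},
    {"type": "OrderPlaced",  "order_id": 2},
]

# Orientation: the left fold of events into the current state.
def evolve(state, event):
    orders = dict(state)
    if event["type"] == "OrderPlaced":
        orders[event["order_id"]] = "placed"
    elif event["type"] == "OrderShipped":
        orders[event["order_id"]] = "shipped"
    return orders

state = reduce(evolve, events, {})

# Decision / action: the command side - given the current orientation,
# decide which commands to issue; handling them emits new events, which
# feed back into observation on the next pass around the loop.
def decide(state):
    return [{"command": "ShipOrder", "order_id": oid}
            for oid, status in state.items() if status == "placed"]

print(state)          # {1: 'shipped', 2: 'placed'}
print(decide(state))  # [{'command': 'ShipOrder', 'order_id': 2}]
```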
118
+
119
+ So I actually think that with the rise of event-driven architectures, and you mentioned Kubernetes in the chat before this - all of that data flowing through the systems that we have now, it is actually within the bounds of possibility that we could have a real-time 3D time-based picture of the internal workings of our OODA loop. And I agree with you that I think that that would be incredibly powerful.
120
+
121
+ **Gerhard Lazu:** Statement of facts. Things that have happened, with certain properties, and deriving relationships from those properties and visualizing them in a compressed format - that's what an OODA loop is. You zoom in, you zoom out, you have the high level, you can dig into specific things, even individual events if you want to... And I think we're getting there. I think the building blocks - I can almost start seeing them shaping. I think Concourse was the first continuous integration system that put the pipeline at the front. You just threw some YAML at it, and it would produce this nice pipeline that you could make it so big that it would crash the system... But it was infinitely scalable. And we have seen this, for example, in GitHub Actions, Argo CD with workflows... I think this is really well where you start thinking about visualizing pipelines, and it's not just CI/CD; it's not just running your tests, your builds, it's not just pushing code out there. It can be used for so many more things. And I think the concept is really powerful.
122
+
123
+ \[36:17\] And then you have your UNIX tools. It's all based in the pipes. Combine them. And the relationship between the different -- not just the individual blocks, but also imagine the links. They don't have to be straight, they don't have to be solid. You can change the shape of them, you can make them thicker, thinner, whatever. And I think a little bit of this, which is what gets me excited, is that we have seen this for years - actually decades, now that I think of it - in the Erlang VM. All the message passing, all the interactions between the different components, the trees of the processes, the applications... Everything. The crashes - when something would crash, you could visualize that in the observer, when different applications would go down.
124
+
125
+ You love functional programming... Well, I love Erlang. I haven't tried Haskell. I should check it out, for sure... But I can see the intersection of all these things. And isn't it nice to explore and experiment? What should work be, Ben? What do you think that work is, at a very low level, at a very basic level?
126
+
127
+ **Ben Ford:** That's a great, great question. And just to build on your point earlier about Erlang - we look at this from a technology and a developer-centric viewpoint of pushing code out and shipping... And that's the decision and action of an organization's OODA loop. But there's a whole load of people who also look at the inward sense-making, situational awareness part of that loop, which operates in exactly the same way. That's people that look at marketing, and attribution, and data, and understanding the effect that the organization is having on the environment in some way.
128
+
129
+ So as much as our CI and CD reflects how we think about shipping code, you can use exactly the same abstractions to think about sensing the organization's effect on the environment as well. And what work should be is that system understood as a cohesive system.
130
+
131
+ As a developer, you shouldn't be constrained to thinking that you're done when you ship. You should have a visceral, embodied understanding of what happens to that piece of code, what difference it makes to your customers. What ripples do you send in the environment once that feature or that change lands? How does that make you better at deciding what to build next? That's the OODA loop. That's pure OODA. And having the integration of those different understandings at different levels is exactly why I think OODA is such an important concept. Because - to go back to your original point about complexity - the more complex an environment gets, the more important it is to have this internal, intangible ability to operate and to sense your environment. Because the further you get away from that, the more vulnerable you are to the people and the companies that do build that ability.
132
+
133
+ **Gerhard Lazu:** That's right. A comprehensive understanding, feeling that what you do matters, understanding "Where does it fit what you do?" Experimenting. You're not working, you're experimenting. You're trying to figure out how what you do works. And a very, very small slice of that is shipping. But I would even say it's not even the tip of the iceberg. I don't know what to compare it to, but it's so small, so tiny. And this is something which I am very passionate about. Even though we call this Ship It, it's insignificant in the big scheme of things. And my mission is to make you understand, listeners, how small and insignificant it is. It's essential and important, but it's such a small piece. And don't think that if you ship, you're done. No, no, no. It's not even the beginning.
134
+
135
+ \[40:13\] I really like your diagram - this is another one, about perception; the ripple effect. And it applies so well to this, because it's really difficult to understand and to map your action to something that someone else does, like your end users do.
136
+
137
+ **Ben Ford:** Yeah.
138
+
139
+ **Gerhard Lazu:** You have a better way of explaining this... Maybe we'll put that diagram as well. You know what - maybe at this point it'd just be easier to join the course, right? I think it'll be easier, because there's so many other things; I'm picking and selecting, but it's so compressed. It just makes sense as a whole, and the conversations - I think they're the most valuable ones. So if you think this is good - well, you should see some of the conversations that are part of the course, which were my favorite part of it. The diagrams as well, but the conversations - unique. And you cannot recreate that, because it just happens based on the people that are there, and how they feel. Some were tired, like me. Others switched on, like Ben. But it was a good mix.
140
+
141
+ **Ben Ford:** It's a very intangible thing, isn't it? We've got a bunch of individuals, a bunch of people there, with their own energy and their own experiences of that day, and that creates an intangible -- we compose those people together in a group and then we have a discussion, and that creates an intangible kind of understanding of the concepts that we're talking about, that lives within that group. And then that group goes away, and talks to other groups, and it spreads, or it doesn't... And different people bring their different perspectives... I mean, the most valuable thing to me of exploring all this stuff is what I learn from people who have a different understanding, or a different depth, or a different perspective. And I've got probably about six hours' worth of YouTube conversation that I've either been part of or joined, that has built into this kind of understanding. This conversation has opened up a few more things, like the Erlang VM; I hadn't considered that one before. So yeah, it's great. You just have to dive in and explore.
142
+
143
+ **Break:** \[42:07\]
144
+
145
+ **Gerhard Lazu:** Talking about diving in and exploring - we are going to talk about the recommendations that Ben has around the books, the videos... Basically, all the follow-up material that you may want to look into, that goes really well with this conversation. So we can start with books, or YouTube videos... Wherever you want, Ben. Take it away.
146
+
147
+ **Ben Ford:** Cool. Let's cover books first. One book that I'm overdue a re-read on actually, which is really the book that started -- well, it didn't start me on the journey, but it really made some links fall into place for me was Team of Teams by Gen. Stanley McChrystal. That is just an incredible, nuanced overview of what an organization has to do when its environment is moving faster than it's capable of dealing with. And I can see that Gerhard has a copy.
148
+
149
+ **Gerhard Lazu:** \[44:15\] I had to.
150
+
151
+ **Ben Ford:** Yeah. It's a great book. And the follow-up one -- so Team of Teams is the conceptual "Why this is important." It's about the dynamics. And then the follow-up, One Mission, is a bit more about the specifics. So those two are very good. Extreme Ownership by Jocko Willink is a must read as well.
152
+
153
+ The Fundamental Principles approach I agree with. I'm not sure I completely agree with the principles that they've picked out as the most important.
154
+
155
+ Turn the Ship Around is also very good, on servant leadership and mission command... Although none of these books mention the concepts that I talk about by their kind of conceptual names. They have their own models and whatnot.
156
+
157
+ Red Team Thinking... You know, the whole point of the course is that leadership is as much of a system as it is a skill, and Red Team Thinking is a fantastic set of tools for doing the strategic leadership system. It's a bunch of communication protocols and practices that mean that you get this better situational awareness at a strategic level. And then I've got a whole bunch of books that we probably don't have time to dig into here, about cognition and -- I will mention one that I read recently, "A Thousand Brains" by Jeff Hawkins, which is about the mechanics of cognition from his research at Numenta... That's an amazing book. That will change the way you think about not only what goes on between your ears, but how the concepts can apply to businesses as well.
158
+
159
+ **Gerhard Lazu:** And there's another book which I would like to mention - it's by Ben Ford, which is "The OODA Loop according to Ben Ford." \[laughter\] That will be a self-published book, I'm sure. Maybe Gumroad. I'm really looking forward to that... Which will be the print version of the course maybe.
160
+
161
+ **Ben Ford:** Someday.
162
+
163
+ **Gerhard Lazu:** Someday.
164
+
165
+ **Ben Ford:** It might be a live, online, 3D diagram illustrated version of the course perhaps... But yes, I definitely have something like that in me at some point.
166
+
167
+ **Gerhard Lazu:** Yeah. I'm really looking forward to that. In the meantime I'm going to read all the other books. That's my plan. That's what I intend to do; talking about turning the ship around... Which is, by the way, a great book; I read it and I can definitely recommend it as well.
168
+
169
+ In the meantime, if you're not into books, Ben has some amazing YouTube videos. So if you think this is good, which it is - again, let's be honest - some of the videos that Ben has are really good. You have also a show, The OODA Loopers, or The OODA Something...? Can you tell us more about that?
170
+
171
+ **Ben Ford:** So one of the things that's come out of this exploration for me is meeting other people that are interested in the OODA loop across a variety of different fields, including serving police officers, and firefighters and whatnot... And the way we've been interacting has - luckily, for everyone - been in the form of videos; so conversations like this between two and seven people, I think, bringing people in with different perspectives and trying to integrate those perspectives and those experiences - many within the tech industry - to attain a deeper understanding of OODA Loop. Because it's the concepts that are really important; it's not what you call it, it's not how you draw the diagram, it's the understanding that you extract from it and the kind of compression that you learn in order to apply the ideas.
172
+
173
+ So I've been collecting some of the best resources that I've found and conversations that I've taken part in into a YouTube playlist which is probably about eight hours' worth of video by now... Because at the end of the day, no matter how much you listen to somebody that knows what they're talking about about any subject, you still have to dive in yourself. You still have to make your own mental links, you still have to build your own understanding, build your own mental models, try things out, break things, put them back together again... And there's really no better way to do that than this format that we're having here, and conversation... And that's what I've been doing quite a lot of lately.
174
+
175
+ **Gerhard Lazu:** \[48:08\] So if someone wants to start doing, implementing all the learnings, all the principles, where would they start, or how would they actually start? Step number one. Step number two.
176
+
177
+ **Ben Ford:** I mean, that's gonna be very contextual, depending on where you are. Let's take a few hypotheticals and thought experiments maybe. So if you're in a startup that's growing rapidly and is now lacking the structure that you would need in order to scale, which happens all the time, especially nowadays as it's possible to finance a business and grow it so much more quickly... Very often those businesses - and this is a mistake I think many tech businesses make - look at the world through a DevOps lens, or they look at Lean, and all these things. And actually, when you take a step back, those businesses - the ability to build and ship code is very rarely the problem anymore. The tools and the infrastructure that you have available to do that now - you could build the beginnings of a company in a weekend, because you can use things like Vercel, and GraphQL, and Hasura... You've got zero ops, zero requirement for any ops. It's just literally build code and ship, build code and ship.
178
+
179
+ So the problem that we have now is that those companies grow to a certain size, and then the internal communications protocols and structures don't keep up. So that's where I would urge people to start looking now, is to take some of the resources that I've shared, have a look at the YouTube videos... You know, take my course and understand that in a world of complexity it's the whole system that you need to consider, rather than thinking that you need to fix this little bit that you think is broken. If you'll fix that bit, you'll have a knock-on effect to something else that needs to be fixed, and you need to get and keep ahead of that in order to survive in this exponential environment that we're in.
180
+
181
+ **Gerhard Lazu:** That's something which - I wanna say I was disappointed, but it was like an eye-opening moment. There's no silver bullet here. There's no set of things that you can take, apply as they are presented and you'll be successful. That's not how this works. You need to understand the principles, you need to try a few things out to see what sticks and what doesn't, and iterate from there. It's a continuous refinement process, and there's no book that can do that for you, no course, nothing. It's you. It starts with you.
182
+
183
+ So step one, become aware of these things. Step two, maybe accept that you may want to start applying some of these things and see what stands out... And step number three, start doing it. And go through it really quickly, because guess what - you have to go through these steps over and over again, almost like an OODA loop. Not once a day. Many times per hour. Maybe even more often. There's no time period which is right or wrong for an OODA loop, by the way.
184
+
185
+ **Ben Ford:** No, absolutely not. Because even if you do build this kind of internal fluency of communications - well, you're changing, your environment is changing; you're adding new people, they're coming in with their own ideas. People leave. The system changes. Even in the Second World War, when the Germans came up with the Blitzkrieg concept and they rolled over Europe - well, guess what. At the time, the U.K. was building the Special Operations Executive, and they built small, compact teams that could go and do exactly the same thing to the German war machine, and that's what they did.
186
+
187
+ So it's this constant cycle and constant process of -- optimization is not the right word, because that implies efficiency. But it's this constant optimizing your effectiveness for the environment. Even if you employ somebody who has done something very similar to what you're doing in your company right now, and you ask them what should you do, their information will be out of date; it will refer to a world that doesn't exist. So you just have to build these structures yourself. There's no way around it.
188
+
189
+ **Gerhard Lazu:** In other words, don't hire the expert. It's a lie. It's contextual, right? The expertise is contextual.
190
+
191
+ **Ben Ford:** \[52:13\] It is.
192
+
193
+ **Gerhard Lazu:** And unless that expert is willing and open-minded to change his perception and to learn with you, it's for nothing, all that expertise. You can't transplant ideas, you can't take guilds or squads or whatever you wanna call them and make them work in your company as they are. That doesn't happen, and that's not how it works. And whoever tells you that's how it works, I would take it with a grain of salt, or two, or three.
194
+
195
+ **Ben Ford:** And not give them any money.
196
+
197
+ **Gerhard Lazu:** Exactly, yes.
198
+
199
+ **Ben Ford:** That's a good point; expertise is still important, because -- it's like the process of evolution. Evolution only works when there's some genetic material to pull apart and put back together, and in the knowledge economy that genetic material is expertise. But the thing that we forget is that expertise is contextual, as you said, and it's the pulling apart and putting it back together that's the most important thing. And if you just try and blindly apply stuff that worked... I mean, this was true ten years ago, but it's even more true and even more critical now to understand. If you're trying to blindly apply stuff that worked before, it won't work as well as you hoped, and you won't have any clue. Whereas if you build the capabilities and the communications and the decision-making processes and all that stuff that does allow you to sense your environment - well, if it doesn't work, it doesn't matter, because you'll be able to adapt and overcome.
200
+
201
+ **Gerhard Lazu:** Ben, this was a pleasure, a genuine pleasure. I'm seeing this as the beginning of something. I'm seeing this as a loop that will continue. I'm thinking six months from now I would love to catch up again, on the same show maybe, and see where we are then. See what we have learned, see what we have taken apart, what we have put together, and most importantly, what we have shipped... Because I know you have something really special in the pipeline. We should not tell the listeners what it is; you should follow the journey closely if you're really curious. And if you're not, that's okay; it's not a problem.
202
+
203
+ But I'm really looking forward to speaking to you again in six months, roughly. It's just a guideline. Thank you, Ben. This was great.
204
+
205
+ **Ben Ford:** Anytime. It's been great. It's great to have you on the course, it's great to essentially share this journey of understanding with you. This conversation has been fantastic, and eye-opening, and yeah, I'm definitely up for doing another one.
206
+
207
+ **Gerhard Lazu:** Thank you, Ben. Have a nice evening, a nice day, morning, whatever it may be, and keep iterating. Keep looping. Get better. See you, everyone.
208
+
209
+ **Ben Ford:** Thanks, everyone. Bye.
OpenTelemetry in your CICD_transcript.txt ADDED
@@ -0,0 +1,313 @@
1
+ **Gerhard Lazu:** So Akihiro Kiuchi presented Jenkins CI agents monitoring with OpenTelemetry, and Jaeger, Zipkin and Prometheus were included. And one of the goals, or one of the reasons why he did that, was to minimize the downtime and setup costs of Jenkins agents. That was one of the presentation screenshots which I've seen. Now, Akihiro couldn't join us today, but we have Cyrille and Oleg joining us. We'll be talking about OpenTelemetry in your CI and why it is important... And I'm wondering what you can tell us about the presentation that Akihiro gave back in July, I believe. I haven't seen it yet. Is it live, can we watch it?
2
+
3
+ **Oleg Nenashev:** Yes, it's live. So it was a project within the Jenkins community, as a part of the Google Summer of Code this year. Akihiro was one of the students, and he chose observability with OpenTelemetry. Originally, the project was rather positioned towards Prometheus, but given the recent developments in the ecosystem, we decided to proceed with OpenTelemetry, and actually to try all three parts should time allow. So metrics, traceability and logs. For us it was one of the missing parts of the puzzle, because we already have an OpenTelemetry plugin for Jenkins; Cyrille and many other contributors created it. But this plugin focuses on the Jenkins controller as one of the instances.
4
+
5
+ \[04:27\] At the same time, Jenkins itself is a distributed system, it has agents, and actually agents might prove to be quite unstable, especially if you use multi-cloud environments, if you use various cloud provisioning and single shot agents which just die after the completion. So it's essential to have some tracing and monitoring for these systems, so that you can ensure that your CI environment is operational. And of course, if you can also verify that it's cost-effective, it would be super.
6
+
7
+ **Gerhard Lazu:** Okay. So this tracing was happening on the agents, not on the Jenkins master, so that when the jobs run, there will be visibility into the jobs and into the availability of the Jenkins agents themselves. Is that right?
8
+
9
+ **Oleg Nenashev:** Yes.
10
+
11
+ **Gerhard Lazu:** Cyrille?
12
+
13
+ **Cyrille Le Clerc:** Yeah. So here we have initiated an effort to provide visibility in the execution of the jobs themselves, where we were able to break down the duration of jobs and pipelines into the different stages of the pipeline. Also, we were able to track the time spent to allocate build agents. But then we didn't have detailed visibility in the steps to allocate build agents, and so we also had limited visibility to explain what kind of problems could have been happening when allocating a build agent, like cloud resources being unavailable, or maybe the Docker image you want to use being unavailable or broken, and so on. And this was an important focus of Akihiro, which was to complement the existing traces we had, the existing visibility we had on the CD pipelines, detailing the agent allocation and the agent communication, which are some fragile areas.
14
+
15
+ **Gerhard Lazu:** Yeah. That's a good summary. Okay, so the talk is available online, we can go and watch it... I haven't watched it yet, but I will do it right after this, because that's basically what started this conversation. And that made me actually think specifically about OpenTelemetry in our CI/CD systems, and how OpenTelemetry is this nice unifier of all different CI and CDs that we have... Because sometimes, people recommend that CI splits from CD, but you still need to understand the unit as a whole. And then what happens when you switch between CDs or CI/CD systems? One day you use one, and six months later you switch. Do you lose all that visibility? Because the things that happen in your CI/CD - they kind of tend to stay the same. I mean, they may expand in the future and become more sophisticated, but the building blocks tend to be the same.
16
+
17
+ So with this context, why would you say that it's important that we use OpenTelemetry in our CI/CD systems? Oleg, do you wanna go first?
18
+
19
+ **Oleg Nenashev:** Yeah. So first of all, I would rather disagree with the CI and CD statement. It's a subject for holy war. Personally, I use the quite old-style term "automation", because CI/CD is a methodology - it might be culture - but when it comes to automation, to tools, then actually the CI and CD borders are quite blurry, and there are many other use cases, for example for operations, for organization automation - all of that needs traceability if you want to have your software delivery in place. It's not just the CI/CD.
20
+
21
+ **Gerhard Lazu:** Okay.
22
+
23
+ **Oleg Nenashev:** And this is exactly where we can talk about OpenTelemetry and other open standards. Because if any system independently creates its own monitoring and observability, you basically get lost. So when we talk about modern cloud-native deployment, with Kubernetes, you usually build your CI or CD system from dozens of different tools; each of them might have different applications and different interfaces, and then basically you end up just trying to understand what happens.
24
+
25
+ \[04:27\] Similarly to why Jaeger was introduced for cloud-native applications, we need the same for CI/CD and automation in the cloud, because we also need to draw information from these tools on multiple levels. So it might be a CI server, it might be an agent, it might be just a build tool like Maven... But we need all this information to understand how our pipeline is going, and now it's also important for audit, for supply chain security, and many other buzzwords that are emerging. But overall, you need data to verify what happens, and OpenTelemetry is one of the great opportunities to provide this data across the ecosystem.
26
+
27
+ **Gerhard Lazu:** You said there something really interesting about you disagreeing that CI and CD should be separate systems... And I will want to come back to that. So that's really important; I've taken a mental note. But Cyrille, why do you think that OpenTelemetry is important for CI/CD systems?
28
+
29
+ **Cyrille Le Clerc:** I will break down the point in two different themes. The first theme is, as you have said, there is a lot of value in having an end-to-end view of the execution of the CI and CD processes, where distributed tracing is very valuable. We see that distributed traces are a very good data structure to model the execution of CI and CD pipelines and processes. Exposing this proposal to more practitioners and meeting them, we discovered that all the data of the CD processes is a goldmine. Of course, CI/CD administrators are interested in this to troubleshoot and keep their platform up and running; they also see benefits for sizing their platform, and then we see dev teams interested in shortening their build cycles and optimizing their unit tests, their flaky tests. We discover people doing cost accounting on the platform, people doing -- I've seen process optimization, like digital transformation, agile transformation, DevOps transformation. If you want to measure your lead time, this is a source of data that is very interesting.
30
+
31
+ So here we see a lot of value in capturing this data on distributed traces, which is often associated with OpenTelemetry, and is very useful. Then what you said also that was very interesting for me is - you say "We want a unified view on CI and CD", and beyond this debate, is it different tools, is it the same tools? Here the distributed trace culture tells us that we can have an overall visibility across different phases of a cohesive unit. So here, whatever people choose to structure their CI and CD phases, with this visibility on the process we will be able to make this unified.
32
+
33
+ Then when you talked about OpenTelemetry - I think OpenTelemetry is a great solution. First, it does distributed traces well, in a way that is standardized, popular for people... And also, OpenTelemetry has the vision to provide unified semantic conventions, a common vocabulary to unify things together. And you said "I can have different CI and CD systems", and I remember this week I was talking with some CI platform administrators who told us "We don't use only Jenkins in our organization. Some other people use \[unintelligible 00:11:36.11\] they use maybe other tools... And we want to have a holistic vision across all these, where the CI platform is an implementation detail." This reminds me of your Dagger conversation previously. These people - they are very interested in having an abstraction to look at the CD process, rather than the details of each CI tool... And this culture of the OpenTelemetry community of creating semantic conventions that span across different tools, techniques and implementations I think is a very good match with the problems we want to solve.
34
+
35
+ \[12:13\] So I saw these two dimensions - collecting data, and also this culture of abstracting to provide a unified vision on top of different implementation details, in some ways.
36
+
37
+ **Gerhard Lazu:** So from the perspective of having a good CI/CD system, regardless whether it's one or multiple, which has a good OpenTelemetry integration, what would that look like from the moment you push some code? What is the perfect flow that you imagine that a system with good OpenTelemetry would have?
38
+
39
+ **Oleg Nenashev:** First of all, the pipeline would include multiple tools in the chain. For example, you push the code, it reaches first whatever social coding system, let's say GitHub, or GitLab... Even on this level, there are some events happening. Firstly, the system needs to process your request, it might apply its own checks, for example via GitHub Actions etc. And after that, our main CI starts, or CD. So we invoke an external service... Again, we may send WebHooks to a completely different instance; this instance provisions a build executor, it may call the agent, it may be just a new pipeline task definition kicked off in a separate container. It starts, and then we just start executing the pipeline.
40
+
41
+ And at this level, it's also not the end, because then we invoke tools. Because nobody really builds software in CI or CD systems; it's external tools, like Maven or Gradle, doing that. You invoke them. So these tools are also complicated, and you also need to have visibility on this level.
42
+
43
+ So basically, from the beginning of this pipeline we should go through all these levels of tools, and for each level, ideally, we need to have some data so that we can understand what happens, what are the cross-blockers for example, what are the obstacles our system experiences... And it gets even more complicated when we talk about parallelization. Basically, for each build we need a distributed trace for going deep, and hence passing context through all levels of the system is essential. I wouldn't say that this question is fully resolved by now, and I want to see much more happening in this space, but my expectation as a user is to have full observability for a pipeline as a single trace, for all levels... And I'm looking forward to seeing a system that actually does that.
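+ To make that concrete, here is a minimal sketch - not taken from any of the Jenkins plugins discussed here - of how a build tool invoked inside a pipeline could join the surrounding trace with the OpenTelemetry Go SDK. The TRACEPARENT environment variable is an assumption about how a runner might hand the W3C context down to child processes; adjust it to whatever your CI system actually propagates.
+
+ ```go
+ package main
+
+ import (
+     "context"
+     "os"
+
+     "go.opentelemetry.io/otel"
+     "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
+     "go.opentelemetry.io/otel/propagation"
+     sdktrace "go.opentelemetry.io/otel/sdk/trace"
+ )
+
+ func main() {
+     ctx := context.Background()
+
+     // Export spans over OTLP; the endpoint is read from
+     // OTEL_EXPORTER_OTLP_ENDPOINT, so the CI environment decides the backend.
+     exporter, err := otlptracegrpc.New(ctx)
+     if err != nil {
+         panic(err)
+     }
+     tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
+     defer tp.Shutdown(ctx)
+     otel.SetTracerProvider(tp)
+
+     // Join the pipeline's trace instead of starting a new one: extract the
+     // W3C trace context that the runner (hypothetically) exported for us.
+     carrier := propagation.MapCarrier{"traceparent": os.Getenv("TRACEPARENT")}
+     ctx = propagation.TraceContext{}.Extract(ctx, carrier)
+
+     // Every interesting step of the tool becomes a child span in that trace.
+     _, span := otel.Tracer("build-tool").Start(ctx, "compile")
+     // ... run the actual step here ...
+     span.End()
+ }
+ ```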
44
+
45
+ **Gerhard Lazu:** So we understand when the pipeline starts, and what happens at the beginning. The middle is always a little bit hazy, so we can leave it like that, because it depends on what it needs to do. But I think that we all agree that when the pipeline ends, some artifact - maybe a production artifact - needs to be produced.
46
+
47
+ **Oleg Nenashev:** Yes.
48
+
49
+ **Gerhard Lazu:** Now, I know that some teams like their pipeline to end with code actually being deployed into production. What do you think about that? Do you think that that should be the last step of the pipeline? Do you think about this differently?
50
+
51
+ **Oleg Nenashev:** Well, it depends whether it's a CI or CD pipeline... Because in a CD pipeline we usually deploy as the last stage; in a CI pipeline, even if we deploy, the last stage is actually doing a lot of reporting and post-processing... Because it's not enough to deliver the software, we also need to do a lot of accounting work afterwards. We also need to process the results, compare them with previous rounds, publish whatever coverage test reports... And many of the things that happen post-factum. Deployment is definitely important for any kind of modern pipeline. There are many other activities and task-heavy activities which still need to be delivered. And all of it involves many external tools, because you can do \[unintelligible 00:15:45.09\] for reporting you may use an external tool, a SaaS like TestRail, it can be on-premise, but still, when something goes wrong, you will need to access this data, and you'll need to understand where it went.
52
+
53
+ **Gerhard Lazu:** What do you think, Cyrille?
54
+
55
+ **Cyrille Le Clerc:** \[16:02\] I would like to come back to your question on what is the right way to instrument a pipeline. What we have discovered instrumenting Jenkins and Maven and Ansible is that instrumenting your pipeline well is a journey for the instrumentation people. We have to understand which are the right spans to capture in your pipeline execution, to capture the right steps.
56
+
57
+ For example, on Jenkins we had to iterate to capture the right spans to measure the time it took to allocate a build agent. Our initial instrumentation did not capture it well, so it was hard for CI/CD administrators to really narrow down their investigation to this specific phase and understand how it evolves across time.
58
+
59
+ Another thing that was important for us was to iterate on the right attributes we extract from the pipeline execution that we attach to the spans, so that you can get the right meaning of the data for your use case. We've seen that there is a troubleshooting use case, troubleshooting of your pipeline execution. So here maybe you need to capture well the Git access, the GitHub URL, your JIRA URL. Sometimes you need to capture some organizational information.
60
+
61
+ If you want to be able to use this pipeline execution data to do some cost accounting, then you need to attribute your pipeline execution to teams, so maybe it's to understand what has caused your pipeline -- we are improving this on Jenkins at the moment... To understand what caused the execution of the pipeline, to be able to attribute it to the right team. Same will be for using the pipeline execution data to understand the velocity of teams on the software delivery process in different CI platforms. On your pipelines you have some concepts that are commonly used to define your business logic. Jenkins people commonly use what they call stages, which is a grouping of things; it's maybe the CI build phase, it's the QA validation phase, it's the security validation phase. So here we need to capture the right attributes on these constructs of the pipeline that are used for organizational grouping, to be sure that the data will be useful downstream for the consumers, the use case that will come one day.
62
+
63
+ **Break:** \[18:25\]
64
+
65
+ **Gerhard Lazu:** You mentioned, Cyrille, about calculating, or the spans being worked out incorrectly when it comes to job allocation and agents... And that was an interesting problem that I know that CI/CD administrators have. There are many other problems... So I'm wondering, how does OpenTelemetry help the CI/CD administrators, which I think is a very important role? It's not necessarily a person that does that, it's maybe a role that many people share. So how does this help them?
66
+
67
+ **Cyrille Le Clerc:** So as continuous integration and continuous delivery pipelines get more and more complicated, with more complex orchestration, not only getting source code and compiling it to create an artifact, but now also creating a Docker image, going through security scanners, triggering deployment in preview environments, for integration tests, or for a human to test... And this gets always more and more complicated, involving more distributed systems everywhere. So this became more and more complex to keep up and running, with some scalability problems that are very difficult, because at some time of the day you have many teams needing to build, and then you want to reduce your infrastructure for cost optimization... These people had a problem that was increasing, and at the same time they had limited solutions for this, to help maintain and troubleshoot these problems. Usually, they are the last people to be notified of a problem in an organization. It's the dev team who is under pressure, who has these pipelines broken, and they get very angry, they shout at people, and it creates a lot of friction. So we felt these CI/CD administrators deserve assistance.
68
+
69
+ Something interesting that we observed as well is that observability says I need to be able to slice and dice my data in any dimension. We saw when there is a CI/CD platform problem, you have to very quickly understand if this is a problem that is impacting just one team, one pipeline, maybe because the Docker image used to build is broken, or if it's a problem that is impacting a large part of your organization. Maybe dozens of dev teams being blocked, your Docker registry is broken; if it's broken, it's unavailable. Or you have a GitHub outage.
70
+
71
+ So we wanted to provide tools to help CI/CD administrators to be notified early of problems, and being able to zoom in/zoom out to understand if the problem is impacting just one, or everybody.
72
+
73
+ Here it was a very good match with the problems that observability is solving at the moment with microservices architectures and all the investments that have been done on microservices architecture observability - automated anomaly detection through leveraging statistics, machine learning, high-cardinality metrics stores... All of these could benefit CI/CD administrators a lot. It was one of the first problems we wanted to solve.
74
+
75
+ **Gerhard Lazu:** Can you think of one example, Oleg, of a problem that this tooling helps an administrator solve?
76
+
77
+ **Oleg Nenashev:** So for the administrator, when we talk about a modern CI/CD system, it's basically a mesh of various asynchronous processes; all these processes are loosely connected, so even if you have one mainstream pipeline which delivers, actually, if you start looking under the hood, you may notice that many events, even in this supposedly single pipeline, actually depend on other factors.
78
+
79
+ For example, there might be a provisioning of agents, if we talk about the original work for monitoring. And this agent provisioning doesn't have to be synchronous. Agents may be shared between different pipelines, and hence various outages and issues will also be impacting multiple pipelines. So being able to trace these events would help me as an administrator to understand, "Okay, this agent is broken." For example, it has the wrong version of Java, due to whatever reason. And then I can go back and understand which pipelines were affected and restore them, if needed, and adjust my systems to reschedule them, so that my delivery continues, and my development teams do not waste time. Just one example. There are many like that.
80
+
81
+ **Gerhard Lazu:** \[24:06\] That's a good one. One thing that really got me in the past was caching in CI/CD systems. So when you have basically some dependencies which have been cached, and there are issues related to retrieving data from the cache, it's so difficult to even understand "Where does this fit into my pipeline? Does my pipeline depend on this other thing? What is this other thing? Does it just affect my pipeline? Did I mess up something in the caching? Maybe I'm running the wrong digest, or maybe something just doesn't interact with the caching system properly." That was so frustrating.
82
+
83
+ And you're right, there's all these changes that happen in pipelines, and we don't know why they're broken. We just know it doesn't work. Well, that doesn't help me much... And good luck debugging systems that you don't even know exist. That's an interesting proposition...
84
+
85
+ **Oleg Nenashev:** Right. But you have to introduce these systems, because caching is one of the most effective ways to reduce the costs of your pipeline. Even if you talk about things like single-shot agents, \[unintelligible 00:25:00.17\] etc, when it comes to real massive production pipelines, we tend to actually simplify the things at the top, like caching, so that we get better throughput, because it's more important.
86
+
87
+ **Gerhard Lazu:** Yeah.
88
+
89
+ **Cyrille Le Clerc:** Something that I identified also working on this visibility of CI/CD pipelines is that we often talk about a divergence between dev and ops, dev changing things all the time to deliver new features, new business value, and ops wanting stability. We see that on the CI/CD platform we have the same challenge, with CI administrators wanting a stable platform to keep it up and running, because it's mission-critical for the company... And dev teams wanting to onboard new projects, with new needs, new fancy requirements, and we wanted to provide assistance, so that people could embrace changes with confidence. And we felt that observability would be key to create this confidence to embrace changes in the CD pipelines.
90
+
91
+ **Gerhard Lazu:** That's a great point, and it made me think of flaky tests. When everything is fine and the CI/CD system still fails, and you run it again and then it passes. So I think flaky tests, when it comes to code and developers, tend to be very problematic, especially for legacy codebases, especially for distributed systems. When you have tests and you're testing distributed systems, you have race conditions left, right and center. So how does OpenTelemetry help with flaky tests?
92
+
93
+ **Cyrille Le Clerc:** So this is on our radar, to also add observability to unit test execution. There is already a solution for Go tests; it's written by Jaana Dogan, who works at AWS, where she has instrumented Go tests with OpenTelemetry. And we have the idea that it could also work on Java unit tests or any other language, and that we could as well use distributed traces to visualize your unit test execution, the duration and the outcome, success/failure.
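+ As an illustration of that idea, here is a minimal sketch - an assumption-laden example, not Jaana Dogan's actual package and not an existing Jenkins feature - of emitting one OpenTelemetry span per Go test, so a tracing backend can chart durations and spot recurring failures.
+
+ ```go
+ package mypkg_test
+
+ import (
+     "context"
+     "os"
+     "testing"
+
+     "go.opentelemetry.io/otel"
+     "go.opentelemetry.io/otel/codes"
+     "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
+     sdktrace "go.opentelemetry.io/otel/sdk/trace"
+     "go.opentelemetry.io/otel/trace"
+ )
+
+ var tracer trace.Tracer
+
+ func TestMain(m *testing.M) {
+     ctx := context.Background()
+     exporter, err := otlptracegrpc.New(ctx) // endpoint taken from OTEL_EXPORTER_OTLP_ENDPOINT
+     if err != nil {
+         os.Exit(1)
+     }
+     tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
+     otel.SetTracerProvider(tp)
+     tracer = otel.Tracer("unit-tests")
+
+     code := m.Run()
+     _ = tp.Shutdown(ctx) // flush spans before the test binary exits
+     os.Exit(code)
+ }
+
+ // traced records one span per test, including whether it failed.
+ func traced(t *testing.T, fn func(*testing.T)) {
+     _, span := tracer.Start(context.Background(), t.Name())
+     defer func() {
+         if t.Failed() {
+             span.SetStatus(codes.Error, "test failed")
+         }
+         span.End()
+     }()
+     fn(t)
+ }
+
+ func TestSomething(t *testing.T) {
+     traced(t, func(t *testing.T) {
+         // ... real assertions go here ...
+     })
+ }
+ ```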
94
+
95
+ And where I think OpenTelemetry is very powerful is that every large organization has its flaky test detector implemented in some way. People tend to reinvent the wheel. And with OpenTelemetry, with the open nature of its format, then we have an opportunity to create the backbone of unit test results, going through OpenTelemetry channels, which typically can be a Kafka stream. Then you will have the DevOps team -- I think flaky test detection will perhaps be something that an observability vendor will implement... But maybe it will be a DevOps team somewhere in an organization who will just connect to these Kafka streams of OpenTelemetry traces, create its own tool to process its flaky test report, and share this with the community.
96
+
97
+ With this open source community nature, I imagine that an open source solution will grow in the community, and leverage the fact that OpenTelemetry has a very flexible architecture, a popular technology with OpenTelemetry itself, and streaming Kafka, Kinesis or Google PubSub. I see a lot of traction, and I expect the solution to come soon in the community.
98
+
99
+ **Gerhard Lazu:** \[28:11\] So I'm sold... I definitely want OpenTelemetry in my CI/CD system. How do I get it, Oleg? What do I do?
100
+
101
+ **Oleg Nenashev:** Well, in theory, any system should include OpenTelemetry APIs out of the box. It doesn't happen at the moment because OpenTelemetry is still an emerging standard... But how I would foresee it - basically, any enterprise-grade CI/CD would include a number of OpenTelemetry collectors, so that you can just connect to them and retrieve this information. And it can be opt-in, so that you for example just enable it in your Helm charts, and then all your OpenTelemetry collection is configured - it's a building block. If you need to do something complex to enable OpenTelemetry, then it probably doesn't achieve its goal. And once a technology emerges, I would expect that every tool just adopts that, and it becomes a commodity for any system we run.
102
+
103
+ **Gerhard Lazu:** So what about today? What CI/CD tool can I use today that has this out of the box?
104
+
105
+ **Oleg Nenashev:** Well, that's a good question, because actually almost none of the tools have them. Zero.
106
+
107
+ **Cyrille Le Clerc:** There are two CI platforms I am aware of that provide native OpenTelemetry instrumentation, and they are Jenkins - with the integration I of course work on - and also Concourse CI.
108
+
109
+ **Gerhard Lazu:** What do you need to do to get OpenTelemetry in Jenkins?
110
+
111
+ **Cyrille Le Clerc:** So you just need to install the Jenkins OpenTelemetry plugin, going through your Jenkins plugins manager. And then once Jenkins is instrumented with OpenTelemetry, you have to connect your Jenkins to an OpenTelemetry endpoint backend, which can be maybe Elastic (I work for Elastic), or maybe you can use Jaeger. If you want to use Jaeger, this very popular open source distributed tracing visualization that has been created at Uber, you will need to install a small component called the OpenTelemetry Collector in between your CI platform and Jaeger, because Jaeger doesn't natively speak OpenTelemetry for the moment... And then you are good to go.
112
+
113
+ In Jenkins, with this OpenTelemetry integration we have started with traces initially, to trace pipeline execution. We have also captured health metrics. So you can also leverage our Jenkins OpenTelemetry integration to capture the health metrics of your Jenkins CI platform, route them to maybe Prometheus, or maybe an observability backend that supports both traces and metrics, Elastic being one; I work for them, but you will find many other vendors who also can consume observability signals.
114
+
115
+ **Gerhard Lazu:** And what about Otel CLI from Equinix Labs? How could we use that to get some OpenTelemetry in CI/CD systems that maybe don't support it?
116
+
117
+ **Oleg Nenashev:** It's possible.
118
+
119
+ **Cyrille Le Clerc:** That's a great point. There were two initiatives that come to my mind. I think the first one I saw came from Honeycomb, where they created a small CLI to instrument some CI platform where the platform itself wasn't instrumented with Otel... Otherwise, if you are on GitHub Actions, for example, or maybe GitLab CI, you would use Otel CLI as a wrapper when you invoke your Maven build, or as a wrapper when you invoke your makefile.
120
+
121
+ Also, even when you are inside Jenkins, inside a CI platform that is instrumented with Otel traces, it's still very interesting to get more granularity in let's say a makefile, because... You discuss a lot of makefiles in Ship It. If you want granularity on what's happening in your makefile, you can in your makefile wrap some calls using the Otel CLI tool, so that you get finer granularity in your pipeline execution.
122
+
123
+ **Oleg Nenashev:** I'm probably a bit lazy, because I just replace the shell on my agents. So I modify the shell in my Docker images, and Otel CLI is enabled by default for everything they run.
124
+
125
+ **Gerhard Lazu:** Okay, interesting.
126
+
127
+ **Oleg Nenashev:** Hackish, but it works.
128
+
129
+ **Gerhard Lazu:** Do you have an example of how to do that? That's very interesting. I would like to check it out, the code.
130
+
131
+ **Oleg Nenashev:** \[32:10\] I don't have the code with me, but basically, you can just take OpenTelemetry, you create a shell wrapper, which just sends all the commands run in this shell to OpenTelemetry... And that's it.
132
+
133
+ **Gerhard Lazu:** Okay.
134
+
135
+ **Oleg Nenashev:** It's a wrapper around everything in the environment, which is pretty transparent to your system as long as you use shell scripts. Obviously, if you use a mix of Bash, Python etc., then you will have to instrument all of these tools, which becomes a bit tricky, but it's still possible.
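+ For flavour, here is a minimal sketch in Go of what such a wrapper boils down to - conceptually what otel-cli or a traced shell does, but written from scratch here rather than copied from either: run the given command and emit one span describing it. Wrapping `make build` (or a Maven/Gradle invocation) with something like this gives you a span per step even on a CI system with no native OpenTelemetry support.
+
+ ```go
+ package main
+
+ import (
+     "context"
+     "fmt"
+     "os"
+     "os/exec"
+     "strings"
+
+     "go.opentelemetry.io/otel"
+     "go.opentelemetry.io/otel/attribute"
+     "go.opentelemetry.io/otel/codes"
+     "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
+     sdktrace "go.opentelemetry.io/otel/sdk/trace"
+ )
+
+ // Usage: trace-wrap <command> [args...]
+ func main() {
+     if len(os.Args) < 2 {
+         fmt.Fprintln(os.Stderr, "usage: trace-wrap <command> [args...]")
+         os.Exit(2)
+     }
+
+     ctx := context.Background()
+     exporter, err := otlptracegrpc.New(ctx) // OTEL_EXPORTER_OTLP_ENDPOINT picks the backend
+     if err != nil {
+         fmt.Fprintln(os.Stderr, err)
+         os.Exit(1)
+     }
+     tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
+     otel.SetTracerProvider(tp)
+
+     // One span per wrapped command; the span name is the command itself.
+     _, span := otel.Tracer("trace-wrap").Start(ctx, os.Args[1])
+     span.SetAttributes(attribute.String("command.line", strings.Join(os.Args[1:], " ")))
+
+     cmd := exec.Command(os.Args[1], os.Args[2:]...)
+     cmd.Stdout, cmd.Stderr, cmd.Stdin = os.Stdout, os.Stderr, os.Stdin
+     runErr := cmd.Run()
+     if runErr != nil {
+         span.SetStatus(codes.Error, runErr.Error())
+     }
+     span.End()
+     _ = tp.Shutdown(ctx) // flush before exiting with the command's status
+
+     if runErr != nil {
+         os.Exit(1)
+     }
+ }
+ ```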
136
+
137
+ **Gerhard Lazu:** You say, Cyrille, in one of your talks, that Jenkins in production is hard... And I know a thing or two about that, because many years ago we used to pair \[unintelligible 00:32:44.02\] CloudBees Jenkins in Pivotal Cloud Foundry, in the platform... That was many, many years ago.
138
+
139
+ **Cyrille Le Clerc:** Yeah, indeed.
140
+
141
+ **Gerhard Lazu:** And I'm wondering - today, how would you run Jenkins in production? What would you choose?
142
+
143
+ **Cyrille Le Clerc:** We use Jenkins massively at Elastic. We use it in conjunction with Kubernetes for all modern Jenkins platforms. I'm a bit further away from this, but I think it is very important to leverage the flexibility of Docker containers to let development teams customize their build environment the way they need. The need to offer dev teams the capability to customize their build environments with Docker, combined with the orchestration and the scalability needed by a CI platform, leads me to believe that you should leverage Kubernetes for this.
144
+
145
+ **Gerhard Lazu:** Would you agree, Oleg?
146
+
147
+ **Oleg Nenashev:** Yes and no, because you'd better deploy your CI system so that it is similar to your target environment, especially if you want to do integration tests; and based on that, there are a lot of "it depends". So if you deploy cloud-native applications, then yeah, most likely you will have to run Jenkins on Kubernetes. But it's not necessarily the case.
148
+
149
+ What I would like to say, if you talk about modern Jenkins management - everyone has heard about Jenkins plugin hell, and other things... And it's totally the case. But these days you can fully manage Jenkins using configuration as code, and you basically create a CI/CD pipeline for your automation system configuration as well. You really have to be just Jenkins, because it can be infrastructure as code... Yes, I would definitely recommend packaging Jenkins into containers, and there are tools for it, there are Helm charts, there are operators provided by the Jenkins community... But on the low level, you should always know what you run, and you should be able to deploy, stage and verify on your instance whatever your target environment is.
150
+
151
+ **Gerhard Lazu:** Okay.
152
+
153
+ **Cyrille Le Clerc:** Here's something else on the way to build your continuous delivery pipelines, and related to Jenkins a bit more broadly... A topic you discussed last time, when you met with the Dagger people, is that it's important to be able to run your CI pipeline, to test it, to develop it on your local computer. There are two initiatives that come to mind - correct me on this one - one was Rod Johnson with his Atomist company, and the other is Dagger, who said it's very important to be able to test locally during the development cycle of the pipeline. I think when you design your pipeline, it's important to have as many fragments as possible that you can test locally. So I believe in the idea that you should have as little logic as possible in your CI's proprietary orchestration language, and that you should group these typically in makefiles, to help the stability of the system.
154
+
155
+ **Gerhard Lazu:** Okay. Oleg?
156
+
157
+ **Oleg Nenashev:** Firstly, I agree that you should be able to test locally, but that doesn't mean that you cannot use pipeline definitions... Because many modern systems actually allow running pipelines locally. It's not just Jenkins... So for Jenkins we had Jenkinsfile Runner, for TeamCity you can run the Kotlin DSL, and for GitHub there are projects as well... And it basically imposes this \[unintelligible 00:35:50.05\] So if you have proper configuration management for a system, if you can reproduce your production CI/CD environment locally, for example if you run your CI/CD system in a container, you can easily do local development and create complex pipelines.
158
+
159
+ **Cyrille Le Clerc:** \[36:06\] That's a good solution.
160
+
161
+ **Gerhard Lazu:** We will talk about pipeline development, what that looks like... But I would like to go back to the production question. How do you deploy Jenkins in production? I think Cyrille was mentioning Kubernetes... You would deploy Jenkins, a production deployment, and you would manage Jenkins via Kubernetes. And I imagine a Helm chart, or an Operator? Which way would you go, Cyrille?
162
+
163
+ **Cyrille Le Clerc:** I am not knowledgeable enough.
164
+
165
+ **Gerhard Lazu:** Okay. What about you, Oleg?
166
+
167
+ **Oleg Nenashev:** I would go with the Helm chart, to be honest, because a Helm chart allows you to be more flexible in terms of defining the system. An Operator has a lot of advantages if you want to build a reactive system, which is basically based on Kubernetes APIs - it reacts to some events, it automatically scales etc. But for Jenkins, in my experience, it's not always needed. It can be used in particular use cases. So I would go with Operators only if I was building a highly available Jenkins solution, where I would be managing controllers, automatically provisioning them, and if I had shared context between them.
168
+
169
+ **Gerhard Lazu:** Okay.
170
+
171
+ **Oleg Nenashev:** Right now it's not quite possible with stock Jenkins, so I would rather go with the Helm chart.
172
+
173
+ **Gerhard Lazu:** In that world, where you have a production deployment of Jenkins using Helm, how would you configure the pipelines? How would you configure Jenkins, and then how would you configure, for example, the agents themselves? Where would that happen? What would that look like?
174
+
175
+ **Oleg Nenashev:** Everything as code, because currently, if you talk about pipelines, if you use a Jenkins pipeline, Job DSL, all these technologies can be stored as code in your repository in parallel with your project, so that when you build your project, you have a pipeline and you can test them all together... And basically, the same for agent definitions. For example, if you use a Kubernetes plugin, you can store an agent definition, again, in the same repository, so that you have your build system within your project and it's portable. Or you can have it separately if needed, but still, it should be defined as code somewhere... And I would argue that actually the entire combination of Jenkins - so for us it's a server itself, plugin, configuration, the pipeline libraries we use, and default pipeline building blocks - all of them should be just one deliverable for the end system, and this deliverable should be tested in its own CI/CD pipelines, so there is much less opportunity for mistakes at the end user pipelines.
176
+
177
+ **Gerhard Lazu:** From the perspective of code, config-as-code, do you mean just config, like Yaml, or some other format? What does that code look like?
178
+
179
+ **Oleg Nenashev:** Yes. So if we talk specifically about a Jenkins pipeline, historically it uses Groovy DSL. So it's a Groovy-like language, with some security and context requirements for failover, but it looks like Groovy, and there are multiple ways to define it. Firstly, it can be a scripted pipeline, which is basically just Groovy DSL; it can be declarative pipeline, which gets it closer obviously to a declarative syntax. But you can also deploy them as Yaml these days.
180
+
181
+ **Gerhard Lazu:** Okay.
182
+
183
+ **Oleg Nenashev:** So it's your choice how you actually implement them, and Jenkins as a tool supports both modes.
184
+
185
+ **Gerhard Lazu:** And would you configure Jenkins using the Kubernetes API, or would you target the Jenkins master node directly? How would that work?
186
+
187
+ **Oleg Nenashev:** In my case, I would rather use Jenkins for agent management, because if you put it in Kubernetes, it will still be a question of how you actually get these configurations into Jenkins... And ultimately, it doesn't matter, because it's still a system in the same repository. It doesn't matter how exactly it's deployed. Kubernetes inside Jenkins just gives you more flexibility, because if needed, you can change it in flight, without redeploying significant parts of your system.
188
+
189
+ **Break:** \[39:41\]
190
+
191
+ **Gerhard Lazu:** Oleg, I would like us to come back to the conversation that we started having and we've put a pin in it, around separating the CI from the CD concerns in your system, which gets code out into production. What do you think about that? Do you think you should separate them or you shouldn't? And why.
192
+
193
+ **Oleg Nenashev:** I would say that generally, you should.
194
+
195
+ **Gerhard Lazu:** You should. Okay.
196
+
197
+ **Oleg Nenashev:** Yes. It might still be the same service per se in terms of deployment, but logically, CI and CD pipelines are significantly different. So there are different requirements, there are different implementation paradigms... So when you develop your delivery system, you would rather split that. For example, if you create a script, you shouldn't write a combined build-and-deploy makefile target. You just create two, with separate implementations, and you can maintain them separately and modify and test them separately if needed. This is the main difference.
198
+
199
+ If you talk about CI/CD as systems, I would rather say it's an implementation detail, because what we want is that systems work for our use case. If they work, it's perfectly fine.
200
+
201
+ **Gerhard Lazu:** I know that in a previous episode we talked about using something like GitHub Actions for the CI part, which builds, gets the dependencies, runs the tests... And then something like Argo CD for the deployment part, where you have the artifacts, and then Argo CD just reconciles whatever runs in Kubernetes with the artifacts that were produced by our CI system. And I felt that was a good idea. What do you think, Cyrille?
202
+
203
+ **Cyrille Le Clerc:** Something that comes to my mind here is that we are in a world where we want to automate more and more the deployment of what we produce. So even if we decide to use two tools, or maybe to put some boundaries for security constraints, security of the supply chain process, we still need a very automated way to trigger the deployment from the continuous integration phases. And in this sense, I am wondering if it's more a delineation of tools, for some reasons like best tool for the job, or security... But your two processes remain completely connected together, maybe with a kind of GitOps approach, where a Git Yaml manifest is sitting between the two processes... But the processes would remain integrated and connected together.
204
+
205
+ **Gerhard Lazu:** Well, I can tell you, what we changed about the whole Changelog setup a couple of years back, where we decoupled -- we used Concourse, by the way, to run the builds, run the tests, and even deploy. That's what we used in the past. And we used Ansible and Concourse; that's what the setup was. And then I think 2019, if I remember correctly, we went to managed CI, so we started using Circle CI for the steps build and test... And it stops currently today, depending on the branch. So the master branch is the one that produces a container image, which gets pushed to DockerHub. And that's where the CI part stops. As for the CD part, we use something called Keelsh, and we're meant to replace it, but that's what we even today make use of, Keelsh, to watch the image... And when there are changes to the image, it will pull down the latest version automatically; there's nothing to be done, because you always want to run the latest version.
206
+
207
+ So in that world, we can have multiple copies of production, whatever that means, and all we have to do is tell it "This is the artifact(s) that we want you to run. Whenever there's an update, run the latest." So we decouple the deployment concerns from the integration concerns, and we can change the CI, we can produce those build artifacts whichever way we want, even locally if we really want to... Not a good idea, but it could be done. And it works. I'm not saying it's the best way, but it's what works for us.
208
+
209
+ **Oleg Nenashev:** Yeah, it's a good approach, because the CD system will be eventually more complex than CI, even in this case... Because it's nice to say that we just download the artifact, but when it comes, let's say, to failover - failover is a must for CD - then of course, various kinds of scalability concerns... Then you get a huge CD system, and having proper tools for that is definitely nice.
210
+
211
+ **Gerhard Lazu:** \[45:59\] This is a question for you, Oleg... What does your process of developing a CI/CD pipeline look like?
212
+
213
+ **Oleg Nenashev:** So in my case, I develop pipelines locally. I mostly use Jenkins (surprise, surprise). I also use GitHub Actions quite a lot. In both cases, I run pipelines locally, I verify them... And in both cases, I try to minimize the amount of code and business logic that goes into my user definitions, whether it's Yaml, or whether it's Jenkinsfile, because I want to have a library of common steps... For example, if I deploy my application, like publish to Docker Hub, it's just a common step. Or if I build a Maven project, it's still a common step.
214
+
215
+ It happens usually that there is a pipeline library that implements these steps; well, these pipeline libraries, especially in Jenkins, you can create test frameworks, you can verify them. Finally, I end up with my pipeline itself just having several lines of code, which is basically configuration, not the pipeline definition itself. Though the pipeline exists separately, as a separate deliverable, which is verified, which is tested against various configurations, and which can be reused quickly should I decide to implement a different pipeline, for example should I decide to change how I deploy the system, or even how I build the system.
216
+
217
+ **Gerhard Lazu:** And do you have an example that you can share with us, for us to see what that looks like, the end result of that process?
218
+
219
+ **Oleg Nenashev:** One of the examples you can take a look at is the jenkins-infra/pipeline-library - this is the Jenkins pipeline library we use for building Jenkins components. You have something like 1,800 plugins available now in the update centers, and basically we have two standard ways right now, Maven and Gradle. So for this, of course, we offer a pipeline library. It is very complex inside. For example, there's a common step, buildPlugin, and it has something like 300 lines in the pipeline library... But for end users, like our Jenkins plugin developers and maintainers - they just get this buildPlugin step where they pass several options, like whether they want to build on Linux or Windows, which Jenkins core versions they want to test against... And that's it. So it's basically one or two lines; you can take a look, I'll share them \[unintelligible 00:48:07.27\] it's all open source, and it's all accessible. Take a look.
220
+
221
+ **Gerhard Lazu:** I will. Thank you for that.
222
+
223
+ **Oleg Nenashev:** And there is test automation for both unit tests and the integration tests there.
224
+
225
+ **Gerhard Lazu:** Thank you. I'll definitely check that out, and I'll also include it in the show notes. Cyrille?
226
+
227
+ **Cyrille Le Clerc:** Listening to you, it reminds me of something that I saw when I was working on continuous delivery/continuous integration, when I was project manager at CloudBees two years ago... It's the importance of standardization of the processes. We should manage the CI/CD pipelines of applications and microservices as cattle, not as pets. I see the same question with observability, where the observability of your different applications and microservices in your organization should also be managed as cattle rather than as pets... And I think this is a very important thing for your operations to remain sustainable.
228
+
229
+ **Gerhard Lazu:** Speaking about important things... Dan Lorenc was saying this: "Your build system should be at least as secure as your production environment." What do you think about that, Cyrille?
230
+
231
+ **Cyrille Le Clerc:** Yes, so we have seen it last year with the supply chain attacks that have been visible... It's also something for which we are thinking about on the OpenTelemetry instrumentation of the continuous delivery pipelines, where we see the importance of capturing all the trails of the CD processes, including the logs, as something critical. And we think that using OpenTelemetry, it will be easier than ever to route all your audit trail of your release process, the build process of what you ship in production, to route them directly in this very secure, long-term, cost-effective storage, being your logs management system; it could be maybe an S3 bucket, or maybe, let's say, your Splunk, Elastic, or you name it, long-term storage.
232
+
233
+ So this is what comes to my mind... And then there are other requirements for the CI/CD companies, but I am less involved in this at the moment.
234
+
235
+ **Gerhard Lazu:** \[50:13\] How do you think about supply chain security within the CI/CD space, Oleg?
236
+
237
+ **Oleg Nenashev:** I definitely support this topic. It's very important. When SolarWinds was announced one year ago, we actually had a Jenkins governance board meeting, and then a discussion at the contributors summit, and we decided to prioritize supply chain security as one of the major topics for this year for the Jenkins community.
238
+
239
+ If you have seen that, there are a lot of activities on this front, for example dependency updates... We have invested quite a lot in tooling, in dependencies scanning, in bills of materials, so currently we can produce BOMs for components, if needed... And indeed, this is important. And it's important for us, because we are a second-level supplier; we depend on so many libraries, and we need to verify them, but we also need to provide a good level of trust, so that the users of Jenkins and of our systems can safely deliver their software.
240
+
241
+ **Cyrille Le Clerc:** Something that comes to my mind here is something that I touched on when I was working on CI, and that I also see now that I work on observability - the importance of capturing the right information in the bill of materials, and I think it's also an incremental journey. First you build in your Docker environment, but if you don't capture exactly the \[unintelligible 00:51:29.15\] of the Docker image that was used to run your build, it's too late. You will not be able to re-understand it 12 months later.
242
+
243
+ So I think there is an incremental journey. It's a continuous exercise to verify that the data you capture in your build are good enough to understand what actually happened. You mentioned the usage of cache system, do I capture all the details to understand what artifact was retrieved from my caching system? Have I been poisoned? And this is a never-ending exercise, in some ways, to always capture the right metadata on your build.
244
+
245
+ **Gerhard Lazu:** Is Captain Obvious involved in any of this, Oleg?
246
+
247
+ **Oleg Nenashev:** Yes and no, because I'm currently building a prototype which integrates Jenkins, OpenTelemetry and Captain... But for me, the main objective is to actually expose more information about quality gates, so that when we deliver software, we can verify that all delivered items basically meet all the matching criteria. So currently, Captain is mostly built around cloud events, which is probably a topic for a separate discussion. Captain exposes OpenTelemetry metrics on its own, so you can understand what happens inside Captain when you analyze, for example, quality gates etc. But it would also be great to have integrations in the other direction - CI/CD systems supplying information about the status, metrics and all deployment parameters to tools like Captain, so that they can make decisions and the system stays compliant with the expectations of our CI/CD admins.
248
+
249
+ **Gerhard Lazu:** How can we follow up on what Captain is up to these days? Captain Obvious, specifically...
250
+
251
+ **Oleg Nenashev:** Well, Captain Obvious - it was just a sneak peek into my talk, which is coming soon... And yes, it's talk-driven development, because I needed to implement a few \[unintelligible 00:53:14.27\] So stay tuned. There might be an announcement in a few months.
252
+
253
+ **Gerhard Lazu:** Okay.
254
+
255
+ **Oleg Nenashev:** Captain itself is basically a project, a member of the Cloud Native Computing Foundation; it's currently a sandbox project, and there are discussions about making it an incubating project. It has a quite vibrant community, there are meetings every week, including today, developer or user meetings. So if you want to join the community, you're welcome to do so. I just joined.
256
+
257
+ **Gerhard Lazu:** That's a good shout-out. Okay. So there's a question that I've been dying to ask since we began this recording. What made you move to Switzerland, Oleg?
258
+
259
+ **Oleg Nenashev:** I moved to Switzerland because CloudBees is based there. Actually, I joined CloudBees when I was in Russia, but due to various non-technical reasons, it was more reasonable to have me in Switzerland than in Russia... And yeah, I got an opportunity, and Switzerland is a nice country...
260
+
261
+ **Gerhard Lazu:** Right.
262
+
263
+ **Oleg Nenashev:** \[54:09\] For the record, I'm a big fan of Scandinavia, but Switzerland is good, and why not. I moved there.
264
+
265
+ **Gerhard Lazu:** How long have you been in Switzerland? How long have you been living there?
266
+
267
+ **Oleg Nenashev:** Five and a half years.
268
+
269
+ **Gerhard Lazu:** So that's a long time to really appreciate the country... Unlike six months, and it's like the honeymoon period. Okay...
270
+
271
+ **Oleg Nenashev:** I like this country, and I like the city where I am, because I'm in the French-speaking part, and there are a lot of advantages here.
272
+
273
+ **Gerhard Lazu:** Which city are you in Switzerland?
274
+
275
+ **Oleg Nenashev:** Neuchâtel.
276
+
277
+ **Gerhard Lazu:** I think one of the advantages was you not needing a car, right? And you being very excited about that, where the public transport is really good. Okay... So as we are preparing to wrap this up, I'm wondering what is the most important takeaway for our listeners, Cyrille?
278
+
279
+ **Cyrille Le Clerc:** Thank you. The most important takeaway for me is the importance of the open source and standard nature of OpenTelemetry to succeed, to observe CI/CD pipelines, both to succeed in instrumenting these very rich communities of tools involved in the CD processes, and also communities that will consume all the observability data we produce, which are not only CI administrators, but as we have said, also developers for their pipelines, people doing cost accounting, people doing reporting on the delivery process... And CI/CD data are gold mines that we succeed in exposing thanks to the popularity of this open source standard which is OpenTelemetry.
280
+
281
+ **Gerhard Lazu:** Okay. That's a good one. What about you, Oleg?
282
+
283
+ **Oleg Nenashev:** I totally support this statement. Data is the new oil, and it applies everywhere, including the CI/CD world. Actually, you can use this data not just to analyze it and optimize your pipelines, but also to make decisions... Because the same approaches - artificial intelligence etc. - apply not only to production systems and use cases, not only to \[unintelligible 00:56:00.12\] but also to your CI/CD. Because once you analyze tests properly, once you can get better insights into tests and coverage, once you can show developers what the issues are, you can actually improve developer velocity a lot, and you can reduce costs for your development, and more importantly, you can shorten your delivery cycle. So this data which is exposed by OpenTelemetry is essential to actually improving your pipelines to the next stage.
284
+
285
+ **Gerhard Lazu:** The thing which gets me really excited is that regardless what system you're using, as long as you emit OpenTelemetry events, you can get the same view, even when you switch between systems. That gets me really excited, because then you're free to mix and match... It doesn't really matter, just pick the right tool for the right job, but we will understand the same things, even when you move between systems. I think that's really exciting.
286
+
287
+ **Oleg Nenashev:** Yeah, it's exciting.
288
+
289
+ **Cyrille Le Clerc:** And when you operate with multiple systems in parallel, which is what happens in real life in not-so-small and large organizations.
290
+
291
+ **Oleg Nenashev:** I'm looking forward to laying the foundations, and to various working groups starting to work on specific standards for OpenTelemetry, so that they actually standardize the events. Because right now it's still an open question. So it's a very idealistic view that every CI system exposes the same events, the same metrics, and the same logs. It's not the case yet, and there is a lot of standardization work to happen. I see such work, for example, happening in the Continuous Delivery Foundation for CD events.
292
+
293
+ **Gerhard Lazu:** Oh, yes.
294
+
295
+ **Oleg Nenashev:** But for OpenTelemetry, I would like to see that as well.
296
+
297
+ **Gerhard Lazu:** That's a good point. You're right. It's still very early days. As you mentioned, this whole new ecosystem is still very young. It only just started maybe a year ago, two years ago... It's very recent anyways.
298
+
299
+ **Oleg Nenashev:** Yeah, it's just a sandbox project in CNCF these days... But I hope that it will become incubating very soon, because the adoption for OpenTelemetry is already massive, and there are so many players in this space... So from my point of view, it's totally justified that it's transferred to incubating.
300
+
301
+ **Gerhard Lazu:** Is there anything coming in the next six months that you want to share with us, Cyrille?
302
+
303
+ **Cyrille Le Clerc:** We have just donated the OpenTelemetry Maven integration to the OpenTelemetry community... So it's moving fast, and we get feedback, and we are progressing faster. It's great. The OpenTelemetry Ansible integration - we have donated the Ansible integration to the Ansible community itself. We are iterating at the moment and we are rolling it out inside Elastic to really battle-test this... So it's moving as well. These are great milestones for us to expand the ecosystem of tools that we integrate.
304
+
305
+ **Gerhard Lazu:** Oleg, what about you?
306
+
307
+ **Oleg Nenashev:** It's kind of public, I'm changing jobs... I still cannot announce what's the next one, but be sure it will be quite interesting; it's around open source, it's around observability as well, and I will definitely keep working with Cyrille and many other contributors in this area. Looking forward to it. We'll keep working on Jenkins. I will be publishing my vision for Jenkins, and some bits of the roadmap in the coming weeks, so that if you're interested to see the Jenkins evolution - the community is strong, there are a lot of different developments happening here and there... Yeah, I'm looking forward to see what we ship to the users in just a few months, maybe years.
308
+
309
+ **Gerhard Lazu:** Well, this has been a great discussion. Thank you very much. There's so many things I need to check up on now, all very exciting things, and I look forward to what happens in six months in this space, because it's really interesting it just ties so many things together. I'm very excited. Thank you very much for today.
310
+
311
+ **Cyrille Le Clerc:** Thank you very much.
312
+
313
+ **Oleg Nenashev:** Thank you.
OpenTelemetry in your CI⧸CD_transcript.txt ADDED
The diff for this file is too large to render. See raw diff
 
Optimize for smoothness not speed_transcript.txt ADDED
@@ -0,0 +1,201 @@
1
+ **Gerhard Lazu:** So in my career, I have been part of many teams that just sling code, or features, or business value, depending on who you talk to. But sometimes that did not feel right, just slinging code, slinging stuff. Yes, you should ship and learn quickly, very important... Constantly challenge your assumptions, very important. But there is such a thing as doing it right and fast, and doing it badly and fast. So what is that difference, Justin? What do you think?
2
+
3
+ **Justin Searls:** Yeah and that's the sort of -- I've been on both sides of this conversation; as an entry-level developer, feeling like I had just an infinite amount of pressure, both from on high, wanting more things shipped faster than was physically possible, pushing constantly to just get features out the door, or to get this thing delivered, where there was a failure to communicate between me and the people managing me... Especially early on, I didn't know how to discuss things like software complexity or where my time was going. And to feel that pressure coming from above, feeling it kind of like sympathetically through my peers, who were feeling the same pressure and kind of pushing on one another to try to make that pain go away... And then the personal pressure on myself, where I was literally starting from a place of incompetence. And by incompetence, I mean could not independently build the thing I was being asked to build without significant help, significant research, significant learning.
4
+
5
+ \[04:06\] And I'm at a point now of relative competence, but it's taken me 20 years to be able to realize the software that I want to build as I build it. But until I got to that point, I needed the safety of being able -- psychological safety, as well as the vulnerability, in a social sense, to be able to communicate with people around me about like "Hey, I need time to figure this out." Or it needs to be okay for me to ask a question about how this works.
6
+
7
+ And so, in the beginning of my career, I viewed your question of just slinging code versus getting stuff right, almost entirely through the lens of these social pressures that others placed on me, that I imagined others placing on me, and that I placed on myself, and it was very difficult for me to escape that.
8
+
9
+ Later in my career, as I started to move into either non-technical roles, or helping teams in a way that was purely advisory, you'd see teams that even in the absence of pressure, they would still really struggle to get any kind of traction towards delivering anything.
10
+
11
+ And I would talk to very well-intended VPs of engineering or CTOs about "How do I, without downstream pressuring people, and giving them deadlines and cracking the whip, so to speak, get the outcomes that I want?" And the answer, then and now, seems to be that the autonomy needs to be met with some sort of healthy alignment, drive, engagement, excitement, positive energy around like just wanting to accomplish the thing together as a combined group. And unless those motivations are both present and healthy and rewarded and aligned, you can really struggle, I think, as a team, to find a good cadence.
12
+
13
+ I think there's a reason why we keep talking about words like velocity, speed, "How fast can we go?" And I think to somebody who's new, they might think that that's all about how fast you can type, right? Or how fast you get features out the door. But really, I started to think about it in terms of not speed per se, but fluidity; how much friction is there day to day in people's lives, and how organically are they able to take an idea, communicate it into a product feature, or aspect, or stakeholder concern, and then prioritize that and get it scheduled and worked on and delivered and shipped into production, and validated, and so on and so forth? How smooth of a process is that, versus how fractious?
14
+
15
+ And if we're going to optimize for one thing, it's probably smoothness over speed, per se. And it's difficult, because it sounds like a little bit like woo, I think, to both developers who just want to focus on the technology, and to managers who just want their project done yesterday.
16
+
17
+ **Gerhard Lazu:** Yeah.
18
+
19
+ **Justin Searls:** So I don't know... Long-winded way to maybe not answer your question.
20
+
21
+ **Gerhard Lazu:** No, I think that was a very good one, because it just showed how much complexity there is, in that answer. And this is complexity that comes from experience, that comes from the real world, all the situations that you have been in personally, and I know that many can relate to you.
22
+
23
+ What I can relate to the most is that velocity. It really doesn't matter how many points you deliver in a sprint; it's not about that, it's about how you can keep that consistent, not over a few weeks or a few months, but across years. Over a couple of years, how can you consistently maintain a speed that's healthy, that you can build on top of? So that complexity, when it comes - because it will always come - doesn't affect that consistency. That is what a healthy delivery mechanism or delivery team looks like to me. It's never about how many points, it's about month-on-month, year-on-year, can you keep that up? And if you can do that, well, the sky's the limit.
24
+
25
+ \[08:16\] I think, related to this, there's another thing which keeps coming up very often - going in the wrong direction, regardless of the speed, will always be wrong. So what would you say about that, about knowing where to point teams, especially the ones that have to collaborate?
26
+
27
+ **Justin Searls:** Yes, that's a great question. And I think that a guiding light for me on the most successful teams that I have either been a part of or that I have witnessed, has always been a shared and common just understanding of what their purpose was. So I was part of an organization, a consulting company, just prior to founding Test Double. So we founded Test Double in 2011, so it was like 2009-2010. And they were in that era where it was known as like an agile software consultancy. And so they were peddling, pushing their own kind of blend of agile engineering practices like Scrum and extreme programming. But they did an interesting thing in their sales process of really pushing business value.
28
+
29
+ And so if user stories rolled up into like Epics -- Epics, in their sales parlance and also in how they practiced and delivered, would roll up into business value stories, or value stories. And we would start each engagement by actually getting the whole team in a room - developers, QA, product owners, business stakeholders alike... So it wasn't behind some secret veil of like a product organization. I didn't even know that might be considered desirable in certain organizations; I was sufficiently naive to this experience. And what was great about it was we would just have an open and honest put up on the board, like "Hey, executive or stakeholder or person who brought us in here to like build this thing, how is X, if delivered as conceived, going to make or save your company money?" And just boil it down.
30
+
31
+ And first of all, a lot of executives, it turns out, are uncomfortable with being put on the spot to answer what should be a simple question like that... But when you really sat with it, and as a team forced the conversation out, and then you followed through, not just on -- I don't know... Here's a project example that we did - currently, our system is so slow that sales reps who go to restaurants to sell food supplies end up just spending multiple minutes just waiting for pages to load, and they could hit three or four more restaurants a day if it was fast. And that would result in like X dollars. And we'd follow through and be "Okay, so what is X? How would you measure X? How will we assess that X has been attained after we've delivered it?" And not only in the kind of initiation phases and discovery of the project, but how will we, on an ongoing basis, track that as the primary metric for success for this project, as opposed to arbitrary story points, right? Because there's no way to know whether you're going in the right direction or the wrong direction if you don't have a shared understanding of what the point of the thing that you're building is. And most software teams, they don't know what the point of the thing that they're building is. Or in this day and age, to know it would be to not want to work on it anymore... You know, whether for ethical reasons, or just because a lot of the stuff that gets built these days is kind of slimy.
32
+
33
+ And even though, in my practice at Test Double, our clients work on fantastic and wonderful products, I think that we have sort of settled into this default relationship where product throws over "Here's the features that we want" and "Here's the things that we need." There's a disconnect at the developer level, at the team and engineering level, where we lose sight of, or aren't really bought into, or aren't really included in the discussion of "But why?"
34
+
35
+ \[12:11\] And I have seen teams where developers know the answer to "Why", and when a product owner says, "Hey, here's how these comps should go... And you click this and then you click this, and then you click this", a developer who knows what the ultimate goal is, in terms of like business value or whatever the overall organizational objective trying to be met is, can successfully have a real two-way discussion with that product person and push back, or offer alternative ideas, or even find shortcuts that would make things faster. And in the absence of that, everyone just becomes an order taker... You know, I receive these marching orders and then I go and build the thing. And I think that sleight of hand is what actually facilitates and enables a lot of the negative externalities that we see in our industry.
36
+
37
+ **Gerhard Lazu:** This resonates with me at many different levels. I've seen a lot of what you've just said in Pivotal Labs. I've seen this in the IBM Bluemix Garage. These are the things that, you're right, were the most important ones from the beginning. I like the engagements, that engagement mentality, I like the focus on business value and customer outcomes... So all that makes perfect sense, and that is a very powerful reason to do things and to ship software. And you can correlate those points to business value. That's amazing.
38
+
39
+ However, I've also seen a different side of the coin, where you're working on a software that gets shipped and others get to use, to implement their own things, like for example a database or a proxy, or whatever else, but it's like more technology-oriented. What do you think the equivalent why and the equivalent business value is in that case?
40
+
41
+ **Justin Searls:** If you're building a developer-focused tool - and this could be a paid database, like a Snowflake, or something like that, or an API... Or it could be open source and it could be completely free - I think it's still important to understand that when developers are your customer, they are still human, and should probably be treated much the same way as a naive non-technical user of a software system that serves naive non-technical users all the time.
42
+
43
+ In general, I suppose -- to clarify your question, are you asking specifically about how this applies when the overall objective is less about making money and more about meeting somebody's unmet need with technology?
44
+
45
+ **Gerhard Lazu:** Well, I think with the software that we write, everybody's trying to make money. But I think sometimes the relationship between making money and writing the software is clearer... Such as, for example, when you write like for business-facing, customer-facing products. But if you have a software that you build that then gets used, and then you have, if you imagine, services attached to that.
46
+
47
+ So let's take, for example, MySQL. Let's say that you're selling MySQL and you're building MySQL. I mean, sure you have the licenses that MySQL has, or maybe you have a service that you offer, which MySQL is part of, but then the value is less clear, because you're not building the software to, as I said, sell licenses. Someone is using it, as part of the service, to deliver value to other users. And in that case, I think the value is less clear. So do you see it differently?
48
+
49
+ **Justin Searls:** I think that what you're describing could be phrased as like a different vector, where some products are just obvious. Like, if I was hired as the Chief Product Officer of a company that made branded sweatpants, and you could put like any college name on those sweatpants that you wanted, my job as a product officer would be pretty straightforward, right? Like you can already imagine what that app would look like.
50
+
51
+ **Gerhard Lazu:** \[16:16\] Yup.
52
+
53
+ **Justin Searls:** And so I wouldn't have to really provide a tremendous amount of detail and subject matter expertise. If my business were to be all about and be focused on, and I'm hired as chief product officer to facilitate the FDA approval of highly regulated pharmaceuticals - that job sounds a lot harder, and I hope it pays a lot more, right?
54
+
55
+ **Gerhard Lazu:** Yes.
56
+
57
+ **Justin Searls:** So I think the same holds for kind of what you're saying in terms of if I'm building a database engine. That is a very, very challenging product category, because it requires -- and when you think about it, what are the things that are in common between that and the pharmaceutical case? It's like, tremendously deep subject matter expertise, and probably a lot of vision, some big dream that a product person can articulate and get other people on board with and break down into smaller reducible problems... And sometimes our wires get crossed, because I think developers and software people, because we are users of a lot of the stuff, we're able to dogfood, like use the tool as we're building it, you know, sometimes in a bootstrap way, to build the tool itself... We can sometimes underrate the value of smart, thoughtful product as it pertains to technical solutions that we ourselves could very obviously see ourselves as consumers of.
58
+
59
+ **Gerhard Lazu:** I think that makes a lot of sense. And it just goes to show that sometimes the complexity in the code that you build, and everything around it can make it difficult to answer that "why". I mean, you should still do it, it's still very important, because if developers, software engineers, however you want to call them, are detached from the "why," why they do what they do, then how can they find all the good things that make what they build good? And how can they get excited about it? How can they be creative and innovative about their work? So I think they go hand in hand and they're very, very important.
60
+
61
+ **Justin Searls:** Totally.
62
+
63
+ **Gerhard Lazu:** Okay. If you were to describe a development pace that feels sustainable and healthy to you, what would that look like?
64
+
65
+ **Justin Searls:** You know, that's a really interesting question, because for me - and it might just be a function of getting older, of being around the bend a certain number of times on the cadence of different projects... Especially earlier in my career, I'd feel the ups and downs that came with software development a lot more intensely. I got married at the beginning of my career...
66
+
67
+ On Monday, say, I would grab a new feature, and I would immediately feel overwhelmed, and I'd feel like I was drowning in complexity around all this stuff that I didn't know, and I would just panic. And on Monday night, I'd come home and my wife would see me in this state and she'd try to console me, right? And then on Tuesday, my asynchronous brain would have a chance to think about the problem, chew on it, and I'd make some kind of forward progress, somehow... And I'd feel the wind at my back, and I would feel hope and inspiration. I'd come home that evening and my wife would see me in a better mood, and you know, she'd be like, "Oh, great. He's better. He's over this hump." And then by Wednesday I'd run into another blocker and I'd be in the same pit of despair again.
68
+
69
+ So what I noticed early on was that I'd have these really high highs and really low lows, and enough so that other loved ones in my life were able to kind of predict my mood based on what they'd seen from me the previous two or three days.
70
+
71
+ \[19:58\] And I say that because, to answer your question, I think that it's a very -- I want to like acknowledge and recognize that there are aspects to this work that are deep and creative, and require a lot of asynchronous chewing to successfully build and see the right solution. So even if you could just like, to your point, sling stuff really, really fast, sometimes features are a little bit better if you just take a more deliberate pace and allow yourself an overnight. Right now I'm in a role where I'm kind of split between duties, and I've found that it's actually been really nice that I have a few focused hours to work on software in the morning, and then I get racked with a whole bunch of meetings, but in the asynchronous time where I'm not explicitly thinking about it, I can come at it at the next day and have like a gust of inspiration. And if you think of the stuff that you write as being not just an inevitability of like percent to complete, but that the outcome actually changes based on a whole bunch of stuff that goes on in our brains that we don't really understand, I'm almost trying to -- and I feel like I'm almost you know, describing, some sort of acoustic singer, songwriter... You know, get a particular vibe going, you know... But I feel like what I would want to capture on a multi-person team level is a sense of that same sort of productivity, right? Like, you should feel challenged, you should end some days feeling like you're up against the wall, and you should have enough time to give things a little bit of space to come at them from different angles the next day. But if like a feature is taking, I don't know, a week or two weeks, other human factors sink in. You might just feel disillusioned, or disengaged, or dispirited, and other "dis" words.
72
+
73
+ And so I think that there is a boundary almost on like us as biological organisms. There's probably an answer there, of different spectrums for different people, for sure, but there's probably something about the cadence of just the way that our brains work, how we exist as social creatures... And that's probably where I'd start digging to give a good answer to that question, which is probably very unsatisfying for a lot of people.
74
+
75
+ **Break**: \[22:21\]
76
+
77
+ **Gerhard Lazu:** So I've noticed, Justin, that you had just started a Twitter poll recently. And the Twitter poll is -- this is the question: "Has the Emergence of DevOps Sped Up or Slowed Down Teams' ability to Deliver Software Overall?" That was an interesting question. I'm wondering what the responses have been so far. I know we'll still go for another hour, but first of all, what made you ask that question, and what are people replying?
78
+
79
+ **Justin Searls:** Yeah, because I have not the most healthy relationship with distractions throughout the day, I have to admit, I've only kind of glanced at a few of the replies... But the reason that I asked the question is because I think a lot about how the advent of sort of mainstream open source software - and that began, I think, in the mid-aughts, when... You know, I experienced it in the Java community, because of what the Java vendors were selling; enterprise Java systems and stuff were not particularly well designed or usable, and it created an opening for a lot of open source Java tools, chief among them probably Spring, and the Spring family of brands - to be the first thing that a lot of people in large organizations used... And that was open source.
80
+
81
+ \[24:35\] So I got down that rabbit hole -- of course, it was still incredibly hostile to actually try to contribute to these things, and if you weren't a Unix hacker who was super-comfortable in mailing lists as a modality for how to communicate with humans, it was not at all welcoming. But the advent of GitHub, of course, changed all that. You know, once you got over the hump of learning Git.
82
+
83
+ **Gerhard Lazu:** So you had 227 votes so far...
84
+
85
+ **Justin Searls:** Yes.
86
+
87
+ **Gerhard Lazu:** I was one of them. And the majority, 44.5% are saying "Sped up." That's what the majority thinks, and that's what I voted for as well. We may publish at the end of the poll the results in the show notes, so check them out when the episode comes out.
88
+
89
+ Okay. So do you think that the DevOps, but more importantly, the automation that seems to be abundant these days - do you think the automation made things better, or do you think it made things worse for shipping software?
90
+
91
+ **Justin Searls:** So DevOps, just like so many things in open source, became a hot and trendy buzzword that was heavily marketed and associated with either products or sort of halo projects when it comes to recruiting in like big tech companies. And the original idea was that DevOps would be like test-driven development: if you just gave developers testing, they would incorporate it into their team room, they would automate away a lot of the pain around testing and quality assurance, and then the intrinsic quality would increase at a marginal decrease in that team's ability to deliver things quickly... In part accelerated by the fact that they no longer had other people to have to communicate requirements to, so that things could be tested. So, like the theory went, if we just did that with operations, we would get the same lift.
92
+
93
+ And to me - I had that experience, and it was called Heroku. The most DevOps thing that I had ever used in my entire life was being able to say "git push heroku", not have to think about my operations at all, but know that they were taken care of, that I had answers to every question about scale, and about adding on additional components, without necessarily having to turn it into my side hustle or my day job or my identity.
94
+
95
+ But DevOps as a term has changed, as I think the Agile era of the aughts sort of undervalued and played down the importance of operations as a practice. I think a lot of the people who are the Linux sysadmin archetype of the late '90s might be seen as sort of getting their comeuppance now or their day in the sun of lots and lots of new innovations and technology that are focused on meeting the same kind of just core desires... You know, some of it's like "Hey, how big can we make things? How fast can we make these? How can we automate all of these very fancy and cool, but maybe a little arcane and unnecessary at small volumes and scales, like orchestration of like lots and lots of real and virtual systems up in the cloud?" So DevOps and automation tools have enabled and empowered lots and lots of really cool stuff.
96
+
97
+ And my experience, of "I just want to be able to "git push heroku" and have my app work in the cloud and not have to worry about it ever again" is, I think, still the pinnacle of what I would want as a developer. And of developers that I've talked to that have had that experience in real life, they all wish that we could still have that.
98
+
99
+ \[28:10\] And Heroku still exists and it's still a thing, and I love the people there and I love the product, but clearly, it's not a flavor of answer that the market is searching for, because everyone thinks that they're going to need Google scale and Facebook scale kind of tools for the job that's in front of their very straightforward CRUD app, with very few users. And this is all of a piece with sort of startup culture that everything needs to be a billion dollar unicorn to be valuable, and so you have to presume the conclusion that of course they're going to reach that scale, so then you may as well just on day one reinvent the universe in AWS through all this automation.
100
+
101
+ So DevOps as an overall meme in the industry I think has been net negative, and slowed down a lot of teams by way of distracting them, where the fact that teams now have to hire a certain number of DevOps people, quote unquote, "to full time just keep the hamster wheel spinning of their cloud-based computing", whereas before you might even have had an on-premises server that was just sort of sitting there and was just on and worked...
102
+
103
+ That's why, in spite of the poll results - I think it was like 44% of the people saying sped up - I think some percentage of those people are just people who really geek out about DevOps technologies and kind of don't care and are just team pro-DevOps... And some percentage are just people who like living in the ideology that we live in and probably just never had the experience of what if you could just set it and forget it and not have to worry about it again? Because if it's a means to an end, why would you want the thing that requires a ton of effort and thought and complexity and specialized skills and so forth, and constantly having to read up?
104
+
105
+ So I'm coming across as pretty anti-DevOps here, but I think that when you look at the replies, the number one point of contention is that no one has a shared understanding of like what we mean by the word "DevOps". And so just to focus on automation here, it's - yes, I love real automation, but I don't think that what we're typically describing around DevOps related activities is like actually automating anything, in terms of actually automating away a problem.
106
+
107
+ **Gerhard Lazu:** This specific question is something that I'm really passionate about, because I am in the DevOps camp, but for other reasons. So it's not about the technology. I mean, there are some aspects of that, just to see how things are changing and how they're improving... But I understand it at a very fundamental level, since I have been involved with it for - as you mentioned, in your case - 20 years, but my focus has been infrastructure. And I live and breathe it on every single team; I went into the Puppets, into the CFEngines, into the Chefs, into all that infrastructure as code and configuration management, and so on and so forth.
108
+
109
+ One thing which I would like to say, the first thing, is that git push is the pinnacle, you're right. And that should not change. Changelog.com, the setup itself, has always been git push. We use Ansible, we use Docker, we're on Kubernetes now... We'll be using something else before long, I'm sure of it. It has always been git push, because that is the golden developer experience - push it and forget about it. It's all the stuff that happens afterwards that makes a resilient system in production, and I think that's what a lot of DevOps folks forget about, because they get distracted by new and shiny, or "Let's just keep changing things."
110
+
111
+ I see a lot of parallels between test-driven development and testing, and DevOps and infrastructure, where you can see things right or wrong, and the outcome will be a result of your perception, of your principles, and eventually, your skill set as well.
112
+
113
+ So what I can say is that if your users are happy, latency is low, all the requests are going through, nothing is lost, data isn't lost, you're doing something right. And as long as developers, which by the way, are also users - as long as you can just git push and show them what is happening at all levels, whether it's testing, whether it's performance, whether it's regressions, whatever it may be, and eventually running in production... As long as they have a good understanding of how the system works as a whole - well, you've achieved your task.
114
+
115
+ \[32:12\] So you're right, Heroku had something for it, and there are many things that have happened afterwards. But to be honest, not everybody cares about these things, nor should they. They shouldn't really care; they should just git push, or just use the service and be happy. That's the end goal, to simplify it.
116
+
117
+ So let's switch focus to something that I know you have a lot of experience in, which is testing. Just as a lot of advice out there about DevOps is bad, I know that a lot of the advice that's out there about testing is bad. Why is that?
118
+
119
+ **Justin Searls:** Yes. So in trying to connect the two themes, what you just shared about DevOps is 100% true and matches my experience as well. And where the analogy between the two struggles a little bit is that if I want to have that git push Heroku experience and it costs me $30 a month, it is very difficult, I think, for a human who works on infrastructure, doing literally any amount of customization or custom stuff, to compete with that on price. But because of the way that we consider the cost of software development - like, a lot of companies out there, as soon as you're a full-time employee, your marginal cost on an hourly basis is $0. It's like they become blind to the actual expense of people's time.
120
+
121
+ So I think of the failures of DevOps as being a failure to recognize the time sink that a lot of teams find themselves dumping lots and lots and lots of hours into when there's commodity services that if you would only adhere to a set of conventions, would get the job done close to, or as well.
122
+
123
+ Testing is kind of the same core fallacy - we talk a lot about the activity and the importance of it in a sort of boolean state, like "Are you DevOps? Are you not DevOps? Are you in the cloud? Are you not in the cloud? Are you tested? Are you not tested?", rather than about the degree of sophistication, because these are secondary concerns to building a product that does the thing that it's supposed to do. No one really has the mental and emotional bandwidth to consider, "Am I DevOps-ing good? Am I testing good?"
124
+
125
+ So simply, there's usually some, if not a person, like a mood in the team that's like "In order for us to be a moral and ethical and upstanding team, we should be able to check this box or that box." And so I want to check the box that I'm doing DevOps on and check the box that I'm testing. And when we consider the bad advice about either, it's often coming from people who either are operating under that sort of simplistic notion, or for some reason have an incentive to enable and perpetuate it.
126
+
127
+ And so what I think about with the failings of either is when the team lacks an appreciation for the overall total cost of ownership, the overall return on investment of where their time is going, and what they are getting for that time, or you know, in terms of AWS, or if you're running a bunch of servers somewhere to automate your CI build, and money. And if you appreciate that, then you can have a lot of really fun and interesting conversations about testing. How often are you seeing failures? When you see a failure, does it indicate an actual bug? Does it indicate somebody forgot to update a test? Is it brittle and flaky? Like, how long does it take to fix them? How many places do you have to fix the code or the tests in order to get back to a passing build? Like, how much time is lost in terms of waiting to run the tests locally? Do you run the tests locally? Do you run them in the cloud? And if you run them in the cloud, how long does it take until you get notified? And how many people get notified? Does the whole team get notified or just one person?
128
+
129
+ \[36:00\] And unless you know, in a quite data-driven way, the answers to a lot of these questions, general context-free advice that you see about the right way to run a test or the right tool to use is not necessarily going to help put you closer to the end goal, which is like the tests serve the team to accomplish what they were trying to do either better or faster.
130
+
131
+ **Gerhard Lazu:** That's a great one. That's a great one. I will have to do something -- go back on the DevOps slot; I just can't leave it. Let's put it that way, I just can't leave it.
132
+
133
+ So DevOps and automation - let's just talk automation - is something that once you get to a certain... I wouldn't even say like team size or certain complexity, a certain maturity - you have to do. And yes, you can delegate all of that to some service provider. But knowing how the service provider works and knowing how that service provider integrates with other service providers, whether it's DNS, whether it's certificates, whether it's backups, whether it's migrations for example, whether it's a distributed database, like... Because they do fail; all these systems fail in weird and wonderful ways. What about your CI system? Even if you use a managed service, every single one of them, in my experience, have small quirks.
134
+
135
+ So having that operational knowledge of how these things work, and how they integrate, and what happens between your git push and the code actually ending up in production. And what happens with patching all the stuff that needs to be patched out there. And maybe - you know what, maybe it's just your code dependencies. But what does all that automation around the code look like, to actually get the value continuously out there? And when something is wrong, detect it, notify on it. And this is not just tests, it's everything else.
136
+
137
+ So this is like operating your software; there's a lot of knowledge, even if you're using every single provider under the sun, and you delegate, you offload all those tasks, they still combine at some point. And whether you know it or not -- I don't know of a platform that does it all, because they can't; they're just too big. "I've solved the operation of software", you can't say that. Just as you can't say, "I've solved testing. Every single type of testing for every single platform."
138
+
139
+ So there's a lot of detail in how stuff runs and how stuff gets out there. And what happens when things fail? Because they do fail. How do systems degrade? So it's more of that operational knowledge that I think you have to have, that you need to automate around, so that things are easy, so that things are resilient, they fail predictably. And that is the DevOps and the automation. There's a bit of SRE there that I think about, which is a lot more complex than git push and to run it somewhere, "I don't want to care about it"; the database, or the load balancer. "I don't even know that I have a load balancer. "Just take care of it, Heroku." It's a bit more complicated than that, because there's all those other elements that make the big picture.
140
+
141
+ **Justin Searls:** There's a burden of knowledge and experience that I bring about testing, that you bring about infrastructure to each new thing that you do or team that you join. And one of the things that I think we, as an industry, especially as we have created more sophisticated tools on every front, whether those are frameworks or language ecosystems or dependencies, or memes like DevOps or TDD, is that we haven't done a good job, in spite of the fact that there are indeed very good tools for getting started with a brand new thing and slinging some code and proving out a concept - very fast. And there's a lot of tools for how to with enough time and person power, rebuild Google's infrastructure at scale.
142
+
143
+ What we fail to appreciate, I think, are the inflection points, or really the step function - what, in my brain, I envision as a literal cliff - of what do you do when you're transitioning from being small enough to use a commodity service and not really care about it, so that you can focus on the thing that you're building, to the stage two or stage three of the rocket, where after some gigantic chasm it's "Oh yes, we --" In your example, "We can't use Heroku anymore, so we have to throw that out entirely, and now we have to reinvent the universe while we're continuing to operate, and suddenly we own all of these things."
144
+
145
+ And so I think about that in terms of slinging code and testing too, right? Like, if you're able to build a proof of concept, get something out the door, there's no tests at all... The same would go for applying rigorous architecture and design principles to the software. And the same would go for, let's say, to go faster, building a server-side rendered app with traditional HTML templates, with variables and stuff stored in a session or something, as opposed to a single page app that's built in JavaScript, that might have a snazzier user experience.
146
+
147
+ We might have done all of those kinds of things early, shed that complexity, to get out the door as fast as possible. But in each of those cases, once we reach that breaking point -- like, if I've got a server-side rendered application, I can't just flick a switch and remake it as a single page application, just like I can't snap my fingers and have sophisticated DevOps. Or if I have a big mess of spaghetti code all over the place, I can't just overnight refactor that into well-formed, well-considered units of code, or write a test suite that is going to be well paired at appropriate levels of abstraction up and down the stack in terms of everything.
148
+
149
+ And appreciating that when we talk about scale, we are not talking about twisting a knob up or just getting more revenue... Like, we're talking about very specific inflection points where you have to start caring about those deeper levels of knowledge that you're speaking about. And that's where I think there's a lot of failure in our community and our industry, to put a name to those things, and to actually have patterns that are successful for helping teams navigate those transitions.
150
+
151
+ **Gerhard Lazu:** You asked me before we started recording what I hope to achieve with this podcast. That's one of the things. How do we share more of that? How do we bring those nuances out? How do we have those discussions and figure out how stuff is changing and how do we need to adapt to those changes? What makes sense for our specific setup? Heroku may make perfect sense; Kubernetes may make perfect sense. We just don't know, because it's all specific, and guess what - you have to figure it out for yourself. We can help along the way, maybe simplify some of the choices or make them clear, but at the end of the day, you have to choose and you have to combine those choices long-term. And it's not a one-off. Continuously. And that's what makes this really challenging.
152
+
153
+ **Break:** \[42:44\]
154
+
155
+ **Gerhard Lazu:** Let's take a very specific example of what I think is a test suite gone wrong. Imagine, Justin, that you have just joined a team of nine developers, so you're developer number ten, and they're all working on the same monolithic codebase. This team has constant test flakes, which means that the testing part of the pipeline that gets code into production keeps failing for random reasons, multiple times per day. And they keep hitting the Rerun All Jobs button, adjusting timeouts, adding more retries to their tests, that type of thing. First of all, what are your thoughts about this specific situation?
156
+
157
+ **Justin Searls:** Yes, well - I mean, unfortunately, it is all too common of a situation, and I think that it is challenging to write tests that are not susceptible to several specific things that contribute to what is commonly called brittleness or flakiness, right? The most important thing to understand as we're approaching this question from the outside in is what is our goal to have this built in the first place? And the goal is probably to have some sort of confidence that things are working. And if we get one green build out of five and no source code changes in the process -- like, my confidence is not high, right? And we know this based on kind of intuitive experience that if it passes one time out of five, it means that you have proven the system can work. And you've probably also proven not that the system has some sort of fundamental flaw and will break in production, so much as you figured out that there are environmental timing or ordering implications that can cause one or more of your tests to not work.
158
+
159
+ And I think the first thing that a team should consider when they are running into this problem is to get back to consistent green builds as fast as possible. Because again, if you're thinking about testing as ROI, if all nine of those people are getting an email every single time that the build breaks, and then say three of those people just independently start racing to go and screw around with timeouts and stuff - that's a lot of dollars flying out of a window, because of one little tiny thing that might not be well understood. And no one woke up that day and was like "I'm going to really wrangle all of the flaky stuff in our test suite", right? They're all trying to do something else, and this is the thing blocking them from that thing. Not to say they're half-hearted in their attempt to fix it, but what they're really needing is a salve in that moment, as opposed to a solution.
160
+
161
+ So the first thing that I would do is I would lock down everything. Normally, I don't like freezing time in the system, right? But I'd probably start with the common quick fixes that I can apply, like "Hey everyone, it is now 2019, August 3rd, it's a Tuesday, and it's 11:33 PM, and we're just going to lock that whole server down that way", whether we're using a test tool to do that or Unix. "Hey, everyone, we're also going to no longer randomize the test order, we're going to use this particular seed that is known and good, that we've seen work." "Hey, everyone, we're going to change all of the directory globs that are currently in an unspecified order in Linux, and that's why Linux builds are failing, but on our Mac where it's the alpha order, those are all passing." Like, "Hey, we're just going to do a sort on all of those glob requests everywhere, so that we're just loading alphabetically, because it doesn't really matter..." And we're going to like go through the half dozen or so quick hit things to just try to get to consistent builds as soon as possible. And hopefully, that's one person one day, if ever.
162
+
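+ *(Editor's note: a minimal sketch of the quick fixes described above - freezing the clock, pinning the test order, and sorting directory globs - shown here with Python's pytest and the freezegun library as assumed tooling; the fixture and file names are made up for illustration.)*
+
+ ```python
+ import glob
+
+ import pytest
+ from freezegun import freeze_time  # assumed dependency for pinning "now"
+
+ # Freeze the clock for every test so date- and time-dependent logic stops flaking.
+ @pytest.fixture(autouse=True)
+ def frozen_clock():
+     with freeze_time("2019-08-03 23:33:00"):
+         yield
+
+ # Load fixture files in a deterministic order instead of whatever the filesystem returns.
+ def load_fixture_paths(pattern="fixtures/*.json"):
+     return sorted(glob.glob(pattern))
+
+ # If the suite randomizes test order via the pytest-randomly plugin, a known-good seed
+ # can be pinned on the command line, e.g.: pytest --randomly-seed=1234
+ ```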
163
+ **Gerhard Lazu:** Yeah. What does the consistent build mean? So we said one in five passing; very bad, very inconsistent. What does a consistent build mean to you?
164
+
165
+ **Justin Searls:** \[47:53\] For me, the ideal and the asymptotic goal that I would have is anytime that I saw a build fail, it means that something is broken in the application. And by the way, I think this is actually the popular notion that managers who are told about testing and see a build on the wall - like, their intuitive notion is that if light goes red, it means that the system that they are extensively paying to have built is currently in a state where it does not work, right? That's intuitive. But if you're like this team that has become habituated to this environment, where things just break randomly, those developers will have lost confidence that red means that anything is broken at all.
166
+
167
+ **Gerhard Lazu:** Yes.
168
+
169
+ **Justin Searls:** But the business person is still thinking, "Wow, there's like a lot of failures, and so it's time well spent to go and fix those brokenness", because like in their mind, in the business person's mind, like anytime spent fixing the build is time spent making my system work when it didn't work. And so that seems valuable. But if you were to tell them that 95% of the reds in the build that were distracting the whole team were just bullshit implementation problems in the way that we wrote tests, because data gets polluted from one test to the other tests, depending on all sorts of different things,- that business owner will probably be rather upset, right?
170
+
171
+ **Gerhard Lazu:** Yes.
172
+
173
+ **Justin Searls:** And we shouldn't be the same kind of upset, right? So the flakiness is one thing. I would only want my test to fail if something was actually broken. And I would go a step further and say, "I only want my build to fail if the production code doesn't work." So if a test was just somebody forgot to update the test, I don't want to see that in the build. I want people to run tests locally. And if they're not running tests locally, I'd want to figure out why. And then I want to make it fast enough so that people do that and they have the tools to want to do that. And so that's the best answer I have to your question.
174
+
175
+ **Gerhard Lazu:** Yeah. What about -- so there's follow up which I think it complicates things and it makes them more real as well... What about a test suite that has to rely on integration tests, because the software that it tests is really complex, and you have to do black box testing... Because a lot of the stuff - like, you're testing the correctness of the system at scale. So how does the system break, for example, when we expect it to break? So that's one aspect.
176
+
177
+ The other aspect is not everyone can run it locally, because the stuff that the system has to provision, the setup is too big; it won't fit on a development machine. It needs multiple machines just to basically orchestrate the system as a whole. That may be an over reliance on integration tests, but this type of knowledge to go to something like TLA+, spec-based testing - and there is another one which I'm blanking out on; not feature-based testing... Property-based testing. To do that type of testing, it requires a special type of knowledge and a special type of approach, especially when it comes to like a heavy data system.
178
+
179
+ So there's that aspect, but the other one is around different CI systems flaking in different ways. So the same test runs in two separate CI systems, and not the same tests fail the same way in the two different CI systems. There's nothing wrong with the tests; there may be a timing issue, but more importantly, it's a resource contention issue. So what would you do in that case? Because it's not the order of anything, it's not like the globbing staff, it's not -- nothing that you can do, like the simple fixes; it's just like the sheer scale of the thing. And maybe a lot of approaches over the years, which maybe weren't as good... So you'd think this is like a mature codebase, like a decade plus, which tends to happen, which happens to be a lot of the software, which gets more complex, more brittle, more -- you know, you just have to tend to it.
180
+
181
+ **Justin Searls:** There's another aspect to what we were discussing earlier, about this sort of boolean mindset that the people have - is it tested, is it not? And one of the things that ideology has led to most teams, at least the majority of that I run into, to conclude is that there is a single bucket for every app called test, and you just put tests in it, and you're lucky if it hasn't directories that are nested underneath there, in terms of organizing the tests.
182
+
183
+ \[52:11\] And there is a default sort of assumption -- even on, I would say, highly competent teams, you might be able to expect that there are unit tests that will indeed run locally for most things that are added, and there might be like one integration test that maybe will run the whole application and just prove that a feature works. You know, if you added widgets to an existing application, maybe you would expect that integration test to create a widget, read some widgets, update a widget and delete a widget, right? And just, again, do that CRUD flow N times, for N features over the life of the system.
184
+
185
+ There's two things that make tests very, very expensive to run, that you hit on. One - the first bit - is this logical organization failure on our part. So what I would say is - okay, let's say that you're building a system, like one an early client of ours had, that interacts with the paging network on electrical grids to communicate with thermostats, to make them go up and down when it's really hot outside in Texas. Like, we could, if we wanted to, write every single test with the assumption that a thermostat on a breadboard with a serial number is plugged in and powered on and on this network, so that we can actually interface with it through every test, even for the tests that have nothing to do with that particular integration, even if our architecture could be built in such a way that we could just sort of have a driver to that thing that we could easily mock out and get away from that... But if we walk into the assumption that we need a maximally real test of every single thing this system does, that we need to be able to prove scientifically that we're going to be able to go completely end to end - like, if that's your orientation, whether explicitly, like you're buying into it, or implicitly, like, I don't know, we've got a test directory, and every test just needs to -- at least one test needs to talk to this thermostat, so we just have to assume that they all do... Like, that is a failure to organize based on the constraints that you're under.
186
+
187
+ And so what I would encourage people to do is have multiple test suites and work backwards. So like what's the most resource contented environment that you might have? Maybe it's spinning up 100 different servers and so forth, they are operating under a particular scale... Like, great, we're going to do the bare minimum number of testing to achieve confidence there, and then we're going to break out a new bucket where you don't have access to that, and everything else will default to that until we can find another really expensive resource contendee thing to do, and then we're going to try to increasingly make the default place where people put tests into the least integrated, least complicated, necessary infrastructure for them to work.
188
+
189
+ So that's, I think, at the end of the day, one of the answers to, I think, all these questions - you end up with trying to maximize the number of isolated units that can be tested in isolation, where you get really straightforward, not only fast feedback, but the feedback tells you exactly what line the failure was on kind of tests, and some number of locally integrated, everything's talking to everything else, but all inside of that monorepo... Some number of contract tests that will actually just go and like validate assertions against a running instance of some server that you integrate with, some number of like driver style tests to that kind of thermostat... And then, you know, a hint, maybe just one, just a golden path of like "Hey, when we turn this all on in the real infrastructure that we really have, can I make a user and log in?" And maybe that's the only thing that you actually needed to prove in that fully plugged together state. But then you'll sidle up to teams where they have 1050 unit tests, and you add one marginal unit test just to make sure that emails are formatted correctly... And that's running up and down the stack, and now the base time that is each individual test, like if we run like logically and serially, is like four minutes long, and every single time you add anything, it's like this really outsized cost.
190
+
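+ *(Editor's note: a small sketch of the organization described above - hiding the hardware integration behind a driver interface so the bulk of the tests can run against a fake, and only a thin, separate suite needs the real thermostat; all names are hypothetical.)*
+
+ ```python
+ from typing import Protocol
+
+ class ThermostatDriver(Protocol):
+     """Anything that can set a target temperature on a device identified by serial number."""
+     def set_temperature(self, serial: str, celsius: float) -> None: ...
+
+ class FakeThermostatDriver:
+     """In-memory stand-in used by the fast, isolated test suite."""
+     def __init__(self) -> None:
+         self.commands: list[tuple[str, float]] = []
+
+     def set_temperature(self, serial: str, celsius: float) -> None:
+         self.commands.append((serial, celsius))
+
+ def reduce_load(driver: ThermostatDriver, serials: list[str]) -> None:
+     """Business logic under test: back every thermostat off during a demand event."""
+     for serial in serials:
+         driver.set_temperature(serial, 26.0)
+
+ # Unit test: no breadboard, no paging network, no shared environment required.
+ def test_reduce_load_targets_every_device():
+     fake = FakeThermostatDriver()
+     reduce_load(fake, ["A1", "B2"])
+     assert fake.commands == [("A1", 26.0), ("B2", 26.0)]
+ ```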
191
+ **Gerhard Lazu:** We have time for one last question, and it's going to be a quick one...
192
+
193
+ **Justin Searls:** Okay.
194
+
195
+ **Gerhard Lazu:** ...and I'm hoping, more importantly, a fun one. I'm curious how you would describe what, according to you, is the most impressive Olympic event you've seen come out of Tokyo so far?
196
+
197
+ **Justin Searls:** Alright, I study Japanese language, and so the only reason I have an answer to this is because I watch the Japanese news every time I'm on my bike. And so the most impressive thing that I saw was a 13-year-old young woman from Osaka winning the gold medal in skateboarding... And to see the level of excitement that had generated, because I believe she's now the youngest gold medal winner in Japanese history, especially in a new event. So I thought that was pretty darn neat.
198
+
199
+ **Gerhard Lazu:** That's great. I was thinking something else, but maybe we drop that in the show notes, because it's too funny... I know that you could not have described that, but that's what I was hoping would happen. That's okay, it'll be in the show notes, you can check it out. This has been a pleasure, Justin. I think we need to do another one. I mean, this just got me started. There's so many more questions that I have for you. I'm looking forward to it. Thank you very much.
200
+
201
+ **Justin Searls:** Absolutely. Take care. Thank you.
Real-world implications of shipping many times a day_transcript.txt ADDED
@@ -0,0 +1,285 @@
1
+ **Gerhard Lazu:** Emile, tell us, how did the Traefik idea start?
2
+
3
+ **Emile Vauge:** Yeah, it started six years ago. At that time I was a developer... And I was working on a microservices platform which was quite complex. I needed to manage 2,000 microservices. It was early for microservices - there were not that many tools to handle microservices at that time, and with 2,000 microservices you cannot do anything manually. It has to be automated. And I needed to automate the routing aspect, the networking aspect, through each of those microservices... And at that time, with existing reverse proxies, it was not possible to automate things. You had to basically write a configuration file for the reverse proxy, restart it, and that's it. So if you had to do any change, you had to generate the configuration file and restart it.
4
+
5
+ So that's really what was the pain point at that time - automate the reverse proxy. That's something I started to work on, but it was a side project. So yeah, I started to do a few lines of code in Go, then \[unintelligible 00:03:56.21\] and then I was just passionate about it.
6
+
7
+ \[04:01\] A few months later I had something, I had a project, and it was Traefik. I decided to open source it, and I was like "Yeah, maybe it will interest a few people in the world, because maybe a few people will hit the same pain that I did." But I was not expecting anything.
8
+
9
+ Surprisingly, the success was here. The project was on the front page of Hacker News, and it changed everything. So it was completely unexpected. From a side project it went to a real open source project with a community around it, with external maintainers, external contributors in only a few weeks... So that really was how everything started.
10
+
11
+ **Gerhard Lazu:** That is a great story. So what I'm hearing is it started as a problem that you had, and apparently many others had as well... I think. That's one of the reasons why it became so popular, right? You've definitely hit something that others had, too. 2,000 microservices? I can't imagine that. That sounds crazy. That is a very big deployment. Where was this running? I don't think Kubernetes was using production back then, so how did you run those 2,000 microservices?
12
+
13
+ **Emile Vauge:** Yeah, at that time it was kind of crazy... And early. At that time, of course, Kubernetes was not even here; it was only the beginning of Docker. And so we were using Docker and Mesos, which was already production-ready, and you had already a few companies with big deployment on Mesos, like Twitter or eBay, I think...
14
+
15
+ **Gerhard Lazu:** I remember as well Apple was a big Mesos user...
16
+
17
+ **Emile Vauge:** Exactly, yeah. So yeah, we started with Mesos. The first version of Traefik did have Mesos support with Marathon, and also Consul, Etcd, Docker... A few things. But of course, Kubernetes came later. I think a year after we added Kubernetes support, and... Yeah, it changed everything, once Kubernetes was here.
18
+
19
+ **Gerhard Lazu:** Yeah. That makes a lot of sense. And also, what I'm realizing now is that the service discovery - which in Traefik is like a first-class citizen (it works so well) - must be coming from this, where you have 2,000 micro-- like, how do you even configure them? How do you make sure the config reloads instantly, without needing to run any commands? I mean, it just has to work like that, because otherwise -- I mean, it's madness, right? 2,000 things? Wow. Configuring that many things is just so difficult. And things come and go all the time... There's always some sort of churn. New versions... Okay, that's interesting.
20
+
21
+ So since you started Traefik in 2015, what things did you get right?
22
+
23
+ **Emile Vauge:** I guess building an active community around the project from day one was definitely something we did right... And today I learned that it was not easy to sustain it. So yes, we started a project and some people came to contribute and became full-time maintainers, and the community is super-strong on Traefik. And I think this is definitely something that is extremely complex to achieve. I learned that. So it was kind of a mix between being lucky, having the right idea at the right time, but also being able to handle a community, which is complex... Because everybody wants to contribute, but everybody wants to govern the project, everybody has different ideas... So it is kind of complex. And usually, you have strong personalities in these communities, so you have to learn a lot of diplomacy to handle it in the right way. I think it was done very nicely with Traefik... I think so.
24
+
25
+ **Gerhard Lazu:** Yeah, that makes a lot of sense. I always thought that it was those graphics and drawings which were really good. I always remembered those; I was sure that the success was basically that. By the way, who did those drawings? Was it you, by any chance?
26
+
27
+ **Emile Vauge:** \[08:05\] No, it's a friend of mine who is also a developer... But he's doing some design as a side project, so yeah.
28
+
29
+ **Gerhard Lazu:** Traefik was so approachable because of that, and I'm pretty sure that that mentality was seen throughout everything else - polite, correct, inclusive, but also approachable. And I think those drawings captured it really well. So the thing which really stuck with me over the years is how consistent they have remained and how well they were able to explain some complex problems and some complex concepts. So I really enjoyed them, by the way. Whoever your friend is, he or she is doing an excellent job, so keep at it. It's great, I love them.
30
+
31
+ Okay, so - I don't know many projects that have hundreds and hundreds of alphas and betas, but Traefik is one of them. I went to look at the repository and I counted 500 alphas and 800 betas, and some of the alphas and betas were being cut and made available multiple times per day. What is the story behind that? Why so many alphas and betas?
32
+
33
+ **Emile Vauge:** That's a good question, and I think you are the first one to count everything on the repository... \[laughs\] So there is one good reason - before the 1.0 we were using a continuous deployment solution, and basically every commit, every PR was generating a new beta. So everything was automated. And we were thinking at that time that there were so many contributions from many people that they just wanted to have this version right on time, right when the PR was merged. So that was the reason. And it was also very easy to do right; you could just generate a Docker image, you'd just push it and that's it. It was basic.
34
+
35
+ And then later we started to structure a bit more the release cycle, and we decided that it was time to just release only, for example, three or four big releases in a year, because it was easier... Now that it was in 1.0, it was easier for a company to manage the release and the upgrades. So yeah, that was it.
36
+
37
+ **Gerhard Lazu:** I mean, that's exactly what I was expecting, and this is one of the signs, if you do continuous delivery, continuous integration right. You have many, many artifacts. Now, you may choose to make those artifacts publicly available, or they can be more hidden, but you will have those artifacts, regardless whether they're visible or not. And maybe some people - there's like a dev channel, or a nightly channel, or whatever where every single commit... And that's exactly the way it should be - every single commit produces an artifact; people can test it, people can run performance tests.. All sorts of things. They're so valuable. And the quicker you can produce them - well, guess what, the more contributions it will get. Because people can see the results of their work straight away. That is amazing.
38
+
39
+ And the structure thing makes perfect sense. Once you get a bit more structured, once more people get involved, you want to reduce some of that noise, or at least separate it. So that makes perfect sense. So what did you use at the time for the CI/CD system, do you remember?
40
+
41
+ **Emile Vauge:** Yes. And that's an interesting topic, because we changed the chain at least three or four times. We adapted. I think we started with Travis plus Docker Hub. Pretty basic. And after a few months our \[unintelligible 00:11:26.27\] were lasting like 50 minutes. We were hitting the \[unintelligible 00:11:33.09\] of Travis, so we changed to Cycle CI, and it was a big error... And then we changed again to -- we just changed to Circle CI, which was better... And then we ultimately changed to Semaphore CI, which was super-interesting, because we divided the time by ten, I guess, adapting our tests to Semaphore... So it was extremely performant, and probably a bit more basic. You know, you had less command on Semaphore... But that's fine. We were doing everything with our own scripts... So yeah, we migrated to Semaphore, and then we connected with the company Semaphore, and then we became friends, and so they gave us some servers... And that's it.
42
+
43
+ \[12:22\] So we had to adapt several times our CI... And even today, it's kind of complex, because we have a build that is generated on the fly for each PR, with each commit, and it allows us to test everything. It allows the contributor to test everything. As you said, as soon as the PR is merged, we generate an experimental build that everybody can use... So yeah, I think we are really on top of the CI. It is super-important to us, and it allowed us to manage crazy amounts of contributors.
44
+
45
+ **Gerhard Lazu:** You're right. I'm exactly of the same opinion, and I'm glad that you're seeing in practice the same thing. If you get that right, many things will start happening as a result of that. Super-important. So - big fan. Thank you for sharing that.
46
+
47
+ What about from the perspective of things that didn't go so well - but let me make it a bit more positive. What about from the perspective of learning from failure? What failures did you learn from in the last six years from Traefik, or in Traefik? Things that you wish you had done better, or things that in hindsight were not as good ideas that you had?
48
+
49
+ **Emile Vauge:** Two things come to my mind. I mean, of course, we did way more mistakes, but two--
50
+
51
+ **Gerhard Lazu:** The ones that stand out.
52
+
53
+ **Emile Vauge:** Yeah, two things. So the first thing - I will continue my story that I was talking about... We did build a great community, which is still super-active. But over time, I also founded a company behind the project; I also hired a whole team which is working full-time on it, and today we are 40 people. 40 people are working full-time on Traefik. Not the whole team are developers, but a good part of it. And when you have a team working full-time on it, the project is going super-fast, and it's becoming more and more difficult to follow from when you are an external contributor. And with time, we've found that it was really complex to sustain external contributions, with the internal team going so fast.
54
+
55
+ I don't think we did any big mistake, but what I learned is that it was possible to create a great community from a side project, and to grow it into a big community, but as soon as you go professional with it, it was not that easy to sustain. So that's something important we learned - sustaining a big community is not as easy as just starting it. That's my take.
56
+
57
+ And another thing, a mistake, I guess, that we made is the big gap between the 1.x and 2.x branches on Traefik. We decided at that time that many things were not that great with the architecture of the 1.x branch, so we wanted to revamp the project, basically. I'll just give you one technical example. In the 1.x branch we had some integration with Kubernetes using the Ingress specification... And the Ingress specification is pretty basic on Kubernetes. As soon as you want to add some options, you have to use annotations. And this was really an issue, because annotations can become a mess; they're not structured, they're just annotations. And if you want to do something complex, it becomes a mess. So we decided, "Hey, in Traefik 2 we will support Ingress, but also our own CRD, which will allow us to do some complex stuff on Kubernetes without annotations, which are a pain." And surprisingly -- so we were sure that the community would be okay with that, and we were wrong. The community just wanted to have Ingress... Most of the community, of course; some people were okay with the CRD, but most people just wanted to have plain Ingress.
58
+
59
+ \[16:21\] So that's one mistake we did. We were convinced internally, in the company, that CRD was the thing to do, but that was not what the majority of the community was thinking... And we learned from that. Sometimes you have a disconnection between your team and the community, and you have to work on that every day. You have to avoid disconnecting your team from the community every day. And it's a real \[unintelligible 00:16:46.04\] It's not easy to do. So the connection between a company and a community - it's a lot of work.
60
+
61
+ **Break:** \[16:55\]
62
+
63
+ **Gerhard Lazu:** I'm really glad that you mentioned the relationship between the community and the company and the product, because I know how important that is. Not only it's important, but it's very easy to get them out of sync, and then the product goes in certain directions, or the community goes in different directions... And they just get out of sync. And it's not nice. It's friction, and tension, and you have to address it at some point if you want to be successful as a project. Because it started as a project, it started as this idea, it's a great idea, but how do you sustain it as it grows and as it becomes more complex?
64
+
65
+ So what did you do to reconcile those differences between Traefik the company, Traefik the product... Traefik the products, because it's like an experience and it has so many components... And the internal team. How did you reconcile that? Work in progress?
66
+
67
+ **Emile Vauge:** Exactly. I mean, it will be work in progress... There is no deadline. It will always be work in progress. You need to work on that every day, as I said, if you want to sustain the connection. One thing we did, among others, is creating a group of the most active contributors. Kind of a private group, where we would have a specific connection with those. It's called the ambassadors group, and we share with them some ideas we have, for example, for the roadmap. We discuss this roadmap with them, we get their ideas, their feedback, we try to have them on board for private betas prior to everyone else... So we try to have a really specific connection with those who are the most active. So that's something we created because we really wanted to be sure that an active contributor would receive something special from us, because we do care about them. So that's really something we wanted to create for a long time.
68
+
69
+ \[20:03\] Another thing we did is have a specific process to handle all the input we get from the community every day. To give you an idea, we have so many contributions, PRs, issues, posts on the forum, on Twitter, on Slack. This is so active that we need to have a specific process every day to handle everything. Otherwise, the queue becomes so huge after a week that it's not even manageable anymore. So we need to have a dedicated team to handle all the issues and PRs every day. And that's everyday work.
70
+
71
+ So it's not a joke... As soon as the open source project is big - I mean, you just have to invest in it even more. And you have to have big dedication on it. So yeah, that's how we are dealing with it. We have strong values on it. For example, we don't want a PR to last forever, because it's kind of discouraging for external contributions. And of course, we did some mistakes. Some PR did last for six months. We got some super-complex PR, and you know, it needs some time; internal discussions, external discussions... It's not that easy to get some external contribution. But we try to be as fast as possible to encourage people, or at least to not discourage them... And it requires a lot of work.
72
+
73
+ **Gerhard Lazu:** Yeah. I can definitely relate to that. Not only that, but I can see -- I wanna say how important it is, but it feels like I'm not conveying the importance of it significantly enough. So most people think that shipping is where it stops, right? Like, get the code done, get it out there, and that's it. Well, actually, that is the beginning of a very hard and long process, which maybe never ends. If you're really successful and your success keeps growing, it's just like, how do you sustain it? It's really hard. And what about keeping everything as lightweight as possible, so that you don't waste time on a heavy process. But if you don't have a process - well, what are you even doing? You're left, right, and up, and down... You don't even know which way is up or down, because you're swamped with all these things.
74
+
75
+ So what does the system look like for you? Do you have a JIRA? I hope not... I don't know. Do you have JIRA to keep track of things? How do you track things?
76
+
77
+ **Emile Vauge:** So we do track things using GitHub, mainly. GitHub is the source of truth for everyone on Traefik. But of course, we use internal tools like Notion for a document, or this kind of stuff... But yeah, GitHub is the main source of truth.
78
+
79
+ **Gerhard Lazu:** Okay. So when you receive, for example, a Tweet or a Slack, do you convert it into an issue, or a discussion on GitHub? What happens with that?
80
+
81
+ **Emile Vauge:** Yes. If it makes sense, we convert it into an issue, of course. Because this is the only source of truth on Traefik, the issues and the PR. Of course, the issues are the only source of truth. We don't have an internal tool to have private issues, or this kind of stuff. Everything is public, and everything is on GitHub.
82
+
83
+ **Gerhard Lazu:** And do you have a single repository, multiple repositories? How does that work?
84
+
85
+ **Emile Vauge:** So for Traefik we do have -- that's a good question. I guess we could call it a single repository, and especially now, we do have some plugins in Traefik v2. So they come in a separate repository, each plugin. So yeah, it's a single repo, I guess.
86
+
87
+ **Gerhard Lazu:** Do you do any repo syncing, anything like that behind the scenes, so that you centralize all the issues in a single place? Or do you just open issues, for example, for a plugin, in the plugin repo, and then you have a view that merges them all together? How does that look?
88
+
89
+ **Emile Vauge:** Yeah, every repo has their own issues. Sometimes we do have some connection between a few issues, between different repos. Traefik is having its own issues, relating another issue on another repo, sometimes on another project, maybe even driven by Traefik Labs...
90
+
91
+ **Gerhard Lazu:** \[24:07\] Okay. So one thing which I noticed is you also started using GitHub Actions a bit more in the last year, six months... Six to twelve months.
92
+
93
+ **Emile Vauge:** Exactly.
94
+
95
+ **Gerhard Lazu:** Why did that happen? That was interesting to see.
96
+
97
+ **Emile Vauge:** I think the team at that time was really excited with the GitHub Actions, and they really wanted to take advantage of it. It allowed us to just continue what we were already doing with internal scripts, basically, with just Actions. So I don't think we are doing anything crazy; it just helps us orchestrate things in the build and deployment process... But yeah. So nothing crazy, but it is replacing scripts we were using.
98
+
99
+ **Gerhard Lazu:** Yeah. So it's a bit more than a script, because I had a look at especially the documentation workflow, which I haven't seen before. So you have this concept of the Myrmica ant colony, which I found really interesting... And there's like different types of ants which have different roles in this colony, and that actually maps to the tools that you use to keep everything together. For example Structor, which is a type of ant, creates multiple versions of Mkdocs documentation. That's interesting. And Mixtus creates PRs and documentation changes. And there's like a whole list of these. That's a really interesting idea. How much do you know about that? Were you involved with that? Because your team is big, and things are changing all the time... How much do you know about this specific aspect, which I've found fascinating?
100
+
101
+ **Emile Vauge:** So I don't know every specific aspect of it, but for example, we are dealing with so many contributions that we needed to automate everything as much as possible. The only stuff we didn't automate are the reviews of the PRs, of course.
102
+
103
+ **Gerhard Lazu:** Right, the human element. You need humans for that, yes.
104
+
105
+ **Emile Vauge:** Exactly. Exactly. But that's it. The rest is automated. So yes, the documentation, everything is automated. We needed to keep track of all versions of Traefik, because you know, when you only have one version it's easy. But when you have 20, you have to keep track of everything, because not everybody is on the latest, of course. Some people are still using the 1.0, or I don't know... So yeah, we needed to automate everything. For example, Mkdocs was not supporting multiple versions initially, so we created something on top of that which allowed us to generate a new version once we need it, and keep track of the others using the branches of the repo. So that's pretty basic. But it means that we did create, and we do maintain, a lot of small tools like this, which allow us to automate the whole process.
106
+
107
+ **Gerhard Lazu:** Yeah, I think this is unavoidable. The bigger the community, the more successful the project, the more it spreads, you need more automation, because it's unsustainable to do these things manually. I love that. That just makes not only sense, but it's a joy to see it in the real world and see what shape it takes based on whatever needs you have. So I really like that.
108
+
109
+ I also liked your release cycle. I thought that was really interesting. You mentioned it in the first part of the interview, you mentioned about having three to four minor releases every year. I thought that was great. That makes perfect sense. So can you tell us a bit more about how do the minor releases work, how do the patch releases work, and also, what about the majors? Because currently you're on v2; that's been around for a few years, I think... And v3 - that's an interesting one. But let's just focus a bit on the release process itself, which I've found fascinating.
110
+
111
+ **Emile Vauge:** Alright, so it's pretty common, I guess... We have, as you said, bug fixes, minor releases and major releases. We just followed the semver versioning system. We do approximately 3-4 minor releases per year. Basically, once every three months. And of course, in minor versions - it needs to be backward compatible, no breaking change.
112
+
113
+ \[28:05\] For example, if we need to add something new, it has to be without any breaking change. And with the bug fixes - we have different types of bug fixes, which correspond to different types of issues, with priority issues, I guess... So we have a mechanism where -- for example, if we have a vulnerability discovered which is pretty concerning, we tag it at priority zero, which means we have to release it today, the fix. Just... Today. That's the rule.
114
+
115
+ **Gerhard Lazu:** Now, that is something really powerful... Because you saying that, it means that your pipeline for all the supported versions has to be fast. It can't take more than a few hours, all of it... Because if it takes more than a few hours - well, you can't release it in a day. It's just impossible, because there's only so many hours in a day, and you have so many versions to patch, and it has to work in parallel. So this brings a couple of follow-up questions. How many versions do you currently support?
116
+
117
+ **Emile Vauge:** So we officially support the latest minor version of 1.x, and the latest minor version of 2.x. So we support the last minor version of each of the two branches. So that's what we are doing.
118
+
119
+ **Gerhard Lazu:** Okay. So the priority zero fix, which has to ship today - it has to actually ship in two versions.
120
+
121
+ **Emile Vauge:** Yes.
122
+
123
+ **Gerhard Lazu:** Okay. What about the other minor releases? Do you still patch them, or you only focus on the latest minors for each major?
124
+
125
+ **Emile Vauge:** We do focus on the latest minors, because otherwise it doesn't make sense. As minors, they are backward-compatible. You should just upgrade to the latest minor. That shouldn't be an issue. So that's what we are doing.
126
+
127
+ **Gerhard Lazu:** That makes a lot of sense. I also think that it makes perfect sense to only ship bug fixes in patches. Does that mean that even if you add a new feature, which doesn't change anything from the codebase, would you not add it in a patch release? I think you wouldn't, that's my understanding. You wouldn't add a new feature...
128
+
129
+ **Emile Vauge:** No.
130
+
131
+ **Gerhard Lazu:** Okay, you wouldn't.
132
+
133
+ **Emile Vauge:** No, of course. In patch releases we never add a new feature. Never, never.
134
+
135
+ **Gerhard Lazu:** That makes perfect sense to me. Okay. Well, I'm glad that it makes sense for Traefik as well. Okay, that's great. So when it comes to new minors that you ship every three months, how long do you support them for?
136
+
137
+ **Emile Vauge:** We support them until the next minor, plus a few months. I don't remember exactly the exact numbers. We do have something in the docs to explain that... But yes, once the new version arrives, we support the previous version for a few months, and then we stop.
138
+
139
+ **Gerhard Lazu:** Okay, okay. And do your users know when to expect new versions? Do you have like a release calendar, or anything like that?
140
+
141
+ **Emile Vauge:** Not really. Ideally, we would love to have that, because it's easier. But in fact, we try to not communicate an exact date of release. Why? Because we always have external contributions that we were not expecting, and usually it comes at the last minute, and usually it leads to a discussion, "Huh. This one is interesting. Maybe we should just wait a bit for the next minor and include this one, because it will be great for many people." Sometimes we just delay a bit what was planned, and sometimes we just postpone this one to the next release... So yeah, we adapt.
142
+
143
+ **Gerhard Lazu:** Yeah, it makes sense. I think a release calendar makes sense from the perspective of communicating what to expect, and when. If you know that, for example, you're going to ship a new minor in (let's say) three months, or six months, and then there will be a feature freeze in five months, any new contributions, no matter how amazing, they'll have to wait for the next one. Why? Because you have to have those discussions; you have to run all the testing, you have to get all the betas, alphas, RCs, whatever you need to do, so that the community is aware of what's coming and they can actually get excited about it.
144
+
145
+ \[32:12\] And who knows, maybe someone else will have another idea and say "Hey, have you thought about this?" And then that contribution becomes even more amazing, because it's being discussed and it's been out in the open for a while longer before the final implementation lands, in a shipped minor. So that makes sense.
146
+
147
+ So what would make Traefik bump to a major version? We know how minors get bumped... What would Traefik make it bump from v2, which is currently, to v3?
148
+
149
+ **Emile Vauge:** In our minds -- I mean, as soon as a new feature leads to not being backward-compatible, it has to be in a major. As soon as we need to do major changes in the architecture itself, this leads to a major release... So this kind of stuff makes a new major release necessary, or makes these kinds of features wait for the next major release. That's what I mean.
150
+
151
+ **Gerhard Lazu:** Okay. So let's imagine that you have a big feature coming up, which - maybe it's a new feature, like it doesn't change anything, but it's a big difference in how the software behaves. Would you put that in a major, or would you ship that in a minor?
152
+
153
+ **Emile Vauge:** In a major. If it's something big, that changes the way the software behaves, definitely a major.
154
+
155
+ **Gerhard Lazu:** Okay. Even if it is backwards-compatible, it doesn't really matter, because it's significant enough to deserve its own major. Okay.
156
+
157
+ **Emile Vauge:** Yeah, yeah. Because it could have some side effects on many aspects, because it's extremely complex to -- now Traefik has become kind of complex, and as soon as you change significantly the architecture or something inside Traefik, it will have some side effects. So it could be kind of crazy to do that in a minor, to be honest.
158
+
159
+ **Gerhard Lazu:** Yeah, I'm with you. I'm with you. So if you can't tell by now how passionate I am about releases, shipping, you're just about to find out... If you can't tell by now. How do you apply semantic versioning in Traefik to, for example, config, or plugins, or even like the API? What does semantic versioning mean, and something not being backwards-compatible? What does it mean in the context of Traefik?
160
+
161
+ **Emile Vauge:** That's a good question, but for example on the API, between two minor releases, the only thing we accept is additions to the API. No changes, only additions. So if you have new features with additional parameters in the API, for example, that's fine, because it's perfectly backward-compatible, so that's okay. That's the rule we follow. Same thing for the configuration. The configuration should be perfectly backward-compatible, so it's okay to add some new parameters, some new fields, some new annotations... It just has to continue to work with what existed before.
162
+
163
+ **Gerhard Lazu:** What if the behavior of an internal component changes? Is that the public API? Something just doesn't behave the way it used to because you've made the change... But the API hasn't changed; it's just the behavior changed.
164
+
165
+ **Emile Vauge:** It depends, to be honest. There is no definitive answer. Sometimes we change some of the behavior, but it's on purpose, because for example it's fixing something, so that's fine. But if it changes the behavior and if it could lead to unexpected things to users, then we don't do it. Or we do it, but adding a flag, typically.
166
+
167
+ **Gerhard Lazu:** A feature flag.
168
+
169
+ **Emile Vauge:** A feature flag, or something.
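As a small aside, the pattern Emile describes is easy to picture: the changed behavior stays off unless the user opts in, so existing setups keep what they had. Here is a minimal Go sketch of that idea; the flag name and the two behaviors are hypothetical, not taken from Traefik.

```go
package main

import (
	"flag"
	"fmt"
)

// The changed behavior is only active when the user opts in, so existing
// setups keep the old behavior by default. The flag name is made up.
var useNewRouting = flag.Bool("experimental-new-routing", false,
	"opt in to the changed routing behavior (hypothetical flag)")

// route returns which backend a path would be sent to; both behaviors
// here are placeholders for illustration only.
func route(path string) string {
	if *useNewRouting {
		return "new-backend:" + path // opt-in, changed behavior
	}
	return "old-backend:" + path // default, backward-compatible behavior
}

func main() {
	flag.Parse()
	fmt.Println(route("/api"))
}
```

Traefik's own options live in its static and dynamic configuration rather than in command-line flags like this, so treat it purely as an illustration of the opt-in idea.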
170
+
171
+ **Gerhard Lazu:** Yeah, that makes sense. So when we say Traefik's API, what I understand by that is how things get configured and discovered, so how Traefik does it. That's my understanding. Are we thinking about something else when we are talking about the Traefik public API?
172
+
173
+ **Emile Vauge:** \[35:49\] We also expose a REST API to update or change the configuration. So we have, I guess, what we could call a real API... But yes, typically we have an API, but we can also configure Traefik through a configuration file, through annotation in Docker, or whatever, through a configuration file on Kubernetes, or through a KV store with Etcd or Consul...
174
+
175
+ **Gerhard Lazu:** All those are APIs.
176
+
177
+ **Emile Vauge:** Yeah.
178
+
179
+ **Gerhard Lazu:** What about the plugins? What about the APIs and the plugins used to integrate with Traefik? The providers, or -- I think you call them providers, right?
180
+
181
+ **Emile Vauge:** We have different types of plugins, in fact. We have provider plugins... And what is a provider plugin? If you want to integrate Traefik with a new orchestrator, for example, you will need to write a provider plugin. The provider plugin is the part that connects to this orchestrator, gets some configuration, and so on. That is the role of the provider plugin. But we also have middleware plugins, and the middleware plugins are here to intercept and modify requests on the fly. So those are two different things in Traefik.
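To make the middleware idea a bit more concrete, a middleware plugin is essentially a small Go package that wraps the next handler in the chain. The sketch below follows the general shape of Traefik's public plugin demo, but the package, type, and header names are invented for illustration; check the plugin documentation for the exact contract.

```go
// Package headerdemo is an illustrative middleware plugin; the package,
// type, and header names are made up for this example.
package headerdemo

import (
	"context"
	"net/http"
)

// Config holds the plugin configuration that users set in their
// Traefik dynamic configuration.
type Config struct {
	HeaderName  string `json:"headerName,omitempty"`
	HeaderValue string `json:"headerValue,omitempty"`
}

// CreateConfig returns the default configuration.
func CreateConfig() *Config {
	return &Config{HeaderName: "X-Demo", HeaderValue: "enabled"}
}

// headerDemo wraps the next handler in the middleware chain.
type headerDemo struct {
	next   http.Handler
	config *Config
	name   string
}

// New builds the middleware; this constructor shape mirrors the one used
// in Traefik's plugin demo.
func New(ctx context.Context, next http.Handler, config *Config, name string) (http.Handler, error) {
	return &headerDemo{next: next, config: config, name: name}, nil
}

// ServeHTTP intercepts the request, adds a header, and passes it on.
func (h *headerDemo) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
	req.Header.Set(h.config.HeaderName, h.config.HeaderValue)
	h.next.ServeHTTP(rw, req)
}
```

A provider plugin is different: instead of sitting in the request path, it watches an external system and feeds dynamic configuration back to Traefik.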
182
+
183
+ Right now, the plugin integration is extremely new. It arrived with local plugins in 2.5, and plugins themselves are here in the 2.x branch... So for now we don't have any strong versioning mechanism inside plugins, but we have started to -- we already have a framework here to implement that in the future. So that's what we have.
184
+
185
+ But for example, you have two ways to use plugins inside Traefik today - you can use plugins that are on our marketplace, so that are published on our marketplace, and you can use private plugins. So with private plugins you can do whatever you want. No version check whatsoever. You are free. And of course, if you use our marketplace plugins, we do generate some hashes for every build; so you can't touch plugins whatsoever, because it would change the hash... So we have some ways to ensure that if you use plugins from the marketplace, they are untouched. So it's a way of versioning plugins with a hash. That's all we have. We don't have, for example, minor versions of plugins yet.
186
+
187
+ **Gerhard Lazu:** Okay. That makes sense. But the plugins - do they use some APIs that Traefik exposes, and are those APIs part of your public API? Because that's like Go code, right? From the perspective of Go code, those interfaces - are they part of your public API that must be backwards-compatible between minors?
188
+
189
+ **Emile Vauge:** Absolutely.
190
+
191
+ **Gerhard Lazu:** Perfect. It makes sense. Again, for some projects I know it doesn't make sense, but I think this is important, because I just wanna know where you stand. And again, I love it.
192
+
193
+ **Break:** \[38:43\]
194
+
195
+ **Gerhard Lazu:** Changelog.com is a traditional three-tier monolithic application that runs on Kubernetes. We have a proxy in the front, we have the app itself, and we have the database. Fairly standard. One thing that we have been noticing - or I have been noticing, to be precise - is that we have some long-tail latencies in our proxy. Some requests, once they hit the proxy, can take up to 40-50 seconds, while the 95th percentile is around 300-400 milliseconds. We'll have a whole debugging session around this with David (Rawkode) and Marques from Equinix Metal... Because the stack is Kubernetes. So you have Kube Proxy, you have the database... There are so many layers there.
196
+
197
+ I'm wondering, if we were to use Traefik as a proxy, could it help us understand a little bit more why the requests are slow? At least from the proxy perspective.
198
+
199
+ **Emile Vauge:** One of the biggest pain points of users with microservices platforms -- so microservices are bringing so much to developers and to DevOps, whatever; but they are also bringing complexity. And finding the root cause of an issue is always kind of difficult, and could be a nightmare. So to answer your question, with that specific issue, finding some sporadic long request is always an issue. The best you can do with that issue is with Traefik you can enable distributed tracing, for example with Jaeger, OpenTracing, Zipkin, or whatever. And the best is to have tracing in all your services and in front of all your applications, database included. With that, at least you can see if your request takes some time in a specific service, or in the database, or whatever.
200
+
201
+ But sometimes it's even more complex. Sometimes the requests are slower inside the reverse proxy itself. You have a few requests that are so much slower; it could be a nightmare. One of the reasons, among others, is that some requests are using an older version of TLS - just an example - whose implementation is slower. Or some TLS requests are using a specific cipher, which is slower. Again, it could be a nightmare to find. So the best way, in that case, is - with Traefik, you go on the dashboard, you can see the metrics in real-time, and you can also export your metrics in real-time to any system - Datadog, Grafana, Prometheus. And the best you can do is enable the logs, and you will see, when you have a slow request in the logs, for example if it's a TLS request, which cipher it uses, this kind of stuff. It can help.
202
+
203
+ But there is no magic. If you only have ten requests over a million which are slower, Traefik won't tell you "Hey, this is the reason why those ten requests took some time." You will need to find the root cause of that with the help of Traefik.
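As a rough illustration of the kind of signal Emile is talking about, here is a minimal Go sketch - not Traefik code, just a plain net/http middleware with placeholder names and thresholds - that logs slow requests together with the TLS version and cipher suite they used.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

// slowRequestLogger wraps a handler and logs requests that exceed a
// threshold, including TLS details when the connection used TLS.
func slowRequestLogger(next http.Handler, threshold time.Duration) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		elapsed := time.Since(start)
		if elapsed < threshold {
			return
		}
		if r.TLS != nil {
			// r.TLS.Version is a constant such as tls.VersionTLS12 (0x0303).
			log.Printf("slow request: %s %s took %s (TLS version %#x, cipher %s)",
				r.Method, r.URL.Path, elapsed, r.TLS.Version,
				tls.CipherSuiteName(r.TLS.CipherSuite))
			return
		}
		log.Printf("slow request: %s %s took %s (no TLS)", r.Method, r.URL.Path, elapsed)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	// Log anything slower than one second; the threshold is arbitrary.
	log.Fatal(http.ListenAndServe(":8080", slowRequestLogger(mux, time.Second)))
}
```

With Traefik itself you would get the equivalent information from its access logs and exported metrics rather than from hand-rolled code like this.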
204
+
205
+ **Gerhard Lazu:** That is super-helpful, and what I do know is that our 99th percentile is a lot higher than our 95th percentile. So 95th, as I mentioned, 300 to 400 milliseconds. 99th, sometimes the spikes go as high as 40, 50 seconds. And that's what I need to understand - why does the 99th percentile from a proxy perspective take that long?
206
+
207
+ You mentioned something really interesting around services, and I'm wondering if you're thinking services from a Kubernetes perspective, or services from the perspective of putting Traefik in front of, for example, the database, so that requests -- because I know Traefik can proxy TCP requests. So is that what you're thinking, putting Traefik in front of the database and in front of the apps? So not just using the reverse proxy, but also using it for the services themselves.
208
+
209
+ **Emile Vauge:** Yeah, exactly. Services as a generic term.
210
+
211
+ **Gerhard Lazu:** Okay.
212
+
213
+ **Emile Vauge:** So in front of your application services, like Kubernetes, but also in front of your database.
214
+
215
+ **Gerhard Lazu:** Okay, that's really interesting. And are there CRDs that I would use? How would I configure this in the context of Kubernetes for Traefik?
216
+
217
+ **Emile Vauge:** It depends.
218
+
219
+ **Gerhard Lazu:** It depends. Okay.
220
+
221
+ **Emile Vauge:** Yeah. If you want to do this in front of your database -- specifically, it depends on how you are deploying your database, is it inside Kubernetes or not?
222
+
223
+ **Gerhard Lazu:** It's just \[unintelligible 00:43:59.00\]
224
+
225
+ **Emile Vauge:** \[44:01\] Yes. So if this is the case, you need to have something that handles the tracing in front of your database. I mean, it depends on the database. Some databases have some integration with those tracing systems, some don't. In this case, you need to have something in front of that.
226
+
227
+ **Gerhard Lazu:** Yeah, that makes sense. Okay. And can Traefik be that something, or would it need to be something else?
228
+
229
+ **Emile Vauge:** No, you can also use Traefik. So in that case, it wouldn't be an Ingress controller, I guess. It would be a bit different. But yes, it could be that.
230
+
231
+ **Gerhard Lazu:** Interesting, okay. I'll check it out. That's really interesting. Okay. So David and Marques, I know that you're not listening to this, because this will come out after we record, but just to let you know, I was thinking about this just before we did the recording. Okay.
232
+
233
+ So this is our own very specific problem, but I'm sure that you have a much broader perspective on the Traefik community. What other big problems are you seeing in the community, and what are you thinking about them, or how are you thinking about them?
234
+
235
+ **Emile Vauge:** This is the big question, because with Traefik we are talking about a really small \[unintelligible 00:45:02.06\] in the networking space. It is a reverse proxy thing, or the Ingress controller. But it's tiny. The networking space is so much bigger. And in fact, we've found that with the rise of microservices and the crazy, exponential growth of the number of applications you have to deal with, not only do you have to automate the reverse proxy in the networking space, you need to automate the whole networking space. Basically, that's what we've found.
236
+
237
+ Another interesting aspect we discovered is that we do think that in the future, now that Kubernetes has won the orchestration war - there is no doubt about that, right? - companies are either testing, or migrating, or already using Kubernetes in production today, and they will be using Kubernetes even more tomorrow.
238
+
239
+ The big pain point that we are seeing coming is the number of Kubernetes clusters is just going to grow exponentially. It's already difficult to manage one Kubernetes cluster, but imagine if you have to manage ten or a hundred. It's crazy. And today what do you have to handle a hundred Kubernetes clusters? Nothing. You have to basically orchestrate all those Kubernetes clusters together yourself, and that is something that is pretty interesting. We do think that Traefik Labs being a networking expert as a company, has something to play in that space... Because having multiple Kubernetes clusters is just a big distributed system with wires between all those clusters; basically, if you control the networking aspect between all those clusters, that's fine. You handle everything. And that's what we think about the future, and that's what we think that will be the big pain in the next few years - how to handle all those Kubernetes clusters now that Kubernetes is a standard.
240
+
241
+ **Gerhard Lazu:** That's a really interesting perspective, because you're right - we ourselves only have one, and we're possibly the smallest team you could have; just a few people. Now, I'm already thinking of having another one. Like, a \[unintelligible 00:47:18.28\] cluster that manages all other clusters. So I'm wondering whether you're thinking about the management of Kubernetes clusters, whether that's your perspective, or the connectivity of Kubernetes clusters. What are you thinking about?
242
+
243
+ **Emile Vauge:** Everything. You can think about Kubernetes federation - that's one solution to handle several Kubernetes clusters from a management perspective - but also connectivity to those clusters, interconnectivity between those clusters, end-to-end security from users to all those clusters... All those aspects. High availability between all those clusters. How do you do a blue/green deployment between two clusters, or between a hundred clusters? This kind of stuff.
244
+
245
+ \[48:02\] So today it's almost impossible to do it simply. I mean, it is impossible to have something simple. You have to gather a gigantic number of software packages and platforms to make it work, and that's an interesting problem that we want to tackle at Traefik Labs.
246
+
247
+ **Gerhard Lazu:** That's a big problem space, and you made me really curious now, so I'll keep an eye on it. That sounds really interesting. So coming from this big problem space, coming to a smaller problem space - or not problem space, but like a space... Which is your favorite Traefik proxy feature? Because Traefik is so much more than just a proxy. By the way, if you've made it this far and you don't know what Traefik is, just go and check it out. There's so many aspects to it. But if we look just at the Traefik proxy, the 2.5 version, the latest minor, which is your favorite feature, Emile?
248
+
249
+ **Emile Vauge:** In the whole Traefik reverse proxy - yeah, there are so many aspects... We have at least four categories, I guess. You have the routing and load balancing categories, the security aspect... You have the auto-discovery, the dynamic configuration aspect, I would say... And then finally, the observability aspect. And there are a lot of features in every one of these categories. So it's kind of complex. But I guess one of my favorite features is one of the oldest... It's the LetsEncrypt integration. Traefik is natively integrated with LetsEncrypt, and this allows users to automatically generate TLS certificates for securing all those connections end-to-end. And this is one of the features that made Traefik so popular. You can get a verified TLS certificate for free. And it's kind of magic when you see it work. So that's one of my favorite features in Traefik.
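Traefik's LetsEncrypt support is configured rather than coded, but as a rough Go illustration of why automatic certificates feel like magic, here is a minimal sketch using the golang.org/x/crypto/acme/autocert package; the domain name and cache directory are placeholders, and this is not how Traefik implements it internally.

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// certManager obtains and renews certificates from Let's Encrypt via
	// the ACME protocol, caching them on disk; the values are placeholders.
	certManager := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("example.com"), // hypothetical domain
		Cache:      autocert.DirCache("certs"),
	}

	server := &http.Server{
		Addr: ":443",
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello over automatic TLS\n"))
		}),
		TLSConfig: certManager.TLSConfig(),
	}

	// Empty cert/key paths make the server rely on TLSConfig.GetCertificate,
	// which autocert fills in on demand.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```

The certificate is requested, validated and renewed without the application ever handling the files by hand, which is the same experience Emile describes for Traefik users.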
250
+
251
+ **Gerhard Lazu:** Okay. So this is unexpectedly interesting, and the reason it's unexpectedly interesting is because today we have Ingress NGINX and Cert Manager, which, from what I'm hearing, Traefik is handling as a single component. That's interesting. Now, there's a certain requirement that we have with the certificates. Those certificates, especially the wildcard ones, we then have to synchronize with the CDN. It's all running in Kubernetes, it's all self-contained, so that sync is happening as part of the same system, and it's like a closed system. Does Traefik expose those certificates? ...the private key, \[unintelligible 00:50:22.04\] in a way that we can upload it easily to the CDN using an API? Is that available? Are those certificates available, do you know?
252
+
253
+ **Emile Vauge:** In theory, Traefik is connecting to the CDN itself. It configures the CDN itself, to create the DNS entry, for example, to validate your \[unintelligible 00:50:43.02\] certificate. So you don't have to do anything in that specific use case.
254
+
255
+ **Gerhard Lazu:** \[50:50\] I think I'm thinking about getting hold of the values of the public \[unintelligible 00:50:53.01\] and the private key, so that we can upload them to the CDN. Because Cert Manager manages the integration with the certificate provider - LetsEncrypt in this case - via DNS, so Cert Manager is integrated with DNS, which then gets a LetsEncrypt certificate... And then we have a job, basically, which automatically synchronizes the resulting private key and certificate...
256
+
257
+ **Emile Vauge:** Yeah.
258
+
259
+ **Gerhard Lazu:** So we synchronize those with a CDN via the API. Because the CDN is running outside of Kubernetes. So Kubernetes is just like our origin...
260
+
261
+ **Emile Vauge:** Oh, okay, okay... Because you want to have the same certificate on the CDN.
262
+
263
+ **Gerhard Lazu:** Exactly, yes.
264
+
265
+ **Emile Vauge:** Okay, okay. So yes, basically you would have to do the same with Traefik.
266
+
267
+ **Gerhard Lazu:** Okay.
268
+
269
+ **Emile Vauge:** It would work the same, but you would have to do it.
270
+
271
+ **Gerhard Lazu:** Okay. So as long as I can access those values, that's all I would need, and that means I would reduce one of the components, or remove one of the components, and simplify the whole setup. I love that. That sounds great. Okay, so one more reason to look at Traefik. Wow, okay. Not that I needed it, but still. Okay, that's interesting.
272
+
273
+ So as we are wrapping up, as a listener, if I have to remember one thing from this conversation, what would that be?
274
+
275
+ **Emile Vauge:** So at Traefik Labs, as we already talked about during this podcast, we have a really strong connection with our community... And this is something I'm extremely proud of. Because first of all, it's not easy, and also, once you succeed in doing that, you get so much from it. So much. You get some feedback, you get some criticism, you get some angry people, you get a lot of stuff. But that's super-important. And it helps to build some great tools together.
276
+
277
+ So yeah, I would love to encourage people to create these kinds of communities even more in the future... Because at the end of the day, that's probably the best way to build a successful and useful product for your users. So yeah, that's my take-away. Communities are probably one of the hardest things to build and sustain, but the reward is huge.
278
+
279
+ **Gerhard Lazu:** From my perspective, that is a sign of a true cloud-native company and product. If you believe what you've just said, that's it. Because cloud-native is all about the community, all about the people. That's one of my focuses as well in this -- actually, that's my central focus for this podcast, the people behind everything that we do. Because if you don't nurture those relationships, if you don't look after those people, what do you have? A bunch of tech that goes outdated, and nobody wants to use, because it's horrible. Because it's not made for people, it's made for machines. It's made for - whatever. It doesn't really matter, because nobody cares. So that's it, that's a great one. I love that. Thank you, Emile. Thank you very much.
280
+
281
+ **Emile Vauge:** Thank you.
282
+
283
+ **Gerhard Lazu:** I loved having you. Looking forward to a next time. This was too good. Thank you.
284
+
285
+ **Emile Vauge:** Cool. Thank you so much for your time, too. Happy to discuss in the future.
Shipping KubeCon EU 2021_transcript.txt ADDED
@@ -0,0 +1,363 @@
1
+ **Gerhard Lazu:** So can you tell that this is my third time that I'm recording this?
2
+
3
+ **Stephen Augustus:** Oh, congrats!
4
+
5
+ **Gerhard Lazu:** So it's a new podcast, it's all about shipping stuff, and the reason why we are meeting is because you helped ship KubeCon, literally. Every day, you were shipping KubeCon. And whenever it's KubeCon, I like to get the organizers, the people behind the event, and then the co-chairs as well. So this is going to be a recurring theme, and that's why in October we will definitely record again, because all the hard work that you put in - you make it so amazing; you really do. So this is basically for you.
6
+
7
+ **Stephen Augustus:** Thank you.
8
+
9
+ **Gerhard Lazu:** I know it was so hard for you, because this was the European one, and the European one - you have to wake up, and you have to be there. You can't record yourself.
10
+
11
+ **Constance Caramanolis:** Yeah...
12
+
13
+ **Stephen Augustus:** Fair... \[laughs\] I think that's the part that makes it fun... Not so much the timezone shifts for a lot of people. We try not to do the MC-ing recorded, because it allows us to kind of react to the day and weave in stories from people who are experiencing KubeCon, maybe even for the first time...
14
+
15
+ **Gerhard Lazu:** \[04:24\] Yeah, that's right. It makes a big difference. I know it's hard, and I've seen it... But you've done such an amazing job. So if this is like that, or if this was the way it was, considering all the things, how is Los Angeles going to be? I'm looking forward to that.
16
+
17
+ **Constance Caramanolis:** Los Angeles - we hope there's gonna be an in-person... it is marked as happening right now, we hope the in-person part is gonna happen. So if the in-person part happens, we're kind of expecting it to be smaller than other North Americas, and also EU, partially, because we don't know what travel restrictions are, and we don't know all that stuff there. But we have some fun ideas... I come up with a lot of like -- you know, like the tinfoil hat; I come up with a lot of those ideas in terms of spicing things up, and so I have an idea for what we would do if there's an in-person component for the keynotes... So just to tease it out for people, if there is an in-person, this is gonna be like a favorite - at least in North America - game show with Bob. So just to give that as a clue for people.
18
+
19
+ **Gerhard Lazu:** Okay. I don't know what that means, but I'm intrigued. Do you know what it means, Stephen?
20
+
21
+ **Stephen Augustus:** I do, I do. I think we're still playing around with the idea, so I don't wanna give away too much just yet...
22
+
23
+ **Gerhard Lazu:** Okay. You don't mean Sponge Bob, by any chance, right? This is a different Bob.
24
+
25
+ **Constance Caramanolis:** No, no, no.
26
+
27
+ **Stephen Augustus:** Different Bob. Different Bob.
28
+
29
+ **Constance Caramanolis:** He was the presenter for this game show a while back; there's a new presenter now, his first name is Drew Carey or Drew Carey Flint to give people more hints if they want to Google it afterwards.
30
+
31
+ **Gerhard Lazu:** Okay, okay...
32
+
33
+ **Constance Caramanolis:** Especially like -- since we have a lot more impact on the keynote... We have a huge impact on the content, but as you're saying, our personality is where we get to change things up in keynotes... So we're trying to make it a little bit more like -- because it is a show; we just kind of forget that it is a show, because we're at a conference and we're all focused on the tech, so we're trying to add a little bit more of that life to it, and so... That's where the promo videos came in, and that was also an adaptation from us being virtual, but also it's probably something that we wanna keep on going forward.
34
+
35
+ **Gerhard Lazu:** I think the show idea is super-important, because if you make it fun... A show has to be fun; well, it should be fun, right?
36
+
37
+ **Stephen Augustus:** It should be fun.
38
+
39
+ **Gerhard Lazu:** If you approach it like that, you have some amazing elements there. Tinfoil hats? I love those. That was a great idea.
40
+
41
+ **Stephen Augustus:** \[laughs\]
42
+
43
+ **Gerhard Lazu:** And then the best part was like "Which tinfoil hats?" That was even better. Like, "How do you mean?" That was so good. I love that.
44
+
45
+ **Stephen Augustus:** We're silly people, and I think we often weave these bits into the show... So I think if you're gonna do a bit, you should commit to it. I think we've had questions and comments in the past about there being conspiracies in cloud-native, so we decided to play on that. And then with a conspiracy, there's definitely an element of denial, and gaslighting... So yeah, committing to the bit is important. \[laughs\]
46
+
47
+ **Constance Caramanolis:** It's also like we have distinct jokes. I think it'd be like we'd have to rehearse it beforehand.
48
+
49
+ **Stephen Augustus:** Yeah, I think a lot of it sometimes is off the cuff, and eventually kind of like just evolves day of... The promo videos that we did for North America Virtual - that was just a few takes, and we had that idea pretty much day of.
50
+
51
+ **Gerhard Lazu:** So for those of you that are listening and are wondering who are these wonderful people that joined me today, and if you don't recognize, we have Constance Caramanolis, and Stephen Augustus.
52
+
53
+ **Constance Caramanolis:** Yeah, you got it.
54
+
55
+ **Gerhard Lazu:** Both co-chairs. Constance has been a co-chair since 2020, and I don't know about you, Stephen...
56
+
57
+ **Stephen Augustus:** 2020 - yeah, that sounds about right.
58
+
59
+ **Constance Caramanolis:** China, right?
60
+
61
+ **Stephen Augustus:** Yeah, so I believe China was the first... Yeah, we've been doing it for a little bit now.
62
+
63
+ **Gerhard Lazu:** \[08:18\] Nice. So for those that are still confused, they're not the co-chairs of China or Europe; they're co-chairs of KubeCon/CloudNativeCon Europe, China, North America. These are like the three big conferences in the cloud-native, Kubernetes (but mostly cloud-native) world.
64
+
65
+ **Stephen Augustus:** Just a quick clarification... The China event is now I believe OSS Summit China, maybe... So the primary KubeCon CloudNativeCon events are North America and EU. We do kind of like a region-specific for China moving forward... And Constance and I are not directly involved in that.
66
+
67
+ **Constance Caramanolis:** Not anymore, yeah. It's also because it's all up in the air too, because our co-chair reign of terror ends after North America... So China is supposed to be like afterwards, so Jasmine might know.
68
+
69
+ **Gerhard Lazu:** Who's Jasmine? I'm glad that you mentioned Jasmine. Who's Jasmine?
70
+
71
+ **Stephen Augustus:** Jasmine is our lovely new co-chair. So we are actually expanding the roster from two to three co-chairs. What we've been doing every cycle is essentially we have two chairs doing it, and then as one is getting ready to roll off, we pull in a new one. Those chairs tend to be balanced between kind of like Kubernetes and not. So kind of based on the conference name, we've got KubeCon and we've got CloudNativeCon. Also, we wanna make sure that we bring in perspectives from both the Kubernetes community, as well as the wider community. From my perspective, I'm definitely heavily involved in Kubernetes as a maintainer, and then Constance is involved in the OpenTelemetry community. I will make a suggestion, and Constance is like "Maybe there is an observability play here", or "Maybe we're doing too much Kubernetes content over here, and maybe we should highlight this instead." So it's nice to have that balance.
72
+
73
+ With Jasmine coming in - Jasmine is an engineering manager in the engineering effectiveness organization at Twitter, and what I love about that is that Jasmine is also -- I think each of us has been an end user in the past in some way, shape or form. Jasmine started her cloud-native journey as an end user, and Twitter is also an end user company, so we're getting that end user perspective. And I think that is definitely really important to me, having a background in selling cloud-native solutions to customers. I think that if you're not paying attention to their perspective, you're not effectively selling anything. So having someone who is looking out for that end user perspective as we build out the program for North America and in the future is invaluable.
74
+
75
+ **Gerhard Lazu:** That's actually really interesting, that perspective, in that you're on the inside, Jasmine is on the outside, in a way, end user, and Constance is everywhere, because she's observability; she observes it all.
76
+
77
+ **Constance Caramanolis:** I do observe it all. That's correct.
78
+
79
+ **Gerhard Lazu:** Okay. So I know that Stephen is Caesar of Systems, and I think that's self-proclaimed... But what about you, Constance? Do you have a tagline like that? That was very catchy, Stephen. Great job if it was you. If it was someone else, great job to someone else.
80
+
81
+ **Stephen Augustus:** Yeah, definitely me. \[laughs\]
82
+
83
+ **Constance Caramanolis:** \[12:02\] I don't necessarily have a tagline, but for people who know me, I always have questions... And I guess it kind of goes with observability. Observability is all about answering the questions you have, and I just ask a lot of questions. So I guess that maybe I'll be the question master. Yeah, I'll go with that.
84
+
85
+ **Stephen Augustus:** The riddler! \[laughs\]
86
+
87
+ **Constance Caramanolis:** Oh, the riddler... \[laughs\]
88
+
89
+ **Gerhard Lazu:** Right. So you should be asking the questions, not me... Right? Is that what you're trying to say?
90
+
91
+ **Constance Caramanolis:** No, it's more like - in my role right now at Splunk (I switched to a product) I'm always the person being like "Hey, so for our end users, why are we doing that? How does that actually impact them?" And I apply that also to OpenTelemetry. So I'm always the person asking "But why? Okay, but why?" Kind of like a toddler; everyone knows a three-year-old is like "But why? But how? Really? Do we need to do this?" So I'm the 20 questions person.
92
+
93
+ **Gerhard Lazu:** Okay. So is there a Why question regarding this KubeCon, the EU KubeCon - is there a Why question that is on your mind, that hasn't been answered yet?
94
+
95
+ **Constance Caramanolis:** I haven't thought about that that much. Personally, both Stephen and I have had a lot of big changes these past few weeks and few months. I had to be in Canada for a family emergency, and I just came back from Canada and moved to a new house, so I've been kind of just compartmentalizing... But it's the task at the moment, so I haven't had a chance to actually reflect on KubeCon EU much. That's gonna be something I'm gonna unfold in the next few weeks as things calm down.
96
+
97
+ **Gerhard Lazu:** Okay. The one thing which I was wondering about KubeCon - there are many, many... Like, what you've just mentioned, for example, I wouldn't have known. You compartmentalized really, really well. If that's a compliment, I'm giving it as such. It's really hard, especially in this day and age; everything is changing so much, and things are just happening... They're just happening, literally. We just have to respond in one way or the other.
98
+
99
+ Speaking of changes and compartmentalizing, I was looking at your Twitter, Stephen, about your clothes shopping. That was really interesting. Like "Finally, I'm getting to do this. I'm going to shop for some clothes", right? I think there's like a really good story there. For those that want to check it out, it's all on your Twitter. I really appreciate those little real-life things... And sometimes you forget, right? You have KubeCon, and you have work, and you have all these million things happening and you need to catch up on, and then "Oh. Clothes. I need some of those."
100
+
101
+ **Stephen Augustus:** Clothes! Yeah. I am co-chair of one of the unofficial cool SIGs, SIG Fashion. SIGs are Special Interest Groups. I think what we've done with the in-person events is while they haven't been like official events, we try to have fun with it, so there's like SIG Bike, and folks that are into biking will bring their bikes to an event and get together and go on a ride. There's SIG Bouldering, people will do bouldering events there... There's SIG Beards, so people with awesome beards will get together and take pictures, and stuff.
102
+
103
+ I think part of bringing your entire self to the conference is expression, and part of how I express myself is by how I dress. So as we were getting ready to get this started, I realized "We've done so many of these events", and at this point I was like "I think they've seen all my cool stuff. I have to go shopping." So it was panic-shopping.
104
+
105
+ I think often the adrenaline of deadlines allows you to do things more effectively, even though it's not necessarily the best way to do things. So knowing that KubeCon was coming up, and I was like "I have no new clothes for the stage. I have to go shopping." \[laughs\]
106
+
107
+ **Gerhard Lazu:** \[15:59\] That was a good one. And Constance was paying attention, because she knew that you're missing a hat. So maybe that's how the idea came about? Like "Hm... He bought everything except the hat. I'll make you a hat you cannot buy." And this is exactly what happened.
108
+
109
+ **Stephen Augustus:** Oh, we were churning on the tinfoil hat idea for a while prior to doing the promo video and everything... So we had that in preparation, we were just figuring out how to do it.
110
+
111
+ **Constance Caramanolis:** Background processing.
112
+
113
+ **Stephen Augustus:** Yeah.
114
+
115
+ **Constance Caramanolis:** I think the week that the promo video idea started coming out was the week that conference talks were accepted, so there was a lot of chatter and a lot of misinformation out there about how talks were selected, and that kind of inspired us to do the promo videos a little bit. And those all relate to world news. Right now misinformation is massive in all aspects of life, and at KubeCon as well, which really isn't surprising. So that kind of inspired us to play on that narrative, make it a little bit more playful so people can think a little bit more about how they get their information... In an indirect way, but...
116
+
117
+ **Stephen Augustus:** For sure.
118
+
119
+ **Gerhard Lazu:** Yeah, that was appreciated in so many ways. I think it's going to lead to so many other things, this idea. You take -- I wouldn't say a negative, but something that could be a negative, turn it into a positive, a playful positive, and then it leads to many other positive things. So that was really nice.
120
+
121
+ **Constance Caramanolis:** Yeah. One thing I do wanna say is a lot of people did have very negative experiences. I personally had some negative experience, like in feedback, and it happened -- a lot of people who were big program chairs, and track chairs... We've said this many times, but they spend hours, up to hundreds of hours reviewing the talks and trying to give us feedback, so that we can curate it more and come up with a final selection... And that is thankless work. So thank you.
122
+
123
+ It's tedious to read... It's tedious because you're just seeing conference talk after conference talk, and you're trying to identify what's different, and what's unique, and trying to think about what people wanna see, and it's a really hard place to put yourself in, so thank you. Also, thank you to everyone who submitted CFPs. Thank you. People do review them, and we appreciate it. We really do.
124
+
125
+ **Stephen Augustus:** One of the hardest parts, outside of the volume of talks that we have to review, is that there are also external factors to look at. It's like, time of day, what's going on with your family, how is work going? All these things outside of just trying to understand the technical content and the story that someone's trying to tell. Those are definitely at play in the review process. And even reviewing a talk that looks similar to another one that you've seen - you saw the first one first, right? So by default, you kind of have this feeling about it. So I think it's important -- we go through the process and we kind of look at the talks that are also similar and go "Okay, just because we saw this one first doesn't mean it's the better one. What are the actual strengths between these two?"
126
+
127
+ I think when we have duplication of content, finding ways to put folks together in a room, maybe it's combining efforts with talks, maybe it is moving something to a different track and asking them to tweak the talk in the lens of that track... I think that observability is definitely a great example. I think customizing it and extending Kubernetes where you get lots of interesting takes... You've also got this end user play, where "How does this talk affect an end user?"
128
+
129
+ \[19:45\] The 101 track is another great example, where we have a talk that the way that the story is told fits very nicely for someone who is just getting started with this type of content. And then maybe the talk that looks similar is more intermediate-advanced content that may belong on, say, the observability track, or the customizing/extending track. It's definitely a balance for all of the reviewers to provide really thoughtful feedback, and we do heavily depend on that feedback to structure the program. So like Constance said, thank you again for everyone who gets involved in the review process.
130
+
131
+ **Gerhard Lazu:** I think there's a lot of nuance here that people just wouldn't think about. This must take a really long time... And not just that, a lot of mental effort. And to be honest, I still can't appreciate it, because I don't know what is the volume; how many talks do you have to go through? How many discussions, how many hours do you end up discussing? And when you look at the end result, you think "Oh, just a hundred talks. No big deal." But it is a very big deal. A very big deal, right?
132
+
133
+ **Stephen Augustus:** I think the answer to any good, hard problem is "It depends." It depends for KubeCon. Day by day, some of the stuff that we see ahead of time, some of the stuff that trickles in towards the end... What's also interesting about it is for our program - how many talks were we at for the official program, Constance? A hundred and change, right?
134
+
135
+ **Constance Caramanolis:** It was 90-something.
136
+
137
+ **Stephen Augustus:** Yeah, somewhere around the 90 to 120 mark. And we had to cut...
138
+
139
+ **Constance Caramanolis:** To give a point of reference, for EU 2021 we accepted 100 talks out of the 900. And there was a maintainers track session that was separate. In EU 2020, which we expected to be in person, we accepted over 200. So that's also part of why that shock was huge in terms of -- we had to send more rejections, because we were trying to make the schedule a lot smaller, so it could be more digestible for virtual. We didn't do that for 2020, because when Vicky and I were choosing the talks... Vicky and I met in person to choose the talks back in January 2020, so no one knew about -- or not no one knew about, but Covid didn't seem real at that point. So that was a huge cut.
140
+
141
+ And for North America 2020 we accepted like 150, 160 maybe. That was I think out of 1,000 talks, but for in-person North America. In-person North America 2019 I don't know what the numbers are, but more talks were accepted than EU, so probably like 300 talks maybe. We had to cut a lot of accepted talks because of the virtual component.
142
+
143
+ **Break:** \[22:27\]
144
+
145
+ **Gerhard Lazu:** If you are going to submit a talk, what would you like to say to those that submit the talk, Constance and Stephen, for North America?
146
+
147
+ **Constance Caramanolis:** One, thank you. We know it's really hard to put yourself out there. I think the one thing that, for me, usually makes one talk distinct from another is - clearly, don't just say "I'm gonna tell you five benefits." Give me a hint, like "Hey, I have five benefits, and the first one is something different." Make sure you position yourself a little differently from other talks.
148
+
149
+ \[23:52\] We do face this -- now that KubeCon's been going on for so long, especially when it comes to Kubernetes deployment, like "Hey, I deployed Kubernetes at my company." Those are valuable stories, but we've heard a lot of those. So if there isn't a lot of differentiation between all those previous talks, that might be a better blog post, because a blog post is a little bit more easy to digest.
150
+
151
+ So maybe think about like "Hey, I'm deploying Kubernetes at my company, and we have this ridiculous scale. Or we have this ridiculous requirement that's really unique." Calling that out, about why your problem is different than something else really highlights it.
152
+
153
+ **Stephen Augustus:** Yeah, I think my big suggestion would be -- I think a lot of making decisions is about understanding the different lenses of the people that are involved in decisions, as well as the personas that might be involved in the results of your decision.
154
+
155
+ I would say as you're writing your abstracts - this is one of my favorite questions to ask, is "Would you want to go to your talk?" If you can't answer that, then there's something that you probably would want to tweak.
156
+
157
+ **Gerhard Lazu:** What about those that say "I have to go to my talk, because my company would force me to, to attend all the talks for my company."
158
+
159
+ **Stephen Augustus:** If you were not you, would you go to your talk? If you had no obligation to go to your talk, would you do it? Is there something valuable in that talk to take away? Would you go to it?
160
+
161
+ I guess the second suggestion would be don't make this decision in a vacuum. You usually have people that you can bounce your talk ideas off of. You have experienced reviewers that frequently, as KubeCon is happening, or CFPs are rolling around, people will tweet like "Hey, if you need a review for your proposal, feel free. I'm happy to give advice." So don't not take advantage of those opportunities, and definitely shop your ideas around, because that is usually where we see the best ones.
162
+
163
+ **Gerhard Lazu:** So let me see if I understood this correctly. Let's imagine that I'm submitting a talk. This comes out 17th of May, this will be live; you'll have six days to submit a talk. 23rd of May is the last day that you can submit a talk for KubeCon North America.
164
+
165
+ **Stephen Augustus:** Mm-hm.
166
+
167
+ **Gerhard Lazu:** If I was to submit a talk, based on what you said, this is what I understand. First of all, start with the takeaways; what are the main takeaways? And don't say they will be five. Say what they are. Be explicit about them. And when you put them like that, before you submit, think "If you weren't you, would you go and attend that talk? Are those takeaways that you've listed valuable enough, if you weren't you, to go and attend the talk?" Is that a good summary?
168
+
169
+ **Stephen Augustus:** Yeah.
170
+
171
+ **Constance Caramanolis:** There is one thing too - a lot of talks are actually a blog post, because there's a lot of things I maybe wanna learn from people, but it's really hard for me to -- at least for myself, how I learn is I can't consume things well in a talk, because they're sharing snippets of code, and they're running these things there, and it's really hard to follow. So a lot of talks I actually wish they were either blogs, or they had the accompanying blogs detailing exactly what they did.
172
+
173
+ At least for myself, whenever I'm searching for some problem-solving, if I have to watch a video, I don't process it really well, so that's where I wish people wrote more blogs. But also, I understand that it's incredibly difficult to write blogs, and I hate doing it whenever I have to do it, so I get why people don't.
174
+
175
+ **Stephen Augustus:** Well, I think it's playing to your strengths, too. To your point, everyone learns a little differently. We take SIG meetings, for example - like, I will not go back and view a recording. It needs to be something that I'm building for evidence, that will force me to go back and look for a recording. I don't like learning that way. I would prefer to see a digest, or something... There are some people who can easily churn out blogs. There are some people who are terrified of being on-stage.
176
+
177
+ \[27:56\] Figuring out how to play to your strengths... You know, some of the talks that we saw around kind of like playing with the borders of delivering a talk, because it was now virtual, like you just had more opportunity... People who have video editing experience went crazy with it for a few of the talks. I think both Tabby and Ellen's talk this time around and Justin Garrison's talk for NA were just brilliant examples of tearing down the borders of what it means to just give a talk to an audience. They played with it, and I liked that.
178
+
179
+ **Gerhard Lazu:** That's right. It's like a whole new world when you record yourself. Props. Sure. So easy, right? Do it again. It doesn't have to be the first time. Refine it. Give maybe an internal talk and see what people think, and then do it again. So it doesn't have to be the first time, right?
180
+
181
+ **Stephen Augustus:** For sure.
182
+
183
+ **Gerhard Lazu:** You have as many chances as you want. And the more work you put in... That's the one thing that you cannot skip - the more work that you put in it, the better it will be.
184
+
185
+ **Stephen Augustus:** It shows. It always shows.
186
+
187
+ **Constance Caramanolis:** I remember my last talk that I gave was at NA in 2018, and it was the second time I gave that talk. I had done it internally, and I am so upset with myself, because in the next three months after I gave that talk, I gave it again somewhere else, but I finally found a way to make things a little more clear, and I'm like "I wish I was ready for this at KubeCon North America." But then it was just a forcing function of doing that presentation - feeling like "Oh, I didn't see people laugh", or seeing people with that look of "I don't get that", and going "Okay, I need to refine that." Until you get that feedback, you don't really know how to iterate on it, so... There is a good forcing function for practicing it.
188
+
189
+ **Gerhard Lazu:** I think internal demos -- like, you give a talk internally, in your company, you see how many people show up, see what they say, and if they don't like it very much, maybe take the hint. Improve, or drop it. That's another way, right?
190
+
191
+ **Constance Caramanolis:** Yeah. I've noticed that if people are giving a talk, it's usually because there's something that needs sharing, and usually it is probably the right time to share. One thing that's hard about talks is that it's 20-30 minutes of you just there, sitting, and you're absorbing information. And if you don't have a way -- especially if you don't have a way to interact with the data, it's hard to process it. So that's sometimes why things might flop, and some things may be better as a workshop, or tutorial versus a talk. The information you're trying to present - you have to think about different ways to interact with it.
192
+
193
+ **Gerhard Lazu:** Yeah. And it doesn't have to be a talk, as Stephen says. Write a blog post. That's okay. It's no less difficult, and no better or worse than a talk; it's just a different format. People love it.
194
+
195
+ **Constance Caramanolis:** Yeah.
196
+
197
+ **Stephen Augustus:** Hop on a podcast, hop on a Twitch stream. Talk about your idea. Show me your repo. Do the work in the way that you feel comfortable, first and foremost. Do the work in a way that you feel you're gonna be most effective, and it's gonna highlight your strengths.
198
+
199
+ We put on a conference, but it doesn't have to be a talk. Opportunities to play with it, things like the bug bash was a great example of getting people together and just -- we're hacking on stuff. We're hacking, and maybe that's your best way of highlighting your strengths. Like "Let's get into the code, let's see what's going on. Let's just get it going."
200
+
201
+ So yeah, I think taking time to assess what your strengths are and how to display them is really, really important.
202
+
203
+ **Constance Caramanolis:** I think that we kind of forget that there's an element of tech fame, of having your talk accepted, and presenting yourself out there, but that's one element of being recognized. And granted, recognition and appreciation is incredibly important. I know I thrive on it, and I'm sure for a lot of people that's their source of validation... But I know so many people who are amazing contributors to the project, and public speaking or doing this presentation is horrible, but the way that they disseminate information is like being active in maintainer sessions, and within their smaller groups, responding to issues.
204
+
205
+ \[32:06\] So this is one way to be recognized in the community... And maybe you do get a little larger audience. The individuals who don't like this venue of communicating and interacting with the larger community - they're doing a lot of hard work in terms of responding to issues and being involved there, and that's super-important. I guess we need to find a way to do more to highlight those people, too. We need to figure that out...
206
+
207
+ **Gerhard Lazu:** That's a very good point.
208
+
209
+ **Constance Caramanolis:** ...because that also might make the project important, like really successful... The people who are responding to issues, joining those meetings... "Hey, let's go over this design doc and talk about it." And they're not necessarily gonna wanna do a talk. I realized for myself it could take me up to 100 hours, three workweeks to come up with a slide deck and a first draft of the presentation I'll do.
210
+
211
+ **Gerhard Lazu:** Wow, okay.
212
+
213
+ **Constance Caramanolis:** That's a lot of emotional energy. So for others, they might not necessarily have that bandwidth, because they invest it into other things that they find valuable. There's other ways to be tech-famous, it's just that this is maybe a more obvious way.
214
+
215
+ **Stephen Augustus:** Yeah, I think part of it, too -- that hundred hours... I think the hundred hours is spent, for me at least, panicking and not necessarily doing anything useful. I usually want to have a conversation with you. My talks tend to be more of a discussion with people than delivering any one piece of content.
216
+
217
+ If you saw the Kubernetes keynote updates, or anything that happened last KubeCon, there were no slides. I just spoke to you. So yeah, I think, again, figuring out what works for you, and playing to those strengths.
218
+
219
+ **Gerhard Lazu:** I think that takes special talent. Very few people can pull that off. If you don't have slides, very few people can pull that off. I know a few, and yes, I would agree with what you've just said, Stephen. You're one of them.
220
+
221
+ **Break:** \[34:00\]
222
+
223
+ **Gerhard Lazu:** Is there anything specific that you're looking forward to in the next KubeCon? Is there a specific element that you're looking forward to, whatever that may be?
224
+
225
+ **Stephen Augustus:** Yeah, it really is the -- so for those listening, you can't see the hands and stuff, but Constance was trying to shake my hand. I have not met Constance in person.
226
+
227
+ **Gerhard Lazu:** High five. There we go. Virtual high fives all around. That just happened.
228
+
229
+ **Stephen Augustus:** I've not met Constance in person, I have not had the opportunity to take the keynote stage since I have been chair. Again, my favorite part - I say this in pretty much every interview - of any conference is the hallway track... And we're actively working to do more to make it feel like you're in the hallway virtually, but being able to see the maintainers that I work with all year round in person, even if it's the six feet apart, wave from across the hall... I think that's what I'm looking forward to.
230
+
231
+ **Constance Caramanolis:** \[36:22\] Agreed.
232
+
233
+ **Gerhard Lazu:** So for those that have never been to KubeCon, what is the hallway track, Constance?
234
+
235
+ **Constance Caramanolis:** So hallway track - it ends up happening where you maybe go to a talk and you see someone ask a question to a speaker, and I thought that person asked a good question, and I go to the person and I'm like "Hey, that question was really good. I was wondering, that kind of relates to my problem." I've seen so many people who end up taking pen and paper in the hallway and being like "Okay, well I was doing this thing here", and they're like debugging things together, and they talk about it, and you end up becoming friends with these people.
236
+
237
+ So a hallway track is pretty much just meeting other people who have similar interests -- or not even similar interests, because there are some other happy hours where it's just like everyone's together, and you just talk to people... But you end up getting to make friends and getting to know people who are in the broader community, and you get to meet them. You just get to hang out, and then -- I'm just getting so excited about the prospect of hanging out with people in person.
238
+
239
+ I only got my first vaccine on Sunday. I'm in the States -- I was in Canada, \[unintelligible 00:37:25.18\] I can count how many weeks away it is where I could be maybe in a crowd of people... It's so exciting to think about it. But we did try to replicate this... So if things unfortunately take a turn and it goes to virtual-only, we do have hallway track Zooms. It's a little bit inorganic at first, because you get to be placed in breakout rooms, but then you have an opportunity to meet a group of people that you never would have met before... And that is really fun.
240
+
241
+ I've met someone who was actually interested in OpenTelemetry. "Let's talk about it." We spent like two hours talking about OpenTelemetry. Or people who are like -- someone goes climbing, or cycling... So you get to meet people with similar interests. It's fun.
242
+
243
+ **Gerhard Lazu:** Right.
244
+
245
+ **Stephen Augustus:** Yeah, I think it's definitely like extracting the -- they joke around, like "Oh, I've always imagined you as a tiny square on Slack, or GitHub." Often, the way that we interact with people is mostly through their contributions, asking and answering questions about things that are happening on our projects, but there is a transactional element of doing that, which is entirely different when you get to meet in person, because you get more of an opportunity to -- like, you're not at the computer... You get to have the opportunity to talk about the self that exists outside of the open source community or outside of their day-to-day work.
246
+
247
+ As we head back into this, I would remind folks of the PacMan rule... Depending on how close we're allowed to be at that point. So the PacMan rule is a fun one; if anyone has played PacMan or seen what PacMan looks like in the past - imagine a pizza with one of the slices removed. That's kind of like the image of PacMan. And what the PacMan rule is about is essentially like when you group up, when you start bunching up into -- because this is essentially what'll happen in the hallway... A bunch of folks that know each other will group up into a circle, and start having a conversation. And what I would say is follow the PacMan rule, let the circle be PacMan-shaped. Because when you let the circle be PacMan-shaped, it allows someone new to come into the conversation. And then they expand the circle wider and wider, and you start bringing in different perspectives... You maybe get an opportunity to talk to a new contributor, someone who hasn't gotten involved in things yet.
248
+
249
+ I think we're all human, but definitely for me, I have a bunch of heroes in this community, and you could be standing shoulder-to-shoulder with one of your heroes... So give them an opportunity to have that experience and to have conversations.
250
+
251
+ **Gerhard Lazu:** \[40:20\] That's a very good one. This is the first time I've heard that. That's really good. And what I would add to that is if the circle gets so big that you can't talk anymore, that's your limit, right? That's your limit. Stop expanding it. You can't talk.
252
+
253
+ **Stephen Augustus:** I think that one of the fun things that ended up happening in -- it was Barcelona. In Barcelona, there was a Giant Swarm party, and I think there was a DJ that they hired for the party... And I think people were kind of grouped up in their own groups... And we noticed that there weren't that many people dancing. As we were chatting, our group was kind of out on the floor, and we were like "There's no one dancing. This is a party. We need to start dancing", and it kind of started off as a circle of maybe four, five folks, and turned into a very large circle of maybe 20 folks by the end of it. So I would say your circle can get as big as it needs to get, for the space that it allows.
254
+
255
+ **Gerhard Lazu:** Okay.
256
+
257
+ **Constance Caramanolis:** One thing - if a new person joins the group, someone in the circle should be like "Hey, new person, what's your name?" and actually give them an opportunity to introduce themselves. I'm actually pretty shy in new group situations, and I was always eventually going to be like "Hi." But if someone's just like "Hey, who are you? Nice to meet you", then you have an opportunity to introduce yourself and it feels a little easier to say hi and join the group.
258
+
259
+ **Gerhard Lazu:** Yeah. I think that when we first meet, we'll be so crazed by that moment that we won't know what to do. We're gonna be like "Whoa!! What is this happening?!" I think there are certain rules which we need to reiterate; it's very important that we start with them, because... We forget. It's all Zoom, it's all whatever else it is, online chatting, and then when we get in-person, when we bring the human element, which I think everybody is looking forward to... That question is almost obvious. The answer is it will be in-person; everyone will be looking forward to that. The in-person element, the human element. And then there will be certain rules which we'll need to remind ourselves of - how it works, first of all. \[laughter\] And second of all, how to make it work in the new world. So the combination.
260
+
261
+ **Constance Caramanolis:** Yeah.
262
+
263
+ **Gerhard Lazu:** Is there anything specific that you wanna discuss in the last 15 minutes? Anything that you want to get out there... So now you get to take the reins, if you wish.
264
+
265
+ **Constance Caramanolis:** I do.
266
+
267
+ **Gerhard Lazu:** Go on, Constance. Anything. Go for it.
268
+
269
+ **Constance Caramanolis:** So one thing from the keynotes... \[unintelligible 00:42:47.17\] talk was amazing, and also important for the community. And one thing I do wanna (I guess) call out is a sense of ownership. People were asking "Oh, I hope they can do a follow-up talk again", and I wanna keep on hearing from them, but I also want this to be a call-to-action to the community... It isn't you know, Eva and Bob's responsibility to give us updates. It's actually like -- this is a really good time for us to pull from them. Don't let them push the information, let's pull the information from them. They're a part of big groups that are doing this effort, and they're posting updates, and there's a lot of ways to engage with them directly... Especially for \[unintelligible 00:43:23.02\] because there was such a strong reaction in the keynotes Slack channel; but it's like, they don't have to give us the updates, we can get the updates ourselves and be more involved.
270
+
271
+ This also applies to other projects, too. Everyone, all these projects, and SIGs, and working groups \[unintelligible 00:43:40.03\] all these places have mechanisms for being involved and pulling information from them. So it's more a call-to-action for everyone to be more of an active participant instead of a passive participant. If you're curious about something, pull that information; get it yourself.
272
+
273
+ \[43:57\] I guess it's like Eva and Bob did an amazing job, but it shouldn't be their responsibility to always update us. We should be active participants, getting all this information and making sure that we're being healthy community members.
274
+
275
+ **Gerhard Lazu:** How can people pull this information?
276
+
277
+ **Stephen Augustus:** So very specifically for their talk, something that Eva mentioned that is worth repeating - the responsibility for gaining information, especially as we're talking through and thinking through and having discussions about diversity, about equity, about inclusion in these communities, it's not our job to teach everyone everything. It is your job to care enough to do that work yourself.
278
+
279
+ With regards to that talk specifically, github.com/community will give you everything that you need to know about the Kubernetes community, a walkthrough on governance structure, all the various SIGs, working groups, subprojects that are within the community, as well as links out to information on the Kubernetes Steering Committee, as well as the Code of Conduct Committee.
280
+
281
+ I think for talks in general - we were going over this just yesterday, but it will be many more days since yesterday when you hear this... But we were going over kind of like the composition of talks, the expectation for content... It is not possible for us to give you all of the information that you need. I think that in any KubeCon that you go to, in any conference that you go to really, it should generate questions for you; hopefully it generates questions for you, hopefully it generates interest for you, to want to go and discover more about that particular topic. I think that there is a component of being kind to the people who are delivering this content as well. Very often you will see things on the internet where folks will go "Oh, well I was expecting X, Y and Z from this talk, because blah-blah-blah, and this is the thing that I care about." The thing that you care about is not necessarily the thing that the speaker cares about, or not necessarily the audience that the speaker is trying to speak to. So be kind when you think through content that you're receiving.
282
+
283
+ The content that is delivered on stage, virtually, what have you, is the sum total of hours and hours of dedication, conversations with multiple people, decades of experience often. So try to figure out how that content can be useful to you, but try to do it in a constructive manner. I think that there is always a way to ask questions that can be effective and useful to both parties, so try to think of ways to do that.
284
+
285
+ When we say \[unintelligible 00:46:57.20\] Pull information from the committees that they represent, from the communities that they represent. I'm pretty sure that's what Constance meant.
286
+
287
+ **Constance Caramanolis:** That's exactly it, yeah.
288
+
289
+ **Stephen Augustus:** I think it's worth clarifying that point. Do not reach out to them necessarily. They can be your ingress point to these communities, but do your own due diligence to get this information. The information is out there because we have put it out there, because someone has asked that question before. You are usually not the first person to ask the question, and I think that a lot of the communities in the open source space do tremendous work to try to answer questions that have been asked before, and put them in places that are visible.
290
+
291
+ **Constance Caramanolis:** I will get us the links... Because you meant [github.com/kubernetes/community](https://github.com/kubernetes/community), right?
292
+
293
+ **Stephen Augustus:** Yes.
294
+
295
+ **Gerhard Lazu:** Yup. In the show notes. Great.
296
+
297
+ **Constance Caramanolis:** Yeah.
298
+
299
+ **Gerhard Lazu:** \[48:01\] So what I understand from what Stephen said - and you, Constance, as well - is that the content and the experience that is KubeCon, you pay for your ticket, but that doesn't give you the right to behave like an \*\*\*\*\*. It's a gift. It's a privilege.
300
+
301
+ **Constance Caramanolis:** Nothing does. Nothing gives you that right.
302
+
303
+ **Gerhard Lazu:** Exactly. It's a gift. Everything that you receive, everything that you learn, all the conversations - they are a gift; treat them as such. If you don't like it, say "Thank you" and be polite, even if you don't like it. That's what you do when you get a gift. So if you think about them like that, then maybe you'll feel less privilege in that you're owed something. You're not owed anything.
304
+
305
+ **Stephen Augustus:** And I think there's for sure a flipside to that, where we're not standing from on high giving gifts out, necessarily. I think that we always want feedback; there are official venues to provide feedback - there are talk surveys, there are conference surveys, there are transparency reports that come out at the end of conferences to give you more of a clue of the composition of the conference and how people felt it went, and stuff... So if you have feedback, make it constructive, put it through official channels.
306
+
307
+ I think that a lot of the things that we often see - we're on Twitter, we're on the internet, and sometimes it's easy to get wrapped up in conversations that fall out of the official channels, and people will often have expectations of these conversations, but not realize that because you didn't put it through an official channel, the people who have the ability to change these things - you didn't give them the feedback. So that feedback that you thought was effective - for whatever it's worth, it doesn't reach the people it needs to in order to change things.
308
+
309
+ **Gerhard Lazu:** And if you don't know how to give feedback, I'm pretty sure this is answered in the community guide on how to give constructive, positive feedback, right?
310
+
311
+ **Stephen Augustus:** We have speaker guides, we have conference guides, we have a code of conduct for the conference...
312
+
313
+ **Constance Caramanolis:** I've tweeted about it. We've tweeted about how to give constructive feedback, yeah. But you can actually also search "Constructive feedback", there's a lot of different ways to do it. There's a lot of training on it.
314
+
315
+ **Gerhard Lazu:** So there's that as well. Okay. So be kind. This is something that keeps showing up, and maybe don't take things too seriously, right? People do make mistakes sometimes, and it's not meant to hurt you in any way, right? It just happens, so don't take it too seriously. If you didn't like it, don't be a jerk, or whatever the other equivalent is. Be nice, be kind.
316
+
317
+ **Constance Caramanolis:** Yeah. I think there's something to add to that, too... People will make mistakes, and sometimes you'll accidentally be a jerk. And it's not okay, but the thing to do is to own it. Apologize \[unintelligible 00:51:00.10\] Don't just ignore it, because then it's not actually addressing it, and ignoring the problem is actually a part of the problem... But if you address it, like "Hey, I'm sorry. I made a mistake. I offended you, and that wasn't my intent." Rectify the situation, because that does build trust and makes them feel better.
318
+
319
+ **Stephen Augustus:** And I think there's a kind of aphorism that people agree and disagree with, which is "Assume good intent." And it's tricky, because when you walk into situations, there's of course a balance. I think that in the open source space you wanna be good, and kind, and true, and assume that people are operating in the best intentions of the community. At the same time, the flipside of that is you're often requesting that from people who have been historically marginalized and under-represented, so when you ask them to assume good intent from people who have not historically given/had good intentions for them, you're asking them to self-harm, essentially.
320
+
321
+ \[52:22\] So I think that there is, for sure, a balance of having thoughtful discussions, and again, doing the work to be thoughtful in your communication to people. I guarantee you, anytime I see -- there are quite a few communities within the cloud-native space that I work on, across Kubernetes, across KubeCon, for the Technical Advisory Group, for Contributor Strategy, for the Inclusive Naming Initiative, all these places; if I see you potentially causing harm to one of the contributors that I work with, I will say something about it. Every time. I will call you out.
322
+
323
+ **Gerhard Lazu:** I think everybody should do that. It shouldn't be just Stephen. It's all of us, right? It's the community that we build.
324
+
325
+ **Constance Caramanolis:** We are responsible.
326
+
327
+ **Gerhard Lazu:** We know it's hard - certain degrees of hard, but it's hard... But it's worth it. It's worth it to be kind, it's worth it to be nice; it's worth it to actually invest in this... Because it is ours. KubeCon is all of ours, so what do we want it to be? Well, it happens to be my favorite conference out of all the conferences, and that doesn't just happen... And it's not just a group of people that made it happen, it's everybody. Literally, everybody. So if you're a participant, if it's your first time, if you've been at every single KubeCon - it doesn't really matter; it doesn't change anything.
328
+
329
+ **Stephen Augustus:** You're part of it. You're part of team CloudNative. I think that the second you decide that you want to attend, the second you stare at a GitHub repo, the second you join a mailing list for one of these projects, you're part of it. We do this for you, and we can't do this without you. So bring your best self to this.
330
+
331
+ **Constance Caramanolis:** I also think too, KubeCon became so big because the community is great, and because we keep on reflecting on what our standards and our commitment is to the community, and trying to improve ourselves, and we don't stay stagnant. And that's why these conferences are so large, it's because we do try to make it inclusive, and we try to hold ourselves accountable and try to grow and learn. Once we stop doing that, it will no longer be the community that many of us love.
332
+
333
+ **Stephen Augustus:** So a PSA on that that I think is important to hear as we prepare for hopefully more people to start doing in-person things again - and I mention this because it's very important right this second, because I've seen it happen... These projects, these communities, these events are held to a code of conduct, often multiple codes of conduct. Conduct yourself appropriately if you are attending one of these events. These events are not dating events; you do not have the permission or the right to make someone feel uncomfortable in these spaces, and if we find out about it, we will act on it. That is not invited behavior in our communities.
334
+
335
+ **Gerhard Lazu:** Thank you, Stephen, thank you, Constance. This was like -- if you've made it to the end, you got all the good parts; and if you've made it this far, rewind ten minutes and listen again, because that was the best part of this interview; it's really powerful. We have to acknowledge the negative parts, because they're there; we can't just gloss over them. There's also positive there. Choose whatever you wanna focus on, but don't ignore the bad bits. Manage them.
336
+
337
+ **Constance Caramanolis:** You can't ignore it.
338
+
339
+ **Stephen Augustus:** When you decide to ignore, that makes you culpable.
340
+
341
+ **Gerhard Lazu:** \[56:01\] That's how it starts.
342
+
343
+ **Constance Caramanolis:** Yeah.
344
+
345
+ **Gerhard Lazu:** That's the beginning. Who knows what the end will be, but don't get there. Just catch it early.
346
+
347
+ **Constance Caramanolis:** I know I've had horrible experiences in tech, and I will say, one thing that made me wanna keep on staying was this community. And if it's to ever resemble some of my previous bad experiences in tech, I would just be like "This isn't worth it for me." So I am very proud of this community for holding ourselves to higher standards than what the baseline is for tech. Because baseline for tech, honestly, is abysmal. We can do much better, and CNCF is thankfully -- the projects are doing much better than that, and I want us to keep on growing, because we still have a long ways to go.
348
+
349
+ **Gerhard Lazu:** So first of all, you mentioned about tech and how abysmal things are in tech, Constance. And you're right. We all have different perspectives. The under-represented groups have it the worst, and most of you have no idea what it's like. It can get really bad. And I think I'm finally - after years and years - starting to understand what is special about the cloud-native landscape and the CNCF. It's not even like the effort; it's this attitude that people have, and it's this community that's coming together. And it doesn't matter what happens, all the ideas coming together, all the good stuff coming out of it, it's the people that have this attitude which is very positive, which is very inclusive, and it's this strength that drives everything else. That's why people commit to it, and do amazing things within it, because they can thrive. So all we're doing is creating this place where people can thrive, they can feel safe, they can feel creative, and the sky is the limit, really. There's nothing you can't do. Well, I say that; there are things you can't do, and you shouldn't do... \[laughter\]
350
+
351
+ **Stephen Augustus:** There are limits, for sure. But I think that getting cert'ed in the cloud-native space, I definitely was very excited, bright-eyed, and bushy-tailed, and excited about learning the technology... I don't really care about the technology as much anymore. I spend a lot less time on the command line these days, I spend a lot more time especially in the new role, talking to people. I think that we're building a people system, or we're building a set of people systems... And every time I have a chat with someone who is just getting started, or is stuck on something, or maybe we catch up after a year of them being involved... That's what gives me the energy to keep doing it.
352
+
353
+ Because seeing us create a space where, like you said, people can thrive and learn and grow, and then how -- you know, with any good technology, especially in our space, it's like "How do we make it distributed? How do we scale it?" So it's beyond that, and this is a great example. I can't necessarily have this one-to-one conversation with everyone who might be listening to this later. But having opportunities to make something that is scalable... You know, how do I give a lesson to someone, or how do I learn from someone in such a way that we're gonna be able to replicate that with someone else, with a group of people, with a set of projects, with multiple areas of the tech industry? It's like, how do we continue scaling the good work that we're doing?
354
+
355
+ **Gerhard Lazu:** This is too good. I had too much fun. I'm not sure whether it's safe to have this much fun on a Friday, but this made it the best part of the day for me, so thank you.
356
+
357
+ **Stephen Augustus:** I sure hope so. \[unintelligible 00:59:52.23\]
358
+
359
+ **Constance Caramanolis:** Yeah, this was great. Thank you!
360
+
361
+ **Gerhard Lazu:** Thank you. This was the best conversation I had all week, so thank you.
362
+
363
+ **Stephen Augustus:** Absolutely. \[laughs\]
The foundations of Continuous Delivery_transcript.txt ADDED
@@ -0,0 +1,315 @@
1
+ **Gerhard Lazu:** I remember how difficult it used to be to get code into production. FTP used to be used a lot, rsync used to be a thing, getting things out there... And something happened around 2010. There started to be a shift in 2012/2013; there was an acceleration of just git pushing, and the code would get out there. And I thought that was amazing. Like, why haven't we been doing this all along? I mean, what else do you need just to get it out there for the users to tell you "Does it work, or doesn't it work?" or "You're missing this." The quicker you can get to that point, the better off you are. And even if you make mistakes, that's okay. How do you learn if you don't make mistakes? So don't try to not make mistakes, try to make them so quickly and fix things so quickly that no one even notices. By the time they notice there's a problem, you've fixed it, and it doesn't exist.
2
+
3
+ I think you, Dave, had something to do with this, because around 2012 you published this book, you co-authored this book with Jez Humble, which was called "Continuous Delivery."
4
+
5
+ **Dave Farley:** Yeah.
6
+
7
+ **Gerhard Lazu:** \[04:04\] And even though, hand on heart, I haven't read the book, but everything that you capture in that book I sure have practiced for more than decades, and I cannot think of working any different. So how did you come up with the concept of continuous delivery?
8
+
9
+ **Dave Farley:** I'll start off by admitting that I'm very old and I've been doing this for a long time... So mostly how I came up with this is by doing it wrong in lots of weird and interesting ways first, and finding out what didn't work. I had a kind of formative experience, several formative experiences... But one time I remember in the late '90s building some reasonably complex software for an insurance company, and we were supposed to deploy it by writing a manual script that somebody would then take over and execute. And the system wasn't configured in the way that we expected. We didn't know what the configuration system was really, so that didn't work very well. So I remember me and a friend spending two days, stood up in a server room somewhere, trying to manually install the software... And I was thinking "There's gotta be a better way of doing this..."
10
+
11
+ Later on I worked on a project for a point-of-sale system when I worked for ThoughtWorks in the early 2000s... And we were doing extreme programming at some scale; at the time we thought that it was probably the biggest agile project in the world. There were about 200 people working on this agile project on three different continents... In those days it sounded really weird, because agile projects in those days were just small projects. And we were kind of trying ideas out.
12
+
13
+ So we started to get more disciplined in our approach, we started seeing the things that were going wrong, and we started to introduce more of the deployment automation, more of the configuration management infrastructure as code, better approaches to automated testing, and starting to formulate basic deployment pipelines out of those. And it was during that period that we started playing with the ideas and pulling something together and thought that we had something.
14
+
15
+ The book itself came out of a notion -- there were a bunch of us doing different things on different projects in ThoughtWorks, and we thought that we were on to something. We were starting to see patterns that worked, and we were starting to apply those to the project, so we would go in having a better sense of what to do to begin a project and have some success with it.
16
+
17
+ So we thought we'd kind of write a book that was initially meant to be a series of essays, and there were a bunch of us that said "Yeah, we've got some stuff to say" and started talking about it... Actually, when it came down to it, only Jez and I did any writing, so that's why we ended up writing the book. \[laughs\] And the book morphed into something else. It certainly didn't start out being called "Continuous Delivery", but it morphed into something else, and both of us, I think, were a little bit wary of thinking of this at the time as -- and I do think that it's kind of a methodology. It's a way of approaching software development in its own right, and I believe that it's an engineering practice. I think of engineering in the sense of amplifying the impact and the talents of the people that are making the changes. As you said, software development is weird stuff, and one of the really hard things is knowing how well you're progressing, knowing how good your ideas are, and so you want to be able to get those out into the hands of users quickly and efficiently, so you can learn from that and adapt and change.
18
+
19
+ I think you put it perfectly when you were describing it in terms of we wanna give ourselves the freedom to make mistakes. We want to be able to start -- I am a popular science nerd. I love reading about science, and physics in particular, and I think that we can learn a lot from the fundamental philosophies of science. I don't mean Six Sigma accuracy and statistics or something like that, but just applying scientifically rational thinking. You know, start off assuming that we're wrong, rather than assuming that we're right. Test out our ideas, try and falsify our ideas. Those are better ways of doing work, and it doesn't really matter what work it is that you're doing. That stuff just works better. And certainly, the ability to move quickly, make small changes quickly, observe the impacts of a change so that you're in effect controlling the variables, limiting the scope, the blast radius of mistakes is a fantastic way of making progress efficiently.
20
+
21
+ \[08:41\] One of the things that I am obsessed with at the moment is watching Elon Musk and SpaceX build starships to go to Mars and blowing them up in Texas, because that's how you learn. That's how you do great engineering. I think we're on to something here, because I think continuous delivery is an approach that allows us, facilitates that kind of thinking and that kind of approach to software.
22
+
23
+ **Gerhard Lazu:** You've made so many great points that I have difficulty tracking all the things that I wanna mention to all your points... I mean, there's just so much, so let me start with this. First of all, thank you very much. You have no idea how big of an impact your approach and your teachings and sharings had, not just on me, but on everyone I know. The stuff that you do and the stuff that you've been promoting for decades now have been part of me in many different ways.
24
+
25
+ For example, I was also into physics when I was in high school, and I was convinced that I'd go to university to study physics. But then, when I was at the University of Puget Sound in Tacoma, there was the "100 Years for Max Planck" talk. That was a fascinating conference. But I discovered the Macintosh. And that changed my life.
26
+
27
+ So I was so good at making mistakes and learning from them, and deploying, and figuring out what doesn't work, that people said "Can you do my website? Can you host my thing? Oh, how do I do emails? I don't know how to do emails." This was like the early 2000's. That's when I started doing these things properly. And I realized "Well, this approach of trying to figure out if it works - it's so good that you don't need anything else. Keep learning, keep iterating, keep improving, and that's all there is to it. Nothing else." And continuous delivery is so fundamental to this approach of working that everything changes and you don't want to go back. I can't imagine--
28
+
29
+ **Dave Farley:** Absolutely. That's one of the things that I've observed. First of all, thank you for saying thank you; that's very kind. But I've had the privilege of working with some great teams over the years, and I have yet to see one that has adopted continuous delivery in the way that I would recognize it that ever wants to work in a different way.
30
+
31
+ One of my other formative experiences was I was involved in building a very high-performance financial exchange, while I was in the middle of writing the continuous delivery book... And that was an exercise in genuine engineering. We were doing some really hard stuff, some really difficult stuff to be able to build this ridiculously efficient software system. But we were starting with a blank sheet of paper. I was the head of software development, and I dipped in the middle of this continuous delivery thinking, because I was in the middle of writing the book... So we built the organization from the ground up as a continuous delivery organization. And the commonest message that I still get from the people that I worked with on that project is "Oh God, I miss what it was like there...", you know, if they've moved on and they've gone somewhere else.
32
+
33
+ One of my friends is now in New Zealand, and he regularly grumbles to me that "If only we were doing those things..." People don't wanna go back to a different way of working. This stuff works better. And fascinatingly, we're gathering data to back those sorts of statements up for science nerds like you and I. You shouldn't just trust what we say, we should also gather data, and you should try that for yourself, and all those sorts of things. Because if we are right, it's a reproducible thing. It's not some kind of magic.
34
+
35
+ **Gerhard Lazu:** \[12:07\] I love that. Starting from "I'm wrong. Let me figure out what right looks like."
36
+
37
+ **Dave Farley:** Yes.
38
+
39
+ **Gerhard Lazu:** And if you always assume that you're wrong, even with all your experience and everything you know, you will never be wrong - that's very weird, because you will figure out what right is. You don't know what right is. It changes all the time. It's contextual. And most importantly, it's the people that you work with. They always change. You replace a team member, you have a new team. Someone leaves, someone joins - you have a whole new team. So how do you stick to those principles and how do you promote those healthy principles that everybody respects, abides by, and then magic happens? And I think you were capturing a little bit of that magic with your friends that you used to work with, and you became so much more than co-workers. That's amazing.
40
+
41
+ **Dave Farley:** Yes, certainly. And I think that's certainly the difficult problem. We're technologists, so we often get lured by the technology, and your joy of discovering the Macintosh and all that kind of stuff... But the hard parts -- I know it's trite, but it is true that the hard parts are the people parts. It's not that continuous delivery or its practices are particularly difficult. In fact, I would argue the reverse. I would argue that one of the lures of this way of working is that it's a much simpler way of working, but you have to put some work in and you have to think about things differently, and you have to discard some of the baggage from previous ways of thinking about things. And that is so incredibly difficult for people and organizations to do that these days I make a decent living helping people to try and make that change... But it's incredibly difficult.
42
+
43
+ One of my proudest boasts is that in the organization that I've just mentioned, where we built the exchange - it was an organization called LMAX. It still exists, still trading, still running these exchanges around the world built on our technology, and the culture is still fantastic. Funnily enough, I saw a job advert for developers for LMAX cross my event horizon today, and I was reading about it and I was smiling to myself, because I probably could have written that job advert seven years ago when I left... Because the culture is still there, the behavior is still there, and that's continued. So we started something with the group of people that were there at the beginning that's been durable in the organization; we established a development culture that has been not only lasting, but has been communicated to subsequent generations of developers and other people working in that environment... Which I think is fantastic. I'm incredibly proud of that.
44
+
45
+ It is difficult to get people thinking differently, to change their minds, to jump out of the ruts of their old habits and jump into a new way of thinking about things, which these days is a lot of the thing that I get pleasure from, is trying to help people just think about ideas differently.
46
+
47
+ **Gerhard Lazu:** So would you say that the people influence these practices and the people contribute to how these practices work, or is the inverse true, where the practices influence the people to behave in a certain way, and then sustain these practices long-term?
48
+
49
+ **Dave Farley:** I think it's both. I think it's a combination. I think it's a little bit too trite... So we've always said things like "What it takes to build great software is you need great development teams." And that's true, you need good people. But it's not enough. I've worked with some genuinely brilliant developers, building bad software. And that's not their fault. A bad process will break good people every time. And so there's more to it than only that. And this, again, is one of those things that - I refer back to science quite so frequently... Often, developers and development teams and organizations are somewhat disdainful of process, because they assume that software is a heroic exercise carried out by geniuses toiling against the code mountain...
50
+
51
+ \[16:14\] But you could say the same kind of thing about science. Science is this terrific endeavor, a human activity carried out by fallible, mistaking human beings... But by organizing their thinking in a certain way, they eliminate whole classes of errors and biases that are built into us through our biology. And if you want to do engineering, which I would count as an application -- so practical science, to a practical end, is the way I think about engineering. If you wanna do engineering of that form, which is what I've come to think of what we do as, when we do this sort of stuff that I'm talking about - if you wanna do that, then you're gonna have a better outcome, you're gonna improve your chances. It's no guarantee. It's not going to make a bad development team great. It's not gonna make a bad development team build world-class software, but it's going to improve the quality of their work. It's going to amplify their talents and their skills to an extent so they can do better than they would do without it. And that's true of world-class developers, too.
52
+
53
+ One silly example, one of my good friends who I worked at LMAX with - he was the CTO, I was the head of software development - is Martin Thompson, and I regard Martin as at least one of the best, probably the best programmer that I've ever met. He's genuinely brilliant. Really, really talented guy. And I've known him for a long time, I've known him for many years. We first met in the '90s. He's a bit younger than me, and he was a young man then, and he was very, very good then... But I taught him to do test-driven development while we were at LMAX, and he and I think he's a better developer now than he was before.
54
+
55
+ There are some of these techniques that however good you are or however bad you are can improve you. And if we were to be able to identify something that we could class as an engineering discipline, then I think it would have that kind of property, which is really what I'm talking about. I think we are odd in software development circles in that we tend to take terms and change what they mean. I think in nearly every other context that we can think of outside of software, if you use the term "engineering", it means the stuff that works. In software development we've turned it to mean something else, usually something more complex and something that we don't like very much.
56
+
57
+ I think words have some power, and I think that I like to use a reasonably strict definitional approach to parsing things to be able to form ideas... And if we think of engineering in the terms of the practical application of science, then it ought to work for software at least as much as it does for anything else. And we have a bunch of advantages in our favor, too. We've got one of the most powerful experimental platforms that happens to also be exactly where our software lives, in a computer.
58
+
59
+ **Gerhard Lazu:** First of all, I really like that you point out this distinction between engineers, software engineers and developers, because it's a very important distinction. People don't even think about it, and they use the terms interchangeably... But they mean very different things. Now, I think we can have a show just about that, whether we are software engineers or software developers... So let's park that there, recognize it for what it is, and move on to the other thing, which I -- when you mentioned Martin being one of the best software developers, engineers...?
60
+
61
+ **Dave Farley:** Yeah.
62
+
63
+ **Gerhard Lazu:** Software developers or engineers?
64
+
65
+ **Dave Farley:** Both.
66
+
67
+ **Gerhard Lazu:** Okay, great. So, Martin being the best software craftsman, the best software person - TDD made him better. Before TDD, what made him so good, in your eyes?
68
+
69
+ **Dave Farley:** \[20:01\] There were lots of things. He's a very smart guy, which helps. It's not enough, but it helps. One of Martin's great talents is he's got a laser beam focus on simplicity. And one of the things that I learned from developing software with Martin is I've really strengthened one of the tools in my toolbox, which is focusing on the separation of concerns. So Martin is absolutely brilliant; he sees the least piece of code that is doing two things and immediately he's pulling it apart to try and separate those two things, so that each piece of code is focused on achieving one outcome, and then he's growing it from there.
70
+
71
+ Martin's code is almost like reading prose. It's readable, it's modular, it's cohesive... It's just nice code. It's also blisteringly fast. Martin's one of the world's experts on high-performance and concurrent systems, and he's widely recognized as such. The people on the Java team occasionally ask him for advice about how to speed things up, and that kind of thing... He's well-respected in the industry. But the thing that I value most is the focus on the separation of concerns as a driving force in the design that he applies to code. I kind of had a more informal use of that kind of technique. My design skills were pretty good. I'm a decent coder, I'm not a bad developer myself, but Martin was always so focused on it, and I've picked that up now. And now I'm always looking "Could I pull something apart here?" and my code is much nicer as a result.
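To make the "doing two things" point concrete - this is not Martin's code or anything from LMAX, just a made-up Kotlin sketch of the kind of pull-apart Dave is describing:

```kotlin
// Hypothetical example: one function quietly doing two jobs (parsing and arithmetic).
fun totalFromCsv(line: String): Int =
    line.split(",").sumOf { it.trim().toInt() }

// Separated concerns: each piece of code is focused on one outcome,
// and each can now change - and be tested - independently.
fun parseCsv(line: String): List<Int> =
    line.split(",").map { it.trim().toInt() }

fun total(values: List<Int>): Int = values.sum()

fun main() {
    check(total(parseCsv("1, 2, 3")) == totalFromCsv("1, 2, 3")) // both give 6
}
```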
72
+
73
+ **Gerhard Lazu:** I can definitely see how the TDD would have enhanced that property or that aspect, because it forces you to focus.
74
+
75
+ **Dave Farley:** Yes.
76
+
77
+ **Gerhard Lazu:** I mean, you could be focused or not as you write code, but when you do TDD, if you do it properly, you start with Red, which is always the first one... You do Red, go Green, then Refactor. You even have a video on this; we'll introduce this a bit later, but it's fascinating to see how simple it is if you really think about it. And one of the things - again, I'm jumping here between things, but there's so many things I wanna talk to you about... It's how it took you a really long time of thinking and working in this space of extreme programming, Agile - I'm sure that's a big part of it - continuous delivery, test-driven development, to not only hone your skills, but also share your skills and share your knowledge with everyone that wants to listen, or is interested in these things. So in a way, it's not just the focus, I would say, but also the consistency and the perseverance to stick with it. I mean, it's been decades and you've been sticking with the same thing, on the same hill... I mean, sure, you're sharing it with others; I know that Jez Humble has the same hill, or maybe a different hill... The point being, these things stand the test of time, and focusing on that one thing long-term is how you build recognition, admiration, thankfulness, respect... Success... Whatever you wanna call it, it's all related, I think.
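For anyone who hasn't seen the loop Gerhard is describing, a minimal sketch - assuming Kotlin with JUnit 4, and an invented PriceFormatter example - looks something like this:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// RED: write the test first and watch it fail before PriceFormatter even exists.
class PriceFormatterTest {
    @Test
    fun formatsPenceAsPounds() {
        assertEquals("£12.50", PriceFormatter.format(pence = 1250))
    }
}

// GREEN: the simplest code that makes the test pass.
object PriceFormatter {
    fun format(pence: Int): String = "£%d.%02d".format(pence / 100, pence % 100)
}

// REFACTOR: with the test green, improve the design in small steps - the test keeps you honest.
```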
78
+
79
+ Okay, that was a very nice story. Thank you for that.
80
+
81
+ **Dave Farley:** It's a pleasure.
82
+
83
+ **Break:** \[23:17\]
84
+
85
+ **Gerhard Lazu:** Now as we come back, I would like to dig a little bit into the technology that you used back in the day, so the specifics around the CI system, the CD system, the programming language, the frameworks, how that used to work, if you had any project tracking tools, or how you would organize work in the days... And I would like to dig a little bit deeper into those specifics - time it took, which cloud provider you used (if any), where you would run these things, and how that changed over time.
86
+
87
+ **Dave Farley:** Yeah.
88
+
89
+ **Gerhard Lazu:** So you're telling us some very good stories, Dave, about your time at LMAX that you were very fond of, some great people that you've worked with, that you've been in contact ever since... And you were part of that family, in a way, right? Your work family maybe, your continuous delivery family, whatever you wanna call it...
90
+
91
+ I would like to dig a little bit deeper into the specific stack/technology that you used at the time, and also how that changed over time.
92
+
93
+ In the early 2000's, what did a technology stack look like? The programming language, the framework, the CI/CD, how did you organize work... That type of thing.
94
+
95
+ **Dave Farley:** I'm more than happy to talk about tools and technology. I should begin though by caveating it, because I'm not very technology-driven, which is weird for a technologist to say... But I think that the tools are secondary, I think that I value design and design thinking more than I value the tools, and I think you apply that in different tools.
96
+
97
+ Having said that -- so my background was largely in the C family of languages. I did a lot of programming in C in the early days, C++ later on... During the early 2000's, building point-of-sale systems and stuff like that, I was doing a lot of work in Java, sometimes C\#, bits of other things... Python... I played with Ruby slightly... But mostly, the early days of continuous delivery were mostly on Java projects. One C\# project that I can think of... So we were mostly using the technologies around at the time. In the early 2000's there weren't many tools, certainly no continuous delivery style tools. In the early 2000's we'd just started doing continuous integration, really... Or at least it had become popular.
98
+
99
+ So people have been doing continuous integration for a long time, and I was doing some version of that in the early '90s, but we had continuous build... But it was all just done in shell scripts. Tools to manage a build process and those sorts of things didn't come along until about 2000, and the first one was I think CruiseControl, which is an open source project from ThoughtWorks.
100
+
101
+ When we built the exchange LMAX, we started off using Java and CruiseControl, the starting point for building our deployment pipeline. And we built a very sophisticated deployment pipeline with different instances of that, using mostly things like Ant files as the glue between stages, and those sorts of things, to encode more complicated bits of glue between the different pieces.
102
+
103
+ We did a lot of development of sometimes reasonably sophisticated tooling of our own. We built our own deployment mechanism, which was similar in some ways to something like Chef or Puppet. It ran a little agent on the server, and the server called back to some master repository, and it pulled down changes and deployed them for us...
104
+
105
+ We were doing early things with infrastructure as code. I remember we wanted to be able to version control the configuration of network switches, and the only way that you could configure the network switches was through a firmware admin console, web-based firmware, like you get in a home router, or something like that... So we wrote a little domain-specific language that we could program the configuration of this thing in, which then was backended by -- I think it was Selenium, or something similar.
106
+
107
+ **Gerhard Lazu:** \[28:11\] Yeah, that's right.
108
+
109
+ **Dave Farley:** It would then drive the web app to poke the values into the router. So we were doing a lot of messing around with those sorts of things; very ad-hoc, very -- incrementing those as we needed... And as I said, we did some fairly cool, fairly sophisticated things.
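As a rough illustration of that approach - not the actual LMAX tooling - version-controlled settings driven into a web admin console via Selenium might look something like this in Kotlin (the URL, element names and the settings file format are all invented for the example):

```kotlin
import org.openqa.selenium.By
import org.openqa.selenium.firefox.FirefoxDriver
import java.io.File

// One versioned "field = value" line per switch setting, kept in source control.
data class SwitchSetting(val field: String, val value: String)

fun loadSettings(path: String): List<SwitchSetting> =
    File(path).readLines()
        .map { it.trim() }
        .filter { it.isNotEmpty() && !it.startsWith("#") }
        .map { line ->
            val (field, value) = line.split("=", limit = 2).map(String::trim)
            SwitchSetting(field, value)
        }

fun main() {
    val driver = FirefoxDriver()
    try {
        // Hypothetical admin console URL and form field names.
        driver.get("http://switch-01.example.internal/admin")
        for (setting in loadSettings("config/switch-01.cfg")) {
            val input = driver.findElement(By.name(setting.field))
            input.clear()
            input.sendKeys(setting.value)
        }
        driver.findElement(By.id("apply")).click()
    } finally {
        driver.quit()
    }
}
```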
110
+
111
+ One of the things that one of my colleagues, Mark Pryce, wrote at LMAX was still the best version I've seen ever of a test distributor. So we were managing a fairly large set of infrastructure to be able to get our feedback fast enough. We had one big repo; we'd put everything in one big repo, and then we could build and test and deploy everything together. And we could be more certain then in our changes; it meant that we didn't have to worry too much about how loosely coupled. We wanted good design, to make sure our designs were loosely coupled, but we didn't have to have them independently deployable, the pieces, so we could test them together first.
112
+
113
+ So we did that, and that was efficient. We ended up with a network - when I left, it was about 48 different server instances that ran our continuous delivery infrastructure, and a dynamically managed compute grid to evaluate these things; Mark Pryce wrote the software, called Romero, to manage all of these different instances. So we did a lot of various tooling of that kind.
114
+
115
+ In that project in particular we weren't very big consumers of other people's software, to some degree. We wrote a lot of stuff of our own, partly because of the performance demands, most of the third-party software that we had wasn't fast enough for what we were trying to do with our exchange, so we wrote our own collections, for example...
116
+
117
+ **Gerhard Lazu:** Okay.
118
+
119
+ **Dave Farley:** ...a HashMap in Java, for each entry, at the time, created five objects. So you've got five garbage collection problems for every item that you added to a HashMap in those days. So we wrote constant memory footprint HashMaps and stuff like this so we could go fast. So we did a lot of stuff at different levels of abstraction, from very low-level technical detail stuff, to bigger picture things.
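The trick is roughly this - not the real LMAX collections, just a toy Kotlin sketch of an open-addressing map that keeps primitive longs in flat arrays, so a put() allocates no per-entry objects at all:

```kotlin
// Toy illustration only: fixed capacity, no resizing, and a key equal to
// `missingValue` can't be stored - real implementations handle all of that.
class LongLongMap(requestedCapacity: Int = 1024, private val missingValue: Long = -1L) {
    private val capacity = Integer.highestOneBit(maxOf(requestedCapacity - 1, 1)) shl 1
    private val mask = capacity - 1
    private val keys = LongArray(capacity) { missingValue }
    private val values = LongArray(capacity)

    fun put(key: Long, value: Long) {
        var index = indexFor(key)
        // Linear probing: walk forward until we find this key or an empty slot.
        while (keys[index] != missingValue && keys[index] != key) {
            index = (index + 1) and mask
        }
        keys[index] = key
        values[index] = value
    }

    fun get(key: Long): Long {
        var index = indexFor(key)
        while (keys[index] != missingValue) {
            if (keys[index] == key) return values[index]
            index = (index + 1) and mask
        }
        return missingValue
    }

    private fun indexFor(key: Long): Int = (key.hashCode() and Int.MAX_VALUE) and mask
}

fun main() {
    val map = LongLongMap()
    map.put(42L, 7L)
    println(map.get(42L)) // prints 7, with no per-entry garbage created
}
```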
120
+
121
+ **Gerhard Lazu:** That's really fascinating to capture the context in which these ideas came to be... Because while different people may have had similar thoughts, first of all, ThoughtWorks was a consultancy at the time, I imagine...
122
+
123
+ **Dave Farley:** Yeah.
124
+
125
+ **Gerhard Lazu:** ...so that's how it started; that's what I know about ThoughtWorks. So not only did you have to come up with these ideas, but you also had to build the tools, which didn't exist... Open source I think was only just getting started... This was like the early 2000's, so it wasn't really a thing. Git was only just starting around that time... GitHub didn't exist, by the way, and we know what an important place that is for open source. Twitter didn't exist, Facebook didn't exist... A lot of the platforms that we have today didn't exist. And the CNCF, the Cloud Native Computing Foundation, didn't exist either. So you were like -- I wouldn't say in a void, because that sounds negative, but you were in a big sea, with no islands, with no towns, no harbors in sight, and you had to figure these things out. And that was really challenging. Not because it was "not invented here." It was not that syndrome. It just didn't exist. And the communication was very different at the time as well.
126
+
127
+ So that must have been very challenging. And even so, you built those things, you shipped those ideas, you shipped the code, and many people are benefitting in so many ways, decades after you started. 21 years later we're talking about how this started and how relevant it is, and it feels to me like the whole world, in one way or the other, the whole software world is revolving around the principles that you set down then.
128
+
129
+ **Dave Farley:** \[31:56\] There's a bit more history than that. So there were people doing good stuff before us, and open source had been around as an idea. It wasn't as big as it is now, there wasn't as much choice as there is now, but you know, Linux was around... Linus Torvalds had released that as an open source project considerably before then, and so on. A lot of the ideas that we were building on.
130
+
131
+ ThoughtWorks was an interesting place. There was a brief spell -- I feel privileged to have worked for ThoughtWorks at a time that was very exciting. And ThoughtWorks in London in particular I think was not quite on the same scale perhaps, but it was almost like an agile Xerox PARC; it was a place where some interesting fundamental ideas were introduced.
132
+
133
+ As I said earlier, Agile at a big scale, Agile in a more commercial setting - we were doing it because we thought we could make software more quickly and better quality software using these techniques, so it would have a commercial advantage implied. These were the reasons why we were doing some of these things.
134
+
135
+ BDD, continuous delivery, mocking - these are ideas that came out of that office in ThoughtWorks. Several books that are famous - Growing Object-Oriented Software, Guided by Tests, my Continuous Delivery book... There are people that are now well-known in the industry who were all part of that group of people working there at the time. So that was an exciting place to be. We were consciously experimenting and playing with new ideas, and trying to find better ways of doing things.
136
+
137
+ The software industry had been through what I think of as a fairly rough time of trying to industrialize it through the late '80s and '90s, applying techniques that people thought would work to make it more effective and productive, and they didn't. And the Agile movement was a bit of a reaction against that, I think.
138
+
139
+ What I'm trying to say is that we were building on the shoulders of giants. People that did stuff before. Continuous delivery I think of as second-generation extreme programming. It's extreme programming, but just with some other ideas added to it, that help you get there a bit more easily maybe, in some ways. But if you're doing extreme programming, you're not doing it wrong. That's a pretty good starting point.
140
+
141
+ **Gerhard Lazu:** I see a lot of the new systems, for example Argo CD - that's something which fascinates me right now, how it takes the concept of a pipeline to like a new level, and you have workflows, you have a programmable API, which is the Kubernetes, and this control plane where you define these custom resources, and then things happen, all these relationships emerge between them... Event sourcing is another big thing, which maybe is not as popular these days; I don't know. I'm not too into it, but I keep hearing it; it keeps coming up. But there have been so many CI/CD systems that appeared in the last five years, which seem to have exploded recently. There was Drone CI, there was CircleCI, there was GitHub Actions, which wasn't a CI to begin with, but it became one over time... And all these other systems. I mean, Jenkins - I think that came after CruiseControl, I remember...
142
+
143
+ **Dave Farley:** We switched to Jenkins at LMAX; we refactored our pipeline to use Jenkins later on.
144
+
145
+ **Gerhard Lazu:** Interesting. So there was this transition. And now we're in the era -- I think it's almost like the third one, which is a cloud-native one, where we have so many projects. I'm not sure whether you looked recently at the CNCF landscape, where you have all those projects. There are so many things there, and you can't even keep up with all of the updates, that's how many there are, never mind try them. It's impossible. There's not enough days in the week. Or hours in the day. You know what I mean.
146
+
147
+ **Dave Farley:** Both.
148
+
149
+ **Gerhard Lazu:** So I'm wondering, how did the cloud-native landscape shape your ideas of continuous delivery? Was there an impact of that, or was that happening in parallel? What influence, if any, do you feel coming from there?
150
+
151
+ **Dave Farley:** \[35:45\] I think that the gestation of the cloud was kind of in parallel with the starting points of continuous delivery. We published our book in 2010, and that kind of put a name on these practices, I suppose. And people these days talk about continuous delivery and continuous integration and all these sorts of ideas... I think we helped to popularize that through our book. But we'd been doing it for several years before that. And certainly in the early days, the cloud wasn't around, so all of the projects that continuous delivery began with weren't cloud projects.
152
+
153
+ I think that the cloud makes some of the continuous delivery thinking more obvious. I mean, you'd be absolutely insane to be an organization like Amazon or Google and to manually configure your servers. It's such a bizarre idea; it's just laughable. You couldn't do that. You're gonna automate that, unless you are crazy.
154
+
155
+ So ideas like infrastructure as code just seem obvious in the cloud. And they didn't always seem obvious to other people. People thought we were strange when we started automating those things, and making our servers in our data centers more repeatable and reliable. There's that kind of stuff.
156
+
157
+ If I'm honest, I don't think that cloud had a huge impact on the kinds of projects that I was working on during that time. Certainly, up until after the continuous delivery book came out. Of course, like everybody else, it has an impact on me now in the way that I think about these things and the way that I advise my clients how to approach solving problems.
158
+
159
+ I am, I suppose, an old school developer. My formative years in software development probably predated the cloud... The cloud is obviously a good thing; it gives us lots of opportunities. It commoditizes compute, but to my mind, it's not a fundamentally defining thing. It certainly changes some of the dimensions of design that I care about, the way that I would think about design...
160
+
161
+ One of the things that seems to me to change is the economics of the design from an architectural point of view. In the old days, when we were doing stuff in our own data centers, we were probably more worried about managing storage, because storage was expensive. And that's largely become commoditized. And the price per byte of storage has kind of just dropped through the floor with Moore's Law and the introduction of cloud-based services. The big difference with the cloud now is that the unit of cost is really around compute. That's certainly if you start thinking about serverless things. So that ought to change the way in which we apply our design thinking. For example, why bother normalizing data anymore? Why not just shard it out and make everything separate to optimize the compute cycle? You could do that. We could have processes sucking in data and just allocating them out in a way so that they're all more parallelizable... And I think that's kind of interesting, that it has those sorts of things. The easy access to be able to spin up some compute resource or storage resource or whatever else is fantastic... And the ever-raising of the bar of abstraction that the cloud services add is kind of interesting. And I think there's much more to come.
162
+
163
+ One of the things... We were talking about the continuous integration and continuous delivery tooling - I don't think we're there yet. I think there's more to do. I would like to see tooling that's more opinionated. I would like to see tooling that just -- bang, gave me a deployment pipeline. If I follow the rules, it's just going to run my unit tests, run my acceptance tests for me and deploy into production. That's doable, it seems to me. We could do that. I'm hoping to see the continuous delivery cloud vendors do more of that kind of thing - be more opinionated.
164
+
165
+ \[39:45\] One of my favorite technologies - we were talking about tech earlier on - is Gradle for build systems in the Java space. And one of the things that I always loved about Gradle is that if you don't care, if you just are willing to buy into its model, you can write your build script in one line. You can just say "I'm doing Java" and it'll do it for you. It'll compile the Java, it'll run the tests, it'll do all of those things for you. But if you want to override almost any behavior, you can do that, too. It's a whole programming language built on top of this well-designed domain model for builds, is what Gradle really is.
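For anyone who hasn't seen it, that "one line" is barely an exaggeration - a minimal build.gradle.kts using Gradle's Kotlin DSL looks roughly like this (the repository and dependency blocks are only there so the conventional test task has something to run; versions are illustrative):

```kotlin
// build.gradle.kts - buy into the conventions and the `java` plugin gives you
// compile, test and jar tasks against the standard src/main/java and src/test/java layout.
plugins {
    java
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation("junit:junit:4.13.2")
}
```

`./gradlew build` then compiles the code, runs the tests and produces the jar; anything beyond that is overriding defaults.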
166
+
167
+ I like opinionated software. I like opinionated software that says "Do it like this", and if you don't like it like that, it'll get out of your way really quickly. I'd like to see more tools like that, because I think there's a tragedy of the commons kind of thing going on a little bit. If everybody has a choice, everybody's rediscovering everything from scratch, and I think we ought to be able to build a little bit more on shared things, and the cloud is one of those things that is doing that in some contexts. It's not doing it enough to my taste for build systems.
168
+
169
+ So I am currently in the middle of a piece of work to demonstrate how to build deployment pipelines, building a little sample application using GitHub Actions. It's alright. It's nice. I quite like GitHub Actions, it's okay. But it's too fiddly. I'm fighting with it, trying to get my Docker images to communicate with each other so I can run my acceptance tests, and this sort of thing. I'd like something that just worked. I'm not really interested in that part. I'd like something that just works if I wanna do something simple.
170
+
171
+ **Gerhard Lazu:** That's really interesting, because I have seen -- so these trends have been emerging in... I think build packs came closest to that, where it would automatically detect your application and it would know what to do; what is the build step, what is the run step, what is the package step... So it had this stuff built in. I think Heroku were the ones that made it popular, Cloud Foundry - that was the enterprise version of that... I know that there's other newcomers. Render is one of them, Fly I believe tries to do something similar... The point being there's some good concepts, but I don't think they're standardized. The one concept that was able to be standardized was the Kubernetes API, and it's amazing because it was unified API, and you can have almost anything via the same API. Do you want, for example, a VM? Well, you're programming in the same API and it spins up a VM. Do you want a SQL instance somewhere in some cloud? It's the same API. You can start doing arbitrage, you can start doing some really clever things.
172
+
173
+ There's even like a control plane which controls all the Kubernetes deployments, all that DNS, your CDN, all the things. So I think that is fascinating. Could we have something similar for CI/CD, this concept of a pipeline? I think we should. But I don't think there's any one clear winner as to how to approach it. It's just YAML, really. And that's okay... I mean, Tekton CD - that's trying to do something; it's using the same Kubernetes API to declare your pipelines and your inputs and your outputs and how that works... But you're right, it's still fiddly. It's almost like we need the next building block. And I think this is something that happened with the cloud - you had this fixed compute before, which was your cap ex; you'd buy the hardware, you'd invest and that's it... You know, you'd spend the money, so you have to use it... But it was very difficult to increase, or very slow to increase. Then the cloud came and you could have almost like infinite capacity. Do you want 1,000 CPUs? Two minutes later you have them. Do you want petabytes of SSD storage? You have it. And then storage wasn't an issue anymore. But it was difficult to scale that down to zero, and that's where serverless came.
174
+
175
+ Serverless - you can have an infinite capacity that you run for milliseconds, and then it spins down again. And that's a very interesting take. But you're right, how do you declare the pipeline or the thing that kind of controls all of these things that need to happen? Because shipping the code out there is just part of it - you have tests, as you mentioned, you have the build, you have the dependencies you need to resolve... And that pipeline would be really big. So if you had to imagine it, it'd be massive. How could you declare it? And I think there's a lot of variance in pipelines.
176
+
177
+ **Dave Farley:** \[44:11\] There is. But if I'm honest, part of why I'm complaining and being a grumpy old man about this is I'd like people to take my opinion. \[laughs\]
178
+
179
+ **Gerhard Lazu:** Right, okay. So does everyone else.
180
+
181
+ **Dave Farley:** Yeah, of course. \[laughs\] But one of the things that I wish people had picked up from the continuous delivery book that they didn't was that the book outlines a pattern for what a deployment pipeline is. Jez and I each wrote equal amounts of that book. I started off writing the beginnings of the pipeline bits. He contributed on top of it. But when I wrote the pipeline bits, what I meant is that I think this is the starting point for a pipeline. So yeah, absolutely you vary from there, but I think it's a bit like patterns. I think that if you wanna write software, what do you want? Well, you want fast feedback during the development phase to confirm that the code that you're writing is the code that you think it is. And then you want to be able to check that that code works as a system, is deployable, is configured correctly, it delivers value to customers. So there's a separate, different focus of testing that you need to establish that. And then there might be other things that are optional; maybe performance testing, security, whatever else.
182
+
183
+ So you could imagine -- so my minimum deployment pipeline is pretty fixed. You start off with a commit stage that gives you fast feedback: it'll check your coding standards, run all of your unit tests... If it succeeds, it builds a release candidate. That release candidate is a deployable thing. You deploy it into an acceptance test environment, you run a bunch of BDD-style acceptance tests against it to check that it's deployed correctly, it works correctly, it does what users want it to do... And then you can deploy it into production, because it's deployable.
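As a sketch of that shape - not a real tool, and every command, registry name and script here is an assumption for illustration - the whole thing is small enough to express in a page of Kotlin:

```kotlin
// Minimal "commit stage -> acceptance stage -> releasable artifact" skeleton.
data class ReleaseCandidate(val version: String, val image: String)

fun run(command: String) {
    val exit = ProcessBuilder("sh", "-c", command).inheritIO().start().waitFor()
    require(exit == 0) { "Pipeline stopped: `$command` failed" }
}

// Fast feedback: static checks and unit tests, then build the deployable artifact once.
fun commitStage(version: String): ReleaseCandidate {
    run("./gradlew check")
    run("docker build -t registry.example.com/app:$version .")
    run("docker push registry.example.com/app:$version")
    return ReleaseCandidate(version, "registry.example.com/app:$version")
}

// Deploy the exact artifact we would release, then run BDD-style acceptance tests against it.
fun acceptanceStage(candidate: ReleaseCandidate) {
    run("./deploy.sh acceptance ${candidate.image}")
    run("./gradlew acceptanceTest -Ptarget=acceptance")
}

fun main() {
    val candidate = commitStage(System.getenv("GIT_SHA") ?: "dev")
    acceptanceStage(candidate)
    // If we get here the candidate is releasable; whether to push it to
    // production right now is a business decision, not a technical one.
    println("Releasable: ${candidate.image}")
}
```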
184
+
185
+ That's my minimum deployment pipeline. I would pay my own money to be able to have that at the push of a button for a Java project or a Python project, or a C\# project, or whatever it was I was doing... And I can see no reason -- in fact, I have built that internally in organizations in the past. I must confess, when I started my own business and working for myself, my ambition was to earn enough money to pay me to have enough time to write some code so I could open source this model like this. And I earned enough money, but didn't have enough time. \[laughs\]
186
+
187
+ **Gerhard Lazu:** I think that worked out really well, because what you did manage to do was spend a bit more time on those videos that I've alluded to in the past... And that is actually how I came across, like "Oh, Dave Farley? I've heard that name. Continuous Delivery? Okay, I haven't read it, but I've heard of that book, and now I know I have to read it." Based on what you've said, there's some very important information there, which I need to get... So that's the first step.
188
+
189
+ The second step is you were able to capture some concepts in very simple terms, in very good terms, and these concepts stood the test of time. So that's out there, it's super-valuable, and it will continue being valuable for many years to come, I'm sure of it.
190
+
191
+ If someone's listening to this and wants to do this, that would be really interesting... Like, what would a GitHub Actions pipeline look like, for example, that resembles Dave's ideal pipeline? I think in your videos you even have -- like, that graphic keeps coming up. Do you have a specific course or book that talks more about that pipeline?
192
+
193
+ **Dave Farley:** I do. So the Continuous Delivery book that we've been discussing talks in broad principles about continuous delivery, and the deployment pipeline is kind of the core of the book, but it's not all the book is about. I have another book that was released this year on Leanpub, which is more of a focused manual on how to create deployment pipelines, and this pattern that I'm describing really, I suppose. So what are the key stages as I see them, and what should those be doing.
194
+
195
+ \[48:06\] It's a pattern, so you would expect it to evolve over time and to morph into different shapes, so you run parts of it in parallel, and so on. But it seems to me there are some fundamental things: you want fast feedback on the technical quality of your work as a development team, and then you need confidence to know that your software is releasable. And the latter involves tests that are more expensive to run.
196
+
197
+ So you need to think about this as kind of a machine, a parallel computing algorithm, if you like, so that you can kind of trade-off - getting the fast feedback, and then moving ahead in confidence that you're likely to get good feedback from later stages. So if thinking in those sorts of terms helps you model it... I think that's a good pattern, and I would make that the starting point.
198
+
199
+ **Gerhard Lazu:** It's almost like a template for pipelines.
200
+
201
+ **Dave Farley:** Yes.
202
+
203
+ **Gerhard Lazu:** If there was a template that captured these important elements that need to be present, and then from that, it's almost like an RFC, and then from that you have a specific implementation which is for maybe a specific CI, and then you have variations of that implementation in whichever CI it is.
204
+
205
+ **Dave Farley:** Yes. And there are key stages that I would expect a system of any complexity would probably want to parallelize and grow, to get fast feedback... What people took from the idea of continuous delivery was -- when people think about deployment pipelines, I think what most people probably interpret that to mean is basically a build script that can deploy stuff at the end. And it's much more than that. There's more to the model than that, in my mind. This is one of the things that I can be definite about; I'm usually not definite about things, but in this case I can be, because I invented the term deployment pipeline, so I can be definitive about what I meant when I said it. A deployment pipeline goes from commit to releasable outcome. That's its job.
206
+
207
+ If at the end of a deployment pipeline you've got more work to do, it's not finished. It's not a deployment pipeline. The objective of a deployment pipeline is to go from commit to a releasable outcome. It doesn't mean you have to necessarily push the thing into production. That depends on the business, whether that makes sense or not. It makes sense to go frequently, but you don't have to... But working so your software is always in a releasable state is the way that I tend to describe continuous delivery; the deployment pipeline is the thing that determines releasability.
208
+
209
+ So that's what we're trying to get to... And if you're thinking about that - so what does that then take? Well, it depends on your system. But at a minimum, you want to know that the software works, it does what you think it does as a developer, and is deployable and does what the users want. As an absolute minimum, you must want to answer those questions in some way before you deploy it to production. So that's my minimum starting point. That's where I would start my template from. And then you could optionally plug in bits that would do performance testing, host where you put your performance testing.
210
+
211
+ So part of this is really to address the problem of how do you get organizations to start buying into this, and what you wanna do is that you wanna make it really easy to do the right things, or what you think are the right things, and possible to do other things. That's my approach. So I would like it to be absolutely trivially simple that if you are willing to accept some small constraints on the way in which you organize your code, then I will be able to build it for you, deploy it for you, run all of the unit tests, because you've put them in the right place. And then once I've deployed it, I'll be able to run all of the acceptance tests, because you've put them in a different place. And if all of those tests pass, I'm gonna give you something that you could deploy into production. And that's pretty trivial and doesn't really constrain you very much. All you're asking is "Tell us what you wanna build." You know, plug your build script in here, that's fine, build the pieces that you want, but tell us where the tests are, and we'll report them to you.
212
+
213
+ And now, as a developer, I'm gonna press the button, get my template out, start building my project against that template, and my deployment pipeline just starts functioning. Instead, at the moment, with all of the technologies that I've tried so far, I pretty much have to go through that exercise every time to set something up.
214
+
215
+ **Break:** \[52:35\]
216
+
217
+ **Gerhard Lazu:** Okay, let's do this... My assumption is that the pipeline that we use to push Changelog.com updates out is wrong. So my assumption is wrong. This is what the pipeline is, and I would like you to tell me what your thoughts are on the pipeline.
218
+
219
+ **Dave Farley:** Okay.
220
+
221
+ **Gerhard Lazu:** The pipeline, whenever there's a commit to the GitHub repository, it pulls down the code, it runs a build... By the way, it's an Erlang-based project, Elixir-based project, so it has to fetch some dependencies, compile the code... So that's the first step. And then it fans out into two other steps. One of the steps is to compile all the assets, and these are the static files - the CSS, the JavaScript, all those things - and the other one is to run the tests. The tests are mostly unit tests, but also some integration tests, because it does use a database... So it's like some hundreds of tests. It doesn't take too long to run, but a few minutes later it gets to the last stage. If both stages passed, it fans in; so there's a small fan out and fan in... And the last one is to build the artifact that is deployable. And in this case, it's a container image.
222
+
223
+ All those things put together, it takes maybe up to ten minutes to run. It can be a bit slow, but we won't get into that. The point is that the pipeline ends at publishing this artifact to a repository. And this is like an artifact repository.
224
+
225
+ In production, we receive notifications, "Oh, there's a new artifact." And the production system - in this case it's Kubernetes - knows how to pull the latest version down, how to do blue-green deploy, and there are checks which make sure that the new version actually works. It connects to the database, the health checks pass - all those things pass. Then it gets automatically promoted to be the new version.
226
+
227
+ All this happens within 15 minutes. A lot of it is just like slow workers... Anyways, it's a lot of free infrastructure there, especially on the build side. But within 10-15 minutes every commit goes out into production. What are your thoughts about this pipeline? I'm assuming it's the wrong one based on your description...
228
+
229
+ **Dave Farley:** \[laughs\] I think all of the things that you've said - I can't be too critical of it, because it's working. So I'm nothing if not a pragmatist. Let me critique it, nevertheless.
230
+
231
+ **Gerhard Lazu:** \[56:10\] Yes, please.
232
+
233
+ **Dave Farley:** So I think what you've said - I think the implications of what you've said is as a developer I don't know that I'm ready to move on to something new until after about ten minutes, when you've run all of your tests. That seems a little bit slow to me. I suppose it depends how big or complicated the code is. And part of the reason why it's slow is that you're conflating different kinds of tests. So the reason why a deployment pipeline is called a deployment pipeline is weird, and it's all my fault. I'm a software/computer nerd, and what this reminded me of when I came up with the idea was instruction pipelining in Pentium processors.
234
+
235
+ **Gerhard Lazu:** Okay.
236
+
237
+ **Dave Farley:** So when I say a pipeline, I don't mean a straight line. What I mean is an instruction pipeline. And an instruction pipeline in a Pentium processor was a branch prediction algorithm. So at a point in which you come to a branch in the code, a Pentium processor will start three threads of executions, three processes internally. It will start evaluating the statement and the condition that you're interested in, and it will also in parallel start executing what happens if that condition is true, and what happens if that condition is false. And then once it's finished evaluating the condition, it will discard the computation that wasn't useful. So it's made progress. It's made progress in parallel with carrying out the condition.
238
+
239
+ At the point at which a developer commits a change, my recommendation for a way of working is that you sit and you wait for the results of the tests. And at that point, what I'm looking for is a high level of confidence that if all of those tests pass in the commit stage, then everything else is gonna be fine. If my tests pass, I'm gonna move and I'm gonna start working on something new, with about 80% confidence that 80% of the time all the rest of the tests are going to be OK now. Now I can afford to run more slower, more complicated tests, because I'm making progress in parallel with executing those tests. And 80% of the time (or better) all of those tests are gonna pass, because I've got high confidence, because I'm doing the fail fast thing of testing things.
240
+
241
+ So I'm gonna run very fast, very efficient tests in the first stage, in the commit stage. It's gonna be focused on a really technical evaluation of what we're doing. Then I'm looking for the deployability of that. One of the things that you said was that the first time that you actually deploy the software is in production.
242
+
243
+ **Gerhard Lazu:** Yes. It goes straight into production, yes.
244
+
245
+ **Dave Farley:** Yeah. So how often is that a problem? Does it ever cause a problem, or does it always work?
246
+
247
+ **Gerhard Lazu:** It always works.
248
+
249
+ **Dave Farley:** Then I can't critique it. For other kinds of software though, I wanna test the deployment of the system. I want to test that that works... Because it changes over time. If you introduce a new service, or something that's different, then you're gonna be evolving that over time, so I'd like to be able to evaluate those kinds of things, too. I'd like to be able to test the configuration of the system. How does it work if it's in a thread pool, or whatever else it might be; I wanna test all those sorts of things, too. So I guess partly it depends on the consequence of things going wrong, how far you take that.
250
+
251
+ These days I make a living as a consultant, advising usually large companies on how to improve their software engineering practices... And one of the companies that I worked with was Siemens Healthcare. They're building machines that can kill you if they get the wrong -- you know, these medical devices in hospitals. There are chances that you don't wanna take with that kind of software, so you wanna be more thorough in your approach to evaluating those kinds of systems than what I would for other kinds of systems. So it probably does vary.
252
+
253
+ \[01:00:07.16\] So I must say, I can't really critique your project very well, because it sounds very good. It's light years ahead of probably whatever average means in our industry.
254
+
255
+ **Gerhard Lazu:** This was really good, because even talking to you about how it works, I realized why certain layers are so slow. Why this takes 15 minutes. And it's not the tests. The tests run in maybe 15 seconds. The tests are really fast. But it's all the caches, of dependencies, of pulling things down, of running updates, of compiling things.
256
+
257
+ When we run this in the CI, there's a queue. So your jobs may be queued for maybe 30 seconds or a minute. And you have multiple jobs, you have containers, you have to pull down images that may or may not be on the node or on the host where they run. And all those things, like the cache misses, can mean 30-45 seconds... Which in the big scheme of things it's not a lot, but they add up, because you have so many layers.
258
+
259
+ What is the impact of something not working? Well, when we deploy into production, the reason why it's slightly slower is because the first thing that we do is we back up the database before we run the migration, so there's a full database backup every single time a new version starts. We back up all the assets to S3, so if we lose everything, that's okay, we can restore the whole thing in 30 minutes. And in front of the app - it's a monolith, by the way, and we didn't have time to discuss microservices and monoliths (another time, I'm sure), is that we have --
260
+
261
+ **Dave Farley:** I have a good video on that topic.
262
+
263
+ **Gerhard Lazu:** I know you do. We will definitely discuss that next. And in front of the website there's a CDN which serves all the content cached. So if the origin is down, if the app is down, that's okay. Everything is cached worldwide. So we serve the cached content. So the impact on end users is none. They see the old content, but it doesn't go down. So the uptime is always 100%, because it's never down. It's distributed across the whole world, again. And all that -- there's like a complexity in the system which makes certain things slow.
264
+
265
+ But anyways, I would love to talk more about this, but we're running out of time, and this just shows how much we have to talk about... I would really like to talk about your YouTube channel next. So what made you start your YouTube channel? By the way, for those that don't know about this amazing -- it's my favorite YouTube channel right now; it's called Continuous Delivery. It's Dave's new YouTube channel. And week on week, every Wednesday, he publishes a new video. They're some of the best tech videos that I've seen. They're short, 17-18 minutes, but there's so much information there. I highly recommend you check it out. So Dave, what made you start this YouTube channel?
266
+
267
+ **Dave Farley:** The simple answer is it was Coronavirus.
268
+
269
+ **Gerhard Lazu:** Finally, there's a positive... \[laughter\]
270
+
271
+ **Dave Farley:** So it's something that I kind of had in the back of my mind for a long time. I am approaching the end of my career. I've done a lot of interesting things... I am opinionated, as you can probably tell from my conversation, about software... And I think that the teams that I've worked on, I've found some things that are worth spreading and worth hearing, at least. You can dismiss them, you can disagree with them; that's absolutely fine. But I think -- when I'm being grandiose, which I sometimes am... When I'm being grandiose, I think that we are on the verge of discovering what engineering for software might really mean. That is in the same sense as Elon Musk blowing up Starships in Texas. It's experimental. It's about learning and discovery and trying out ideas and focusing on the skills around that kind of thing.
272
+
273
+ \[01:03:59.00\] I think that if people just did that, then they would find a dramatic, experience-changing improvement in their experience of building and delivering software. You genuinely can build better software, faster, doing these techniques.
274
+
275
+ Sometimes I err on the side of being too prescriptive about some of these things, possibly, but I wanted to start talking about those things. And I've been talking at conferences for some years, working as a consultant for some years, helping people to do this kind of thing. And I had in the back of my mind it'd be nice to play with a YouTube channel one day. The Coronavirus happened, we were in lockdown, and at the time I was travelling around the world, constantly, as a consultant. And that kind of fell of a cliff. I was at home and I thought "Wow, what am I gonna do now?" So instead of writing software - I should have built the system that we've just described. Instead of doing that, what I did was I started a YouTube channel, and that's been a fascinating, engaging, delightful experience on the whole.
276
+
277
+ \[01:05:30.00\] I think that if people just did that, then they would find a dramatic, experience-changing improvement in their experience of building and delivering software. You genuinely can build better software, faster, doing these techniques.
278
+
279
+ We have released a video every week at 7 PM on Wednesday since the start of the pandemic, and we haven't missed one yet.
280
+
281
+ **Gerhard Lazu:** Now, I have to thank you again... I find myself thanking you so much, because those videos - they are like a breath of fresh air. There's so many videos obviously on YouTube; it's massive. For me at least, and for our family, it's like the new TV. We use YouTube way more than anything else. Netflix is there, Apple TV is there, but it's YouTube by far. And I don't know how it happened, how I came across your videos, but they were so refreshing. They were simple, they were to the point... And it's not just me. If you look at the comments, the more positive ones, that's what the majority is saying. The way you capture these principles and the way you convey them is so good, and it's so simple... It's like "Yeah, it makes sense." At the end, after you watch a video, like "Oh, I wanna try this out." It just makes you think.
282
+
283
+ I'm sure that some of the information that you convey - it will not hit home until a few months later, or maybe even a few years later. It's simple, but there's so much there. And my favorite one - you keep mentioning Elon Musk, by the way... If you know him, or if someone that knows him is listening, I really want to interview him, because I think he's the embodiment of Shipping It. He's literally shipping the human race to a whole new level. I'm so fascinated by him. So my favorite video is the "SpaceX and software engineering | How to learn" on your YouTube channel. The link will be in the show notes, by the way.
284
+
285
+ Now, I try to limit myself to three. This was my top. The other one is "How to build quality software fast." That shipped yesterday. I mean, if you're paying attention, there are videos that you're just publishing which are top, so they are getting better, in my mind... "Why CI is BETTER than feature branching" - I would love to talk to you just about this. I'm a big believer in single branch, push straight into main/master (however you wanna call it). I would recommend main, the main branch. If you use Git, power to you; if you use something else, that's okay too. As long as you have single-branch, you continuously integrate, you continuously deliver - that's the place that you wanna be in, because you're trying to learn. And you will be wrong, even when you think you're right. So better think you're wrong, and start thinking that you're wrong, and it will be good. Trust - not me, trust Dave, because that's what he's saying.
286
+
287
+ **Dave Farley:** Don't trust me either. Try it out. \[laughs\]
288
+
289
+ **Gerhard Lazu:** Try it out, yeah. That's the best one. And "What's wrong with the state of DevOps?" That's the one that I want to watch again, because that's another very good video. We don't have time to talk about the specifics, but if anything, I feel like we should have another interview. We're just finishing this one, so I'm not sure how that's going to work, but I would definitely like to get together again - maybe this year; if not, next year is fine as well - to do another check-in, see how it's going.
290
+
291
+ Right now, you had like 53,000 subscribers, or 54,000. That was yesterday, by the way. It changes day to day. So let's see how many subscribers you'll have next time. When I started watching, you had like 5,000, 6,000, and then it just exploded. So yeah, the response has been positive. I hope you're pleased with it, because I'm very pleased with this YouTube channel... And thank the pandemic that it happened, right? \[laughter\] It's the weirdest thing to say, but it's the truth. If it wasn't for it, we would never have this YouTube channel.
292
+
293
+ **Dave Farley:** Yeah. Well, it doesn't compensate for the bad things... But it's been a lot of fun, and I've had a lot of pleasure out of making the videos, but also out of engaging in the comments and talking to people about ideas, which is fantastic. It's all that any of us can do.
294
+
295
+ I'm interested in your selection. They weren't the ones that I expected, to be honest... They're not the most popular ones on the channel, some of the ones that you've mentioned... But they are ones that I like. I was slightly disappointed by the take-up of the SpaceX video, because I thought that was a good video. I liked that one.
296
+
297
+ **Gerhard Lazu:** Based on what I was saying earlier - I was saying that some of the things that you share, I don't think people realize how valuable they are until maybe a few months or even years later... And I think it depends on experience, it depends on what you value. But I see, for example - SpaceX is such an important thing. Tesla is such an important thing. Not the things that they do, it's how they approach it. How they're able to build -- that's what fascinates me, and I know it fascinates you too, because you mention it in the videos. So which are your favorite videos?
298
+
299
+ **Dave Farley:** Oh, last week's, of course. \[laughs\]
300
+
301
+ **Gerhard Lazu:** Last week's, okay. That was a good one. It's "How to build--" No, that was this week's. Which one was last week's?
302
+
303
+ **Dave Farley:** I just mean always last week's. \[laughs\]
304
+
305
+ **Gerhard Lazu:** Oh, I see. Okay.
306
+
307
+ **Dave Farley:** No, there are some that I'm proud of. The early ones - I think there were some good ideas in the early ones, but my editing skills have improved significantly, and my equipment has improved a bit. It's still not very professional, but it's good enough now that it's not gonna make people run away screaming.
308
+
309
+ I liked the SpaceX one. The microservices video in which I talk about the problem with microservices is a good video... I was pleased with last week's video, which was "CI is better than feature branching", which is just talking mess in informational... So I'm trying not to do it in an emotional way. I'm trying to do it just based on information and just thinking about, you know, two pieces of information in two places, that are both being changed (they start off as copies), they will diverge. And the longer the time, the greater the divergence, therefore the more work to put them together again. That is incontroversably true. And so continuous integration, continuous delivery is about trying to minimize that time, trying to shrink that time down so that you're taking less risk with the changes. So there's ideas like that which I enjoy, and I enjoy trying to find a simple way of describing sometimes complex ideas.
310
+
311
+ **Gerhard Lazu:** I think that's a very good thought to end on, because it's a very profound one. I think people need to think about that. The simplicity in complexity, that I think everybody should strive to look for. Martin Thompson - I think he was a bit of an inspiration there as well. So keep improving, be wrong. Start being wrong, and maybe you'll be right. Who knows...? Nobody knows.
312
+
313
+ Check out David's YouTube channel, it's really good. It will be worth your time, trust me. And Dave, it's been a pleasure. Thank you very much for making the time, and I'm looking forward to the next one. Thank you.
314
+
315
+ **Dave Farley:** Great. Thank you very much. It's been fun.
What does good DevOps look like_transcript.txt ADDED
@@ -0,0 +1,257 @@
1
+ **Gerhard Lazu:** So two years ago, in 2019, I gave a talk about making your system observable. And that was at the DevOps Meetup Zurich, and I'll add a link in the show notes. Romano was the organizer, and he put up quite the event, so I would like to thank you for that. It was a great experience.
2
+
3
+ **Romano Roth:** You're welcome.
4
+
5
+ **Gerhard Lazu:** And this year, my intention was to join DevOps Days Zurich, but the timing wasn't right, so I couldn't make it work. But again, Romano was one of the organizers, and I'm wondering, how did the event go?
6
+
7
+ **Romano Roth:** It was absolutely great. So it was in the beginning of September when we had that event; it was also one of the first events which we could do in person, and that was amazing. The only thing -- it was quite frustrating in the beginning to organize all of that, because you need to look at the Covid numbers, you needed to create a concept for Covid, and that was quite stressful. But in the end, we could manage to do the event, we had a Covid security concept, everybody needed to line up, needed to have their certificates... And it also worked with all of the people which were coming from around the world. We had people from the U.S. coming, and also from Israel, and everything worked well. We had 250 people in there, and it was absolutely great.
8
+
9
+ **Gerhard Lazu:** And this was a two-day event, so it wasn't just like one day, which makes things slightly more complicated, right?
10
+
11
+ **Romano Roth:** Yeah.
12
+
13
+ **Gerhard Lazu:** Okay. How many talks did you have?
14
+
15
+ **Romano Roth:** I don't know the correct number, but what we have is always a keynote, then we had a set of talks, I think it was 3-4 talks... And then in the afternoon we had the Ignite talks, so that's the five-minute talks which we have... And then we usually have workshops and open spaces. And that's over the whole two days. So I would say roughly 20 talks altogether...
16
+
17
+ **Gerhard Lazu:** And was it single-track?
18
+
19
+ **Romano Roth:** Always single-track, yeah.
20
+
21
+ **Gerhard Lazu:** Okay. That's nice, because you have to sit there and enjoy; you don't have to change rooms, meeting rooms... Yeah, okay. Which was your favorite talk, do you remember? I'm sure there were many, but any one talk that stood out?
22
+
23
+ **Romano Roth:** Yeah. I liked very much the talk about -- what was it...? "Better. Sooner. Happier", from Jonathan Smart. I liked that quite a lot, because during the talk he asked always a question, and one of the questions was "Are you doing IT transformation? Please, hands up", and everybody was putting their hands up, and he said... "Don't." \[laughs\] And that was absolutely amazing. And he continued with these questions, for example "Are you using a scaled Agile framework? Don't." And so on. That was quite good, because he was going back to what really matters when you are doing an Agile transformation. That was cool stuff.
24
+
25
+ **Gerhard Lazu:** I want to ask you what that is. If you don't want to spoil that talk for you, you can skip maybe a few minutes... So what is it?
26
+
27
+ **Romano Roth:** \[laughs\]
28
+
29
+ **Gerhard Lazu:** Can you tell us?
30
+
31
+ **Romano Roth:** Yeah, sure. The thing is, you really need to focus on the people. You don't need to focus only on doing the transformation because you want to do a transformation. It is more focusing on what do you really want to achieve, and focus on changing these things. That's also why he said "Don't use a scaled Agile framework", because there you focus on the process, and on changing terminology. It is more shifting to "What do you really want to achieve?" Identify really what you want, and then changing these things. Not having that huge IT transformation that many people are doing. So really focusing on what really matters for you.
32
+
33
+ **Gerhard Lazu:** I seem to remember -- well, no, I remember that in the Agile Manifesto, as it was initially captured, one of the core principles were people over processes.
34
+
35
+ **Romano Roth:** Exactly.
36
+
37
+ **Gerhard Lazu:** So that's what this is. Okay, that makes sense. That was an interesting one. Now, I know that all the talks are available online, as videos, to catch up on-demand; again, I'll add the link in the show notes. I've also seen the pictures. So if you wanna see how this meetup was, you can go and look at those pictures.
38
+
39
+ I'm wondering - this is a yearly thing, right? So next year it's gonna happen again, also in-person, I'm imagining.
40
+
41
+ **Romano Roth:** \[08:06\] Yeah, exactly. I think the next one will be on the 31st of May, and we will do it again in-person, and it will be again in Zurich, or in Winterthur, as we call it. In the same building.
42
+
43
+ **Gerhard Lazu:** Okay. When do you open call for papers? When do people start submitting their talk proposals?
44
+
45
+ **Romano Roth:** \[laughs\] Very good question. We don't know yet. We are currently closing off all of the stuff which we need to do for the past conference... But in my opinion, I think it will be perhaps around December, or it will be January, when we will open up the call for papers.
46
+
47
+ **Gerhard Lazu:** Okay. Are there any specific topics that you would like to see more of in the next DevOps Days conference? What do you call it - conference, summit?
48
+
49
+ **Romano Roth:** Conference.
50
+
51
+ **Gerhard Lazu:** Conference.
52
+
53
+ **Romano Roth:** Yeah... Don't come with Kubernetes... \[laughs\]
54
+
55
+ **Gerhard Lazu:** Okay. So no Kubernetes.
56
+
57
+ **Romano Roth:** \[laughs\] We have so many proposals on Kubernetes, usually... No, what I really liked about the past conference, what we are focusing on, is diversity. And not only diversity like women or men, it's all about diversity also in different mindsets. That's why we also have different talks, and that's what I also like to see - big diversity, on different topics. We had talks on culture. We had talks on, for example, the role of UX in DevOps, which is also quite a special topic, but it's an important topic. And that's my wish - have that diversity on topics, and not only focusing, for example, on technology or only on the process; it's more also on the people side.
58
+
59
+ **Gerhard Lazu:** I love that. I mean, that really speaks to my heart, because we keep forgetting it's human beings, fallible, that get easily bored, and they keep chasing shiny, new things... And granted, Kubernetes may not be the shiny, new thing anymore, but then it's comfort, right? People are comfortable with that. So yeah, there's a lot there.
60
+
61
+ Now, I know that you're into DevOps; like, big-time into DevOps. But I don't know why. Why are you into DevOps?
62
+
63
+ **Romano Roth:** \[laughs\] Very good point. When I started my career, I was a .NET developer. And this was back in 2002, and we were doing there development of applications, or rich client applications. And one of the things also in the early times which struck me is "How can I ensure the quality of what I'm doing?" And yeah, of course, you could do testing also there, but it was not so automated. And I was always a little bit lazy, and I wanted to automate things... So I went into this area where we were starting automating the tests, and then also the deployment. So I went into continuous integration, continuous deployment, and the applications were getting bigger, and distributed... I was becoming an architect, and slowly I moved in the direction of these continuous delivery pipelines.
64
+
65
+ And when the whole DevOps movement started, I jumped on that, because this was really one of my hard topics, where I wanted to create these pipelines to continuously deal that value to the customer.
66
+
67
+ **Gerhard Lazu:** So my understanding is that you were passionate about how value gets delivered, which got you into DevOps, which seems to have made that almost like the center of its activity. How do you move this code from a repository into customer hands, wherever that may be? And there's like a whole lot of automation, because you can do it manually, but there is a better way, and automating that. Okay, interesting. Which was your first CI/CD system that you used? Do you remember?
68
+
69
+ **Romano Roth:** \[12:16\] The absolutely first CI/CD system was actually a command line that I used.
70
+
71
+ **Gerhard Lazu:** Interesting.
72
+
73
+ **Romano Roth:** Yeah, definitely. But I think what you want to ask is more the first product that I used, and this was -- I think it was the Team Foundation Server.
74
+
75
+ **Gerhard Lazu:** So when you mentioned the command line, did you mean like Rsync, or FTP, or SCP? What exactly did you do on the command line?
76
+
77
+ **Romano Roth:** Different things... For example, I had some scripts, command line scripts which I used to just compile or execute the tests. So my first build system was a batch file on my local computer, which I just could double-click and then it executed the tests. It compiled my code and it says "Yeah, everything is okay", and there is the deployable artifact. And when it went into the distributed system, I usually added also an FTP, where I just could move the code to the server, and then it was on the server.
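As a rough sketch of what such a double-click build step amounts to - a hypothetical reconstruction, not Romano's actual batch file, and written here in Python rather than as a Windows batch script - the whole thing is: compile, run the tests, and only if both succeed, push the artifact to the server over FTP. Every command, path and host below is a placeholder.

```python
#!/usr/bin/env python3
# Minimal build-test-deploy script in the spirit of the batch file described
# above. The build/test commands, artifact path and FTP host are assumptions.
import subprocess
import sys
from ftplib import FTP

BUILD_CMD = ["dotnet", "build", "-c", "Release"]   # placeholder build command
TEST_CMD = ["dotnet", "test"]                      # placeholder test command
ARTIFACT = "bin/Release/app.zip"                   # placeholder artifact path
FTP_HOST = "deploy.example.com"                    # placeholder server

def run(step: str, cmd: list[str]) -> None:
    print(f"--- {step}: {' '.join(cmd)} ---")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{step} failed, not deploying")

def main() -> None:
    run("compile", BUILD_CMD)
    run("tests", TEST_CMD)
    # "Everything is okay" - move the deployable artifact to the server.
    with FTP(FTP_HOST) as ftp:
        ftp.login()  # anonymous here; a real script would pass credentials
        with open(ARTIFACT, "rb") as artifact:
            ftp.storbinary(f"STOR {ARTIFACT.rsplit('/', 1)[-1]}", artifact)
    print("Everything is okay - artifact deployed.")

if __name__ == "__main__":
    main()
```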
78
+
79
+ **Gerhard Lazu:** Okay. What about today? What do you use today?
80
+
81
+ **Romano Roth:** Today I use quite a variety of systems. One of the products which I love is still TeamCity. I love that quite a lot, because you can do a lot of configuration. I use usually TeamCity together with Octopus Deploy, which I also love as a tool. But I see quite a strong movement in the moment into the direction of platforms, like GitHub and GitLab. So at the moment, when I look at the clients which I am working for, they are moving into this direction, away from Jenkins, Octopus \[14:05\] or CircleCI and into the direction of GitLab and GitHub. So these are now the big players. Customers are going into this directly because there you have a platform. Everything is there, and you don't need to deal with different tools which you need to stick together.
82
+
83
+ **Gerhard Lazu:** That's interesting. So I think that now we are starting to discover another side of Romano, because we know that you can put up a conference really well, as well as a meetup, but you also do other things. So when you don't organize various DevOps-related events, what do you do? Because you mentioned customers. There's more to it than organizing events, right? What exactly do you do?
84
+
85
+ **Romano Roth:** Yeah. \[laughs\] I'm the head of DevOps at \[14:52\] and there I have a whole unit of DevOps engineers and DevOps consultants, and I bring DevOps forward at Zühlke. So there is one side at Zühlke - I do a lot of trainings of people in the direction of DevOps, so that we can deliver better quality, better software to our clients... And on the other side, I'm also \[15:16\] projects, and there it's usually in projects where we are doing an Agile transformation, or we are doing a DevOps transformation. And I consult there different clients into this direction, and I also educate their people.
86
+
87
+ **Gerhard Lazu:** Okay. So how large is your team?
88
+
89
+ **Romano Roth:** The team is roughly at the moment I think 31 people, at Zühlke. But this is only in Switzerland, of course; there are other people around the world which are also doing DevOps.
90
+
91
+ **Gerhard Lazu:** Yeah. So your team is 31 DevOps engineers that work with various customers that you consult, help with DevOps-related projects. How many projects do you have?
92
+
93
+ **Romano Roth:** \[16:01\] Quite a lot, and the different people which are in my team, they work for different customers, and also with different engineers. So it is not only that these projects are under my responsibility, they are under different people's responsibility, but my team members are working in these projects.
94
+
95
+ **Gerhard Lazu:** Okay. So I'm thinking that you must have seen many projects this year that went well, as well as many projects that didn't go so well. Is this something that you can talk about, without giving any names? We don't have to give any names. But things that worked well, and things that didn't go so well. What do you think about that?
96
+
97
+ **Romano Roth:** Yeah, sure. When we think about things that went well, there is - especially at one customer, where we are creating a whole transformation... And what we did there is they are going through an Agile transformation. And one of the things you need to have for an Agile transformation is technical fundament, so that you can do this transformation. And we were thinking how we can do that, and we built up an Agile release train for that, with different teams in there, which were focusing on different aspects of this transformation. There is one team which is more focusing on the governance part, one team which is more focusing on, for example, the continuous integration and continuous delivery pipeline, and one team which is more focusing on the containerization. So it's about that. And that worked very well; we are now in the fourth \[17:45\] which we are doing, and that's quite cool. We had also a very, very good learning in there.
98
+
99
+ From the beginning, we identified who the customer is, and we said "We want to deliver to this customer." And we said, "Okay. Everything what we are doing must be something that the customer can use." And we started with that.
100
+
101
+ The thing was that in the first sprints which we did we saw that we were delivering to the customer, but only to the customer, but the customer was not using it. So we changed that, and we said, "No, no. From now on, the customer needs to use it", so we put that in the definition of Done that the customer needs to take that over. But that was also not enough, because we also had our system demos or our review meetings where we showed that, and that was not enough. And now we said, "Okay, when we are demo-ing stuff, not our people are demo-ing it. The customer needs to do the demonstration how he uses it." And that's a very strong thing you can do. So always deliver directly to the customer. Let the customer show what you have delivered and how he is using it. That would also be one of my recommendations to do in the future.
102
+
103
+ **Break:** \[19:16\]
104
+
105
+ **Gerhard Lazu:** So when it comes to the biggest obstacles, the biggest challenges to driving DevOps transformation or successful DevOps projects, what are they? What did you come across in your experience, Romano?
106
+
107
+ **Romano Roth:** So one of the biggest obstacles that is out there is actually the middle management. What you can see is you have a lot of companies which are organized in different units. And these units - they have goals. And of course, there is a head of this unit, and he has built that unit up, and he is chasing his goals. But one of the problems that we usually see is that there is a lot of misalignment between these units, or you can also call it silos.
108
+
109
+ So now, with Agile transformation or with the DevOps transformation, you are starting to align the people around the value stream, and you bring people together. And this means that some of these heads of these units - they are losing power. And they know that, they see that, and this is something they don't want to have. So they want to still be in charge, they want to have their budget, they want to see how things are done. And that's a big challenge, and I can also fully understand these people... But sometimes they are completely in the way, or they are also attacking an Agile transformation or a DevOps transformation, or they are doing stuff which makes it very difficult to bring that through. That's a big obstacle that I see, and it is very important to bring them on board, to educate them, and to also show them how their new job looks like in the future.
110
+
111
+ **Gerhard Lazu:** So how do you succeed with that, bringing them on board? How does that even look like?
112
+
113
+ **Romano Roth:** So one of the things you need to analyze is what are the goals these people have. And usually, it's not their goals, it's the goals of their bosses, and you need to change these goals. So when we look at an Agile transformation or a DevOps transformation, it is very crucial that it comes from the top management, and the top management has a clear vision, and also clear guidance what they want to do, and in which direction they want to go, and they need to change the goals of these people. Only by doing that you can change how these people are behaving.
114
+
115
+ **Gerhard Lazu:** Okay. And what about the type of person that doesn't want to change? What do you do then?
116
+
117
+ **Romano Roth:** This is a very difficult case, when you have that. The only thing you can do in that case is try to educate, try to convince, but if you can't, he/she potentially needs to leave the company.
118
+
119
+ **Gerhard Lazu:** \[24:02\] I see. Okay. So coming back to the Agile transformation or the DevOps transformation that you mentioned - the first example that you gave us, of what good looks like in practice, is when you connect the value that the team builds, the customer(s) they build it for, and have the customer not even verify, but make sure the value is what they expect it to be. So that's what good looks like. So is there more to it, or is this basically the core of what you're referring to when you say Agile and DevOps transformation?
120
+
121
+ **Romano Roth:** It's more. One of the most important things that I always say is what you usually have is you have bright ideas. The business has ideas, the customer has bright ideas... And usually, you have a lot of these ideas. And what you want to do is you want to transform these ideas into value - value for the customer, value for the company.
122
+
123
+ So behind an idea, there is always a hypothesis, and you need to identify this hypothesis which is behind this idea. For example, a hypothesis can be when we bring this feature or this shiny, new mobile app, then we can have 10% more turnover. So that could be the hypothesis behind that. And now, the important thing is to find out what is the minimal thing we need to do to prove this hypothesis. This is a very important thing to do, because with that you can reduce the batch sizes which you have. So by analyzing what is the minimal thing, you identify the minimum viable product. And you need to also identify what are the leading indicators which indicate us that we are on the right track, and that this hypothesis is true, and we should invest more money into that. And by doing that, you can reduce massively the batch size and also the amount of work which is going through your value stream. And that's what you really need to do, you need to do less, but you need to do the things which you're doing in the right way. And by having these hypotheses and identifying them, and also having an evaluation on "Is it the right thing which we are doing?" and early also stop doing things, you can massively change the things you are doing, and you can create more value for the customer.
124
+
125
+ **Gerhard Lazu:** The way I understand that is ship less, more often, and check if it works.
126
+
127
+ **Romano Roth:** True. Absolutely.
128
+
129
+ **Gerhard Lazu:** That's how I'd summarize it.
130
+
131
+ **Romano Roth:** Yeah, perfect.
132
+
133
+ **Gerhard Lazu:** And in that case, you want to optimize the shipping cycle as much as you can. If it takes a day, try to go for an hour. If it takes an hour, try to go for a minute... No, that doesn't work. I don't think you can ship it in a minute. Maybe if you have a function, maybe... The point being, go as quickly as you can, but still it should feel like a comfortable pace. You shouldn't feel like you're rushing things out.
134
+
135
+ The scientific method is really important. Everything that you think is an assumption. And by the way, it's most likely wrong. But that's okay, because the quicker you can iterate on that, the quicker you'll figure out what right looks like. And once you know one right, then you'll have two, three, and before you know it, you have a set of things which work well together, and that's the value that I see.
136
+
137
+ **Romano Roth:** \[27:49\] Absolutely. One thing that I also find important - you said "Go quicker." Yes, this is right, but you also need to recognize when it is enough. And I think this is also quite an important thing. You don't need to chase a Google, Netflix, or so. It can be perfectly fine in your context to, for example, ship every day or every week, or so. You don't need to deploy to production every second, like Amazon is doing it. So I think there is always a sweet spot, and you need to identify this sweet spot.
138
+
139
+ **Gerhard Lazu:** Yeah. In my mind, any code, any feature that is built and it's not out there, it's inventory. And we all know that zero inventory is the best type of inventory. So whatever you have, just make sure it's out there; make sure people can start using it. Even if it's not complete, it doesn't really matter. Does it look right? And if it looks right, you have the confidence, "Okay, I'm walking in the right direction. Let's just keep adding on top of it." But if you can verify those assumptions as early as possible, the chances of you going terribly wrong are much less. You're less likely to go terribly wrong if you have that approach. You will still go wrong, but that's okay, as long as you can do those small-course adjustments. It's like driving on a motorway, right? You do a little bit of left, a little bit of right, and if you have an autopilot - because we were talking about your Tesla - you can see those very small steering wheel changes without you doing anything. So that's what you want, the small course adjustments, continuous; they happen every few seconds, and it's okay.
140
+
141
+ **Romano Roth:** Absolutely. And one of the things that I usually see - in many companies it is not allowed to make failures. And this is a huge pity, because when you always need to do things right, you cannot go that fast. That's a huge problem, and this is also a culture shift, and culture shifts take a lot of time.
142
+
143
+ **Gerhard Lazu:** Can you think of an example when making a mistake was a great thing? When making a mistake or a failure, people have learned from that failure? And if they weren't allowed to make it in the first place, they wouldn't have had those learnings. Can you think of such an example?
144
+
145
+ **Romano Roth:** Yeah, of course. I have such an example. So in one of the projects we needed to move fast. And in order to move fast, we said, "Okay, we have already an application in place, which we're using server-side rendering." But that was an old technology. And we said, "Okay, we can use that, but the user experience will not be that good. But we need to move fast."
146
+
147
+ So we made an architectural decision together with the business and said "Okay, we will use this old technology, and move on, so that we can learn." And we were building up the user interface with that. But soon, the business said, "Yeah, it looks okay, but we want to have a better user experience." So a user experience specialist came to the project, he designed some very nice user interfaces, and we said "We don't know if we can implement that with this old technology." But we tried, and we failed. We failed very hard, we had a lot of bugs... And this was the time when I went to the management and I said, "Look, management, we have the iceberg in front of us." Now we have three possibilities. We go left, and this would mean we need to change the UI technology of the whole application to a modern UI technology. I think it was Angular in that time. Or we go right, and right is we stay with this technology which we are having, but it will not look fancy, and remove everything that we just added, all that fanciness, and it's just user interfaces, with loading time, and so on. Or we go straight through the iceberg and we say "No, we want that with this old technology", but it can be that we will fail very hard.
148
+
149
+ \[32:14\] And we had a very good discussion about that, and we said we'd take the risk, we go to a new UI technology. Of course, we made some silly estimations, which were absolutely wrong; we completely underestimated it. But in the end, it was the right decision which we did... And the thing is the following - in the beginning, we started with this old technology, and that was of course questionable when you look back at that decision, but it enabled us to learn very fast what we really wanted, or what the customer really wanted, and we were able to see, "Oh, okay, it looks like that." We added the whole user experience, we saw with this technology we were not able to do it, and we were able to see that we now need to change.
150
+
151
+ Of course, now you can say "Yeah, you could see that already in the beginning, and you could change that already in the beginning", but in my opinion it was not feasible.
152
+
153
+ **Gerhard Lazu:** Yeah. That's a really interesting story. And I'm wondering what that story would have been had the developers maybe stumbled across something like LiveView, which is server-side rendering, but it's a modern server-side rendering which exists in the Elixir ecosystem; it's running on the Erlang VM. Very efficient. It keeps JavaScript at a minimum, so you don't have to end up in the npm hell, as some call it. You can keep things simple, you can keep things server-side rendered, but it's still fast, it's still modern. So I'm wondering what that would have looked like with this technology. But obviously, you need to know your technology, you need to know what suits you, and you need to own it. So whatever you decide to use, you need to be confident that "I will make this work. And if it doesn't work, I will course-correct... Because hey, I was wrong." And that's perfectly fine. Saying "I was wrong" -- I think a lot of people are so afraid of saying they were wrong that they never admit that in the first place, and as a result they can never course-correct, and then they hit the iceberg, and then we know what happens next... Right?
154
+
155
+ **Romano Roth:** Yeah. And one of the important things is you should not be afraid of the sunk costs... Because that's always a bad thing. And you always hear that term quite a lot, "Yeah, but then we have sunk costs." Yeah, of course you have sunk costs, but throwing more money after a bad idea or a bad solution is also a very, very bad thing.
156
+
157
+ **Gerhard Lazu:** Yeah. It's not gonna make it better, right? The focus is on learning. The focus is not on the time spent to learn. What did you learn? Is this a good thing, and can you build on top of that? So if you switch your mindset and you think "Well, that's okay. We know not to do that again", and we know that that's an area that we're not comfortable with... And the longer you delay it, the worse it gets. We all know that, right? Just stop thinking about things like that.
158
+
159
+ Okay... Now, talking about technology, I'm wondering what role does a specific technology play in these decisions. I know that many teams get excited about something like Kubernetes, or they get excited about (as you mentioned) Angular. I'm not sure who gets excited about Angular these days, but I'm sure there are people out there which love it... Or some other JavaScript framework, and they say "No, we have to use this." How do you deal with those types of scenarios? First of all, have you been in those types of scenarios? And if you have, how did you deal with them successfully?
160
+
161
+ **Romano Roth:** I have been in these types of scenarios quite a lot. The thing is the following - what you need to do is you need to understand what the real need is, what you need to do. So getting excited about the technologies is a great thing; trying out this technology is also a great thing, but you should not do that in a huge project, trying out things.
162
+
163
+ \[36:11\] What I usually do is I really want to understand what exactly our need is, and what problem we are trying to solve, so what is the underlying problem we are trying to solve with this technology. And there is technology out there which perfectly fits the problem, but just looking at the technology and not knowing what problems we are trying to solve is a very bad thing. So what I do is when we have such a case, I really try to identify what the problem is... And of course, you then have different technology or different decisions you can do.
164
+
165
+ What I then do is I do sort of an analysis of the different possibilities, where I say, "Okay, this is technology Y, and this has these advantages, it solves us these problems, but it also could potentially introduce these problems. And this is the other technology which we are having", and then you have something which you can compare and which you also can say "Okay, should we go into this direction, or should we go more in this direction?" And after that, I usually also do some prototypes on these technologies, to get my hands dirty on that, so that I can see "Does it really work, or does it not work?"
166
+
167
+ **Gerhard Lazu:** Who decides which technology should be used? Do you let the developers decide, the ones doing the work? Or do you let the architect decide? Or the management? How does that look like?
168
+
169
+ **Romano Roth:** In my opinion, it should always be the decision of the team. The team which needs to work with this technology, they need to take the decision. Because if someone else takes the decision, the team does not stand behind this decision. So that's why I usually want that the team takes the decision, and also does the analysis, and everything. So they sort of need to come up with the idea, and also with the decision. Of course, there are companies out there where this is not really possible; then I also try to do that, but then I try to convince, for example, the central architecture or the management about this solution which we should do.
170
+
171
+ **Break**: \[38:51\]
172
+
173
+ **Gerhard Lazu:** I know, Romano, that you have a YouTube channel, which is growing in popularity. I've seen some really good videos. And I've checked today, and your most popular video to date is what are the DevOps trends that you have seen in 2021. I think "2021 DevOps Trends", something like that. I forget the exact title, but it was DevOps trends for 2021. So why do you think that video is so popular?
174
+
175
+ **Romano Roth:** Actually, I really don't know why it is so popular, but I made some analysis and it looks like people are googling this title, so they want to know what the trends are.
176
+
177
+ **Gerhard Lazu:** So what are they? Can you tell us what they are?
178
+
179
+ **Romano Roth:** Yeah, of course. So the trends which I brought up in 2021 were: it's all about automation, so we need to automate more - that's one of the trends that I pointed out. Security, so the whole dev sec ops - that was a huge one. And AIOps, that was also one of the trends that I pointed out. So we have a lot of data, and we need to deal with this data, so AI is a very good match for that, and these are the things that I see are coming up.
180
+
181
+ When I look back to the statements that I did, I think I was absolutely right with these trends. For example, when I look at AIOps, this is something that's coming, quite huge. I also started using AIOps in some areas, and the results are really amazing.
182
+
183
+ **Gerhard Lazu:** First of all, what is AIOps, and second of all, how do you make use of that? What does that look like in practice for you?
184
+
185
+ **Romano Roth:** So AIOps - as I said, usually you log out quite a lot of data. So you have a lot of log statements. And when you have a distributed system, you have distributed log files. So first of all, you need to put that all together into one logging system which you can have. But then you have a lot of logging statements in there, and it's impossible to really see where problems are, or where trends are. And here come AIOps into play, because AIOps can do pattern-matching. There's a ton of tools out there - I don't want to do advertising here, but...
186
+
187
+ **Gerhard Lazu:** What do you use? That's something--
188
+
189
+ **Romano Roth:** For example, I use quite a lot Dynatrace. And I'm a huge fan of Dynatrace, because we have some very difficult projects out there, and we were chasing some performance problems, and also some problems where suddenly something didn't work, and we weren't finding it. And also with log file analysis, we were not finding it. But by using Dynatrace, Dynatrace was able in minutes to point us to the correct server where the problem was, and it was just a configuration problem on that server. We were like, "Whoa, how did that go?" And that's quite amazing, how good these AIOps systems are already.
190
+
191
+ **Gerhard Lazu:** \[44:13\] Okay. Anything other than Dynatrace that you've used and you've liked?
192
+
193
+ **Romano Roth:** I've also used Datadog, I like that also. Beside of that, no, I cannot --
194
+
195
+ **Gerhard Lazu:** Okay. So Datadog - were you using it in the same way, in that you were shipping logs to Datadog, and then Datadog figured out what was going on in the system based on the logs?
196
+
197
+ **Romano Roth:** Exactly. Yeah.
198
+
199
+ **Gerhard Lazu:** Interesting. Okay. We talked about AIOps... Now, in automation, what tools do you find yourself reaching out for when you're automating things? What is in your toolbox, or what do you find maybe that your team likes to use?
200
+
201
+ **Romano Roth:** What we quite often use when it comes to deployment automation, we use quite a lot Octopus Deploy, of course... And when it comes to CI/CD pipelines, then of course we use Jenkins, but also TeamCity; Azure DevOps is also a huge thing, and of course, GitHub and GitLab.
202
+
203
+ **Gerhard Lazu:** Okay. And another category was dev sec ops; I think I would call it like supply chain security... What tools do you use for securing the supply chain?
204
+
205
+ **Romano Roth:** We need to understand that there are different aspects of security when we talk about dev sec ops. One thing is the application security. So when we do continuous integration and our continuous integration server is compiling our source code, we do static code analysis. There, for example, we use of course SonarQube, Checkmarx is also one of the things you can use, and of course, there are also other tools... I think there is \[46:07\] tool, but I don't know the name anymore. A ton of tools are out there to do just static code analysis.
206
+
207
+ What you also need to do is you not only need to analyze your code, you also need to analyze the libraries, and the libraries of the libraries of the libraries...
208
+
209
+ **Gerhard Lazu:** Oh, yes... That's a big one.
210
+
211
+ **Romano Roth:** Exactly, that's a big one. And you need to identify these vulnerabilities there. And there I use usually WhiteSource to do that, which is also quite good because you also get the information about the licensing, which is also a difficult thing.
212
+
213
+ **Gerhard Lazu:** Oh, yes. That's a big one. You're right - once you enter the enterprise world... You don't even think about these things as a startup, but when you go in the enterprise, this is a big-ticket item; really, very important.
214
+
215
+ **Romano Roth:** Exactly, exactly. And the second thing you need to think of is of course when you are in production. First of all, what you need to do is monitor your system, and therefore you need to have these enterprise security monitoring systems. There is also a ton of products out there, but usually what you use is Splunk. You configure quite a good alerting together with the security experts, so that you get alerted about any security vulnerabilities.
216
+
217
+ **Gerhard Lazu:** That's interesting. So we have heard about the DevOps trends for 2021, and you gave us some great examples, some tools that you use in various spaces. I'm wondering, first of all, will you create a video for 2022?
218
+
219
+ **Romano Roth:** Sure, of course. I'm currently preparing it. I'm gathering all of the trends that I see at the moment, and at the end of the year I will create that video and publish it.
220
+
221
+ **Gerhard Lazu:** Okay. Can you give us a couple of hints as to what you're thinking about? Again, this is a draft, this is not the finished version, but a few things that you're thinking for this video.
222
+
223
+ **Romano Roth:** \[47:59\] Yeah. So first of all, what I will do is I will look back to what I said in my '21 video... I will have a look at that and I will say what kind of trends I see in the future. One of the huge trends that I see is hyper-automation. So it's not only about automating stuff, it's about automating nearly everything. So this is a huge trend that I'm seeing coming.
224
+
225
+ With the hyper-automation there is also another thing coming - you get a lot of data out of that, and you need to monitor that, and then you have again that big data problem, and again, AIOps comes into play, because with all of that automation you also need to maintain that, and you need to operate that. So your topic, observability, will be quite a huge thing.
226
+
227
+ **Gerhard Lazu:** Interesting. So hyper-automation - that is a great title. I'm sure that we could do an episode just on that - what it is, why is it important, what elements do you see in that... Interesting; okay, that's a great idea, I think. Let's run it by the product team, I think...? Because you were talking about ideas, everybody has one, so how do you figure out whether the ideas -- how do you formulate a hypothesis? So maybe if anyone listening to this can tell us if there's something they're excited about; we can connect them to the end users, to the ones listening, and if they would want for us to do an episode on that. I'm excited.
228
+
229
+ **Romano Roth:** What the next thing is, which we'll have - this is the whole cyber-resilience topic. Of course, on one side we have that dev sec ops thing, so we bring security into the whole DevOps cycle, but when you look at all of the attacks that are out there on companies, I think cyber-resilience will be one of the big, big topics, and I think together with dev sec ops, we will be able to give the companies this cyber-resilience in their application, but also in their infrastructure.
230
+
231
+ **Gerhard Lazu:** Interesting. I don't know enough about that topic, but it's something I would like to research, just to understand a bit better. I know all the ransomware attacks and all the cyber attacks - they're becoming more and more prevalent, and bigger, and they affect more and more users, but I don't know enough, like more details, other than what you just get from like afar. So I think that's something I would like to spend a bit more time in.
232
+
233
+ Switching subjects... Because I know that one topic that was top of your mind recently was how to allocate budget. And I forget the exact phrase that you used... It was a really good one. Let me check that, actually... Or you can tell me what it is.
234
+
235
+ **Romano Roth:** Sure. It's participatory budgeting.
236
+
237
+ **Gerhard Lazu:** Okay, what is that?
238
+
239
+ **Romano Roth:** \[laughs\]
240
+
241
+ **Gerhard Lazu:** What is participatory budgeting?
242
+
243
+ **Romano Roth:** Exactly. So participatory budgeting is a thing you can do to allocate budget. So what is one of the big problems that we have when allocating a budget? Usually, you have people who want to do stuff. And on the other side, you have people who have the budget, and who say where, which kind, and which amount of budget it gets.
244
+
245
+ The problem is that the people who have the budget don't really know what exactly the impact is, \[51:47\] people who want to do something really has. And that's a huge problem. What you usually get is the people who have the budget will just say "Yeah, we divide everything apart, and everybody gets the same amount", and then everybody is sort of happy.
246
+
247
+ \[52:05\] That's quite a bad thing you usually have. The better thing is to have that participatory budgeting. That's an event, and in this event everybody who wants to have a budget and is part of a value stream comes together, and they get allocated the budget. They sit at a table, they get the budget pot, and then they on the table need to pitch for their budget. And then they have together - participatory - a discussion on "In which area are we going to invest the money?" And that's a very, very good thing, because then the people are discussing about impact on value, and how much value this topic brings, and especially when you have, of course, OKR or a strategy, they are also coming up with the strategy and are saying "Hey, look, this initiative buys more into the strategy than the other one."
248
+
249
+ So there is that entrepreneurial thinking which is coming up, and they start to think like it is their own enterprise, and they are more emotionally attached, and in the end you get a better budget.
250
+
251
+ **Gerhard Lazu:** That was a great summary; I know that you gave a whole talk on this... And based on that summary, I'm going to watch it. So thank you for that. \[laughter\] Great. So we are just about to wrap up. I have one last, very important question. What is the most important takeaway for our listeners from our conversation? What would you like them to remember?
252
+
253
+ **Romano Roth:** A very good question. I would say don't be afraid to take decisions; don't be afraid to make a bad decision. Just constantly learn and react, and constantly adapt.
254
+
255
+ **Gerhard Lazu:** I love that. That's amazing, Romano. Thank you very much. This was a pleasure.
256
+
257
+ **Romano Roth:** Thank you also.
What does good DevOps look like?_transcript.txt ADDED
@@ -0,0 +1,855 @@
1
+ [0.08 --> 6.02] Welcome to another episode of Ship It. I'm Gerhard Lazu and today I'm chatting with Romano Roth,
2
+ [6.40 --> 13.62] head of DevOps at Zühlke, a company founded by Gerhard Zühlke in 1968. They help companies
3
+ [13.62 --> 20.82] all over the world build, ship and run anything from factory robots to AI assistants in complex
4
+ [20.82 --> 26.02] regulatory environments and even medical devices that perform autonomous robotic surgery.
5
+ [26.02 --> 30.74] Besides leading a team of 30 software engineers that specialize in operations,
6
+ [31.22 --> 38.28] infrastructure and cloud, Romano is one of the organizers of DevOps Days Zurich and also the DevOps
7
+ [38.28 --> 45.30] meetup group, which is how we met in 2019. Having started his career as a .NET developer back in 2002,
8
+ [45.88 --> 51.76] Romano had his fair share of dev and ops challenges and he always enjoys seeing real business value
9
+ [51.76 --> 58.74] delivered continuously in an automated way. In recent years, his perspectives broadened and now
10
+ [58.74 --> 64.76] he sees DevOps challenges and wins across many companies. If you are curious about what good
11
+ [64.76 --> 69.78] DevOps looks like and what are the real challenges, then Romano has some good insights for you.
12
+ [70.14 --> 75.96] Big thanks to our partners Fastly, LaunchDarkly and Linode. Thank you for the great bandwidth, Fastly.
13
+ [75.96 --> 82.30] You can learn more at Fastly.com, ship new features with confidence by getting your feature flags powered
14
+ [82.30 --> 89.68] by LaunchDarkly.com and thank you Linode for keeping our Kubernetes fast and simple. You too can run our
15
+ [89.68 --> 94.58] infrastructure as we do via Linode.com forward slash changelog.
16
+ [94.58 --> 107.96] This episode is brought to you by Honeycomb. Honeycomb is built on the belief that there's a more
17
+ [107.96 --> 113.04] efficient way to understand exactly what is happening in production right now. When production
18
+ [113.04 --> 117.96] is running slow, it's hard to know exactly where problems originate. Is it your application code,
19
+ [117.96 --> 123.66] your users, or the underlying systems? Teams who don't use Honeycomb scroll through endless dashboards
20
+ [123.66 --> 129.08] guessing at what they mean. They deal with alert floods, guessing which ones matter, and go from tool
21
+ [129.08 --> 133.80] to tool to tool, guessing at how the puzzle pieces all fit together. It's this context switching and tool
22
+ [133.80 --> 138.80] sprawl that are slowly killing your teams and your business. With Honeycomb, you get a fast, unified,
23
+ [139.12 --> 144.28] and clear understanding of the one thing driving your business, production. Honeycomb quickly shows you the
24
+ [144.28 --> 149.82] correct source of issues, discover hidden problems, even in the most complex stacks, understand why your app
25
+ [149.82 --> 156.06] feels slow to only some users. With Honeycomb, you guess less and know more. Join the swarm and try
26
+ [156.06 --> 162.38] Honeycomb free today at honeycomb.io slash changelog. Again, honeycomb.io slash changelog.
27
+ [162.38 --> 178.84] We are going to ship in 3, 2, 1.
28
+ [178.84 --> 199.62] So two years ago in 2019, I gave a talk about making your system observable.
29
+ [200.36 --> 202.96] And that was DevOps Meet Albuteric.
30
+ [203.50 --> 205.58] And I'll add a link in the show notes.
31
+ [205.58 --> 209.32] And Romano was the organizer and he put up quite the events.
32
+ [209.40 --> 210.58] I'd like to thank you for that.
33
+ [210.66 --> 211.76] It was a great experience.
34
+ [212.14 --> 212.52] Welcome.
35
+ [212.90 --> 217.68] And this year, my intention was to join DevOps Days Zurich, but timing wasn't right.
36
+ [217.68 --> 219.38] So I couldn't make it work.
37
+ [219.84 --> 222.00] But again, Romano was one of the organizers.
38
+ [222.44 --> 224.60] And I'm wondering, how did the event go?
39
+ [224.86 --> 227.04] Oh, it was absolutely great.
40
+ [227.32 --> 230.62] So it was in the beginning of September when we had that event.
41
+ [230.62 --> 236.02] It was also one of the first events which we could do in person.
42
+ [236.60 --> 238.30] And that was amazing.
43
+ [239.00 --> 249.38] The only thing was it was quite frustrating in the beginning to organize all of that because you need to look at the COVID numbers.
44
+ [249.58 --> 253.92] You needed to create a concept for COVID.
45
+ [254.34 --> 256.14] And that was quite stressful.
46
+ [256.70 --> 260.26] But in the end, we could manage to do the event.
47
+ [260.26 --> 263.12] We had a COVID security concept.
48
+ [263.82 --> 267.42] Everybody needed to line up, needed to have the certificates.
49
+ [268.40 --> 273.68] And it also worked with all of the people which were coming from around the world.
50
+ [273.80 --> 278.42] We had people from the U.S. coming and also from Israel.
51
+ [279.26 --> 281.30] And everything worked well.
52
+ [281.50 --> 283.96] We had 250 people in there.
53
+ [284.36 --> 286.64] And it was absolutely great.
54
+ [286.64 --> 289.14] And this was a two-day event.
55
+ [289.32 --> 292.98] So it wasn't just like one day, which makes things slightly more complicated, right?
56
+ [293.32 --> 293.44] Yeah.
57
+ [293.64 --> 293.90] Okay.
58
+ [294.30 --> 295.56] How many talks did you have?
59
+ [295.72 --> 297.70] I don't know the correct number.
60
+ [298.02 --> 300.90] But what we have is always a keynote.
61
+ [301.20 --> 303.42] Then we had a set of talks.
62
+ [303.42 --> 305.88] I think it was three or four talks.
63
+ [305.88 --> 310.24] And then in the afternoon, we had the Ignite talks.
64
+ [310.56 --> 314.76] So that's the five-minute talks which we have.
65
+ [315.18 --> 318.66] And then we usually have workshops and open spaces.
66
+ [318.92 --> 321.64] And that's over the whole two days.
67
+ [322.12 --> 326.86] So I would say roughly 20 talks altogether.
68
+ [326.86 --> 328.76] And was it single track?
69
+ [328.88 --> 330.12] Always single track, yeah.
70
+ [330.28 --> 330.58] Okay.
71
+ [330.70 --> 330.92] Okay.
72
+ [330.92 --> 331.48] That's nice.
73
+ [331.58 --> 331.98] That's nice.
74
+ [332.04 --> 334.80] Because you just have to sit there and enjoy, right?
75
+ [334.82 --> 338.10] Like you don't have to change rooms, meeting rooms.
76
+ [338.52 --> 338.70] Yeah.
77
+ [338.74 --> 338.96] Okay.
78
+ [339.26 --> 340.36] Which was your favorite talk?
79
+ [340.42 --> 340.90] Do you remember?
80
+ [341.32 --> 342.12] I'm sure there were many.
81
+ [342.30 --> 344.02] But any one talk that stood out?
82
+ [344.02 --> 344.50] Yeah.
83
+ [344.66 --> 348.74] I liked very much the talk about, what was it?
84
+ [348.88 --> 351.94] Better, sooner, happier from Jonathan Smart.
85
+ [352.48 --> 354.24] I liked that quite a lot.
86
+ [354.36 --> 357.96] Because during the talk, he asked always questions.
87
+ [358.32 --> 361.92] And one of the questions was, are you doing IT transformation?
88
+ [362.44 --> 363.58] Please, hands up.
89
+ [363.78 --> 366.24] And everybody was putting their hands up.
90
+ [366.28 --> 367.62] And he said, don't.
91
+ [368.84 --> 370.26] And that was absolutely amazing.
92
+ [370.58 --> 372.74] And he continued with these questions.
93
+ [372.74 --> 376.20] For example, are you using a scaled Agile framework?
94
+ [376.84 --> 377.28] Don't.
95
+ [378.24 --> 379.28] And so on and so on.
96
+ [379.58 --> 381.32] And that was quite good.
97
+ [381.46 --> 388.02] Because he was going back to what really matters when you are doing an Agile transformation.
98
+ [388.54 --> 389.98] And that was cool stuff.
99
+ [390.38 --> 392.96] Like, I want to ask you what that is.
100
+ [393.26 --> 397.28] If you don't want to spoil that talk for you, you can skip maybe a few minutes.
101
+ [397.68 --> 398.40] So what is it?
102
+ [399.62 --> 400.60] Can you tell us?
103
+ [400.92 --> 401.76] Yeah, sure, sure.
104
+ [401.76 --> 405.68] The thing is, you really, really need to focus on the people.
105
+ [405.68 --> 410.24] You don't need to focus only on doing the transformation.
106
+ [410.42 --> 412.12] Because you want to do a transformation.
107
+ [412.38 --> 417.12] It is more focusing on what do you really want to achieve.
108
+ [417.64 --> 420.94] And focus on changing these things.
109
+ [420.94 --> 425.74] And that's also why he said, don't use a scaled HL framework.
110
+ [425.74 --> 432.76] Because there you focus on the process and on changing terminology.
111
+ [433.16 --> 437.20] It is more shifting to what do you really want to achieve.
112
+ [437.62 --> 439.72] Identify really what you want.
113
+ [439.98 --> 441.84] And then changing these things.
114
+ [441.84 --> 447.04] Not having that huge IT transformation that many people are doing.
115
+ [447.04 --> 451.40] So really focusing on what really matters for you.
116
+ [451.40 --> 452.96] I think I seem to remember.
117
+ [453.36 --> 453.68] Well, no.
118
+ [453.98 --> 459.00] I remember that in the Agile Manifesto, as it was initially captured, one of the core principles
119
+ [459.00 --> 461.18] were people over processes.
120
+ [461.80 --> 462.14] Exactly.
121
+ [462.46 --> 462.70] Okay.
122
+ [462.80 --> 463.72] So that's what this is.
123
+ [463.78 --> 463.98] Okay.
124
+ [464.10 --> 464.80] That makes sense.
125
+ [465.62 --> 467.60] So that was an interesting one.
126
+ [467.78 --> 472.42] Now, I know that all the talks are available online as videos to catch up on demand.
127
+ [472.60 --> 474.48] Again, I'll add a link in the show notes.
128
+ [474.48 --> 475.90] I've also seen the pictures.
129
+ [476.10 --> 481.08] So if you want to see how this meetup was, you can go and look at those pictures.
130
+ [481.42 --> 483.72] I'm wondering, this is a yearly thing, right?
131
+ [483.78 --> 485.32] So next year, it's going to happen again.
132
+ [485.52 --> 485.70] Yeah.
133
+ [485.78 --> 487.48] Also in person, I'm imagining.
134
+ [487.86 --> 488.28] Exactly.
135
+ [488.40 --> 488.76] Exactly.
136
+ [489.00 --> 493.80] I think the next one will be on the 31st of May.
137
+ [494.48 --> 497.08] And we will do it again in person.
138
+ [497.50 --> 502.92] And it will be again in Zurich or in Winterthur, as we call it.
139
+ [502.92 --> 505.24] And in the same building.
140
+ [505.62 --> 505.76] Okay.
141
+ [505.98 --> 509.18] When do you open Call for Papers?
142
+ [509.30 --> 511.54] When can people start submitting their talk proposals?
143
+ [512.58 --> 513.84] Very good question.
144
+ [513.98 --> 514.70] We don't know yet.
145
+ [514.74 --> 521.52] We are currently closing off all of the stuff which we need to do for the past conference.
146
+ [522.10 --> 529.54] But my opinion, I think it will be perhaps December or it will be January round where we
147
+ [529.54 --> 531.62] will open up the Call for Papers.
148
+ [531.62 --> 531.86] When they open up.
149
+ [532.10 --> 532.22] Yeah.
150
+ [532.22 --> 532.62] Okay.
151
+ [532.92 --> 540.54] Are there any specific topics that you'd like to see more of in the next DevOps Days conference?
152
+ [540.94 --> 541.62] What do you call it?
153
+ [541.66 --> 541.92] Conference?
154
+ [542.28 --> 542.52] Summit?
155
+ [542.80 --> 543.22] Conference.
156
+ [543.48 --> 543.76] Conference.
157
+ [543.78 --> 544.02] Conference.
158
+ [544.44 --> 544.64] Yeah.
159
+ [544.92 --> 545.30] Yeah.
160
+ [545.76 --> 547.20] Don't come with the Kubernetes.
161
+ [548.10 --> 548.58] Okay.
162
+ [549.10 --> 550.06] No Kubernetes.
163
+ [550.52 --> 550.68] No.
164
+ [551.06 --> 551.62] All good.
165
+ [551.74 --> 551.96] All good.
166
+ [552.62 --> 556.10] We have so many proposals on Kubernetes usually.
167
+ [556.10 --> 556.72] No.
168
+ [556.72 --> 556.74] No.
169
+ [556.92 --> 563.56] What I really liked about the past conference or what we are focusing on is diversity.
170
+ [563.56 --> 567.18] And not only diversity in terms of women or men.
171
+ [567.18 --> 571.24] It's all about diversity also in different mindsets.
172
+ [571.24 --> 573.86] That's why we also have different talks.
173
+ [573.86 --> 576.90] And that's what I also like to see.
174
+ [577.70 --> 580.36] Big diversity on different topics.
175
+ [580.36 --> 582.36] We had talks on culture.
176
+ [582.36 --> 590.56] We had talks on, for example, the role of UX in DevOps, which is also quite a special topic,
177
+ [590.56 --> 593.24] but it's an important topic.
178
+ [593.24 --> 594.54] And that's my wish.
179
+ [594.70 --> 603.40] To have that diversity of topics, and not only focus, for example, on technology or only on the process.
180
+ [603.72 --> 606.70] It's more also on the people side.
181
+ [606.98 --> 607.42] I love that.
182
+ [607.56 --> 612.62] I mean, that really speaks to my heart because we keep forgetting it's human beings, fallible,
183
+ [613.00 --> 615.76] that get easily bored and they keep chasing shiny new things.
184
+ [616.18 --> 620.56] And granted, Kubernetes may not be the shiny new thing anymore, but then it's comfort, right?
185
+ [620.60 --> 621.56] People are comfortable with that.
186
+ [621.56 --> 625.06] So, yeah, there's a lot there.
187
+ [625.50 --> 631.08] Now, I know that you're into DevOps, like big time into DevOps, but I don't know why.
188
+ [631.38 --> 632.48] Why are you into DevOps?
189
+ [633.98 --> 635.12] Very good point.
190
+ [635.60 --> 639.74] When I started my career, I was a .NET developer.
191
+ [640.46 --> 643.32] And this was back in 2002.
192
+ [644.34 --> 650.04] And we were doing the development of rich client applications.
193
+ [650.04 --> 658.88] And one of the things also in the early times which struck me is how can I ensure the quality of what I'm doing?
194
+ [659.40 --> 665.78] And yeah, of course, you could do testing also there, but it was not so automated.
195
+ [665.78 --> 669.84] And I was always a little bit lazy and I wanted to automate things.
196
+ [669.84 --> 678.06] So, I went into this area where we were starting automating the tests and then also the deployment.
197
+ [678.24 --> 681.20] So, I went into continuous integration, continuous deployment.
198
+ [681.20 --> 685.28] And the applications were getting bigger and distributed.
199
+ [685.64 --> 687.64] I was becoming an architect.
200
+ [688.14 --> 693.34] And slowly, I moved into the direction of these continuous delivery pipelines.
201
+ [694.06 --> 708.76] And when the whole DevOps movement started, I jumped on that because this was really one of my hard topics where I wanted to create these pipelines to continuously deliver value to the customer.
202
+ [708.76 --> 721.68] So, my understanding is that you were passionate about how value gets delivered, which got you into DevOps, which seems to have that almost at the center of its activity.
203
+ [722.02 --> 727.90] How do you move this code from a repository into customer hands, wherever that may be?
204
+ [728.12 --> 732.14] And there's like a whole lot of automation because you can do it manually, but there is a better way.
205
+ [732.36 --> 733.32] And automating that.
206
+ [733.52 --> 734.22] Okay, interesting.
207
+ [734.62 --> 737.04] Which was your first CI-CD system that you used?
208
+ [737.14 --> 737.52] Do you remember?
209
+ [737.52 --> 746.38] The absolute first CI-CD system was actually a command line that I used.
210
+ [746.60 --> 747.70] Yeah, definitely.
211
+ [748.02 --> 753.76] But I think what you want to ask is more the first product that I used.
212
+ [753.90 --> 757.34] And this was, I think, Team Foundation Server.
213
+ [757.70 --> 761.82] So, when you mentioned the command line, did you mean like rsync or FTP or SCP?
214
+ [762.20 --> 764.04] What exactly did you do on the command line?
215
+ [764.28 --> 765.38] Different things.
216
+ [765.38 --> 774.38] For example, I had some scripts, command-line scripts, which I used to just compile or execute the tests.
217
+ [774.38 --> 782.28] So, my first build system was a batch file on my local computer, which I just could double click.
218
+ [782.74 --> 785.24] And then it executed the tests.
219
+ [785.24 --> 789.98] It compiled my code and it says, yeah, everything is okay.
220
+ [789.98 --> 792.60] And there is the deployable artifact.
221
+ [793.10 --> 802.38] And when it went into the distributed system, I usually added also an FTP where I just could move the code to the server.
222
+ [802.58 --> 804.88] And then it was on the server.
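
The "double-click build" described here - compile, run the tests, produce an artifact, then copy it to the server over FTP - can be pictured as one small script. The sketch below is purely illustrative: the commands, paths, hostname and credentials are invented stand-ins, not taken from the speaker's actual setup.

```python
#!/usr/bin/env python3
"""Rough sketch of an early-2000s 'double-click build': compile, test,
package an artifact, and push it to the server over FTP."""

import shutil
import subprocess
import sys
from ftplib import FTP


def run(step: str, cmd: list[str]) -> None:
    """Run one build step and stop the whole 'pipeline' if it fails."""
    print(f"--- {step}: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{step} failed - stopping the build.")


def main() -> None:
    run("compile", ["msbuild", "MyApp.sln"])            # stand-in for the real build command
    run("test", ["nunit3-console", "MyApp.Tests.dll"])  # stand-in for the test runner

    # Produce the deployable artifact (dist/app.zip) from the publish folder.
    artifact = shutil.make_archive("dist/app", "zip", "publish")

    # 'Deployment', early-2000s style: copy the artifact to the server via FTP.
    with FTP("deploy.example.com") as ftp:              # hypothetical server
        ftp.login(user="deployer", passwd="secret")     # hypothetical credentials
        with open(artifact, "rb") as f:
            ftp.storbinary("STOR app.zip", f)
    print("Build OK - artifact is on the server.")


if __name__ == "__main__":
    main()
```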
223
+ [805.10 --> 805.38] Okay.
224
+ [805.58 --> 806.70] What about today?
225
+ [806.84 --> 807.60] What do you use today?
226
+ [807.60 --> 811.48] Today, I use quite a variety of systems.
227
+ [811.92 --> 816.42] One of the products which I love is still TeamCity.
228
+ [816.62 --> 820.58] I love that quite a lot because you can do a lot of configuration.
229
+ [820.96 --> 827.48] I use usually TeamCity together with Octopus Deploy, which I also love as a tool.
230
+ [827.48 --> 836.60] But I see quite a strong movement in the moment into the direction of platforms like GitHub and GitLab.
231
+ [837.04 --> 853.90] So, at the moment, when I look at the clients which I'm working for, they are moving in this direction, away from Jenkins, Octopus Deploy, TeamCity, CircleCI and the like, and in the direction of GitLab and GitHub.
232
+ [853.90 --> 855.92] So, these are now the big players.
233
+ [856.60 --> 859.78] Customers are going into this direction because there you have a platform.
234
+ [860.30 --> 867.06] Everything is there and you don't need to deal with different tools which you need to stick together.
235
+ [867.50 --> 868.28] That's interesting.
236
+ [868.72 --> 878.32] So, I think that now we are starting to discover another side of Romano because we know that you can put up a conference really well as well as a meetup.
237
+ [878.44 --> 879.50] But you also do other things.
238
+ [879.50 --> 883.90] So, when you don't organize various DevOps-related events, what do you do?
239
+ [883.96 --> 884.94] Because you mentioned customers.
240
+ [885.06 --> 887.32] There's more to it, right, than organizing events.
241
+ [887.52 --> 888.72] What exactly do you do?
242
+ [889.78 --> 890.08] Yeah.
243
+ [890.32 --> 893.18] I'm the head of DevOps at Zühlke.
244
+ [893.72 --> 898.66] And there I have a whole unit of DevOps engineers and DevOps consultants.
245
+ [899.32 --> 902.50] And I bring DevOps forward at Zühlke.
246
+ [902.50 --> 905.28] So, there is one side at Zühlke.
247
+ [905.56 --> 914.40] I do a lot of training of people in the direction of DevOps so that we can deliver better quality, better software to our clients.
248
+ [914.88 --> 918.10] And on the other side, I'm also in client projects.
249
+ [918.76 --> 928.72] And there I am usually in projects where we are doing an IT or an agile transformation, or where we are doing a DevOps transformation.
250
+ [928.72 --> 933.76] And I consult different clients in this direction.
251
+ [933.76 --> 936.48] And I also educate their people.
252
+ [936.84 --> 936.92] Okay.
253
+ [937.20 --> 938.62] So, how large is your team?
254
+ [938.72 --> 944.58] The team is roughly, at the moment, I think, 31 people at Zühlke.
255
+ [944.72 --> 947.08] But this is only in Switzerland.
256
+ [947.32 --> 950.98] Of course, there are other people around the world which are also doing DevOps.
257
+ [950.98 --> 951.48] Yeah.
258
+ [951.84 --> 961.22] So, your team is 31 DevOps engineers that work with various customers that you consult, help with DevOps-related projects.
259
+ [961.46 --> 962.70] How many projects do you have?
260
+ [962.92 --> 963.52] Quite a lot.
261
+ [963.74 --> 971.18] And the different people which are in my team, they work for different customers and also with different engineers.
262
+ [971.18 --> 979.42] So, it is not only that these projects are under my responsibility, they are under different people's responsibility.
263
+ [979.84 --> 984.04] But my team members are working in these projects.
264
+ [984.58 --> 984.84] Okay.
265
+ [985.28 --> 994.24] So, I'm thinking that you must have seen many projects this year that went well, as well as many projects which didn't go so well.
266
+ [994.50 --> 997.24] Is it something that you can talk about without giving any names?
267
+ [997.24 --> 1000.74] We don't have to give any names, but things that worked well, things that didn't go so well.
268
+ [1000.94 --> 1001.70] What do you think about that?
269
+ [1001.94 --> 1002.56] Yeah, sure.
270
+ [1002.86 --> 1013.24] When we think about things that went well, there is one in particular, at a customer where we are driving a whole transformation.
271
+ [1014.04 --> 1019.26] And what we did there is they are going through an agile transformation.
272
+ [1020.20 --> 1025.58] And one of the things you need to have for an agile transformation is a technical foundation.
273
+ [1025.58 --> 1029.42] So, that you can do this transformation.
274
+ [1030.42 --> 1033.72] And we were thinking how we can do that.
275
+ [1033.98 --> 1043.10] And we built up an agile release train for that with different teams in there, which were focusing on different aspects of this transformation.
276
+ [1043.10 --> 1048.14] There is one team which is more focusing on the governance part.
277
+ [1048.26 --> 1055.46] One team which is more focusing on, for example, the continuous integration and continuous delivery pipeline.
278
+ [1055.66 --> 1060.12] And one team which is more focusing on the containerization.
279
+ [1060.52 --> 1061.68] So, it's about that.
280
+ [1062.16 --> 1064.32] And that worked very well.
281
+ [1064.60 --> 1068.18] We are now in the fourth PI, which we are doing.
282
+ [1068.18 --> 1070.44] And that's quite cool.
283
+ [1070.82 --> 1074.84] And we had also a very, very, very good learning in there.
284
+ [1075.06 --> 1078.70] From the beginning, we identified who the customer is.
285
+ [1078.78 --> 1082.28] And we said, we want to deliver to this customer.
286
+ [1082.60 --> 1090.56] And we said, okay, everything what we are doing must be something that the customer can use.
287
+ [1090.56 --> 1091.88] And we started with that.
288
+ [1092.62 --> 1101.92] The thing was that in the first sprints, which we did, we saw that we were delivering to the customer, but only to the customer.
289
+ [1102.22 --> 1103.94] But the customer was not using it.
290
+ [1104.14 --> 1105.12] So, we changed that.
291
+ [1105.26 --> 1106.38] And we said, no, no.
292
+ [1106.62 --> 1110.08] From now on, the customer needs to use it.
293
+ [1110.16 --> 1115.64] So, we put that in the definition of done, that the customer needs to take that over.
294
+ [1115.64 --> 1124.04] But that was also not enough because we also had our system demos or our review meetings where we showed that.
295
+ [1124.34 --> 1125.82] And that was not enough.
296
+ [1126.16 --> 1133.16] And now we said, okay, when we are demoing stuff, it's not our people who demo it.
297
+ [1133.44 --> 1138.14] The customer needs to do the demonstration of how he uses it.
298
+ [1138.60 --> 1141.90] And that's a very strong thing you can do.
299
+ [1141.90 --> 1145.48] So, always deliver directly to the customer.
300
+ [1145.70 --> 1150.24] Let the customer show what you have delivered and how he is using it.
301
+ [1150.48 --> 1155.26] And that would also be one of my recommendations to do in the future.
302
+ [1155.26 --> 1172.16] What's going on, shippers?
303
+ [1172.34 --> 1177.98] Our friends at Fastly are running an amazing promo with massive savings on Compute at Edge.
304
+ [1178.16 --> 1182.46] They're inviting our entire listener base to move latency-sensitive workloads to the edge.
305
+ [1182.46 --> 1188.44] Compute at Edge free for three months, plus up to $100,000 a month in credit for an additional six months.
306
+ [1188.94 --> 1196.90] This is a limited-time offer, so head to Fastly.com slash podcast as soon as you can to check it out and get all the details.
307
+ [1197.34 --> 1198.26] Here's the TLDR.
308
+ [1198.80 --> 1205.18] Fastly's Edge Cloud Network and modern approach to serverless computing allows you to deploy and run complex logic at the edge
309
+ [1205.18 --> 1209.16] with unparalleled security and blazing fast computational speed.
310
+ [1209.16 --> 1216.72] Scale instantly and globally, reduce origin load, get real-time observability, and get seamless integration with your existing tech stack.
311
+ [1217.08 --> 1220.96] Head to Fastly.com slash podcast to get Compute at Edge free for three months,
312
+ [1221.12 --> 1224.60] plus up to $100,000 a month in credit for an additional six months.
313
+ [1225.08 --> 1227.46] Once again, Fastly.com slash podcast.
314
+ [1238.56 --> 1253.70] So when it comes to the biggest obstacles, the biggest challenges to driving DevOps transformations or successful DevOps projects,
321
+ [1254.32 --> 1254.96] what are they?
322
+ [1255.06 --> 1256.82] What did you come across in your experience, Romano?
323
+ [1256.82 --> 1264.02] So one of the biggest obstacles that is out there is actually the middle management.
324
+ [1264.02 --> 1271.10] What you can see is you have a lot of companies which are organized in different units.
325
+ [1271.58 --> 1274.96] And these units, they have goals.
326
+ [1275.20 --> 1277.98] And of course, there is a head of this unit.
327
+ [1278.30 --> 1283.62] And he has built that unit up and he is chasing his goals.
328
+ [1284.36 --> 1291.00] But one of the problems that we usually see is that there is a lot of misalignment between these units,
329
+ [1291.00 --> 1293.62] or you can also call it silos.
330
+ [1293.62 --> 1298.16] So now, with the agile transformation or with the DevOps transformation,
331
+ [1298.36 --> 1303.64] you are starting to align the people around the value stream.
332
+ [1304.14 --> 1306.28] And you bring people together.
333
+ [1306.86 --> 1315.76] And this means that some of these heads of these units, they are losing power.
334
+ [1315.76 --> 1318.56] And they know that.
335
+ [1318.76 --> 1319.60] They see that.
336
+ [1320.18 --> 1324.02] And this is something they don't want to have.
337
+ [1324.38 --> 1326.80] So they want to still be in charge.
338
+ [1326.90 --> 1328.36] They want to have their budget.
339
+ [1328.88 --> 1333.32] They want to say how things are done.
340
+ [1334.10 --> 1336.28] And that's a big challenge.
341
+ [1336.28 --> 1340.78] And I can also fully understand these people.
342
+ [1341.44 --> 1344.42] But sometimes they are completely in the way.
343
+ [1345.02 --> 1351.42] Or they are even attacking an agile transformation or a DevOps transformation.
344
+ [1351.86 --> 1357.48] Or they are doing stuff which makes it very difficult to push it through.
345
+ [1358.06 --> 1361.46] And that's a big obstacle that I see.
346
+ [1361.46 --> 1366.68] And it is very important to bring them on board, to educate them,
347
+ [1367.00 --> 1373.30] and to also show them what their new job will look like in the future.
348
+ [1373.68 --> 1376.86] So how do you succeed with that, like bringing them on board?
349
+ [1377.16 --> 1378.36] How does that even look like?
350
+ [1378.68 --> 1385.44] So one of the things you need to analyze is what goals these people have.
351
+ [1385.44 --> 1391.56] And usually it's not their goals, it's the goals of their bosses.
352
+ [1391.96 --> 1394.26] And you need to change these goals.
353
+ [1394.90 --> 1399.18] So when we look at an agile transformation or a DevOps transformation,
354
+ [1399.76 --> 1403.60] it is very crucial that it comes from the top management.
355
+ [1404.36 --> 1410.92] And the top management has a clear vision and also clear guidance what they want to do
356
+ [1410.92 --> 1412.78] and in which direction they want to go.
357
+ [1412.78 --> 1416.76] And they need to change the goals of these people.
358
+ [1417.22 --> 1422.24] Only by doing that, you can change how these people are behaving.
359
+ [1422.98 --> 1423.58] Okay.
360
+ [1424.22 --> 1428.18] And what about the type of person that doesn't want to change?
361
+ [1428.52 --> 1429.22] What do you do then?
362
+ [1429.42 --> 1433.26] This is a very difficult case when you have that.
363
+ [1433.64 --> 1440.50] The only thing you can do in that case is try to educate, try to convince.
364
+ [1440.50 --> 1445.88] But if you can't, he or she potentially needs to leave the company.
365
+ [1446.14 --> 1446.36] I see.
366
+ [1446.94 --> 1447.20] Okay.
367
+ [1447.82 --> 1453.00] So coming back to the agile transformation and the DevOps transformation that you mentioned,
368
+ [1453.40 --> 1458.34] the first example that you gave us of what good looks like in practice
369
+ [1458.34 --> 1465.72] is when you connect the value that the team builds to the customer or customers they build it for
370
+ [1465.72 --> 1471.30] and have the customer not even verify, but make sure the value is what they expect it to be.
371
+ [1471.78 --> 1472.78] So that's what good looks like.
372
+ [1472.90 --> 1474.44] So is there more to it?
373
+ [1474.80 --> 1480.42] Or is this basically the core of what you're referring to when you say agile and DevOps transformation?
374
+ [1480.42 --> 1481.50] It's more.
375
+ [1481.66 --> 1488.50] One of the most important things that I always say is what you usually have is you have bright ideas.
376
+ [1489.04 --> 1490.24] The business has ideas.
377
+ [1490.48 --> 1492.80] The customer has bright ideas.
378
+ [1493.26 --> 1496.18] And usually you have a lot of these ideas.
379
+ [1496.80 --> 1501.74] And what you want to do is you want to transform these ideas into value.
380
+ [1501.96 --> 1504.44] Value for the customer, value for the company.
381
+ [1504.44 --> 1509.26] So behind an idea, there is always a hypothesis.
382
+ [1509.88 --> 1515.62] And you need to identify this hypothesis, which is behind this idea.
383
+ [1515.74 --> 1521.18] For example, a hypothesis can be when we bring this feature or this shiny new mobile app,
384
+ [1521.46 --> 1525.84] then we can have 10% more turnover.
385
+ [1526.74 --> 1529.64] So that could be the hypothesis behind that.
386
+ [1529.64 --> 1538.84] And now the important thing is to find out what is the minimal thing we need to do to prove this hypothesis.
387
+ [1539.94 --> 1547.38] And this is a very important thing to do because with that, you can reduce the batch sizes which you have.
388
+ [1547.72 --> 1554.48] So by analyzing what is the minimal thing, you identify the minimal viable product.
389
+ [1554.48 --> 1563.28] And you need to also identify what are the leading indicators which indicate us that we are on the right track
390
+ [1563.28 --> 1568.96] and that this hypothesis is true and we should invest more money into that.
391
+ [1569.90 --> 1579.66] And by doing that, you can reduce massively the batch size and also the amount of work which is going through your value stream.
392
+ [1580.26 --> 1583.08] And that's what you really need to do.
393
+ [1583.08 --> 1589.88] You need to do less, but you need to do the things which you are doing in the right way.
394
+ [1590.66 --> 1602.36] And by having these hypotheses and identifying them and also having an evaluation on is it the right thing which we are doing
395
+ [1602.36 --> 1612.76] and early also stop doing things, you can massively change the things you are doing and you can create more value for the customer.
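
One way to picture the approach outlined here - an idea, the hypothesis hidden behind it, the minimal experiment, and leading indicators that tell you whether to invest more or stop early - is a small data structure like the sketch below. It is only an illustration of the reasoning; the names, metrics and thresholds are invented, loosely following the "10% more turnover" example above.

```python
from dataclasses import dataclass


@dataclass
class Hypothesis:
    """The hypothesis behind an idea, plus how we will evaluate it."""
    idea: str                # e.g. "shiny new mobile app"
    belief: str              # what we expect to happen
    minimal_experiment: str  # the smallest thing that can prove or disprove it
    leading_indicator: str   # the metric watched while the experiment runs
    target: float            # the value that would confirm the hypothesis


def decide(h: Hypothesis, observed: float) -> str:
    """Invest more only when the leading indicator confirms the hypothesis."""
    if observed >= h.target:
        return f"'{h.idea}': {h.leading_indicator}={observed} meets {h.target} -> invest more"
    return f"'{h.idea}': {h.leading_indicator}={observed} below {h.target} -> stop early"


# Invented example for illustration only.
mobile_app = Hypothesis(
    idea="new mobile app",
    belief="a mobile app will raise turnover by 10%",
    minimal_experiment="ship a one-screen MVP to 5% of customers",
    leading_indicator="relative turnover per pilot user",
    target=1.10,
)

print(decide(mobile_app, observed=1.02))
```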
396
+ [1612.76 --> 1619.82] The way I understand that is ship less, more often and check if it works.
397
+ [1620.34 --> 1620.48] True.
398
+ [1621.44 --> 1621.96] Absolutely.
399
+ [1621.96 --> 1622.90] That's the way I'll summarize it.
400
+ [1623.08 --> 1623.82] Yeah, perfect.
401
+ [1623.82 --> 1630.54] And in that case, you want to optimize the shipping cycle as much as you can.
402
+ [1630.72 --> 1633.36] If it takes a day, try to go for an hour.
403
+ [1633.70 --> 1635.60] And if it takes an hour, try to go in for a minute.
404
+ [1636.10 --> 1637.16] No, that doesn't work.
405
+ [1637.58 --> 1639.76] I don't think you can ship it like in a minute.
406
+ [1639.84 --> 1641.26] Maybe if you have a function, maybe.
407
+ [1641.64 --> 1646.72] The point being, go as quickly as you can, but still it should feel like a comfortable pace.
408
+ [1646.90 --> 1647.00] Yeah.
409
+ [1647.00 --> 1650.10] You shouldn't feel like you're rushing things out.
410
+ [1650.60 --> 1652.94] The scientific method is really important.
411
+ [1653.62 --> 1655.00] Everything that you think is an assumption.
412
+ [1655.48 --> 1657.52] And by the way, it's most likely wrong.
413
+ [1657.78 --> 1662.86] But that's okay because the quicker you can iterate on that, the quicker you'll figure out what right looks like.
414
+ [1663.00 --> 1663.06] Yeah.
415
+ [1663.06 --> 1667.62] And once you know one right, then you'll have two, three.
416
+ [1667.74 --> 1670.52] And before you know it, you have like a set of things which work well together.
417
+ [1670.78 --> 1672.96] And that's the value that I see.
418
+ [1673.20 --> 1673.50] Absolutely.
419
+ [1674.00 --> 1678.88] One thing that I also find important, you said, yeah, go quicker.
420
+ [1679.40 --> 1680.60] Yes, this is right.
421
+ [1680.88 --> 1685.08] But you also need to recognize when it is enough.
422
+ [1685.34 --> 1688.02] And I think this is also quite an important thing.
423
+ [1688.02 --> 1691.46] You don't need to chase Google, Netflix or so.
424
+ [1691.46 --> 1701.32] It can be perfectly fine in your context to, for example, ship every day or every week or so.
425
+ [1701.50 --> 1707.64] You don't need to deploy to production every second like Amazon is doing it.
426
+ [1708.12 --> 1713.52] So I think there is always a sweet spot and you need to identify this sweet spot.
427
+ [1714.24 --> 1714.38] Yeah.
428
+ [1714.76 --> 1720.60] In my mind, any code, any feature that is built and it's not out there, it's inventory.
429
+ [1720.60 --> 1724.18] And we all know that zero inventory is the best type of inventory.
430
+ [1724.44 --> 1726.64] So whatever you have, just make sure it's out there.
431
+ [1726.76 --> 1728.32] Make sure people can start using it.
432
+ [1728.32 --> 1730.28] Even if it's not complete, it doesn't really matter.
433
+ [1730.50 --> 1731.42] Does it look right?
434
+ [1731.92 --> 1734.74] And if it looks right, you have a confidence, okay, I'm walking in the right direction.
435
+ [1734.86 --> 1736.14] Let's just keep adding on top of it.
436
+ [1736.14 --> 1745.56] But if you can verify those assumptions as early as possible, the chances of you going terribly wrong are much less.
437
+ [1745.94 --> 1748.56] You're less likely to go terribly wrong if you have that approach.
438
+ [1748.80 --> 1754.28] You will still go wrong, but that's okay as long as you can do those small course adjustments.
439
+ [1754.28 --> 1755.88] It's like driving on a motorway, right?
440
+ [1755.92 --> 1758.22] You do like a little bit of left and a little bit of right.
441
+ [1758.22 --> 1765.68] And if you have an autopilot, because we were talking about your Tesla, you can see those like very small steering wheel changes without you doing anything.
442
+ [1766.04 --> 1769.76] So that's what you want, like the small course adjustments continues.
443
+ [1770.22 --> 1772.28] And they happen every few seconds and it's okay.
444
+ [1772.62 --> 1772.96] Absolutely.
445
+ [1773.28 --> 1780.30] And one of the things that I usually see: in many companies, it is not allowed to make mistakes.
446
+ [1780.30 --> 1789.28] And this is a huge pity because when you always need to do things right, you cannot go that fast.
447
+ [1789.94 --> 1791.56] That's a huge problem.
448
+ [1791.78 --> 1797.78] And this is also a cultural shift and cultural shifts, they take a lot of time.
449
+ [1798.32 --> 1803.36] Can you think of an example when making a mistake was a great thing?
450
+ [1803.44 --> 1808.20] Where, by making a mistake or failing, people have learned from that failure.
451
+ [1808.20 --> 1813.10] And if they weren't allowed to make it in the first place, they wouldn't have had those learnings.
452
+ [1813.34 --> 1814.52] Can you think of such an example?
453
+ [1814.96 --> 1816.90] Yeah, of course I have such an example.
454
+ [1817.24 --> 1821.78] So in one of the projects, we needed to move fast.
455
+ [1822.28 --> 1831.26] And in order to move fast, we said, okay, we already have an application in place which was using server-side rendering.
456
+ [1831.86 --> 1834.48] But that was an old technology.
457
+ [1834.48 --> 1840.22] And we said, okay, we can use that, but the user experience will not be that good.
458
+ [1840.42 --> 1841.96] But we need to move fast.
459
+ [1842.56 --> 1854.22] So we made an architectural decision together with the business and said, okay, we will use this old technology and move on so that we can learn.
460
+ [1854.64 --> 1857.62] And we were building up the user interface with that.
461
+ [1857.62 --> 1864.16] But soon the business said, yeah, it looks okay, but we want to have a better user experience.
462
+ [1864.88 --> 1868.50] So a user experience specialist came to the project.
463
+ [1868.64 --> 1872.78] He designed quite some very nice user interfaces.
464
+ [1872.78 --> 1877.40] And we said, phew, we don't know if we can implement that with this old technology.
465
+ [1877.40 --> 1880.34] But we tried and we failed.
466
+ [1880.44 --> 1881.56] We failed very hard.
467
+ [1881.68 --> 1882.84] We had a lot of bugs.
468
+ [1883.72 --> 1891.98] And this was the time when I went to the management and I said, look, management, we have the iceberg in front of us.
469
+ [1892.46 --> 1894.26] Now we have three possibilities.
470
+ [1894.60 --> 1895.18] We go left.
471
+ [1895.18 --> 1904.42] And this would mean we need to change the UI technology of the whole application to a modern UI technology.
472
+ [1904.98 --> 1907.14] I think it was Angular in that time.
473
+ [1907.14 --> 1908.68] Or we go right.
474
+ [1909.20 --> 1918.18] And right is we stay with this technology, which we are having, but it will not look fancy.
475
+ [1918.18 --> 1922.54] And we remove everything that we just added, all that fanciness.
476
+ [1923.12 --> 1928.02] And it's just user interfaces with loading time and so on.
477
+ [1928.10 --> 1934.80] Or we go straight through the iceberg and say, no, we want that with this old technology.
478
+ [1935.26 --> 1938.40] But it can be that we will fail very hard.
479
+ [1939.04 --> 1941.46] And we had a very good discussion about that.
480
+ [1941.66 --> 1943.82] And we said, we take the risk.
481
+ [1944.14 --> 1946.18] We go to a new UI technology.
482
+ [1946.18 --> 1952.36] Of course, we made some silly estimations, which were absolutely wrong.
483
+ [1952.94 --> 1955.04] We completely underestimated it.
484
+ [1955.48 --> 1959.70] But in the end, it was the right decision, which we did.
485
+ [1960.10 --> 1961.94] And the thing is, is the following.
486
+ [1962.32 --> 1965.40] In the beginning, we started with this old technology.
487
+ [1965.84 --> 1969.78] And that was, of course, when you look back, a bad decision.
488
+ [1969.78 --> 1978.38] But it enabled us to learn very fast what we really wanted or what the customer really wanted.
489
+ [1979.08 --> 1982.54] And we were able to see, okay, it looks like that.
490
+ [1982.82 --> 1985.24] We added the whole user experience.
491
+ [1985.44 --> 1988.66] We saw with this technology, we were not able to do it.
492
+ [1988.66 --> 1993.26] And we were able to see that we now need to change.
493
+ [1993.54 --> 1998.04] Of course, now you can say, yeah, you could see that already in the beginning.
494
+ [1998.04 --> 2000.64] And you could change that already in the beginning.
495
+ [2000.86 --> 2002.86] But in my opinion, it was not feasible.
496
+ [2003.46 --> 2003.58] Yeah.
497
+ [2003.82 --> 2005.82] That's a really interesting story.
498
+ [2005.82 --> 2015.60] And I'm wondering what the story would have been had the developers maybe stumbled across something like LiveView, which is server-side rendering.
499
+ [2015.92 --> 2020.38] But it's a modern server-side rendering, which exists in the Elixir ecosystem.
500
+ [2020.72 --> 2022.04] It's all running on the Erlang VM.
501
+ [2022.30 --> 2023.08] Very efficient.
502
+ [2023.46 --> 2025.98] It keeps JavaScript at a minimum.
503
+ [2026.38 --> 2030.32] So you don't have to end up in the NPM hell, as some call it.
504
+ [2030.48 --> 2031.46] You can keep things simple.
505
+ [2031.56 --> 2033.68] You can keep things server-side rendered.
506
+ [2034.12 --> 2034.96] But it's still fast.
507
+ [2035.02 --> 2035.54] It's still modern.
508
+ [2035.54 --> 2038.40] So I'm wondering what that would have looked like with this technology.
509
+ [2038.52 --> 2040.44] But obviously, you need to know your technology.
510
+ [2040.62 --> 2042.06] You need to know what suits you.
511
+ [2042.20 --> 2043.34] And you need to own it.
512
+ [2043.60 --> 2047.70] So whatever you decide to use, you need to be confident that I will make this work.
513
+ [2047.86 --> 2049.62] And if it doesn't work, I will course correct.
514
+ [2050.08 --> 2051.38] Because, hey, I was wrong.
515
+ [2051.50 --> 2052.38] And that's perfectly fine.
516
+ [2052.46 --> 2059.90] Saying I was wrong, I think a lot of people are so afraid of saying they were wrong that they never admit that in the first place.
517
+ [2060.18 --> 2061.72] And as a result, they can never course correct.
518
+ [2062.14 --> 2063.18] And then they hit the iceberg.
519
+ [2063.38 --> 2064.78] And then we know what happens next.
520
+ [2064.78 --> 2065.38] Right?
521
+ [2065.92 --> 2066.18] Yeah.
522
+ [2066.30 --> 2066.54] Okay.
523
+ [2066.94 --> 2072.16] And one of the important things is you should not be afraid of the sunk cost.
524
+ [2072.30 --> 2074.82] Because that's always a bad thing.
525
+ [2074.90 --> 2078.78] And you always hear that term quite a lot.
526
+ [2078.86 --> 2078.96] Yeah.
527
+ [2079.04 --> 2080.60] But then we have sunk cost.
528
+ [2081.36 --> 2081.50] Yeah.
529
+ [2081.50 --> 2083.40] Of course, you have sunk cost.
530
+ [2083.40 --> 2091.00] But throwing more money after a bad idea or a bad solution is also a very, very bad thing.
531
+ [2091.20 --> 2091.36] Yeah.
532
+ [2091.42 --> 2092.74] It's not going to make it better, right?
533
+ [2092.74 --> 2094.66] Like, the focus is on learning.
534
+ [2095.14 --> 2098.14] The focus is not on the time spent to learn.
535
+ [2098.44 --> 2099.12] What did you learn?
536
+ [2099.20 --> 2100.04] Is this a good thing?
537
+ [2100.22 --> 2101.60] And can you build on top of that?
538
+ [2101.98 --> 2105.16] So if you switch your mindset and you think, well, that's okay.
539
+ [2105.26 --> 2106.56] We know not to do that again.
540
+ [2106.56 --> 2110.08] And we know that that's like an area that we're not comfortable with.
541
+ [2110.24 --> 2112.68] And the longer you delay it, the worse it gets.
542
+ [2113.10 --> 2114.04] We all know that, right?
543
+ [2114.34 --> 2116.72] Like, just stop thinking about things like that.
544
+ [2117.10 --> 2117.26] Okay.
545
+ [2117.26 --> 2124.22] Now, talking about technology, I'm wondering what role does specific technology play in
546
+ [2124.22 --> 2124.78] these decisions?
547
+ [2125.22 --> 2129.82] So I know that many teams get excited about something like Kubernetes or they get excited
548
+ [2129.82 --> 2131.20] about, you mentioned Angular.
549
+ [2131.48 --> 2134.92] I'm not sure who gets excited about Angular these days, but I'm sure there are people out
550
+ [2134.92 --> 2138.68] there which love it or some other, you know, JavaScript framework.
551
+ [2138.68 --> 2140.28] And they say, no, we have to use this.
552
+ [2140.62 --> 2143.86] How do you deal with those types of scenarios?
553
+ [2143.94 --> 2146.64] Well, first of all, have you been in those types of scenarios?
554
+ [2146.64 --> 2149.10] And if you have, how did you deal with them successfully?
555
+ [2149.38 --> 2153.62] I have been in these types of scenarios quite a lot.
556
+ [2153.82 --> 2155.04] The thing is the following.
557
+ [2155.38 --> 2162.16] What you need to do is understand what the real need is.
558
+ [2162.80 --> 2166.16] So getting excited about the technology is a great thing.
559
+ [2166.62 --> 2173.32] Trying out this technology is also a great thing, but you should not do that in a huge
560
+ [2173.32 --> 2173.90] project.
561
+ [2174.34 --> 2176.22] So trying out things.
562
+ [2176.64 --> 2184.40] So what I usually do is I really want to understand what exactly our need is and what
563
+ [2184.40 --> 2186.66] problem we are trying to solve.
564
+ [2186.78 --> 2191.06] So what is the underlying problem we are trying to solve with this technology?
565
+ [2191.06 --> 2196.32] And there is technology out there which perfectly fits the problem.
566
+ [2196.52 --> 2204.72] But just looking at the technology and not knowing what problems we are trying to solve is a very
567
+ [2204.72 --> 2205.50] bad thing.
568
+ [2205.96 --> 2212.58] So what I do is when we have such a case, I really try to identify what the problem is.
569
+ [2212.58 --> 2217.90] And of course, you then have different technology or different decisions you can do.
570
+ [2218.34 --> 2228.94] And what I then do is I do sort of an analysis of the different possibilities where I say, OK,
571
+ [2229.14 --> 2230.62] this is technology X,
572
+ [2230.62 --> 2231.62] and this is technology Y.
573
+ [2231.62 --> 2234.22] And this has these advantages.
574
+ [2234.22 --> 2236.86] It solves us these problems.
575
+ [2236.86 --> 2240.64] But it also could potentially introduce these problems.
576
+ [2240.64 --> 2244.14] And this is the other technology which we are having.
577
+ [2244.14 --> 2251.02] And then you have something which you can compare and which you also can say, OK, should we go
578
+ [2251.02 --> 2254.80] into this direction or should we go in more in this direction?
579
+ [2254.80 --> 2264.16] And after that, I usually also do some prototypes on these technologies to get my hands dirty on that so
580
+ [2264.16 --> 2267.92] that I can see, does it really work or does it not work?
581
+ [2268.48 --> 2271.06] Who decides which technology should be used?
582
+ [2271.12 --> 2276.22] Do you let the developers decide the ones doing the work or do you let the architects decide or the
583
+ [2276.22 --> 2276.56] management?
584
+ [2276.92 --> 2277.68] How does that look like?
585
+ [2277.80 --> 2282.64] In my opinion, it should always be a decision of the team.
586
+ [2282.64 --> 2289.76] The team which needs to work with this technology, they need to take the decision.
587
+ [2290.40 --> 2297.62] Because if someone else takes the decision, the team does not stand behind this decision.
588
+ [2297.86 --> 2306.36] So that's why I usually want the team to take the decision and also do the analysis and
589
+ [2306.36 --> 2306.94] everything.
590
+ [2306.94 --> 2313.90] And so they sort of need to come up with the idea and also with the decision.
591
+ [2314.26 --> 2318.40] Of course, there are companies out there where this is not really possible.
592
+ [2319.32 --> 2322.16] And then I also try to do that.
593
+ [2322.42 --> 2328.98] But then I try to convince, for example, the central architecture or the management about
594
+ [2328.98 --> 2331.24] this solution which we should do.
595
+ [2336.94 --> 2342.90] Hey, shippers.
596
+ [2343.06 --> 2346.42] This episode is brought to you by our friends at Equinix Metal.
597
+ [2346.72 --> 2351.02] If you want the choice and control of hardware with low overhead and the developer experience
598
+ [2351.02 --> 2353.32] of the cloud, check out Equinix Metal.
599
+ [2353.72 --> 2357.78] Deploy in minutes across 18 global locations, from Silicon Valley to Sydney.
600
+ [2358.22 --> 2363.16] Visit metal.equinix.com slash just add metal and receive $100 in credit to play with.
601
+ [2363.16 --> 2367.12] Again, metal.equinix.com slash just add metal.
602
+ [2367.34 --> 2369.84] And by our friends at FireHydrant.
603
+ [2369.98 --> 2372.88] FireHydrant is the reliability platform for teams of all sizes.
604
+ [2373.40 --> 2378.62] With FireHydrant, teams achieve reliability at scale by enabling speed and consistency from
605
+ [2378.62 --> 2381.08] a service deployment to an unexpected outage.
606
+ [2381.42 --> 2381.86] Here's the thing.
607
+ [2381.92 --> 2385.18] When your team learns from an incident, you can codify those learnings into repeatable
608
+ [2385.18 --> 2386.28] automated runbooks.
609
+ [2386.64 --> 2390.88] And these runbooks can create a Slack incident channel, notify particular team members, create
610
+ [2390.88 --> 2395.00] tickets, schedule a Zoom meeting, execute a script, or send a web hook.
611
+ [2395.32 --> 2396.06] Here's how it works.
612
+ [2396.24 --> 2399.98] Your app goes down, an alert gets sent to a specific Slack channel, which can then be
613
+ [2399.98 --> 2400.98] turned into an incident.
614
+ [2401.36 --> 2404.48] That will trigger a workflow you've created already in a runbook.
615
+ [2404.78 --> 2409.44] A pinned message inside Slack will show all the details, the Jira or Clubhouse ticket,
616
+ [2409.74 --> 2410.48] the Zoom meeting.
617
+ [2410.78 --> 2415.32] And all of this is contained in your dedicated incident channel that everyone on the team
618
+ [2415.32 --> 2416.10] pays attention to.
619
+ [2416.36 --> 2419.46] Now you're spending less time thinking about what to do next, and you're getting to work
620
+ [2419.46 --> 2421.44] actually resolving the issue faster.
621
+ [2422.02 --> 2425.70] What would normally be manual tickets across the entire spectrum of responding to an incident
622
+ [2425.70 --> 2429.18] can now be automated in every single way with FireHydrant.
623
+ [2429.46 --> 2430.56] And here's the best part.
624
+ [2430.74 --> 2432.52] You can try it free for 14 days.
625
+ [2432.64 --> 2434.40] You get access to every single feature.
626
+ [2434.78 --> 2436.02] No credit card required at all.
627
+ [2436.28 --> 2439.54] That way you can prove to yourself and your team that this works for you.
628
+ [2439.54 --> 2441.98] Get started at FireHydrant.io.
629
+ [2442.34 --> 2444.70] Again, FireHydrant.io.
630
+ [2444.70 --> 2466.60] I know, Romano, that you have a YouTube channel, which is growing in popularity.
631
+ [2466.86 --> 2468.18] I've seen some really good videos.
632
+ [2468.18 --> 2475.46] And I've checked today, and your most popular video today is what are the DevOps trends that
633
+ [2475.46 --> 2477.12] you have seen in 2021.
634
+ [2477.62 --> 2479.80] I think 2021 DevOps trends, something like that.
635
+ [2479.86 --> 2482.54] I forget the exact title, but it was DevOps trends for 2021.
636
+ [2483.22 --> 2485.70] So why do you think that video is so popular?
637
+ [2486.02 --> 2488.94] Actually, I really don't know why it is so popular.
638
+ [2488.94 --> 2495.08] But I made some analysis, and it looks like people are Googling this title.
639
+ [2495.36 --> 2498.32] So they want to know what the trends are.
640
+ [2498.58 --> 2499.12] So what are they?
641
+ [2499.22 --> 2500.60] Can you tell us what they are?
642
+ [2500.94 --> 2501.54] Yeah, of course.
643
+ [2501.74 --> 2508.24] So the trends which I brought up in 2021 was it's all about automation.
644
+ [2508.72 --> 2511.02] So we need to automate more.
645
+ [2511.02 --> 2514.70] So that's one of the trends that I pointed out.
646
+ [2515.04 --> 2515.52] Security.
647
+ [2515.88 --> 2520.50] So the whole DevSecOps, that was a huge one.
648
+ [2520.88 --> 2525.50] And AIOps, that was also one of the trends that I pointed out.
649
+ [2525.74 --> 2529.86] So we have a lot of data, and we need to deal with this data.
650
+ [2530.08 --> 2532.96] So AI is a very good match for that.
651
+ [2533.26 --> 2536.74] And these are the things that I see are coming up.
652
+ [2536.74 --> 2544.02] So when I look back to the statements that I did, I think I was absolutely right with these trends.
653
+ [2544.22 --> 2549.72] For example, when I look at AIOps, this is something that's coming quite huge.
654
+ [2549.98 --> 2557.72] I also started using AIOps in some areas, and the results are really amazing.
655
+ [2558.20 --> 2560.30] First of all, what is AIOps?
656
+ [2560.60 --> 2562.84] And second of all, how do you make use of that?
657
+ [2562.94 --> 2564.84] What does it look like in practice for you?
658
+ [2564.84 --> 2571.36] So AIOps - as I said, usually you put out quite a lot of log data.
659
+ [2571.70 --> 2573.12] So you have a lot of log statements.
660
+ [2573.26 --> 2577.28] And when you have a distributed system, you have distributed log files.
661
+ [2577.50 --> 2584.20] So first of all, you need to put that all together into one logging system, which you can have.
662
+ [2584.34 --> 2587.16] But then you have a lot of logging statements in there.
663
+ [2587.16 --> 2594.86] And it's impossible to really see where problems are or where trends are.
664
+ [2595.58 --> 2601.44] And here comes AIOps into play because AIOps can do pattern matching.
665
+ [2601.66 --> 2604.16] And there is a ton of tools out there.
666
+ [2604.38 --> 2606.90] I don't want to do advertising here.
667
+ [2607.50 --> 2608.26] But they are...
668
+ [2608.26 --> 2609.04] What do you use?
669
+ [2609.18 --> 2609.70] What do you use?
670
+ [2609.74 --> 2610.10] That's something...
671
+ [2610.10 --> 2614.06] I use, for example, I use quite a lot Dynatrace.
672
+ [2614.06 --> 2622.12] And I'm a huge fan of Dynatrace because we have some very difficult projects out there.
673
+ [2622.58 --> 2630.60] And we were chasing some performance problems and also some problems where suddenly something didn't work.
674
+ [2630.96 --> 2632.24] And we were not finding it.
675
+ [2632.24 --> 2636.08] And also with log file analysis, we were not finding it.
676
+ [2636.20 --> 2645.76] But by using Dynatrace, Dynatrace was able in minutes to point us to the correct server where the problem was.
677
+ [2645.80 --> 2649.32] And it was just a configuration problem on that server.
678
+ [2649.72 --> 2653.22] And we were like, whoa, how did that go?
679
+ [2653.34 --> 2659.56] And that's quite amazing how good these AIOps systems are already.
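
At its core, the "pattern matching over mountains of logs" idea is statistics over structured log lines: group errors by host or service and surface the outlier, which is roughly how a misconfigured server like the one in this story would stand out. The snippet below is a toy illustration of that idea only - it is not how Dynatrace or Datadog actually work, and the log lines are invented.

```python
from collections import Counter
import re

# Invented, already-centralized log lines in the form "<host> <level> <message>".
LOG_LINES = [
    "web-01 INFO request served in 120ms",
    "web-02 INFO request served in 131ms",
    "web-03 ERROR upstream connection refused",
    "web-03 ERROR upstream connection refused",
    "web-01 INFO request served in 118ms",
    "web-03 ERROR TLS handshake failed: bad certificate path",
    "web-02 WARN slow request: 950ms",
    "web-03 ERROR upstream connection refused",
]

LINE = re.compile(r"^(?P<host>\S+)\s+(?P<level>\S+)\s+(?P<msg>.*)$")

# Count ERROR lines per host.
errors_per_host = Counter()
for raw in LOG_LINES:
    m = LINE.match(raw)
    if m and m.group("level") == "ERROR":
        errors_per_host[m.group("host")] += 1

# Point at the host that produces a disproportionate share of the errors.
host, count = errors_per_host.most_common(1)[0]
total = sum(errors_per_host.values())
print(f"{host} produced {count}/{total} errors - start looking at its configuration")
```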
680
+ [2659.56 --> 2660.16] Okay.
681
+ [2661.16 --> 2663.52] Anything other than Dynatrace that you've used and you've liked?
682
+ [2663.66 --> 2665.36] I also use Datadog.
683
+ [2665.96 --> 2667.24] I like that also.
684
+ [2667.74 --> 2669.88] Beside of that, no, I cannot.
685
+ [2670.46 --> 2670.68] Okay.
686
+ [2670.94 --> 2675.72] So Datadog - were you using it in the same way, in that you were shipping logs to Datadog?
687
+ [2675.80 --> 2679.90] And then Datadog figured out what was going on in the system based on the logs?
688
+ [2680.16 --> 2680.60] Exactly.
689
+ [2681.24 --> 2681.54] Okay.
690
+ [2682.16 --> 2682.58] Interesting.
691
+ [2682.84 --> 2683.04] Okay.
692
+ [2683.38 --> 2684.80] We talked about AIOps.
693
+ [2684.80 --> 2692.28] Now, in automation, what tools do you find yourself reaching out for when you're automating things?
694
+ [2692.66 --> 2697.42] What is in your toolbox or what do you find maybe that your team likes to use?
695
+ [2697.80 --> 2697.96] Yeah.
696
+ [2698.08 --> 2708.16] What we quite often use when it comes to automation, when it comes to deployment automation, we use quite a lot of Octopus Deploy, of course.
697
+ [2708.16 --> 2721.08] And when it comes to CI, CD pipelines, then, of course, we use Jenkins, but also TeamCity, Azure DevOps is also a huge thing.
698
+ [2721.32 --> 2724.50] And, of course, GitHub and GitLab.
699
+ [2724.74 --> 2724.92] Okay.
700
+ [2725.22 --> 2725.46] Okay.
701
+ [2726.04 --> 2729.08] And other category was DevSecOps.
702
+ [2729.18 --> 2731.14] I think I would call it like supply chain security.
703
+ [2731.66 --> 2735.78] What tools do you use for supply chain, for securing the supply chain?
704
+ [2735.78 --> 2741.94] We need to understand that there are different aspects of security when we talk about DevSecOps.
705
+ [2742.36 --> 2745.10] One thing is the application security.
706
+ [2745.42 --> 2754.82] So, when we do continuous integration and our continuous integration server is compiling our source code, we do static code analysis.
707
+ [2755.52 --> 2757.78] There, for example, we use, of course, SonarQube.
708
+ [2758.68 --> 2762.80] Checkmarx is also one of the things you can use.
709
+ [2762.80 --> 2766.26] And, of course, there are also other tools.
710
+ [2767.02 --> 2771.40] Like, I think there is an OWASP tool, but I don't know the name anymore.
711
+ [2771.98 --> 2776.54] A ton of tools is out there to do just static code analysis.
712
+ [2777.04 --> 2784.98] And what you also need to do is you not only need to analyze your code, you also need to analyze the libraries.
713
+ [2785.70 --> 2788.02] And the libraries of the libraries, and so on.
714
+ [2788.14 --> 2788.62] Oh, yes.
715
+ [2788.94 --> 2789.40] Exactly.
716
+ [2789.52 --> 2790.14] That's a big one.
717
+ [2790.14 --> 2791.18] That's a big one.
718
+ [2791.32 --> 2794.06] And you need to identify these vulnerabilities there.
719
+ [2794.24 --> 2797.82] And there I usually use WhiteSource to do that.
720
+ [2798.48 --> 2805.98] Which is also quite good because you also get the information about the licensing, which is also a difficult thing.
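
To make the "libraries of the libraries" point concrete, here is a deliberately naive sketch of what a dependency check boils down to: flatten the resolved dependency tree and flag anything that appears on a list of known-vulnerable versions. Real tools of the kind mentioned above work against curated vulnerability and license databases; the packages, versions and advisory IDs below are invented for illustration.

```python
# (name, version) -> direct dependencies: a tiny, invented resolved dependency tree.
DEPENDENCY_TREE = {
    ("web-framework", "2.1.0"): [("template-engine", "1.4.2"), ("http-client", "0.9.1")],
    ("template-engine", "1.4.2"): [("sandbox-utils", "0.3.0")],
    ("http-client", "0.9.1"): [],
    ("sandbox-utils", "0.3.0"): [],
}

# What a vulnerability database boils down to for this sketch (invented advisories).
KNOWN_VULNERABLE = {
    ("sandbox-utils", "0.3.0"): "EXAMPLE-2021-0001: sandbox escape",
    ("http-client", "0.8.0"): "EXAMPLE-2020-0042: request smuggling",
}


def flatten(root, tree, seen=None):
    """Walk the tree so the libraries of the libraries are checked too."""
    seen = set() if seen is None else seen
    if root in seen:
        return seen
    seen.add(root)
    for dep in tree.get(root, []):
        flatten(dep, tree, seen)
    return seen


def audit(root):
    """Report every (transitive) dependency that is on the known-vulnerable list."""
    findings = [
        f"{name} {version}: {KNOWN_VULNERABLE[(name, version)]}"
        for name, version in sorted(flatten(root, DEPENDENCY_TREE))
        if (name, version) in KNOWN_VULNERABLE
    ]
    return findings or ["no known vulnerabilities found"]


for line in audit(("web-framework", "2.1.0")):
    print(line)
```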
721
+ [2806.18 --> 2806.64] Oh, yes.
722
+ [2806.98 --> 2807.48] Oh, yes.
723
+ [2807.50 --> 2808.20] That's a big one.
724
+ [2808.30 --> 2808.60] You're right.
725
+ [2808.70 --> 2814.22] Like, once you enter the enterprise world, these things, like, you don't even think about them as a startup.
726
+ [2814.22 --> 2817.94] But when you go in the enterprise, this is a big ticket item.
727
+ [2818.14 --> 2819.26] Really very important.
728
+ [2819.76 --> 2820.08] Exactly.
729
+ [2820.32 --> 2820.48] Exactly.
730
+ [2820.98 --> 2825.94] And the second thing you need to think of is, of course, when you are in production.
731
+ [2826.48 --> 2830.96] First of all, what you need to do is monitor your system.
732
+ [2830.96 --> 2836.42] And therefore, you need to have these enterprise security monitoring systems.
733
+ [2836.88 --> 2839.22] There is also a ton of products out there.
734
+ [2839.48 --> 2841.82] But usually what you use is Splunk.
735
+ [2842.02 --> 2850.14] You configure quite a good alerting together with the security experts so that you get alerted about any security vulnerabilities.
736
+ [2850.14 --> 2851.18] That's interesting.
737
+ [2851.44 --> 2855.60] So we have heard about the trends, the DevOps trends for 2021.
738
+ [2856.18 --> 2861.16] And you give us some great examples, some tools that, you know, you use in the various spaces.
739
+ [2861.68 --> 2865.50] I'm wondering, first of all, will you create a video for 2022?
740
+ [2866.18 --> 2866.50] Sure.
741
+ [2866.68 --> 2867.18] Of course.
742
+ [2867.48 --> 2867.78] Okay.
743
+ [2867.78 --> 2869.64] I'm currently preparing it.
744
+ [2869.84 --> 2873.32] I'm gathering all of the trends that I see at the moment.
745
+ [2873.58 --> 2877.76] And at the end of the year, I will create that video and publish it.
746
+ [2877.76 --> 2881.10] Can you give us a couple of hints as to what you're thinking about?
747
+ [2881.30 --> 2882.34] Again, this is a draft.
748
+ [2882.44 --> 2885.70] This is not a finished version, but a few things that you're thinking for this video.
749
+ [2885.94 --> 2886.08] Yeah.
750
+ [2886.56 --> 2893.32] So first of all, what I will do is I will look back to what I said in my 2021 video.
751
+ [2893.32 --> 2895.64] And I will have a look at that.
752
+ [2895.78 --> 2899.66] And then I will say what kind of trends I see in the future.
753
+ [2900.06 --> 2904.20] And one of the huge trends that I see is hyperautomation.
754
+ [2904.20 --> 2908.54] So it's not only about automating stuff.
755
+ [2908.82 --> 2911.84] It's about automating nearly everything.
756
+ [2912.34 --> 2916.84] So this is a huge trend that I'm seeing coming.
757
+ [2917.56 --> 2922.56] And with the hyperautomation, there is also another thing coming.
758
+ [2922.76 --> 2927.10] And this is you get a lot of data out of that and you need to monitor that.
759
+ [2927.10 --> 2930.94] And then you have, again, that big data problem.
760
+ [2931.18 --> 2939.34] And again, AIOps comes into play because with all of that automation, you also need to maintain that and you need to operate that.
761
+ [2939.80 --> 2944.16] So your topic, observability, will be quite a huge thing.
762
+ [2944.80 --> 2945.40] Interesting.
763
+ [2945.96 --> 2949.16] So hyperautomation, that is a great title.
764
+ [2949.84 --> 2952.36] I'm sure that we could do an episode just on that.
765
+ [2952.58 --> 2953.22] What it is?
766
+ [2953.32 --> 2954.24] Why is it important?
767
+ [2954.70 --> 2956.04] What elements do you see in that?
768
+ [2956.04 --> 2956.52] Interesting.
769
+ [2957.24 --> 2958.84] Okay, that's a great idea, I think.
770
+ [2959.30 --> 2962.06] Let's run it by the product team, I think.
771
+ [2962.76 --> 2964.92] Because you were talking about ideas, everybody has one.
772
+ [2965.02 --> 2968.62] So how do you figure out whether the ideas are good? You form a related hypothesis.
773
+ [2969.32 --> 2969.82] Is that something?
774
+ [2969.92 --> 2974.14] So maybe anyone listening to this can tell us if that's something they're excited about.
775
+ [2974.58 --> 2977.44] We can connect it to the users, to the end users, the ones listening.
776
+ [2977.60 --> 2980.52] And whether they would want us to do an episode on that.
777
+ [2980.70 --> 2981.36] I'm excited.
778
+ [2981.36 --> 2989.14] The next thing which we'll have is the whole cyber resilience topic.
779
+ [2989.14 --> 2994.92] So, of course, we have on one side, we have that DevSecOps thing.
780
+ [2995.06 --> 2999.22] So we bring security into the whole DevOps cycle.
781
+ [2999.22 --> 3011.64] But when you look at all of the attacks that are out there on companies, I think cyber resilience will be one of the big, big topics.
782
+ [3011.64 --> 3023.34] And I think together with DevSecOps, we will be able to give the companies this cyber resilience in their application, but also in their infrastructure.
783
+ [3024.62 --> 3025.10] Interesting.
784
+ [3025.50 --> 3028.24] I don't know enough about that topic, but it's something I'd like to do.
785
+ [3028.44 --> 3031.56] I would like to research just to understand it a bit better.
786
+ [3031.56 --> 3039.00] I know all the ransomware attacks and all the cyber attacks, they're becoming more and more prevalent and bigger, and they affect more and more users.
787
+ [3039.50 --> 3046.00] But I don't know enough about more details other than what you just get from afar.
788
+ [3046.38 --> 3049.60] So I think that's something I'd like to spend a bit more time in.
789
+ [3049.92 --> 3058.08] Switching subjects - I know that one topic that was top of your mind recently was how to allocate budget.
790
+ [3058.08 --> 3058.52] Yeah.
791
+ [3058.82 --> 3061.56] And I forget the exact phrase that you used.
792
+ [3061.60 --> 3062.42] It was a really good one.
793
+ [3062.66 --> 3063.58] Let me check that, actually.
794
+ [3063.96 --> 3065.06] Or you can tell me what it is.
795
+ [3065.22 --> 3065.36] Sure.
796
+ [3065.58 --> 3067.94] It's participatory budgeting.
797
+ [3068.40 --> 3068.76] Okay.
798
+ [3068.84 --> 3069.56] What is that?
799
+ [3070.02 --> 3071.82] What is participatory budgeting?
800
+ [3072.36 --> 3072.84] Exactly.
801
+ [3073.50 --> 3080.14] So participatory budgeting is a thing you can do to allocate budget.
802
+ [3080.74 --> 3085.90] So what is one of the big problems that we have when allocating a budget?
803
+ [3085.90 --> 3090.58] Usually, you have people who want to do stuff.
804
+ [3090.90 --> 3099.26] And on the other side, you have people who have the budget and who say where, which kind or which amount of budget gets.
805
+ [3099.76 --> 3109.78] The problem is that the people who have the budget don't really know what exactly the impact is of a certain topic
806
+ [3109.78 --> 3114.50] that the people who want to do something really have.
807
+ [3115.16 --> 3116.46] And that's a huge problem.
808
+ [3116.82 --> 3124.76] And what you usually get is the people who have the budget will just say, yeah, we just divide everything up.
809
+ [3125.06 --> 3128.16] And everybody gets the same amount.
810
+ [3128.48 --> 3132.10] And then everybody is sort of happy.
811
+ [3132.66 --> 3135.10] That's quite a bad thing you usually have.
812
+ [3135.10 --> 3139.72] And the better thing is to have that participatory budgeting.
813
+ [3139.80 --> 3140.60] That's an event.
814
+ [3141.04 --> 3147.64] And in this event, everybody who wants to have a budget and is part of a value stream comes together.
815
+ [3148.34 --> 3150.90] And they get allocated the budget.
816
+ [3151.08 --> 3152.28] They sit on a table.
817
+ [3152.64 --> 3155.06] They get the budget pot.
818
+ [3155.06 --> 3160.50] And then, at the table, they need to pitch for their budget.
819
+ [3160.88 --> 3170.40] And then, together, they have a participatory discussion on which areas we are going to invest the money in.
820
+ [3170.96 --> 3181.84] And that's a very, very cool thing because then the people are discussing about impact on value, how much value this topic brings.
821
+ [3181.84 --> 3195.86] And especially when you have, of course, OKRs or a strategy, they also bring up the strategy and say, hey, look, this initiative contributes more to the strategy than the other one.
822
+ [3195.86 --> 3200.32] So there is that entrepreneurial thinking which is coming up.
823
+ [3200.76 --> 3205.86] And they start to think like it is their own enterprise.
824
+ [3206.50 --> 3208.74] And they are more emotionally attached.
825
+ [3208.92 --> 3210.96] And in the end, you get a better budget.
826
+ [3210.96 --> 3212.94] That was a great summary.
827
+ [3213.22 --> 3215.10] I know that you gave a whole talk on this.
828
+ [3215.58 --> 3217.78] And based on that summary, I'm going to watch it.
829
+ [3218.00 --> 3218.70] So thank you for that.
830
+ [3220.54 --> 3220.98] Great.
831
+ [3221.90 --> 3223.84] So we are just about to wrap up.
832
+ [3224.10 --> 3226.48] I have one last very important question.
833
+ [3227.00 --> 3230.88] What is the most important takeaway for our listeners from our conversation?
834
+ [3231.32 --> 3232.68] What would you like them to remember?
835
+ [3233.06 --> 3234.34] A very good question.
836
+ [3234.34 --> 3240.04] I would say don't be afraid to take decisions.
837
+ [3240.42 --> 3245.30] Don't be afraid to make a bad decision.
838
+ [3245.70 --> 3250.82] Just constantly learn and react and constantly adapt.
839
+ [3251.18 --> 3251.60] I love that.
840
+ [3251.96 --> 3252.86] That's amazing, Romana.
841
+ [3252.96 --> 3253.74] Thank you very much.
842
+ [3254.00 --> 3254.58] This was a pleasure.
843
+ [3254.84 --> 3255.40] Thank you also.
844
+ [3255.40 --> 3261.64] Thank you for tuning in to another episode of Ship It.
845
+ [3261.88 --> 3263.68] I enjoyed making it for you.
846
+ [3263.96 --> 3267.42] This is just one of the podcasts for developers that we ship.
847
+ [3267.80 --> 3271.12] Go to changelog.com forward slash master for the rest.
848
+ [3271.50 --> 3276.64] You can join me and the rest of our community at changelog.com forward slash community.
849
+ [3277.14 --> 3278.86] There are no imposters in our Slack.
850
+ [3279.20 --> 3280.46] Everyone is welcome.
851
+ [3280.46 --> 3284.76] Huge thanks to our partners Fastly, LaunchDarkly and Linode.
852
+ [3285.08 --> 3288.30] Thank you Breakmaster Cylinder for all our awesome things.
853
+ [3288.78 --> 3289.78] That's it for this week.
854
+ [3290.04 --> 3290.66] See you next week.
855
+ [3310.46 --> 3321.24] Game on.
What is good release engineering_transcript.txt ADDED
@@ -0,0 +1,265 @@
1
+ **Gerhard Lazu:** So end of 2016 I have joined this new team of developers, and they were the RabbitMQ core developers. The context of that meeting was the RabbitMQ Summit, which is something that used to happen every six months, twice a year. But that stopped, obviously, since the pandemic and all the recent changes.
2
+
3
+ The one person over the years that I really enjoyed working with is Jean-Sébastien. And if you're wondering who you can thank for all the makefile madness that I'm leaving in my trail, it's Jean-Sébastien. He's the one who introduced me to make, and the rest is history, as they say. Not only that, but he also introduced me to the RabbitMQ codebase; we were pairing buddies for a long time, and I found out about the build system, about the pipeline, about many things.
4
+
5
+ \[04:16\] So for the listeners - I mean, you've made so far into this episode, and you're still wondering what do I do; for those that don't know yet, this is where I tell you that my dayjob is to work on RabbitMQ. I'm a RabbitMQ core developer, same as Jean-Sébastien. So welcome, Jean-Sébastien, joining me in this new world...
6
+
7
+ **Jean-Sébastien Pedron:** Thank you very much.
8
+
9
+ **Gerhard Lazu:** You're very welcome. I was looking forward to this for a long time, actually... And one of the topics that are very relevant to this show are release engineering. And this is something that both you and me have been thinking about for years in the context of RabbitMQ, and have been working on it in different capacities... And my first question is "Why do you care about release engineering?"
10
+
11
+ **Jean-Sébastien Pedron:** I think it's an important part of a project, and in particular an open source project. The first reason is that you want to ship your code to end users to help them solve their own problems; you want those users to be happy with what you ship, and I think that to make that happen you need to communicate well with those users, explain what you ship to them, like "That new release contains these new features. That bug you hit, now it's fixed. You might be interested in that security vulnerability" and so on.
12
+
13
+ You also want those users to give you feedback on what you shipped, because that's how you can also improve your codebase and make the next version better than the current one. So yeah, that's what I would expect from a good release engineering, and I'd say that it's important for open source products, because you do not have any paying customers most of the time. So nobody is pressured to use what you produce and ship. So it's in the interest of the end users and you to have that great communication when you release something.
14
+
15
+ **Gerhard Lazu:** I think that relationship is really important, right? ...the relationship of an open source developer with a user of open source... And that has gone through many ups and downs over the years. I don't know exactly where it stands now, but it is becoming increasingly important for these projects and products to somehow make money.
16
+
17
+ Now, while release engineering for an open source product may not seem as important at first, because it's free, so why care, actually the opposite is true. The developers care a lot about these things. And once you have a certain number of users, which RabbitMQ has, these things become important... Because bugs can affect many users, and one of the ways that I think about RabbitMQ is a core infrastructure component. Typically, RabbitMQ is used in all sorts of systems - cars (even), factories, things like payment systems... You don't even know where it's used, only when it goes down, or only when there's a problem. And this is not great. And from that perspective, it becomes increasingly important to communicate changes well, to be careful with the changes that get introduced, because many things can end up being broken. And usually, what tends to happen if developers are not careful about this aspect is that end users - they stop upgrading. I mean, if you experience similar problems a couple of times, you're more reluctant to upgrade.
18
+
19
+ **Jean-Sébastien Pedron:** Yeah, you will probably try to find alternatives to that project. This one was free -- free, I mean it didn't cost you any money, so you do not lose anything by switching.
20
+
21
+ **Gerhard Lazu:** \[08:13\] Again, I view RabbitMQ as core infrastructure, but what does core infrastructure mean to you?
22
+
23
+ **Jean-Sébastien Pedron:** I think I have the same definition as you. A core infrastructure is a component you rely on to provide your service, for instance, if you're a company... Or even as someone at home, I rely on some core infrastructure just to run my own computers, even if it's not for work.
24
+
25
+ For instance, as a company, in the context of RabbitMQ for instance, I expect that it's crystal clear what I get from RabbitMQ, so that I'm confident when I want to deploy it, upgrade it, so that I can build my own business on top of that component, and this won't fall apart because of that core component, core infrastructure. And that is the same for any operating systems. Nobody likes when the operating system crashes... So yeah, that's why we call them core infrastructure, in fact.
26
+
27
+ **Gerhard Lazu:** Yeah. So I know that you have experience with RabbitMQ, but I don't know as much about your experience with FreeBSD, because you're not just a RabbitMQ contributor, but also a FreeBSD contributor.
28
+
29
+ **Jean-Sébastien Pedron:** Yes.
30
+
31
+ **Gerhard Lazu:** So you have seen both sides of a messaging system such as RabbitMQ and an operating system such as FreeBSD. So how do the two compare?
32
+
33
+ **Jean-Sébastien Pedron:** An operating system is a very generic tool. You don't expect it to be the best in a specific area, but you expect it to behave well, and be at least good in all areas. RabbitMQ is a bit different in that \[unintelligible 00:10:08.03\] because we want to provide a very specific service in RabbitMQ. Also, FreeBSD is an old project, with old manners, and it's a big community, so it takes time for things and workflows to evolve. But in the end, we also want to ship an operating system which will work for end users who use it at home and companies who build their businesses on top of it. So yeah, that's why the release engineering in FreeBSD is also very important. The goal is the same, how it is done is different, and how you test things, obviously. You cannot compare both projects.
34
+
35
+ **Gerhard Lazu:** So can you give me an example from your experience of release engineering gone wrong in both projects, if there is such a thing? I'm sure there is.
36
+
37
+ **Jean-Sébastien Pedron:** Starting with RabbitMQ, I remember one release -- I don't remember the version number, but at the same time we released a new version of RabbitMQ with both a security bug fix and a breaking change. That was perfect, I'm sure, for admins who wanted to deploy that security bug fix as soon as possible. I think that's probably the worst-case scenario.
38
+
39
+ In FreeBSD I remember the FreeBSD 5.0 release cycle, because between FreeBSD 4 and 5 one of the big changes was to replace the global lock used all over the place in the kernel with fine-grained locking. And this went pretty badly, because it took years to stabilize that work.
40
+
41
+ \[12:09\] In parallel to that, new versions of FreeBSD 4 were cut and published, but it was really difficult for the project to ship something at that time, because the code base was very unstable, and nobody knew when we could cut even a beta, let alone the final version. Yeah, it was a big problem because of that. It put pressure on people working on that code. Other people were tired because we didn't ship anything, and I'm sure end users were saddened by that situation as well, because some of them were looking forward to using the new version... Other users would see that disaster coming, and in the end nobody wanted to use FreeBSD 5.0, because it was too uncertain what you could do with that. So I think that's a good example of bad release engineering.
42
+
43
+ **Gerhard Lazu:** I think you touched up on something really interesting, which is the longer you wait to ship something, the worse the release gets, or the more problematic the release can become.
44
+
45
+ **Jean-Sébastien Pedron:** Yeah.
46
+
47
+ **Gerhard Lazu:** I don't know whether it's a definite thing, but the longer you wait, the higher the chances the release will not be as good.
48
+
49
+ **Jean-Sébastien Pedron:** Yeah. And I think that the most important part is that people are losing confidence, both developers working on that, and users expecting the release.
50
+
51
+ **Gerhard Lazu:** Yeah, I think we keep forgetting, at the end of the day it is people like you and me that are responsible for some pretty important systems that they have to, first of all, consume these updates somehow, understand what changes they are rolling out, and when something goes wrong - well, guess what? They are the ones responsible to fixing those problems. I mean, they can blame the developers developing or releasing that software, but ultimately, they need to take certain precautions that things are rolled out in a good way.
52
+
53
+ So the harder it is for these developers to roll out these changes or to start using maybe new features, whatever it may be, the less likely they are to consume future changes. So it's almost like they enter a vicious cycle, but it's a negative one in that if it doesn't work as smooth and as consistent and as pain-free as a user would like -- for example your phone; if every time you upgraded your operating system on your phone things would break, would you do it?
54
+
55
+ **Jean-Sébastien Pedron:** No.
56
+
57
+ **Gerhard Lazu:** No. If things would change in unexpected ways, would you do it? No. If you had to wait a really long time for an update, like let's say two years, and then you applied it and everything broke, would you do it again? No. So there is a very strong relationship between the happiness of end users and the release engineering of the products and the systems that they use.
58
+
59
+ **Jean-Sébastien Pedron:** Yeah, I agree.
60
+
61
+ **Gerhard Lazu:** So if you had to pick one - I know it's an unfair comparison, but let's just go with it for the fun of it - what would you say is a more core infrastructure, RabbitMQ or FreeBSD?
62
+
63
+ **Jean-Sébastien Pedron:** It's a tough one...
64
+
65
+ **Gerhard Lazu:** You can answer it any way you want, by the way... It's meant to be fun, it's not meant to be tough. \[laughter\]
66
+
67
+ **Jean-Sébastien Pedron:** I think it depends... If we stay with companies - not end users, and not people at home, I mean - it depends on what kind of service you provide on top of that. For instance, if we are talking about a company using RabbitMQ for cars, like you mentioned earlier, in that case RabbitMQ would be the most important one, because you want all those devices and cars and computers to communicate properly.
68
+
69
+ \[16:16\] So I think that's the most important component. For a company like Sony, for instance, who is using FreeBSD in their Playstation products, if the devices they ship to gamers crash all the time because their operating system is unstable, it will be a very sad story for everyone. So in that kind of context I think the operating system is important.
70
+
71
+ **Gerhard Lazu:** I know that Netflix are other big users of FreeBSD. Imagine if you can't stream your Netflix because there was a bug in FreeBSD, probably shipped worldwide, across all their \[unintelligible 00:16:54.02\]
72
+
73
+ **Jean-Sébastien Pedron:** WhatsApp is also using FreeBSD, but they're also exchanging messages... So in that company if they were to use RabbitMQ - yeah, it would be more difficult to define which component is the most important. I would say RabbitMQ.
74
+
75
+ **Gerhard Lazu:** I think they would get the best and worst of both components, so it depends on a combination of that how well that would work out... But I see what you mean. And for the listeners, this actually happened; both myself and JSP were in Paris - JSP is from Paris, France... JSP - that's how I refer to Jean-Sébastien. Do you know what JSP comes from? Actually, I don't think I've ever told you this... So JSP is obviously the abbreviation of Jean-Sébastien Pedron, your full name...
76
+
77
+ **Jean-Sébastien Pedron:** Yes.
78
+
79
+ **Gerhard Lazu:** But GSP is actually Georges St-Pierre, and he's an MMA fighter.
80
+
81
+ **Jean-Sébastien Pedron:** \[laughs\]
82
+
83
+ **Gerhard Lazu:** I used to do his workouts many years before I even met you. So whenever I say JSP, I'm thinking "Ah, Georges St-Pierre", and like "I should go for a workout." So that's something which happens -- I know you never knew that, but anyways, that was a tangent. So coming back... We were in Paris, and we had to -- well, not figure out, but help this RabbitMQ customer to make sure that RabbitMQ will be reliable in all sorts of scenarios, because cars would end up not getting unlocked by their remote car key, because RabbitMQ is involved in between the car and the key; RabbitMQ is exchanging messages. You wouldn't think about that, and neither should you; why would you? People don't really care about these things. And when everything works, it doesn't matter. When it doesn't work, that's when the problems start appearing. So that was a very interesting conversation and meeting, I have to say. I enjoyed it greatly.
84
+
85
+ **Jean-Sébastien Pedron:** Yeah. Especially that RabbitMQ is often used to also mitigate problems on both the application emitting the message and the application consuming it.
86
+
87
+ **Gerhard Lazu:** That's right.
88
+
89
+ **Jean-Sébastien Pedron:** So if you have a problem in the middle...
90
+
91
+ **Gerhard Lazu:** Yeah, I'm pretty sure that today, for example, you have used a system that behind the scenes uses RabbitMQ. And that's why we think of it as core infrastructure, because we know that it's everywhere. And it works well in most cases, but as it happens, we get to find out about all the cases when it doesn't work. Then we have to fix it, and then ship those fixes. So that's a very interesting perspective.
92
+
93
+ **Break**: \[19:27\]
94
+
95
+ **Gerhard Lazu:** So we've been talking generally about the RabbitMQ release engineering, the FreeBSD one, how do they compare as projects, the whole core infrastructure notion... What I'm wondering now is how does the FreeBSD release engineering process look like?
96
+
97
+ **Jean-Sébastien Pedron:** So after that FreeBSD 5.0 disaster the release engineering team started to work on something so that FreeBSD never faces that situation again... And that process evolved a couple times since. Today, the FreeBSD release engineering is based on a fixed interval between major releases and also minor releases. We don't expect to start on a very specific day at 8 AM, for instance. The OpenBSD one is sharp as a Swiss clock, but not in FreeBSD.
98
+
99
+ When we want to start to prepare the next release, we have release engineers or someone who is hired by the FreeBSD Foundation and is paid for that; he will take care of announcing to the FreeBSD contributors - not only the contributors, but the entire community. He will publish a calendar where he will state that the code \[unintelligible 00:21:52.14\] We expect to get the first beta at this date, we expect perhaps two betas, then two release candidates, specifying, again, the dates. He will indicate as well the date for the final release of FreeBSD. So that calendar is updated on a regular basis while we make progress in that release cycle. For instance, if we discover that there are bugs or there is a security issue, or whatever the reason, we might want to delay beta for a couple of days, or we might want to add third or fourth beta, or same for the release candidates, and so on.
100
+
101
+ So that calendar is very flexible, but it's quite useful, because it tells the FreeBSD contributors when to expect things, and it's very easy for contributors to organize and prioritize their tasks. For instance, if someone is working on new features, then he knows that he has to finish by these dates, or it will be delayed to the next release. So that's very helpful for contributors, and like I said, this is not that strict. So any contributors can communicate also to the release engineer what he is working on, so that the release engineers know that "Okay, this specific patch is incoming. It might introduce some instabilities, but we want that in the release", so he can anticipate that and perhaps tell anyone that "Okay, we expect this to come in the next couple of weeks. This will go in that beta and we will add another one after that", for instance.
102
+
103
+ \[23:52\] So that calendar tool is really useful, because it allows everyone in the community and the developers to communicate and understand what's going on. As I said, for users who will use that new version of FreeBSD, they can plan for testing, for instance. You mentioned Netflix - they'd appreciate that, because they can test in advance the new features, so they will fetch the development branch, for instance, compile FreeBSD and then try it in their environment and see how it goes, and they will give some feedback.
104
+
105
+ So the fact that we use a detailed calendar - yeah, it really helps the communication and it makes the whole process more reliable and the outcome more reliable as well. I think that's the main part which was introduced following FreeBSD 5. And we have some evolutions from time to time, but they are mostly around adjusting the timeframe between releases, so that it's easy for end users to understand that "Okay, this will come in next September. Perhaps the release will take a bit more time, but in next September - okay, we know that we'll have a new release." This would have been very helpful in the time of FreeBSD 5, because we could have delayed some of the work done around locking to a future version, for instance, instead of trying to finish that huge task before shipping anything.
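To make the calendar idea more concrete, here is a minimal sketch of the kind of milestone schedule a release engineer might publish and keep adjusting. The milestone names and dates below are made up for illustration only; the real FreeBSD schedule lives on FreeBSD.org and changes as the release cycle evolves.

```python
# Hypothetical release calendar; every milestone name and date here is illustrative.
from datetime import date

milestones = [
    ("code slush begins", date(2021, 8, 24)),
    ("BETA1 builds",      date(2021, 9, 3)),
    ("BETA2 builds",      date(2021, 9, 10)),
    ("RC1 builds",        date(2021, 9, 17)),
    ("RC2 builds",        date(2021, 9, 24)),
    ("final release",     date(2021, 10, 1)),
]

def current_phase(today: date) -> str:
    """Return the most recent milestone that has already been reached."""
    reached = [name for name, when in milestones if when <= today]
    return reached[-1] if reached else "normal development"

print(current_phase(date(2021, 9, 12)))  # -> "BETA2 builds"
```

The value is less in the code than in the practice it models: the list is published early, updated openly when dates slip, and everyone - contributors and users - plans around it.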
106
+
107
+ **Gerhard Lazu:** Yeah. This is something -- first of all, this sounds really interesting, and what I'm wondering is could I see this calendar somewhere? Can I see how this process works? Is it publicly available?
108
+
109
+ **Jean-Sébastien Pedron:** Yeah, that calendar is published on the FreeBSD.org website, announced on the mailing lists... That's the main communication channels.
110
+
111
+ **Gerhard Lazu:** And where does the FreeBSD development happen? I know that the RabbitMQ one happens on GitHub, but where does the FreeBSD one happen?
112
+
113
+ **Jean-Sébastien Pedron:** Initially in CVS. I don't remember the years exactly, but at some point we switched to Subversion, and both servers were hosted internally in the FreeBSD infrastructure, and in the Yahoo cluster in Sunnyvale. In the past year we switched to Git, but we are still hosting that internally, and the reason is that we want to dogfood FreeBSD \[unintelligible 00:26:25.10\] There are read-only mirrors available on GitHub. And there are still some discussions around "Do we want to introduce GitLab, or some other tools?" The idea is that because that's a private -- not a private, but internal Git repository, currently we don't have all the nice tools provided by GitHub, for instance. It's still a barrier to entry for contributors who are used to use GitHub for any kind of open source project... And yeah, that's still a discussion, because you have to balance the fact that you want to dogfood FreeBSD, you don't want to depend on a company's service, which is perhaps free for now, but we cannot tell what the future will be. So that's on one side. And on the other side, the fact that GitHub is so popular, it's a great source for new contributors and contributions in general.
114
+
115
+ **Gerhard Lazu:** Okay. So I know that you can obviously communicate everything via the website. I don't know whether you have any commenting enabled; most websites don't. It tends to be a one-way channel... But how does the community talk to the developers? Is there a mailing list? How does that work?
116
+
117
+ **Jean-Sébastien Pedron:** \[27:46\] There are many mailing lists. In fact, either by topic, for instance, there are mailing lists around the graphics stack, around the Wi-Fi drivers, around network storage, a particular CPU architecture, and so on. And there are some mailing lists about topics such as the current development branch or the stable release branches. That's the primary communication channel in FreeBSD.
118
+
119
+ **Gerhard Lazu:** Let me guess - these mailing lists are software that runs on the same FreeBSD servers as the Git repo?
120
+
121
+ **Jean-Sébastien Pedron:** Yeah.
122
+
123
+ **Gerhard Lazu:** Okay.
124
+
125
+ **Jean-Sébastien Pedron:** Yeah, they are hosted...
126
+
127
+ **Gerhard Lazu:** Okay. Those must be some beefy machines, to run everything...
128
+
129
+ **Jean-Sébastien Pedron:** Yeah, the infrastructure - initially, it was hosted in the Yahoo infrastructure, because some FreeBSD developers were employed by Yahoo; they offered that service. But now that Yahoo doesn't use FreeBSD anymore and the company is splitting the various services, the infrastructure moved to some other companies. I don't remember which one, but they are offering the hosting and there are some servers around New York, still in San Francisco, and some of them are also in Europe and Asia.
130
+
131
+ **Gerhard Lazu:** So I understand how the community can talk to the FreeBSD developers... How can they participate in FreeBSD development?
132
+
133
+ **Jean-Sébastien Pedron:** One way to find tasks is to look at the Bugzilla bug tracker. That's also one tool which is discussed, because people of my age are very happy with Bugzilla, but I'm sure people almost 20 years younger might find it quite archaic. \[laughs\] So yeah, that part is still being discussed and will evolve... But yeah, Bugzilla is one place to find bug reports, and thus things to work on. The mailing lists are another one, where you can see what people are talking about or complaining about in particular.
134
+
135
+ So if you don't know what to do, that's one way to find work to do. Another one is just solve the problem that you hit every day if you are using FreeBSD for work, or at home. That's how I started, in fact.
136
+
137
+ **Gerhard Lazu:** And how do you submit the patches?
138
+
139
+ **Jean-Sébastien Pedron:** You can send pull requests on GitHub. They should be taken care of by someone at some point. You can submit patches on mailing lists, you can submit patches on Bugzilla after opening an issue... There is no one specific channel to submit your work.
140
+
141
+ **Gerhard Lazu:** Okay. So this is a little bit of a tangent that we had for the last few minutes, because the question was "How does the FreeBSD release engineering look like?", so we covered that... So coming back to that topic, you had a very good description of how things work. I don't think you mentioned any timelines, in the sense that when a new release starts, how long before that release gets shipped? How long before the GA? What does it look like to go to a beta? Is there a time period when beta starts shipping? How long does it take typically before an RC (or the first RC) ships? And eventually the GA.
142
+
143
+ **Jean-Sébastien Pedron:** Yeah. It depends if it's a minor release or a major one. FreeBSD does not follow semantic versioning.
144
+
145
+ **Gerhard Lazu:** That's interesting, because the version would make you think that it does, right? It's currently version 12 or 13, I can't remember...
146
+
147
+ **Jean-Sébastien Pedron:** Yeah, both exist currently. \[unintelligible 00:31:28.23\]
148
+
149
+ **Gerhard Lazu:** Right, so both version 12 and 13. And you also have 12.1, 12.2... But those are not semantic versions.
150
+
151
+ **Jean-Sébastien Pedron:** No, not really. It's close, but - how can I say...? Yeah, this is close to semantic versioning, but this is not documented as that. I mean, in FreeBSD we pay a lot of attention to breaking changes, as we have what we call POLA, the Principle of Least Astonishment. It means that all changes which go into FreeBSD should be the least disruptive possible, in fact. And we should not surprise users, even between major releases.
152
+
153
+ \[32:13\] So when you want to deprecate something or remove something, you have to announce that a long time before you want to do that. If possible, it's good if you can mitigate what you are about to change in a breaking way, so that the transition from one version to another major version - it must be as smooth as possible. We pay a lot of attention to compatibility between major releases. Of course, you cannot guarantee that all of the time, but that's an important part of the FreeBSD release engineering.
154
+
155
+ Back to the timeline, I would say that a major release -- between the beginning of the release cycle and the end we are talking 2-3 months, perhaps more if there are bugs that crept in and are difficult to track down. For minor releases, they are shorter, but we are still in the range of weeks, and perhaps months sometimes.
156
+
157
+ **Gerhard Lazu:** Okay. So now that we think about the FreeBSD release engineering as a whole, what can RabbitMQ learn from the FreeBSD release engineering?
158
+
159
+ **Jean-Sébastien Pedron:** I like the fact that it's based on a fixed interval between major and minor releases, and the fact that the release cycle follows a calendar which is announced in advance to everyone involved, contributors and users. I think this is a great tool to improve the communication and the organization of the work, in fact. I would love to introduce that into RabbitMQ, having that calendar.
160
+
161
+ **Gerhard Lazu:** Yeah, I think it makes a lot of sense. We have been thinking about this for a while, and we have been looking at -- well, FreeBSD is one example, but also other projects... And it does sound like a good idea. Obviously, between the idea and the implementation there's a whole ocean of things to go through, but the direction sounds reasonable to me. I'm wondering if there are any other open source projects that you like how they do release engineering.
162
+
163
+ **Jean-Sébastien Pedron:** For instance, there is the Darktable open source photo editing project. They are also publishing a calendar in advance, and because they provide translations of the software, they also have to take that into account into their release cycle, to give time to translators to provide their translations. That's one thing I like in what they do.
164
+
165
+ Another one is the Mesa library, that you can use on Unix. It's a library providing a 3D implementation of OpenGL, for instance, and all the new standards in that area... And now it grew a lot \[unintelligible 00:35:21.18\] GPU drivers, for instance. So this is a large piece of code now, and what I like in their release engineering -- I don't remember if they followed a fixed timeline or if they provide calendars, but I like how they handle the patches. A developer is working on a patch, and he doesn't know if that patch will go into the next minor release or if it needs to wait for the next major release.
166
+
167
+ \[35:59\] So they have someone, like FreeBSD, who is responsible to manage the release engineering. This time he's not hired or paid for that work, so it's on his free/spare time. They are trying various ways to -- that was a few years ago, so that probably settled since, but they wanted to try several things on what would be the best way to make that communication possible... Like - a developer wants that patch into the next stable minor release, but it might not fit the timeline, and so on. They tried tags in the Git commits; I think they tried specific mailing lists, where people would post their patch, and so on. I don't know what they chose in the end, but I like how they explored various methods.
168
+
169
+ **Gerhard Lazu:** Do you know what I remember about this specific topic? During one of our RabbitMQ team summits -- by the way, RabbitMQ is a distributed team. As I mentioned, twice per year we used to meet in a single place. It used to be London. So we had like an on-site, which was an off-site for some -- but anyways, it was an on-site... And during these team summits I noticed that your laptop had a weird thing on its screen. I said "JSP, I think your screen needs replacing. This laptop needs replacing", and you were saying "No, it's okay. I'm working on some graphics drivers, and I don't quite have this thing right." So pixels were looking a bit weird, and I noticed the pixels started changing. I was like "Oh, JSP, why did you have to bring development graphics drivers to the team summit? Now we can't code properly." Then obviously I would take my laptop out and like "Okay, let's get a properly-tested and properly-running graphics card and graphics drivers." \[laughter\] That was a fun one.
170
+
171
+ Then you told me about your interest in developing graphics drivers, which I thought was fascinating. How do you even do that? I was like, "Whoa... Really?" Little did I know that -- you know, also FreeBSD, I have to thank you for my FreeNAS server, how stable that is, and a couple of other things... So yeah, that is pretty important. It's the backups, right? All the pictures, before iCloud and before other services I used to back everything up on FreeNAS, and it never failed me, so... There's something to say there. ZFS has something to do with it. Drives failed, but FreeBSD never failed me, so I was very happy.
172
+
173
+ **Jean-Sébastien Pedron:** Nice. That's good to know.
174
+
175
+ **Gerhard Lazu:** Yeah, that is good feedback for you.
176
+
177
+ **Jean-Sébastien Pedron:** Yes. \[laughs\]
178
+
179
+ **Gerhard Lazu:** It wasn't 5.0, it was 9 and 10 -- actually, no, it was 11. I remember that one, 11; when I started really depending on it, it was great. That was a great few years of service.
180
+
181
+ **Jean-Sébastien Pedron:** Great. And yeah, you mentioned the graphics drivers - that's a nice topic around release engineering, because it's one area where it's difficult to find the right balance, in fact, because we want to ship obviously a stable operating system in the end, and the Mesa library also wants to be stable for all end users, so that it can render your desktop videos and video games.
182
+
183
+ But that's an area where the hardware and the new models are put in the market at a high pace; the technology evolves a lot, and the GPU is a very complex beast. So on one side, you want to support the latest GPUs, but because if a user today buys a laptop, he will go for the latest shiny one. He won't choose the one released three years ago. So you want to ship all those new drivers and bug fixes as soon as possible, but it's very difficult because the drivers themselves are very complex, so it's very difficult to test what you ship, because no one has all the various graphic cards and GPUs and configuration in general, so it's impossible to thoroughly test.
184
+
185
+ \[40:15\] Yeah, it's very difficult to find the right balance between shipping often and shipping something stable. I don't think we've found the right balance in FreeBSD either. Now drivers are provided as packages; they are not in the core anymore, the source code of FreeBSD. That improved a lot, but it still has some issues from time to time to decide on when to ship a new version of that package.
186
+
187
+ **Gerhard Lazu:** I think the more you dig into this and the more you work with this, you realize that it's not as straightforward, and everybody tries to make the best decisions they can given what they know. No one is trying to purposefully ship broken software. Sometimes it's really hard, and it looks like people don't care, or they don't think, but they do, and it's really hard. That's something worth emphasizing again and again.
188
+
189
+ **Jean-Sébastien Pedron:** Yeah.
190
+
191
+ **Gerhard Lazu:** I think in certain contexts it's much easier to maybe use feature flags, or something similar, in that you're shipping the feature, but you're not enabling the feature. And this is a very important distinction to make. In some cases you can ship it, but not enable it, and that's okay. And then test it, or trickle it down through users, beta testers, and whatnot. When you have all your feedback, then if you can ship an update, then you do that, and everything is good, and everybody has the best, latest version, or the closest it can get, because it can always be improved, and there will always be bugs. After all, we are all human, and we will make mistakes. And that's okay, that's not the problem. Don't try not to make mistakes; try to limit the impact of those mistakes, and fix them before anyone notices, because then it looks like you've never made the mistake, while everybody knows the truth, right?
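As a rough illustration of that "ship it, but don't enable it" idea - the flag name, the environment variable, and the checkout functions below are all hypothetical, and a real project would more likely go through a feature-flag service or a configuration system than a bare environment variable:

```python
# Minimal sketch of code that ships in a release but stays dormant by default.
import os

def legacy_checkout(cart):
    return f"legacy checkout of {len(cart)} items"

def new_checkout(cart):
    return f"new checkout of {len(cart)} items"

def new_checkout_flow_enabled() -> bool:
    # The feature ships with the release, but stays off until explicitly enabled.
    return os.environ.get("ENABLE_NEW_CHECKOUT_FLOW", "false") == "true"

def checkout(cart):
    if new_checkout_flow_enabled():
        return new_checkout(cart)   # newly shipped code path, opt-in only
    return legacy_checkout(cart)    # proven path everyone runs by default

print(checkout(["book", "pen"]))    # -> "legacy checkout of 2 items" unless the flag is set
```

Flipping the flag for a small group of beta testers, gathering feedback, and only then enabling it everywhere is one way to limit the impact of mistakes, as described above.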
192
+
193
+ **Jean-Sébastien Pedron:** \[laughs\]
194
+
195
+ **Gerhard Lazu:** So yeah, countless times this has happened, and it will happen, so better be honest about it.
196
+
197
+ **Jean-Sébastien Pedron:** That's why it's important to communicate well to contributors and users. That's the responsibility of that release engineering; you know that it might not be perfect in the end, what you ship, but at least you tried to make sure that people are aware of what is fine and what might not be fine.
198
+
199
+ **Break**: \[42:35\]
200
+
201
+ **Gerhard Lazu:** JSP, what did you work on before RabbitMQ?
202
+
203
+ **Jean-Sébastien Pedron:** I worked as an Erlang developer for a small French company. The company was providing a website aggregating ads, so that people could look for jobs, apartments, various objects they would like to buy.
204
+
205
+ **Gerhard Lazu:** \[43:58\] Craigslist, or Gumtree? For the listeners...
206
+
207
+ **Jean-Sébastien Pedron:** Yeah, something like that. And we wanted to provide some kind of social media features on top of that, so that people could easily interact between them. In that company I was an Erlang developer; we were two Erlang developers working on the server side of that service. We chose to take Yaws, which is an Erlang-based web server. That was because it was easy for us to extend it by writing directly for the Erlang VM - in fact, to add our own modules and applications in addition to Yaws.
208
+
209
+ The website itself was developed in PHP and JavaScript, but we were not working on it; other developers were responsible for it. But those PHP files and static files were served by an Erlang VM. And what I liked about what we did is that we put some effort to make sure that the website was always running, even when we were working on it and upgrading it.
210
+
211
+ So if we had to upgrade the operating system, and especially the kernel which was Debian, obviously we would have to reboot the computer. But otherwise we wanted to leave the service running. And what was great is that we could in the end benefit from the hot code reloading feature of Erlang, this brilliant, awesome feature.
212
+
213
+ We were very happy, because we could build Debian packages for our service. So we packaged the Yaws server, all our Erlang codebase and the website itself, so the PHP scripts, static resources, so JavaScript and CSS on images, and so on... So we packaged everything as Debian packages, and when we would apt-get update, apt-get dist-upgrade the machine, the servers, then the new copy of the Erlang code was deployed, and we were using the Erlang features to reload that code live, while the HTTP server was still running and serving requests. We were very happy with that. It's a really great feature from Erlang.
214
+
215
+ **Gerhard Lazu:** To me, that sounds like you're using Erlang the way it was meant to be used, and what you're telling me is that it works really well when you use it the way it was built.
216
+
217
+ **Jean-Sébastien Pedron:** Yes.
218
+
219
+ **Gerhard Lazu:** Okay. Well, that is a great complement, and working as expected in this case, it's great; and sometimes even rare. Obviously, not all software works as expected, that's why I mention this. And when it does, like "Oh, yes! Everything works as it should", and it's great, and it feels great. So you were on the beaten track, as designed, and everything was good.
220
+
221
+ I know the answer to this, but I know that many listeners will be wondering... First of all, is RabbitMQ using hot code reloading?
222
+
223
+ **Jean-Sébastien Pedron:** No, it's not.
224
+
225
+ **Gerhard Lazu:** And the follow-up - why not?
226
+
227
+ **Jean-Sébastien Pedron:** It's quite difficult to manage. The first part is that all developers and all contributions to the RabbitMQ code might lead to changes which don't look as breaking changes when you think of a single instance of your Erlang VM, for instance; you stop the service, you load the code from the disk, it runs as expected, you stop the VM, and all is fine. But the problem starts to show when, for instance, the state of a process changes between one copy of the module and the next one. So you need to handle that migration from state v1 to state v2. There are tools to do that in Erlang, but this is not magic. You have to use them, and implement that migration from v1 to v2.
228
+
229
+ \[48:10\] And it gets even more complicated when you're having a cluster of Erlang VMs. So you have to take care of the fact that, for instance, an Erlang process, while the code is reloaded, will modify its own state, and will start to use inter-process messages with a newer structure. When I say "message", in this context it's messages exchanged between Erlang processes, not messages that RabbitMQ would handle from other applications.
230
+
231
+ You have to handle all those changes live, so that a new process which was reloaded might receive new messages using the new format from processes on that same node, but it might receive old messages from a node which was not yet upgraded, and so on. So that part is quite difficult to handle, and if you have mistakes, then it will crash, obviously. So that feature is great, but it puts a lot of load and responsibilities on developers and contributors' shoulders, because you have to handle all the cases.
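Erlang handles this with its own hot-code-upgrade machinery (the code_change callback is part of the "tools to do that in Erlang" Jean-Sébastien mentions). The sketch below is only a rough analogy in Python - not Erlang, and not how RabbitMQ actually does it - meant to show the shape of the problem: state written by v1 code has to be migrated before v2 code can use it. Every field name here is invented.

```python
# Illustrative analogy only: migrating a process state from a v1 shape to a v2 shape.
def migrate_state(state: dict) -> dict:
    """Upgrade a state dict, one version at a time, to the current shape."""
    if state.get("version", 1) == 1:
        # Hypothetical v2 change: a single 'address' string becomes two fields.
        street, _, city = state.pop("address", "").partition(",")
        state.update(version=2, street=street.strip(), city=city.strip())
    return state

old = {"version": 1, "name": "alice", "address": "1 Main St, Springfield"}
print(migrate_state(old))
# -> {'version': 2, 'name': 'alice', 'street': '1 Main St', 'city': 'Springfield'}
```

The cluster-wide half of the problem - nodes still on the old version sending old-format messages to processes that have already reloaded - is not captured by this sketch, which is part of why the feature is so costly to support.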
232
+
233
+ And the second part which is difficult is how to package that... Because Erlang was designed so that in the end you do not ship just the RabbitMQ Erlang applications, for instance; it was designed so that you ship the Erlang VM itself, the Erlang code you want to run on it, and the configuration. In the end, it's an appliance that you put on a server, but it's a whole thing, and a standalone thing. It has the VM, the code and the configuration. It's not meant to support changes to that configuration, even that. And trying to package that - in my previous job - as Debian packages was a great challenge, because the Erlang VM is installed by other Debian packages. We also want to be able to change the configuration, configuration which was installed not by the package, but by tools like -- we were using Puppet, but a configuration management tool. So it's quite difficult to use that Erlang feature in today's packaging and configuration management infrastructure.
234
+
235
+ **Gerhard Lazu:** I remembered that -- this just reminds me of a discussion that we had a few years back about this very subject... And it's interesting how it comes back again. I remember the plugin system in RabbitMQ being one of the challenges when it comes to packaging RabbitMQ in an Erlang release, being able to define what is running, when, and how it's running. Again, for the listeners, RabbitMQ has this concept of plugins; a lot of them ship with RabbitMQ, others can be added, just dropped in a directory and off you go... And those plugins - they are applications. So RabbitMQ really is - this is the way I think about it - a microservices architecture in a single Erlang VM, in a single system process. Because you have all these applications exchanging messages, and by the way, they could be cross-nodes. So that's where the Erlang distribution comes in, where those messages have to traverse a network, and then you have a cluster of three nodes or four nodes. And any message, by the way - this is like an AMQP message, whether 0-9-1 or 1.0, or any MQ protocol - it can arrive at any node and it will end up in the right place, because the cluster is aware of where the members are, where the processes are, and how to send those messages internally. And that's what makes it challenging.
236
+
237
+ \[52:03\] So the one thing that helped (I think) in recent years is containers. Containerizing RabbitMQ, having that tarball, which really used to be the Debian package, now it's called something else... FreeBSD jail - similar concept. So the container allows us to package Erlang, even the operating system... Because that's where you have OpenSSL and all the dependencies, and we have a single tarball, which is a runnable artifact. You spin it up and it has everything that you need, in the right order, preconfigured, a bunch of things. So that really helps.
238
+
239
+ And then on top of that, obviously, if you use something like Kubernetes, you want a cluster operator, or an operator that manages your deployments, which is especially important if you have a clustered stateful system such as RabbitMQ, or a distributed stateful system. In those cases it really helps.
240
+
241
+ And this just made me realize that one discussion which I would really like to have is with \[unintelligible 00:52:58.19\] about the cluster operator, and how RabbitMQ runs in the context of Kubernetes... Because I think it does a lot of thing really well, being a stateful distributed system on Kubernetes; that's challenging.
242
+
243
+ And I think the new tools made this problem easier from some perspectives, but they also made it harder from others... And adapting to the new world - it's very challenging. I think a lot of this is lost in the details, and it's important because many can learn from this, many stateful systems can learn from this. And I know a few stateful systems, databases which don't work that well in the context of containers of Kubernetes, of things that come and go so often, networks that break all the time, or more frequently than they do in the traditional data center, in the traditional bare metal hosts. So that's something which is challenging.
244
+
245
+ Okay, so I would say that my understanding is that you miss this hot code reloading from the olden days, that RabbitMQ doesn't have, and there are some practical limitations why it will be very difficult to implement. Not impossible, but very challenging. Is there anything else that you miss?
246
+
247
+ **Jean-Sébastien Pedron:** No, I think that's something I would love to see in RabbitMQ, and even though it's difficult, I don't think it's impossible. For instance, if we were to ship only bug fixes into our patch versions, then it would be pretty easy to have that hot code reloading. The way you describe it in Erlang means that we could say that going from a patch release to the next one, it supports hot code reloading... But we can also say that going from a version to the next minor it doesn't; the VM has to be restarted. So even that is supported by Erlang itself. The hot code reloading knows when it cannot be reloaded live.
248
+
249
+ So I think if we were to have only bug fixes in patch releases, we could have hot code reloading implemented, and it would not add a lot of load to our team, I think. That is achievable, and a great benefit from that is that upgrading RabbitMQ to the next patch release means you don't have to restart RabbitMQ, which means you don't have to spend a lot of time starting RabbitMQ if you have thousands or tens of thousands or hundreds of thousands of queues and exchanges and bindings, and so on.
250
+
251
+ **Gerhard Lazu:** Well, I've really enjoyed this discussion, JSP.
252
+
253
+ **Jean-Sébastien Pedron:** Yeah, me too.
254
+
255
+ **Gerhard Lazu:** Thank you for joining me. It was great fun. I'm looking forward to the next one... And I'm wondering if there's any closing thoughts that you have?
256
+
257
+ **Jean-Sébastien Pedron:** Yeah, so I would like to know in fact what people are doing in their job or their personal projects to ship what they produce. Do they have experience with various release engineering practices, and what worked and didn't work for them. I would love to hear from people who are writing software, but I would also love to hear from people who are consuming those open source projects, or even commercial projects, what they like and what they don't like when they want to learn more about the new versions of the tool they use.
258
+
259
+ **Gerhard Lazu:** So if you're a FreeBSD user or a RabbitMQ user, let JSP know what you like about the release engineering, what don't you like, and what would you like to be better, and what does even better mean for you. He would enjoy, and I would enjoy as well, knowing about that.
260
+
261
+ **Jean-Sébastien Pedron:** Yeah, we will both benefit from the answer.
262
+
263
+ **Gerhard Lazu:** Well, this was fun, JSP. Thank you very much. See you next time.
264
+
265
+ **Jean-Sébastien Pedron:** Yeah. Thank you for the invitation.
What is good release engineering?_transcript.txt ADDED
@@ -0,0 +1,725 @@
1
+ [0.08 --> 5.72] Hey, how's it going? I'm your host, Gerhard Lazu, and you're listening to Ship It, a podcast
2
+ [5.72 --> 11.66] about getting your best ideas into the world and seeing what happens. We talk about code,
3
+ [11.96 --> 18.00] ops, infrastructure, and the people that make it happen. Yes, we focus on the people because
4
+ [18.00 --> 23.34] everything else is an implementation detail. Core infrastructure is the type of software
5
+ [23.34 --> 29.20] that many services depend on. If it breaks, everything gets affected. Think an operating
6
+ [29.20 --> 33.80] system update, which looks good at first, but then everything starts getting slower by the hour.
7
+ [34.30 --> 39.24] Maybe there is a messaging system at the heart of a stack, which is trusted with billions of
8
+ [39.24 --> 44.26] transactions per day, and one upgrade later, transactions start failing randomly for no
9
+ [44.26 --> 49.60] obvious reason. One aspect which makes debugging this really hard is that sometimes different
10
+ [49.60 --> 55.70] versions of software simply don't work well together. Combine some enterprise OS with an older
11
+ [55.70 --> 62.24] rock-solid Linux kernel, and a code VM, which has some IO optimizations a decade younger.
12
+ [62.76 --> 68.38] And you could be looking at weeks of debugging. I know because I did just this a few months ago,
13
+ [68.44 --> 73.12] and it wasn't easy to get to the bottom of it. Today, we talk with Jean-Sébastien Pedron,
14
+ [73.48 --> 79.26] a RabbitMQ and FreeBSD contributor, about the importance of good release engineering for core
15
+ [79.26 --> 85.16] infrastructure. As the years went by, it became clearer to us just how important this is. That's
16
+ [85.16 --> 90.38] right. Both myself and Jean-Sébastien have been part of the Core RabbitMQ team for many years now.
17
+ [90.70 --> 95.46] We have built some of the biggest CI-CD pipelines - check the show notes to see one example -
18
+ [95.92 --> 101.40] wrote and shipped some great code together, while breaking and fixing many things in the process.
19
+ [101.64 --> 108.72] We have been wrestling with this topic since 2016. Jean-Sébastien has some great FreeBSD stories to share as well,
20
+ [108.72 --> 115.58] and he has a very interesting perspective on shipping graphic card drivers. Oh, and by the way,
21
+ [115.58 --> 122.16] it's probably our fault why your remote car key stopped working that afternoon. It will all make
22
+ [122.16 --> 127.24] sense after you listen to this. Big thanks to our partners Fastly, LaunchDarkly, and Linode.
23
+ [127.46 --> 134.16] Our bandwidth is provided by Fastly, learn more at Fastly.com, feature flags powered by LaunchDarkly.com,
24
+ [134.16 --> 140.90] and we love Linode. They keep it fast and simple. Check them out at linode.com forward slash changelog.
25
+ [140.90 --> 153.00] What's up, Shippers? This episode is brought to you by our friends at Fly. Fly lets you deploy your apps
26
+ [153.00 --> 160.08] and databases close to your users in minutes. You can run your Ruby, Go, Node, Deno, Python,
27
+ [160.52 --> 166.70] or Elixir app and databases all over the world. No ops required. Fly's vision is that all apps should run
28
+ [166.70 --> 171.18] closer to their users. They have generous free tiers for most services, so you can easily prove to
29
+ [171.18 --> 175.76] yourself and your team that the Fly platform has everything you need to run your app globally.
30
+ [176.18 --> 180.82] Learn more at fly.io slash changelog, and check out the speedrun and their excellent docs.
31
+ [181.24 --> 184.56] Again, fly.io slash changelog, or check the show notes for links.
32
+ [187.56 --> 191.68] We are going to ship in three, two, one.
33
+ [191.68 --> 212.52] So, end of 2016, I have joined this new team of developers, and they were the RabbitMQ core developers.
34
+ [212.98 --> 219.02] And the context of that meeting was the RabbitMQ team summit, which is something that used to happen
35
+ [219.02 --> 225.70] every six months, twice a year. But that stopped, obviously, since the pandemic and all the changes,
36
+ [225.82 --> 231.32] all the recent changes. The one person over the years that I really enjoyed working with is Jean
37
+ [231.32 --> 238.86] Sébastien. And if you're wondering who you can thank for all the makefile madness that I'm leaving
38
+ [238.86 --> 244.84] in my trail, it's Jean Sébastien. He's the one that introduced me to make, and the rest is history,
39
+ [244.84 --> 251.10] as they say. Not only that, but he also introduced me to the RabbitMQ code base; we were like pairing
40
+ [251.10 --> 257.02] buddies for a long time. And I found out about the build system, about the pipeline, about many things.
41
+ [257.74 --> 262.44] So, for the listeners, I mean, you've made it so far to this episode, and you're still wondering,
42
+ [262.78 --> 267.98] what do I do? For those that don't know yet, this is where I tell you that my day job is to work
43
+ [267.98 --> 274.66] on RabbitMQ. I'm a RabbitMQ core developer, same as Jean Sébastien. So, welcome, Jean Sébastien,
44
+ [275.30 --> 278.06] joining me in this new world. Thank you very much.
45
+ [279.04 --> 284.00] You're very welcome. I was looking forward to this for a long time, actually. And one of the topics
46
+ [284.00 --> 289.72] that are very relevant to this show are release engineering. And this is something that both you
47
+ [289.72 --> 295.12] and me have been thinking about for years in the context of RabbitMQ, and have been working on it
48
+ [295.12 --> 301.52] in different capacities. And my first question is, why do you care about release engineering?
49
+ [301.52 --> 308.92] I think it's an important part of a project, and in particular, an open source project.
50
+ [309.76 --> 319.38] The first reason is that you want to ship your code to end users to help them solve their own
51
+ [319.38 --> 329.94] problems. You want those users are happy with what you ship. And I think that to make that happen,
52
+ [330.50 --> 338.90] you need to communicate well with those users, explain what you ship to them, like that new release
53
+ [338.90 --> 346.70] contains these new features, that bug you hit, now it's fixed. You might be interested in that
54
+ [346.70 --> 354.76] security vulnerabilities. And you also want that those users give you feedback on what you shipped,
55
+ [354.90 --> 362.54] because that's how you can also improve your code base and make the next version better than the
56
+ [362.54 --> 369.48] current one. So yeah, that's what I would expect from a good release engineering. And I say that it's
57
+ [369.48 --> 376.02] important for open source products, because you do not have any paying customers most of the time.
58
+ [376.02 --> 387.68] So nobody is pressured to use what you produce and ship. So it's in the interest of the end users and
59
+ [387.68 --> 390.86] you to have that great communication when you want to release something.
60
+ [391.48 --> 395.52] I think that relationship is really important, right? The relationship of an open source developer
61
+ [395.52 --> 401.08] with a user of open source. And that has gone through many ups and downs, I think, in the years.
62
+ [401.08 --> 405.86] I don't know exactly where it stands now. But it is becoming increasingly important for these projects
63
+ [405.86 --> 412.10] and products to somehow make money. Now, while release engineering for an open source
64
+ [412.10 --> 418.14] product may not seem as important at first, because it's free, so why care, right? Actually,
65
+ [418.20 --> 424.98] the opposite is true. The developers care a lot about these things. And once you have a certain
66
+ [424.98 --> 433.28] number of users, which RabbitMQ has, these things become important. Because bugs can affect many users.
67
+ [434.02 --> 439.62] And one of the ways that I think about RabbitMQ is a core infrastructure component. Typically,
68
+ [439.76 --> 446.30] RabbitMQ is used in all sorts of systems, cars, even factories. You wouldn't expect payment systems.
69
+ [446.48 --> 451.46] You don't even know where it's used. Only when it goes down, or only when there is a problem.
70
+ [451.46 --> 456.50] Yeah. And this is not great. And from that perspective, it becomes increasingly important
71
+ [456.50 --> 462.76] to communicate changes well, to be careful with the changes that get introduced, because many
72
+ [462.76 --> 469.76] things can end up being broken. And usually, what tends to happen if developers are not careful about
73
+ [469.76 --> 473.52] this aspect is that end users, they stop upgrading. Yeah.
74
+ [473.68 --> 479.30] Right? I mean, if you experience problems or similar problems a couple of times, you're more reluctant
75
+ [479.30 --> 486.54] to upgrade. Yeah, you will probably try to find alternatives to that project.
76
+ [486.54 --> 486.90] That's right. Yes.
77
+ [486.90 --> 494.46] Because this one was free. I mean, free. It didn't cost you any money. So you do not lose anything by
78
+ [494.46 --> 501.16] switching. So, I mean, again, I view RabbitMQ as core infrastructure. But what does core infrastructure
79
+ [501.16 --> 501.78] mean to you?
80
+ [502.20 --> 509.20] I think I have the same definition as you. Core infrastructure is a component you rely on.
81
+ [509.30 --> 516.38] To provide your service, for instance, if you're a company or even as someone at home,
82
+ [516.64 --> 524.24] I rely on some core infrastructure just to run my own computers, even if it's not for work.
83
+ [524.38 --> 531.80] I mean, for instance, as a company in the context of RabbitMQ, for instance, I expect that it's
84
+ [531.80 --> 541.38] crystal clear what I get from RabbitMQ so that I'm confident when I want to deploy it, upgrade it,
85
+ [541.38 --> 550.96] so that I can build my own business on top of that component. And this won't fall apart because of
86
+ [550.96 --> 557.24] that core component, core infrastructure. And that's the same for any operating systems.
87
+ [557.24 --> 568.64] Nobody likes when the operating system crashes. So, yeah, that's why we call them core infrastructure, in fact.
88
+ [569.12 --> 574.24] Yeah. So I know that you have experience with RabbitMQ, but I don't know as much about your experience with
89
+ [574.24 --> 579.20] FreeBSD, because you're not just a RabbitMQ contributor, but also a FreeBSD contributor.
90
+ [579.20 --> 586.44] Yes. So you have seen both sides of a messaging system, such as RabbitMQ, and an operating system,
91
+ [586.58 --> 589.36] such as FreeBSD. So how do the two compare?
92
+ [590.06 --> 598.20] So an operating system is like a very generic tool. You don't expect it to be the best in a specific
93
+ [598.20 --> 606.68] area, but you expect it to behave well and be at least good in all areas.
94
+ [606.68 --> 614.18] RabbitMQ is a bit different in that regard because we want to provide a very specific
95
+ [614.18 --> 623.76] service in RabbitMQ. Also, FreeBSD, it's an old project with old manners and it's a big community.
96
+ [624.02 --> 631.58] So it takes time for things and workflows to evolve. But in the end, we also want to ship
97
+ [631.58 --> 640.44] an operating system, which will work for end users who use it at home and companies who build their
98
+ [640.44 --> 647.14] businesses on top of it. So yeah, that's why the release engineering in FreeBSD is also very
99
+ [647.14 --> 653.46] important. The goal is the same. How it is done is different. How you test things, obviously,
100
+ [653.46 --> 655.22] you cannot compare both projects.
101
+ [656.32 --> 665.40] So can you give me an example from your experience of release engineering gone wrong in both projects,
102
+ [665.84 --> 668.16] if there is such a thing? I'm sure there is.
103
+ [668.80 --> 674.94] So starting with RabbitMQ, I remember one release. I don't remember the version number,
104
+ [674.94 --> 683.30] but at the same time we published a release, a new version of RabbitMQ with both a security bug fix
105
+ [683.30 --> 693.78] and a breaking change. That was perfect, I'm sure, for admins who wanted to deploy that security bug fix
106
+ [693.78 --> 701.14] as soon as possible. I think that's probably the worst case scenario.
107
+ [701.14 --> 714.22] In FreeBSD, I remember the FreeBSD 5.0 release cycle because between FreeBSD 4 and 5, one of the big
108
+ [714.22 --> 723.68] changes was to replace a global lock used all over the place in the kernel with fine-grained locking.
109
+ [723.68 --> 730.96] And this went pretty bad because it took years to stabilize that work.
110
+ [732.04 --> 738.60] In parallel to that, new versions of FreeBSD 4 were cut and published,
111
+ [738.98 --> 744.54] but it was really difficult for the project to ship something at that time because
112
+ [744.54 --> 752.10] the code base was very unstable and nobody knew when we could cut even a beta,
113
+ [752.10 --> 754.38] let alone the final version.
114
+ [755.26 --> 758.28] Yeah, it was a big problem because of that.
115
+ [758.54 --> 762.70] It put pressure on people working on that code.
116
+ [763.68 --> 766.72] Other people were tired because we didn't ship anything.
117
+ [767.56 --> 772.96] And I'm sure end users were sad by that situation as well because some of them were
118
+ [772.96 --> 775.60] looking forward to using the new version.
119
+ [775.60 --> 780.02] Other users would see that disaster coming.
120
+ [780.64 --> 788.10] And in the end, nobody wanted to use FreeBSD 5.0 because it was too uncertain what you could
121
+ [788.10 --> 788.88] do with that.
122
+ [789.24 --> 792.06] So I think that's a good example of bad release engineering.
123
+ [792.06 --> 798.60] I think you touched upon something really interesting, which is the longer you wait to ship something,
124
+ [799.20 --> 804.24] the worse the release gets or the more problematic the release can become.
125
+ [804.42 --> 804.48] Yeah.
126
+ [804.62 --> 808.20] I don't know whether it becomes, it's not like a definite, but the longer you wait,
127
+ [808.30 --> 810.66] the higher the chances the release will not be as good.
128
+ [810.66 --> 810.98] Yeah.
129
+ [811.46 --> 818.68] And I think that the most important part is that people are losing confidence, both developers
130
+ [818.68 --> 823.06] working on that and users expecting the release.
131
+ [823.42 --> 823.72] Yeah.
132
+ [823.76 --> 829.50] I think we keep forgetting at the end of the day, it is people like you and me that are
133
+ [829.50 --> 836.08] responsible for some pretty important systems, and they have to, first of all, consume these
134
+ [836.08 --> 840.90] updates somehow, understand what changes they're rolling out.
135
+ [841.52 --> 844.06] And when something goes wrong, well, guess what?
136
+ [844.16 --> 847.20] They're the ones responsible for fixing those problems.
137
+ [848.00 --> 855.22] And I mean, they can blame the developers developing or like releasing their software, but ultimately
138
+ [855.22 --> 860.12] they need to take certain precautions that, you know, things are rolled out in a good way.
139
+ [860.12 --> 867.18] So the harder it is for these developers to roll out these changes or to start using like
140
+ [867.18 --> 873.62] maybe new features, whatever it may be, the less likely they are to consume future changes.
141
+ [873.92 --> 879.54] So it's almost like they enter a vicious cycle, but it's a negative one in that if it doesn't
142
+ [879.54 --> 886.42] work as smoothly and as consistently and as pain-free as a user would like, like for example, your phone.
143
+ [886.42 --> 892.32] If every time you upgraded your phone, your operating system on your phone, things would
144
+ [892.32 --> 894.40] break, like, would you do it?
145
+ [894.98 --> 895.10] No.
146
+ [895.38 --> 895.66] No.
147
+ [895.74 --> 898.10] If things would change in unexpected ways, would you do it?
148
+ [898.20 --> 898.52] No.
149
+ [898.86 --> 903.06] If you had to wait a really long time for an update, like let's say two years and then
150
+ [903.06 --> 906.20] you applied it and then everything broke, would you do it again?
151
+ [906.46 --> 906.84] No.
152
+ [907.72 --> 915.48] So there is a very strong relationship between the happiness of end users and the release
153
+ [915.48 --> 915.92] engineering.
154
+ [916.42 --> 916.62] Yeah.
155
+ [916.62 --> 919.00] Of the products and the systems that they use.
156
+ [919.10 --> 919.42] Yeah, I agree.
157
+ [919.52 --> 919.66] Okay.
158
+ [921.52 --> 928.18] So if you had to say, if you had to pick one, I know it's an unfair comparison, but let's
159
+ [928.18 --> 929.22] just go with it for the fun of it.
160
+ [929.50 --> 933.32] What would you say is more core infrastructure, RabbitMQ or FreeBSD?
161
+ [934.32 --> 936.84] It's a tough one.
162
+ [937.44 --> 939.10] You can answer it any way you want, by the way.
163
+ [939.34 --> 940.22] It's meant to be fun.
164
+ [940.32 --> 941.36] It's not meant to be tough.
165
+ [941.36 --> 951.28] I think it depends on if we stay in the company world, not end users and not people at home.
166
+ [951.42 --> 956.36] I mean, it depends on what kind of service you provide on top of that.
167
+ [956.36 --> 965.28] For instance, if we are taking a company using RabbitMQ for cars, like you mentioned earlier,
168
+ [965.82 --> 974.06] in that case, RabbitMQ would be the most important one because you want all those devices and cars
169
+ [974.06 --> 978.26] and computers to communicate properly.
170
+ [978.26 --> 980.78] So I think that's the most important component.
171
+ [981.32 --> 995.50] For a company like Sony, for instance, who is using FreeBSD in their PlayStation products, if the devices they ship to gamers crash all the time because the operating system is unstable,
172
+ [995.50 --> 999.66] then it would be a very sad story for everyone.
173
+ [1000.18 --> 1004.70] So in that kind of context, I think the operating system is important.
174
+ [1005.90 --> 1009.66] I know that Netflix is another big user of FreeBSD.
175
+ [1010.32 --> 1018.04] So imagine if you couldn't stream your Netflix because there was a bug in FreeBSD introduced, shipped worldwide across all their ports.
176
+ [1018.74 --> 1023.74] WhatsApp is also using FreeBSD, but they are also exchanging messages.
177
+ [1023.74 --> 1033.84] So in that company, if they were to use RabbitMQ, yeah, it would be more difficult to define which component is the most important.
178
+ [1034.10 --> 1034.94] I would say RabbitMQ.
179
+ [1035.66 --> 1039.26] I think they would get the best and worst of both components.
180
+ [1039.68 --> 1043.10] So it depends on the combination of that, how well that would work out.
181
+ [1043.40 --> 1044.24] But I see what you mean.
182
+ [1044.30 --> 1044.92] I see what you mean.
183
+ [1045.02 --> 1047.76] And for the listeners, this actually happened.
184
+ [1048.02 --> 1050.22] Both myself and JSP, we were in Paris.
185
+ [1050.46 --> 1051.86] JSP is from Paris, from France.
186
+ [1051.86 --> 1056.34] And, sorry, JSP, that's how I refer to Jean-Sébastien.
187
+ [1056.70 --> 1058.34] Do you know what JSP comes from?
188
+ [1058.48 --> 1060.28] Actually, I don't think I've ever told you this.
189
+ [1060.78 --> 1065.22] So JSP is obviously the abbreviation of Jean-Sébastien Pédron, your full name, JSP.
190
+ [1065.30 --> 1065.52] Yes.
191
+ [1065.94 --> 1070.02] But G, GSP, is actually Georges St-Pierre.
192
+ [1070.40 --> 1071.92] And he's an MMA fighter.
193
+ [1072.84 --> 1078.04] And, yeah, I used to do his workouts many years before I even met you.
194
+ [1078.04 --> 1081.74] So whenever I say JSP, I'm thinking, ah, Georges St-Pierre.
195
+ [1081.90 --> 1083.22] And, like, I should go for a workout.
196
+ [1083.60 --> 1084.90] So that's something which happens.
197
+ [1085.66 --> 1086.72] I know you never knew that.
198
+ [1086.78 --> 1087.90] But anyways, that was a tangent.
199
+ [1088.40 --> 1090.10] So coming back, coming back.
200
+ [1090.90 --> 1091.98] We were in Paris.
201
+ [1091.98 --> 1102.64] And we had to, well, not figure out, but help this customer, this RabbitMQ customer, to make sure that RabbitMQ will be reliable in all sorts of scenarios.
202
+ [1103.04 --> 1109.38] Because cars would end up not getting unlocked from their car key, from the remote car key.
203
+ [1109.44 --> 1115.10] Because RabbitMQ is involved in between the car and the key, RabbitMQ is exchanging messages.
204
+ [1115.52 --> 1116.98] And you wouldn't think about that.
205
+ [1117.14 --> 1118.64] And neither should you.
206
+ [1118.72 --> 1119.54] Why would you, right?
207
+ [1119.54 --> 1121.78] People don't really care about these things.
208
+ [1122.10 --> 1124.72] And when everything works, it doesn't matter.
209
+ [1124.84 --> 1126.88] When it doesn't work, that's when the problems start appearing.
210
+ [1127.14 --> 1130.90] So that was a very interesting conversation and meeting, I have to say.
211
+ [1131.06 --> 1132.38] I enjoyed it greatly.
212
+ [1133.22 --> 1139.96] And especially that RabbitMQ is often used to also mitigate problems on both sides.
213
+ [1140.24 --> 1145.60] The application emitting the message and the application consuming it.
214
+ [1146.30 --> 1147.06] That's right.
215
+ [1147.36 --> 1147.52] Yeah.
216
+ [1147.52 --> 1148.74] If you have a problem in the middle.
217
+ [1148.74 --> 1154.78] Yeah, I'm pretty sure that today, for example, you have used a system that behind the scenes uses RabbitMQ.
218
+ [1155.28 --> 1159.38] And that's why we think of it as core infrastructure, because we know that it's everywhere.
219
+ [1159.84 --> 1159.96] Yeah.
220
+ [1160.08 --> 1162.00] And it works well in most cases.
221
+ [1162.30 --> 1165.90] But as it happens, we get to find out about all the cases when it doesn't work.
222
+ [1166.16 --> 1168.48] And then we have to fix it and then ship those fixes.
223
+ [1168.48 --> 1170.58] So that's a very interesting perspective.
224
+ [1170.58 --> 1181.88] What's up, shippers?
225
+ [1181.88 --> 1185.56] This episode is brought to you by our friends at Teleport.
226
+ [1185.56 --> 1190.20] With Teleport Access Plane, you can quickly access any computing resource anywhere.
227
+ [1190.66 --> 1198.22] Engineers and security teams can unify access to SSH servers, Kubernetes clusters, web applications, and databases across all environments.
228
+ [1198.22 --> 1201.34] Teleport is open core, which you can use for free.
229
+ [1201.34 --> 1207.22] And it's supported by their cloud-hosted version, which lets you forget about configuring, updating, or managing Teleport.
230
+ [1207.44 --> 1209.16] The Teleport team does all that for you.
231
+ [1209.42 --> 1213.84] Your team can focus on your projects and spend less time worrying about infrastructure access.
232
+ [1214.44 --> 1217.90] Try Teleport today in the cloud, self-hosted, or open source.
233
+ [1218.26 --> 1220.96] Head to goteleport.com to learn more and get started.
234
+ [1220.96 --> 1222.96] Again, goteleport.com.
235
+ [1232.32 --> 1243.40] So we've been talking generally about the RabbitMQ release engineering, the FreeBSD one, how the two projects compare, the whole core infrastructure notion.
236
+ [1244.10 --> 1249.84] What I'm wondering now is, what does the FreeBSD release engineering process look like?
237
+ [1249.84 --> 1264.22] So after that FreeBSD 5.0 disaster, the release engineering team started to work on something so that FreeBSD never faces that situation again.
238
+ [1264.98 --> 1268.24] And that process evolved a couple times since.
239
+ [1269.26 --> 1277.96] And today, the FreeBSD release engineering is based on a fixed interval between major releases, and also minor releases.
240
+ [1277.96 --> 1284.96] And we don't expect to start on a very specific day at 8 a.m., for instance.
241
+ [1285.40 --> 1291.38] The OpenBSD one is sharp as a Swiss clock, but not in FreeBSD.
242
+ [1291.62 --> 1301.08] When we want to start to prepare the next release, we have release engineers, so someone who is hired by the FreeBSD Foundation and is paid for that.
243
+ [1301.08 --> 1310.38] He will take care of announcing to the FreeBSD contributors, but not only the contributors, but the entire community.
244
+ [1310.92 --> 1319.00] He will publish a calendar where he will state that the code slush will begin at this date.
245
+ [1319.34 --> 1320.74] Code freeze will be this date.
246
+ [1320.74 --> 1325.10] We expect to cut the first beta at this date.
247
+ [1325.10 --> 1332.16] We expect perhaps two betas, then two release candidates, specifying, again, the date.
248
+ [1332.76 --> 1337.88] And he will indicate as well the date for the final release of FreeBSD.
249
+ [1337.88 --> 1345.84] So that calendar is updated on a regular basis while we make progress in that release cycle.
250
+ [1346.08 --> 1357.48] For instance, if we discover that there are bugs or there is a security issue or whatever the reason, we might want to delay beta for a couple days.
251
+ [1357.48 --> 1363.76] Or we might want to add third or fourth beta or same for the release candidates and so on.
252
+ [1364.12 --> 1374.70] So that calendar is very flexible, but it's quite useful because it tells to the FreeBSD contributors when to expect things.
253
+ [1374.94 --> 1381.68] And it's very easy for contributors to organize and prioritize their tasks.
254
+ [1381.68 --> 1394.60] For instance, if someone is working on some new features, then he knows that he has to finish by this date or it will be delayed to the next release.
255
+ [1394.98 --> 1397.20] So that's very helpful for contributors.
256
+ [1398.28 --> 1401.02] And like I said, this is not that strict.
257
+ [1401.32 --> 1406.90] So any contributors can communicate also to the release engineer what he's working on.
258
+ [1406.90 --> 1413.58] And so that the release engineer knows that, OK, this specific patch is incoming.
259
+ [1413.86 --> 1419.24] It might introduce some instabilities, but we want that in the release.
260
+ [1419.80 --> 1430.08] So he can anticipate that and perhaps tell anyone that, OK, we expect this to come in the next couple of weeks.
261
+ [1430.42 --> 1434.06] This will go in that beta and we will add another one after that, for instance.
262
+ [1434.06 --> 1446.28] So that calendar tool is really useful because it allows everyone in the community and the developers to communicate and understand what's going on.
263
+ [1446.50 --> 1454.60] As I say, for users who will use that new version of FreeBSD, they can plan for testing, for instance.
264
+ [1455.40 --> 1456.64] You mentioned Netflix.
265
+ [1456.64 --> 1461.78] They appreciate that because they can test the new features in advance.
266
+ [1461.92 --> 1471.26] So they will fetch the development branch, for instance, compile FreeBSD and try it in their environment and see how it goes.
267
+ [1471.34 --> 1473.14] They will give some feedback.
268
+ [1474.40 --> 1486.56] So the fact that we use a calendar, a detailed calendar, yeah, it really helps the communication and makes the whole process more reliable and the outcome more reliable as well.
269
+ [1486.64 --> 1491.66] So I think that's the main part which was introduced following FreeBSD 5.
270
+ [1492.00 --> 1506.70] And we have some evolutions from time to time, but they are mostly around adjusting the timeframe between releases so that it's easy for end users to understand that, OK, this will come in next September.
271
+ [1506.70 --> 1512.46] Perhaps the release will take a bit more time, but in next September, OK, we know that we'll have a new release.
272
+ [1512.78 --> 1530.62] And this would have been very helpful in the time of FreeBSD 5 because we could have delayed some of the work done around locking to a future version, for instance, instead of trying to finish that huge task before shipping anything.
273
+ [1530.62 --> 1532.76] Yeah, this is something.
274
+ [1533.16 --> 1534.92] So first of all, this sounds really interesting.
275
+ [1535.52 --> 1541.66] And what I'm wondering is, can users, sorry, could I see this calendar somewhere?
276
+ [1542.08 --> 1543.92] Can I see how this process works?
277
+ [1543.94 --> 1544.92] Is it publicly available?
278
+ [1545.40 --> 1552.56] Yeah, the calendar is published on the FreeBSD.org website, announced on the mailing lists.
279
+ [1552.76 --> 1553.14] OK.
280
+ [1553.50 --> 1555.16] That's the main communication channels.
281
+ [1555.46 --> 1557.88] And where does the FreeBSD development happen?
282
+ [1557.88 --> 1561.58] I know that the RabbitMQ one happens on GitHub, but where does the FreeBSD one happen?
283
+ [1562.36 --> 1581.30] Initially in CVS, I don't remember the years exactly, but at some point we switched to subversion and both servers were hosted internally in the FreeBSD infrastructure and in the Yahoo cluster in Sunnyvale.
284
+ [1581.30 --> 1587.10] In the past year, we switched to Git, but we are still hosting that internally.
285
+ [1587.32 --> 1590.64] And the reason is that we want to dogfood FreeBSD itself.
286
+ [1591.32 --> 1595.58] There are read-only mirrors available on GitHub.
287
+ [1596.32 --> 1599.56] And there are still some discussions around that.
288
+ [1599.72 --> 1603.00] Do we want to introduce GitLab, for instance, or some other tools?
289
+ [1603.00 --> 1614.58] The idea is that because that's a private, not a private, but internal Git repository, currently we don't have all the nice tools provided by GitHub, for instance.
290
+ [1615.64 --> 1624.74] Yeah, it's still a barrier to entry for contributors who are used to using GitHub for any kind of open source project.
291
+ [1624.74 --> 1630.82] And yeah, that's still a discussion because you have to balance the fact that you want to dogfood FreeBSD.
292
+ [1631.10 --> 1637.62] You don't want to depend on a company's service, which is perhaps free for now, but we cannot tell what the future will be.
293
+ [1638.10 --> 1639.34] So that's on one side.
294
+ [1639.46 --> 1650.24] And on the other side, the fact that GitHub is so popular, it's a great source for new contributors and contributions in general.
295
+ [1650.24 --> 1657.02] Okay. So I know that you can obviously communicate everything via the website.
296
+ [1657.24 --> 1659.50] It doesn't really have any commenting enabled.
297
+ [1659.84 --> 1660.92] Most websites don't.
298
+ [1661.06 --> 1662.50] It tends to be a one-way channel.
299
+ [1662.82 --> 1666.56] But how do users, how does the community talk to the developers?
300
+ [1666.92 --> 1667.96] Is there a mailing list?
301
+ [1668.80 --> 1669.72] How does that work?
302
+ [1670.18 --> 1674.06] There are many mailing lists, in fact, either by topic.
303
+ [1674.06 --> 1686.58] For instance, there are mailing lists around the graphic stack, around the Wi-Fi drivers, around network storage, a particular CPU architecture, and so on.
304
+ [1686.84 --> 1694.94] And there are some mailing lists about topics such as the current development branch or the stable release branches.
305
+ [1696.12 --> 1701.36] And yeah, that's the primary communication channel in FreeBSD.
306
+ [1701.36 --> 1707.20] Let me guess, these mailing lists are software that runs on the same FreeBSD servers as the Git repo?
307
+ [1707.42 --> 1708.24] Yeah, they are hosted.
308
+ [1708.58 --> 1711.46] Okay, those must be some beefy machines to run everything.
309
+ [1712.26 --> 1713.90] Yeah, the infrastructure.
310
+ [1714.34 --> 1723.54] So initially, it was hosted in the Yahoo infrastructure because some FreeBSD developers were employed by Yahoo.
311
+ [1723.78 --> 1725.48] They offered that service.
312
+ [1725.48 --> 1736.18] But now that Yahoo doesn't use FreeBSD anymore, and that the company is splitting the various services, the infrastructure moved to some other companies.
313
+ [1736.40 --> 1739.54] And I don't remember which one, but they are offering the hosting.
314
+ [1739.78 --> 1746.42] And there are some servers around New York, still around San Francisco.
315
+ [1746.76 --> 1750.10] And some of them are also in Europe and Asia.
316
+ [1750.10 --> 1755.06] So I understand how the community can talk to the FreeBSD developers.
317
+ [1755.70 --> 1758.90] How can they participate in FreeBSD development?
318
+ [1759.58 --> 1763.92] One way to find tasks is to look at the Bugzilla bug tracker.
319
+ [1763.92 --> 1773.28] And that's also one tool which is discussed because, I mean, people of my age are very happy with Bugzilla.
320
+ [1773.48 --> 1780.04] But I'm sure people 20 years younger might find it quite archaic.
321
+ [1781.32 --> 1786.04] So, yeah, that part is still being discussed and will evolve.
322
+ [1786.98 --> 1791.68] But, yeah, Bugzilla is one place to find bug reports and things to work on.
323
+ [1791.68 --> 1798.68] And mailing list is another one where you can see what people are talking about or complaining about in particular.
324
+ [1799.22 --> 1803.72] So if you don't know what to do, that's one way to find work to do.
325
+ [1804.24 --> 1813.34] Another one is to just solve the problems that you hit every day if you are using FreeBSD for work or at home.
326
+ [1813.58 --> 1814.80] That's how I started, in fact.
327
+ [1815.12 --> 1816.32] And how do you submit the patches?
328
+ [1816.72 --> 1819.24] You can send pull requests on GitHub.
329
+ [1819.24 --> 1822.60] They should be taken care of by someone at some point.
330
+ [1823.24 --> 1826.10] You can submit patches on mailing lists.
331
+ [1826.40 --> 1831.10] You can submit patches on Bugzilla after opening an issue.
332
+ [1831.52 --> 1835.58] There is no one specific channel to submit your work.
333
+ [1836.12 --> 1836.72] Okay.
334
+ [1837.36 --> 1842.98] So this is a little bit of a tangent that we had for the last few minutes because the question was,
335
+ [1843.12 --> 1846.00] what does the FreeBSD release engineering look like?
336
+ [1846.26 --> 1847.00] So we covered that.
337
+ [1847.00 --> 1852.58] So coming back to that topic, you had a very good description of how things work.
338
+ [1852.92 --> 1858.40] I don't think you mentioned any timelines in the sense that when a new release starts,
339
+ [1858.40 --> 1860.60] how long before that release gets shipped?
340
+ [1860.92 --> 1862.54] How long before the GA?
341
+ [1862.96 --> 1865.00] What does it look like to go to a beta?
342
+ [1865.32 --> 1867.98] Is there a time period when betas start shipping?
343
+ [1868.36 --> 1873.58] How long does it take typically before an RC or the first RC ships?
344
+ [1873.58 --> 1875.42] And eventually the GA?
345
+ [1875.92 --> 1876.08] Yeah.
346
+ [1876.28 --> 1880.02] It depends if it's a minor release or a major one.
347
+ [1880.54 --> 1883.48] So FreeBSD does not follow semantic versioning.
348
+ [1883.72 --> 1887.68] That's interesting because the version would make you think that it does, right?
349
+ [1887.72 --> 1889.84] Like it's currently version 13 or 12?
350
+ [1889.94 --> 1891.14] 12 or 13, I can't remember.
351
+ [1891.56 --> 1892.88] Yeah, both exist currently.
352
+ [1893.04 --> 1893.26] Right.
353
+ [1893.36 --> 1894.70] So both version 12 and 13.
354
+ [1894.70 --> 1895.14] Right.
355
+ [1895.50 --> 1899.98] And you also have like 12.1, 12.2, but those are not semantic versions.
356
+ [1900.56 --> 1901.26] No, not really.
357
+ [1901.48 --> 1904.76] It's close, but how can I say?
358
+ [1905.14 --> 1909.90] Yeah, this is close to semantic versioning, but this is not documented as that.
359
+ [1910.54 --> 1916.06] I mean that in FreeBSD, we pay a lot of attention to breaking changes.
360
+ [1916.06 --> 1919.60] We have what we call POLA.
361
+ [1919.98 --> 1922.26] So it's a principle of list astonishment.
362
+ [1922.84 --> 1932.04] So it means that all changes which go into FreeBSD should be the less disruptive, in fact.
363
+ [1932.32 --> 1937.06] And we should not surprise users, even between major releases.
364
+ [1937.06 --> 1947.78] So when you want to deprecate something or remove something, you have to announce that a long time before you want to do that.
365
+ [1948.20 --> 1955.24] If possible, it's good if you can mitigate what you're about to change in a breaking way.
366
+ [1955.24 --> 1963.60] So that the transition from one version to another major version, it must be as smooth as possible.
367
+ [1963.98 --> 1969.22] And we pay a lot of attention to compatibility between the major releases.
368
+ [1970.28 --> 1974.12] So, of course, you cannot guarantee that all of the time.
369
+ [1975.26 --> 1978.52] But yeah, that's an important part of the FreeBSD release engineering.
370
+ [1978.52 --> 1991.18] So back to the timeline, I would say that a major release between the beginning of the release cycle and the end, we are talking months, like two, three months.
371
+ [1992.04 --> 1999.96] Perhaps more if there are bugs that crept in and are difficult to track down.
372
+ [1999.96 --> 2010.78] And for minor releases, they are shorter, but we are still in the range of weeks and perhaps months sometimes.
373
+ [2011.62 --> 2011.76] Okay.
374
+ [2012.08 --> 2022.44] So now that we think about the FreeBSD release engineering as a whole, what can RabbitMQ learn from the FreeBSD release engineering?
375
+ [2022.44 --> 2030.00] So I like the fact that it's based on fixed interval between major and minor releases.
376
+ [2030.52 --> 2039.00] And the fact that the release cycle follows a calendar which is announced in advance and to everyone involved, contributors and users.
377
+ [2039.82 --> 2047.16] I think this is a great tool to improve the communication and the organization of the work, in fact.
378
+ [2048.16 --> 2051.46] Yeah, I would love to introduce that into RabbitMQ.
379
+ [2051.46 --> 2054.52] Having that calendar, in fact.
380
+ [2054.86 --> 2056.44] Yeah, I think it makes a lot of sense.
381
+ [2056.68 --> 2062.68] I mean, we have been thinking about this for a while and we have been looking at, well, FreeBSD is one example, but also other projects.
382
+ [2063.32 --> 2065.20] And it does sound like a good idea.
383
+ [2065.58 --> 2070.08] Obviously, between the idea and the implementation, there's a whole ocean of things to go through.
384
+ [2070.36 --> 2072.84] But the direction sounds reasonable to me.
385
+ [2073.28 --> 2078.44] I'm wondering if there are any other open source projects that you like how they do release engineering.
386
+ [2078.44 --> 2081.94] So which one do I know about?
387
+ [2081.94 --> 2087.56] So for instance, there is the Darktable open source photo editing project.
388
+ [2088.00 --> 2090.82] They are also publishing a calendar in advance.
389
+ [2091.16 --> 2107.80] And because they provide translations of the software, they also have to take that into account into their release engineering cycle to give time to translators to provide their translations.
390
+ [2107.80 --> 2111.32] That's one thing I like in what they do.
391
+ [2111.72 --> 2113.80] Another one is the Mesa library.
392
+ [2114.72 --> 2117.94] So the library you can use on Unix.
393
+ [2118.34 --> 2125.54] So it's a library providing 3D implementation of OpenGL, for instance, and all the new standards in that area.
394
+ [2125.54 --> 2128.66] And now it grew a lot and provides, as well,
395
+ [2128.66 --> 2132.10] the userland parts of GPU drivers, for instance.
396
+ [2133.00 --> 2135.36] So this is a large piece of code now.
397
+ [2136.06 --> 2145.44] And what I like in their release engineering, so I don't remember if they follow fixed timeline or if they provide calendars.
398
+ [2145.44 --> 2149.76] But I like how they handle the patches.
399
+ [2150.22 --> 2163.38] Like a developer is working on a patch and he doesn't know if that patch will go into the next minor release or if that needs to wait for the next major release.
400
+ [2163.96 --> 2169.22] So they have someone, like in FreeBSD, who is responsible for managing the release engineering.
401
+ [2169.22 --> 2173.10] This time he's not hired or paid for that work.
402
+ [2173.20 --> 2175.64] So it's on his free time, spare time.
403
+ [2176.74 --> 2180.22] Yeah, they are trying various ways to...
404
+ [2181.20 --> 2182.46] That was a few years ago.
405
+ [2182.68 --> 2185.70] So that probably settled since.
406
+ [2185.82 --> 2193.02] But they wanted to try several things on what would be the best way to make that communication possible.
407
+ [2193.24 --> 2198.02] Like a developer wants that patch in the next stable minor release.
408
+ [2198.02 --> 2202.80] But it might not fit the timeline and so on.
409
+ [2202.94 --> 2207.70] So they tried tags in the Git commits.
410
+ [2208.20 --> 2214.10] I think they tried mailing list, a specific mailing list where people would post their patch and so on.
411
+ [2214.14 --> 2216.42] So I don't know what they choose in the end.
412
+ [2216.56 --> 2220.70] But yeah, I like how they explored various methods.
413
+ [2221.28 --> 2223.50] Do you know what I remember about this specific topic?
414
+ [2223.96 --> 2226.94] During one of our RabbitMQ team summits...
415
+ [2226.94 --> 2229.16] By the way, RabbitMQ is a distributed team.
416
+ [2229.42 --> 2232.74] As I mentioned, twice per year, we used to meet in a single place.
417
+ [2232.82 --> 2233.52] It used to be London.
418
+ [2233.68 --> 2236.52] So we had like an on-site, which was an off-site for some.
419
+ [2236.66 --> 2237.58] But anyways, it was an on-site.
420
+ [2238.32 --> 2243.68] And during these team summits, I noticed that your laptop had like a weird thing on its screen.
421
+ [2244.26 --> 2245.48] And I was saying, like, I said,
422
+ [2245.48 --> 2247.92] JSP, I think your screen, like, needs replacing.
423
+ [2248.06 --> 2249.16] This laptop needs replacing.
424
+ [2249.28 --> 2250.86] And you were saying, no, no, it's okay.
425
+ [2251.04 --> 2253.04] I'm working on some graphics drivers.
426
+ [2253.42 --> 2255.42] And I don't quite have this like thing right.
427
+ [2255.50 --> 2257.08] So pixels were looking like a bit weird.
428
+ [2257.18 --> 2259.36] And I noticed the pixels started changing.
429
+ [2259.46 --> 2265.12] And I was like, oh, JSP, why did you have to bring like a developer graphics card?
430
+ [2265.26 --> 2268.50] And then like a development graphics drivers to the team summit?
431
+ [2268.64 --> 2270.06] Like now we can't code properly.
432
+ [2270.06 --> 2276.48] So then obviously I would, like, take my laptop out and, okay, let's get a properly tested and
433
+ [2276.48 --> 2279.18] properly running graphics card and the graphics drivers.
434
+ [2279.56 --> 2280.82] That was a fun one.
435
+ [2281.10 --> 2285.34] And then you told me about like, you know, your interest in developing graphics drivers,
436
+ [2285.34 --> 2286.66] which I thought was fascinating.
437
+ [2286.66 --> 2287.68] Like, how do you even do that?
438
+ [2287.72 --> 2288.94] I was like, whoa, maybe?
439
+ [2289.42 --> 2292.10] Little did I know that, you know, also, like, FreeBSD.
440
+ [2292.38 --> 2294.24] I have to thank you for my FreeNAS server.
441
+ [2294.64 --> 2295.56] How stable that is.
442
+ [2296.28 --> 2297.54] And a couple of other things.
443
+ [2297.66 --> 2299.20] So yeah, that is pretty important.
444
+ [2299.20 --> 2301.30] And I mean, it's the backups, right?
445
+ [2301.32 --> 2305.12] All the pictures before iCloud and before other services, I used to back everything up on
446
+ [2305.12 --> 2306.40] FreeNAS and it never failed me.
447
+ [2306.56 --> 2308.20] So there's something to say there.
448
+ [2308.56 --> 2309.76] ZFS has something to do with it.
449
+ [2309.94 --> 2312.54] Drives failed, but FreeBSD never failed me.
450
+ [2312.64 --> 2313.64] So I was very happy.
451
+ [2314.58 --> 2314.74] Nice.
452
+ [2314.84 --> 2315.46] That's good to know.
453
+ [2315.86 --> 2319.02] So yeah, that is good feedback for you.
454
+ [2319.64 --> 2319.88] Yes.
455
+ [2320.42 --> 2321.44] It wasn't 5.0.
456
+ [2321.58 --> 2323.16] It was, I think, 9, 10.
457
+ [2323.32 --> 2324.28] Actually, no, it was 11.
458
+ [2324.34 --> 2325.10] I remember that one.
459
+ [2325.42 --> 2327.70] 11 when I like started really depending on it.
460
+ [2327.76 --> 2328.36] It was great.
461
+ [2328.36 --> 2330.86] So that was a great few years of service.
462
+ [2331.52 --> 2331.78] Great.
463
+ [2332.80 --> 2335.00] And yeah, you mentioned the graphic drivers.
464
+ [2335.42 --> 2342.46] That's a nice topic around release engineering because it's one area where it's difficult to find the right balance.
465
+ [2342.46 --> 2347.64] In fact, because we want to ship, obviously, a stable operating system in the end.
466
+ [2347.64 --> 2359.74] And the Mesa library also wants to be stable for all end users so that it can render your desktop, videos, and video games.
467
+ [2359.74 --> 2368.90] But that's an area where the hardware and the new models are put in the market at a high pace.
468
+ [2369.08 --> 2375.62] The technology evolves a lot and the GPU is a very complex beast.
469
+ [2375.62 --> 2380.62] So on one side, you want to support the latest GPUs.
470
+ [2380.62 --> 2389.16] But because if a user today buys a laptop, he will go for the latest shiny one.
471
+ [2389.26 --> 2391.88] He won't choose the one released three years ago.
472
+ [2391.88 --> 2398.36] So you want to ship all those new drivers and bug fixes as soon as possible.
473
+ [2398.94 --> 2405.66] But it's very difficult because the drivers themselves are very complex.
474
+ [2405.66 --> 2417.96] So it's very difficult to test what you ship because no one has all the various graphic cards and GPUs and configuration in general.
475
+ [2418.14 --> 2420.52] So it's impossible to thoroughly test.
476
+ [2420.98 --> 2429.14] So yeah, it's very difficult to find the right balance between shipping often and shipping something stable.
477
+ [2430.30 --> 2434.66] And I don't think we've found the right balance in FreeBSD either.
478
+ [2434.66 --> 2438.76] So now drivers are provided as packages.
479
+ [2439.22 --> 2442.74] They are not in the core anymore, the source code of FreeBSD.
480
+ [2443.10 --> 2444.46] So that improved a lot.
481
+ [2444.78 --> 2454.06] But yeah, it still has some issues from time to time to decide on when to ship a new version of that package.
482
+ [2454.62 --> 2460.00] I think the more you dig into this and the more you work with this, you realize that it's not as straightforward.
483
+ [2460.00 --> 2465.72] And everybody tries to make the best decisions they can given what they know, right?
484
+ [2465.76 --> 2469.28] I mean, no one is trying to purposefully ship broken software.
485
+ [2469.80 --> 2471.38] Sometimes it's really hard.
486
+ [2471.52 --> 2475.42] And it looks like people don't care or they don't think, but they do.
487
+ [2475.82 --> 2477.22] And it's really, really hard.
488
+ [2477.22 --> 2479.50] That's something worth emphasizing again and again.
489
+ [2480.16 --> 2480.26] Yeah.
490
+ [2480.52 --> 2491.94] I think in certain contexts, it's much easier to maybe use feature flags or something similar in that you're shipping the feature, but you're not enabling the feature.
491
+ [2492.08 --> 2494.38] And this is a very important distinction to make.
492
+ [2494.88 --> 2497.42] In some cases, you can ship it, but not enable it.
493
+ [2497.42 --> 2498.42] And that's okay.
494
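As a rough illustration (a made-up module, not RabbitMQ's actual feature flags subsystem), the pattern being described looks something like this in Erlang: both code paths ship with the release, but the new one stays dormant until a configuration flag turns it on.

```erlang
%% Hypothetical sketch of shipping a feature without enabling it: the
%% v2 code path is in the release, and a configuration flag decides
%% which path actually runs.
-module(my_feature).

-export([handle_request/1]).

handle_request(Request) ->
    %% application:get_env/3 reads the flag, defaulting to "off".
    case application:get_env(my_app, new_codepath_enabled, false) of
        true  -> handle_request_v2(Request);  %% shipped and enabled
        false -> handle_request_v1(Request)   %% shipped, but dormant
    end.

handle_request_v1(Request) ->
    {ok, {v1, Request}}.

handle_request_v2(Request) ->
    {ok, {v2, Request}}.
```

Flipping new_codepath_enabled to true for a subset of deployments is what lets the feedback trickle in before the feature is on everywhere.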
+ [2498.58 --> 2503.56] And then test it or, you know, trickle it down through users, beta testers and whatnot.
495
+ [2503.82 --> 2508.24] And when you have all your feedback, then if you can ship an update, then you do that.
496
+ [2508.34 --> 2509.76] And everything is good.
497
+ [2509.80 --> 2514.20] And everybody has the best latest version, right?
498
+ [2514.30 --> 2519.06] Or the closest it can get because it can always be improved and there will always be bugs.
499
+ [2519.48 --> 2522.08] After all, we are all human and we will make mistakes.
500
+ [2522.40 --> 2523.10] And that's okay.
501
+ [2523.18 --> 2523.88] That's not the problem.
502
+ [2524.40 --> 2526.28] Don't try not to make mistakes.
503
+ [2526.28 --> 2533.26] Try to limit the impact of those mistakes and fix them before anyone notices because then it looks like you've never made the mistake.
504
+ [2533.44 --> 2535.20] Well, everybody knows the truth, right?
505
+ [2535.70 --> 2538.20] So, yeah, countless times this has happened and it will happen.
506
+ [2538.38 --> 2540.06] So, better be honest about it.
507
+ [2540.54 --> 2546.14] That's why it's important to communicate well to contributors and users.
508
+ [2546.78 --> 2549.10] That's the responsibility of that release engineering.
509
+ [2549.40 --> 2552.78] You know that it might not be perfect in the end, what you ship.
510
+ [2552.78 --> 2562.50] But at least you try to make sure that people are aware of what is fine and what might not be fine.
511
+ [2562.50 --> 2580.56] This episode is brought to you by Linode.
512
+ [2580.62 --> 2585.24] Gone are the days when Amazon Web Services was the only cloud provider in town.
513
+ [2585.24 --> 2597.96] Linode stands tall to offer cloud computing developers trust, easily deploy cloud compute, storage, and networking in seconds with a full-featured API, CLI, and cloud manager with a user-friendly interface.
514
+ [2597.96 --> 2607.52] Whether you're working on a personal project or managing your enterprise's infrastructure, Linode has the pricing, scale, and support you need to launch and scale in the cloud.
515
+ [2608.00 --> 2612.22] Get started with $100 in free credit at linode.com slash changelog.
516
+ [2612.52 --> 2616.08] Again, linode.com slash changelog.
517
+ [2623.24 --> 2626.94] So, JSP, what did you work on before RabbitMQ?
518
+ [2626.94 --> 2631.14] So, I worked as an Erlang developer for a small French company.
519
+ [2631.82 --> 2643.90] The company was providing a website aggregating ads so that people could look for jobs, apartments, various objects they would like to buy.
520
+ [2644.18 --> 2646.62] The Craigslist of the country, for the listeners.
521
+ [2646.62 --> 2647.44] Yeah, something like that.
522
+ [2647.44 --> 2656.46] And we wanted to provide some kind of social media features on top of that so that people could easily interact between them.
523
+ [2656.94 --> 2660.18] In that company, so I was an Erlang developer.
524
+ [2660.66 --> 2666.12] We were two Erlang developers working on the server side of that service.
525
+ [2666.50 --> 2672.08] We chose to take Yaws, which is an Erlang-based web server.
526
+ [2672.08 --> 2680.00] So, we chose that because it was easy for us to extend right there, directly in the Erlang VM.
527
+ [2680.26 --> 2685.44] In fact, add our own Erlang modules and applications in addition to Yaws.
528
+ [2686.12 --> 2690.48] The website itself was developed in PHP and JavaScript.
529
+ [2690.98 --> 2692.92] So, that part, we were not working on it.
530
+ [2692.92 --> 2695.56] Other developers were responsible for it.
531
+ [2695.56 --> 2703.56] But, yeah, those PHP files and static files were served by an Erlang VM.
532
+ [2704.52 --> 2714.68] And what I liked about what we did is that we put some effort to make sure that the website was always running,
533
+ [2714.68 --> 2719.36] even when we were working on it and upgrading it.
534
+ [2720.14 --> 2726.28] So, if we had to upgrade the operating system, which was Debian, and especially the kernel,
535
+ [2726.76 --> 2729.14] obviously, we would have to reboot the computer.
536
+ [2730.42 --> 2734.58] But otherwise, we wanted to leave the service running.
537
+ [2734.58 --> 2741.32] And what was great is that we could, in the end, benefit from the hot code reloading feature of Erlang,
538
+ [2741.44 --> 2743.70] which is really an awesome feature.
539
+ [2744.12 --> 2750.90] We were very happy because we could build Debian packages for our service.
540
+ [2751.02 --> 2757.96] So, it packaged the Yaws server, all our Erlang code base, and the website itself.
541
+ [2758.08 --> 2763.86] So, the PHP scripts, static resources, so JavaScript and CSS and images, and so on.
542
+ [2764.58 --> 2768.00] So, we packaged everything as Debian packages.
543
+ [2769.36 --> 2775.78] And when we would apt-get update, apt-get dist-upgrade the machine, the servers,
544
+ [2776.40 --> 2781.22] then the new copy of the Erlang code was deployed,
545
+ [2781.66 --> 2786.92] and we were using the Erlang features to reload that code live,
546
+ [2787.02 --> 2792.20] while the server, the HTTP server, was still running and serving requests.
547
+ [2792.20 --> 2795.90] And, yeah, we were very happy with that.
548
+ [2796.32 --> 2797.96] It's a really great feature of Erlang.
549
+ [2797.96 --> 2801.84] So, to me, that sounds like you're using Erlang the way it was meant to be used.
550
+ [2802.06 --> 2805.80] And what you're telling me is that it works really well when you use it the way it was built.
551
+ [2806.20 --> 2806.48] Okay.
552
+ [2806.66 --> 2807.98] Well, that is a great compliment.
553
+ [2808.48 --> 2811.38] And working as expected in this case, it's great, right?
554
+ [2811.76 --> 2813.20] And sometimes even rare.
555
+ [2814.02 --> 2816.40] And obviously, not all software works as expected.
556
+ [2816.48 --> 2817.32] That's why I mentioned this.
557
+ [2817.40 --> 2820.22] And when it does, like, oh, yes, everything works the way it should.
558
+ [2820.26 --> 2820.82] It's great.
559
+ [2820.82 --> 2821.56] And it feels great.
560
+ [2821.56 --> 2825.58] So, you were on the beaten track as designed, and everything was good.
561
+ [2825.98 --> 2830.42] I know the answer to this, but I know that many listeners will be wondering,
562
+ [2830.82 --> 2833.64] first of all, is RabbitMQ using hot code reloading?
563
+ [2834.04 --> 2834.68] No, it's not.
564
+ [2834.88 --> 2836.24] And the follow-up, why not?
565
+ [2836.62 --> 2839.74] So, it's quite difficult to manage.
566
+ [2839.74 --> 2849.42] The first part is that all developers and all contributions to the RabbitMQ code might lead to changes,
567
+ [2849.74 --> 2858.16] which don't look like breaking changes when you think of a single instance of your Erlang VM, for instance.
568
+ [2858.28 --> 2859.34] You stop the service.
569
+ [2860.14 --> 2860.28] Okay.
570
+ [2860.36 --> 2862.80] So, you load the code from the disk.
571
+ [2863.12 --> 2864.16] It runs as expected.
572
+ [2865.42 --> 2866.96] You stop the VM.
573
+ [2867.38 --> 2868.72] And, okay, all is fine.
574
+ [2869.74 --> 2879.74] But problems start to show when, for instance, the state of a process changes between one copy of the module and the next one.
575
+ [2880.22 --> 2886.04] So, you need to handle that migration from state V1 to state V2.
576
+ [2886.92 --> 2890.48] There are tools to do that in Erlang, but this is not magic.
577
+ [2890.80 --> 2896.38] You have to use them and implement that migration from V1 to V2.
578
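For listeners who want to picture that migration, here is a minimal sketch in Erlang/OTP, a hypothetical gen_server (not RabbitMQ code) whose code_change/3 callback converts the old state shape into the new one during a hot upgrade on a single node:

```erlang
%% Hypothetical example: v1 of the state was the tuple {state, Count};
%% v2 adds a last_updated field. When the release handler hot-upgrades
%% this module, code_change/3 migrates the old state so the process
%% keeps running without being restarted.
-module(counter_server).
-behaviour(gen_server).

-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-record(state, {count = 0, last_updated = undefined}).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) ->
    {ok, #state{}}.

handle_call(get_count, _From, State = #state{count = Count}) ->
    {reply, Count, State}.

handle_cast(increment, State = #state{count = Count}) ->
    {noreply, State#state{count = Count + 1,
                          last_updated = erlang:system_time(second)}}.

handle_info(_Msg, State) ->
    {noreply, State}.

terminate(_Reason, _State) ->
    ok.

%% The old (v1) state arrives as the 2-tuple {state, Count};
%% return the new (v2) record. Already-migrated state passes through.
code_change(_OldVsn, {state, Count}, _Extra) ->
    {ok, #state{count = Count, last_updated = undefined}};
code_change(_OldVsn, State = #state{}, _Extra) ->
    {ok, State}.
```

In a real release you would also need .appup instructions telling the release handler to call this callback, which is part of the extra work on contributors' shoulders being described here.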
+ [2896.38 --> 2904.38] And it gets even more complicated when you're having a cluster of Erlang VMs.
579
+ [2905.00 --> 2920.94] So, you have to take care of the fact that, for instance, an Erlang process, while the code is reloaded, will modify its own state and will start to use inter-process messages with a newer structure.
580
+ [2920.94 --> 2932.96] So, when I say message in this context, it's messages exchanged between Erlang processes, not messages that RabbitMQ would handle from other applications.
581
+ [2932.96 --> 2936.70] So, you have to handle all those changes live.
582
+ [2937.02 --> 2945.66] So, that new process, which was reloaded, might receive new messages using the new format from processes on that same node.
583
+ [2945.94 --> 2950.62] But it might receive old messages from a node which was not yet upgraded.
584
+ [2951.70 --> 2952.18] And so on.
585
+ [2952.18 --> 2955.86] So, that part is quite difficult to handle.
586
+ [2956.16 --> 2960.08] And if you have mistakes, then it will crash, obviously.
587
+ [2960.60 --> 2967.74] So, that feature is great, but it puts a lot of load and responsibilities on developers and contributors' shoulders.
588
+ [2968.36 --> 2971.26] Because you have to handle all the cases.
589
+ [2971.26 --> 2976.14] And the second part, which is difficult, is how to package that.
590
+ [2976.76 --> 2984.74] Because Erlang was designed so that, in the end, you do not ship just the RabbitMQ Erlang applications, for instance.
591
+ [2985.02 --> 2993.24] It was designed so that you ship the Erlang VM itself, the Erlang code you want to run on it, and the configuration.
592
+ [2993.24 --> 3002.24] I mean, in the end, it's an appliance that you put on a server, but it's a whole thing, and a standalone thing.
593
+ [3003.06 --> 3007.08] It has the VM, the code, and the configuration.
594
+ [3007.46 --> 3013.20] It's not meant to support changes to that configuration, even that.
595
+ [3013.20 --> 3022.12] And trying to package that, in my previous job, to package that as Debian packages, it was a great challenge.
596
+ [3022.12 --> 3026.90] Because the Erlang VM is installed by other Debian packages.
597
+ [3027.60 --> 3031.62] We also want to be able to change the configuration.
598
+ [3032.00 --> 3042.14] Configuration which was installed not by the package, but by a configuration management tool; we were using Puppet.
599
+ [3042.14 --> 3051.42] So, it's quite difficult to use that Erlang feature in today's packaging and configuration management infrastructure.
600
+ [3051.42 --> 3052.62] I remember that.
601
+ [3052.76 --> 3057.06] This just reminds me of the discussion that we had a few years back about this very subject.
602
+ [3057.52 --> 3059.84] And it's interesting how it comes back again.
603
+ [3060.36 --> 3067.52] I remember the plugin system in RabbitMQ being one of the challenges when it comes to packaging RabbitMQ in an Erlang release.
604
+ [3068.20 --> 3072.22] Being able to define what it's running, when, and how it's running.
605
+ [3072.48 --> 3075.44] Again, for the listeners, RabbitMQ has this concept of plugins.
606
+ [3075.78 --> 3077.18] A lot of them ship with RabbitMQ.
607
+ [3077.28 --> 3080.42] Others can be added, just dropped in a directory, and off you go.
608
+ [3080.42 --> 3082.92] And those plugins, they are applications.
609
+ [3083.86 --> 3088.20] So, RabbitMQ really is, this is the way I think about it.
610
+ [3088.50 --> 3095.86] It's a microservices architecture in a single Erlang VM, in a single system process.
611
+ [3096.30 --> 3102.26] Because of all these applications exchanging messages, and by the way, they could be across nodes.
612
+ [3102.26 --> 3106.58] So, that's where the Erlang distribution comes in, where those messages have to traverse the network.
613
+ [3107.18 --> 3109.02] And then you have a cluster of three nodes or four nodes.
614
+ [3109.70 --> 3118.70] And any message, by the way, this is like an AMQP message, whether it's 0.9.1 or 1.0, or an MQTT protocol message.
615
+ [3118.70 --> 3127.22] It can arrive at any node, and it will end up in the right place, because the cluster is aware of where the members are, where the processes are, how to send those messages internally.
616
+ [3127.70 --> 3128.96] And that's what makes it challenging.
617
+ [3128.96 --> 3133.48] So, the one thing that helped, I think, in recent years is containers.
618
+ [3134.52 --> 3139.34] Containerizing RabbitMQ, having that tarball, which really used to be the Debian package.
619
+ [3139.72 --> 3140.96] Now it's called something else.
620
+ [3141.14 --> 3141.98] FreeBSD jail.
621
+ [3142.34 --> 3143.12] Similar concept.
622
+ [3143.12 --> 3148.68] So, the container allows us to package Erlang, even the operating system, right?
623
+ [3148.72 --> 3151.02] Because that's what you have, OpenSSL, and all the dependencies.
624
+ [3151.30 --> 3154.84] And we have a single tarball, which is a runnable artifact.
625
+ [3155.30 --> 3160.84] You spin it up, and it has everything that you need in the right order, pre-configured, a bunch of things.
626
+ [3161.00 --> 3162.04] So, that really helps.
627
+ [3162.52 --> 3166.96] And then on top of that, obviously, if you use something like Kubernetes, you want a cluster operator,
628
+ [3166.96 --> 3174.44] or an operator that manages your deployments, which is especially important if you have a clustered system,
629
+ [3174.80 --> 3179.20] clustered stateful system, such as RabbitMQ, or a distributed stateful system.
630
+ [3179.96 --> 3181.82] And in those cases, it really helps.
631
+ [3182.34 --> 3187.70] And this just made me realize that one discussion which I would really like to have is with Chuni
632
+ [3187.70 --> 3193.28] about the cluster operator and how RabbitMQ runs in the context of Kubernetes.
633
+ [3193.28 --> 3196.62] Because I think it does a lot of things really, really well.
634
+ [3196.96 --> 3201.70] Being a stateful distributed system on Kubernetes, wow, that's challenging.
635
+ [3202.18 --> 3208.68] And I think the new tools made this problem easier from some perspectives,
636
+ [3208.96 --> 3211.06] but it also made it harder from others.
637
+ [3211.46 --> 3214.12] And adapting to the new world, it's very challenging.
638
+ [3214.58 --> 3217.02] And I think a lot of this is lost to the details.
639
+ [3217.06 --> 3219.28] And it's important because many can learn from this.
640
+ [3219.86 --> 3221.58] Many stateful systems can learn from this.
641
+ [3221.58 --> 3223.98] And I know a few stateful systems like databases,
642
+ [3223.98 --> 3232.00] which don't work that well in the context of containers, of Kubernetes, of things that come and go so often,
643
+ [3232.16 --> 3237.60] networks that break all the time, or more frequently than they do in the traditional data center,
644
+ [3237.66 --> 3239.30] in the traditional bare metal hosts.
645
+ [3239.60 --> 3241.20] So that's something which is challenging.
646
+ [3241.20 --> 3241.76] Okay.
647
+ [3242.24 --> 3250.76] So I would say that my understanding is that you miss this hot code reloading from the olden days that RabbitMQ doesn't have.
648
+ [3250.82 --> 3255.16] And there are some practical limitations why it would be very difficult to implement.
649
+ [3255.52 --> 3258.40] Not impossible, but very, very challenging.
650
+ [3258.40 --> 3261.58] And is there anything else that you miss?
651
+ [3261.98 --> 3265.08] No, I think that's something I would love to see in RabbitMQ.
652
+ [3265.32 --> 3270.00] And even though it's difficult, I don't think it's impossible.
653
+ [3270.20 --> 3275.98] For instance, if we were to ship only bug fixes into our patch versions,
654
+ [3276.40 --> 3280.04] then it would be pretty easy to have that hot code reloading.
655
+ [3280.04 --> 3287.24] And the way you describe it in Erlang means that we could say that going from a patch release to the next one,
656
+ [3287.72 --> 3290.06] it supports hot code reloading.
657
+ [3290.60 --> 3296.84] But we can also say that going from a version to the next minor, it doesn't.
658
+ [3297.10 --> 3299.48] And the VM has to be restarted.
659
+ [3299.94 --> 3302.76] So even that is supported by Erlang itself.
660
+ [3302.96 --> 3308.74] The hot code reloading knows when it cannot be reloaded live.
661
+ [3308.74 --> 3313.62] So I think that if we were to have only bug fixes in patch releases,
662
+ [3313.88 --> 3316.18] we could have hot code reloading implemented.
663
+ [3316.64 --> 3321.34] And it would not add a lot of load to our team, I think.
664
+ [3321.68 --> 3323.38] That is achievable.
665
+ [3324.06 --> 3329.56] And a great benefit from that is that upgrading RabbitMQ to the next patch release
666
+ [3329.56 --> 3332.10] means you don't have to restart RabbitMQ,
667
+ [3332.64 --> 3337.52] which means you don't need to spend a lot of time starting RabbitMQ
668
+ [3337.52 --> 3342.72] if you have thousands or tens of thousands or hundreds of thousands of queues
669
+ [3342.72 --> 3345.00] and exchanges and bindings and so on.
670
+ [3345.40 --> 3348.70] Well, I've really enjoyed this discussion, JSP.
671
+ [3348.90 --> 3349.24] Yeah, me too.
672
+ [3349.26 --> 3350.14] Thank you for joining me.
673
+ [3350.30 --> 3351.36] It was great fun.
674
+ [3351.76 --> 3353.08] I'm looking forward to the next one.
675
+ [3353.70 --> 3357.94] And I'm wondering if there's any closing thoughts that you have?
676
+ [3357.94 --> 3364.04] Yeah, so I would like to know, in fact, what people are doing in their job
677
+ [3364.04 --> 3370.46] or their personal projects to ship what they produce.
678
+ [3371.06 --> 3376.08] Do they have experience with various release engineering practices
679
+ [3376.08 --> 3381.64] and what works and didn't work for them?
680
+ [3381.64 --> 3387.16] So I would love to hear from people who are writing software,
681
+ [3387.44 --> 3393.82] but I would also love to hear from people who are consuming those open source projects
682
+ [3393.82 --> 3399.12] or even commercial projects, what they like and what they don't like
683
+ [3399.12 --> 3403.54] when they want to learn more about the new versions of the tool they use.
684
+ [3403.54 --> 3408.36] So if you're a FreeBSD user or a RabbitMQ user,
685
+ [3408.72 --> 3411.28] let JSP know what you like about the release engineering,
686
+ [3411.44 --> 3414.98] what don't you like and what would you like to be better
687
+ [3414.98 --> 3416.96] and what does even better mean for you?
688
+ [3417.32 --> 3420.00] He would enjoy and I would enjoy as well knowing about that.
689
+ [3420.06 --> 3422.30] Yeah, we will both benefit from the answers.
690
+ [3423.24 --> 3424.60] Well, this was fun, JSP.
691
+ [3424.80 --> 3425.54] Thank you very much.
692
+ [3425.68 --> 3426.24] See you next time.
693
+ [3426.42 --> 3427.72] Yeah, thank you for the invitation.
694
+ [3427.72 --> 3433.54] That's it for this episode of Ship It.
695
+ [3433.82 --> 3435.20] Thank you for tuning in.
696
+ [3435.42 --> 3438.36] We have a bunch of podcasts for developers at Changelog
697
+ [3438.36 --> 3439.68] that you should check out.
698
+ [3440.06 --> 3443.94] Subscribe to the master feed at changelog.com forward slash master
699
+ [3443.94 --> 3446.26] to get everything we ship.
700
+ [3446.68 --> 3450.70] I want to personally invite you to join your fellow changeloggers
701
+ [3450.70 --> 3453.80] at changelog.com forward slash community.
702
+ [3454.10 --> 3455.60] It's free to join and stay.
703
+ [3455.60 --> 3459.16] Leaving, on the other hand, will cost you some happiness credits.
704
+ [3459.64 --> 3461.18] Come hang with us on Slack.
705
+ [3461.58 --> 3462.46] There are no imposters.
706
+ [3462.88 --> 3464.06] Everyone is welcome.
707
+ [3464.66 --> 3468.96] Huge thanks again to our partners Fastly, LaunchDarkly and Linode.
708
+ [3469.30 --> 3473.98] Also, thanks to Breakmaster Cylinder for making all our awesome beats.
709
+ [3474.44 --> 3475.64] That's it for this week.
710
+ [3475.92 --> 3476.70] See you next week.
711
+ [3506.70 --> 3507.78] Bye.
Why Kubernetes_transcript.txt ADDED
@@ -0,0 +1,323 @@
1
+ **Gerhard Lazu:** I'd like to start with a story. I know that you've been helping Changelog.com, the codebase, in different ways... The thing which I remember is that our response latency went down; you did some tweaks... Is that right, or am I confusing you with Alex?
2
+
3
+ **Lars Wikman:** I think Alex gets the credit on that...
4
+
5
+ **Gerhard Lazu:** I definitely know he improved the N+1 queries, that's for sure...
6
+
7
+ **Lars Wikman:** Yeah. I even caused some of those...
8
+
9
+ **Gerhard Lazu:** Okay, okay... So you're the opposite of Alex.
10
+
11
+ **Lars Wikman:** Yes, exactly. I'm here to create opportunities for performance improvements.
12
+
13
+ **Gerhard Lazu:** I see. So that's the way it goes... Okay. So you're making it worse and he's making it better... The difference is like it's zero, right? Okay, so we're not going anywhere.
14
+
15
+ **Lars Wikman:** \[04:04\] It's very important to have a stable codebase, and a very stable operation... \[laughs\]
16
+
17
+ **Gerhard Lazu:** It is.
18
+
19
+ **Lars Wikman:** Some of the work I've done with The Changelog has been on a few things that haven't been released, and a few things that -- basically, housekeeping around how emails are sent out, and to whom. I think and I hope that there will be some more stuff done with the meta-costs feature I made; I had the opportunity to write a small DSL, which would be nice to expose to the public. I don't think Jerod has put it into action, so this is a good time to shame a little bit about it...
20
+
21
+ **Gerhard Lazu:** Okay. So Jerod, if you're not listening, it's okay, and if you are listening, what's up with the meta-costs? I don't know anything about it, but... Yeah - what's up with it? That's what I'm wondering.
22
+
23
+ **Lars Wikman:** Yeah. But it was very fun to get a chance to work with Jerod and the Changelog codebase in a slightly dedicated fashion. So it was a few months... That would have been last summer that I spent some time with this codebase. Then I introduced Alex when I didn't have time anymore, and he seems to have torn things up. He really, really has pushed a few things forward.
24
+
25
+ **Gerhard Lazu:** He did, yes. Definitely. The PromEx stuff I think is the one that I got the most excited about, because it touches on the infrastructure side of things; it just integrates a couple of things together, so that it is from my perspective very visible and something which I'm very interested in to see how are things behaving.
26
+
27
+ The N+1 optimizations, improvements, N+1 query improvements - that was great to see as well. I didn't know that you were the cause for it, so... I'm not sure how I feel about that. \[laughs\]
28
+
29
+ **Lars Wikman:** I think I only introduced one fairly chunky case of them. It was mostly when you're doing development that it turned things a little bit slow to start, because I was doing something optimistic, but it didn't turn out...
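+ For anyone who hasn't run into the N+1 pattern being discussed, here is a minimal Ecto sketch; the schema and module names are placeholders for illustration, not taken from the Changelog codebase:
+
+ ```elixir
+ import Ecto.Query
+
+ # N+1: one query for the episodes, then one extra query per episode
+ episodes = Repo.all(Episode)
+ titles = Enum.map(episodes, fn e -> {e.title, Repo.get!(Host, e.host_id).name} end)
+
+ # Preloading the association brings it down to two queries total
+ episodes = Repo.all(from e in Episode, preload: [:host])
+ titles = Enum.map(episodes, fn e -> {e.title, e.host.name} end)
+ ```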
30
+
31
+ **Gerhard Lazu:** I mean, the key takeaway from this little conversation is that you are deep into Elixir, into Erlang, into -- is Erlang fair to say? You're a co-host on BEAM Radio, so BEAM is all Erlang...
32
+
33
+ **Lars Wikman:** Yeah. I'm very excited and enthusiastic about Erlang, but I don't write Erlang. I write Elixir. It runs on the same VM as Erlang, so all the Erlang technology benefits Elixir. A lot of the Elixir technology benefits Erlang, but it can't fully go in both directions, unfortunately. Mostly a technical reason for it...
34
+
35
+ **Gerhard Lazu:** Okay.
36
+
37
+ **Lars Wikman:** But yeah, I am very invested in the BEAM ecosystem. BEAM is the name of the virtual machine...
38
+
39
+ **Gerhard Lazu:** Do you know what BEAM stands for?
40
+
41
+ **Lars Wikman:** I think early on it was Bogdan's Something-something Machine. I don't remember exactly...
42
+
43
+ **Gerhard Lazu:** Erlang Abstract Machine... Bogdan's Erlang Abstract Machine.
44
+
45
+ **Lars Wikman:** Exactly. Because initially, it was JAM, which was Joe's Abstract Machine, I imagine...
46
+
47
+ **Gerhard Lazu:** Yes. Joe, you're still in our minds. I know that you're not listening, but those that know, Joe Armstrong, the co-creator of Erlang - you're still in our minds. Thank you for everything you've done. You've shipped a great thing into the world.
48
+
49
+ **Lars Wikman:** Yeah. The BEAM and Erlang absolutely wild. It's been interesting that through many years I've heard of Erlang, and people have been like "That's a weird one, but it has some really strong ideas and it has some really strong features", and it's like "Oh, okay. Whatever." I don't really do FP; it wasn't really in my wheelhouse, and I figured it was probably too complicated for me. Now I'm very, very keen to avoid working with non-BEAM languages, if I can... Because there's just so much you get with a BEAM that you just don't have in other runtimes, or that you have to work so very hard for in other runtimes.
50
+
51
+ **Gerhard Lazu:** Which are your top three favorite BEAM features?
52
+
53
+ **Lars Wikman:** \[08:07\] Concurrency and parallelism at the same time, for essentially no extra effort. It makes you do concurrency and parallelism correctly and reasonably, without tripping you into sort of mutable state and the dangers of concurrency and parallelism. So that's one.
54
+
55
+ Then there's the whole resiliency thing, which is built on sort of the same idea, or some of the same ideas, where there will be things that happen to your application that are unexpected, that you can't really catch with just catching an exception. Maybe the disk was full, maybe the service you were talking to was down... There's always something to make it blow up. And it has been described as the "Let it crash philosophy", but it's not always the most -- it's not the best marketing. It makes managers very, very nervous... But the idea that it's okay if certain components fail, and an important thing is to have a recovery strategy. This actually sort of feeds into the Kubernetes thing, which has a similar approach, but on a different scale. And this sets me apart from a lot of functional programmers... Some functional programming enthusiasts really like their types. I'm very glad that Erlang and Elixir are dynamic.
56
+
57
+ **Gerhard Lazu:** Okay. Apparently there is a typed Erlang syntax, DSL, coming from Facebook. I say Facebook, but it's really WhatsApp... I keep forgetting its name, but something-Muskała... Do you know who I'm referring to?
58
+
59
+ **Lars Wikman:** Oh, yeah, Michał Muskała is the guy that, as far as I know, started the effort, or was probably leading the effort...
60
+
61
+ **Gerhard Lazu:** That's right.
62
+
63
+ **Lars Wikman:** I spoke to him once in Prague. That was before he was at WhatsApp. But that's a super-interesting effort, and I think that type system makes perfect sense for what they need. They're a very large organization... But I don't really find it compelling for building the kind of web apps and systems that I do. I find type systems to be a little bit annoying. I've done some work recently with Elm, which has a lot of types... That was frustrating at first, but it was also compelling. It showed me some of what you really get with a types-first approach, I guess... So it's interesting, but I'm not sure I love it.
64
+
65
+ I'm very, very happy with having a dynamic language. I come from Python and PHP originally, so that's... Yeah, the Ruby lineage of Elixir works fine with what I'm used to. It was a fairly easy transition, all things considered.
66
+
67
+ **Gerhard Lazu:** That is a really good top three. So we have a good idea of why you like Erlang, and which are the top three features of the BEAM... Specifically I say Erlang. When I'm saying Erlang, I'm referring to the ecosystem more, the virtual machine, and less the programming language. So that makes a lot of sense. I'm wondering when you're done coding your Elixir app, how do you ship it? How do you get it out there?
68
+
69
+ **Lars Wikman:** So that very much depends on context.
70
+
71
+ **Gerhard Lazu:** Let's take the last one. The last Elixir app that you had to -- and whether it's a service -- I mean, you can tell me about it. How did you ship it?
72
+
73
+ **Lars Wikman:** So right now I've been spending part of my day setting up a Docker file, so that'll tell you something... So Elixir and Erlang have this idea of releases, where you bundle everything, including the runtime, into a nice little package that you can just shove into a server and start, without needing any dependencies, essentially; or very few dependencies at least.
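+ As a rough sketch of what such a release looks like - the app name below is a placeholder - the release is declared in `mix.exs` and built with `mix release`:
+
+ ```elixir
+ # mix.exs -- minimal sketch; :my_app is a placeholder
+ def project do
+   [
+     app: :my_app,
+     version: "0.1.0",
+     releases: [
+       my_app: [
+         include_executables_for: [:unix],
+         include_erts: true # bundle the Erlang runtime into the release
+       ]
+     ]
+   ]
+ end
+ ```
+
+ Building with `MIX_ENV=prod mix release` produces `_build/prod/rel/my_app/bin/my_app`, which can be copied to a server and started with `bin/my_app start`.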
74
+
75
+ **Gerhard Lazu:** OpenSSL is always the trickiest.
76
+
77
+ **Lars Wikman:** Yeah, OpenSSL, and usually ncurses. Libncurses.
78
+
79
+ **Gerhard Lazu:** Yeah, if you need that... But yes. I know OpenSSL - you will definitely need that, because you will be doing some sort of encryption somehow; it doesn't matter how... But ncurses is --
80
+
81
+ **Lars Wikman:** \[12:13\] There's always encryption in there somewhere...
82
+
83
+ **Gerhard Lazu:** Exactly.
84
+
85
+ **Lars Wikman:** So I think releases are sort of my ideal for keeping it very lean and just shipping it to a server... But in this case we're going to be doing on-prem deployments, so someone else is gonna set it up on their own hardware... And my plan is for them to be given a Docker compose file, some credentials, and just go docker-compose up. I'm mostly using Docker because we want to set up a database, and it's not an embedded database, so we need to start a database in Docker...
86
+
87
+ **Gerhard Lazu:** Which one?
88
+
89
+ **Lars Wikman:** In this case it will be Postgres, probably. It was built with MySQL, but I'm sort of transitioning it to Postgres. That's a little bit of a preference of mine. In this case, Docker is mostly serving as sort of being so industry-standard that it will be familiar to more operations people than actually just running a binary would be...
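+ The docker-compose hand-off being described might look roughly like this; the image, ports and credentials are placeholders, not the client's actual setup:
+
+ ```yaml
+ # docker-compose.yml -- illustrative sketch of "app plus Postgres" for an on-prem install
+ version: "3.8"
+ services:
+   app:
+     image: registry.example.com/my_app:1.0.0
+     environment:
+       DATABASE_URL: ecto://app:secret@db/app_prod
+       SECRET_KEY_BASE: change-me
+     ports:
+       - "4000:4000"
+     depends_on:
+       - db
+   db:
+     image: postgres:13
+     environment:
+       POSTGRES_USER: app
+       POSTGRES_PASSWORD: secret
+       POSTGRES_DB: app_prod
+     volumes:
+       - db_data:/var/lib/postgresql/data
+ volumes:
+   db_data:
+ ```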
90
+
91
+ **Gerhard Lazu:** Yeah, that's interesting, because I think if you are shipping just the app itself, then a binary - that's okay, right? It's an executable, you just run it, and off it starts. It's no different than, for example, a Docker container. Now, if you do have dependencies, like PostgreSQL, how do you get that started, and which version will you get, and will the package manager have the version that you get? And will it have SSL enabled? Maybe it will, maybe it won't. So all that configuration - now you're starting to get into the whole configuration aspect of it... So how do you configure it? How do you get them to talk? What about -- I don't know, maybe you need to do some tunings in PostgreSQL. Will you be shipping them as well, or will you just let the team that runs it figure that part out?
92
+
93
+ **Lars Wikman:** Yeah. And in this case, we would want to take care of all of that and just provide the docker-compose and go HAM. And whenever there's an update, maybe we need to tell them to pull a new docker-compose, or maybe they just need to update an image... But yeah, when you have additional infrastructure and you need someone else to set it up - that's a different case from, for example, how I run my own stuff, just small services I run. I run beambloggers.com, which is just scraping RSS feeds for the BEAM community... So if you wanna track sort of Erlang and Elixir, that's a good place to get an ever-growing RSS feed.
94
+
95
+ But the way I do that is just the release that I actually build on the server, and stand up there... Because the availability level I need to maintain on that one is whatever I feel like.
96
+
97
+ **Gerhard Lazu:** That's a good one. I think that has merits. I mean, some use cases, that's perfectly fine; nothing wrong with that. It's all contextual, I keep mentioning this. If that works for you, that's great. There's no problem. And maybe someone could benefit from that simplicity.
98
+
99
+ **Lars Wikman:** And that system particularly actually stores all its data in memory, and whenever I restart it, it just blows it away, it refetches it from the web...
100
+
101
+ **Gerhard Lazu:** That's interesting. Okay...
102
+
103
+ **Lars Wikman:** It was a fun way of building it, mostly... It means I don't have to deal with any database setup for that particular service. I have a few different services where I just keep things around in memory, because they are fairly ephemeral, or the history isn't particularly important...
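+ That "keep it in a process, repopulate on boot" pattern is only a few lines of Elixir. A hypothetical sketch, not the actual beambloggers code:
+
+ ```elixir
+ defmodule FeedCache do
+   use GenServer
+
+   def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
+   def all, do: GenServer.call(__MODULE__, :all)
+
+   @impl true
+   def init(_opts) do
+     # Nothing is persisted; state is rebuilt right after boot
+     {:ok, %{}, {:continue, :refetch}}
+   end
+
+   @impl true
+   def handle_continue(:refetch, _state), do: {:noreply, fetch_feeds()}
+
+   @impl true
+   def handle_call(:all, _from, state), do: {:reply, state, state}
+
+   # Placeholder for scraping the RSS feeds from the web
+   defp fetch_feeds, do: %{}
+ end
+ ```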
104
+
105
+ **Gerhard Lazu:** So what I'm hearing is they are stateless systems, stateless services, which means that you could start them anywhere, and they would get the data just in time, after they boot, or maybe part of the booting process. I'm not sure when exactly it happens. But there's no state that you have to move with a service.
106
+
107
+ For example, if you were to stand this beambloggers elsewhere, on boot it would get all the data that it needs, and would start serving it. It wouldn't need to run on a specific machine.
108
+
109
+ **Lars Wikman:** \[16:14\] Yeah, so it's very -- at least very independent. It's stateful when it runs, in that it keeps a lot of state around... But it absolutely does not rely on some source of state, or needing to carefully manage state when it goes up and down.
110
+
111
+ For some other services where I do want to keep history around, I've started using SQLite much more than I used to... Because that's also operationally much simpler than Postgres, and I don't find Postgres particularly challenging. It's easy enough to manage, and I like it, but SQLite is even easier, and makes a lot of sense if you don't have a lot of heavy needs... And I've recently seen -- so there's a project called Litestream, which solves one of my bigger concerns with SQLite, which is replicating it, or at least having a very recent back-up... Because it's very easy to accidentally blast away a file on disk. So it hooks into the write-ahead log of SQLite and just ships it to any S3-compatible storage, on any update. So it does an ongoing replication of SQLite, and then you can just restore from that. I don't think it's necessarily feasible to do high-availability with SQLite, but if I was building a product right now, sort of a small-scale SaaS, or that kind of thing, this would definitely be something I'd consider.
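+ For reference, a Litestream setup is essentially one config file plus a restore command; the bucket and paths below are placeholders:
+
+ ```yaml
+ # /etc/litestream.yml -- continuously replicate the SQLite WAL to S3-compatible storage
+ dbs:
+   - path: /data/app.db
+     replicas:
+       - url: s3://my-bucket/app-db
+ ```
+
+ Run `litestream replicate -config /etc/litestream.yml` alongside the app, and recover a lost database with `litestream restore -o /data/app.db s3://my-bucket/app-db`.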
112
+
113
+ There was a Hacker News thread around the time that Litestream got some attention... It's done a few rounds. Someone mentioned running a product on SQLite, and I think they've benchmarked it to 10,000 reads a second, or 5,000 writes a second, on an NVMe drive... That's a lot of read and write activity; a lot more than I would typically expect to need to serve for a small-scale SaaS. And if you can scale with just using something like SQLite up to that level, then you're probably successful enough that you can switch it out for something else at that point, and make all those decisions about complexity.
114
+
115
+ **Gerhard Lazu:** That is a very good point, actually... Litestream - it will be in the show notes, but it's litestream.io. It's Ben B. Johnson. I think he was on Changelog at some point. I remember this coming up... And you're right, Ben Johnson - he's the author of BoltDB, so he has some experience in this area, let's put it that way... I do remember it sounding really interesting, so you can check it out if you want.
116
+
117
+ But my takeaway is that you like keeping things simple... And if it gets the job done, that's it, that's all it needs to be. It doesn't need to be fancy, it doesn't need to be impressive, it doesn't need to be "Look at me. I've done it in this way, that no one else has done it before." It doesn't have to be that. It just has to work.
118
+
119
+ **Lars Wikman:** Yeah.
120
+
121
+ **Gerhard Lazu:** And if this works for you, that's great.
+
+ **Lars Wikman:** Yeah. And since I do consulting for a number of different clients, I always have to adapt to whatever is already there. So the client that I will be shipping on-prem for doesn't actually have a thing in place, so that's sort of me putting my opinions and stamp on that. I'm there to solve that problem. But in other cases there is an existing ops person or ops team, and I'm mostly shipping code, and then I'll roll with whatever they have. And if I don't like it, I'll be swearing a little bit under my breath and maybe giving them some opinions... But typically, I'm happy to roll with whatever is there.
122
+
123
+ \[20:09\] I don't really believe in making radical changes to software that's already working, even if it's not working in the way you think it maybe should. But there is this trend also, particularly in the BEAM ecosystem, where there's a lot of things you can get done by using only the BEAM. The BEAM actually ships with a distributed database inside of it, Mnesia. It has a lot of challenges; it has some sort of conflict resolution problems when you run it in a distributed fashion. So I haven't been keen on using it for anything else than sort of caching. But with SQLite in place, then you can actually use the standard tooling in Elixir around Ecto, which is essentially the ORM -- not so much objects, but relational mapper, I guess...
124
+
125
+ **Gerhard Lazu:** Do you know which is the biggest Erlang project that uses Mnesia?
126
+
127
+ **Lars Wikman:** It would probably be WhatsApp.
128
+
129
+ **Gerhard Lazu:** They do, but they use it in a different way; in a very different very way. As far as I know - and this was many years ago - they used it on just a few servers, and they used it for (I think it was) just metadata, but very small metadata. So nothing that is heavy writes or heavy reads, and I think the eventual consistency was okay for it, so things did not -- like, dirty reads for example were a big thing for them... But they used it on a subset of nodes, and they had dedicated nodes for that... And I think they wanted to move away from it, or there was talk about that... This was at least five years ago.
130
+
131
+ The project which I have in mind, and I had a first-class seat to it, was RabbitMQ. It's one of its Achilles' heels. Mnesia - oh, wow. At any sort of scale, you start seeing some serious issues. 10,000 writes per second? No way. No way. Because it's the synchronization part, and you have to go over a network, and you have multiple nodes, and it's all synchronous... You have transactions...
132
+
133
+ **Lars Wikman:** Yeah. So you have to typically look at Mnesia in the context it was created, which was telecom. And as far as I understand, it was typically between machines that were very tightly coupled together. I've heard people talk about [backplanes](https://en.wikipedia.org/wiki/Backplane) and I have no idea what that is, so I'm not even gonna try... But yeah, it was about managing phone calls, and that kind of connecting... Which is very different from your typical web app, or "keeping everything around forever" type of infrastructure that we deal with now.
134
+
135
+ I've definitely looked for something that would essentially scale arbitrarily as a database across nodes as you add more... Not that I have the need, just because I want to see if there's the perfect solution out there... And I've found CockroachDB to be very appealing in that sense, because it's Postgres-compatible, and it's made to be distributed by default, which - Postgres has a lot of upside, and it's great, but it is not built to be distributed by default. And they've built a lot of distributed features into it, but you know very well what can happen when you try to replicate Postgres.
136
+
137
+ **Gerhard Lazu:** Oh, yes...
138
+
139
+ **Lars Wikman:** \[23:45\] I thankfully haven't had a reason to spend too much time replicating Postgres. But yeah, looking at Cockroach though, you'll also see that sort of suggested specs and what they suggest for setting up Cockroach - there's a lot of concerns and a lot of things to think about, and a lot of details suddenly, that you don't typically think about when you're setting up a single Postgres instance. And I think this feeds sort of into the whole idea of Kubernetes as well. It's like "Oh, but this is an abstraction layer that simplifies everything. It generalizes everything, so you don't have to think about all the details." But in my book, you can never ever stop thinking about the details. It's like, "Okay, we brought in Kubernetes, so now we don't have to know how Linux works?" No, I don't think so... What's your experience there? Does bringing Kubernetes in make you stop having to care about your Linux installations?
140
+
141
+ **Break:** \[24:47\]
142
+
143
+ **Gerhard Lazu:** You've mentioned a couple of things which I would like to dig in a little bit more. First of all, you mentioned about using PostgreSQL in your most recent project that you're doing for the customer, the one that you are deploying using docker-compose, or that you're using docker-compose to run it... And I'm wondering, in that context, why did you choose PostgreSQL over SQLite.
144
+
145
+ **Lars Wikman:** Yeah. That's actually a very good question, and I've been wrestling with it myself a little bit. So one of the bigger reasons is that the current SQLite adapter for Elixir is fairly new... And SQLite is very reliable, but I don't feel like that particular adapter has necessarily been proven out yet, and shipping that to customers before I'm certain and I have a track record with it that's more than a few experiments - I just don't feel entirely comfortable doing that, so I opted for even steering them away from MySQL, which is perfectly well-supported, into what is the absolute mainline of Phoenix, which is Postgres. It seems to have the community behind it.
146
+
147
+ Partly, I want to leave the client with something that other developers will definitely recognize and be capable of working with, if it ends up that I'm not around in the long run, or for whatever reason. I wanna bring us closer to the main line. And there are a few very cool projects and very useful projects in the Elixir community that lean on Postgres-specific features. One of them is Oban, a job processor. So having the option of using that is also a good one... But this would be a good project for SQLite and shipping that.
148
+
149
+ \[27:50\] There's also a little bit of a question mark around some back-ups... Like, okay, then we will want to use Litestream. But do I have something S3-compatible to ship it to, or do I need to stand that up myself and then pull the file out and throw it at -- yeah...
150
+
151
+ **Gerhard Lazu:** Those are really good points, and I really like the way you're thinking about this, because it's about confidence. Whatever you're giving when you're (let's say) shipping it, and "Here you go, customer. This is what was done for you", someone has to maintain it. Someone has to deal with all the issues that arise, because they will arise. Updates - hello? Everybody seems to forget about them, except when they have to be done and then they don't do them, because... "Hmmm. Updates? No." It's very important to keep up with those things. CVEs - how do you address CVEs if you don't have a good way of releasing these updates out there? And if you're not confident in what you have and the point that you reach, it becomes a bit more difficult to take those small steps, those small improvement steps... So I think it makes perfect sense. Not to mention that, as you said, you may not be around. Someone else may take this over, and you want them to take over the most supported, the most documented, the most known thing... And I think Ruby on Rails was like that for a long, long time, in that I can see a lot of parallels between Ruby on Rails and Phoenix, and there were some good, sensible defaults on Ruby on Rails that if you went outside of those, there was a lot of pain there.
152
+
153
+ So - sure, you can use MongoDB, but why would you, with your Ruby app? Just stick to MySQL. That's what the majority does.
154
+
155
+ I do remember being in situations in the past when we did that, and there was some pain there. The drivers were great, I still remember many discussions with Jordan - I forget his family name, but he was the maintainer of Mongo, I believe, if I remember it correctly... And that was a great library, but still, there were issues that you wouldn't expect. So it just goes to show that ever from my experience, I remember moments when I wished I had chosen the default, and I didn't. And not just me, but others paid the price for that. It was just not fair.
156
+
157
+ So if I learned anything, if you can stick with the defaults, or with the most common path, especially in these cases, it's maybe best to. Now, if you have a personal project, like you have - you have a couple of experimental projects - you can use anything you want, because your SLO is whatever you want it to be. And it can change from day to day, and it's fine, so it doesn't really matter. But for others where reliability, upgradability is important, you need to choose differently.
158
+
159
+ **Lars Wikman:** Yeah. Sometimes it pays to make a dull choice here and there.
160
+
161
+ **Gerhard Lazu:** Yes.
162
+
163
+ **Lars Wikman:** I'm happy to go absolutely wild on my own projects, but it's also things like -- if I'm shipping a library to the community, that's also where I will be looking quite closely at like "Okay, but what is a good library? What does it mean to behave well as a library in this ecosystem?" I can't just put all of my opinions in there if I want to be a good citizen. And yeah, I think that sort of carefulness about what you choose, that's something I've picked up with the years. I've definitely had a few years of chasing shiny, new frameworks, shiny, new ops technology, setting up servers in cool, new ways, building a custom microservice architecture from the ground up...
164
+
165
+ **Gerhard Lazu:** Just because you could, right? No other reason. "I can do this, so why not?"
166
+
167
+ **Lars Wikman:** Yeah. Oh, we absolutely needed to scale that product so hard... That's actually what we had as an objective... "Like, this has to be scalable. The last iteration of this product was not scalable. Let's greenfield it, let's build it right. It should be able to scale." And that architecture could absolutely have scaled, but that product did not need that scale. At all. It could have been so much simpler.
168
+
169
+ **Gerhard Lazu:** That's a good -- like, why? Why does it need to scale? If you don't ask enough why's... Like, "Why(s)?" This is something which I have seen - teams and products that keep going in the wrong direction, and then it doesn't matter how fast you go in that direction, because it's still wrong. You're going infinitely fast in the wrong direction, so... If you're going infinitely slow, because it's -- you're not even going in the right direction, so what's the point? Why are you rushing towards a direction that doesn't benefit anyone?
170
+
171
+ \[32:23\] And then years later, people would be asking "But why did we do that?" and none would recall, because it doesn't make any sense. Things that don't make sense, people tend to forget. Like, you're right, it doesn't make sense...
172
+
173
+ **Lars Wikman:** Yeah. I wrote a retrospective on that particular architecture, an entire product through three different iterations, and put it on my blog... And I've had some interesting feedback on that, because people don't always share -- I wouldn't even call it a failure story, because the product was a success and it did fine, until it was shut down at some point.
174
+
175
+ Some of the technical choices I would not make again, but that's where I learned that I probably shouldn't have done that, or I shouldn't have done it that way. Some of the choices checked out, some of them didn't.
176
+
177
+ **Gerhard Lazu:** So in that retrospective of a post that you wrote, by the way - what's the title of the post?
178
+
179
+ **Lars Wikman:** I think it's "Ten years in the vertical."
180
+
181
+ **Gerhard Lazu:** "Ten years in the vertical", okay. We will link it in the show notes for those that want to read it.
182
+
183
+ **Lars Wikman:** Yes. It's a three-part series, one for each version of the system.
184
+
185
+ **Gerhard Lazu:** Okay. So get your coffee/tea ready, whatever you're drinking. Strap down. It's a long one, but a good one. Worth it, right? I will read it myself, by the way, because it sounds very interesting. Is it funny?
186
+
187
+ **Lars Wikman:** I'm not sure if it's funny. I hope it's a little bit funny...
188
+
189
+ **Gerhard Lazu:** That's a killer combo...
190
+
191
+ **Lars Wikman:** I've definitely had good feedback on it, so it should be bearable to read, at the very least...
192
+
193
+ **Gerhard Lazu:** Okay. Alright. The coffee will make it worth it. No, no, no. I'm joking. The fun and interesting - it's like a killer combo, and if you can do both, it's great. It's like the jackpot of content.
194
+
195
+ **Lars Wikman:** And on the shipping side of that, that was mostly Ansible. But it ended up being a lot of Ansible, because we did split everything up into microservices...
196
+
197
+ **Gerhard Lazu:** Oh, yes.
198
+
199
+ **Lars Wikman:** ...for a three-person team.
200
+
201
+ **Gerhard Lazu:** That's what you get, right? It's one of the trade-offs that you get. You may need that; I know that some teams do... But not everyone does. And knowing the difference, when to use a microservice versus a monolith, is a very important thing. Like, know the answer before you embark on the journey... And even if the answer comes slower, it's worth it. Take your time. Because getting out of that particular journey - it will be very difficult. It can be done, but it's unlikely to happen.
202
+
203
+ So it's one thing that you want to choose wisely... You could choose maybe your cloud provider, you can migrate, and even that can be a bit difficult, but it's easier than going back from a microservice decision... Or a monolith one. By the way, sometimes that is the wrong decision, so we're not saying that one is better than the other.
204
+
205
+ Okay, so we touched on a couple of interesting things, but I still think we haven't dug deep enough into the topic from before, when you mentioned Kubernetes. So I don't think we dug deep enough into that. One of the reasons why we're even having this conversation is because I know that for you Kubernetes doesn't make sense... And that fascinates me, because I'm not saying that everybody should use that, I'm not saying that, but I can see a lot more reasons to use it than not to use it, and it's that API that from my perspective is the best thing that it has. So it's how it approaches operations, and the building blocks that you have at your disposal. You can achieve the same thing in different ways, but I don't know, having tried most of them, I kind of like it, and it makes a lot of sense.
206
+
207
+ So why in your case you're not using Kubernetes, at all? Because I don't think you're using Kubernetes. You hear about it a lot, but you don't use it. Why is that?
208
+
209
+ **Lars Wikman:** My experience with Kubernetes is essentially I tried K3s at some point, and started learning how to set up manifest files, and a lot of swearing ensued, and then I stopped, essentially. For one thing, I don't generally build systems at a large scale. I typically work with teams that are maybe five developers or so...
210
+
211
+ **Gerhard Lazu:** \[36:21\] That didn't stop us from using Kubernetes at Changelog. We're like three developers, one full-time, and even that one's not full-time... And we're still using Kubernetes. \[laughs\] So that didn't stop anyone. But please continue.
212
+
213
+ **Lars Wikman:** Yeah... I could argue with you whether Changelog should be using Kubernetes...
214
+
215
+ **Gerhard Lazu:** Yes, please. Let's.
216
+
217
+ **Lars Wikman:** But I for sure do not see the need for a system such as the Changelog to have a Kubernetes. Now, again, context... The guy that's responsible for operating Changelog apparently likes Kubernetes, which means that he enjoys his job more if he gets to run it on Kubernetes, so... \[laughs\] It sort of checks out.
218
+
219
+ **Gerhard Lazu:** But it's not that, because I'm that guy. So just for the listeners - that's me, right?
220
+
221
+ **Lars Wikman:** Yeah, yeah. \[laughs\] I'm absolutely talking about you there.
222
+
223
+ **Gerhard Lazu:** I'm that guy, okay? So let's unpack this... I tried to answer this question a couple of times, and either people -- I must be answering it wrong, so let me try again. The reason why we chose Kubernetes is because it reached a certain level of maturity. That was one of the things. And Linode, our partner for all things infrastructure, they started offering a managed Kubernetes service. So that was important for us. We don't want to deal with managing it. So that is a provider concern.
224
+
225
+ We had to solve a couple of things, like for example DNS. DNS updates... Whenever the IP changes, or the load balancer changes, the IP has to be updated in the DNS. The certificate - we used to pay for those, and then Letsencrypt came along. So how do we get free certificates via Letsencrypt, and support that mindset?
226
+
227
+ **Lars Wikman:** A cron job.
228
+
229
+ **Gerhard Lazu:** A cron job. Excellent. Okay, great. A cron job. That is a valid answer. And then how do you push updates? Like, do you have your CI that deploys? In some cases you do. In some cases the CI is the thing that has the keys to the kingdom, and that's what we had. And it can do anything. Is that a good thing? I don't think it is. But whatever... It's just like an opinion. But there's more. How do you keep your certificate in sync between your CDN, your load balancer, and any other place that may use it? In our case it was just these two - the load balancer and the CDN. So not only you have to renew it, but then you have to upload it and make sure it's the same one everywhere. Excellent.
230
+
231
+ How do you run backups? Another cron job, right? So before you know it, you have all these things that you need to have... Like, what gets for example docker-compose, or whatever you're using, in place? What installs Erlang? What determines which version of Erlang you have? What about the monitoring? Where do you run that? How do you configure the monitoring? How do you configure for example the monitoring -- not just the metrics and the logging, but I'm also talking about the synthetic monitoring - your pings, your Pingdoms, or your Grafana Clouds, or whatever you may be using. And before you know it, you have all these concerns that typically are either in a wiki, or in someone's head, or different people approach it in different ways; in this case it's just me, so it's not really a problem... But you have all these things -- oh, secrets. That's another one. Where do you store the secrets and how do you rotate secrets when there's a leak?
232
+
233
+ **Break:** \[39:37\]
234
+
235
+ **Gerhard Lazu:** The way I approach this is "What is a system that can manage all these things in a way that doesn't have me worrying about versions as much?" Because we used Terraform, and then we had to do upgrades because it was running locally, we had plugin issues, we had to upgrade those... And the issues were problems that you wouldn't expect to have, but we were having because of this different tooling that we were using.
236
+
237
+ We used Ansible, and - did we use Chef at some point? No. We didn't use Chef. We only used Ansible at some point, many years ago. By the way, there was like a progression, so every year we blogged about this, we talked about this... It didn't just come out of the blue, "I know, let's use Kubernetes." No. We'd been using Ansible for years, we'd been using Concourse CI to run the builds, to do the deploys... We used docker-compose and then Docker Swarm for, again, a couple of years. So we grew into this architecture, and right now everything is stored -- like, all the YAML, all the config is stored in the repo. Okay, we have some [Make](https://www.gnu.org/software/make/) glue, which I'm not very proud of... It's great, but I know there's a better way. Maybe Argo CD. I don't know. GitOps. I keep hearing about that; maybe we try that, I don't know. But can we have something that continuously applies those configs, and you don't have to have your machine to run that stuff? So maybe something like a control plane which is different from your service.
238
+
239
+ I know that you mentioned large-scale... I don't think Changelog is very large-scale; it's a simple app, but it's still serving many terabytes of traffic every month... And there's the CDN; when the CDN goes down there's a big problem, as we had a couple of days ago, and you have to know how to basically update it very quickly, which we could, and you have to have that space and room...
240
+
241
+ So the answer is a bit more complicated. It's contextual, and it's not because I like Kubernetes, it's because it makes all these concerns easier than if we used anything else than we did before, by the way. It improves on that.
242
+
243
+ **Lars Wikman:** Yeah.
244
+
245
+ **Gerhard Lazu:** What do you think about that?
246
+
247
+ **Lars Wikman:** Easier for you, I would say. For me, it's like - I barely know where I would start on making Kubernetes do this... And I did start looking at K3s specifically because I wanted the CD part. I wanted something to pick up my finished Docker containers and spin up the new version. That's essentially why I wanted to set that up, to have a very, very lightweight approach to what Kubernetes can do.
248
+
249
+ The thing is I don't see sort of keeping that load balancer up to date, or keeping certificates up to date as that complicated of an endeavor, with current baseline tools like Letsencrypt... So I wouldn't bring in layers to solve them. It could be a Bash script, it could be some fairly tightly spec-ed tools... For example, in Elixir there's a fantastic library by Saša Jurić which is called SiteEncrypt, which will simply do the Letsencrypt dance for you if you configure your Phoenix app to use it. So when you start your application, it checks "Do we already have certificates lying around? I'll use those. If not, I'll talk to Letsencrypt. We'll shake hands, I'll get some certificates, and now we're certified."
250
+
251
+ \[44:13\] And with that, to some extent you might not even need NGINX at that point. I bet you would probably be able to serve Changelog with the previously mentioned SQLite performance of like 10k reads a second. You were talking about terabytes, and that's like the mp3 files, right?
252
+
253
+ **Gerhard Lazu:** Mm-hm.
254
+
255
+ **Lars Wikman:** So files serving is one of the places where I would typically reach for proprietary cloud stuff, like S3, or Linode Object Store, or one of those... Because it just solves a lot of the -- like, okay, I wanna still have some redundancy in this, I want to be able to scale it essentially arbitrarily. For file serving I would typically use a service like that. It's super-annoying dealing with large drives and RAID so I'd rather not. So pragmatism... I don't think you should peel everything off, but I'm also not sure, like, when do you actually need a load balancer?
256
+
257
+ Having NGINX in front of your app can be very nice, because it allows you to do things like "Oh, actually we're down for maintenance right now. I still wanna show something nice to the user." Or pointing to different instances that you're starting up, or whatever. But there's also the potential risk of your NGINX being misconfigured or less well configured than your application, and actually being a bottleneck to your applications. I've seen that happen, too. Typically, I would set something up with NGINX.
258
+
259
+ Also, one of the things with Kubernetes is all this -- like, any node can go away at any time; we're on very moving ground cloud infrastructure. We only use what we need... But you always need some, so usually you're at a base level, like "We have these instances up constantly." At that point, I'm like "But do you need a cluster of three instances running the actual Kubernetes, and then an app instance, and a DB instance, and a load balancer instance? Or is this one application instance and one database instance?" Would that do?
260
+
261
+ **Gerhard Lazu:** I think it would. And if you look at Changelog, at its core, that's exactly what we have. We have the application and we have the database. A single instance PostgreSQL. There's a great story how we used replicated PostgreSQL and how that was the cause of a couple of downtimes; I think we covered that in the episode one... A different story.
262
+
263
+ And CockroachDB - that's something which I definitely wanna try out. Distributed PostgreSQL with a PostgreSQL compatible wire format that's a very interesting one to try out, for sure. It's on my list. But I think what I'm hearing, going back to what you were saying, is that for you, getting started with Kubernetes seems very complicated for a value that isn't very clear. Like, what is the value proposition? A lot of the things that you can do today - I mean, does Kubernetes make them any different? And maybe the answer is no from your perspective. You're saying like "Let's just use a cron job." In my mind -- I think this is where I wish we had more time to dig into this... So what I'm proposing is a follow-up, because we will run out of time. But there's so much more -- like for example the monitoring, the shipping of logs, all those things... And you have to configure them somehow. Then you have to worry about OS patches, whichever host OS you're running; that is not an issue when you're running in the context of the Kubernetes, because it's just your container, and you don't care about the worker node that runs the kubelet, that runs the Kubernetes infrastructure, so to speak.
264
+
265
+ \[48:01\] When it comes to NGINX, you don't install NGINX; you have Ingress NGINX, which is a component that exposes certain custom resource definitions (CRDs) and it's more like it implements Ingresses. Now, what is an Ingress? Do you care about it? Well, you do, because you need to know how to configure it; but beyond that, how that maps to an NGINX concept - that's abstracted away from you. And you have this service discovery, and it's all just happening under the hood... And you're right, it feels a bit magical, but it's no different to a framework. For example, if you use Phoenix.
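+ For context, the Ingress being referred to is a short piece of YAML. The hostnames, service name and issuer below are placeholders, and cert-manager - one common add-on for the Letsencrypt part, not something named in this conversation - handles the certificate via the annotation:
+
+ ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+   name: changelog
+   annotations:
+     cert-manager.io/cluster-issuer: letsencrypt-prod
+ spec:
+   ingressClassName: nginx
+   rules:
+     - host: changelog.example.com
+       http:
+         paths:
+           - path: /
+             pathType: Prefix
+             backend:
+               service:
+                 name: changelog
+                 port:
+                   number: 4000
+   tls:
+     - hosts: [changelog.example.com]
+       secretName: changelog-tls
+ ```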
266
+
267
+ **Lars Wikman:** But that's the whole thing... See, Phoenix is a fairly explicit framework. It has a few things that feel a bit magical, but it is quite explicit about everything it does.
268
+
269
+ **Gerhard Lazu:** And Kubernetes isn't?
270
+
271
+ **Lars Wikman:** Yeah, it's not the impression I'm getting... But what I see when you bring in something like Kubernetes - you're placing a lot of abstractions in place, and you're going to be working with those abstractions. Those abstractions are still doing all of the things under the hood, and you need to be aware of how they do those to be able to do it gracefully. Most of the use cases and most of the way you want to work with infrastructure should be ideally enshrined in how Kubernetes handles this... But I don't feel like you can just say "Okay, but now I don't have to care about this." You still have to care about updating Linux, you still have to care about how your certs are propagated, or you could get kicked off of Letsencrypt... There's a lot of automation, but it's also very generalized.
272
+
273
+ This is a thing where I think Kubernetes ends up being a bit over -- I wouldn't say it's over-engineered. It's "Don't repeat yourself" taken quite far. And that's the correct move for some cases. For example, you'll see in enterprise software, things are often generalized, and the software is generally not that tight to work with. It's usually a little bit annoying and a little bit too much... And that's sort of the experience I'm getting from everything I see and hear about Kubernetes - it tries to solve everything, and I don't need my everythings solved... So there is this opposite direction I can take things in when working with Erlang, Elixir and the BEAM, where the BEAM - which is meant to handle high-availability, high-reliability, concurrent distributed systems, and I can bring all of my application concerns in there... It's like, "Do I need an SSH server?" Well, they have one. \[laughs\] Do I need to talk to DNS? Do I need to do DNS? Yeah, there's probably something in there for that... And that's a very rare runtime that you can lean on to do that kind of thing.
274
+
275
+ But let's say for example shipping updates to your app - the BEAM can hot code update your app while it's still running, without ever taking it down. That's a little bit trickier to use than a lot of other ways... It's not like bringing your container down and then bringing up another one, but it's definitely a capability that's there. And I think -- like, a BEAM application can handle everything that I need to get done, but also the 99% case or the 90% case for small products in SaaS. If you need a bit of observability, for example there's LiveDashboard, which gives a baseline of observability with no effort.
276
+
277
+ \[51:56\] Or you can install something like PromEx and then you need to have Prometheus and Grafana stood up somewhere. Then you're starting to get a little bit more infrastructure, or you use the cloud offerings... And I think that's sort of always what it boils down to - at a certain point you need more visibility into the details. Okay, at a certain point you should probably start looking at installing something to give you that. But Kubernetes is installing all of it at once, and you have to care about certs, you have to care about the DNS details, you have to care about the Ingress, you have to care about all of it. And I think both the barrier and the maintenance cost of it is something I wouldn't choose to take on lightly in any project... Because I think it's typically too early for Kubernetes, and I'm thinking it's probably too early for Kubernetes in most projects before they're at an international scale. If you need high availability across many regions and timezones, that's probably a good reason to use Kubernetes. But I also realize, if you spend a lot of time working with Kubernetes, setting it up might not be that much effort. I'd rather code a fairly custom deployment setup that I find explicit and simple, than lean on something I understand so poorly, and which would take me years to have a good grasp of, which is Kubernetes.
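+ The PromEx setup being referred to is roughly one module plus a Prometheus scrape target; module names below are placeholders, and the plugin list depends on what the app actually runs:
+
+ ```elixir
+ defmodule MyApp.PromEx do
+   use PromEx, otp_app: :my_app
+
+   @impl true
+   def plugins do
+     [
+       # built-in metrics for the application and the BEAM
+       PromEx.Plugins.Application,
+       PromEx.Plugins.Beam,
+       # per-request metrics for Phoenix
+       {PromEx.Plugins.Phoenix, router: MyAppWeb.Router}
+     ]
+   end
+ end
+ ```
+
+ Add `MyApp.PromEx` to the application's supervision tree so its metrics are collected and exposed for Prometheus, with Grafana dashboards built on top.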
278
+
279
+ **Gerhard Lazu:** I think there is a lot of -- okay, so first of all there's simplicity in complexity, and the other way around. But in this case, in Kubernetes, it's complex, but it's also simple, if you look at it from a certain perspective. So things are fairly well defined; you know what to reach out for and how to combine things, and there's a whole community around it, and there's so many projects which are solving specific issues... The interface is very clear, you know how to interact with it, there's an API. It's this single API via which you request anything, including other VMs, other load balancers. Do you want a SQLite instance with such and such provider? You can get that. Okay, you have to extend Kubernetes in order to benefit from these features, but it's possible, and there's only one way that you can do this... And that's very powerful.
280
+
281
+ I think the separation of concerns gets a bit more clear... So anybody, just ship us a container image. It doesn't matter what language you have, it doesn't matter what VM you're running; ship us a container image, we'll take care of the rest. Now, I know it's too simplistic, but it works.
282
+
283
+ Heroku, for example - shipping containers, they've made it popular. You just git push and things happen. And guess what - the way that Changelog has been developed hasn't changed. You git push and things happen behind the scenes. And because that contract has never been broken with the developers, everybody's happy.
284
+
285
+ **Lars Wikman:** Yeah, Jerod would be pissed if he had to SSH into the server to set things up.
286
+
287
+ **Gerhard Lazu:** There you go.
288
+
289
+ **Lars Wikman:** That's no good.
290
+
291
+ **Gerhard Lazu:** Yeah. Do you really care about which OS you're running? No, you don't. Do you wanna switch Erlang versions? Super-easy. Guess what - all you have to do is change the container. Hot code reloading? Yes, you can do it; it's hard, maybe you don't need to... And again, it doesn't matter whether you use Erlang, or Elixir, or Ruby, or Python, or Go... It really doesn't matter. Do you want to use serverless? Well, guess what - you have all these projects which you can set up and you can run it in the same context. And the list goes on and on. It really just goes forever.
292
+
293
+ \[55:42\] I have used Chef, for many years. I wrote gchef at one point (Gerhard Chef). There's even an org on GitHub... So I spent a fair amount of time with that. Knife, when that was a thing - I don't think many people were using... Because Chef server was so difficult. I was there, I remember that period. Ansible - I loved it when it was a thing. Certain things were difficult with it, but it was saner than Chef. Is Kubernetes saner than Ansible? I don't know; for us, it felt like the next evolution. You're right, there is a learning curve, like with Vim, or Emacs.
294
+
295
+ **Lars Wikman:** Kubernetes is definitely a big step in some direction from Ansible. It's not just the next sort of iteration on scripting your servers; that's not what it is. It's something different. And you did ask me for a hot take that you could put as the title on this, and I think it would be fair to say Kubernetes is the Electron of operations...
296
+
297
+ **Gerhard Lazu:** Oh, okay... Wow. I think people are like "What is Electron?" That would be the first I'd ask.
298
+
299
+ **Lars Wikman:** What?!
300
+
301
+ **Gerhard Lazu:** What is Electron? Which electron do you mean? Do you mean the physical one, or the Electron JavaScript --
302
+
303
+ **Lars Wikman:** \[laughs\] Oh, you're like "Oh, physics...!"
304
+
305
+ **Gerhard Lazu:** Exactly.
306
+
307
+ **Lars Wikman:** Yeah, I mean in that it makes operations at the outset a lot simpler, but it also paves over everything that you could get right, and the details, I feel like. I think you have access to every little detail you would need in Kubernetes, but it doesn't particularly seem to encourage you getting into all the details.
308
+
309
+ So whenever you add abstraction layers - and I think that's sort of my hesitance on adding more tools, especially tools that sit on top and sort of obscure what's going on... It's that I've come to rely on explicit things. Because if you can just read the code and see what it's gonna do - that's very powerful. I mean, it's not declarative. People like declarative for particular things... And declarative can be nice. But it also doesn't make it clear, like A, to B, to C. What is going on? What's being done? And for most server installs, they don't have to be very complicated.
310
+
311
+ **Gerhard Lazu:** Yeah.
312
+
313
+ **Lars Wikman:** And if it doesn't have to be very complicated, and there's not a lot of complexity to manage, if you bring in a large abstraction layer, which is supposed to hide a lot of complexity and make managing very complex things possible, which I think is a fair thing to say about Kubernetes - it seems to make it possible to manage very, very complex things... If you bring that into an already fairly simple thing, I think you're shooting yourself in the foot. But it also depends on what tools are you comfortable with. You've spent years and years deeply immersed in ops...
314
+
315
+ **Gerhard Lazu:** Try decades... But yes, I agree.
316
+
317
+ **Lars Wikman:** Yeah. I've spent much more time building the actual applications. I've spent a fair bit of time on servers, and operations, but not nearly the majority of my time, because I care much more about the building of the thing. And I consider the operations a part of what I do. I don't want to hand off a container, particularly to operations, and just guess how it's gonna be run... I see there's a lot of -- I don't wanna call it full-stack; maybe end-to-end stack. I want to care about the whole, and I have no idea what's going on in half of the whole if I bring in a tool like Kubernetes. I definitely would use it, and I would learn it, if I saw that I definitely had the need. If I was going to run hundreds and hundreds of instances, or scale across continents - yeah, it probably makes sense to bring in something that lets me take that overview, that 10,000 miles view of the world, and like "Oh, yeah, we have decent performance in Asia. Oh, we're dropping performance in Antarctica." But that's typically not where I operate, and that's typically not what I go for first.
318
+
319
+ **Gerhard Lazu:** And on that thought, thank you, Lars, very much for joining me. This was a pleasure. I do realize that we have so much more to talk about. Dev and ops talking, finally... \[laughs\] I think for decades we've tried to do that, and it's finally happening. We have respect for each other, we know that each context is difficult, challenging, but worth exploring, and I don't think we should be just shoving code across the fence, like "Here you go. You run this. Figure out what to do with it." I think it's nicer when we agree on what the abstractions should be. Everybody benefits. And when things go wrong - because they will go wrong - people know what to do. And it's not a reactive approach, it's like a planned, "We kind of know what we need to do." So I'm really excited about that world.
320
+
321
+ Thank you very much for joining me. This was a pleasure, Lars. I look forward to seeing you next time and talking to you next time, whenever that may be. Hopefully soon.
322
+
323
+ **Lars Wikman:** Yeah, happy to come back. Thanks for having me.
Why Kubernetes?_transcript.txt ADDED
@@ -0,0 +1,600 @@
1
+ [0.08 --> 5.72] Hey, how's it going? I'm your host, Gerhard Lazu, and you're listening to Ship It, a podcast
2
+ [5.72 --> 11.64] about getting your best ideas into the world and seeing what happens. We talk about code,
3
+ [11.96 --> 18.00] ops, infrastructure, and the people that make it happen. Yes, we focus on the people because
4
+ [18.00 --> 24.12] everything else is an implementation detail. Getting a Kubernetes cluster is easy. It can
5
+ [24.12 --> 30.82] take as little as 15 seconds with K3S. But the rest of the first steps are not as straightforward.
6
+ [31.30 --> 37.72] And do you even need Kubernetes? The fact that everyone is talking about it, and even those
7
+ [37.72 --> 43.66] that don't need it, use it, doesn't mean that you should. Coming from an Elixir background,
8
+ [44.00 --> 50.40] with many years of helping companies build and run highly concurrent and fault-tolerant applications,
9
+ [50.40 --> 57.62] Lars developed a most pragmatic approach to shipping software. Erlang has some great primitives
10
+ [57.62 --> 64.02] built-in, including self-contained releases and hot code reloading. And sometimes a monolith running
11
+ [64.02 --> 69.92] on a single host with continuous backups and a built-in self-restore capability is everything
12
+ [69.92 --> 76.54] that a small team of developers needs. That's right, KISS. As in, keep it simple, Sprout.
13
+ [76.54 --> 82.88] Check Lars's blog to understand what I really mean. But after two years of running changelog.com,
14
+ [83.06 --> 87.60] a Phoenix monolith on Kubernetes, what do I think? Let's find out.
15
+ [88.02 --> 93.76] Big thanks to our partners Fastly, LaunchDarkly, and Linode. Our bandwidth is provided by Fastly,
16
+ [94.10 --> 100.22] learn more at fastly.com, feature flags powered by LaunchDarkly.com, and we love Linode.
17
+ [100.22 --> 105.20] They keep it fast and simple. Check them out at linode.com forward slash changelog.
18
+ [106.54 --> 115.36] This episode of Ship It is brought to you by Render, the zero DevOps cloud that empowers you
19
+ [115.36 --> 120.50] to ship faster than your competitors. Here's Anurag Goel, CEO of Render, sharing why developers
20
+ [120.50 --> 123.70] choose Render over Heroku and how they're innovating much faster.
21
+ [124.06 --> 129.24] A lot of Render customers come to us from Heroku and they tell us Render is what Heroku
22
+ [129.24 --> 134.74] could have been. I think it's because we offer a more streamlined approach to hosting modern cloud
23
+ [134.74 --> 141.36] applications at a significantly better price point. Applications on Render heal themselves and
24
+ [141.36 --> 146.94] scale automatically, giving developers the features and flexibility of something like Kubernetes,
25
+ [147.38 --> 153.02] but without any of the complexity. We're always working to bring the latest industry advances
26
+ [153.02 --> 158.44] to our platform so your applications can leverage the state of the art in the cloud without you having
27
+ [158.44 --> 164.08] to do or learn anything. All right, learn more about how Render compares to Heroku at render.com
28
+ [164.08 --> 170.22] slash compare or email changelog at render.com for a personal intro and to ask questions about
29
+ [170.22 --> 175.84] the Render platform. Again, that's render.com slash compare or email changelog at render.com.
30
+ [175.84 --> 181.42] We are going to send three, two, one.
31
+ [195.10 --> 200.48] I'd like to start with a story. I know that you've been helping changelog.com, the code base,
32
+ [200.48 --> 206.86] in different ways. The thing which I remember is that our response latency went down.
33
+ [207.80 --> 212.10] You did some tweaks. Is that right? Or am I confusing you with Alex?
34
+ [212.62 --> 214.96] I think Alex gets the credit on that.
35
+ [215.16 --> 219.30] Alex gets the credit. I definitely know he improved the N plus one queries, that's for sure.
36
+ [219.72 --> 221.94] Yeah, I even cost some of those.
37
+ [222.34 --> 225.28] Oh, right. Okay. Okay. So you're the opposite of Alex. Okay.
38
+ [225.28 --> 232.26] Yes, exactly. I'm here to create opportunities for performance improvements.
39
+ [232.74 --> 237.02] I see. So that's the way it goes. Okay. So you're making it worse and he's making it better.
40
+ [237.36 --> 243.74] And the difference is like it's zero, right? Okay. So we're not going anywhere.
41
+ [244.08 --> 248.26] It's very important to have a stable code base and a very stable operation.
42
+ [248.26 --> 255.80] It is. Right. So some of the work I've done with the changelog has been on a few things that haven't
43
+ [255.80 --> 263.28] been released and a few things that basically housekeeping around how emails are sent out and
44
+ [263.28 --> 271.20] to whom. I think and I hope that there will be some more stuff done with the meta costs feature I
45
+ [271.20 --> 277.70] made. I had the opportunity to write a small DSL, which would be nice to expose to the public. I don't
46
+ [277.70 --> 282.08] think Jared has put it into action. So that's a good time to shame him a little bit about it.
47
+ [282.44 --> 287.10] Okay. So Jared, if you're not listening, it's okay. And if you are listening, what's up with the
48
+ [287.10 --> 292.62] meta costs? I don't know anything about it, but yeah, what's up with it? That's what I'm wondering.
49
+ [292.62 --> 300.60] Yeah. So, but it was, it was very fun to get a chance to work with Jared and the changelog code base
50
+ [300.60 --> 306.96] in a slightly dedicated fashion. So it was a few months that would have been last summer that I,
51
+ [306.96 --> 311.68] that I spent some time with this code base. Then I introduced Alex when I didn't have time
52
+ [311.68 --> 317.96] anymore. And he seems to have torn things up. He really, really has pushed a few things forward.
53
+ [318.34 --> 323.44] So he did. Yes, definitely. The, the, the Promex stuff, I think is the one that I got most excited
54
+ [323.44 --> 327.62] about because it touches on the infrastructure side of things. It just integrates a couple of
55
+ [327.62 --> 331.72] things together. So that is from my perspective, very visible and something which I'm very interested
56
+ [331.72 --> 338.28] in to see how are things behaving. The N plus one optimizations improvements, N plus one query
57
+ [338.28 --> 342.44] improvements. That was great to see as well. I didn't know that you're the, you were the cause
58
+ [342.44 --> 352.58] for it. So I'm not sure how I feel about that. I think I only introduced one fairly chunky case of
59
+ [352.58 --> 357.66] them. And it was mostly, mostly when you're doing development that it turned things a little bit
60
+ [357.66 --> 362.90] slow to start because I was doing something, something optimistic and that didn't turn out.
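(For anyone unfamiliar with the N+1 query pattern being joked about here, a minimal Ecto sketch -- the Episode schema, its :sponsors association, and the Repo are hypothetical stand-ins, not the actual changelog.com code:)

```elixir
# One query for the parent rows...
episodes = Repo.all(Episode)

# ...then one extra query per row to load an association: 1 + N queries total.
episodes = Enum.map(episodes, fn episode -> Repo.preload(episode, :sponsors) end)

# The usual fix: preload for the whole list at once, which Ecto turns into a
# single additional query instead of N.
episodes = Episode |> Repo.all() |> Repo.preload(:sponsors)
```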
61
+ [363.52 --> 369.22] I mean, the, the key takeaway from this little conversation is that you are deep into Elixir,
62
+ [369.38 --> 375.88] into Erlang, into, is it, is Erlang fair to say? I mean, you're on Beam Radio, a co-host of Beam Radio.
63
+ [376.14 --> 377.80] So Beam is all Erlang.
64
+ [378.26 --> 383.72] I'm very excited and enthusiastic about Erlang, but I don't write Erlang. I write Elixir.
65
+ [383.72 --> 390.84] It runs on the same VM as Erlang. So all the Erlang technology benefits Elixir. A lot of the
66
+ [390.84 --> 397.14] Elixir technology benefits Erlang, but it can't fully go in both directions, unfortunately.
67
+ [397.88 --> 403.20] Mostly a technical reason for it. Yeah. But, but that's, I am very invested in,
68
+ [403.32 --> 407.28] in the Beam ecosystem. So the Beam is the name of the virtual machine.
69
+ [407.60 --> 408.74] Do you know what Beam stands for?
70
+ [408.74 --> 417.38] So I think early on it was Bogdan's something, something machine. I don't remember exactly.
71
+ [417.58 --> 421.76] Erlang Abstract Machine. Bogdan's Erlang Abstract Machine.
72
+ [421.82 --> 426.94] Because initially it was JAM, which was Joe's Abstract Machine, I imagine.
73
+ [427.12 --> 432.12] Yes. Joe, you're still in our minds. I know that you're not listening, but those that know,
74
+ [432.46 --> 437.98] Joe Armstrong, the co-creator of Erlang, you're still in our minds. Thank you for everything you've done.
75
+ [437.98 --> 440.38] And you shipped a great thing into the world.
76
+ [440.84 --> 447.20] Yeah. The Beam and Erlang are absolutely wild. And it's, it's been interesting that through many
77
+ [447.20 --> 452.88] years I've heard of Erlang and people have been like, oh, it's a weird, that's a weird one,
78
+ [453.44 --> 457.98] but it has some really strong ideas and it has some really strong features. And it's like, okay,
79
+ [457.98 --> 463.58] whatever. I don't really do FP though. It wasn't really in my wheelhouse. And I figured it was
80
+ [463.58 --> 473.06] probably too complicated for me. Now I'm very, very keen to avoid working with non-Beam languages if I
81
+ [473.06 --> 481.46] can, because there are, there's just so much you get with, with a Beam that you just don't have in
82
+ [481.46 --> 485.12] other runtimes or that you have to work so very, very hard for in other runtimes.
83
+ [485.12 --> 487.96] Which are your top three favorite Beam features?
84
+ [488.46 --> 495.38] Concurrency and parallelism at the same time for essentially no extra effort. It makes you do
85
+ [495.38 --> 502.28] concurrency and parallelism correctly and reasonably without tripping you into sort of mutable state
86
+ [502.28 --> 507.94] and the dangers of concurrency and parallelism. So that's one. Then there's the whole resiliency
87
+ [507.94 --> 516.32] thing, which is built on sort of the same idea or some of the same ideas where there will be things
88
+ [516.32 --> 523.02] that happen to your application that are unexpected that you can't really catch with just catching an
89
+ [523.02 --> 527.22] exception. Maybe the disc was full. Maybe the service you were talking to was down. There's always
90
+ [527.22 --> 532.22] something to make it blow up. And it has been described as the let it crash philosophy, but it's
91
+ [532.22 --> 539.42] not always the most, it's not the best marketing. It makes managers very, very nervous. But the idea
92
+ [539.42 --> 545.86] that it's okay if certain components fail, the important thing is to have a recovery strategy.
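(A minimal sketch of what "a recovery strategy" looks like in Elixir terms -- module names here are hypothetical:)

```elixir
defmodule MyApp.Application do
  # If a child process crashes -- full disk, dead upstream, an unexpected bug --
  # the supervisor restarts just that child instead of the whole system dying.
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      MyApp.Repo,              # database connection pool
      {MyApp.FeedFetcher, []}  # a worker that is allowed to crash and restart
    ]

    # :one_for_one -> only the crashed child is restarted.
    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```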
93
+ [546.28 --> 553.12] And this actually sort of feeds into, to the Kubernetes thing, which, which has a similar
94
+ [553.12 --> 559.90] approach, but in a, on a different scale. And this, this sets me apart from a lot of functional
95
+ [559.90 --> 566.38] programmers, some functional programming enthusiasts really, really like their types. I'm very,
96
+ [566.38 --> 574.04] very glad that Erlang and Elixir are dynamic. Okay. Apparently there is a typed Erlang syntax,
97
+ [574.16 --> 579.58] DSL coming from Facebook. I say Facebook, but it's really WhatsApp. I keep forgetting its name,
98
+ [580.24 --> 584.26] but something Muscala. Do you know who I'm referring to?
99
+ [584.26 --> 590.82] Yeah. Mikhail Muscala is the guy that, as far as I know, sort of started the effort or
100
+ [590.82 --> 596.20] that's probably leading the effort. I spoke to him once in Prague. That was before he was at WhatsApp,
101
+ [596.44 --> 603.52] but that's a super interesting effort. And I think that type system makes perfect sense for what they
102
+ [603.52 --> 611.72] need. They're a very large organization, but I don't really find it compelling for building the kind of
103
+ [611.72 --> 619.10] web apps and the systems that I do. I find type systems to be a little bit annoying. I've done
104
+ [619.10 --> 624.10] some work recently with Elm, which has a lot of types. That was frustrating at first, but it was
105
+ [624.10 --> 631.36] also compelling. It showed me some of what, what you really get with a, with a types first approach,
106
+ [631.36 --> 640.16] I guess. So interesting, but I'm not sure I love it. So I'm very, very happy with, with having a
107
+ [640.16 --> 647.74] dynamic language. I come from Python and PHP originally. So that's, yeah, the Ruby lineage
108
+ [647.74 --> 653.64] of Elixir works fine with what I'm sort of used to. It was an easy, a fairly easy transition,
109
+ [654.04 --> 659.84] all things considered. That is a really good top three. So we have a good idea of, well,
110
+ [660.14 --> 666.14] why you like Erlang and which are the top three features of the Beam, specifically I say Erlang.
111
+ [666.14 --> 669.80] When I'm saying Erlang, I'm referring to the ecosystem more, the virtual machine,
112
+ [670.22 --> 676.56] less the programming language. So that makes a lot of sense. I'm wondering when you're done
113
+ [676.56 --> 681.64] coding your Elixir app, how do you ship it? How'd you get it out there?
114
+ [681.96 --> 685.72] So that very much depends on, on context. So I'm...
115
+ [685.72 --> 689.88] Let's take the last one, last Elixir app that you had to, and whether it's a service,
116
+ [690.00 --> 694.06] I mean, you can, you can tell me about it. How did you get it into, how did you ship it?
117
+ [694.06 --> 699.64] So right now I've been spending part of my day setting up a Docker file.
118
+ [700.40 --> 706.32] So that, that'll tell you something. So Elixir and Erlang has this idea of releases
119
+ [706.32 --> 711.46] where you bundle everything, including the runtime into a nice little package
120
+ [711.46 --> 715.90] that you can just shove into a server and start without needing any dependencies,
121
+ [716.14 --> 718.40] essentially, or very few dependencies, at least.
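(For context, a minimal sketch of what defining such a release looks like -- the app name my_app and the version are hypothetical:)

```elixir
# mix.exs -- `mix release` bundles the compiled app plus the Erlang/Elixir
# runtime into one self-contained directory you can copy onto a server.
def project do
  [
    app: :my_app,
    version: "0.1.0",
    elixir: "~> 1.12",
    deps: deps(),
    releases: [
      my_app: [
        include_executables_for: [:unix],
        applications: [runtime_tools: :permanent]
      ]
    ]
  ]
end

# Build and run (shell commands, shown here as comments):
#   MIX_ENV=prod mix release
#   _build/prod/rel/my_app/bin/my_app start
```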
122
+ [718.90 --> 720.42] OpenSSL is always the trickiest.
123
+ [720.42 --> 725.74] Yeah. OpenSSL and usually ncurses, libncurses.
124
+ [725.94 --> 729.88] If you, if you need that, but yes, I know OpenSSL, you will definitely need that because
125
+ [729.88 --> 733.14] you will be doing some sort of encryption somehow, it doesn't matter how.
126
+ [734.68 --> 736.62] But there's always encryption in there somewhere.
127
+ [736.98 --> 737.32] Exactly.
128
+ [737.74 --> 742.36] So I think releases are sort of my ideal for keeping it very lean and just shipping it to
129
+ [742.36 --> 746.02] a server. But in this case, we're going to be doing on-prem deployments.
130
+ [746.02 --> 753.74] So someone else is going to set it up on their own hardware. And my plan is for them to be given
131
+ [753.74 --> 761.24] a Docker compose file, some credentials and just go Docker compose up. There I'm mostly using Docker
132
+ [761.24 --> 770.52] because we want to set up a database and it's not an embedded database. So we need to start a
133
+ [770.52 --> 772.00] database. Which one?
134
+ [772.80 --> 778.68] In this case, it will be Postgres, probably. It was built with MySQL, but I'm sort of transitioning
135
+ [778.68 --> 784.82] it to Postgres as a little bit of a preference of mine. In this case, Docker is mostly serving as sort
136
+ [784.82 --> 791.62] of being so industry standard that it will be familiar to more operations people than actually
137
+ [791.62 --> 793.24] just running a binary would be.
138
+ [793.24 --> 799.28] Yeah. I mean, that's interesting because I think if you are shipping just the app itself,
139
+ [799.72 --> 805.48] then a binary, that's okay, right? Executible, just run it and off it starts. It's no different
140
+ [805.48 --> 811.98] than, for example, a Docker container. Now, if you do have dependencies like PostgreSQL,
141
+ [812.62 --> 817.36] how do you get that started? And which version will you get? And will the package manager have
142
+ [817.36 --> 824.72] the version that you get? And will it have SSL enabled? Maybe it will, maybe it won't. So all
143
+ [824.72 --> 830.52] that configuration now is starting to get into the whole configuration aspect of it. So how do you
144
+ [830.52 --> 834.72] configure it? How do you get them to talk? What about, I don't know, maybe you need to do some
145
+ [834.72 --> 839.80] tunings in Postgres SQL. Will you be shipping them as well? Or will you just let the team that runs it
146
+ [839.80 --> 840.84] figure that part out?
147
+ [840.84 --> 848.08] Yeah. And in this case, we would want to take care of all of that and just provide the Docker
148
+ [848.08 --> 854.12] Compose and like, go ham. And whenever there's an update, maybe we need to tell them to pull a new
149
+ [854.12 --> 861.64] Docker Compose, or maybe they just need to update an image or, but yeah, when you have additional
150
+ [861.64 --> 866.38] infrastructure and you need someone else to set it up, that's a different case for, from, for example,
151
+ [866.38 --> 873.26] how I run my own stuff. Just small services I run. I run beambloggers.com, which is just
152
+ [873.26 --> 879.72] scraping RSS feeds for the Beam community. So if you want to track sort of Erlang and Elixir,
153
+ [880.56 --> 891.28] that's a good place to get an ever-growing RSS feed. But the way I do that is just a release that I
154
+ [891.28 --> 899.72] actually build on a server and stand up there because the availability level I need to maintain
155
+ [899.72 --> 901.58] on that one is whatever I feel like.
156
+ [902.50 --> 908.52] That's a good one. I think that has merit, right? I mean, some use cases, that's perfectly fine.
157
+ [908.52 --> 913.32] Nothing wrong with that. It's all contextual. I keep mentioning this. If that works for you,
158
+ [913.74 --> 918.72] that's great. There's no problem, right? And maybe someone could benefit from that simplicity.
159
+ [918.72 --> 924.92] And that system particularly actually stores all its data in memory. And whenever I restart it,
160
+ [924.94 --> 927.48] it just blows it away and it refetches it from the web.
161
+ [927.84 --> 929.00] That's interesting. Okay.
162
+ [929.26 --> 934.98] It was a fun way of building it, mostly. It means I don't have to deal with any database setup for
163
+ [934.98 --> 940.60] that particular service. I have a few different services where I just keep things around in memory
164
+ [940.60 --> 946.26] because they are fairly ephemeral or like the history isn't particularly important.
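(A minimal sketch of that "keep it in memory, rebuild it on boot" pattern -- the module and the fetch function are hypothetical stand-ins, not the actual beambloggers code:)

```elixir
defmodule FeedCache do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def entries, do: GenServer.call(__MODULE__, :entries)

  @impl true
  def init(_opts) do
    # Start with empty state and defer the slow fetch, so supervisor start-up
    # stays fast; nothing is read from disk or a database.
    {:ok, %{entries: []}, {:continue, :refetch}}
  end

  @impl true
  def handle_continue(:refetch, state) do
    {:noreply, %{state | entries: fetch_all_feeds()}}
  end

  @impl true
  def handle_call(:entries, _from, state), do: {:reply, state.entries, state}

  # Placeholder: a real implementation would crawl the configured RSS feeds.
  defp fetch_all_feeds, do: []
end
```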
165
+ [946.26 --> 951.86] So what I'm hearing is that there are stateless systems, stateless services, which means that
166
+ [951.86 --> 958.48] you could start them anywhere and they would gather data just in time after they boot or maybe part of
167
+ [958.48 --> 963.72] the booting process. I'm not sure when exactly it happens, but there's no state that you have to
168
+ [963.72 --> 971.66] move with a service. So for example, if you were to stand this beambloggers service up elsewhere, on boot,
169
+ [971.66 --> 976.30] it would get all the data that it needs and would start serving it; it wouldn't need to run on a specific
170
+ [976.30 --> 976.80] machine.
171
+ [977.30 --> 984.92] Yeah. So it's very, at least very independent. It's stateful when it runs in that it keeps a lot
172
+ [984.92 --> 992.52] of state around, but it absolutely does not rely on some source of state or needing to carefully
173
+ [992.52 --> 999.84] manage state when it goes up and down. For some other services where I do want to keep history around,
174
+ [999.84 --> 1006.78] I've started using SQLite much more than I used to because that's also operationally much simpler
175
+ [1006.78 --> 1014.12] than Postgres. And I don't find Postgres particularly challenging. It's easy enough to manage and I like
176
+ [1014.12 --> 1021.78] it. But SQLite is even easier and makes a lot of sense if you don't have a lot of heavy needs. And I've
177
+ [1021.78 --> 1031.00] recently seen, so there's a project called Litestream, which solves one of my bigger concerns with SQLite,
178
+ [1031.00 --> 1038.64] which is replicating it or at least having a very recent backup because it's very easy to accidentally
179
+ [1038.64 --> 1046.42] blast away a file on disk. So it hooks into the write-ahead log of SQLite and just ships it to
180
+ [1046.42 --> 1055.10] NES3 compatible storage on any update. So it does an ongoing replication of SQLite and then you can
181
+ [1055.10 --> 1060.40] just restore from that. I don't think it's necessarily feasible to do sort of high availability
182
+ [1060.40 --> 1068.20] with SQLite. But I mean, if I was building a product right now, sort of a small scale SaaS or
183
+ [1068.20 --> 1074.56] that kind of thing, this would definitely be something I consider. There was a Hacker News thread around the
184
+ [1074.56 --> 1083.60] time that Litestream got some attention. It's done a few rounds. Someone mentioned running a product
185
+ [1083.60 --> 1090.02] on SQLite and I think they'd benchmarked it to 10,000 reads a second to 5,000 writes a second
186
+ [1090.02 --> 1098.70] on an NVMe drive. That's a lot of read and write activity. A lot more than I would typically expect
187
+ [1098.70 --> 1106.60] to need to serve for a small scale SaaS. And if you can scale with just using something like SQLite
188
+ [1106.60 --> 1112.10] up to that level, then you're probably successful enough that you can switch it out for something
189
+ [1112.10 --> 1118.24] else at that point and make all those decisions about complexity. That is a very good point,
190
+ [1118.44 --> 1124.68] actually. Litestream - it will be in the show notes, but it's litestream.io. It's Ben Johnson,
191
+ [1124.68 --> 1131.28] Ben B. Johnson. I think he was on Changelog at some point. I remember this coming up and you're right.
192
+ [1131.38 --> 1136.30] I mean, he's Ben Johnson. He's the author of Bold DB. So, you know, he has some experience in this
193
+ [1136.30 --> 1142.08] area. Let's put it that way. I do remember it sounding really interesting. So you can check it
194
+ [1142.08 --> 1149.68] out if you want. But my takeaway is that you like keeping things simple. And if it gets the job done,
195
+ [1150.30 --> 1153.84] that's it. That's all it needs to be. It doesn't need to be fancy. It doesn't need to be impressive.
196
+ [1153.84 --> 1159.38] It doesn't need to be, you know, look at me, you know, I've done it in this way that no one else
197
+ [1159.38 --> 1165.26] has done it before. It doesn't have to be that. It just has to work. Yeah. And if this works for you,
198
+ [1165.46 --> 1174.08] that's great. Yeah. And since I do consulting for a number of different clients, it's,
199
+ [1174.60 --> 1180.04] I always have to adapt to whatever's already there. So the client that I will be shipping
200
+ [1180.04 --> 1186.68] on-prem for doesn't actually have a thing in place. So that's sort of me putting my opinions
201
+ [1186.68 --> 1193.04] and stamp on that. I'm there to solve that problem. But in other cases, there's an existing ops
202
+ [1193.04 --> 1200.32] person or ops team and I'm mostly shipping code and then I'll roll with whatever they have. And
203
+ [1200.32 --> 1204.92] if I don't like it, I'll be swearing a little bit under my breath and maybe giving them some
204
+ [1204.92 --> 1211.96] some opinions, but, but typically I'm happy to roll with whatever's, whatever's there.
205
+ [1212.38 --> 1217.24] I don't really believe in making radical changes to software that's already working,
206
+ [1217.24 --> 1222.74] even if it's not working in the way you think it maybe should. But there is this
207
+ [1222.74 --> 1230.52] trend also in, particularly in the Beam ecosystem where there's a lot of things you can get done by
208
+ [1230.52 --> 1238.60] using only the Beam. The Beam actually ships with a distributed database inside of it.
209
+ [1239.08 --> 1247.08] So Mnesia - it has a lot of challenges. It has some sort of conflict resolution problems when you run
210
+ [1247.08 --> 1253.88] it in a distributed fashion. So I haven't been keen on using it for anything else than sort of caching,
211
+ [1253.88 --> 1260.12] but with SQLite in place, then you can actually use the sort of standard tooling in Elixir around
212
+ [1260.12 --> 1266.28] Ecto and which is essentially the ORM, not so much objects, but relational mapper, I guess.
213
+ [1266.28 --> 1271.32] Do you know which is the biggest Erlang project that uses Mnesia?
214
+ [1271.64 --> 1273.24] It would probably be WhatsApp.
215
+ [1274.28 --> 1281.16] They do, but they use it in a different way, very different way. So they, as far as I know,
216
+ [1281.16 --> 1287.00] and this was like many years ago, they used it on just a few servers and they used it for,
217
+ [1287.00 --> 1295.24] I think it was just metadata, but like very small metadata. So nothing that is heavy writes or heavy
218
+ [1295.24 --> 1301.88] reads. And I think the eventual consistency was okay for it. So things did not like, like dirty reads,
219
+ [1301.88 --> 1307.16] for example, were a big thing for them, but they used it like on a subset of nodes and I had like
220
+ [1307.16 --> 1312.68] dedicated nodes for that. And I think they wanted to move away from it or there was talk about that.
221
+ [1312.68 --> 1315.00] This was like at least five years ago.
222
+ [1315.00 --> 1315.48] Yeah.
223
+ [1315.48 --> 1322.44] The project which I have in mind - and I had a first-class seat to it - is RabbitMQ. And it's one of its
224
+ [1322.44 --> 1330.04] Achilles heels. Mnesia - oh wow. Like if it's at any sort of scale, you start seeing some serious issues,
225
+ [1330.04 --> 1337.32] like 10,000 writes per second. No way. No way. Because it's the synchronization part and you have
226
+ [1337.32 --> 1342.92] to go over a network and you have multiple nodes and it's all synchronous. So you have transactions.
227
+ [1343.88 --> 1351.48] Yeah. So you have to typically look at Mnesia in the context it was created in, which was telecom.
228
+ [1351.48 --> 1357.80] And as far as I understand, it was typically between machines that were very tightly coupled together.
229
+ [1358.52 --> 1363.00] I've heard people talk about back planes and I have no idea what that is. So I'm not even going to try.
230
+ [1363.96 --> 1373.72] But yeah, it was about managing. So phone calls and that kind of connecting, which is very different
231
+ [1374.44 --> 1380.92] from your typical sort of web app or like we're keeping everything around forever type of
232
+ [1380.92 --> 1387.64] infrastructure that we deal with now. I've definitely looked for something that would
233
+ [1388.68 --> 1394.68] essentially scale arbitrarily as a database across nodes as you add more. Not that I have the need,
234
+ [1394.68 --> 1399.88] just because I want to see if there's sort of the perfect solution out there. And I found CockroachDB
235
+ [1399.88 --> 1406.84] to be very appealing in that sense, because it's Postgres compatible and it's made to be distributed by
236
+ [1406.84 --> 1415.88] by default, which Postgres has a lot of upside and it's great, but it is not built to be distributed
237
+ [1415.88 --> 1423.00] by default. And they've built a lot of sort of distributed features into it, but you know very well
238
+ [1423.00 --> 1431.08] what can happen when you try to replicate Postgres. I thankfully haven't had a reason to spend too
239
+ [1431.08 --> 1439.32] much time replicating Postgres. But yeah, looking at Cockroach though, you'll also see that sort of
240
+ [1439.32 --> 1448.52] suggested specs and what they suggest for setting up Cockroach, there's a lot of concerns and a lot of
241
+ [1448.52 --> 1455.24] things to think about and a lot of details suddenly that you don't typically think about when you're
242
+ [1455.24 --> 1461.80] setting up a single Postgres instance. And I think this feeds sort of into the whole idea of Kubernetes
243
+ [1461.80 --> 1469.96] as well. That's like, oh, but this is an abstraction layer that simplifies everything. It generalizes
244
+ [1469.96 --> 1477.00] everything so you don't have to think about all the details. But in my book, you can never, ever stop
245
+ [1477.00 --> 1482.84] thinking about the details. It's like, okay, we brought in Kubernetes, so now we don't have to know
246
+ [1482.84 --> 1491.40] how Linux works? No, no, I don't think so. Or what's your experience there? Does bringing Kubernetes in
247
+ [1492.44 --> 1495.88] make you stop having to care about your Linux installations?
248
+ [1505.80 --> 1511.64] What's up shippers? This episode is brought to you by our friends at Teleport. With Teleport Access
249
+ [1511.64 --> 1517.16] Plane, you can quickly access any computing resource anywhere. Engineers and security teams can unify
250
+ [1517.16 --> 1523.00] access to SSH servers, Kubernetes clusters, web applications, and databases across all environments.
251
+ [1523.32 --> 1528.12] Teleport is open core, which you can use for free, and it's supported by their cloud hosted version,
252
+ [1528.12 --> 1533.16] which lets you forget about configuring, updating, or managing Teleport. The Teleport team does all
253
+ [1533.16 --> 1539.00] that for you. Your team can focus on your projects and spend less time worrying about infrastructure access.
254
+ [1539.00 --> 1543.16] Try Teleport today in the cloud, self hosted, or open source. Head to
255
+ [1543.16 --> 1547.80] GoTeleport.com to learn more and get started. Again, GoTeleport.com
256
+ [1547.80 --> 1561.56] You mentioned a couple of things which I would like to dig in a little bit more.
257
+ [1561.56 --> 1569.40] First of all, you mentioned about using PostgreSQL in your most recent project that you're doing for
258
+ [1569.40 --> 1574.76] a customer, the one that you're deploying using Docker Compose, or that you're using Docker Compose to run it.
259
+ [1575.64 --> 1581.32] And I'm wondering in that context, why did you choose PostgreSQL over SQLite?
260
+ [1581.32 --> 1585.80] Yeah, that's actually a very good question, and I've been wrestling with it myself a little bit.
261
+ [1585.80 --> 1596.28] So one of the big reasons is that the current SQLite adapter for Elixir is fairly new.
262
+ [1597.08 --> 1605.64] And SQLite is very reliable, but I don't feel like that particular adapter has necessarily been proven
263
+ [1605.64 --> 1615.16] out yet. And shipping that to customers before I'm certain and I have a track record with it.
264
+ [1616.44 --> 1622.68] That's more than a few experiments. I just don't feel entirely comfortable doing that. So I opted for
265
+ [1623.32 --> 1629.96] even steering them away from MySQL, which is perfectly well supported into what is the absolute main
266
+ [1629.96 --> 1639.08] line of Phoenix, which is PostgreSQL. It seems to have the community behind it. Partly, I want to leave
267
+ [1639.08 --> 1646.52] the client with something that other developers will definitely recognize and be capable of working with.
268
+ [1647.48 --> 1653.88] If it ends up that I'm not around in the long run or for whatever reason, I want to bring us closer to
269
+ [1653.88 --> 1660.28] the main line. And there are a few very cool projects and very useful projects in the Elixir community that
270
+ [1660.28 --> 1667.08] lean on PostgreSQL specific features. One of them is Oba in a job processor. So having the option of using
271
+ [1667.08 --> 1675.16] that is also a good one. But this would be a good project for a SQLite and shipping that. There's also
272
+ [1675.16 --> 1681.32] a little bit of a question mark around some backups. Like, okay, then we will want to use Lightstream.
273
+ [1681.32 --> 1687.56] But do I have something S3 compatible to ship it to? Or do I need to stand that up myself and then
274
+ [1687.56 --> 1693.16] pull the file out and throw it at... Yeah. Those are very good points. And I really like the way
275
+ [1693.16 --> 1697.64] you're thinking about this because it's about confidence. Whatever you're giving, right? When
276
+ [1697.64 --> 1703.48] you're, let's say, shipping it and here you go, customer, this is what was done for you. Someone has
277
+ [1703.48 --> 1709.80] to maintain it. Someone has to deal with all the issues that arise because they will arise. Updates,
278
+ [1709.80 --> 1714.20] hello. Everybody seems to forget about them except when they have to be done and then they don't do
279
+ [1714.20 --> 1721.24] them because updates now. It's very important to keep up with those things. CVEs, right? How do you
280
+ [1721.24 --> 1726.68] address CVEs if you don't have a good way of releasing these updates out there? And if you're
281
+ [1726.68 --> 1731.88] not confident in what you have and, you know, like the point that you reach, it becomes a bit more
282
+ [1731.88 --> 1737.16] difficult to take those small steps, those small improvement steps. So I think it makes perfect sense.
283
+ [1737.16 --> 1742.60] Not to mention that, as you said, you may not be around. Someone else may take this over and you
284
+ [1742.60 --> 1748.60] want them to take over the most supported, the most documented, the most known thing, right?
285
+ [1748.60 --> 1754.20] Yeah. And I think Ruby on Rails was like that for a long, long time in that I can see a lot of parallels
286
+ [1754.20 --> 1760.12] between Ruby on Rails and Phoenix. And there were some good sensible defaults in Ruby on Rails that if you
287
+ [1760.12 --> 1767.88] went outside of those, there was a lot of pain there. So sure, you can use MongoDB, but why would
288
+ [1767.88 --> 1773.48] you with your Ruby app? Just stick to MySQL, like that's what the majority does. And I do remember
289
+ [1773.48 --> 1777.72] being in situations in the past when we did that and there was some pain there. The drivers were
290
+ [1777.72 --> 1783.24] great. I still remember many discussions with Jordan. I forget his family name, but he was the
291
+ [1783.24 --> 1788.36] the maintainer of Mongoid, I believe, if I remember it correctly. And that was a great library, but
292
+ [1788.36 --> 1792.68] still there were issues that you wouldn't expect. So it just goes to show that even from my experience,
293
+ [1792.68 --> 1801.72] I remember moments when I wished I had chosen the default and I didn't. And not just me, but others
294
+ [1801.72 --> 1806.84] paid the price for that. And it was just not fair. So if I learned anything, if you can stick with the
295
+ [1806.84 --> 1812.28] defaults or like with the most common path, especially in these cases, it may be best to. Now, if you have a
296
+ [1812.28 --> 1816.44] personal project like you have, right, like you have a couple of like experimental projects,
297
+ [1816.44 --> 1822.12] you can use anything you want because your SLO is whatever you want it to be. And it can change from
298
+ [1822.12 --> 1827.88] day to day and it's fine. So it doesn't really matter. But for others, you know, that's reliability,
299
+ [1827.88 --> 1833.64] upgradeability is important. You need to choose differently. Yeah. Sometimes it pays to make a
300
+ [1833.64 --> 1839.96] dull choice here and there. Yes. I'm happy to go absolutely wild on my own projects,
301
+ [1839.96 --> 1846.60] but it's also things like if I'm shipping a library to the community, that's also where I will be
302
+ [1846.60 --> 1853.32] looking quite closely at like, okay, but what is a good library? What does it mean to be behave well as
303
+ [1853.32 --> 1860.52] a library in this ecosystem? I can't just put all of my opinions in there if I want to be a sort of good
304
+ [1860.52 --> 1869.72] citizen. Yeah. I think that sort of carefulness about what you choose, that's something I've picked
305
+ [1869.72 --> 1877.48] up with, with the years. I've definitely had a few, a few years of chasing shiny new frameworks,
306
+ [1877.48 --> 1885.40] shiny new ops technology, setting up servers in cool new ways, building a custom microservice
307
+ [1885.40 --> 1889.88] architecture from the ground up. Just because you could, right? Now the reason I can do this,
308
+ [1889.88 --> 1894.52] so why not? No, no. Oh, we absolutely needed to scale that product so hard. That's actually what
309
+ [1894.52 --> 1899.40] we had sort of as an objective. Like this has to be scalable. The last iteration of this product was
310
+ [1899.40 --> 1904.52] not scalable. Let's greenfield it. Let's build it right. It should be able to scale. And that
311
+ [1904.52 --> 1909.16] architecture could absolutely have scaled, but that product did not need that scale at all.
312
+ [1910.04 --> 1915.48] It could have been so much simpler. That's a good, like why, why does it need to scale? If you don't
313
+ [1915.48 --> 1924.36] ask enough why's, like why with an S at the end, you will like, this is something which I have seen,
314
+ [1925.16 --> 1931.24] teams and products that keep going in the wrong direction. And then it doesn't matter how fast
315
+ [1931.24 --> 1936.60] you go in that direction because it's so wrong. You're going infinitely, infinitely fast in the wrong
316
+ [1936.60 --> 1941.24] direction. So we're going infinitely slow, right? Because it's like, you're not even going in the
317
+ [1941.24 --> 1947.56] right direction. So what's the point? Why are you rushing towards a direction that doesn't benefit
318
+ [1947.56 --> 1952.60] anyone? And then years later, people will be asking, but why do we do that? And no one will recall
319
+ [1953.56 --> 1956.76] because it doesn't make any sense, right? Like things that don't make sense, people tend
320
+ [1956.76 --> 1959.24] people tend to forget. Like you're right. It doesn't make sense.
321
+ [1959.24 --> 1965.72] Yeah. I wrote a retrospective on that particular architecture, the entire product through like
322
+ [1965.72 --> 1970.76] three different iterations and put it on my blog. And I've had some interesting feedback on that because
323
+ [1971.80 --> 1976.28] people don't always share. I wouldn't even call it a failure story because the product was a success
324
+ [1976.28 --> 1983.64] and it did fine until it was shut down at some point. Yeah. Some of the technical choices I would not
325
+ [1983.64 --> 1989.64] make again, but that's where I learned that I probably shouldn't have done that or shouldn't
326
+ [1989.64 --> 1994.20] have done it that way. Some of the choices checked out. Some of them didn't.
327
+ [1994.20 --> 1999.48] So in that retrospective of a post that you wrote, by the way, what's the title of the post?
328
+ [1999.48 --> 2004.60] I think it's 10 years in the vertical. 10 years in the vertical. Okay. We will link it in the show
329
+ [2004.60 --> 2009.96] notes for those that want to read it. It's a three-part series, one part per version of the system.
330
+ [2009.96 --> 2015.32] Awesome. So get your coffee ready, tea ready, whatever you're drinking, strap down. It's a
331
+ [2015.32 --> 2020.76] long one, but a good one worth it. Right. I will read it myself by the way, because it sounds very
332
+ [2020.76 --> 2025.64] interesting. Is it funny? I'm not sure if it's funny. I hope it's a little bit funny.
333
+ [2025.64 --> 2030.12] That's the killer. I definitely had good feedback on it. So it should be bearable to read at the very
334
+ [2030.12 --> 2035.56] least. Okay. All right. The coffee will make it worth it. No, no, no. I'm joking. Like the funny and
335
+ [2035.56 --> 2039.64] interesting, it's like a killer combo. And if you can do both, it's great. Right. It's like,
336
+ [2040.12 --> 2043.64] the jackpot I think of content. And on the shipping side of that,
337
+ [2043.64 --> 2049.72] that was mostly Ansible, but it ended up being a lot of Ansible because we did split everything
338
+ [2049.72 --> 2055.24] up into microservices. Oh yes. For a three person team. That's what you get, right? I mean,
339
+ [2055.24 --> 2060.20] it's like one of the trade-offs that you get and you may need that, right? I know that some teams do,
340
+ [2060.20 --> 2065.48] but not everyone does. And knowing the difference when to use a microservice versus a monolith is a very
341
+ [2065.48 --> 2071.56] important thing. Like know the answer before you embark on the journey. And even if the answer
342
+ [2071.56 --> 2078.28] comes slower, it's worth it. Take your time. Because getting out of that particular journey,
343
+ [2078.28 --> 2083.80] it will be very difficult. It can be done, but it's unlikely to happen. So it's one thing that
344
+ [2083.80 --> 2088.28] you want to choose wisely. You could choose maybe your cloud provider, you can migrate,
345
+ [2088.28 --> 2095.08] and even that can be a bit difficult, but it's easier than going back from a microservice decision
346
+ [2095.08 --> 2099.08] or a monolith one. By the way, sometimes that is the wrong decision. So we're not saying that one is
347
+ [2099.08 --> 2105.32] better than the other. No. Okay. So we covered about, like we touched on a couple of interesting
348
+ [2105.32 --> 2111.00] things, but I still think we haven't dug deep enough in the whole, before you mentioned about
349
+ [2111.00 --> 2115.80] Kubernetes. So I don't think we dug deep enough into that. One of the reasons why we're even having
350
+ [2115.80 --> 2121.08] this conversation, because I know that for you, Kubernetes doesn't make sense. And that fascinates
351
+ [2121.08 --> 2126.36] me because I'm not saying that everybody should use that. I'm not saying that, but I can see a lot more
352
+ [2126.36 --> 2132.84] reasons to use it than not to use it. And it's that API that, from my perspective, is the best thing
353
+ [2132.84 --> 2139.64] that it has. So it's how it approaches operations and the building blocks that you have at your
354
+ [2139.64 --> 2147.08] disposal. You can achieve the same thing in different ways, but I don't know, having tried most of them,
355
+ [2147.08 --> 2155.16] I kind of like it and it makes a lot of sense. So why in your case, Kubernetes, you're not using it
356
+ [2155.16 --> 2159.88] at all, right? Because I don't think you're using Kubernetes. You hear about it a lot, but you don't
357
+ [2159.88 --> 2166.92] use it. Why is that? My experience with Kubernetes is essentially, I tried K3S at some point and started
358
+ [2166.92 --> 2173.88] sort of learning how to set up manifest files and a lot of swearing ensued. And then I stopped,
359
+ [2174.52 --> 2183.24] essentially. For one thing, I don't generally build systems at a large scale. I typically work with
360
+ [2183.24 --> 2188.60] teams that are maybe five developers or so. That didn't stop us from using Kubernetes
361
+ [2188.60 --> 2192.76] changelog, right? There were like, what, three developers? And like one full time,
362
+ [2192.76 --> 2197.00] and even not that much full time, and we're still using Kubernetes. So that didn't stop anyone.
363
+ [2197.00 --> 2198.04] Yeah. But please continue.
364
+ [2198.04 --> 2202.36] Yeah. I could argue with you whether a changelog should be using Kubernetes.
365
+ [2202.36 --> 2204.36] Yes, please. Let's.
366
+ [2204.36 --> 2211.16] I for sure do not see the need for a system such as the changelog to have Kubernetes. Now,
367
+ [2211.96 --> 2219.00] again, context, the guy that's responsible for operating changelog apparently likes Kubernetes,
368
+ [2219.64 --> 2227.80] which means that he enjoys his job more if he gets to run it on Kubernetes. So it sort of checks out.
369
+ [2227.80 --> 2230.92] But it's not that because I'm that guy. So just like for the listeners, that's me.
370
+ [2230.92 --> 2234.12] Oh, yeah. Yeah. I'm absolutely talking about you there.
371
+ [2234.12 --> 2239.32] I'm that guy. Okay. So let's unpack this. I tried to answer this question a couple of times,
372
+ [2239.32 --> 2245.56] and either people, I must be answering it wrong. So let me try again. Okay. The reason why we chose
373
+ [2245.56 --> 2251.96] Kubernetes is because it reached a certain level of maturity. That was one of the things. And Linode,
374
+ [2251.96 --> 2256.92] our partner for all things infrastructure, they started offering a managed Kubernetes service.
375
+ [2256.92 --> 2261.80] So that was important for us, right? We don't want to deal with managing it. So that is a provider
376
+ [2261.80 --> 2269.16] concern. We had to solve a couple of things, like for example, DNS. DNS updates, like whenever the IP
377
+ [2269.16 --> 2275.08] changes or the load balance that changes, the IP has to be updated in the DNS. The certificate,
378
+ [2275.08 --> 2280.36] we used to pay for those. And then Let's Encrypt came along. So how do we get free certificates
379
+ [2280.36 --> 2284.84] via Let's Encrypt and support that mindset?
380
+ [2284.84 --> 2286.20] A cron job.
381
+ [2286.20 --> 2292.52] A cron job. Excellent. Okay, great. Great. A cron job. So yeah, that is a valid answer.
382
+ [2292.52 --> 2301.24] And then how do you push updates? Like, do you have your CI that deploys? In some cases you do,
383
+ [2301.80 --> 2307.16] right? In some cases, the CI is the thing that has the keys to the kingdom. And that's what we had.
384
+ [2307.16 --> 2312.52] And it can do anything. Is that a good thing? I don't think it is. But whatever, you know,
385
+ [2312.52 --> 2318.84] it's just like an opinion. But there's more. How do you keep your certificate in sync between your CDN,
386
+ [2319.96 --> 2326.28] your load balancer, and any other place that may use it? In our case, it was just these two,
387
+ [2326.28 --> 2330.60] the load balancer and the CDN. So you have to keep, not only have to renew it, but then you have to
388
+ [2330.60 --> 2335.32] upload it and make sure it's the same one everywhere. Excellent. How do you run backups?
389
+ [2335.32 --> 2341.64] Another cron job, right? So before you know it, you have like all these things that you need to
390
+ [2341.64 --> 2346.84] have. Like what gets, for example, Docker Compose or whatever you're using in place? What installs
391
+ [2346.84 --> 2352.44] Erlang? What determines which version of Erlang you have? What about the monitoring? Where do you run
392
+ [2352.44 --> 2357.64] that? How do you configure the monitoring? How do you configure, for example, the monitoring, not just
393
+ [2357.64 --> 2363.00] like the metrics and the logging, but I'm also talking about the synthetic monitoring, your pings,
394
+ [2363.00 --> 2367.40] your pingdoms, or your Grafana clouds, or whatever you may be using. And before you know it, you have
395
+ [2367.40 --> 2375.56] all these concerns that typically are either in a wiki or in someone's head or different people
396
+ [2375.56 --> 2379.80] approach it in different ways. In this case, it's just me. So, you know, it's not really a problem,
397
+ [2379.80 --> 2383.64] but you have all these things, secrets. Oh, that's like another one. Where do you store
398
+ [2383.64 --> 2396.04] the secrets and how do you rotate secrets when there's a leak?
399
+ [2401.24 --> 2407.16] This episode is brought to you by our friends at Cockroach Labs, the makers of CockroachDB,
400
+ [2407.16 --> 2413.08] the most highly evolved database on the planet. With CockroachDB, you can scale fast, survive
401
+ [2413.08 --> 2419.64] anything, and thrive everywhere. It's open source, Postgres wire compatible, and Kubernetes friendly,
402
+ [2419.64 --> 2424.04] which means you can launch and run it anywhere. For those who need more, you can build and scale
403
+ [2424.04 --> 2429.32] fast with Cockroach Cloud, which is CockroachDB hosted as a service. It's the simplest way to deploy
404
+ [2429.32 --> 2434.60] CockroachDB and is available instantly on AWS and Google Cloud. With Cockroach Cloud,
405
+ [2434.60 --> 2440.36] a team of world-class SREs maintains and manages your database infrastructure so you can focus less
406
+ [2440.36 --> 2445.24] on ops and more on code. Get started for free with a 30-day free trial or try their new forever
407
+ [2445.24 --> 2450.04] free tier that's super generous. Head to CockroachLabs.com slash changelog to learn more.
408
+ [2450.04 --> 2453.08] Again, CockroachLabs.com slash changelog.
409
+ [2453.08 --> 2468.60] The way I approach this is what is a system that can manage all these things in a way that doesn't
410
+ [2469.24 --> 2474.76] have me worrying about versions as much? Because we use Terraform and we had to do upgrades because
411
+ [2474.76 --> 2480.84] it was running locally. We had plugin issues, we had to upgrade those. And the issues were like
412
+ [2480.84 --> 2486.04] stuff like things that you, problems that you wouldn't expect to have that we were having
413
+ [2486.04 --> 2491.80] because of like this different tooling that we're using. We used Ansible. Did we use Chef at some
414
+ [2491.80 --> 2495.64] point? No, we didn't use. We only used Ansible at some point many, many years ago. By the way,
415
+ [2495.64 --> 2499.56] there was like a progression. So every year we blogged about this. We talked about this.
416
+ [2499.56 --> 2504.44] It didn't just come out of the blue. I know, let's use Kubernetes. No, we've been using Ansible for years.
417
+ [2504.44 --> 2509.72] We've been using Concourse CI to run the builds, to do the deploys. We used Docker Compose and then
418
+ [2509.72 --> 2515.64] Docker Swarm for again, a couple of years. So we grew into this architecture. And right now,
419
+ [2516.52 --> 2522.68] everything is stored, like all the YAML, all the config is stored in the repo. Okay, we have some
420
+ [2522.68 --> 2529.24] Make Glue, which I'm not very proud of. It's great, but I know there's a better way. Maybe Argo CD. I don't
421
+ [2529.24 --> 2534.04] know. GitOps. I keep hearing about that. Maybe we try that. I don't know. But can we have something
422
+ [2534.04 --> 2538.68] that continuously applies those configs and you don't have to have your machine to run that stuff?
423
+ [2539.56 --> 2545.16] So maybe something like a control plane, which is different from your service. And I know that
424
+ [2545.16 --> 2549.56] you mentioned like large scale. I don't think changelog is very large scale. It's a simple app,
425
+ [2550.20 --> 2555.80] but it's still serving many of terabytes every month of traffic. And there's the CDN. When the CDN
426
+ [2555.80 --> 2559.88] goes down, there's a big problem as we had a couple of days ago. And you have to know how to
427
+ [2559.88 --> 2565.80] basically update it very quickly, which we could. And you have to have that space and room. So the
428
+ [2565.80 --> 2570.12] answer is a bit more complicated. It's contextual. And it's not because I like Kubernetes, it's because
429
+ [2570.12 --> 2576.84] it makes all these concerns easier than if we used anything else than we did before, by the way. It
430
+ [2576.84 --> 2584.20] improves on that. Yeah. What do you think about that? Easier for you, I would say. For me, it's like,
431
+ [2584.20 --> 2590.60] I barely know where I would start on making Kubernetes do this. And I did start looking at
432
+ [2590.60 --> 2596.76] K3S specifically because I wanted the CD part. I wanted something to pick up my finished Docker
433
+ [2596.76 --> 2602.84] containers and spin up the new version. That's essentially why I wanted to set that up to have
434
+ [2602.84 --> 2611.72] a very, very lightweight approach to what Kubernetes can do. The thing is, I don't see sort of keeping the
435
+ [2611.72 --> 2620.76] load balancer up to date or keeping certificates up to date as that complicated of an endeavor with
436
+ [2620.76 --> 2627.80] sort of current baseline tools like Let's Encrypt. So I wouldn't bring in layers to solve them.
437
+ [2628.60 --> 2635.00] It could be a bash script. It could be some fairly tightly specced tool. So for example, in Elixir,
438
+ [2635.00 --> 2641.72] there is a fantastic library by Saša Jurić, which is called Site Encrypt, which will simply do the
439
+ [2641.72 --> 2647.08] Let's Encrypt dance for you if you configure your Phoenix app to use it. So when you start your
440
+ [2647.08 --> 2653.16] application, it checks, do we already have certificates lying around? I'll use those. If not, I'll talk to
441
+ [2653.16 --> 2660.76] Let's Encrypt. We'll shake hands. I'll get some certificates. And now we're certified. And with that,
442
+ [2660.76 --> 2666.20] to some extent, you might not even need Nginx at that point. I bet you would probably be able to
443
+ [2666.20 --> 2672.20] serve changelog with the previously mentioned SQLite performance of like 10k reads a second.
444
+ [2672.84 --> 2678.28] You were talking about terabytes and that's like the MP3 files, right? So file serving is one of the
445
+ [2678.28 --> 2686.20] places where I would typically reach for sort of proprietary cloud stuff like S3 or Linode Object
446
+ [2686.20 --> 2694.68] store or one of those because it just solves a lot of the like, okay, I want to have some redundancy
447
+ [2694.68 --> 2702.92] in this. I want to be able to scale it essentially arbitrarily. For file serving, I would typically use
448
+ [2702.92 --> 2708.92] a service like that. Just it's super annoying dealing with large drives and RAID. So I'd rather not.
449
+ [2708.92 --> 2714.28] So pragmatism, I don't think you should like peel everything off, but I'm also not sure like,
450
+ [2714.76 --> 2720.44] when do you actually need a load balancer? Having Nginx in front of your app can be very nice
451
+ [2721.80 --> 2727.16] because it allows you to do things like, oh, actually we're down for maintenance right now.
452
+ [2727.16 --> 2732.12] I still want to show something nice to the user or pointing to different instances that you're
453
+ [2732.12 --> 2737.64] starting up or whatever. But there's also the potential risk of your Nginx being
454
+ [2737.64 --> 2746.12] misconfigured or less well configured than your application and actually being a bottleneck
455
+ [2746.12 --> 2752.12] to your application. So I've seen that happen too. Typically, I would set something up with Nginx.
456
+ [2752.68 --> 2758.84] But also one of the things with Kubernetes is all this, like any node can go away at any time where
457
+ [2758.84 --> 2766.20] we're on very moving ground cloud infrastructure. We only use what we need, but you always need some.
458
+ [2767.08 --> 2774.28] So usually you're at a base level, like we have these instances up constantly. At that point,
459
+ [2774.28 --> 2780.92] I'm like, but do you need a cluster of three instances running the actual Kubernetes and then
460
+ [2780.92 --> 2788.04] like an app instance and a DB instance and like a load balancer instance? Or is this like one
461
+ [2788.04 --> 2792.68] application instance and one database instance? Would that do?
462
+ [2792.68 --> 2797.88] I think it would. And if you look at changelog at its core, that's exactly what we have. We have
463
+ [2797.88 --> 2802.76] the application and we have the database, single instance PostgreSQL. There's a great story how we
464
+ [2802.76 --> 2807.80] used replicated PostgreSQL and how that was the cause of a couple of downtimes. I think we cover
465
+ [2807.80 --> 2809.88] that in the episode one. Yeah.
466
+ [2809.88 --> 2815.08] A different story. And CockroachDB, that's something which I definitely want to try out.
467
+ [2815.08 --> 2820.84] Distributed SQL with a PostgreSQL-compatible wire format. That's a very interesting one to try
468
+ [2820.84 --> 2827.88] out for sure. It's on my list. But I think what I'm hearing, going back to what you were saying,
469
+ [2827.88 --> 2835.72] is that for you, getting started with Kubernetes seems very complicated for a value that isn't very
470
+ [2835.72 --> 2841.48] clear. Like what is the value proposition? A lot of the things that you can do today,
471
+ [2841.48 --> 2849.08] I mean, does Kubernetes make them any different? And maybe the answer is no, from your perspective,
472
+ [2849.08 --> 2854.60] right? You're saying like, let's just use a cron job. In my mind, I think this is where I wish we had
473
+ [2854.60 --> 2859.00] more time to dig into this. So what I'm proposing is a follow-up because we will run out of time.
474
+ [2859.00 --> 2864.68] But there's so much more. So there's so much more to like, for example, the monitoring,
475
+ [2864.68 --> 2868.68] the shipping of logs, like all those things. And you have to configure them somehow. Then you have
476
+ [2868.68 --> 2875.00] to worry about OS patches, whichever host OS you're running. That is not an issue when you're running in
477
+ [2875.00 --> 2881.56] the context of Kubernetes because it's just your container, right? And you don't care about the node,
478
+ [2881.56 --> 2887.24] the worker node that runs the kubelet, that runs like the Kubernetes infrastructure, so to speak.
479
+ [2887.24 --> 2892.68] When it comes to Nginx, you don't install Nginx. You have ingress Nginx, which is a component
480
+ [2892.68 --> 2899.00] that exposes certain CRDs, custom resource definitions. And it's more like it implements
481
+ [2899.64 --> 2904.92] ingresses. Now, what is an ingress? Do you care about it? Well, you do because you need to know
482
+ [2904.92 --> 2912.28] how to configure it. But beyond that, how that maps to Nginx concepts, that's abstracted away from
483
+ [2912.28 --> 2916.84] you. And you have like this self-discovery service, and it's all just happening under the hood.
484
+ [2916.84 --> 2920.68] And you're right, it feels a bit magical, but it's no different to a framework. Like,
485
+ [2920.68 --> 2925.80] for example, if you use Phoenix. But that's the whole thing. See,
486
+ [2925.80 --> 2932.12] Phoenix is a fairly explicit framework. It has a few things that feel a bit magical.
487
+ [2932.12 --> 2937.72] Yes. But it is quite explicit about what everything does.
488
+ [2937.72 --> 2939.32] And Kubernetes isn't.
489
+ [2939.32 --> 2947.40] Yeah, it's not the impression I'm getting. But what I see when you're bringing in something like
490
+ [2947.40 --> 2952.52] Kubernetes, you're placing a lot of abstractions in place, and you're going to be working with those
491
+ [2952.52 --> 2959.24] abstractions. Those abstractions are still doing all of the things under the hood. And you need to be
492
+ [2959.24 --> 2966.76] aware of how they do those to be able to do it gracefully. Most of the use cases and most of the
493
+ [2967.96 --> 2975.40] the way you want to work with infrastructure should be ideally enshrined in how Kubernetes handles this.
494
+ [2976.20 --> 2983.80] But I don't feel like you can just say, okay, but now I don't have to care about this. Still have to care
495
+ [2983.80 --> 2993.48] about sort of updating Linux. You still have to care about how your certs are propagated, or you could
496
+ [2993.48 --> 3001.32] get kicked off of let's encrypt or there's a lot of automation, but it's also very generalized. So
497
+ [3002.20 --> 3009.00] this is a thing where I think Kubernetes ends up being a bit over, I wouldn't say it's over-engineered.
498
+ [3009.00 --> 3015.16] It's a, it's don't repeat yourself taking quite far. And that's the correct move for some cases.
499
+ [3016.12 --> 3022.36] For example, you'll see in enterprise software, things are often very generalized and the software
500
+ [3022.92 --> 3028.28] is generally not that tight to work with. It's, it's usually a little bit annoying and a little bit
501
+ [3028.28 --> 3034.60] too much. And that's sort of the experience I'm, I'm getting from everything I see and hear about
502
+ [3034.60 --> 3039.88] Kubernetes. It, it tries to solve everything and I don't need my everything's solved.
503
+ [3041.88 --> 3047.16] So there is this opposite direction. I can take things in when working with Erlang Elixir and the
504
+ [3047.16 --> 3054.28] Beam, where the Beam, which is meant to handle sort of high availability, high reliability,
505
+ [3054.28 --> 3060.12] concurrent distributed systems. And I can bring all of my application concerns in there. It's like,
506
+ [3060.12 --> 3068.60] do I need an SSH server? Well, they have one. Do I need to talk to DNS? Do I need to do DNS? Yeah,
507
+ [3068.60 --> 3074.76] there's probably something in there for that. And that's, that's a very rare runtime that you can,
508
+ [3075.72 --> 3081.88] that you can lean on to, to do that kind of thing. But let's say, for example, shipping updates to your
509
+ [3081.88 --> 3087.00] app, the Beam can hot code update your app while it's still running without ever taking it down.
510
+ [3088.20 --> 3094.36] That's a little bit trickier to use than a lot of other ways. It's not like bringing your container
511
+ [3094.36 --> 3101.08] down and then bringing up another one, but it's definitely a capacity that's, that's there. And I
512
+ [3101.08 --> 3107.64] think like a Beam application can handle like everything that I need to get done, but also
513
+ [3108.60 --> 3115.96] the 99% case or the 90% case for small products and SaaS. Like if you need a bit of observability,
514
+ [3117.00 --> 3123.32] you have, for example, live dashboard, which gives a baseline of observability with no effort,
515
+ [3123.32 --> 3129.16] or you install something like PromEx and then you need to have Prometheus and Grafana stood up
516
+ [3129.16 --> 3134.60] somewhere. Then you're starting to get a little bit more infrastructure or you use the cloud offerings.
517
+ [3134.60 --> 3140.04] And I think that's sort of always what it boils down to. Like at a certain point, you need more,
518
+ [3140.04 --> 3146.44] more visibility into the details. Okay. At a certain point, you should probably start looking at
519
+ [3146.44 --> 3152.20] installing something to give you that. But Kubernetes is installing all of it at once.
520
+ [3153.08 --> 3158.92] And you have to care about certs. You have to care about the DNS details. You have to care about the
521
+ [3158.92 --> 3165.80] ingress. You have to care about all of it. And I think the, both the barrier and sort of the
522
+ [3165.80 --> 3172.20] maintenance cost of it is something I wouldn't choose to take on in lightly in any project.
523
+ [3173.48 --> 3178.20] Because I think it's too, typically too early for Kubernetes. And I'm thinking it's probably too
524
+ [3178.20 --> 3185.08] early for Kubernetes in most projects before they're like at an international scale. Like if you need
525
+ [3185.08 --> 3193.00] high availability across many regions and time zones, that's probably a good reason to use Kubernetes.
526
+ [3193.00 --> 3200.52] But I also realized like, if you spend a lot of time working with Kubernetes, setting it up might not be
527
+ [3200.52 --> 3210.04] that much effort. I'd rather code a fairly custom sort of deployment setup that I find explicit and simple,
528
+ [3210.04 --> 3219.48] than lean on something I understand so poorly, and which would take me years to have a good grasp of,
529
+ [3220.52 --> 3221.72] which is Kubernetes.
530
+ [3221.72 --> 3227.16] I think there is a lot of, well, okay. So first of all, there's simplicity and complexity,
531
+ [3227.72 --> 3232.44] and the other way around. But in this case, in Kubernetes, it's complex, but it's also simple,
532
+ [3232.92 --> 3238.12] if you look at it from a certain perspective. So things are fairly well defined. Like,
533
+ [3238.12 --> 3242.68] you know what you need to reach out for and how to combine things. And there's like a whole community
534
+ [3242.68 --> 3247.48] around it. There's like so many projects which are solving specific issues. The interface is very
535
+ [3247.48 --> 3253.00] clear. You know how to interact with it. There's an API. It's this single API by which you request
536
+ [3253.00 --> 3258.28] anything, including other VMs, other load balancers. Do you want a SQLite instance with
537
+ [3258.28 --> 3262.76] such and such provider? You can get that. Okay. You have to extend Kubernetes in order to benefit
538
+ [3262.76 --> 3268.76] from these features, but it's possible. And there's only one way that you can do this. And that's very
539
+ [3268.76 --> 3276.84] powerful. I think the separation of concerns, it gets a bit more clear. So anybody just ship us a
540
+ [3276.84 --> 3282.52] container image. It doesn't matter what language you have. It doesn't matter what VM you're running.
541
+ [3282.52 --> 3287.96] Ship us a container image will take care of the rest. Okay. Now I know it's too simplistic,
542
+ [3288.92 --> 3294.84] but it works. Like Heroku, for example, shipping containers, they made it popular. You just git push
543
+ [3294.84 --> 3300.44] and things happen. And guess what? The way the changelog is being developed hasn't changed. You git push
544
+ [3300.44 --> 3306.12] and things happen behind the scenes. And because that contract has never been broken with the
545
+ [3306.12 --> 3313.80] developers, everybody's happy. Yeah. Jerod would be pissed if he had to SSH into the servers,
546
+ [3313.80 --> 3318.36] to set things up. There you go. Yeah. That's no good. Yeah. Do you really care about like which OS
547
+ [3318.36 --> 3323.48] you're running? No, you don't. Do you want to switch Erlang versions? Super easy. Guess what? All you have to do is
548
+ [3323.48 --> 3329.24] change the container. Hot code reloading? Yes, you can do it. It's hard. Maybe you don't need to.
549
+ [3329.24 --> 3335.24] And again, it doesn't matter whether you use Erlang or Elixir or Ruby or Python or Go. It really doesn't
550
+ [3335.24 --> 3340.68] matter. Do you want to use serverless? Well, guess what? You have all these projects which you can set
551
+ [3340.68 --> 3347.08] up and you can run it on in the same context. And the list goes on and on. I mean, it's really,
552
+ [3347.08 --> 3354.68] it just goes forever. And it's not like I have used Chef for many years. I was G Chef at one point,
553
+ [3354.68 --> 3360.12] Gerhard Chef. That's even an org on GitHub. So I spent like a fair time with knife when that
554
+ [3360.12 --> 3365.12] was a thing. I don't think many people were using it, because Chef Server was so difficult. I was there.
555
+ [3365.24 --> 3372.40] I remember that period. Ansible, I loved it when it was a thing. Certain things were difficult with it,
556
+ [3372.40 --> 3381.38] but it was saner than Chef. Is Kubernetes saner than Ansible? I don't know. For us,
557
+ [3381.58 --> 3386.14] it felt like the next evolution. You're right. There is a learning curve. Like Vim, there will be,
558
+ [3386.24 --> 3393.76] or Emacs. Kubernetes is definitely a big step in some direction from Ansible. It's not just the next
559
+ [3393.76 --> 3399.12] sort of iteration on scripting your servers. That's not what it is. It's something different.
560
+ [3399.12 --> 3404.58] And you did ask me for a sort of hot take that you could put as the title on this. And I think,
561
+ [3404.92 --> 3410.16] like, would it be fair to say Kubernetes is the Electron of operations?
562
+ [3410.54 --> 3415.92] It's the Electron. Oh, okay. Wow. I think people are like, what is Electron? That would be the
563
+ [3415.92 --> 3417.00] first thing I would ask. What?
564
+ [3417.52 --> 3423.28] What is Electron? Which electron do you mean? Do you mean the physical one or the JavaScript Electron?
565
+ [3423.28 --> 3432.32] Oh, you're like, ooh, physics. No, I mean, yeah. I mean, in that it makes operations at the outset,
566
+ [3432.76 --> 3442.68] a lot simpler, but it also paves over everything that you could get right in the details. I feel like,
567
+ [3443.08 --> 3449.34] I think you have access to every little detail you would need in Kubernetes, but it doesn't
568
+ [3449.34 --> 3456.08] particularly seem to encourage you getting into all the details. So whenever you add abstraction
569
+ [3456.08 --> 3464.90] layers, and I think that's sort of my, my hesitance on adding more tools, especially tools that sit on
570
+ [3464.90 --> 3475.02] top and sort of obscure what's going on is that I've come to rely on explicit things, because if you can
571
+ [3475.02 --> 3481.18] just read the code and see what it's going to do, that's, that's very powerful. I mean, it's not
572
+ [3481.18 --> 3488.52] declarative. People like declarative for particular things and declarative can be nice, but it also
573
+ [3488.52 --> 3497.90] doesn't make it clear like A to B to C what is going on, what's being done. And for most server installs,
574
+ [3497.90 --> 3500.08] they don't have to be very complicated. Yeah.
575
+ [3500.08 --> 3505.42] And if it doesn't have to be very complicated and there's not a lot of complexity to manage,
576
+ [3505.72 --> 3510.70] if you bring in a larger abstraction layer, which is supposed to hide a lot of complexity and
577
+ [3510.70 --> 3518.42] make managing very complex things possible, which I think is, is a fair, fair thing to say about
578
+ [3518.42 --> 3524.32] Kubernetes. It seems to make it possible to manage very, very complex things. If you bring that into a
579
+ [3524.32 --> 3529.08] fairly, already fairly simple thing, I think you're shooting yourself in the foot.
580
+ [3529.08 --> 3536.22] But, but it's, it also depends on what tools are you comfortable with? Like you've spent years and
581
+ [3536.22 --> 3539.60] years deeply immersed in, in ops and like.
582
+ [3539.60 --> 3541.82] Try decades, but yes, I agree.
583
+ [3543.42 --> 3543.94] Yeah.
584
+ [3544.46 --> 3551.84] I've spent much more time building the actual applications. I spent a fair bit of time on servers
585
+ [3551.84 --> 3559.68] and operations, but not nearly the majority of my time because I care much more about,
586
+ [3559.98 --> 3567.26] about the building of the thing. And I consider the operations and a part of what I do. I don't
587
+ [3567.26 --> 3573.16] want to hand off a container particularly to, to operations and just guess how it's going to be run.
588
+ [3573.16 --> 3594.58] I see there's a lot of - I don't want to call it full stack, maybe end-to-end stack. I want to care about the whole, and I'd have no idea what's going on in half of the whole. If I bring in a cool tool like Kubernetes, I definitely would use it, and I would learn it.
589
+ [3594.58 --> 3616.22] If I saw that I definitely had the need, if I was going to run hundreds and hundreds of instances or, or scale across continents. Yeah. It probably makes sense to bring in something that lets me take that, that overview, that like 10,000 miles view of the world.
590
+ [3616.22 --> 3630.54] And then like, Oh yeah, we have decent performance in Asia. Oh, we're dropping performance in Antarctica. Like, but that's typically not where I operate. And it's typically not what I go for first.
591
+ [3630.54 --> 3659.98] And on that thought, thank you, Lars, very much for joining me. This was a pleasure. I do realize that we have so much more to talk about. Dev and Ops talking finally. I think for decades, we tried to do that and it's finally happening. We have respect for each other. We know that each context is difficult, challenging, but worth exploring. And I don't think we should be just shoving code across the fence. Like, here you go.
592
+ [3659.98 --> 3686.64] You run this, figure out what to do with it. I think it's nicer when we agree on what the abstractions should be. Everybody benefits. And when things go wrong, because they will go wrong, people know what to do. And it's not a reactive approach. It's like a planned, you know, we kind of know what, what we need to do. So I'm really excited about that world. Thank you very much for joining me. This was a pleasure, Lars. I look forward to seeing you next time and talking to you next time, whenever that may be. Hopefully soon.
593
+ [3686.98 --> 3688.86] Happy to come back. Thanks for having me.
594
+ [3689.98 --> 3719.96] Happy to come back.
595
+ [3720.32 --> 3725.12] Come hang with us in Slack. There are no imposters. Everyone is welcome.
596
+ [3725.72 --> 3730.00] Huge thanks again to our partners. Fastly, LaunchDarkly and Minout.
597
+ [3730.34 --> 3735.04] Also, thanks to Breakmaster Cylinder for making all our awesome beats.
598
+ [3735.50 --> 3737.76] That's it for this week. See you next week.
599
+ [3737.76 --> 3767.74] Thank you.
600
+ [3767.76 --> 3797.74] Thank you.
🎄 Merry Shipmas 🎁_transcript.txt ADDED
@@ -0,0 +1,683 @@
1
+ **Gerhard Lazu:** The first present this Christmas is a CI/CD LEGO set that Changelog.com is already using for production. The entire story, including code and screenshots, is available in our GitHub repository; see pull request \#395. Our new pipeline gets code to prod at least twice as fast as before, and you can see it running in GitHub Actions.
2
+
3
+ Since we recorded this, we made it over a minute quicker, which is a big deal when everything used to take less than five minutes in total. And there is one more pull request open, which will improve it even more. Check the name of the person that opened PR \#401. If you like the CUE language and understand the potential of directed acyclic graphs for pipelines, this present will take your CI/CD to a whole new level.
4
+
5
+ So in episode \#23 we were talking to Sam and Solomon about this new universal deployment engine called Dagger - that's how it was introduced - and one of the things which I mentioned towards the end is that I would like to make it part of the Changelog infrastructure. So - hi, Joel, hi Guillaume.
6
+
7
+ **Joel Longtine:** Hello.
8
+
9
+ **Guillaume de Rouville:** Hi.
10
+
11
+ **Gerhard Lazu:** How are you doing today?
12
+
13
+ **Joel Longtine:** Good. Excited to be here with you.
14
+
15
+ **Guillaume de Rouville:** Yeah, same for me.
16
+
17
+ **Gerhard Lazu:** How was it for you to work on this? ...because we didn't have a lot of time, really; we tried to squeeze it around all sorts of things... What was it like the last month working on this? Tell us about it.
18
+
19
+ **Joel Longtine:** For me, it was fun. It gave me an opportunity to dig into Dagger, in the tool, and the way that we use it, more than I had thus far. I'm relatively new to Dagger, so this was part of my learning about how our system actually works... And it was fun to kind of begin to grok how we use CUE, how we use BuildKit, and how the layers and different file system states work together in those contexts.
20
+
21
+ It was also fun to work with you and Guillaume and try to figure out how to replicate what you've done in CircleCI inside of Dagger... Like you said, in part so that we could actually transition it over to GitHub Actions, or wherever else you wanted to run it.
22
+
23
+ **Gerhard Lazu:** Right back at you. What about you, Guillaume?
24
+
25
+ **Guillaume de Rouville:** \[03:58\] For me, it was really fun working with you. One of the things that maybe gave me some headaches is that I didn't know CircleCI, and it's quite interesting, because as I was helping you... I know Dagger, I don't know this technology, so to help you port it, I had to learn a lot of things, mix and then create, and we encountered a lot of issues along the way, and in order to tweak them, to fix them, you need to properly understand what you're doing... Because your config at the moment, the CircleCI one, is quite a big one, and in order to port it, we needed to understand it properly. But it was a lot of fun.
26
+
27
+ **Gerhard Lazu:** That is actually my key takeaway as well. I wasn't expecting to learn as much. I was hoping, but I wasn't expecting it... And then with you two - it was great; we went on such a journey... And I think what helped is that we didn't have a lot of time, but we had long gaps between us working together. So maybe it was like a couple of days, and then we got together for like half an hour or an hour. Joel, you're in Colorado, and Guillaume was in Paris, so he's like an hour ahead of me. I think that really helped, because in a way we found a pace, and then we just bounced ideas off one another, and we bridged that gap really nicely, I think.
28
+
29
+ **Joel Longtine:** Yeah, I think that's one of the things that I got out of this too, was just seeing where we are now, and what's possible with Dagger today, and some of the difficulties that we currently have... Interesting interactions between CUE, and BuildKit, and how we're interpreting that CUE and applying it to BuildKit states... And then kind of what we're doing with this new release, just what I'm seeing as being possible in that context, and just how much more intuitive and powerful it's going to be.
30
+
31
+ So that was part of what was fun for me, was learning what our current state is, while learning where we're headed, and seeing where that delta is actually gonna be an immense improvement in the tool.
32
+
33
+ **Gerhard Lazu:** So what does the new pipeline look like? We get and compile the dependencies, and we do this in parallel. So we do tests, and we do prod. The tests - we need to compile them, then we use a cache, and this is something to do with the volumes, like to copy all the layers. We don't need to go into too much detail, but it's BuildKit and CUE working together, and then we run the tests. Before we can run the tests, we need to start the test database. It's an ephemeral one, it's just a container, PostgreSQL, because the tests are integration tests, some of them, so they need the database... And then we stop the database when the tests finish running.
34
+
35
+ Now, in parallel, we resolve the assets. These are like the CSS, the JavaScript, all that in development... It's like a step towards production. Then we digest them, and that is one of the inputs to the production image.
36
+
37
+ On the right-hand side we have to compile the dependencies for production. We have the same caching mechanism, and this is like -- it's a necessary step based on the current version of Dagger, which by the way, this is something which will improve. And how do I know that? Well, Joel has been telling me all about it and he's been very excited to work on that. Maybe you wanna mention a little bit about that...
38
+
39
+ **Joel Longtine:** Yeah, so we're basically improving the developer experience around the low-level interactions that Dagger has with BuildKit. So we're basically changing the API to BuildKit. Right now we have kind of an implicit, kind of spread all over the place API to BuildKit, instead of our CUE packages. And the changes that were in the process of building out actually make that API much more explicit, and kind of form like a low-level representation of the BuildKit API within CUE... Which then can be used by our packages, or other packages, to interact with BuildKit, the various file system states and actions on those file systems as well.
40
+
41
+ \[08:07\] So yeah, I think this is gonna get a lot better. We'll be able to actually use some of the features that we weren't able to use this time around, of BuildKit, like mounting volumes in a much cleaner way.
42
+
43
+ **Gerhard Lazu:** Okay. And then when that is done, the last step is obviously assemble the image and push it to Docker Hub. The one step which we don't have here, and we would want, is to git commit the digest of the image that was deployed, so that we can do like a proper GitOps way, so that rather than our production pulling the latest - and you know, there's a couple of issues around that; I won't go into them, but I know we have to improve that... We would like Dagger in this case to make that Git commit. And I say Dagger, but now I realize it could just be GitHub Actions. And why do I say that? Part of this pull request we did the integration with GitHub Actions, and we'll get to that in a minute. But first of all, I would like to show what the new pipeline looks like, and what makes it better. So what are these green items here, Guillaume? How would you describe these? What are they?
44
+
45
+ **Guillaume de Rouville:** These are -- I think it's actions. An action represents a step, so in general, it lies inside a definition in Dagger... So how do you build a Dagger pipeline? You just assemble actions all together. And at run time we built a dag that's a little above. That's how you have parallel dependency builds.
46
+
47
+ **Gerhard Lazu:** Okay. What is an action? If you had to describe it, Joel, how would you describe an action?
48
+
49
+ **Joel Longtine:** An action typically would be like a collection of BuildKit steps. So the people familiar with Docker - a specific command within a Docker file, like a copy, or an exec, an env - those sorts of things... They basically represent a stage within BuildKit. And typically, one of these actions is gonna be a set of those steps. So it might be a number of runs within a container exec-ing a shell script, or something along those lines, and then getting the resulting file system state.
50
+
51
+ **Gerhard Lazu:** They all run in the context of a container, right?
52
+
53
+ **Joel Longtine:** Yes.
54
+
55
+ **Gerhard Lazu:** So when you think of a step, there's a container which gets created, that step runs, and there's some inputs and outputs for that one step, is that correct?
56
+
57
+ **Joel Longtine:** Yeah, that's a great way of describing it. So you have the set of inputs - that could be a file system state, that could be a volume mount, it could be secret mounts as well... This is something that's a piece of BuildKit and some of the new features that docker build got as a result of BuildKit.
58
+
59
+ So you have all these inputs coming into this node, which is that file system state plus some action... And then something results from that. If you're doing an echo hello to world.txt, then that new file system state has that new file on top of it.
60
+
61
+ **Gerhard Lazu:** Right, yeah. So if you can see here, these steps - I mean, there's no cache, right? If you remember the Docker file, if you think about that, and how some of those commands could be cached, and then they're really quick... For example, the app image - you can think of it almost like a command in the Docker file. So that is cached and it takes 0.9 seconds. It just has to verify where it is in the cache. Now, these run in parallel, and we'll do a run for you to see what they look like. But this whole pipeline as a whole, even though it looks flat, it runs in parallel, and it takes 190 seconds. So it's a slight improvement over the 3 minutes and 38 seconds which we had here. But you have to realize that these 3 minutes and 38 seconds will always be just that. I mean, this is using caching. But Dagger, if it does use the caching, if everything is cached, if you don't have to compile anything, it just has to run the tests themselves - it's five times quicker. That is a huge speed-up. So this pipeline run, all of it, took 45 seconds. And the test took the longest, 42 seconds... Versus 3 minutes and 38. So - much, much quicker.
62
+
63
+ \[12:14\] And by the way, this will run against any Docker daemon. That's the only requirement. You need BuildKit, and the easiest way of getting BuildKit is just in your Docker. It already has it. So there's no special CI setup required; you can run this anywhere and it runs the same way, whether that's GitHub Actions, CircleCI, or your local machine, which is really cool.
64
+
65
+ The other very cool feature is that open tracing is built in. So what that means is that you can see what does the span look like for a cached run, versus an uncached run. And all you have to do is to run Jaeger, and have an environment variable. By the way - all this code, all the integration is here. So if you look at pull request 395, you can see all of it.
66
+
67
+ So what we're seeing here is that this cached run - we can see it compiling the dependencies, and you can see that some of these steps run in parallel. So deps compiled prod is still running, while the test cache already started here. Same thing - image prod starts here, assets dev, so on and so forth... And tests - the tests are running, and we are already... We started building the production image. And that is the beauty of the pipeline. You want to run as many things as you can in parallel - it's like optimistic branching, as it's called in CPUs... And then when you get to the end of it, it's just like the last step. You assume that everything will just work, and that's what will make it really quick.
68
+
69
+ So you can see what a cached run looks like. You can see that all these steps are really, really quick, the tests take the longest, and all in all, we're done in 47 seconds... 46.98. Let's be precise.
70
+
71
+ **Joel Longtine:** I think that's one of the beautiful things about Dagger and our use of BuildKit too, is that because we're describing at a very fine-grained level the relationships between these relatively fine-grained steps that might be within the context of an action, we can run many of those in parallel. So if you need to go run a bunch of things along, say, the assets pipeline, you can do that at the same time that you're doing stuff with Mix, and then like you said, you're basically waiting for both of those things to be done, because those are inputs to some next stage. And you can imagine much more complicated versions of this as well, where you're going and building a ton of microservices in parallel.
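
(To make the parallelism Joel describes a bit more concrete, here is a plain-CUE sketch - the names are invented for illustration, this is not the actual changelog.com pipeline. Because `image` references the outputs of both `deps` and `assets`, it can only run once they are done; since `deps` and `assets` don't reference each other, they can run at the same time.)

```cue
// Illustrative only - not the real ci.cue. The dependencies are just CUE
// references, and the DAG (and therefore the parallelism) falls out of them.
deps: {
	// ...compile the Elixir dependencies...
	output: string
}

assets: {
	// ...build the CSS and JavaScript...
	output: string
}

image: {
	// needs both results, so it runs last
	from: deps.output
	with: assets.output
}
```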
72
+
73
+ **Gerhard Lazu:** Yeah, exactly. What about the GitHub Actions integration? Well, this is a screenshot, this is what it looks like. We wanted to do like a point in time, and we can see how much quicker this is... But I would like to talk about this, and maybe Guillaume can run us through it, what does this look like. This is a GitHub Actions config. So what can you tell us about it, Guillaume? How do you read this?
74
+
75
+ **Guillaume de Rouville:** So it's like a normal GitHub Action. What I see here - you have environment variables, so a Docker host, the OTEL one for the Jaeger endpoint, and then you have a job, only one job, which is named ci. It runs on Ubuntu, so you just check out the code for the context of the changes. Then you use basically Dagger here...
76
+
77
+ **Gerhard Lazu:** A Dagger action, yeah...
78
+
79
+ **Guillaume de Rouville:** A Dagger action, exactly... And then you configure the Tailscale tunnel, I think it's for you, I believe...
80
+
81
+ **Gerhard Lazu:** Yeah. Because this Docker is remote, that's right. And it's the same Docker host which I use locally. I don't run Docker locally, I just have a Tailscale tunnel, which connects me to that host. And it's the same host that the CI uses. Now, there's an improvement to be made there, and we'll get to that maybe at the end... But yeah, it's the same one. So if it runs locally, it will run exactly the same in the CI, and that's really cool, I think. And then what about the last step?
82
+
83
+ **Guillaume de Rouville:** Basically, it's the step you do when you run it locally. You just do a dagger up, and I presume you have specified an input, which is a local folder, so you don't have to specify it.
84
+
85
+ **Gerhard Lazu:** \[16:07\] That's right. So if you don't see the glue code, so the dagger op - you're right, it's just a step, which already takes some values that have been preconfigured. So those values are committed, including the secrets, by the way. Dagger is using this really cool thing called SOPS - you may have heard of it, from Mozilla - to encrypt all the secrets. So we have to set, in terms of a secret, the age key for them to be able to be decrypted - if you think of it, like the private key... And yeah, everything just works. So we commit secrets, right; we're crazy, I know... No, actually, it works really well. It's a done thing. And I waited for a long time to do this, and I'm really excited.
86
+
87
+ So this is what the glue code looks like locally. So it's basically what puts everything together. It is a makefile, that's what we use. It just makes things easier. It just runs a bunch of commands. And what I would like to point out is, for example, the new CI package, it declares a new CI... This is a plan, is that right? Is it called a plan, or is it an environment?
88
+
89
+ **Joel Longtine:** It's an environment in the current version, and we're transitioning to the name plan, or a dag, potentially.
90
+
91
+ **Gerhard Lazu:** Right. Oh, that's a good one. Dag this. It asks me to enter my username... This will be stored encrypted, by the way, because -- no, this was to be stored as text; nothing secret, it's Gerhard, you already guessed it... And then it asks me for my Docker Hub password, so that it can push the image. These are stored encrypted, using SOPS, locally. And then there's a couple of things here, we'll skip over them... And then we provide the inputs. Those inputs are important because that's what the environment, or the plan, or the dag, as Joel mentioned, calls it. So we have the app, which is basically the whole source code... There's a couple of things that we need from the environment, like the Git SHA, the Git author... Hm. I don't think I fixed those; I need to fix them. Okay, this is something which still needs to improve; I just realized, going through this now. See? It's so good we're doing these things. So helpful.
92
+
93
+ Cool. So then the Docker host, which is the remote one, it knows how to connect to it... And then it runs the same command that you've seen in GitHub Actions. Docker -- sorry. Docker... That's like a Freudian slip. Dagger - Dagger Up, log-level debug, environment CI. And that's exactly the same thing.
94
+
95
+ The other part of this is obviously the ci.cue. And this is like all the code that actually declares the pipeline. And what is this ci.cue? How would you describe it?
96
+
97
+ **Joel Longtine:** It's basically the description of those various stages that we were describing earlier. So there's the app image, you had the test container, or the test db container definition prior to that... And then -- let's see. This is some of the -- so deps is basically kind of helping copy the actual application, which is all the changelog.com source code, into a container... And then we have just some CUE variable, or CUE fields, in essence, that help us store some information about how we wanna be mounting these dependency caches and build caches...
98
+
99
+ **Gerhard Lazu:** Yup.
100
+
101
+ **Joel Longtine:** We also do the same thing for Node modules... And then this \#depscompile is - we're using that basically as a way to describe a kind of structure, that we're then going to apply in a few other places. So you can see deps compiled test actually uses that definition and specializes it with args mix and tests. And we do the same thing with dev and prod, if I remember correctly.
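
(As a rough sketch of the pattern Joel is describing - these are not the actual changelog.com definitions, the names and args are illustrative - a CUE definition acts as a template that gets specialized by unifying it with extra fields. For example:)

```cue
// A reusable definition with the common shape...
#DepsCompile: {
	mix_env: string
	args: [...string]
	// ...the shared steps that mount the deps/build caches would live here
}

// ...specialized per environment by unification:
depsCompiledTest: #DepsCompile & {
	mix_env: "test"
	args: ["mix", "test"]
}

depsCompiledProd: #DepsCompile & {
	mix_env: "prod"
	args: ["mix", "compile"]
}
```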
102
+
103
+ **Gerhard Lazu:** Yeah, deps compiled - the name is right down here... so the only difference, you're right, is the MIX_ENV; the same definition as deps compiled, with something changed - actually, added... Because it appends stuff to it. Okay. And what is CUE?
104
+
105
+ **Guillaume de Rouville:** \[20:05\] CUE is a configuration language. It aims to be a better JSON and a better YAML. It stands for Configure, Unify and Execute. Basically, I think Joel will be able to continue after that... \[laughs\]
106
+
107
+ **Joel Longtine:** Yeah, so like Guillaume said, it's a configuration language, and one of the things that I think is really lovely about CUE is schema definition, data validation, and it basically allows you to create configurations that have types, so they can be type-checked, preferably, before you get to prod... \[laughs\] And that actually is true; it's how it works.
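
(For readers who have not seen CUE before, here is a tiny illustrative example - the values are made up - of what "configuration with types" looks like. Plain JSON-style data and the constraints on it live in the same file.)

```cue
config: {
	// a concrete value, just like JSON or YAML...
	port: 8080
	// ...an allowed set of values with a default (marked by *)...
	logLevel: *"info" | "debug" | "warn"
	// ...and a constraint instead of a value
	replicas: int & >=1
}
```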
108
+
109
+ I personally love that it is not white space-dependent, like YAML is. I've been bit so many times by that, with Helm and other various tools; Ansible comes to mind, too... There's lovely things about those tools, and I've found myself bitten by that bug and a number of them.
110
+
111
+ **Gerhard Lazu:** That's why YAML vaccine resonated with you, right? When I mentioned it, this is exactly what I meant, because you had the bug multiple times, and damn it, it's not fun.
112
+
113
+ **Joel Longtine:** Yeah. I've had production deploys fail because an engineer added an environment variable and used tabs instead of spaces in a Helm chart. I prefer not having those sorts of -- those sorts of problems are avoidable, and CUE is a really powerful tool for doing that.
114
+
115
+ Just to kind of dig into the schema definition stuff a little bit deeper, because I think it's useful to understand... You can basically define the shape of a particular configuration, including constraints on different fields. A good example of this might be like a Kubernetes deployment. So you can have a Kubernetes deployment with your API version, your kind deployment, and then you can, for instance, say, set the CPU field, and actually set a constraint on that. You can set an upper bound, and a lower bound etc.
116
+
117
+ And then when any configuration from a developer or an SRE comes into that, if it doesn't match that specification, then the compile of the CUE will fail. So it will allow you to fail at a much earlier stage, potentially even on a developer's local machine, rather than once it gets to production.
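
(A minimal sketch of the Kubernetes-deployment example Joel mentions - the field names and bounds are invented for illustration, and a real Deployment schema is much bigger. The schema carries the lower and upper bounds, and `cue vet` fails on the developer's machine long before the config reaches a cluster.)

```cue
#Deployment: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	replicas:   int & >=1 & <=10       // lower and upper bound
	cpu:        string & =~"^[0-9]+m$" // e.g. "500m"
}

// unifies cleanly with the schema:
good: #Deployment & {replicas: 3, cpu: "500m"}

// would fail at CUE compile/vet time, before any deploy:
// bad: #Deployment & {replicas: 50, cpu: "lots"}
```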
118
+
119
+ **Guillaume de Rouville:** That's exactly what we've been using. We've developed a serverless package to easily deploy serverless functions on AWS. That's basically what we used. So it's kind of useful... Sometimes the names have forbidden characters, and we just use these validations to fail early.
120
+
121
+ **Gerhard Lazu:** Yeah. Okay. So yeah, there's a lot to explore here. I really, really like CUE, I have to say; there's so many great things about it... And it makes not having the right inputs, not having the right values - it just really helps. The compiler errors for CUE were really good, and they steer you in the right direction. And with Vim, there's a good plugin which kind of works; I can share it in the show notes... But it's good; it's much better than not having it. I'm sure that that can improve as well.
122
+
123
+ **Joel Longtine:** There's some rumblings in the CUE community around creating a language server as well.
124
+
125
+ **Gerhard Lazu:** Ooh... Wow. An LSP. I would love that. I would love that. Okay, right. So I'll definitely want to watch, for sure. So what comes next?
126
+
127
+ **Joel Longtine:** \[23:59\] I think one thing that occurs to me - at least as far as I remember, this is currently still using the docker build. So you're actually pushing out the contents of a bunch of those steps to the Docker Engine to actually then build the image... And with Europa and some of the improvements there, that should not be necessary. You should be able to just take the output of one of these stages and just add the information that you want on top of it, and be off to the races, and then be able to push that directly. Because right now what's happening is a bunch of the context is still having to be pushed from within BuildKit to Docker Engine, so that it can build the image. And that will not be necessary with some of the new Europa stuff.
128
+
129
+ **Gerhard Lazu:** Interesting. Okay, it sounds great. Anything to add, Guillaume, to that, or something else?
130
+
131
+ **Guillaume de Rouville:** Yeah, I think that with Europa, as Joel mentioned earlier, the DX will be far better. What we're trying to do at the moment - if the people watch the PR with Europa, it will be normal.
132
+
133
+ **Gerhard Lazu:** I think -- yeah, that makes a lot of sense. Europa will make this a lot simpler. And while we had to jump through a couple of hoops, it just made it obvious they shouldn't be there... So I'm really excited to adapt this to that new way. That will be great. And to see what improvements we can get. Because at the end of the day, that's what you care about, right? This looks not as good as it could; I mean, it works, and that's what you care about, make it work, make it right... I think that's what's happening; now we're making it right, and then we're making it fast. I'm very excited about that.
134
+
135
+ Okay. Well, I'm gonna wish you both a Merry Christmas, even though this is weeks before Christmas... But by the time listeners will be listening to this, it'll be Christmas... And a happy new year!
136
+
137
+ **Joel Longtine:** Same to you, Gerhard. Thanks. It's been a lot of fun to work with you and Guillaume on this. It's been a nice opportunity to get to know you, and get to know Guillaume as well. Like you mentioned, I live in kind of the Boulder/Denver area, and Guillaume lives in France... And it was a good opportunity to bump into each other more regularly.
138
+
139
+ **Gerhard Lazu:** Definitely. Right back at you again. Same for me, so I'm glad that this worked the way it did. I also had a lot of fun. Thank you very much.
140
+
141
+ **Joel Longtine:** Thank you.
142
+
143
+ **Guillaume de Rouville:** Yeah, thank you.
144
+
145
+ **Gerhard Lazu:** Our second present to you this Christmas is sharing my way of understanding CPU time used by Kubernetes workloads. Think near real-time Flame Graphs, as well as being able to compare CPU profiles for the same process, at different points in time. If you're familiar with Brendan Gregg's book Systems Performance, this goes really well with it. So why is this a big deal? And why was it more difficult to do this in the past? I know just the right person to unwrap this present with.
146
+
147
+ **Frederic Branczyk:** Let me talk a little bit about why that's interesting and why that's useful. So profiling has kind of been in the developer toolbox ever since software engineering has existed, because we always needed to know "Why is my program executing, and how is it executing the way it is?"
148
+
149
+ So profiling has been around for a very long time. It's essentially us recording what the program is doing. You can literally think of it as we're recording the stack traces that are happening 100 times per second. That has kind of evolved over the years. Profiling used to be a very expensive operation to do, which is why you only did it when you really needed to. So one thing that kind of changed the perspective was when we discovered sampling profiling. In the olden days, the way that profiling worked is that we literally recorded everything that was happening in our program. And naturally, that's really expensive.
150
+
151
+ \[28:10\] Sampling profiling kind of go a different strategy and say "Actually, we only need something that's statistically significant." So instead of recording everything that's happening, as I said earlier, we only look at the stack traces a hundred times per second. And that, we can do incredibly efficiently.
152
+
153
+ The reason why this is super-useful and why being able to record stack traces with statistical significance is useful is that now we can say "This is where my program is spending time." So that can be used to save money on your infrastructure, but also there are a lot of optimizations that you can only do if you have that type of depth of data to analyze. So you can actually down to the line number tell what is using your CPU resources.
154
+
155
+ One really cool conversation that I had yesterday - this perfectly translates in the serverless world, where you actually pay for basically every single CPU cycle that your serverless function is running, and any CPU second that you can cut off from that is money you're saving from your serverless bill. I think that's a really obvious value proposition. Because we simply have this data and we're recording it always, we can actually reliably tell where we can optimize our code.
156
+
157
+ **Gerhard Lazu:** So out of these three things - saving money (very important for some), improving performance... I love that. Shipping code fast - great. Making it better and improving it - I love that. And when things go wrong, understanding what exactly went wrong. What CPU, what disk, what network, where is the bottleneck from a system perspective, as well as obviously from like if you have microservices, between microservices. So Parka helps us understand from a CPU perspective where is the time spent. In the current implementation, in the current version, that's what it tells us really, really well. So how about we try it out?
158
+
159
+ We're going to run it in our production Kubernetes setup. Just like that. Why not? Create a namespace, apply the server, and apply the agent. And as I do this in the background, what is the difference, Frederic, between the server and the agent?
160
+
161
+ **Frederic Branczyk:** The server is essentially the component that allows you to store and query profiling data, while the agent - the one and only purpose of the agent is to capture this data from your applications at super-low overhead. And one of the really exciting technologies that we're using here is eBPF. So because we know exactly what the format is that we're gonna want this type of data in, we can in kernel, without having to spend all of this overhead of doing context switches from kernel space to user space, we can immediately record the stack traces in kernel, and present it to Parka Agent, and then Parka Agent - it does some resorting in the data, but essentially it just sends that off to Parka. And then from Parka you can actually visualize it.
162
+
163
+ **Gerhard Lazu:** Okay. So we have the server and the agent... So let's port forward to the server, to the UI, and in our browser, local host 7070 - let's see what that looks like.
164
+
165
+ **Frederic Branczyk:** One thing that I think is really important to mention - everything revolves around the pprof standard. This is kind of an industry standard format for profiling data. So everything produces or works with pprof format. So you could send any kind of profile, like memory profiles that have been captured through some other mechanism, to Parka, and analyze that as well. It's just that the agent today can only produce CPU profiles and continuously send those.
166
+
167
+ \[32:12\] The agent actually also produces pprof-compatible profiles, and maybe we can have a look at that later. The server ingests those, and then one additional really cool feature, I think, is any query that you do in the Parka frontend, you can download again in pprof format. And if you have any other sort of tooling around the pprof format, you can still use them and compose your workflows.
168
+
169
+ **Gerhard Lazu:** Okay. We are on the server, looking at all the CPU profiles. This is the profile coming from container Parka. How do we read this? There's a CPU sample, we can see the root, that's the root's span... What about all the other spans? What are these?
170
+
171
+ **Frederic Branczyk:** This is what's called a Flame Graph, and every span that we're seeing here represents how much this span, as well as all of its children make up in cumulative. That's actually what the frontend also says - the cumulative value.
172
+
173
+ **Gerhard Lazu:** Right.
174
+
175
+ **Frederic Branczyk:** And essentially, we're saying "Everything from this point onwards and further down, uses up --" In this case you're hovering over one that says 11%. So, for example, we can see here in the middle, runtime.greyObject, for example. If we were able to optimize that greyObject function, for example, and say -- for whatever reason we're able to optimize 100% of it away, we would actually be saving 15% of our CPU resources here.
176
+
177
+ In this case, you actually clicked a particularly interesting sample, because we can see in our metrics above that we have these spikes every now and then, and we can very clearly see what it is that is causing this spike in this profile; we can see that it's garbage collection. A very classic thing, that can use a lot of CPU resources.
178
+
179
+ **Gerhard Lazu:** Right. So this is garbage collection that happens in
180
+
181
+ **Frederic Branczyk:** Right.
182
+
183
+ **Gerhard Lazu:** Okay. So why does this garbage collection happen?
184
+
185
+ **Frederic Branczyk:** Because of how Go works, you allocate objects in memory, and when you don't use them anymore, eventually the runtime will come around and see that this piece of memory is not in use anymore, and kind of free that memory to the operating system, so that anybody on the machine can use it.
186
+
187
+ And in this case, essentially, what we're seeing because we have such a huge spike, that's telling us Parka is doing a lot of allocations, it's allocating a lot of memory, that then consequently is kind of thrown away and can be garbage-collected. So it seems like there's probably some potential in optimizing allocations here.
188
+
189
+ That said, having allocations is not a bad thing, because at the end of the day I can write a program that does absolutely nothing, and does no app allocations, but that's also not useful. Producing a side effect - it's one of those things that as software engineers we try to not produce a side effect; but as it turns out a side effect tends to be the thing that's actually useful in the real world.
190
+
191
+ **Gerhard Lazu:** That's when real work happens, right? These spikes are an artifact of real work happening. And if I had to guess, without knowing too much - I mean, what Parka does behind the scenes, but not knowing all the details... I think that this is related to all those profiles being maybe read, being symbolized, or something happens in the background. So it reads a profile, builds whatever data structures it needs to build to get an output, and when that output/result is achieved, then all the intermediary objects can be garbage-collected. I think that's what's happening here.
192
+
193
+ **Frederic Branczyk:** The two major things are definitely what you already mentioned. Symbolization, because this happens asynchronously, as you have uploaded your profiling data... And then it's actually ingesting and writing that profiling data to its storage. This is something that, because we're doing continuous profiling, it happens continuously. And every network request that we receive causes memory allocations. Because we read that from the network stack, and that causes memory allocations.
194
+
195
+ \[36:11\] Now, there are a number of optimizations that can be done to reduce this, and you can reuse buffers, and stuff like that... And we'll get to all of that, but it's unlikely that we'll ever get to, you know, zero. But there's definitely lots of optimization potential here.
196
+
197
+ **Gerhard Lazu:** Okay. I do have to say, looking at this Flame Graph, it's really amazing. If you remember how difficult this used to be in the past, where you had to generate a pprof, and then use that pprof, or something similar that can read that profile, to get at this Flame Graph, and then try and slice and dice... Now, if I don't want this Flame Graph, I want a different one, I just click on it, and there we go. Database, Postgres... Let's see, what do we get from Postgres? Okay. So this is slightly a different view. This is a machine-compiled binary, right?
198
+
199
+ **Frederic Branczyk:** Right.
200
+
201
+ **Gerhard Lazu:** So why do we see only these numbers? What are those numbers, first of all?
202
+
203
+ **Frederic Branczyk:** Yeah, that's a really good question. So these are the raw memory addresses that we obtained from the agent. And the reason why we're only seeing memory addresses is because most of the time when you install a package from -- let's say a Debian package, or something like that... By default, these packages are distributed without debug information. So they were intentionally removed from those binaries to reduce the size of the binary. Sometimes it can also have a performance impact, but usually it's just for size optimization.
204
+
205
+ In the case of Debian, for example, if you still want those debug symbols, the convention is that you -- let's say "apt-get postgres", the convention then is the package name is -dbgsym (debug symbols), and that downloads the debug symbols as a separate package, which can then again be picked up by the Parka Agent as well.
206
+
207
+ But in this case we didn't have any debug information available, and so - yeah, this particular Postgres binary is stripped, and so it does not have this debug information. That said, there is a really cool project called Debuginfod, where the distributions have come together and they're hosting these servers where using this build ID you can request the debug symbols on-demand.
208
+
209
+ This is great news, because it means that you don't have to install these debug packages manually anymore. Parka can just go through this Debuginfod server and retrieve it itself. That's the good news. The bad news is Parka doesn't have support for this just yet. We already have support for this plan, I just haven't got to it yet.
210
+
211
+ **Gerhard Lazu:** So there's a good news and a bad news, and that "yet" is the good news and the bad news. It's coming, but it's not there yet.
212
+
213
+ **Frederic Branczyk:** Exactly.
214
+
215
+ **Gerhard Lazu:** That's really cool. I didn't know this. I knew about strip boundaries but I didn't know about those build IDs and being able to use those build IDs to get the debug symbol for this particular binary from that server? That's really cool.
216
+
217
+ Okay, so we've seen Postgres... What about Erlang VM? So this is our app, and we can see that we have beam.smp all over the place, which is the name of the binary for the Beam Erlang VM. So we see the same thing here...
218
+
219
+ **Frederic Branczyk:** Yeah. So this is kind of another variation of this, but the first difference is this is not a binary that was compiled to machine-readable code, right? This is, in the broadest possible sense, interpreted code. The good news about Erlang is it actually has a just-in-time compiler. So what that means is even though it is technically a virtual machine, on the fly it compiles parts of your code to actually machine-executable code.
220
+
221
+ \[40:17\] This is kind of good news again, because at least in theory, the same strategy can be applied. It just turns out that the strategies these dynamic languages or virtual machines use tend to differ very subtly, and so we do have to essentially implement small pieces of runtime-specific things.
222
+
223
+ One thing that's actually really cool, that I think Erlang does implement, and the Node.js runtime implements as well, is something called perf.maps. This is something that many just-in-time compilers implement, where essentially the just-in-time compiler, because it generates or compiles this code on the fly, can also write out this mapping from the memory address to the human-readable symbol, which the Parka Agent can, again, pick up and symbolize on the fly. Now, I have tried this with Node.js... Unfortunately, we haven't gotten it to work with Erlang just yet.
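To make the idea concrete, here is a small sketch of what consuming such a file could look like. It assumes the usual perf map convention - a file like /tmp/perf-&lt;pid&gt;.map with one "STARTADDR SIZE symbolname" line per JIT-compiled function, both numbers in hex - and a made-up PID and address; it is not the Parka Agent's actual implementation.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// resolve looks up which JIT-compiled symbol covers addr in a perf map file.
func resolve(mapPath string, addr uint64) (string, error) {
	f, err := os.Open(mapPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.SplitN(s.Text(), " ", 3)
		if len(fields) != 3 {
			continue
		}
		start, err1 := strconv.ParseUint(fields[0], 16, 64)
		size, err2 := strconv.ParseUint(fields[1], 16, 64)
		if err1 != nil || err2 != nil {
			continue
		}
		if addr >= start && addr < start+size {
			return fields[2], nil // the human-readable symbol name
		}
	}
	return "", fmt.Errorf("0x%x not found in %s", addr, mapPath)
}

func main() {
	// Hypothetical PID and sampled address, purely for illustration.
	sym, err := resolve("/tmp/perf-12345.map", 0x7f3a1c0042e0)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(sym)
}
```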
224
+
225
+ **Gerhard Lazu:** Okay.
226
+
227
+ **Frederic Branczyk:** There seems to be something specific that the Erlang VM does, that we don't fully understand yet... But it's one of those things where language support is something that's always in progress, and hopefully will soon have full support for the Erlang VM as well.
228
+
229
+ **Gerhard Lazu:** Nice. So we can't really see that. But there's another thing which we haven't shown - the compare one; the compare view. So we can compare two profiles side by side... So we take a low one - I think that's how you like to start. You take a low profile on the left, you take a high on the right, and it will compare them side by side. So how do we interpret when this loads, how do we interpret this result?
230
+
231
+ **Frederic Branczyk:** Yeah, so this is going to be hard when we just see memory addresses, but essentially, anything that is blue has stayed exactly the same. It used exactly the same amount of CPU in the one observation as it did in the compared one. Anything that's green used fewer CPU cycles. I can actually see one very tiny one on the left...
232
+
233
+ **Gerhard Lazu:** That one.
234
+
235
+ **Frederic Branczyk:** ...somewhere in there there's one that got very slightly better. 50%. It seems like it was two CPU samples before, and now it was only one.
236
+
237
+ **Gerhard Lazu:** How do you know it was two CPU samples?
238
+
239
+ **Frederic Branczyk:** So we see that the diff is -1, right?
240
+
241
+ **Gerhard Lazu:** Right...
242
+
243
+ **Frederic Branczyk:** And the current sample is 1. So there must have been two before.
244
+
245
+ **Gerhard Lazu:** Okay. So that's CPU cycles.
246
+
247
+ **Frederic Branczyk:** It's observations of stack traces. So we at most look at a process 100 times per second, and so that means -- 100 means one CPU core being used; in this case, this is 1%, like one millicore...
248
+
249
+ **Gerhard Lazu:** Right, okay.
250
+
251
+ **Frederic Branczyk:** ...that was being used within those ten seconds.
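As a quick sanity check on those numbers (assuming the 100 Hz sampling rate and the ten-second window mentioned in this exchange), each captured stack trace corresponds to roughly 1/1,000th of a core - about one millicore - while 1,000 samples would mean a fully busy core:

```go
package main

import "fmt"

// coresUsed converts a sample count into an approximate number of CPU cores,
// given the sampling frequency and the length of the profiling window.
func coresUsed(samples int, hz, windowSeconds float64) float64 {
	return float64(samples) / (hz * windowSeconds)
}

func main() {
	fmt.Printf("1 sample     ~ %.3f cores (about one millicore)\n", coresUsed(1, 100, 10))
	fmt.Printf("1000 samples ~ %.3f cores (a fully busy core)\n", coresUsed(1000, 100, 10))
}
```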
252
+
253
+ **Gerhard Lazu:** Okay. So this one is slightly better... But this one, the beam.smp - and I wish we knew what this was... Or maybe this one, which is just a memory address... This is 350% worse. So I can see, or I can think -- I mean, even though this is very Christmasy, and I like it, like red and green, and it's very nice, it would be easier if we used a different color for the ones which have an infinite diff. Maybe black, or something like that, because they're completely new. I like the diff idea, but they should get a different color from the ones that are like -- for example this one, +700. That one is just worse... But this one is brand new. This didn't even happen in the previous sample.
254
+
255
+ **Frederic Branczyk:** \[44:13\] Yeah.
256
+
257
+ **Gerhard Lazu:** Okay.
258
+
259
+ **Frederic Branczyk:** I'm writing this down.
260
+
261
+ **Gerhard Lazu:** Cool. So this is great, to be able to see the difference... And I'm just wondering, if we were to take this memory address, and you were to look into that file, into that perf.map file, would we be able to figure out what this is?
262
+
263
+ **Frederic Branczyk:** It's possible. The problem is, in this case -- so we can look at the process and we can kind of go through the steps of what the Parka Agent would do manually, and then we can try to see if we can figure out why this is not able to symbolize this.
264
+
265
+ My theory is, because of what we can see here - the way that this binary code was memory-mapped, we weren't actually able to understand where it's mapped. So the way that this works is - let's go back to our terminal, I would say, and we can inspect this, actually, the way that binary code is memory-mapped for the process.
266
+
267
+ **Gerhard Lazu:** Okay.
268
+
269
+ **Frederic Branczyk:** So we can, again, look into our procfs - this is where all the magic happens in Linux...
270
+
271
+ **Gerhard Lazu:** Okay, so do we wanna go like on the host?
272
+
273
+ **Frederic Branczyk:** We can do the host, or the container. Yeah, it shouldn't matter. Both should work.
274
+
275
+ **Gerhard Lazu:** Okay, yeah. Let's go on the host. So we won't go on that -- let's see do we still have the CD? There we go. That's the proc. Yes?
276
+
277
+ **Frederic Branczyk:** Right. And here there's a file called maps...
278
+
279
+ **Gerhard Lazu:** Yes.
280
+
281
+ **Frederic Branczyk:** Yeah. So let's have a look at what it says in there. And the way that symbolization effectively works is that we take that memory address that we saw, and we try to find in which range within this file that memory address is from.
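In other words, the lookup we are about to do by hand is just a range scan over /proc/&lt;pid&gt;/maps. A rough sketch of that scan (the PID and address are placeholders, and this is not the agent's real code):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	const pid = 12345                   // placeholder PID
	const addr = uint64(0x7ffdc0ffee00) // placeholder sampled address

	f, err := os.Open(fmt.Sprintf("/proc/%d/maps", pid))
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		// Each line looks like: "start-end perms offset dev inode   pathname"
		fields := strings.Fields(s.Text())
		if len(fields) == 0 {
			continue
		}
		r := strings.SplitN(fields[0], "-", 2)
		if len(r) != 2 {
			continue
		}
		start, err1 := strconv.ParseUint(r[0], 16, 64)
		end, err2 := strconv.ParseUint(r[1], 16, 64)
		if err1 != nil || err2 != nil {
			continue
		}
		if addr >= start && addr < end {
			// The last column (if present) is the file this code was mapped from.
			fmt.Println("found mapping:", s.Text())
			return
		}
	}
	fmt.Println("address not mapped")
}
```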
282
+
283
+ **Gerhard Lazu:** So this one right here. Okay. So that's the memory address. So do we need to do 7ff? I mean, I can see something here, 7ff...
284
+
285
+ **Frederic Branczyk:** Well, if you're able to search within your terminal, maybe we can -- it's a bit of a hack, but we can search for the address that you have copied, and we can just try to remove certain digits until we maybe get a match.
286
+
287
+ **Gerhard Lazu:** Okay. So let's remove those two...
288
+
289
+ **Frederic Branczyk:** And as we can see, the ranges don't have the 0x prefix here, so we're gonna need to remove that.
290
+
291
+ **Gerhard Lazu:** Okay.
292
+
293
+ **Frederic Branczyk:** So this is an interesting one... And this is exactly why this is not working. So the way that this table works is that we have these ranges, and then it tells us on the very right, this is the binary that this executable code came from. Actually, the stack - I wanna say this could be a... I don't know if it's necessarily a bug, but what can happen in some languages - and in Go this can happen as well... Sometimes when we do the stack trace snapshots, when we retrieve them from eBPF, sometimes the kernel does them a bit too tall, and we don't fully understand why. Basically, what it does is it goes back and walks the stack, and sometimes it walks too far. And in this case, it doesn't actually make sense that the stack contains executable code. That shouldn't be how things work.
294
+
295
+ **Gerhard Lazu:** Yeah.
296
+
297
+ **Frederic Branczyk:** \[47:54\] So it could be that this is just an artifact of that. But because it's also a virtual machine, maybe there's something happening that we don't understand, and we are actually executing code that is on the stack. It seems unlikely, but it's one of those things where -- I'm not an expert on the Erlang VM, so I don't know for sure...
298
+
299
+ **Gerhard Lazu:** Yeah.
300
+
301
+ **Frederic Branczyk:** But my intuition says that this shouldn't be possible just from the way that processes work.
302
+
303
+ **Gerhard Lazu:** Right. Okay. So this is like the Erlang runtime itself, how it executes code on the kernel. That's what we would need to know.
304
+
305
+ **Frederic Branczyk:** Yes.
306
+
307
+ **Gerhard Lazu:** So I think that we have a person that we can ask, which is Lukas Larsson. Even though he's very busy, I know, and he's focused deep down on some very gnarly problems in the world of Erlang, we can ask him... And if you're interested to follow what happens - I mean, this is like pull request 396, is what started this - I intend to keep as many details as I can here, and all the follow-ups. So yeah, this is a place to go, I suppose, to see what else has happened since this was recorded.
308
+
309
+ So what I would like to say is thank you very much, Frederic, for running us through Parka.
310
+
311
+ **Frederic Branczyk:** My pleasure.
312
+
313
+ **Gerhard Lazu:** I can see so much potential here... I really like where this is going and how simple it makes certain things. It makes me excited as to what's coming next year. But this was great. Thank you, Frederic.
314
+
315
+ **Break:** \[49:34\]
316
+
317
+ **Gerhard Lazu:** What we want to do is, first of all, fix this R, damn it. Someone can't type. Infrastructure... As if my life depended on it. Right. So we want in 2022 for the Changelog.com setup to use Crossplane to provision our Linode Kubernetes cluster. That's the goal. And the way we're thinking of achieving it is to follow this guide to generate a Linode Crossplane provider using the Terrajet tool, which is part of the Crossplane ecosystem. And we can generate any Crossplane provider from any Terraform provider. Cool. So how are we going to do that?
318
+
319
+ **Dan Mangum:** Yeah, I think -- well, there's a couple different parts here. In order to be able to test out anything that we generate, we're going to need a Crossplane control plane running somewhere. That being said, we need to generate and package up this provider to be able to install it in Crossplane, and go through our package manager there. But it could be as simple as even just having a local kind cluster to start out, and after generating, using go run to just apply some CRDs and see if it picks them up correctly.
320
+
321
+ **Gerhard Lazu:** That is a good idea. I like it. But I have found issues when I went from kind to something else - GKE, LKE, any real cluster - because there's different things, like RBAC, for example, or different security policies, or who knows what. So I like starting with production, which is a bit weird, because you would think -- like, you start from development; but I like starting with production. What I'm thinking is I want to start with Crossplane installed in a production setup... And I can't remember if this was episode \#16 or \#17, where I was saying that if there was a Crossplane -- \#15, there we go. Gerhard has an idea for the Changelog '22 setup. So the idea was to use a managed Crossplane, which would be running on the Upbound cloud, and that Crossplane should manage everything else. So that is our starting point. That's what we're doing here. If we go to Upbound cloud - there we go. Control planes. I have already created one... It's a Christmas gift.
322
+
323
+ **Dan Mangum:** \[52:12\] Nice...
324
+
325
+ **Gerhard Lazu:** So this exists. I will contact you after my free trial... \[laughter\]
326
+
327
+ **Dan Mangum:** That's good.
328
+
329
+ **Gerhard Lazu:** So just before Easter, I'll say "Hey, Dan, is there like an Easter Egg in here, or something?" Cool.
330
+
331
+ **Dan Mangum:** We'll send you an email as a reminder... \[laughs\]
332
+
333
+ **Gerhard Lazu:** Cool. So we have a control plane, we have a Kubernetes cluster, which is this one... So K... K version, that's the one.
334
+
335
+ **Dan Mangum:** Also, just to note, you'll want to make sure to clean up that token that was exposed there before you post this anywhere, because that'll give folks the ability to get a kubeconfig to your cluster.
336
+
337
+ **Gerhard Lazu:** This token, yes. Thank you. Oh, yes. That would be quite the Christmas gift, wouldn't it? \[laughs\]
338
+
339
+ **Dan Mangum:** Yeah.
340
+
341
+ **Gerhard Lazu:** "Here you go! You have access to it. You can take everything down." That is a very good catch, thank you. Cool. We have Crossplane, we have access to it... Could we see the versions? So I use K9s, and I think you do, too. I've seen you use it a couple of times. It's a lot quicker. So these are all the pods... If I do d for describe, it's version 1.3.1. Cool. Is that good enough?
342
+
343
+ **Dan Mangum:** Yup, that's good. Although, actually, by end of day today you'll be able to get as recent as 1.5.1. But a nice policy here is also -- and this will actually be rolling out today as well... You know, we have patches for minor versions, and your control plane will automatically receive the latest patch here, and you shouldn't see any disruption with that. So you'll actually get up to 1.3.3 if you kept this control plane around.
344
+
345
+ **Gerhard Lazu:** Right, okay. But to get Terrajet to work, will I need a newer version of Crossplane, or is 1.3 sufficient?
346
+
347
+ **Dan Mangum:** 1.3 should be fine for what we're doing here. Terrajet just basically generates the provider, so as long as that provider is supported, then you're good.
348
+
349
+ **Gerhard Lazu:** Cool. Okay. We can connect to this, everything is running... Shall we just follow these instructions and see how far we can get?
350
+
351
+ **Dan Mangum:** Sure. Yeah, it sounds great. And a disclaimer for everyone at home - I am not intimately familiar with Terrajet actually, because we had another team of Crossplane contributors who have worked on this... So I'm gonna be learning as we go along here in terms of the actual generation process. So this should be fun.
352
+
353
+ **Gerhard Lazu:** That's amazing. So that is --
354
+
355
+ **Dan Mangum:** That's correct, Muvaffak; yup.
356
+
357
+ **Gerhard Lazu:** Okay. And Hassan.
358
+
359
+ **Dan Mangum:** Yup.
360
+
361
+ **Gerhard Lazu:** Okay, amazing.
362
+
363
+ **Dan Mangum:** All those folks are actually some of my co-workers at Upbound, and Muvaffak has been a Crossplane maintainer with me for a number of years now.
364
+
365
+ **Gerhard Lazu:** Amazing. Okay.
366
+
367
+ **Dan Mangum:** Yeah, they're awesome.
368
+
369
+ **Gerhard Lazu:** Well, thank you very much. Let's see how it all works. My favorite. Let's see what happens.
370
+
371
+ **Dan Mangum:** Right.
372
+
373
+ **Gerhard Lazu:** Excellent. What follows next is an hour-long pairing session with Dan, condensed into seven minutes. If you don't want to listen to us two newbs figuring stuff out, skip ahead to the end result, where I talk it through with one of the Terrajet creators, Muvaffak Onuş.
374
+
375
+ **Gerhard Lazu:** You used this template...
376
+
377
+ **Dan Mangum:** Mm-hm.
378
+
379
+ **Gerhard Lazu:** Okay... Why?
380
+
381
+ **Dan Mangum:** If you're intending to make this an open source project, that's a way to get started right off the bat.
382
+
383
+ **Gerhard Lazu:** So basically clone this, right?
384
+
385
+ **Dan Mangum:** If you click the Provider Jet template there, it will have a "Use this template" button, which means you can just create a new repo right from it.
386
+
387
+ **Gerhard Lazu:** Okay. Let's go for that.
388
+
389
+ **Dan Mangum:** Awesome.
390
+
391
+ **Gerhard Lazu:** Changelog... Perfect. Okay. First step, provider-jet-linode. Clone the repository, cd in, replace Template with your provider name. Okay.
392
+
393
+ **Dan Mangum:** Yeah.
394
+
395
+ **Gerhard Lazu:** So where was the template?
396
+
397
+ **Dan Mangum:** \[55:52\] So all you're doing here is you're specifying what you want your provider name lower and upper to be, and then these commands are going to replace all instances of Template.
398
+
399
+ **Gerhard Lazu:** Ah, I see. Okay, I'm with you. Okay. Replace all the occurrences... I see. So now I just basically run this command. Okay.
400
+
401
+ **Dan Mangum:** I'm guessing that has to do with the name of the Terraform repo, potentially... But it says that it checked out a line in the controller Dockerfile. Looks like a broken link, potentially...
402
+
403
+ **Gerhard Lazu:** Found. Cool. So that is the link that we should use. Perfect.
404
+
405
+ **Dan Mangum:** So it sounds like just the Terraform provider Linode is what we're looking for there, if I look in the Dockerfile here and see how Terraform provider source is used.
406
+
407
+ **Gerhard Lazu:** It's adding this... I think it's --
408
+
409
+ **Dan Mangum:** I am a little confused about the difference between Terraform provider source and Terraform download name here, based on the Docker file that we're looking at...
410
+
411
+ **Gerhard Lazu:** Yeah.
412
+
413
+ **Dan Mangum:** It seems like they should be the same.
414
+
415
+ **Gerhard Lazu:** Yeah... I think they should be the same. I think you're right.
416
+
417
+ **Dan Mangum:** I think that might be getting Terraform itself, and installing it. Let's see if there is a --
418
+
419
+ **Gerhard Lazu:** Ah, yes. You're right, that is getting the Terraform itself. You're absolutely right. Okay, so this actually is the entire URL.
420
+
421
+ **Dan Mangum:** Right.
422
+
423
+ **Gerhard Lazu:** I think it's actually all of it... Ah, no. Maybe not. Because look at the location.
424
+
425
+ **Dan Mangum:** Yeah, it's just the URL prefix. So I think it's just --
426
+
427
+ **Gerhard Lazu:** It's this.
428
+
429
+ **Dan Mangum:** Yeah, exactly.
430
+
431
+ **Gerhard Lazu:** Okay, that makes sense. Cool. Okay, so this is the Changelog, this is Terraform provider Linode, and then that's it. v4 GitHub, I think.
432
+
433
+ **Dan Mangum:** Yeah. I'm confused a little bit about the v4 GitHub portion of that.
434
+
435
+ **Gerhard Lazu:** Mm-hm. Well, that was added there, so that means that there should be a GitHub... Um, not this one. This one, the Changelog.
436
+
437
+ **Dan Mangum:** It'd probably be helpful if we took a look at one of the -- potentially if some of the existing providers use this, and...
438
+
439
+ **Gerhard Lazu:** Mm-hm. So if we take this one -- is this public? It is. Cool. So GitHub - look at that. GitHub is there.
440
+
441
+ **Dan Mangum:** Yeah. So I think this is an example of, so I think you'd have Linode instead of GitHub, right?
442
+
443
+ **Gerhard Lazu:** Yeah, yeah.
444
+
445
+ **Dan Mangum:** But I'm not sure where the v4 is coming from necessarily. I didn't see that there.
446
+
447
+ **Gerhard Lazu:** Actually - yeah, it's v4 GitHub. That is interesting. You're right. I didn't see v4 either. So I've seen GitHub... I don't know where that's coming from, indeed. Okay, so if we come back to this... Maybe I'm not reading this right. The way I understand it, it's actually the Linode Terraform provider. It's this one that I'm linking to. This is it.
448
+
449
+ **Dan Mangum:** Yup.
450
+
451
+ **Gerhard Lazu:** This is what I think I need to provide. So it's basically this.
452
+
453
+ **Dan Mangum:** No, I think what you have potentially is right... Because I believe this is pointing to -- well, no. Is it using the Linode -- hold on one second, actually...
454
+
455
+ **Gerhard Lazu:** So in this example it was -- oh, yeah, you're right. No, actually no.
456
+
457
+ **Dan Mangum:** Here we go. This is helpful. So I'm dropping it in the Zoom chat here. This tells us where integrations is coming from, which is the Git repo. The org is called Integrations, that Terraform Provider GitHub is in. And then GitHub - they don't have the v4 in there though. I don't know where that's coming from.
458
+
459
+ **Gerhard Lazu:** Yeah, no... So I think there was something here, there was something in the documentation. Where was it? This one.
460
+
461
+ **Dan Mangum:** Oh, I know what it is. This is a Go package, and they have a v4 version. So that's just the import path for the Go package. So you can leave that out as long as the Linode provider is a normal Go package here.
462
+
463
+ **Gerhard Lazu:** \[59:54\] Look, that is the line... Found, downloading. So that pulls it from the right place. Okay, great. If your provider is using an old version... How do I know if it's using an old version? Oh, okay, I see. I was confused.
464
+
465
+ **Dan Mangum:** I believe you need an actual replace stanza down there at the bottom.
466
+
467
+ **Gerhard Lazu:** You think?
468
+
469
+ **Dan Mangum:** I believe so...
470
+
471
+ **Gerhard Lazu:** Okay. This is a require. So the way I understand it, I need to replace this, with this.
472
+
473
+ **Dan Mangum:** No, I believe that you'll have a dependency there on HashiCorp Terraform plugin SDK, and then you'll have a replace statement at the bottom of the go mod, that indicates you want to replace that dependency that's in your require with the fork there that Hassan has.
474
+
475
+ **Gerhard Lazu:** Okay. So you're saying that all I need to do is comment out this line. This replace.
476
+
477
+ **Dan Mangum:** Yup, that should be what we're looking for here.
478
+
479
+ **Gerhard Lazu:** Okay. I wasn't sure that go mod supports this...
480
+
481
+ **Dan Mangum:** Yup.
482
+
483
+ **Gerhard Lazu:** ...but okay. Yeah, okay. They're more tidy.
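For anyone following along, the shape of what we just edited looks roughly like the go.mod below. Module paths and version numbers are illustrative placeholders, not the exact contents of the generated provider; the point is the /vN major-version suffix Dan mentions, and the replace stanza pointing the Terraform plugin SDK at a fork.

```
// go.mod (illustrative sketch only - paths and versions are placeholders)
module github.com/changelog/provider-jet-linode

go 1.17

require (
	// A Go module's major version shows up as a /vN suffix in its import
	// path - that is where the "/v4" on terraform-provider-github came from.
	github.com/hashicorp/terraform-plugin-sdk/v2 v2.10.0
	github.com/linode/terraform-provider-linode v1.24.0
)

// The template ships a replace stanza that swaps the upstream plugin SDK
// for a fork, so the require above is what actually gets rewritten.
replace github.com/hashicorp/terraform-plugin-sdk/v2 => github.com/example/terraform-plugin-sdk/v2 v2.10.0-fork
```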
484
+
485
+ **Dan Mangum:** I believe here is where we need to set up whatever credentials are needed to talk to Linode...
486
+
487
+ **Gerhard Lazu:** I see.
488
+
489
+ **Dan Mangum:** So we may want to do the same thing that the Terraform provider is doing, but --
490
+
491
+ **Gerhard Lazu:** I see what you mean. Okay, I'm with you.
492
+
493
+ **Dan Mangum:** Yeah.
494
+
495
+ **Gerhard Lazu:** So the only thing that we really need is a key.
496
+
497
+ **Dan Mangum:** To talk to Linode?
498
+
499
+ **Gerhard Lazu:** Yeah. That's the only thing.
500
+
501
+ **Dan Mangum:** Cool.
502
+
503
+ **Gerhard Lazu:** I would call it CLI token, because that maps it to what the Linode CLI expects it to be.
504
+
505
+ **Dan Mangum:** Is that what you use with Terraform to be able to authenticate?
506
+
507
+ **Gerhard Lazu:** I don't know.
508
+
509
+ **Dan Mangum:** Because I believe what we're doing here - so we're taking things out of the provider config and then setting the environment variable based on that, so when the underlying Terraform plugin is invoked, it will utilize those credentials specified by the environment variables.
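A rough sketch of that idea - take the credentials the provider config points at and surface them to the wrapped Terraform provider as the LINODE_TOKEN environment variable. This is illustrative only: the JSON key name is our own choice, and this is not the actual Terrajet API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// terraformEnv turns the raw credentials bytes (the contents of the secret
// referenced by the provider config) into the environment variables the
// underlying Terraform plugin should be invoked with.
func terraformEnv(creds []byte) (map[string]string, error) {
	var parsed struct {
		Token string `json:"token"` // key name is whatever you put in the secret
	}
	if err := json.Unmarshal(creds, &parsed); err != nil {
		return nil, fmt.Errorf("parsing credentials: %w", err)
	}
	// LINODE_TOKEN is the environment variable the Linode Terraform provider reads.
	return map[string]string{"LINODE_TOKEN": parsed.Token}, nil
}

func main() {
	env, err := terraformEnv([]byte(`{"token": "not-a-real-token"}`))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(env)
}
```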
510
+
511
+ **Gerhard Lazu:** Yup. Let's see... Linode token.
512
+
513
+ **Dan Mangum:** Nice. So I'm guessing that's what we want there.
514
+
515
+ **Gerhard Lazu:** That's what we want, yeah. Where does this key come from? Hang on, let me see. Key username. Where does this key come from?
516
+
517
+ **Dan Mangum:** You just deleted the variable that was key username... But you can name it whatever -- sorry, it was way up at the top.
518
+
519
+ **Gerhard Lazu:** Ah, nftoken. I see, okay.
520
+
521
+ **Dan Mangum:** Yup, that sounds good. I also am gonna have to wrap up here pretty soon...
522
+
523
+ **Gerhard Lazu:** Okay, let's wrap up now.
524
+
525
+ **Dan Mangum:** Okay.
526
+
527
+ **Gerhard Lazu:** Yeah, let's wrap up now. I think this is a good point.
528
+
529
+ **Gerhard Lazu:** After the pairing session with Dan, I had a few more with Muvaffak Onuş, one of the Terrajet creators. And then he joined me to talk about the end result.
530
+
531
+ **Muvaffak Onuş:** Yeah, I'm glad to be here.
532
+
533
+ **Gerhard Lazu:** We had a couple of early mornings, and I think I had a couple of late nights... So why did we do this? The reason why we did this is because we wanted our Kubernetes clusters to not be provisioned via UI or CLI. So no ClickOps, Dan; that was a great word. No ClickOps, no UI, and not even CLI. We didn't want to have a CLI where we need to type a command to provision a Kubernetes cluster. Now, that is not entirely true, because obviously, we still have to give it a config... But there's something that provisions the cluster for us, and that is Crossplane. But not just Crossplane. There's this secret sauce element which I didn't know about, until Dan mentioned that "Hey, have you seen Terrajet?" That was your idea...
534
+
535
+ **Muvaffak Onuş:** Well, so you see, in the Crossplane ecosystem there are many providers, and not all of them have support for all APIs that clouds actually expose.
536
+
537
+ **Gerhard Lazu:** Right.
538
+
539
+ **Muvaffak Onuş:** And one of the examples was Linode. We didn't have a provider for it. Also, the plan with Terrajet, the motivation was "Let's build something that can utilize the whole great Terraform community and the great work that they did." So that was how it came to be: design a code generator and a generic controller that can take any Terraform provider and bake up a Crossplane provider.
540
+
541
+ **Gerhard Lazu:** \[01:04:02.21\] Right. So this is full-circle happening. Markus, if you're listening to this - this is what happened with your Terraform provider. I remember I worked with Markus while he was still at Linode. We were using Terraform to provision the instances, which were running Docker at the time, to host the changelog.com website and the entire setup. Then that was the seed which created the Linode Kubernetes Engine. Then Markus joined Crossplane and Upbound, and now we're provisioning Kubernetes clusters on Linode using the Terraform Linode provider that Markus started, via Crossplane. Like, how crazy is that? It just takes a while to wrap your head around. This was like years in the making, and we didn't even know it until a few months ago, when Dan mentioned Terrajet. I didn't even know that this thing existed.
542
+
543
+ So that's what we're using as a generator for a Linode provider that uses Terraform. So - okay... How many providers have been generated with Terrajet to date, and where can we see them?
544
+
545
+ **Muvaffak Onuş:** Yeah, so today we have the providers for the big three, AWS, Azure and GCP, and those three providers have almost 2,000 CRDs in total.
546
+
547
+ **Gerhard Lazu:** Right.
548
+
549
+ **Muvaffak Onuş:** And then if you go to Crossplane Contrib, you will see other providers, similar to like Jet Linode; for example, we have Equinix, Equinix Metal, we have Exoscale... All of these are completely bootstrapped by the community. So I would say in total like seven or eight right now.
550
+
551
+ **Gerhard Lazu:** Okay. Yeah, there's quite a few providers... TF, Equinix, I can see that; provider Helm, provider Civo... What else am I seeing here? Provider Jet AWS - this is an interesting one. So even though you have an AWS provider, there's also a Provider Jet AWS. Do you know the story behind that?
552
+
553
+ **Muvaffak Onuş:** So the provider AWS, the one that calls APIs directly, has around 100 CRDs, which means perhaps 100 services... But AWS has hundreds. So if you look at that Jet AWS, you will see it has 765 custom resource definitions, which is, you know, just too many for the Kubernetes community at this point.
554
+
555
+ **Gerhard Lazu:** Yeah. I can imagine having so many CRDs in your Kubernetes. You wouldn't even know which one to pick; there's just so many of them. Okay, so that makes sense. And we added another provider, haven't we? In the last week.
556
+
557
+ **Muvaffak Onuş:** Yes.
558
+
559
+ **Gerhard Lazu:** That was amazing. 12 commits, that's all it took to generate a provider-jet-linode, which is in Crossplane Contrib. This is, by the way, our gift to you, our Christmas gift to you. If you want to provision Linode Kubernetes Engine clusters using Crossplane, this is the modern way of doing it... Because someone has built a Crossplane provider for Linode before, but it hasn't seen much maintenance, I think. The last update was a year ago, maybe a bit longer, and I don't think it's working with the latest Crossplane versions. Many things have changed since... So this one we know works. But it only has a single resource, right? Because that's all that we needed.
560
+
561
+ **Muvaffak Onuş:** Yes.
562
+
563
+ **Gerhard Lazu:** And that is the LKE resource. Linode LKE cluster. Now, if you want more resources, contribute. It's an open source repository, public to everyone. So if there's anything missing, what I would like to see is a Linode instance. I would like to provision some Linode instances, some VMs with this. So that would be my request to anyone that's listening to this; Markus, maybe? What do you think? Or someone else. But anyways, it's there... I'm wondering, what is coming next for Terrajet?
564
+
565
+ **Muvaffak Onuş:** So Terrajet - when we first started with Terrajet, we had hit a problem with the API server when we were handling that many CRDs, actually.
566
+
567
+ **Gerhard Lazu:** Right.
568
+
569
+ **Muvaffak Onuş:** \[01:07:52.20\] When you install 700 CRDs, the API server was unresponsive for 40 minutes or something, which affects all the workloads that it was supposed to schedule. So we have fixed that problem; there was a patch and we accelerated some of the processes in upstream. So now we are able to use those Jet providers.
570
+
571
+ In January, we will have a big splash of announcements. We'll announce the AWS, Azure and GCP Jet providers with their API groups stabilized, configs stabilized, and API fields stabilized. And then we will start promoting some of the resources we want to v1beta1, which has more guarantees around it.
572
+
573
+ Then we will have conversion webhooks in Crossplane, which will affect how easily we can migrate a resource - let's say you're not happy with the implementation in the Terraform provider; you can just switch to a native implementation, with API calls directly to AWS.
574
+
575
+ **Gerhard Lazu:** Okay.
576
+
577
+ **Muvaffak Onuş:** So all this new stuff will allow the community to bootstrap new providers, and make upstream work with them. There are just so many CRDs, built so easily, that you won't have a problem like "Hey, is this resource supported?" Well, yes. Probably. Instead of, you know, "Let me take a look at how hard it would be to implement it."
578
+
579
+ **Gerhard Lazu:** I do have to say, having gone from nothing - like, I knew nothing about how to implement a Crossplane provider - to using Terrajet... That was really smooth. I think anyone that is determined to write a Crossplane provider that doesn't exist yet, where a Terraform provider does exist - in hours, they can have it. Which is amazing to see. So this is basically proof that your idea works.
580
+
581
+ **Muvaffak Onuş:** Yeah. I mean, in fact, we had a case where someone in the community - the provider Exoscale, you saw... That was actually written in six hours.
582
+
583
+ **Gerhard Lazu:** There we go. Amazing.
584
+
585
+ **Muvaffak Onuş:** And also, that was the hardest part, bootstrapping the provider. If you, for example, decide to add an instance resource to the Jet Linode provider, it's 10 or 15 lines of code, as you can see from the single configuration.
586
+
587
+ **Gerhard Lazu:** Yeah, that's right. So all the commits are there, go and check them, see what we've done for provider Jet Linode. Again, it's very, very simple. So what I would like to do now is show you how easy it is to actually do this. And I say "show" because we record video, and we may not have time to publish everything in time, or ever; things can get very busy. But at least we'll do a step-by-step process; there's a pull request, by the way, in the Changelog org, the changelog.com repository, pull request 399, which has all the text, all the screenshots, everything on how to do this, all the links.
588
+
589
+ So this is what we're going to do next - we're going to install Crossplane, install the provider, and then provision the Linode Kubernetes Engine cluster using this provider. Then we target it, and then we try something crazy. You know that I'm all for crazy, trying crazy things and seeing what happens... So that's what we're going to do next.
590
+
591
+ Okay, so I am in the 2021 directory currently, and I'm going to do -- I'm already targeting our production Kubernetes clusters. Oh yes, of course - Muvaffak, when I mentioned this to you first, like I develop in production, you laughed... But I'm serious. \[laughs\] That is the only thing that matters. If it's not in production, it's in inventory. I don't like inventory, I like stuff being out there.
592
+
593
+ So make, in this case, LKE Crossplane. And what that does - that installs Crossplane version 1.5.1, using Helm, straight into production. So installing Crossplane... Two minutes later, it's done. That's how simple it is.
594
+
595
+ The next step is make Crossplane Linode provider, and that's it. That's simple. That was really quick, because the provider is super-small. Like 18 kb, I've seen the image; which then pulls a bigger image. How does that work, can you tell us?
596
+
597
+ **Muvaffak Onuş:** \[01:11:58.12\] Yeah. So the metadata image - you said it was a tiny image - has only the metadata YAML, which contains your CRDs, and also some information about your package. Once it's downloaded by the package manager, it installs the CRDs and then creates a deployment with the image that you provided there. So that other image contains the binary.
598
+
599
+ **Gerhard Lazu:** Okay. And which version did we install of the provider? Version 0.0.0-12. So there's no tag for this. This is like a dev-only version. We trust dev in production. The dream is real. Production became dev. Great.
600
+
601
+ Okay, so we installed it, we configured it, it's all there. So how can we check that the provider is there? If we maybe get all the pods in the namespace, Crossplane system? Because that's where everything gets installed. We see that we have Crossplane installed, we have the Crossplane RBAC manager - these are two pods, and the third one is the Jet Linode pod. Cool. So what can we do next? I'm using K9s as the CLI allows me to do things really, really quick. So if we go to look at all the aliases, which is Ctrl+A for me, and I search for cluster, we see a new CRD. And the CRD is in the Linode Jet Crossplane IO v1 Alpha 1 group. That's how we can provision new clusters. So let's try that.
602
+
603
+ If we go to Cluster, to list out clusters, we have no clusters. Great. Let's do "Make Crossplane LKE." All this does - I still have to run a command, okay... I know, I mentioned earlier there would be no commands, but this is a different type of command. I'm not telling the Linode API "Hey, Linode, create me an LKE instance." I'm telling Crossplane to create an LKE instance on my behalf. And there's something really cool about this, because Crossplane will continuously reconcile what I ask of it. How cool is that? I think that's my favorite Crossplane feature, which happens to be a Kubernetes feature as well. You know, declarative, you tell it what you want, and it will make it so. I love that story. Great.
604
+
605
+ Okay, so this succeeded. What are we seeing now? We are seeing that 42 seconds ago a new LKE 2021, 12, 17 - by the way, it's the 17th of December when we are recording this... It just uses the current date when this new cluster has been created, or it's asked to be created.
606
+
607
+ So if we go to Linode, and if we go to our Kubernetes lists, we see a new cluster which is Kubernetes version 1.22. Nice. I'm wondering, could that be our new production cluster for 2022? If you could see me, I'm winking; yes, it will be. 1.22 Kubernetes will be the first version of our production 2022 Kubernetes cluster. This is it. Because it's ready, it's synced, we have the external name, which is the ID, the instance has booted... Great.
608
+
609
+ Okay... So did it work? Well, let's try make a Crossplane, LKE kubeconfig... All these, by the way, are in our repo, you can check them out. Actually, do you wanna tell us what happened behind the scenes, like how were we able to do this?
610
+
611
+ **Muvaffak Onuş:** Yeah. So Crossplane has this notion of a connection details secret, where it stores all the sensitive information you need to use that resource, if any. For example, we see that mostly in Kubernetes clusters and database instances, where you have a password, or some other details, and not in others, for example with VPCs, where you don't need any token or something to connect.
612
+
613
+ \[01:16:00.01\] So here, what we see is that Terrajet does this automatically, using Terraform's tfstate, and exports it in its secret. And then we have added a custom configuration that will get that secret - you see the attribute .kubeconfig; that is automatically put here, taken from state. But the problem is that the Linode Terraform provider actually base64-encodes the kubeconfig. So you've got the secret's base64 encoding, and then another encoding on top of that. What we did was to provide a custom configuration for Terrajet, which takes one field from the attributes, base64-decodes it and puts it here, which makes it ready to use right away, with kubectl, or with controllers like provider-helm or provider-kubernetes.
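In code terms, the extra step is just one more base64 decode on top of what Kubernetes already does for secret values. A tiny sketch (the kubeconfig content here is a stand-in, not a real one):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"
)

func main() {
	// What the Terraform state attribute holds: a base64-encoded kubeconfig.
	attr := base64.StdEncoding.EncodeToString([]byte("apiVersion: v1\nkind: Config\n"))

	// The custom Terrajet configuration decodes it once more before storing it
	// in the connection-details secret, so consumers get plain YAML.
	kubeconfig, err := base64.StdEncoding.DecodeString(attr)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(kubeconfig)) // usable directly with kubectl
}
```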
614
+
615
+ **Gerhard Lazu:** So while we can get the kubeconfig locally, and then we can use kubectl and use that kubeconfig to target that cluster, what we may want to do is let Crossplane provision other things inside this cluster, so that we wouldn't necessarily need to give this kubeconfig away. It stays within Crossplane, it's all there, Crossplane has it for it to be able to provision other things inside of this cluster. And maybe this is the path where I lose access to Kubernetes clusters. Is that it? \[laughs\] Like, it's more difficult for me to just run commands against them... The idea being that this could be like a fully self-automated system. It creates itself, it provisions itself with everything it needs, it pulls down all the bits, including the application, the latest version of the Changelog app, and it just runs. It updates DNS, because it's like a self-updating system... So this is one step closer to a self-updating, self-provisioning system. And that is a dream which I had many years ago, and I'm one step closer, and that makes me so happy.
616
+
617
+ Okay. So we have the kubeconfig locally, and I'm not there yet in that dream world, so I'm still putting in the kubeconfig, pulling it down locally, and now going with K9s, targeting the new cluster. And what we see is that it's just like any regular cluster, there it is. Just the default pods. Four minutes ago they were created. If we look at the node, it's the new node; it's version 1.22.2, so the latest Kubernetes version on Linode currently... And I'm wondering, what is going to happen if by accident - and I'm doing air quotes - if accidentally Jerod deletes the cluster? \[laughter\] I don't know, Jerod, I just gave an example; we do crazy things together all the time, so you're the first one when I'm thinking about someone deleting some Changelog infrastructure... \[laughs\] So let's just click this Delete button, pretend I'm Jerod, "Oh, I don't recognize this cluster. Let me just delete it. It's just extra resource."
618
+
619
+ So let's delete the cluster... And yes, I confirm I want to delete it... And the cluster is gone. Luckily, I deleted the correct cluster; I haven't deleted our production cluster. But if I had deleted our production cluster - I mean, good luck setting everything up. There's like a lot of stuff to do, a lot of steps. And yes, we have like a make target, which puts everything together, and it's okay... But it's not as good as it could be.
620
+
621
+ **Muvaffak Onuş:** Yeah. Jerod wouldn't do that.
622
+
623
+ **Gerhard Lazu:** No, Jerod wouldn't do that. \[laughter\] No, he wouldn't. I do that all the time... You know, like "Let's just take production down. Whatever. Let's see what happens. Just for the fun of it." So what will this do behind the scenes, with the new setup that we have? Muvaffak, can you tell us?
624
+
625
+ **Muvaffak Onuş:** Yes. So what's gonna happen is that the controller will reconcile and see that the cluster is not there, it's gone... Which is what happens when you first create a resource. The very first thing that a provider does is to check whether the resource is there, and create it if not. And for further reconciles, it will be just like that: "Hey, I checked the resource and it's not there, so I need to create it." So it goes ahead and tries to create a new cluster.
626
+
627
+ **Gerhard Lazu:** \[01:20:21.25\] Right. And that takes 30 seconds, a minute...? How long does it take for it to figure out that "Hey, I'm missing a cluster"?
628
+
629
+ **Muvaffak Onuş:** Well, so because it doesn't get any events or anything in the Kubernetes cluster, it will need to hit the long wait period, which is like one minute. So at most, in a minute it will recognize that change. Or you can make a change on the custom resource, which will trigger a Kubernetes event. That goes to the controller and it will start all the processes there.
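The observe-then-create behaviour Muvaffak describes is the core of every managed-resource controller. Here is a generic sketch of that loop - the interfaces and the fake client are illustrative, not Crossplane's actual reconciler:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// ExternalClient is a stand-in for whatever talks to the external API (Linode here).
type ExternalClient interface {
	Observe(ctx context.Context, name string) (exists bool, err error)
	Create(ctx context.Context, name string) error
}

// reconcile checks whether the external resource still exists and recreates it
// if it is gone - the same code path as the very first reconcile.
func reconcile(ctx context.Context, c ExternalClient, name string) (requeueAfter time.Duration, err error) {
	exists, err := c.Observe(ctx, name)
	if err != nil {
		return 0, err
	}
	if !exists {
		fmt.Println("external resource missing, (re)creating:", name)
		return 0, c.Create(ctx, name)
	}
	// Nothing to do; check again after the long wait period.
	return time.Minute, nil
}

type fakeLKE struct{ created bool }

func (f *fakeLKE) Observe(ctx context.Context, name string) (bool, error) { return f.created, nil }
func (f *fakeLKE) Create(ctx context.Context, name string) error          { f.created = true; return nil }

func main() {
	c := &fakeLKE{}
	for i := 0; i < 2; i++ {
		wait, err := reconcile(context.Background(), c, "prod-2022")
		fmt.Println("requeue after:", wait, "err:", err)
	}
}
```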
630
+
631
+ **Gerhard Lazu:** I was trying to find this out to see where it's reconciling. It's finding it -- I think I just missed it, the event. Everything is synced now, everything's ready... The cluster is back; I mean, I just had to refresh the page.
632
+
633
+ **Muvaffak Onuş:** Nice.
634
+
635
+ **Gerhard Lazu:** What about the Linodes? Is it still there? It's offline. Interesting. I don't know why that's offline. So when I deleted the cluster, whatever happened behind the scenes... Maybe the default node pool got deleted as well. Oh, it's booting... So I think that the node was deleted as well. And this is like the worker VM. And a new one was created.
636
+
637
+ So deleting the cluster from the Linode UI, from the cloud.linode.com - it also deletes all the worker nodes. So when the cluster gets recreated, it has to obviously recreate all the nodes... And there it is. It's back.
638
+
639
+ Okay, so everything here is ready, it's synced... Because while the cluster has been created, the cluster object, the node pool that's associated with it hasn't been finished yet, and I think that's where composite resources come in. Can you tell us a bit about that?
640
+
641
+ **Muvaffak Onuş:** So in other cases where you have the node group represented as a different resource, you can actually have like two resources in a single composition.
642
+
643
+ **Gerhard Lazu:** Right.
644
+
645
+ **Muvaffak Onuş:** And additionally, just like you mentioned earlier, we can have more things installed there as well... Because the dependencies are resolved automatically, just like in Kubernetes. So for example, you would create your composite cluster resource, a cluster will be created and node groups will be booted, and then the installations will start, with provider kubeconfig or provider Helm.
646
+
647
+ So once your composite cluster CR reports ready, everything is ready, and you're back in the initial state. So it will just revert it back to the original state, including all the things in the composition.
648
+
649
+ **Gerhard Lazu:** Okay, so now what happened is we are targeting the same control plane, and we could see how the pods were being recreated. So 90 seconds ago, 100 seconds ago, everything was created from scratch. We accidentally (air quotes again) deleted the cluster, Crossplane recreated the cluster, the node pools recreated, the node pool had a single node, and then everything was put back on; by default, what's there. What we would have been missing, if for example we had added any extra resources, like Ingress NGINX, or ExternalDNS, or all the other components that we need - those would no longer be present... Because let's be honest, we deleted the cluster, and that should delete everything in it. And this is, I think, where the human i.e. me, would have come in and run commands, "Ah, I have to get production back, because it was deleted." But how amazing would it be if Crossplane could do this? So it would know, "Oh, it's not just a cluster which I need, it's all this extra stuff that needs to be present in a cluster." Now, that is really exciting. Next year, right?
650
+
651
+ **Muvaffak Onuş:** Yup.
652
+
653
+ **Gerhard Lazu:** \[01:23:52.22\] I think we did enough this Christmas. \[laughs\] Cool. Alright. So what happens next? Well, I think here's a couple of improvements that we can do... I already mentioned about installing all -- I think this was your idea... Can you tell us about your idea, Muvaffak? This was really, really good, the two compositions.
654
+
655
+ **Muvaffak Onuş:** So maybe I can give a little summary about what a composition does.
656
+
657
+ **Gerhard Lazu:** Sure.
658
+
659
+ **Muvaffak Onuş:** A composition has two parts. One is XRD - similar to CRDs, but you define your own API. But with XRDs, you can define two different APIs. One is namespaced, and the other one is cluster-scoped, which does not have any namespaces. So what we usually see is that people create a composition with all the base system components in the same composition. We call it "batteries included."
660
+
661
+ If you go to platform references we have on the Upbound org, you will see some of the examples, where we for example install Prometheus, or a few other tools that your platform team might want every cluster to have, like security agents.
662
+
663
+ In this case, as listed there in the PR, you've got the cert manager, you've got Grafana Agent, and a few other components that you want installed. And then the other composition is usually the application itself. In that composition you would define what Changelog specifically needs, so for example you would create a single cluster with that base composition, and then refer to it from many namespaces in your seed cluster, and from many applications that can be installed to that cluster.
664
+
665
+ **Gerhard Lazu:** Right.
666
+
667
+ **Muvaffak Onuş:** So you would have the cluster that is managed in one namespace, maybe like Changelog system, with its own claim, claim is what we call similar to PVC. So you would have that production cluster, but different teams or developers in their own namespace - they would refer to that central production cluster in their claims that are defined, again, by you, via XRD.
668
+
669
+ **Gerhard Lazu:** Yeah.
670
+
671
+ **Muvaffak Onuş:** So it's about like in publishing a new API - instead of going through all the fields of the specific clouds, you would publish API, with the only difference that you want it to be configured.
672
+
673
+ **Gerhard Lazu:** Okay. That is really cool. I can hardly wait to do that. That is seriously cool. Having all this stuff abstracted in a composition, to just capture what it means for the entire Changelog setup to come online, would be so amazing.
674
+
675
+ The other thing which would be also amazing is to move Crossplane from being hosted on our cluster, to be hosted on Upbound cloud. Because the dream is there is a seed cluster somewhere, which is managed by someone else, in this case Upbound cloud. The Crossplane is there, we can define all the important stuff, and that is the seed which controls all the other clusters, everything else; and not just clusters, other things as well.
676
+
677
+ Again, I don't wanna go too far with this idea, like blow your minds completely, but why doesn't it manage some Fly.io apps? Or why doesn't it manage maybe some DNS? Or why doesn't it manage other things from the seed cluster? Because right now, the external DNS is what we use in every cluster to manage its own DNS. And that's okay, we may need to do that, but what about a top-level thing, which then seeds everything else. So that's something which I'm excited about.
678
+
679
+ Well, I'm really looking forward to what we'll do together next year, Muvaffak, with all this stuff. There's so many improvements which we can drive... I'm really keen on that. It's the first step. But you as a listener, what I would say is have a look at the provider Jet Linode in the Crossplane Contrib org, see if it's helpful, and... Merry Christmas and a Happy New Year. Anything else to add?
680
+
681
+ **Muvaffak Onuş:** Yeah, it was great working with you for the last couple of days to get all these things done. Yeah, I'm honored to be here. Happy Christmas.
682
+
683
+ **Gerhard Lazu:** Thank you, Muvaffak. It's been my pleasure, thank you very much. See you next year!
🎄 Merry Shipmas 🎁_transcript.txt ADDED
The diff for this file is too large to render. See raw diff